keras-team/keras-io
examples/keras_recipes/ipynb/better_knowledge_distillation.ipynb
apache-2.0
!!pip install -q tensorflow-addons """ Explanation: Knowledge distillation recipes Author: Sayak Paul<br> Date created: 2021/08/01<br> Last modified: 2021/08/01<br> Description: Training better student models via knowledge distillation with function matching. Introduction Knowledge distillation (Hinton et al.) is a technique that enables us to compress larger models into smaller ones. This allows us to reap the benefits of high performing larger models, while reducing storage and memory costs and achieving higher inference speed: Smaller models -> smaller memory footprint Reduced complexity -> fewer floating-point operations (FLOPs) In Knowledge distillation: A good teacher is patient and consistent, Beyer et al. investigate various existing setups for performing knowledge distillation and show that all of them lead to sub-optimal performance. Due to this, practitioners often settle for other alternatives (quantization, pruning, weight clustering, etc.) when developing production systems that are resource-constrained. Beyer et al. investigate how we can improve the student models that come out of the knowledge distillation process and always match the performance of their teacher models. In this example, we will study the recipes introduced by them, using the Flowers102 dataset. As a reference, with these recipes, the authors were able to produce a ResNet50 model that achieves 82.8% accuracy on the ImageNet-1k dataset. In case you need a refresher on knowledge distillation and want to study how it is implemented in Keras, you can refer to this example. You can also follow this example that shows an extension of knowledge distillation applied to consistency training. 
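Before moving on to the full pipeline, the temperature-softening idea at the heart of knowledge distillation can be sketched with plain NumPy. This is a standalone illustration with made-up logits and hypothetical helper names — it is not the implementation used later in this example, which relies on `tf.nn.softmax` and `keras.losses.KLDivergence`:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Soften the logits with a temperature before normalizing.
    z = logits / temperature
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Made-up logits for a 3-class problem.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([3.0, 2.0, 0.0])

# A higher temperature flattens both distributions, exposing the teacher's
# relative preferences over the non-argmax classes.
loss_t1 = kl_divergence(softmax(teacher_logits, 1.0), softmax(student_logits, 1.0))
loss_t10 = kl_divergence(softmax(teacher_logits, 10.0), softmax(student_logits, 10.0))
```

With these particular logits, the softened (high-temperature) distributions are closer to each other, so the KL loss shrinks — the same mechanism the `TEMPERATURE` hyperparameter below controls.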
To follow this example, you will need TensorFlow 2.5 or higher as well as TensorFlow Addons, which can be installed using the command below: End of explanation """ from tensorflow import keras import tensorflow_addons as tfa import tensorflow as tf import matplotlib.pyplot as plt import numpy as np import tensorflow_datasets as tfds tfds.disable_progress_bar() """ Explanation: Imports End of explanation """ AUTO = tf.data.AUTOTUNE # Used to dynamically adjust parallelism. BATCH_SIZE = 64 # Comes from Table 4 and "Training setup" section. TEMPERATURE = 10 # Used to soften the logits before they go to softmax. INIT_LR = 0.003 # Initial learning rate that will be decayed over the training period. WEIGHT_DECAY = 0.001 # Used for regularization. CLIP_THRESHOLD = 1.0 # Used for clipping the gradients by L2-norm. # We will first resize the training images to a bigger size and then we will take # random crops of a lower size. BIGGER = 160 RESIZE = 128 """ Explanation: Hyperparameters and constants End of explanation """ train_ds, validation_ds, test_ds = tfds.load( "oxford_flowers102", split=["train", "validation", "test"], as_supervised=True ) print(f"Number of training examples: {train_ds.cardinality()}.") print( f"Number of validation examples: {validation_ds.cardinality()}." ) print(f"Number of test examples: {test_ds.cardinality()}.") """ Explanation: Load the Flowers102 dataset End of explanation """ import os os.environ["KAGGLE_USERNAME"] = "" # TODO: enter your Kaggle user name here os.environ["KAGGLE_KEY"] = "" # TODO: enter your Kaggle API key here !!kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102 !!unzip -qq bitresnet101x3flowers102.zip # Since the teacher model is not going to be trained further we make # it non-trainable. 
teacher_model = keras.models.load_model("T-r101x3-128") teacher_model.trainable = False teacher_model.summary() """ Explanation: Teacher model As is common with any distillation technique, it's important to first train a well-performing teacher model, which is usually larger than the subsequent student model. The authors distill a BiT ResNet152x2 model (teacher) into a BiT ResNet50 model (student). BiT stands for Big Transfer and was introduced in Big Transfer (BiT): General Visual Representation Learning. BiT variants of ResNets use Group Normalization (Wu et al.) and Weight Standardization (Qiao et al.) in place of Batch Normalization (Ioffe et al.). In order to limit the time it takes to run this example, we will be using a BiT ResNet101x3 already trained on the Flowers102 dataset. You can refer to this notebook to learn more about the training process. This model reaches 98.18% accuracy on the test set of Flowers102. The model weights are hosted on Kaggle as a dataset. To download the weights, follow these steps: Create an account on Kaggle here. Go to the "Account" tab of your user profile. Select "Create API Token". This will trigger the download of kaggle.json, a file containing your API credentials. From that JSON file, copy your Kaggle username and API key. Now run the following: ```python import os os.environ["KAGGLE_USERNAME"] = "" # TODO: enter your Kaggle user name here os.environ["KAGGLE_KEY"] = "" # TODO: enter your Kaggle key here ``` Once the environment variables are set, run: shell $ kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102 $ unzip -qq bitresnet101x3flowers102.zip This should generate a folder named T-r101x3-128 in the working directory, which is essentially a teacher SavedModel, and is what we load above. 
End of explanation """ def mixup(images, labels): alpha = tf.random.uniform([], 0, 1) mixedup_images = alpha * images + (1 - alpha) * tf.reverse(images, axis=[0]) # The labels do not matter here since they are NOT used during # training. return mixedup_images, labels def preprocess_image(image, label, train=True): image = tf.cast(image, tf.float32) / 255.0 if train: image = tf.image.resize(image, (BIGGER, BIGGER)) image = tf.image.random_crop(image, (RESIZE, RESIZE, 3)) image = tf.image.random_flip_left_right(image) else: # Central fraction amount is from here: # https://git.io/J8Kda. image = tf.image.central_crop(image, central_fraction=0.875) image = tf.image.resize(image, (RESIZE, RESIZE)) return image, label def prepare_dataset(dataset, train=True, batch_size=BATCH_SIZE): if train: dataset = dataset.map(preprocess_image, num_parallel_calls=AUTO) dataset = dataset.shuffle(BATCH_SIZE * 10) else: dataset = dataset.map( lambda x, y: (preprocess_image(x, y, train)), num_parallel_calls=AUTO ) dataset = dataset.batch(batch_size) if train: dataset = dataset.map(mixup, num_parallel_calls=AUTO) dataset = dataset.prefetch(AUTO) return dataset """ Explanation: The "function matching" recipe To train a high-quality student model, the authors propose the following changes to the student training workflow: Use an aggressive variant of MixUp (Zhang et al.). This is done by sampling the alpha parameter from a uniform distribution instead of a beta distribution. MixUp is used here in order to help the student model capture the function underlying the teacher model. MixUp linearly interpolates between different samples across the data manifold. So the rationale here is that if the student is trained to fit these interpolated samples, it should be able to match the teacher model better. To incorporate more invariance, MixUp is coupled with "Inception-style" cropping (Szegedy et al.). This is where the "function matching" term makes its way into the original paper. 
Unlike other works (Noisy Student Training for example), both the teacher and student models receive the same copy of an image, which is mixed up and randomly cropped. By providing the same inputs to both the models, the authors make the teacher consistent with the student. With MixUp, we are essentially introducing a strong form of regularization when training the student. As such, it should be trained for a relatively long period of time (1000 epochs at least). Since the student is trained with strong regularization, the risk of overfitting due to a longer training schedule is also mitigated. In summary, one needs to be consistent and patient while training the student model. Data input pipeline End of explanation """ train_ds = prepare_dataset(train_ds, True) validation_ds = prepare_dataset(validation_ds, False) test_ds = prepare_dataset(test_ds, False) """ Explanation: Note that for brevity, we used mild crops for the training set, but in practice "Inception-style" preprocessing should be applied. You can refer to this script for a closer implementation. Also, the ground-truth labels are not used for training the student. End of explanation """ sample_images, _ = next(iter(train_ds)) plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(sample_images[n].numpy()) plt.axis("off") plt.show() """ Explanation: Visualization End of explanation """ def get_resnetv2(): resnet_v2 = keras.applications.ResNet50V2( weights=None, input_shape=(RESIZE, RESIZE, 3), classes=102, classifier_activation="linear", ) return resnet_v2 get_resnetv2().count_params() """ Explanation: Student model For the purpose of this example, we will use the standard ResNet50V2 (He et al.). 
End of explanation """ class Distiller(tf.keras.Model): def __init__(self, student, teacher): super(Distiller, self).__init__() self.student = student self.teacher = teacher self.loss_tracker = keras.metrics.Mean(name="distillation_loss") @property def metrics(self): metrics = super().metrics metrics.append(self.loss_tracker) return metrics def compile( self, optimizer, metrics, distillation_loss_fn, temperature=TEMPERATURE, ): super(Distiller, self).compile(optimizer=optimizer, metrics=metrics) self.distillation_loss_fn = distillation_loss_fn self.temperature = temperature def train_step(self, data): # Unpack data x, _ = data # Forward pass of teacher teacher_predictions = self.teacher(x, training=False) with tf.GradientTape() as tape: # Forward pass of student student_predictions = self.student(x, training=True) # Compute loss distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) # Compute gradients trainable_vars = self.student.trainable_variables gradients = tape.gradient(distillation_loss, trainable_vars) # Update weights self.optimizer.apply_gradients(zip(gradients, trainable_vars)) # Report progress self.loss_tracker.update_state(distillation_loss) return {"distillation_loss": self.loss_tracker.result()} def test_step(self, data): # Unpack data x, y = data # Forward passes teacher_predictions = self.teacher(x, training=False) student_predictions = self.student(x, training=False) # Calculate the loss distillation_loss = self.distillation_loss_fn( tf.nn.softmax(teacher_predictions / self.temperature, axis=1), tf.nn.softmax(student_predictions / self.temperature, axis=1), ) # Report progress self.loss_tracker.update_state(distillation_loss) self.compiled_metrics.update_state(y, student_predictions) results = {m.name: m.result() for m in self.metrics} return results """ Explanation: Compared to the teacher model, this model has 358 Million 
fewer parameters. Distillation utility We will reuse some code from this example on knowledge distillation. End of explanation """ # Some code is taken from: # https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2. class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule): def __init__( self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps ): super(WarmUpCosine, self).__init__() self.learning_rate_base = learning_rate_base self.total_steps = total_steps self.warmup_learning_rate = warmup_learning_rate self.warmup_steps = warmup_steps self.pi = tf.constant(np.pi) def __call__(self, step): if self.total_steps < self.warmup_steps: raise ValueError("Total_steps must be larger or equal to warmup_steps.") cos_annealed_lr = tf.cos( self.pi * (tf.cast(step, tf.float32) - self.warmup_steps) / float(self.total_steps - self.warmup_steps) ) learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr) if self.warmup_steps > 0: if self.learning_rate_base < self.warmup_learning_rate: raise ValueError( "Learning_rate_base must be larger or equal to " "warmup_learning_rate." ) slope = ( self.learning_rate_base - self.warmup_learning_rate ) / self.warmup_steps warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate learning_rate = tf.where( step < self.warmup_steps, warmup_rate, learning_rate ) return tf.where( step > self.total_steps, 0.0, learning_rate, name="learning_rate" ) """ Explanation: Learning rate schedule A warmup cosine learning rate schedule is used in the paper. This schedule is also typical for many pre-training methods especially for computer vision. 
End of explanation """ ARTIFICIAL_EPOCHS = 1000 ARTIFICIAL_BATCH_SIZE = 512 DATASET_NUM_TRAIN_EXAMPLES = 1020 TOTAL_STEPS = int( DATASET_NUM_TRAIN_EXAMPLES / ARTIFICIAL_BATCH_SIZE * ARTIFICIAL_EPOCHS ) scheduled_lrs = WarmUpCosine( learning_rate_base=INIT_LR, total_steps=TOTAL_STEPS, warmup_learning_rate=0.0, warmup_steps=1500, ) lrs = [scheduled_lrs(step) for step in range(TOTAL_STEPS)] plt.plot(lrs) plt.xlabel("Step", fontsize=14) plt.ylabel("LR", fontsize=14) plt.show() """ Explanation: We can now plot a graph of learning rates generated using this schedule. End of explanation """ optimizer = tfa.optimizers.AdamW( weight_decay=WEIGHT_DECAY, learning_rate=scheduled_lrs, clipnorm=CLIP_THRESHOLD ) student_model = get_resnetv2() distiller = Distiller(student=student_model, teacher=teacher_model) distiller.compile( optimizer, metrics=[keras.metrics.SparseCategoricalAccuracy()], distillation_loss_fn=keras.losses.KLDivergence(), temperature=TEMPERATURE, ) history = distiller.fit( train_ds, steps_per_epoch=int(np.ceil(DATASET_NUM_TRAIN_EXAMPLES / BATCH_SIZE)), validation_data=validation_ds, epochs=30, # This should be at least 1000. ) student = distiller.student student_model.compile(metrics=["accuracy"]) _, top1_accuracy = student.evaluate(test_ds) print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%") """ Explanation: The original paper uses at least 1000 epochs and a batch size of 512 to perform "function matching". The objective of this example is to present a workflow to implement the recipe and not to demonstrate the results when they are applied at full scale. However, these recipes will transfer to the original settings from the paper. Please refer to this repository if you are interested in finding out more. Training End of explanation """ !# Download the pre-trained weights. 
!!wget https://git.io/JBO3Y -O S-r50x1-128-1000.tar.gz !!tar xf S-r50x1-128-1000.tar.gz pretrained_student = keras.models.load_model("S-r50x1-128-1000") pretrained_student.summary() """ Explanation: Results With just 30 epochs of training, the results are nowhere near what we would expect. This is where the benefits of patience, a.k.a. a longer training schedule, will come into play. Let's investigate what the model trained for 1000 epochs can do. End of explanation """ _, top1_accuracy = pretrained_student.evaluate(test_ds) print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%") """ Explanation: This model exactly follows what the authors have used in their student models. This is why the model summary is a bit different. End of explanation """
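As a closing footnote on the recipe, the uniform-alpha MixUp used in the data pipeline above can be sketched in plain NumPy, outside `tf.data`. The toy batch and helper name here are hypothetical illustrations, not part of the original notebook:

```python
import numpy as np

def mixup_batch(images, alpha=None, rng=None):
    # Mix each image with the batch in reverse order, as the pipeline's
    # tf.reverse-based mixup does; alpha is drawn from Uniform(0, 1)
    # rather than the Beta distribution of standard MixUp.
    rng = np.random.default_rng(rng)
    if alpha is None:
        alpha = rng.uniform(0.0, 1.0)
    return alpha * images + (1.0 - alpha) * images[::-1]

# Toy "images": 4 samples with 2 pixels each.
batch = np.arange(8.0).reshape(4, 2)
mixed = mixup_batch(batch, alpha=0.25)
```

With `alpha=1.0` the batch is returned unchanged; with `alpha=0.25` each sample is dominated by its partner from the reversed batch, which is the aggressive interpolation the recipe relies on.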
junhwanjang/DataSchool
Lecture/14. 선형 회귀 분석/8) patsy 패키지 소개.ipynb
mit
import numpy as np import pandas as pd from patsy import dmatrix, dmatrices np.random.seed(0) x1 = np.random.rand(5) + 10 x2 = np.random.rand(5) * 10 x1, x2 dmatrix("x1") """ Explanation: Introduction to the patsy package A preprocessing package for regression analysis Provides encoding/transform/design-matrix features Supports R-style formula strings design matrix dmatrix(formula[, data]) Takes an R-style formula string and builds the X matrix Automatically adds an intercept (bias) column Looks up variables in the local namespace If a pandas DataFrame is passed via the data parameter, variables are looked up among its column labels End of explanation """ dmatrix("x1 - 1") dmatrix("x1 + 0") dmatrix("x1 + x2") dmatrix("x1 + x2 - 1") df = pd.DataFrame(np.array([x1, x2]).T, columns=["x1", "x2"]) df dmatrix("x1 + x2 - 1", data=df) """ Explanation: R-style formula | Symbol | Description | |-|-| |+| add an explanatory variable | |-| remove an explanatory variable | |1, 0| intercept (use to remove it) | |:| interaction (product) | |*| a*b = a + b + a:b | |/| a/b = a + a:b | |~| separates the dependent and independent variables | End of explanation """ dmatrix("x1 + np.log(np.abs(x2))", data=df) def doubleit(x): return 2 * x dmatrix("doubleit(x1)", data=df) dmatrix("center(x1) + standardize(x2)", data=df) """ Explanation: Transforms numpy function names can be used user-defined functions can be used patsy's built-in function names can be used center(x) standardize(x) scale(x) End of explanation """ dmatrix("x1 + x2", data=df) dmatrix("I(x1 + x2)", data=df) """ Explanation: Protecting variables I() protects an expression from the other formula operators End of explanation """ dmatrix("x1 + I(x1**2) + I(x1**3) + I(x1**4)", data=df) """ Explanation: Polynomial linear regression End of explanation """ df["a1"] = pd.Series(["a1", "a1", "a2", "a2", "a3", "a5"]) df["a2"] = pd.Series([1, 4, 5, 6, 8, 9]) df dmatrix("a1", data=df) dmatrix("a2", data=df) dmatrix("C(a2)", data=df) """ Explanation: Categorical variables End of explanation """
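As a rough cross-check of what `dmatrix` builds for the simplest formulas, the same design matrices can be assembled by hand with NumPy: an intercept column of ones followed by the variable columns, with `- 1` dropping the intercept. This sketch is standalone (it regenerates its own copies of `x1` and `x2`) and only mirrors the numeric case above, not encoding or transforms:

```python
import numpy as np

np.random.seed(0)
x1 = np.random.rand(5) + 10
x2 = np.random.rand(5) * 10

# Equivalent of dmatrix("x1 + x2"): an intercept column plus the variables.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Equivalent of dmatrix("x1 + x2 - 1"): the "- 1" removes the intercept.
X_no_intercept = np.column_stack([x1, x2])
```

Fitting a least-squares regression against `X` then estimates a bias term automatically, which is exactly why patsy inserts the column of ones by default.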
GoogleCloudPlatform/asl-ml-immersion
notebooks/launching_into_ml/labs/supplemental/intro_linear_regression.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns # Seaborn is a Python data visualization library based on matplotlib. %matplotlib inline """ Explanation: Introduction to Linear Regression Learning Objectives Analyze a Pandas DataFrame Create Seaborn plots for Exploratory Data Analysis Train a Linear Regression Model using Scikit-Learn Introduction This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Import Libraries End of explanation """ df_USAhousing = pd.read_csv("../USA_Housing.csv") # Show the first five rows. df_USAhousing.head() """ Explanation: Load the Dataset We will use the USA housing prices dataset found on Kaggle. The data contains the following columns: 'Avg. Area Income': Avg. Income of residents of the city house is located in. 'Avg. Area House Age': Avg Age of Houses in same city 'Avg. Area Number of Rooms': Avg Number of Rooms for Houses in same city 'Avg. Area Number of Bedrooms': Avg Number of Bedrooms for Houses in same city 'Area Population': Population of city house is located in 'Price': Price that the house sold at 'Address': Address for the house Next, we read the dataset into a Pandas dataframe. End of explanation """ df_USAhousing.isnull().sum() df_USAhousing.describe() df_USAhousing.info() """ Explanation: Let's check for any null values. End of explanation """ # TODO 1 -- your code goes here """ Explanation: Let's take a peek at the first and last five rows of the data for all columns. 
Lab Task 1: Print the first and last five rows of the data for all columns. End of explanation """ sns.pairplot(df_USAhousing) sns.distplot(df_USAhousing["Price"]) """ Explanation: Exploratory Data Analysis (EDA) Let's create some simple plots to check out the data! End of explanation """ # TODO 2 -- your code goes here """ Explanation: Lab Task 2: Create the plots using heatmap(): End of explanation """ X = df_USAhousing[ [ "Avg. Area Income", "Avg. Area House Age", "Avg. Area Number of Rooms", "Avg. Area Number of Bedrooms", "Area Population", ] ] y = df_USAhousing["Price"] """ Explanation: Training a Linear Regression Model Regression is a supervised machine learning process. It is similar to classification, but rather than predicting a label, we try to predict a continuous value. Linear regression defines the relationship between a target variable (y) and a set of predictive features (x). Simply stated, if you need to predict a number, then use regression. Let's now begin to train our regression model! We will need to first split up our data into an X array that contains the features to train on, and a y array with the target variable, in this case the Price column. We will toss out the Address column because it only has text info that the linear regression model can't use. X and y arrays Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems. End of explanation """ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.4, random_state=101 ) """ Explanation: Train - Test - Split Now let's split the data into a training set and a testing set. We will train our model on the training set and then use the test set to evaluate the model. Note that we are using 40% of the data for testing. What is Random State? 
If an integer for random state is not specified in the code, then every time the code is executed, a new random value is generated and the train and test datasets will have different values each time. However, if a fixed value is assigned -- like random_state = 0 or 1 or 101 or any other integer, then no matter how many times you execute your code the result would be the same, e.g. the same values will be in the train and test datasets. Thus, the random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order. End of explanation """ from sklearn.linear_model import LinearRegression lm = LinearRegression() """ Explanation: Creating and Training the Model End of explanation """ # TODO 3 -- your code goes here """ Explanation: Lab Task 3: Training the Model using fit(): End of explanation """ # print the intercept print(lm.intercept_) coeff_df = pd.DataFrame(lm.coef_, X.columns, columns=["Coefficient"]) coeff_df """ Explanation: Model Evaluation Let's evaluate the model by checking out its coefficients and how we can interpret them. End of explanation """ predictions = lm.predict(X_test) plt.scatter(y_test, predictions) """ Explanation: Interpreting the coefficients: Holding all other features fixed, a 1 unit increase in Avg. Area Income is associated with an increase of \$21.52. Holding all other features fixed, a 1 unit increase in Avg. Area House Age is associated with an increase of \$164883.28. Holding all other features fixed, a 1 unit increase in Avg. Area Number of Rooms is associated with an increase of \$122368.67. Holding all other features fixed, a 1 unit increase in Avg. Area Number of Bedrooms is associated with an increase of \$2233.80. Holding all other features fixed, a 1 unit increase in Area Population is associated with an increase of \$15.15. Predictions from our Model Let's grab predictions off our test set and see how well it did! 
End of explanation """ sns.distplot((y_test - predictions), bins=50); """ Explanation: Residual Histogram End of explanation """ from sklearn import metrics print("MAE:", metrics.mean_absolute_error(y_test, predictions)) print("MSE:", metrics.mean_squared_error(y_test, predictions)) print("RMSE:", np.sqrt(metrics.mean_squared_error(y_test, predictions))) """ Explanation: Regression Evaluation Metrics Here are three common evaluation metrics for regression problems: Mean Absolute Error (MAE) is the mean of the absolute value of the errors: $$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$ Mean Squared Error (MSE) is the mean of the squared errors: $$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$ Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors: $$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$ Comparing these metrics: MAE is the easiest to understand, because it's the average error. MSE is more popular than MAE, because MSE "punishes" larger errors, which tends to be useful in the real world. RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units. All of these are loss functions, because we want to minimize them. End of explanation """
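The three formulas above can also be computed directly. A small standalone NumPy sketch (with made-up targets and predictions, since the lab's values depend on the fitted model) confirms that RMSE is just the square root of MSE:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean of the absolute errors.
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # Mean of the squared errors; penalizes large errors more heavily.
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    # Square root of MSE, so it is back in the units of y.
    return np.sqrt(mse(y_true, y_pred))

# Made-up targets and predictions for illustration.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
```

For these values MAE is 0.5 and MSE is 0.375, matching what `sklearn.metrics.mean_absolute_error` and `mean_squared_error` would report on the same arrays.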
edwardd1/phys202-2015-work
assignments/assignment04/MatplotlibEx02.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 2 Imports End of explanation """ !head -n 30 open_exoplanet_catalogue.txt """ Explanation: Exoplanet properties Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets. http://iopscience.iop.org/1402-4896/2008/T130/014001 Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo: https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data: End of explanation """ data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',') data #raise NotImplementedError() assert data.shape==(1993,24) """ Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data: End of explanation """ # The mass column index (2) is assumed from the column documentation # printed by the head command above; drop NaNs before plotting. masses = data[:, 2] masses = masses[~np.isnan(masses)] plt.hist(masses, bins=50) plt.xlabel('planet mass (Jupiter masses)') #raise NotImplementedError() assert True # leave for grading """ Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. Pick the number of bins for the histogram appropriately. End of explanation """ # YOUR CODE HERE raise NotImplementedError() assert True # leave for grading """ Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """
AllenDowney/ThinkStats2
workshop/effect_size_soln.ipynb
gpl-3.0
%matplotlib inline import numpy import scipy.stats import matplotlib.pyplot as plt from ipywidgets import interact, interactive, fixed import ipywidgets as widgets # seed the random number generator so we all get the same results numpy.random.seed(17) """ Explanation: Effect Size Examples and exercises for a tutorial on statistical inference. Copyright 2016 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation """ mu1, sig1 = 178, 7.7 male_height = scipy.stats.norm(mu1, sig1) mu2, sig2 = 163, 7.3 female_height = scipy.stats.norm(mu2, sig2) """ Explanation: Part One To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S. I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable). End of explanation """ def eval_pdf(rv, num=4): mean, std = rv.mean(), rv.std() xs = numpy.linspace(mean - num*std, mean + num*std, 100) ys = rv.pdf(xs) return xs, ys """ Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes and rv object and returns a pair of NumPy arrays. End of explanation """ xs, ys = eval_pdf(male_height) plt.plot(xs, ys, label='male', linewidth=4, color='C0') xs, ys = eval_pdf(female_height) plt.plot(xs, ys, label='female', linewidth=4, color='C1') plt.xlabel('height (cm)'); """ Explanation: Here's what the two distributions look like. End of explanation """ male_sample = male_height.rvs(1000) female_sample = female_height.rvs(1000) """ Explanation: Let's assume for now that those are the true distributions for the population. I'll use rvs to generate random samples from the population distributions. 
Note that these are totally random, totally representative samples, with no measurement error! End of explanation """ mean1, std1 = male_sample.mean(), male_sample.std() mean1, std1 """ Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation. End of explanation """ mean2, std2 = female_sample.mean(), female_sample.std() mean2, std2 """ Explanation: The sample mean is close to the population mean, but not exact, as expected. End of explanation """ difference_in_means = male_sample.mean() - female_sample.mean() difference_in_means # in cm """ Explanation: And the results are similar for the female sample. Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means: End of explanation """ # Solution goes here relative_difference = difference_in_means / male_sample.mean() print(relative_difference * 100) # percent # A problem with relative differences is that you have to choose # which mean to express them relative to. relative_difference = difference_in_means / female_sample.mean() print(relative_difference * 100) # percent """ Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems: Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not. The magnitude of the difference depends on the units of measure, making it hard to compare across different studies. There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean. Exercise 1: what is the relative difference in means, expressed as a percentage? End of explanation """ simple_thresh = (mean1 + mean2) / 2 simple_thresh """ Explanation: STOP HERE: We'll regroup and discuss before you move on. 
Part Two An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means: End of explanation """ thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2) thresh """ Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross. End of explanation """ male_below_thresh = sum(male_sample < thresh) male_below_thresh """ Explanation: In this example, there's not much difference between the two thresholds. Now we can count how many men are below the threshold: End of explanation """ female_above_thresh = sum(female_sample > thresh) female_above_thresh """ Explanation: And how many women are above it: End of explanation """ male_overlap = male_below_thresh / len(male_sample) female_overlap = female_above_thresh / len(female_sample) male_overlap, female_overlap """ Explanation: The "overlap" is the area under the curves that ends up on the wrong side of the threshold. End of explanation """ misclassification_rate = (male_overlap + female_overlap) / 2 misclassification_rate """ Explanation: In practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex, which is the average of the male and female overlap rates: End of explanation """ # Solution goes here sum(x > y for x, y in zip(male_sample, female_sample)) / len(male_sample) # Solution goes here (male_sample > female_sample).mean() """ Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman. Exercise 2: Suppose I choose a man and a woman at random. What is the probability that the man is taller? 
HINT: You can zip the two samples together and count the number of pairs where the male is taller, or use NumPy array operations. End of explanation """ def CohenEffectSize(group1, group2): """Compute Cohen's d. group1: Series or NumPy array group2: Series or NumPy array returns: float """ diff = group1.mean() - group2.mean() n1, n2 = len(group1), len(group2) var1 = group1.var() var2 = group2.var() pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2) d = diff / numpy.sqrt(pooled_var) return d """ Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties: As probabilities, they don't depend on units of measure, so they are comparable between studies. They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes. Cohen's effect size There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's the math notation: $ d = \frac{\bar{x}_1 - \bar{x}_2} s $ where $s$ is the pooled standard deviation: $s = \sqrt{\frac{n_1 s^2_1 + n_2 s^2_2}{n_1+n_2}}$ Here's a function that computes it: End of explanation """ CohenEffectSize(male_sample, female_sample) """ Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups. And here's the result for the difference in height between men and women. End of explanation """ def overlap_superiority(control, treatment, n=1000): """Estimates overlap and superiority based on a sample. 
control: scipy.stats rv object treatment: scipy.stats rv object n: sample size """ control_sample = control.rvs(n) treatment_sample = treatment.rvs(n) thresh = (control.mean() + treatment.mean()) / 2 control_above = sum(control_sample > thresh) treatment_below = sum(treatment_sample < thresh) overlap = (control_above + treatment_below) / n superiority = (treatment_sample > control_sample).mean() return overlap, superiority """ Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated. Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority. End of explanation """ def plot_pdfs(cohen_d=2): """Plot PDFs for distributions that differ by some number of stds. cohen_d: number of standard deviations between the means """ control = scipy.stats.norm(0, 1) treatment = scipy.stats.norm(cohen_d, 1) xs, ys = eval_pdf(control) plt.fill_between(xs, ys, label='control', color='C1', alpha=0.5) xs, ys = eval_pdf(treatment) plt.fill_between(xs, ys, label='treatment', color='C0', alpha=0.5) o, s = overlap_superiority(control, treatment) plt.text(0, 0.05, 'overlap ' + str(o)) plt.text(0, 0.15, 'superiority ' + str(s)) plt.show() #print('overlap', o) #print('superiority', s) """ Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority. End of explanation """ plot_pdfs(2) """ Explanation: Here's an example that demonstrates the function: End of explanation """ slider = widgets.FloatSlider(min=0, max=4, value=2) interact(plot_pdfs, cohen_d=slider); """ Explanation: And an interactive widget you can use to visualize what different values of $d$ mean: End of explanation """
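The sampled overlap and superiority that plot_pdfs prints can be cross-checked analytically: for two unit-variance normals whose means differ by Cohen's $d$, the difference of a random treatment/control pair is normal with mean $d$ and variance 2, so superiority is $\Phi(d/\sqrt{2})$, and the total mass on the wrong side of the midpoint threshold is $2\Phi(-d/2)$. A standard-library sketch (the closed forms are textbook normal-theory results, not code from the functions above):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def superiority_from_d(d):
    """P(treatment > control) for unit-variance normals: Phi(d / sqrt(2))."""
    return normal_cdf(d / math.sqrt(2.0))

def overlap_from_d(d):
    """Total area on the wrong side of the midpoint threshold, 2 * Phi(-d/2).
    This matches overlap_superiority's 'overlap'; the misclassification
    rate is half this value."""
    return 2.0 * normal_cdf(-d / 2.0)

# For d = 2, as in plot_pdfs(2), these should roughly match the sampled
# 'overlap' and 'superiority' printed on the plot.
sup = superiority_from_d(2.0)  # ≈ 0.921
ovl = overlap_from_d(2.0)      # ≈ 0.317
```

Comparing these values against the sampling-based estimates is a good sanity check that overlap_superiority is implemented correctly.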
mne-tools/mne-tools.github.io
stable/_downloads/a9e07affc8c71aa96bb4ffe855ff552c/morph_surface_stc.ipynb
bsd-3-clause
# Author: Tommy Clausner <tommy.clausner@gmail.com>
#
# License: BSD-3-Clause

import os
import os.path as op

import mne
from mne.datasets import sample

print(__doc__)
"""
Explanation: Morph surface source estimate
This example demonstrates how to morph an individual subject's
:class:mne.SourceEstimate to a common reference space. We achieve this using
:class:mne.SourceMorph. Pre-computed data will be morphed based on a spherical
representation of the cortex computed using the spherical registration of
FreeSurfer <tut-freesurfer-mne>
(https://surfer.nmr.mgh.harvard.edu/fswiki/SurfaceRegAndTemplates)
:footcite:GreveEtAl2013. This transform will be used to morph the surface
vertices of the subject towards the reference vertices. Here we will use
'fsaverage' as a reference space (see
https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage).
The transformation will be applied to the surface source estimate. A plot
depicting the successful morph will be created for the spherical and inflated
surface representation of 'fsaverage', overlaid with the morphed surface
source estimate.
<div class="alert alert-info"><h4>Note</h4><p>For background information about morphing see `ch_morph`.</p></div> End of explanation """ data_path = sample.data_path() sample_dir = op.join(data_path, 'MEG', 'sample') subjects_dir = op.join(data_path, 'subjects') fname_src = op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif') fname_fwd = op.join(sample_dir, 'sample_audvis-meg-oct-6-fwd.fif') fname_fsaverage_src = os.path.join(subjects_dir, 'fsaverage', 'bem', 'fsaverage-ico-5-src.fif') fname_stc = os.path.join(sample_dir, 'sample_audvis-meg') """ Explanation: Setup paths End of explanation """ # Read stc from file stc = mne.read_source_estimate(fname_stc, subject='sample') """ Explanation: Load example data End of explanation """ src_orig = mne.read_source_spaces(fname_src) print(src_orig) # n_used=4098, 4098 fwd = mne.read_forward_solution(fname_fwd) print(fwd['src']) # n_used=3732, 3766 print([len(v) for v in stc.vertices]) """ Explanation: Setting up SourceMorph for SourceEstimate In MNE, surface source estimates represent the source space simply as lists of vertices (see tut-source-estimate-class). This list can either be obtained from :class:mne.SourceSpaces (src) or from the stc itself. If you use the source space, be sure to use the source space from the forward or inverse operator, because vertices can be excluded during forward computation due to proximity to the BEM inner skull surface: End of explanation """ src_to = mne.read_source_spaces(fname_fsaverage_src) print(src_to[0]['vertno']) # special, np.arange(10242) morph = mne.compute_source_morph(stc, subject_from='sample', subject_to='fsaverage', src_to=src_to, subjects_dir=subjects_dir) """ Explanation: We also need to specify the set of vertices to morph to. This can be done using the spacing parameter, but for consistency it's better to pass the src_to parameter. 
<div class="alert alert-info"><h4>Note</h4><p>Since the default values of :func:`mne.compute_source_morph` are ``spacing=5, subject_to='fsaverage'``, in this example we could actually omit the ``src_to`` and ``subject_to`` arguments below. The ico-5 ``fsaverage`` source space contains the special values ``[np.arange(10242)] * 2``, but in general this will not be true for other spacings or other subjects. Thus it is recommended to always pass the destination ``src`` for consistency.</p></div>

Initialize SourceMorph for SourceEstimate
End of explanation
"""
stc_fsaverage = morph.apply(stc)
"""
Explanation: Apply morph to (Vector) SourceEstimate
The morph will be applied to the source estimate data by giving it as the
first argument to the morph we computed above.
End of explanation
"""
# Define plotting parameters
surfer_kwargs = dict(
    hemi='lh', subjects_dir=subjects_dir,
    clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
    initial_time=0.09, time_unit='s', size=(800, 800),
    smoothing_steps=5)

# As spherical surface
brain = stc_fsaverage.plot(surface='sphere', **surfer_kwargs)

# Add title
brain.add_text(0.1, 0.9, 'Morphed to fsaverage (spherical)', 'title',
               font_size=16)
"""
Explanation: Plot results
End of explanation
"""
brain_inf = stc_fsaverage.plot(surface='inflated', **surfer_kwargs)

# Add title
brain_inf.add_text(0.1, 0.9, 'Morphed to fsaverage (inflated)', 'title',
                   font_size=16)
"""
Explanation: As inflated surface
End of explanation
"""
stc_fsaverage = mne.compute_source_morph(stc,
                                         subjects_dir=subjects_dir).apply(stc)
"""
Explanation: Reading and writing SourceMorph from and to disk
An instance of SourceMorph can be saved by calling
:meth:morph.save <mne.SourceMorph.save>.
This method allows for specification of a filename under which the morph
will be saved in ".h5" format.
If no file extension is provided, "-morph.h5" will be appended to the
specified filename::

    >>> morph.save('my-file-name')

Reading a saved source morph can be achieved by using
:func:mne.read_source_morph::

    >>> morph = mne.read_source_morph('my-file-name-morph.h5')

Once the environment is set up correctly, no information such as
subject_from or subjects_dir needs to be provided, since it can be inferred
from the data, and the morph targets 'fsaverage' by default. SourceMorph can
also be used without creating an instance and assigning it to a variable.
Instead :func:mne.compute_source_morph and :meth:mne.SourceMorph.apply can
be easily chained into a handy one-liner. Taken together, the shortest
possible way to morph data directly would be:
End of explanation
"""
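Conceptually, applying a surface morph boils down to a weighted mapping between vertex lists: each destination ('fsaverage') vertex receives a weighted combination of values from nearby subject vertices, which MNE stores internally as a sparse matrix. A hypothetical, stripped-down sketch of that idea in plain Python (the weights below are made up for illustration, not taken from the actual morph):

```python
# Each destination vertex is defined by (source_vertex_index, weight) pairs;
# the weights for one destination vertex should sum to 1.
morph_rows = [
    [(0, 1.0)],               # dest vertex 0 copies source vertex 0
    [(0, 0.5), (1, 0.5)],     # dest vertex 1 averages source vertices 0 and 1
    [(1, 0.25), (2, 0.75)],   # dest vertex 2 sits closer to source vertex 2
]

def apply_morph(morph_rows, source_values):
    """Morph per-vertex data to the destination space (one time point)."""
    return [sum(w * source_values[i] for i, w in row) for row in morph_rows]

source_values = [2.0, 4.0, 8.0]
dest_values = apply_morph(morph_rows, source_values)  # [2.0, 3.0, 7.0]
```

In the real pipeline this product is applied to every time point of the source estimate at once, which is why precomputing and saving the morph is worthwhile.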
SnShine/aima-python
mdp.ipynb
mit
from mdp import *
from notebook import psource, pseudocode
"""
Explanation: Markov decision processes (MDPs)
This IPy notebook acts as supporting material for topics covered in Chapter 17 Making Complex Decisions of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the mdp.py module. This notebook also includes a brief summary of the main topics as a review. Let us import everything from the mdp module to get started.
End of explanation
"""
%psource MDP
"""
Explanation: CONTENTS
Overview
MDP
Grid MDP
Value Iteration
Visualization
OVERVIEW
Before we start playing with the actual implementations let us review a couple of things about MDPs.
A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it.
-- Source: Wikipedia
Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state. MDPs help us deal with fully-observable and non-deterministic/stochastic environments. For dealing with partially-observable and stochastic cases we make use of a generalization of MDPs named POMDPs (partially observable Markov decision processes).
Our overall goal in solving an MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards.
MDP
To begin with let us look at the implementation of the MDP class defined in mdp.py. The docstring tells us what is required to define an MDP, namely: a set of states, actions, an initial state, a transition model, and a reward function. Each of these is implemented as a method. Do not close the popup so that you can follow along the description of code below.
End of explanation
"""
# Transition Matrix as nested dict.
# State -> Actions in state -> States by each action -> Probability
t = {
    "A": {
            "X": {"A":0.3, "B":0.7},
            "Y": {"A":1.0}
         },
    "B": {
            "X": {"End":0.8, "B":0.2},
            "Y": {"A":1.0}
         },
    "End": {}
}

init = "A"

terminals = ["End"]

rewards = {
    "A": 5,
    "B": -10,
    "End": 100
}

class CustomMDP(MDP):

    def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9):
        # All possible actions (the keys of each state's inner dict).
        actlist = []
        for state in transition_matrix.keys():
            actlist.extend(transition_matrix[state].keys())
        actlist = list(set(actlist))

        MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma)
        self.t = transition_matrix
        self.reward = rewards
        for state in self.t:
            self.states.add(state)

    def T(self, state, action):
        return [(new_state, prob) for new_state, prob in self.t[state][action].items()]
"""
Explanation: The __init__ method takes in the following parameters:
init: the initial state.
actlist: List of actions possible in each state.
terminals: List of terminal states where the only possible action is exit
gamma: Discounting factor. This makes sure that delayed rewards have less value compared to immediate ones.
R method returns the reward for each state by using the self.reward dict.
T method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to the list of possible states reached by taking action a in state s.
actions method returns the list of actions possible in each state. By default it returns all actions for states other than terminal states.
Now let us implement the simple MDP in the image below. States A, B have actions X, Y available in them. Their probabilities are shown just above the arrows. We start by using MDP as the base class for our CustomMDP. Obviously we need to make a few changes to suit our case. We make use of a transition matrix as our transitions are not very simple.
<img src="files/images/mdp-a.png">
End of explanation
"""
our_mdp = CustomMDP(t, rewards, terminals, init, gamma=.9)
"""
Explanation: Finally we instantiate the class with the parameters for our MDP in the picture.
End of explanation
"""
%psource GridMDP
"""
Explanation: With this we have successfully represented our MDP. Later we will look at ways to solve this MDP.
GRID MDP
Now we look at a concrete implementation that makes use of MDP as its base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in Fig 17.1 of the AIMA Book. The code should be easy to understand if you have gone through the CustomMDP example.
End of explanation
"""
sequential_decision_environment
"""
Explanation: The __init__ method takes grid as an extra parameter compared to the MDP class. The grid is a nested list of the rewards in states.
go method returns the state reached by going in a particular direction, using vector_add.
T method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to the list of possible states reached by taking action a in state s.
actions method returns the list of actions possible in each state. By default it returns all actions for states other than terminal states.
to_arrows is used for representing the policy in a grid-like format.
We can create a GridMDP like the one in Fig 17.1 as follows:
GridMDP([[-0.04, -0.04, -0.04, +1],
        [-0.04, None, -0.04, -1],
        [-0.04, -0.04, -0.04, -0.04]],
        terminals=[(3, 2), (3, 1)])
In fact the sequential_decision_environment in the mdp module has been instantiated using the exact same code.
End of explanation
"""
psource(value_iteration)
"""
Explanation: Value Iteration
Now that we have looked at how to represent MDPs, let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start by looking at Value Iteration and a visualisation that should help us understand it better.
We start by calculating the Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy pi. The algorithm Value Iteration (Fig. 17.4 in the book) relies on finding solutions of the Bellman Equation. The intuition for why Value Iteration works is that values propagate. This point will be more clear after we encounter the visualisation. For more information you can refer to Section 17.2 of the book.
End of explanation
"""
value_iteration(sequential_decision_environment)
"""
Explanation: It takes as inputs two parameters, an MDP to solve and epsilon, the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and the values represent utilities. Let us solve the sequential_decision_environment GridMDP.
End of explanation
"""
pseudocode("Value-Iteration")
"""
Explanation: The pseudocode for the algorithm:
End of explanation
"""
def value_iteration_instru(mdp, iterations=20):
    U_over_time = []
    U1 = {s: 0 for s in mdp.states}
    R, T, gamma = mdp.R, mdp.T, mdp.gamma
    for _ in range(iterations):
        U = U1.copy()
        for s in mdp.states:
            U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)])
                                        for a in mdp.actions(s)])
        U_over_time.append(U)
    return U_over_time
"""
Explanation: VALUE ITERATION VISUALIZATION
To illustrate that values propagate out of states let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want.
End of explanation
"""
columns = 4
rows = 3
U_over_time = value_iteration_instru(sequential_decision_environment)

%matplotlib inline
from notebook import make_plot_grid_step_function

plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time)

import ipywidgets as widgets
from IPython.display import display
from notebook import make_visualize

iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0)
w = widgets.interactive(plot_grid_step, iteration=iteration_slider)
display(w)

visualize_callback = make_visualize(iteration_slider)

visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:', options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize=visualize_button, time_step=time_select)
display(a)
"""
Explanation: Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io
End of explanation
"""
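As a standalone cross-check (no mdp.py required), here is a minimal value iteration run directly on the nested-dict MDP defined earlier in this notebook. This is a simplified sketch, not the library's implementation: terminal states are simply pinned to their reward, and a fixed iteration count replaces the epsilon-based stopping test.

```python
t = {"A": {"X": {"A": 0.3, "B": 0.7}, "Y": {"A": 1.0}},
     "B": {"X": {"End": 0.8, "B": 0.2}, "Y": {"A": 1.0}},
     "End": {}}
rewards = {"A": 5, "B": -10, "End": 100}

def simple_value_iteration(t, rewards, gamma=0.9, iterations=200):
    U = {s: 0.0 for s in t}
    for _ in range(iterations):
        U_new = {}
        for s in t:
            if t[s]:
                # Bellman backup: immediate reward plus the discounted
                # expected utility of the best action's successors.
                U_new[s] = rewards[s] + gamma * max(
                    sum(p * U[s2] for s2, p in outcomes.items())
                    for outcomes in t[s].values())
            else:
                # Terminal state: utility is just its reward.
                U_new[s] = float(rewards[s])
        U = U_new
    return U

U = simple_value_iteration(t, rewards)
# Solving the Bellman equations by hand gives the fixpoint:
# U(End) = 100, U(B) = 62 / 0.82 ≈ 75.61 (action X), U(A) ≈ 72.10 (action X)
```

Because the Bellman backup is a contraction with factor gamma, the values converge geometrically, which is exactly the "values propagate" behaviour the visualisation above shows on the grid.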
moosekaka/sweepython
Pipeline.ipynb
mit
%matplotlib inline import sys import errno import os import os.path as op import cPickle as pickle import wrappers as wr from pipeline import pipefuncs as pf from pipeline import _make_networkx as mn from mayavi import mlab from IPython.display import Image from mombud.vtk_viz import vtkvizfuncs as vf mlab.options.offscreen = True """ Explanation: Pipeline example End of explanation """ datadir = op.join(os.getcwd(),'data', 'pipeline') """ Explanation: Working directory for input and output data End of explanation """ try: with open(op.join(datadir, 'fileMetas.pkl'), 'rb') as inpt: filemetas = pickle.load(inpt) except IOError: print "Error: Make sure you have file metadatas in working directory" try: vtkSkel = wr.swalk(datadir, '*skeleton.vtk', stop=-13) vtkVolRfp = wr.swalk(datadir, '*RF*resampled.vtk', stop=-14) vtkVolGfp = wr.swalk(datadir, '*GF*resampled.vtk', stop=-14) vtkSurf = wr.swalk(datadir, '*urface*.vtk', stop=-12) except Exception: print "Error: check your filepaths" sys.exit() key = '052315_009_RFPstack_012' """ Explanation: The pipeline starts with taking the skeletonized data output from MitoGraph v2.0 along with the raw 3D volume data in both channels (GFP and RFP) along with the surface file End of explanation """ figone = mlab.figure(figure=key, size=(1200, 800), bgcolor=(.086, .086, .086)) vtkobj, vtktube = vf.cellplot(figone, vtkSkel[key], scalartype='Width', rad=.06) vtktube.actor.mapper.scalar_visibility = False mlab.savefig(op.join(datadir, 'bare' + '.png')) mlab.close(figone) Image(filename=op.join(datadir, 'bare' + '.png')) """ Explanation: The skeleton has no scalar values yet End of explanation """ data = pf.add_scalars(vtkSkel[key], vtkVolRfp[key], vtkVolGfp[key.replace('RFP', 'GFP')]) filename = op.join(datadir, 'Norm_%s_skeleton.vtk' % key) nm, rw, rb, gb, wq = pf.normSkel(data, filemetas['YPD_'+key]) calc = {'DY_minmax': nm, 'DY_raw': rw, 'bkstRFP': rb, 'bkstGFP': gb, 'WidthEq': wq} pf.writevtk(data, filename, **calc) vtkNorm = 
wr.swalk(op.join(datadir, ), 'N*Skeleton.vtk', start=5, stop=-13)

# render the various skeleton heatmaps offscreen as Mayavi doesn't allow easy
# inline plotting in IPython
for t in calc:
    figone = mlab.figure(figure=key, size=(1200, 800), bgcolor=(.086, .086, .086))
    _, vtktube = vf.cellplot(figone, vtkNorm[key], scalartype=t, rad=.08)
    mlab.savefig(op.join(datadir, t + '.png'))
    mlab.close(figone)
"""
Explanation: We calculate scalar values using a point cloud averaging along every pixel in the skeleton. A graph network is also calculated by examining the end point of each line segment in the skeleton for coincident points.
End of explanation
"""
Image(filename=op.join(datadir, 'DY_raw' + '.png'))
"""
Explanation: Raw heatmap of function (mito. membrane potential ΔΨ).
End of explanation
"""
figone = mlab.figure(figure=key, size=(1200, 800), bgcolor=(.086, .086, .086))
vtkobj, vtktube = vf.cellplot(figone, vtkNorm[key], rad=.06)
vtktube.actor.mapper.scalar_visibility = False
_, _, nxgrph = mn.makegraph(vf.callreader(vtkNorm[key]), key)
vf.labelbpoints(nxgrph, bsize=.10, esize=0.08)
mlab.savefig(op.join(datadir, 'bare2' + '.png'))
mlab.close(figone)
Image(filename=op.join(datadir, 'bare2' + '.png'))
"""
Explanation: The network can now be viewed as a weighted graph with branchpoints (magenta dots) and end points (cyan dots).
End of explanation
"""
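The "point cloud averaging" step above can be pictured with a toy version: for each skeleton point, average the intensity of all volume points that fall within a fixed radius. This is a plain-Python sketch of the idea only; the real pipeline does this on VTK point clouds via pipefuncs, and the function name and sample coordinates here are made up.

```python
def average_nearby(skel_points, vol_points, vol_values, radius=1.0):
    """For each skeleton point, mean of volume values within `radius`."""
    r2 = radius * radius
    out = []
    for sx, sy, sz in skel_points:
        hits = [v for (x, y, z), v in zip(vol_points, vol_values)
                if (x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2 <= r2]
        # No neighbours within the radius -> fall back to 0.0
        out.append(sum(hits) / len(hits) if hits else 0.0)
    return out

skel = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
vol = [(0.1, 0.0, 0.0), (0.0, 0.2, 0.0), (5.0, 0.1, 0.0)]
vals = [10.0, 20.0, 7.0]
print(average_nearby(skel, vol, vals))  # [15.0, 7.0]
```

A production version would use a spatial index (e.g. a k-d tree) rather than this brute-force scan, but the averaging logic is the same.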
tensorflow/tfjs-models
speech-commands/training/browser-fft/tflite_conversion.ipynb
apache-2.0
# We need scipy for .wav file IO.
!pip install -q tensorflowjs==2.1.0 scipy==1.4.1
# TensorFlow 2.3.0 is required due to https://github.com/tensorflow/tensorflow/issues/38135
# TODO: Switch to 2.3.0 final release when it comes out.
!pip install tensorflow-cpu==2.3.0
"""
Explanation: Converting a TensorFlow.js Speech-Commands Model to Python and TFLite formats
This notebook showcases how to convert a TensorFlow.js (TF.js) Speech Commands model to the Python (tensorflow.keras) and TFLite formats. The TFLite format enables the model to be deployed to mobile environments such as Android phones.
The techniques outlined in this notebook are applicable to:
- the original Speech Commands models (including the 18w and directional4w variants),
- transfer-learned models based on the original models, which can be trained and exported from Teachable Machine's Audio Project
First, install the required tensorflow and tensorflowjs Python packages.
End of explanation
"""
!mkdir -p /tmp/tfjs-sc-model
!curl -o /tmp/tfjs-sc-model/metadata.json -fsSL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/v0.3/browser_fft/18w/metadata.json
!curl -o /tmp/tfjs-sc-model/model.json -fsSL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/v0.3/browser_fft/18w/model.json
!curl -o /tmp/tfjs-sc-model/group1-shard1of2 -fSsL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/v0.3/browser_fft/18w/group1-shard1of2
!curl -o /tmp/tfjs-sc-model/group1-shard2of2 -fsSL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/v0.3/browser_fft/18w/group1-shard2of2

import json

import tensorflow as tf
import tensorflowjs as tfjs

# Specify the path to the TensorFlow.js Speech Commands model
# (either original or transfer-learned on https://teachablemachine.withgoogle.com/)
tfjs_model_json_path = '/tmp/tfjs-sc-model/model.json'

# This is the main classifier model.
model = tfjs.converters.load_keras_model(tfjs_model_json_path)
"""
Explanation: Below we download the files of the original or transfer-learned TF.js Speech Commands model. The code example here downloads the original model. But the approach is the same for a transfer-learned model downloaded from Teachable Machine, except that the files may come in as a ZIP archive in the case of Teachable Machine and hence require unzipping.
End of explanation
"""
!curl -o /tmp/tfjs-sc-model/sc_preproc_model.tar.gz -fSsL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/conversion/sc_preproc_model.tar.gz
!cd /tmp/tfjs-sc-model && tar xzvf ./sc_preproc_model.tar.gz

# Load the preprocessing layer (wrapped in a tf.keras Model).
preproc_model_path = '/tmp/tfjs-sc-model/sc_preproc_model'
preproc_model = tf.keras.models.load_model(preproc_model_path)
preproc_model.summary()

# From the input_shape of the preproc_model, we can determine the
# required length of the input audio snippet.
input_length = preproc_model.input_shape[-1]
print("Input audio length = %d" % input_length)

# Construct the new non-browser model by combining the preprocessing
# layer with the main classifier model.
combined_model = tf.keras.Sequential(name='combined_model')
combined_model.add(preproc_model)
combined_model.add(model)
combined_model.build([None, input_length])
combined_model.summary()
"""
Explanation: As a required step, we download the audio preprocessing layer that replicates WebAudio's Fourier transform for non-browser environments such as Android phones.
End of explanation
"""
!curl -o /tmp/tfjs-sc-model/audio_sample_one_male_adult.wav -fSsL https://storage.googleapis.com/tfjs-models/tfjs/speech-commands/conversion/audio_sample_one_male_adult.wav

# Listen to the audio sample.
wav_file_path = '/tmp/tfjs-sc-model/audio_sample_one_male_adult.wav'
import IPython.display as ipd
ipd.Audio(wav_file_path)  # Play the .wav file.
# Read the wav file and truncate it to an input length
# suitable for the model.
from scipy.io import wavfile

# fs: sample rate in Hz; xs: the audio PCM samples.
fs, xs = wavfile.read(wav_file_path)
if len(xs) >= input_length:
    xs = xs[:input_length]
else:
    raise ValueError("Audio from .wav file is too short")

# Try running some examples through the combined model.
input_tensor = tf.constant(xs, shape=(1, input_length), dtype=tf.float32) / 32768.0
# The model outputs the probabilities for the classes (`probs`).
probs = combined_model.predict(input_tensor)

# Read class labels of the model.
metadata_json_path = '/tmp/tfjs-sc-model/metadata.json'
with open(metadata_json_path, 'r') as f:
    metadata = json.load(f)
class_labels = metadata["words"]

# Get sorted probabilities and their corresponding class labels.
probs_and_labels = list(zip(probs[0].tolist(), class_labels))
# Sort the probabilities in descending order.
probs_and_labels = sorted(probs_and_labels, key=lambda x: -x[0])
probs_and_labels
# len(probs_and_labels)

# Print the top-5 labels:
print('top-5 class probabilities:')
for i in range(5):
    prob, label = probs_and_labels[i]
    print('%20s: %.4e' % (label, prob))

# Save the model as a tflite file.
tflite_output_path = '/tmp/tfjs-sc-model/combined_model.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(combined_model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS
]
with open(tflite_output_path, 'wb') as f:
    f.write(converter.convert())
print("Saved tflite file at: %s" % tflite_output_path)
"""
Explanation: In order to quickly test that the converted model works, let's download a sample .wav file.
End of explanation
"""
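Two details above are easy to get wrong when reimplementing the inference path on the mobile side: int16 PCM samples must be scaled by 32768 into [-1.0, 1.0), and the class probabilities must be ranked against the labels from metadata.json. A dependency-free sketch of both steps (the sample values and labels below are made up for illustration):

```python
import heapq

def normalize_pcm(samples):
    """Scale int16 PCM samples into [-1.0, 1.0), as done before inference."""
    return [s / 32768.0 for s in samples]

def top_k(probs, labels, k=5):
    """Return the k (label, prob) pairs with the highest probability."""
    return heapq.nlargest(k, zip(labels, probs), key=lambda lp: lp[1])

pcm = [0, 16384, -32768, 32767]
print(normalize_pcm(pcm))  # [0.0, 0.5, -1.0, 0.999969482421875]

labels = ['yes', 'no', 'up', 'down', 'stop', 'go']
probs = [0.01, 0.6, 0.05, 0.2, 0.1, 0.04]
print(top_k(probs, labels, k=3))  # [('no', 0.6), ('down', 0.2), ('stop', 0.1)]
```

The same scaling and ranking would be mirrored in the Android code that feeds the exported .tflite model.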
adamsteer/nci-notebooks
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
apache-2.0
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
%matplotlib inline
#import plot_lidar
from datetime import datetime
"""
Explanation: What is the proposed task:
ingest some LiDAR points into an HDF file
ingest the aircraft trajectory into the file
anything else
...and then extract data from the HDF file at different rates using a spatial 'query'
What do the data look like?
ASCII point clouds with the following attributes, currently used as a full M x N array:
- time (GPS second of week, float)
- X coordinate (UTM metres, float)
- Y coordinate (UTM metres, float)
- Z coordinate (ellipsoidal height, float)
- return intensity (unscaled, float)
- scan angle (degrees, float)
Everything above is easily stored in the binary .LAS format (or .LAZ). It is kept in ASCII because the following additional data have no slots in .LAS:
X uncertainty (aircraft reference frame, m, float)
Y uncertainty (aircraft reference frame, m, float)
Z uncertainty (aircraft reference frame, m, float)
3D uncertainty (metres, float)
...and optionally (depending on the use case):
aircraft trajectory height (to ITRF08, metres)
aircraft position uncertainty X (metres, relative to aircraft position)
aircraft and sensor attributes
...and derived data:
sea ice elevations (m, float)
estimated snow depths (m, float)
estimated snow depth uncertainty (m, float)
estimated ice thickness (m, float)
estimated ice thickness uncertainty (m, float)
So, you can see how quickly .LAS loses its value. ASCII point clouds are conceptually simple, but very big - and not well suited to use in an HPC context. Too much storage overhead, and you have to read the entire file in order to extract a subset. Six million points gets to around 50MB, it's pretty inefficient. So let's look at about 6 million points...
Let's look at a small set of points
Scenario: A small set of LiDAR and 3D photogrammetry points collected adjacent to a ship (RSV Aurora Australis) parked in sea ice.
Source: AAD's APPLS system (http://seaice.acecrc.org.au/crrs/) https://data.aad.gov.au/aadc/metadata/metadata.cfm?entry_id=SIPEX_II_RAPPLS https://data.aad.gov.au/aadc/metadata/metadata.cfm?entry_id=SIPEX_LiDAR_sea_ice Pretty cover photo: <img src="http://seaice.acecrc.org.au/wp-content/uploads/2013/09/geometry2.png"> End of explanation """ swath = np.genfromtxt('../../PhD/python-phd/swaths/is6_f11_pass1_aa_nr2_522816_523019_c.xyz') import pandas as pd columns = ['time', 'X', 'Y', 'Z', 'I','A', 'x_u', 'y_u', 'z_u', '3D_u'] swath = pd.DataFrame(swath, columns=columns) swath[1:5] """ Explanation: import a LiDAR swath End of explanation """ air_traj = np.genfromtxt('../../PhD/is6_f11/trajectory/is6_f11_pass1_local_ice_rot.3dp') columns = ['time', 'X', 'Y', 'Z', 'R', 'P', 'H', 'x_u', 'y_u', 'z_u', 'r_u', 'p_u', 'h_u'] air_traj = pd.DataFrame(air_traj, columns=columns) air_traj[1:5] """ Explanation: Now load up the aircraft trajectory End of explanation """ fig = plt.figure(figsize = ([30/2.54, 6/2.54])) ax0 = fig.add_subplot(111) a0 = ax0.scatter(swath['Y'], swath['X'], c=swath['Z'] - np.min(swath['Z']), cmap = 'gist_earth', vmin=0, vmax=10, edgecolors=None,lw=0, s=0.6) a1 = ax0.scatter(air_traj['Y'], air_traj['X'], c=air_traj['Z'], cmap = 'Reds', lw=0, s=1) plt.tight_layout() """ Explanation: take a quick look at the data End of explanation """ import h5py #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'w') swath_data = lidar_test.create_group('swath_data') swath_data.create_dataset('GPS_SOW', data=swath['time']) #some data swath_data.create_dataset('UTM_X', data=swath['X']) swath_data.create_dataset('UTM_Y', data=swath['Y']) swath_data.create_dataset('Z', data=swath['Z']) swath_data.create_dataset('INTENS', data=swath['I']) swath_data.create_dataset('ANGLE', data=swath['A']) swath_data.create_dataset('X_UNCERT', data=swath['x_u']) swath_data.create_dataset('Y_UNCERT', data=swath['y_u']) 
swath_data.create_dataset('Z_UNCERT', data=swath['z_u']) swath_data.create_dataset('3D_UNCERT', data=swath['3D_u']) #some attributes lidar_test.attrs['file_name'] = 'lidar_test.hdf5' lidar_test.attrs['codebase'] = 'https://github.com/adamsteer/matlab_LIDAR' """ Explanation: Making a HDF file out of those points End of explanation """ traj_data = lidar_test.create_group('traj_data') #some attributes traj_data.attrs['flight'] = 11 traj_data.attrs['pass'] = 1 traj_data.attrs['source'] = 'RAPPLS flight 11, SIPEX-II 2012' #some data traj_data.create_dataset('pos_x', data = air_traj['X']) traj_data.create_dataset('pos_y', data = air_traj['Y']) traj_data.create_dataset('pos_z', data = air_traj['Z']) """ Explanation: That's some swath data, now some trajectory data at a different sampling rate End of explanation """ lidar_test.close() """ Explanation: close and write the file out End of explanation """ photo = np.genfromtxt('/Users/adam/Documents/PhD/is6_f11/photoscan/is6_f11_photoscan_Cloud.txt',skip_header=1) columns = ['X', 'Y', 'Z', 'R', 'G', 'B'] photo = pd.DataFrame(photo[:,0:6], columns=columns) #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'r+') photo_data = lidar_test.create_group('3d_photo') photo_data.create_dataset('UTM_X', data=photo['X']) photo_data.create_dataset('UTM_Y', data=photo['Y']) photo_data.create_dataset('Z', data=photo['Z']) photo_data.create_dataset('R', data=photo['R']) photo_data.create_dataset('G', data=photo['G']) photo_data.create_dataset('B', data=photo['B']) #del lidar_test['3d_photo'] lidar_test.close() """ Explanation: OK, that's an arbitrary HDF file built The generated file is substantially smaller than the combined sources - 158 MB from 193, with no attention paid to optimisation. The .LAZ version of the input text file here is 66 MB. More compact, but we can't query it directly - and we have to fake fields! 
Everything in the swath dataset can be stored, but we need to pretend uncertainties are RGB, so if person X comes along and doesn't read the metadata well, they get crazy colours, call us up and complain. Or we need to use .LAZ extra bits, and deal with awkward ways of describing things. It's also probably a terrible HDF, with no respect to CF compliance at all. That's to come :)
And now we add some 3D photogrammetry at about 80 points/m^2:
End of explanation
"""
from netCDF4 import Dataset

thedata = Dataset('lidar_test.hdf5', 'r')
thedata
"""
Explanation: Storage is a bit less efficient here.
ASCII cloud: 2.1 GB
.LAZ format with same data: 215 MB
HDF file containing LiDAR, trajectory, 3D photo cloud: 1.33 GB
So, there's probably a case for keeping super dense clouds in different files (along with all their ancillary data). Note that .LAZ is able to store all the data used for the super dense cloud here. But - how do we query it efficiently? Also, this is just a demonstration, so we push on! Now, let's look at the HDF file... and get stuff
End of explanation
"""
swath = thedata['swath_data']
swath

utm_xy = np.column_stack((swath['UTM_X'],swath['UTM_Y']))
idx = np.where((utm_xy[:,0] > -100)
               & (utm_xy[:,0] < 200)
               & (utm_xy[:,1] > -100)
               & (utm_xy[:,1] < 200) )

chunk_z = swath['Z'][idx]
chunk_z.size

max(chunk_z)

chunk_x = swath['UTM_X'][idx]
chunk_x.size

chunk_y = swath['UTM_Y'][idx]
chunk_y.size

chunk_uncert = swath['Z_UNCERT'][idx]
chunk_uncert.size

plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=2)
"""
Explanation: There are the two groups - swath_data and traj_data
End of explanation
"""
traj = thedata['traj_data']
traj
"""
Explanation: That gave us a small chunk of LIDAR points, without loading the whole point dataset. Neat! ...but being continually dissatisfied, we want more! Let's get just the corresponding trajectory:
End of explanation
"""
pos_y = traj['pos_y']
idx = np.where((pos_y[:] > -100.)
               & (pos_y[:] < 200.))

cpos_x = traj['pos_x'][idx]
cpos_y = traj['pos_y'][idx]
cpos_z = traj['pos_z'][idx]
"""
Explanation: Because there's essentially no X extent for flight data, only the Y coordinate of the flight data is needed...
End of explanation
"""
plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=3, cmap='gist_earth')
plt.scatter(cpos_x, cpos_y, c=cpos_z, lw=0, s=5, cmap='Oranges')
"""
Explanation: Now plot the flight line and LiDAR together
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D

#set up a plot
plt_az=310
plt_elev = 40.
plt_s = 3
cb_fmt = '%.1f'
cmap1 = plt.get_cmap('gist_earth', 10)

#make a plot
fig = plt.figure()
fig.set_size_inches(35/2.51, 20/2.51)
ax0 = fig.add_subplot(111, projection='3d')
a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2, c=np.ndarray.tolist((chunk_z-min(chunk_z))*2),\
                 cmap=cmap1,lw=0, vmin = -0.5, vmax = 5, s=plt_s)
ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\
            cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)
ax0.view_init(elev=plt_elev, azim=plt_az)
plt.tight_layout()
"""
Explanation: ...and prove that we are looking at a trajectory and some LiDAR
End of explanation
"""
#set up a plot
plt_az=310
plt_elev = 40.
plt_s = 3
cb_fmt = '%.1f'
cmap1 = plt.get_cmap('gist_earth', 30)

#make a plot
fig = plt.figure()
fig.set_size_inches(35/2.51, 20/2.51)
ax0 = fig.add_subplot(111, projection='3d')
a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2, c=np.ndarray.tolist(chunk_uncert),\
                 cmap=cmap1, lw=0, vmin = 0, vmax = 0.2, s=plt_s)
ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\
            cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)
ax0.view_init(elev=plt_elev, azim=plt_az)
plt.tight_layout()
plt.savefig('thefig.png')
"""
Explanation: plot coloured by point uncertainty
End of explanation
"""
photo = thedata['3d_photo']
photo

photo_xy = np.column_stack((photo['UTM_X'],photo['UTM_Y']))
idx_p = np.where((photo_xy[:,0] > 0)
                 & (photo_xy[:,0] < 100)
                 & (photo_xy[:,1] > 0)
                 & (photo_xy[:,1] < 100) )

plt.scatter(photo['UTM_X'][idx_p], photo['UTM_Y'][idx_p], c = photo['Z'][idx_p],\
            cmap='hot',vmin=-1, vmax=1, lw=0, s=plt_s)

p_x = photo['UTM_X'][idx_p]
p_y = photo['UTM_Y'][idx_p]
p_z = photo['Z'][idx_p]

plt_az=310
plt_elev = 70.
plt_s = 2

#make a plot
fig = plt.figure()
fig.set_size_inches(25/2.51, 10/2.51)
ax0 = fig.add_subplot(111, projection='3d')

#LiDAR points
ax0.scatter(chunk_x, chunk_y, chunk_z-50, \
            c=np.ndarray.tolist(chunk_z),\
            cmap=cmap1, vmin=-30, vmax=2, lw=0, s=plt_s)

#3D photogrammetry points
ax0.scatter(p_x, p_y, p_z, c=np.ndarray.tolist(p_z),\
            cmap='hot', vmin=-1, vmax=1, lw=0, s=5)

#aircraft trajectory
ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\
            cmap='hot', lw=0, vmin = 250, vmax = 265, s=10)

ax0.view_init(elev=plt_elev, azim=plt_az)
plt.tight_layout()
plt.savefig('with_photo.png')
"""
Explanation: now pull in the photogrammetry cloud This gets a little messy, since it appears we still need to grab X and Y dimensions - so still 20 x 10^6 x 2 points. Better than 20 x 10^6 x 6, but I wonder if I'm missing something about indexing.
End of explanation
"""
print('LiDAR points: {0}\nphotogrammetry points: {1}\ntrajectory points: {2}'.
format(len(chunk_x), len(p_x), len(cpos_x) )) """ Explanation: This is kind of a clunky plot - but you get the idea (I hope). LiDAR is in blues, the 100 x 100 photogrammetry patch in orange, trajectory in orange. Different data sources, different resolutions, extracted using pretty much the same set of queries. End of explanation """
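As a footnote to the windowed queries used throughout, the same bounding-box selection can be wrapped in a small helper so each data source is subset the same way. A NumPy-only sketch on synthetic points - the helper name and toy data are made up for illustration, not part of the dataset above:

```python
import numpy as np

def window_points(x, y, values, xmin, xmax, ymin, ymax):
    """Return the subset of `values` whose (x, y) fall inside a bounding box.

    This mirrors the np.where() pattern used above: build a boolean mask
    over the coordinate arrays, then index every per-point array with it.
    """
    mask = (x > xmin) & (x < xmax) & (y > ymin) & (y < ymax)
    return values[mask]

# Tiny synthetic swath: 5 points along a line.
x = np.array([-150., -50., 0., 150., 250.])
y = np.array([-150., -50., 0., 150., 250.])
z = np.array([1., 2., 3., 4., 5.])

chunk_z = window_points(x, y, z, -100, 200, -100, 200)
print(chunk_z)  # -> [2. 3. 4.]
```

The same helper could be pointed at the swath, trajectory, or photogrammetry arrays in turn, which is the whole appeal of keeping them in one queryable file.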
mdeff/ntds_2016
algorithms/01_ex_graph_science.ipynb
mit
# Load libraries # Math import numpy as np # Visualization %matplotlib notebook import matplotlib.pyplot as plt plt.rcParams.update({'figure.max_open_warning': 0}) from mpl_toolkits.axes_grid1 import make_axes_locatable from scipy import ndimage # Print output of LFR code import subprocess # Sparse matrix import scipy.sparse import scipy.sparse.linalg # 3D visualization import pylab from mpl_toolkits.mplot3d import Axes3D from matplotlib import pyplot # Import data import scipy.io # Import functions in lib folder import sys sys.path.insert(1, 'lib') # Import helper functions %load_ext autoreload %autoreload 2 from lib.utils import compute_ncut from lib.utils import reindex_W_with_classes from lib.utils import nldr_visualization from lib.utils import construct_knn_graph from lib.utils import compute_purity # Import distance function import sklearn.metrics.pairwise # Remove warnings import warnings warnings.filterwarnings("ignore") # Load 10 classes of 4,000 text documents mat = scipy.io.loadmat('datasets/20news_5classes_raw_data.mat') X = mat['X'] n = X.shape[0] d = X.shape[1] Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze() nc = len(np.unique(Cgt)) print('Number of data =',n) print('Data dimensionality =',d); print('Number of classes =',nc); """ Explanation: A Network Tour of Data Science &nbsp; &nbsp; &nbsp; Xavier Bresson, Winter 2016/17 Exercise 4 - Code 1 : Graph Science Construct Network of Text Documents End of explanation """ # Your code here """ Explanation: Question 1a: Compute the k-NN graph (k=10) with L2/Euclidean distance<br> Hint: You may use the function W=construct_knn_graph(X,k,'euclidean') End of explanation """ # Your code here """ Explanation: Question 1b: Plot the adjacency matrix of the graph. <br> Hint: Use function plt.spy(W, markersize=1) End of explanation """ # Your code here # Your code here """ Explanation: Question 1c: Reindex the adjacency matrix of the graph w.r.t. ground truth communities. 
Plot the reindexed adjacency matrix of the graph.<br> Hint: You may use the function [W_reindex,C_classes_reindex]=reindex_W_with_classes(W,C_classes).
End of explanation
"""
# Your code here
"""
Explanation: Question 1d: Perform graph clustering with NCut technique. What is the clustering accuracy of the NCut solution? What is the clustering accuracy of a random partition? Reindex the adjacency matrix of the graph w.r.t. NCut communities. Plot the reindexed adjacency matrix of the graph.<br> Hint: You may use function C_ncut, accuracy = compute_ncut(W,C_solution,n_clusters) that performs Ncut clustering.<br> Hint: You may use function accuracy = compute_purity(C_computed,C_solution,n_clusters) that returns the accuracy of a computed partition w.r.t. the ground truth partition. A random partition can be generated with the function np.random.randint.
End of explanation
"""
# Reload data matrix
X = mat['X']

# Your code here

# Compute the k-NN graph with Cosine distance
# Your code here

# Your code here

# Your code here
"""
Explanation: Question 2a: Compute the k-NN graph (k=10) with Cosine distance.<br> Answer questions 1b-1d for this graph.<br> Hint: You may use function W=construct_knn_graph(X,10,'cosine').
End of explanation
"""
# Your code here
"""
Explanation: Question 2b: Visualize the adjacency matrix with the non-linear reduction technique in 2D and 3D. <br> Hint: You may use function [X,Y,Z] = nldr_visualization(W).<br> Hint: You may use function plt.scatter(X,Y,c=Cncut) for 2D visualization and ax.scatter(X,Y,Z,c=Cncut) for 3D visualization.
End of explanation
"""
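For readers working through these questions without the course's lib/utils helpers, the core idea behind a construct_knn_graph-style function can be sketched in plain NumPy. This is illustrative only - it is not the course's implementation nor a graded solution, and the function name and toy data are made up:

```python
import numpy as np

def knn_graph(X, k):
    """Symmetrized k-NN adjacency matrix from Euclidean distances (a sketch)."""
    # Pairwise squared Euclidean distances between all rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-loops
    W = np.zeros_like(d2)
    rows = np.arange(X.shape[0])[:, None]
    # Connect each point to its k nearest neighbours.
    W[rows, np.argsort(d2, axis=1)[:, :k]] = 1.0
    # Symmetrize, since i being a neighbour of j does not imply the converse.
    return np.maximum(W, W.T)

# Two tight pairs of points: each point's nearest neighbour is its partner.
X_toy = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
W = knn_graph(X_toy, 1)
print(W)
```

A sparse implementation (as hinted by the scipy.sparse imports above) would be preferable at the scale of this dataset; the dense version just makes the construction easy to read.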
napsternxg/gensim
docs/notebooks/WMD_tutorial.ipynb
gpl-3.0
from time import time
start_nb = time()

# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')

sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_obama = sentence_obama.lower().split()
sentence_president = sentence_president.lower().split()
"""
Explanation: Finding similar documents with Word2Vec and WMD Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable uses WMD on restaurant reviews. Using this approach, they are able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. Part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want to use WmdSimilarity, but is also useful on its own merit. First, however, we go through the basics of what WMD is. Word Mover's Distance basics WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3]. WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the word's frequencies in the documents), noted as $d$ in the figure below.
The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2. <img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'> This method was introduced in the article "From Word Embeddings To Document Distances" by Matt Kusner et al. (link to PDF). It is inspired by the "Earth Mover's Distance", and employs a solver of the "transportation problem". In this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries. Note: If you use this software, please consider citing [1], [2] and [3]. Running this notebook You can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data. The notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes. Part 1: Computing the Word Mover's Distance To use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first. Let's take some sentences to compute the distance between. End of explanation """ # Import and download stopwords from NLTK. from nltk.corpus import stopwords from nltk import download download('stopwords') # Download stopwords list. # Remove stopwords. 
stop_words = stopwords.words('english') sentence_obama = [w for w in sentence_obama if w not in stop_words] sentence_president = [w for w in sentence_president if w not in stop_words] """ Explanation: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences. End of explanation """ start = time() import os from gensim.models import KeyedVectors if not os.path.exists('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz'): raise ValueError("SKIP: You need to download the google news model") model = KeyedVectors.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True) print('Cell took %.2f seconds to run.' % (time() - start)) """ Explanation: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory. End of explanation """ distance = model.wmdistance(sentence_obama, sentence_president) print 'distance = %.4f' % distance """ Explanation: So let's compute WMD using the wmdistance method. End of explanation """ sentence_orange = 'Oranges are my favorite fruit' sentence_orange = sentence_orange.lower().split() sentence_orange = [w for w in sentence_orange if w not in stop_words] distance = model.wmdistance(sentence_obama, sentence_orange) print 'distance = %.4f' % distance """ Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger. End of explanation """ # Normalizing word2vec vectors. start = time() model.init_sims(replace=True) # Normalizes the vectors in the word2vec class. distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal. print 'Cell took %.2f seconds to run.' 
%(time() - start)
"""
Explanation: Normalizing word2vec vectors When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you. Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors. Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors. Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't make any difference in this case.
End of explanation
"""
# Pre-processing a document.
from nltk import word_tokenize
download('punkt')  # Download data for tokenizer.

def preprocess(doc):
    doc = doc.lower()  # Lower the text.
    doc = word_tokenize(doc)  # Split into words.
    doc = [w for w in doc if not w in stop_words]  # Remove stopwords.
    doc = [w for w in doc if w.isalpha()]  # Remove numbers and punctuation.
    return doc

start = time()

import json
from smart_open import smart_open

# Business IDs of the restaurants.
ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',
       'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']

w2v_corpus = []  # Documents to train word2vec on (all 6 restaurants).
wmd_corpus = []  # Documents to run queries against (only one restaurant).
documents = []  # wmd_corpus, with no pre-processing (so we can see the original documents).
with smart_open('/data/yelp_academic_dataset_review.json', 'rb') as data_file:
    for line in data_file:
        json_line = json.loads(line)

        if json_line['business_id'] not in ids:
            # Not one of the 6 restaurants.
            continue

        # Pre-process document.
        text = json_line['text']  # Extract text from JSON object.
        text = preprocess(text)

        # Add to corpus for training Word2Vec.
        w2v_corpus.append(text)

        if json_line['business_id'] == ids[0]:
            # Add to corpus for similarity queries.
            wmd_corpus.append(text)
            documents.append(json_line['text'])

print 'Cell took %.2f seconds to run.' %(time() - start)
"""
Explanation: Part 2: Similarity queries using WmdSimilarity You can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial. Important note: WMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance. Yelp data Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi. To get the Yelp data, you need to register by name and email address. The data is 775 MB. This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:
Earl of Sandwich.
Wicked Spoon.
Serendipity 3.
Bacchanal Buffet.
The Buffet.
The restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they are all on Las Vegas Boulevard.
The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents. Below, a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline

# Document lengths.
lens = [len(doc) for doc in wmd_corpus]

# Plot.
plt.rc('figure', figsize=(8,6))
plt.rc('font', size=14)
plt.rc('lines', linewidth=2)
plt.rc('axes', color_cycle=('#377eb8','#e41a1c','#4daf4a',
                            '#984ea3','#ff7f00','#ffff33'))

# Histogram.
plt.hist(lens, bins=20)
plt.hold(True)

# Average length.
avg_len = sum(lens) / float(len(lens))
plt.axvline(avg_len, color='#e41a1c')
plt.hold(False)
plt.title('Histogram of document lengths.')
plt.xlabel('Length')
plt.text(100, 800, 'mean = %.2f' % avg_len)
plt.show()
"""
Explanation: Below is a plot with a histogram of document lengths, including the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
End of explanation
"""
# Train Word2Vec on all the restaurants.
from gensim.models import Word2Vec  # Word2Vec was not imported above, so import it here.
model = Word2Vec(w2v_corpus, workers=3, size=100)

# Initialize WmdSimilarity.
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(wmd_corpus, model, num_best=10)
print 'Cell took %.2f seconds to run.' %(time() - start)
"""
Explanation: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indices and similarities of documents in the corpus, sorted by similarity. Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus. The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.
End of explanation
"""
# Print the query and the retrieved documents, together with their similarities.
print 'Query:'
print sent
for i in range(num_best):
    print
    print 'sim = %.4f' % sims[i][1]
    print documents[sims[i][0]]
"""
Explanation: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
End of explanation
"""
start = time()

sent = 'I felt that the prices were extremely reasonable for the Strip'
query = preprocess(sent)

sims = instance[query]  # A query is simply a "look-up" in the similarity class.

print 'Query:'
print sent
for i in range(num_best):
    print
    print 'sim = %.4f' % sims[i][1]
    print documents[sims[i][0]]

print '\nCell took %.2f seconds to run.' %(time() - start)
"""
Explanation: Let's try a different query, also taken directly from one of the reviews in the corpus.
End of explanation
"""
print 'Notebook took %.2f seconds to run.' %(time() - start_nb)
"""
Explanation: This time around, the results are more straightforward; the retrieved documents basically contain the same words as the query.
WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can override this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.
End of explanation
"""
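The effect of the normalization discussed above can be checked directly: once vectors are unit-normalized, squared Euclidean distance becomes a simple function of cosine similarity, so the two notions of "close" agree. A small NumPy check with made-up vectors (not the gensim API):

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([10.0, 0.0])

# Unit-normalize, as init_sims(replace=True) does to each word vector.
u_n = u / np.linalg.norm(u)
v_n = v / np.linalg.norm(v)

cos_sim = float(u_n @ v_n)
euclid2 = float(((u_n - v_n) ** 2).sum())

# For unit vectors: ||u - v||^2 == 2 * (1 - cos(u, v))
print(abs(euclid2 - 2 * (1 - cos_sim)) < 1e-12)  # -> True
```

This is why normalizing first makes WMD's Euclidean word distances behave consistently with the angle-based intuition from cosine similarity.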
goddoe/CADL
session-2/session-2.ipynb
apache-2.0
# First check the Python version import sys if sys.version_info < (3,4): print('You are running an older version of Python!\n\n' \ 'You should consider updating to Python 3.4.0 or ' \ 'higher as the libraries built for this course ' \ 'have only been tested in Python 3.4 and higher.\n') print('Try installing the Python 3.5 version of anaconda ' 'and then restart `jupyter notebook`:\n' \ 'https://www.continuum.io/downloads\n\n') # Now get necessary libraries try: import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize from skimage import data from scipy.misc import imresize except ImportError: print('You are missing some packages! ' \ 'We will try installing them before continuing!') !pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0" import os import numpy as np import matplotlib.pyplot as plt from skimage.transform import resize from skimage import data from scipy.misc import imresize print('Done!') # Import Tensorflow try: import tensorflow as tf except ImportError: print("You do not have tensorflow installed!") print("Follow the instructions on the following link") print("to install tensorflow before continuing:") print("") print("https://github.com/pkmital/CADL#installation-preliminaries") # This cell includes the provided libraries from the zip file # and a library for displaying images from ipython, which # we will use to display the gif try: from libs import utils, gif import IPython.display as ipyd except ImportError: print("Make sure you have started notebook in the same directory" + " as the provided zip file which includes the 'libs' folder" + " and the file 'utils.py' inside of it. 
You will NOT be able" " to complete this assignment unless you restart jupyter" " notebook inside the directory created by extracting" " the zip file or cloning the github repo.") # We'll tell matplotlib to inline any drawn figures like so: %matplotlib inline plt.style.use('ggplot') # Bit of formatting because I don't like the default inline code style: from IPython.core.display import HTML HTML("""<style> .rendered_html code { padding: 2px 4px; color: #c7254e; background-color: #f9f2f4; border-radius: 4px; } </style>""") """ Explanation: Session 2 - Training a Network w/ Tensorflow <p class="lead"> Assignment: Teach a Deep Neural Network to Paint </p> <p class="lead"> Parag K. Mital<br /> <a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br /> <a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br /> <a href="https://twitter.com/hashtag/CADL">#CADL</a> </p> This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>. Learning Goals Learn how to create a Neural Network Learn to use a neural network to paint an image Apply creative thinking to the inputs, outputs, and definition of a network Outline <!-- MarkdownTOC autolink=true autoanchor=true bracket=round --> Assignment Synopsis Part One - Fully Connected Network Instructions Code Variable Scopes Part Two - Image Painting Network Instructions Preparing the Data Cost Function Explore A Note on Crossvalidation Part Three - Learning More than One Image Instructions Code Part Four - Open Exploration (Extra Credit) Assignment Submission <!-- /MarkdownTOC --> This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")! 
End of explanation
"""
xs = np.linspace(-6, 6, 100)
plt.plot(xs, np.maximum(xs, 0), label='relu')
plt.plot(xs, 1 / (1 + np.exp(-xs)), label='sigmoid')
plt.plot(xs, np.tanh(xs), label='tanh')
plt.xlabel('Input')
plt.xlim([-6, 6])
plt.ylabel('Output')
plt.ylim([-1.5, 1.5])
plt.title('Common Activation Functions/Nonlinearities')
plt.legend(loc='lower right')
"""
Explanation: <a name="assignment-synopsis"></a> Assignment Synopsis In this assignment, we're going to create our first neural network capable of taking any two continuous values as inputs. Those two values will go through a series of multiplications, additions, and nonlinearities, coming out of the network as 3 outputs. Remember from the last homework, we used convolution to filter an image so that the representations in the image were accentuated. We're not going to be using convolution w/ Neural Networks until the next session, but we're effectively doing the same thing here: using multiplications to accentuate the representations in our data, in order to minimize whatever our cost function is. To find out what those multiplications need to be, we're going to use Gradient Descent and Backpropagation, which will take our cost, and find the appropriate updates to all the parameters in our network to best optimize the cost. In the next session, we'll explore much bigger networks and convolution. This "toy" network is really to help us get up and running with neural networks, and aid our exploration of the different components that make up a neural network. You will be expected to explore manipulations of the neural networks in this notebook as much as possible to help aid your understanding of how they affect the final result. We're going to build our first neural network to understand what color "to paint" given a location in an image, or the row, col of the image. So in goes a row/col, and out goes a R/G/B. In the next lesson, we'll learn that what this network is really doing is performing regression.
For now, we'll focus on the creative applications of such a network to help us get a better understanding of the different components that make up the neural network. You'll be asked to explore many of the different components of a neural network, including changing the inputs/outputs (i.e. the dataset), the number of layers, their activation functions, the cost functions, learning rate, and batch size. You'll also explore a modification to this same network which takes a 3rd input: an index for an image. This will let us try to learn multiple images at once, though with limited success. We'll now dive right into creating deep neural networks, and I'm going to show you the math along the way. Don't worry if a lot of it doesn't make sense, and it really takes a bit of practice before it starts to come together. <a name="part-one---fully-connected-network"></a> Part One - Fully Connected Network <a name="instructions"></a> Instructions Create the operations necessary for connecting an input to a network, defined by a tf.Placeholder, to a series of fully connected, or linear, layers, using the formula: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$ where $\textbf{H}$ is an output layer representing the "hidden" activations of a network, $\phi$ represents some nonlinearity, $\textbf{X}$ represents an input to that layer, $\textbf{W}$ is that layer's weight matrix, and $\textbf{b}$ is that layer's bias. If you're thinking, what is going on? Where did all that math come from? Don't be afraid of it. Once you learn how to "speak" the symbolic representation of the equation, it starts to get easier. And once we put it into practice with some code, it should start to feel like there is some association with what is written in the equation, and what we've written in code. Practice trying to say the equation in a meaningful way: "The output of a hidden layer is equal to some input multiplied by another matrix, adding some bias, and applying a non-linearity". 
Or perhaps: "The hidden layer is equal to a nonlinearity applied to an input multiplied by a matrix and adding some bias". Explore your own interpretations of the equation, or ways of describing it, and it starts to become much, much easier to apply the equation. The first thing that happens in this equation is the input matrix $\textbf{X}$ is multiplied by another matrix, $\textbf{W}$. This is the most complicated part of the equation. It's performing matrix multiplication, as we've seen from last session, and is effectively scaling and rotating our input. The bias $\textbf{b}$ allows for a global shift in the resulting values. Finally, the nonlinearity of $\phi$ allows the input space to be nonlinearly warped, allowing it to express a lot more interesting distributions of data. Have a look below at some common nonlinearities. If you're unfamiliar with looking at graphs like this, it is common to read the horizontal axis as X, as the input, and the vertical axis as Y, as the output.
End of explanation
"""
# Create a placeholder with None x 2 dimensions of dtype tf.float32, and name it "X":
X = ...
"""
Explanation: Remember, having a series of linear followed by nonlinear operations is what makes neural networks expressive. By stacking a lot of "linear" + "nonlinear" operations in a series, we can create a deep neural network! Have a look at the output ranges of the above nonlinearities when considering which nonlinearity seems most appropriate. For instance, the relu is always above 0, but does not saturate at any value above 0, meaning it can be anything above 0. That's unlike the sigmoid which does saturate at both 0 and 1, meaning its values for a single output neuron will always be between 0 and 1. Similarly, the tanh saturates at -1 and 1. Choosing between these is often a matter of trial and error. Though you can gain some insight depending on your normalization scheme.
For instance, if your output is expected to be in the range of 0 to 1, you may not want to use a tanh function, which ranges from -1 to 1, but likely would want to use a sigmoid. Keep the ranges of these activation functions in mind when designing your network, especially the final output layer of your network. <a name="code"></a> Code In this section, we're going to work out how to represent a fully connected neural network with code. First, create a 2D tf.placeholder called $\textbf{X}$ with None for the batch size and 2 features. Make its dtype tf.float32. Recall that we use the dimension of None for the batch size dimension to say that this dimension can be any number. Here is the docstring for the tf.placeholder function, have a look at what args it takes: Help on function placeholder in module tensorflow.python.ops.array_ops: python placeholder(dtype, shape=None, name=None) Inserts a placeholder for a tensor that will be always fed. **Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`, `Tensor.eval()`, or `Operation.run()`. For example: ```python x = tf.placeholder(tf.float32, shape=(1024, 1024)) y = tf.matmul(x, x) with tf.Session() as sess: print(sess.run(y)) # ERROR: will fail because x was not fed. rand_array = np.random.rand(1024, 1024) print(sess.run(y, feed_dict={x: rand_array})) # Will succeed. ``` Args: dtype: The type of elements in the tensor to be fed. shape: The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape. name: A name for the operation (optional). Returns: A `Tensor` that may be used as a handle for feeding a value, but not evaluated directly. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ W = tf.get_variable(... h = tf.matmul(... 
"""
Explanation: Now multiply the tensor using a new variable, $\textbf{W}$, which has 2 rows and 20 columns, so that when it is left multiplied by $\textbf{X}$, the output of the multiplication is None x 20, giving you 20 output neurons. Recall that the tf.matmul function takes two arguments, the left hand ($\textbf{X}$) and right hand side ($\textbf{W}$) of a matrix multiplication.
To create $\textbf{W}$, you will use tf.get_variable to create a matrix which is 2 x 20 in dimension. Look up the docstrings of functions tf.get_variable and tf.random_normal_initializer to get familiar with these functions. There are many options we will ignore for now. Just be sure to set the name, shape (this is the one that has to be [2, 20]), dtype (i.e. tf.float32), and initializer (the tf.random_normal_initializer you should create) when creating your $\textbf{W}$ variable with tf.get_variable(...).
For the random normal initializer, often the mean is set to 0, and the standard deviation is set based on the number of neurons. But that really depends on the input and outputs of your network, how you've "normalized" your dataset, what your nonlinearity/activation function is, and what your expected range of inputs/outputs are. Don't worry about the values for the initializer for now, as this part will take a bit more experimentation to understand better!
This part is to encourage you to learn how to look up the documentation on Tensorflow, ideally using tf.get_variable? in the notebook. If you are really stuck, just scroll down a bit and I've shown you how to use it.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
b = tf.get_variable(...
h = tf.nn.bias_add(...
"""
Explanation: And add to this result another new variable, $\textbf{b}$, which has [20] dimensions. These values will be added to every output neuron after the multiplication above.
Instead of the tf.random_normal_initializer that you used for creating $\textbf{W}$, now use the tf.constant_initializer. Often for bias, you'll set the constant bias initialization to 0 or 1. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ h = ... """ Explanation: So far we have done: $$\textbf{X}\textbf{W} + \textbf{b}$$ Finally, apply a nonlinear activation to this output, such as tf.nn.relu, to complete the equation: $$\textbf{H} = \phi(\textbf{X}\textbf{W} + \textbf{b})$$ <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ h, W = utils.linear( x=X, n_output=20, name='linear', activation=tf.nn.relu) """ Explanation: Now that we've done all of this work, let's stick it inside a function. I've already done this for you and placed it inside the utils module under the function name linear. We've already imported the utils module so we can call it like so, utils.linear(...). The docstring is copied below, and the code itself. Note that this function is slightly different to the one in the lecture. It does not require you to specify n_input, and the input scope is called name. It also has a few more extras in there including automatically converting a 4-d input tensor to a 2-d tensor so that you can fully connect the layer with a matrix multiply (don't worry about what this means if it doesn't make sense!). python utils.linear?? ```python def linear(x, n_output, name=None, activation=None, reuse=None): """Fully connected layer Parameters ---------- x : tf.Tensor Input tensor to connect n_output : int Number of output neurons name : None, optional Scope to apply Returns ------- op : tf.Tensor Output of fully connected layer. 
"""
    if len(x.get_shape()) != 2:
        x = flatten(x, reuse=reuse)

    n_input = x.get_shape().as_list()[1]

    with tf.variable_scope(name or "fc", reuse=reuse):
        W = tf.get_variable(
            name='W',
            shape=[n_input, n_output],
            dtype=tf.float32,
            initializer=tf.contrib.layers.xavier_initializer())

        b = tf.get_variable(
            name='b',
            shape=[n_output],
            dtype=tf.float32,
            initializer=tf.constant_initializer(0.0))

        h = tf.nn.bias_add(
            name='h',
            value=tf.matmul(x, W),
            bias=b)

        if activation:
            h = activation(h)

        return h, W
```
<a name="variable-scopes"></a>
Variable Scopes
Note that since we are using variable_scope and explicitly telling the scope which name we would like, if there is already a variable created with the same name, then Tensorflow will raise an exception! If this happens, you should consider one of three possible solutions:
If this happens while you are interactively editing a graph, you may need to reset the current graph:
python
tf.reset_default_graph()
You should really only have to use this if you are in an interactive console! If you are creating Python scripts to run via command line, you should really be using solution 3 listed below, and be explicit with your graph contexts!
If this happens and you were not expecting any name conflicts, then perhaps you had a typo and created another layer with the same name! That's a good reason to keep useful names for everything in your graph!
More likely, you should be using context managers when creating your graphs and running sessions. This works like so:
python
g = tf.Graph()
with tf.Session(graph=g) as sess:
    Y_pred, W = linear(X, 3, activation=tf.nn.relu)
or:
python
g = tf.Graph()
with tf.Session(graph=g) as sess, g.as_default():
    Y_pred, W = linear(X, 3, activation=tf.nn.relu)
You can now write the same process as the above steps by simply calling:
End of explanation
"""
# First load an image
img = ...

# Be careful with the size of your image.
# Try a fairly small image to begin with,
# then come back here and try larger sizes.
img = imresize(img, (100, 100))
plt.figure(figsize=(5, 5))
plt.imshow(img)

# Make sure you save this image as "reference.png"
# and include it in your zipped submission file
# so we can tell what image you are trying to paint!
plt.imsave(fname='reference.png', arr=img)
"""
Explanation: <a name="part-two---image-painting-network"></a>
Part Two - Image Painting Network
<a name="instructions-1"></a>
Instructions
Follow along the steps below, first setting up input and output data of the network, $\textbf{X}$ and $\textbf{Y}$. Then work through building the neural network which will try to compress the information in $\textbf{X}$ through a series of linear and non-linear functions so that whatever it is given as input, it minimizes the error between its prediction, $\hat{\textbf{Y}}$, and the true output $\textbf{Y}$ through its training process. You'll also create an animated GIF of the training which you'll need to submit for the homework!
Through this, we'll explore our first creative application: painting an image. This network is just meant to demonstrate how easily networks can be scaled to more complicated tasks without much modification. It is also meant to get you thinking about neural networks as building blocks that can be reconfigured, replaced, reorganized, and get you thinking about how the inputs and outputs can be anything you can imagine.
<a name="preparing-the-data"></a>
Preparing the Data
We'll follow an example that Andrej Karpathy has done in his online demonstration of "image painting". What we're going to do is teach the network to go from the location on an image frame to a particular color. So given any position in an image, the network will need to learn what color to paint. Let's first get an image that we'll try to teach a neural network to paint.
<h3><font color='red'>TODO!
COMPLETE THIS SECTION!</font></h3> End of explanation """ def split_image(img): # We'll first collect all the positions in the image in our list, xs xs = [] # And the corresponding colors for each of these positions ys = [] # Now loop over the image for row_i in range(img.shape[0]): for col_i in range(img.shape[1]): # And store the inputs xs.append([row_i, col_i]) # And outputs that the network needs to learn to predict ys.append(img[row_i, col_i]) # we'll convert our lists to arrays xs = np.array(xs) ys = np.array(ys) return xs, ys """ Explanation: In the lecture, I showed how to aggregate the pixel locations and their colors using a loop over every pixel position. I put that code into a function split_image below. Feel free to experiment with other features for xs or ys. End of explanation """ xs, ys = split_image(img) # and print the shapes xs.shape, ys.shape """ Explanation: Let's use this function to create the inputs (xs) and outputs (ys) to our network as the pixel locations (xs) and their colors (ys): End of explanation """ # Normalize the input (xs) using its mean and standard deviation xs = ... # Just to make sure you have normalized it correctly: print(np.min(xs), np.max(xs)) assert(np.min(xs) > -3.0 and np.max(xs) < 3.0) """ Explanation: Also remember, we should normalize our input values! <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ print(np.min(ys), np.max(ys)) """ Explanation: Similarly for the output: End of explanation """ ys = ys / 255.0 print(np.min(ys), np.max(ys)) """ Explanation: We'll normalize the output using a simpler normalization method, since we know the values range from 0-255: End of explanation """ plt.imshow(ys.reshape(img.shape)) """ Explanation: Scaling the image values like this has the advantage that it is still interpretable as an image, unlike if we have negative values. What we're going to do is use regression to predict the value of a pixel given its (row, col) position. 
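As an aside on the input standardization asked for above, here is a minimal NumPy sketch of that scheme, subtracting each feature's mean and dividing by its standard deviation. The positions for a hypothetical 2 x 2 image are made up, but the formula is the same one the train function further down in this notebook uses:

```python
import numpy as np

# Hypothetical (row, col) positions for a tiny 2 x 2 image,
# just like split_image would produce.
xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)

# Standardize each input feature: zero mean, unit standard deviation.
xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)

# Every value now lies well within (-3, 3), satisfying the assert above.
print(xs.min(), xs.max())  # -1.0 1.0
```

After this step each column of xs has mean 0 and standard deviation 1, which keeps the inputs in a range the network's nonlinearities handle well.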
So the input to our network is X = (row, col) value. And the output of the network is Y = (r, g, b). We can get our original image back by reshaping the colors back into the original image shape. This works because the ys are still in order: End of explanation """ # Let's reset the graph: tf.reset_default_graph() # Create a placeholder of None x 2 dimensions and dtype tf.float32 # This will be the input to the network which takes the row/col X = tf.placeholder(... # Create the placeholder, Y, with 3 output dimensions instead of 2. # This will be the output of the network, the R, G, B values. Y = tf.placeholder(... """ Explanation: But when we give inputs of (row, col) to our network, it won't know what order they are, because we will randomize them. So it will have to learn what color value should be output for any given (row, col). Create 2 placeholders of dtype tf.float32: one for the input of the network, a None x 2 dimension placeholder called $\textbf{X}$, and another for the true output of the network, a None x 3 dimension placeholder called $\textbf{Y}$. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # We'll create 6 hidden layers. Let's create a variable # to say how many neurons we want for each of the layers # (try 20 to begin with, then explore other values) n_neurons = ... # Create the first linear + nonlinear layer which will # take the 2 input neurons and fully connects it to 20 neurons. # Use the `utils.linear` function to do this just like before, # but also remember to give names for each layer, such as # "1", "2", ... "5", or "layer1", "layer2", ... "layer6". h1, W1 = ... # Create another one: h2, W2 = ... # and four more (or replace all of this with a loop if you can!): h3, W3 = ... h4, W4 = ... h5, W5 = ... h6, W6 = ... 
# Now, make one last layer to make sure your network has 3 outputs:
Y_pred, W7 = utils.linear(h6, 3, activation=None, name='pred')

assert(X.get_shape().as_list() == [None, 2])
assert(Y_pred.get_shape().as_list() == [None, 3])
assert(Y.get_shape().as_list() == [None, 3])
"""
Explanation: Now create a deep neural network that takes your network input $\textbf{X}$ of 2 neurons, multiplies it by a linear and non-linear transformation which makes its shape [None, 20], meaning it will have 20 output neurons. Then repeat the same process again to give you 20 neurons again, and then again and again until you've done 6 layers of 20 neurons. Then finally one last layer which will output 3 neurons, your predicted output, which I've been denoting mathematically as $\hat{\textbf{Y}}$, for a total of 6 hidden layers, or 8 layers total including the input and output layers.
Mathematically, we'll be creating a deep neural network that looks just like the previous fully connected layer we've created, but with a few more connections. So recall the first layer's connection is:
\begin{align}
\textbf{H}_1=\phi(\textbf{X}\textbf{W}_1 + \textbf{b}_1) \\
\end{align}
So the next layer will take that output, and connect it up again:
\begin{align}
\textbf{H}_2=\phi(\textbf{H}_1\textbf{W}_2 + \textbf{b}_2) \\
\end{align}
And same for every other layer:
\begin{align}
\textbf{H}_3=\phi(\textbf{H}_2\textbf{W}_3 + \textbf{b}_3) \\
\textbf{H}_4=\phi(\textbf{H}_3\textbf{W}_4 + \textbf{b}_4) \\
\textbf{H}_5=\phi(\textbf{H}_4\textbf{W}_5 + \textbf{b}_5) \\
\textbf{H}_6=\phi(\textbf{H}_5\textbf{W}_6 + \textbf{b}_6) \\
\end{align}
Including the very last layer, which will be the prediction of the network (note that in the code above we pass activation=None for this last layer, so its $\phi$ is just the identity):
\begin{align}
\hat{\textbf{Y}}=\phi(\textbf{H}_6\textbf{W}_7 + \textbf{b}_7)
\end{align}
Remember if you run into issues with variable scopes/names, that you cannot recreate a variable with the same name!
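The chain of equations above can be sketched as plain arithmetic. This is a hedged NumPy sketch, not the Tensorflow graph you are asked to build: the random weights are stand-ins for the initializers, and here $\phi$ (relu) is applied to every layer, including the last:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.RandomState(0)
X = rng.randn(5, 2)  # a toy batch of 5 (row, col) inputs

# Hypothetical layer sizes: 2 inputs, six hidden layers of 20, 3 outputs.
sizes = [2, 20, 20, 20, 20, 20, 20, 3]
H = X
for n_input, n_output in zip(sizes[:-1], sizes[1:]):
    W = rng.randn(n_input, n_output) * 0.1  # stand-in for the initializer
    b = np.zeros(n_output)                  # bias initialized to 0
    H = relu(H @ W + b)                     # H_i = phi(H_{i-1} W_i + b_i)

print(H.shape)  # (5, 3)
```

Each pass through the loop is one application of the formula $\textbf{H}_i=\phi(\textbf{H}_{i-1}\textbf{W}_i + \textbf{b}_i)$, and the final H has one (r, g, b) prediction per input position.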
Revisit the section on <a href='#Variable-Scopes'>Variable Scopes</a> if you get stuck with name issues.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
error = np.linspace(0.0, 128.0**2, 100)
loss = error**2.0
plt.plot(error, loss)
plt.xlabel('error')
plt.ylabel('loss')
"""
Explanation: <a name="cost-function"></a>
Cost Function
Now we're going to work on creating a cost function. The cost should represent how much error there is in the network, and provide the optimizer this value to help it train the network's parameters using gradient descent and backpropagation.
Let's say our error is E, then the cost will be:
$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \textbf{E}_b $$
where the error is measured as, e.g.:
$$\textbf{E} = \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$
Don't worry if this scares you. This is mathematically expressing the same concept as: "the cost of an actual $\textbf{Y}$, and a predicted $\hat{\textbf{Y}}$ is equal to the mean across batches, of which there are $\text{B}$ total batches, of the sum of distances across $\text{C}$ color channels of every predicted output and true output".
Basically, we're trying to see on average, or at least within a single minibatch's average, how wrong was our prediction? We create a measure of error for every output feature by squaring the difference between the predicted output and the actual output it should have, i.e. the actual color value it should have output for a given input pixel position. By squaring it, we penalize large distances, but not so much small distances.
Consider how the square function (i.e., $f(x) = x^2$) changes for a given error. If our color values range between 0-255, then a typical amount of error would be between $0$ and $128^2$.
For example if my prediction was (120, 50, 167), and the color should have been (0, 100, 120), then the error for the Red channel is (120 - 0) or 120. And the Green channel is (50 - 100) or -50, and for the Blue channel, (167 - 120) = 47. When I square this result, I get: (120)^2, (-50)^2, and (47)^2. I then add all of these and that is my error, $\textbf{E}$, for this one observation. But I will have a few observations per minibatch. So I add all the error in my batch together, then divide by the number of observations in the batch, essentially finding the mean error of my batch. Let's try to see what the square in our measure of error is doing graphically. End of explanation """ error = np.linspace(0.0, 1.0, 100) plt.plot(error, error**2, label='l_2 loss') plt.plot(error, np.abs(error), label='l_1 loss') plt.xlabel('error') plt.ylabel('loss') plt.legend(loc='lower right') """ Explanation: This is known as the $l_2$ (pronounced el-two) loss. It doesn't penalize small errors as much as it does large errors. This is easier to see when we compare it with another common loss, the $l_1$ (el-one) loss. It is linear in error, by taking the absolute value of the error. We'll compare the $l_1$ loss with normalized values from $0$ to $1$. So instead of having $0$ to $255$ for our RGB values, we'd have $0$ to $1$, simply by dividing our color values by $255.0$. End of explanation """ # first compute the error, the inner part of the summation. # This should be the l1-norm or l2-norm of the distance # between each color channel. error = ... assert(error.get_shape().as_list() == [None, 3]) """ Explanation: So unlike the $l_2$ loss, the $l_1$ loss is really quickly upset if there is any error at all: as soon as error moves away from $0.0$, to $0.1$, the $l_1$ loss is $0.1$. But the $l_2$ loss is $0.1^2 = 0.01$. 
Having a stronger penalty on smaller errors often leads to what the literature calls "sparse" solutions, since it favors activations that try to explain as much of the data as possible, rather than a lot of activations that do a sort of good job, but when put together, do a great job of explaining the data. Don't worry about what this means if you are more unfamiliar with Machine Learning. There is a lot of literature surrounding each of these loss functions that we won't have time to get into, but look them up if they interest you.
During the lecture, we've seen how to create a cost function using Tensorflow. To create a $l_2$ loss function, you can for instance use tensorflow's tf.squared_difference or for an $l_1$ loss function, tf.abs. You'll need to refer to the Y and Y_pred variables only, and your resulting cost should be a single value. Try creating the $l_1$ loss to begin with, and come back here after you have trained your network, to compare the performance with an $l_2$ loss.
The equation for computing cost I mentioned above is more succinctly written as, for the $l_2$ norm:
$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} (\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})^2$$
For the $l_1$ norm, we'd have:
$$cost(\textbf{Y}, \hat{\textbf{Y}}) = \frac{1}{\text{B}} \displaystyle\sum\limits_{b=0}^{\text{B}} \displaystyle\sum\limits_{c=0}^{\text{C}} \text{abs}(\textbf{Y}_{c} - \hat{\textbf{Y}}_{c})$$
Remember, to understand this equation, try to say it out loud: the $cost$ given two variables, $\textbf{Y}$, the actual output we want the network to have, and $\hat{\textbf{Y}}$ the predicted output from the network, is equal to the mean across $\text{B}$ batches of the sum, over $\text{C}$ color channels, of the distance between the actual and predicted outputs. If you're still unsure, refer to the lecture where I've computed this, or scroll down a bit to where I've included the answer.
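To ground the summations, here is the same cost computed in plain NumPy, reusing the (120, 50, 167) vs. (0, 100, 120) worked example from above as a hypothetical batch of one observation; in the notebook itself you would express this with the tensorflow ops mentioned earlier:

```python
import numpy as np

# One observation: the predicted color vs. the color it should have been.
Y      = np.array([[  0.0, 100.0, 120.0]])  # actual (r, g, b)
Y_pred = np.array([[120.0,  50.0, 167.0]])  # predicted (r, g, b)

# l2: square the per-channel differences, sum across the C channels,
# then take the mean across the B observations in the batch.
error = (Y - Y_pred) ** 2           # [[14400., 2500., 2209.]]
sum_error = np.sum(error, axis=1)   # [19109.]
cost = np.mean(sum_error)           # 19109.0

# l1 variant: absolute differences instead of squared ones.
l1_cost = np.mean(np.sum(np.abs(Y - Y_pred), axis=1))  # 217.0

print(cost, l1_cost)  # 19109.0 217.0
```

Swapping the square for an absolute value is the only difference between the two losses.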
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # Now sum the error for each feature in Y. # If Y is [Batch, Features], the sum should be [Batch]: sum_error = ... assert(sum_error.get_shape().as_list() == [None]) """ Explanation: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # Finally, compute the cost, as the mean error of the batch. # This should be a single value. cost = ... assert(cost.get_shape().as_list() == []) """ Explanation: <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ # Refer to the help for the function optimizer = tf.train....minimize(cost) # Create parameters for the number of iterations to run for (< 100) n_iterations = ... # And how much data is in each minibatch (< 500) batch_size = ... # Then create a session sess = tf.Session() """ Explanation: We now need an optimizer which will take our cost and a learning_rate, which says how far along the gradient to move. This optimizer calculates all the gradients in our network with respect to the cost variable and updates all of the weights in our network using backpropagation. We'll then create mini-batches of our training data and run the optimizer using a session. <h3><font color='red'>TODO! 
COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# Initialize all your variables and run the operation with your session
sess.run(tf.global_variables_initializer())

# Optimize over a few iterations, each time following the gradient
# a little at a time
imgs = []
costs = []
gif_step = n_iterations // 10
step_i = 0

for it_i in range(n_iterations):

    # Get a random sampling of the dataset
    idxs = np.random.permutation(range(len(xs)))

    # The number of batches we have to iterate over
    n_batches = len(idxs) // batch_size
    training_cost = 0

    # Now iterate over our stochastic minibatches:
    for batch_i in range(n_batches):

        # Get just minibatch amount of data
        idxs_i = idxs[batch_i * batch_size: (batch_i + 1) * batch_size]

        # And optimize, also returning the cost so we can monitor
        # how our optimization is doing.  We sum the cost of every
        # minibatch so we can average it for this iteration.
        training_cost += sess.run(
            [cost, optimizer],
            feed_dict={X: xs[idxs_i], Y: ys[idxs_i]})[0]

    # Also, every `gif_step` iterations, we'll draw the prediction of our
    # input xs, which should try to recreate our image!
    if (it_i + 1) % gif_step == 0:
        costs.append(training_cost / n_batches)
        ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
        img = np.clip(ys_pred.reshape(img.shape), 0, 1)
        imgs.append(img)
        # Plot the cost over time
        fig, ax = plt.subplots(1, 2)
        ax[0].plot(costs)
        ax[0].set_xlabel('Iteration')
        ax[0].set_ylabel('Cost')
        ax[1].imshow(img)
        fig.suptitle('Iteration {}'.format(it_i))
        plt.show()

# Save the images as a GIF
_ = gif.build_gif(imgs, saveto='single.gif', show_gif=False)
"""
Explanation: We'll now train our network! The code below should do this for you if you've set up everything else properly. Please read through this and make sure you understand each step! Note that this can take a VERY LONG time depending on the size of your image (make it < 100 x 100 pixels), the number of neurons per layer (e.g. < 30), the number of layers (e.g. < 8), and number of iterations (< 1000).
Welcome to Deep Learning :)
End of explanation
"""
ipyd.Image(url='single.gif?{}'.format(np.random.rand()),
           height=500, width=500)
"""
Explanation: Let's now display the GIF we've just created:
End of explanation
"""
def build_model(xs, ys, n_neurons, n_layers, activation_fn,
                final_activation_fn, cost_type):

    xs = np.asarray(xs)
    ys = np.asarray(ys)

    if xs.ndim != 2:
        raise ValueError(
            'xs should be a n_observations x n_features, ' +
            'or a 2-dimensional array.')
    if ys.ndim != 2:
        raise ValueError(
            'ys should be a n_observations x n_features, ' +
            'or a 2-dimensional array.')

    n_xs = xs.shape[1]
    n_ys = ys.shape[1]

    X = tf.placeholder(name='X', shape=[None, n_xs],
                       dtype=tf.float32)
    Y = tf.placeholder(name='Y', shape=[None, n_ys],
                       dtype=tf.float32)

    current_input = X
    for layer_i in range(n_layers):
        current_input = utils.linear(
            current_input, n_neurons,
            activation=activation_fn,
            name='layer{}'.format(layer_i))[0]

    Y_pred = utils.linear(
        current_input, n_ys,
        activation=final_activation_fn,
        name='pred')[0]

    if cost_type == 'l1_norm':
        cost = tf.reduce_mean(tf.reduce_sum(
                tf.abs(Y - Y_pred), 1))
    elif cost_type == 'l2_norm':
        cost = tf.reduce_mean(tf.reduce_sum(
                tf.squared_difference(Y, Y_pred), 1))
    else:
        raise ValueError(
            'Unknown cost_type: {}. '.format(
            cost_type) + 'Use only "l1_norm" or "l2_norm"')

    return {'X': X, 'Y': Y, 'Y_pred': Y_pred, 'cost': cost}


def train(imgs,
          learning_rate=0.0001,
          batch_size=200,
          n_iterations=10,
          gif_step=2,
          n_neurons=30,
          n_layers=10,
          activation_fn=tf.nn.relu,
          final_activation_fn=tf.nn.tanh,
          cost_type='l2_norm'):

    N, H, W, C = imgs.shape
    all_xs, all_ys = [], []
    for img_i, img in enumerate(imgs):
        xs, ys = split_image(img)
        all_xs.append(np.c_[xs, np.repeat(img_i, [xs.shape[0]])])
        all_ys.append(ys)
    xs = np.array(all_xs).reshape(-1, 3)
    xs = (xs - np.mean(xs, 0)) / np.std(xs, 0)
    ys = np.array(all_ys).reshape(-1, 3)
    ys = ys / 127.5 - 1

    g = tf.Graph()
    with tf.Session(graph=g) as sess:
        model = build_model(xs, ys, n_neurons, n_layers,
                            activation_fn, final_activation_fn,
                            cost_type)
        optimizer = tf.train.AdamOptimizer(
            learning_rate=learning_rate).minimize(model['cost'])
        sess.run(tf.global_variables_initializer())
        gifs = []
        costs = []
        step_i = 0
        for it_i in range(n_iterations):
            # Get a random sampling of the dataset
            idxs = np.random.permutation(range(len(xs)))

            # The number of batches we have to iterate over
            n_batches = len(idxs) // batch_size
            training_cost = 0

            # Now iterate over our stochastic minibatches:
            for batch_i in range(n_batches):

                # Get just minibatch amount of data
                idxs_i = idxs[batch_i * batch_size:
                              (batch_i + 1) * batch_size]

                # And optimize, also returning the cost so we can monitor
                # how our optimization is doing.
                cost = sess.run(
                    [model['cost'], optimizer],
                    feed_dict={model['X']: xs[idxs_i],
                               model['Y']: ys[idxs_i]})[0]
                training_cost += cost

            print('iteration {}/{}: cost {}'.format(
                    it_i + 1, n_iterations, training_cost / n_batches))

            # Also, every `gif_step` iterations, we'll draw the prediction of our
            # input xs, which should try to recreate our image!
            if (it_i + 1) % gif_step == 0:
                costs.append(training_cost / n_batches)
                ys_pred = model['Y_pred'].eval(
                    feed_dict={model['X']: xs}, session=sess)
                img = ys_pred.reshape(imgs.shape)
                gifs.append(img)
        return gifs
"""
Explanation: <a name="explore"></a>
Explore
Go back over the previous cells and explore changing different parameters of the network. I would suggest first trying to change the learning_rate parameter to different values and see how the cost curve changes. What do you notice? Try negative exponents of $10$, e.g. $10^{-1}$, $10^{-2}$, $10^{-3}$... and so on. Also try changing the batch_size: $50, 100, 200, 500, ...$ How does it affect how the cost changes over time?
Be sure to explore other manipulations of the network, such as changing the loss function to $l_2$ or $l_1$. How does it change the resulting learning? Also try changing the activation functions, the number of layers/neurons, different optimizers, and anything else that you may think of, and try to get a basic understanding on this toy problem of how it affects the network's training. Also try comparing creating a fairly shallow/wide net (e.g. 1-2 layers with many neurons, e.g. > 100), versus a deep/narrow net (e.g. 6-20 layers with fewer neurons, e.g. < 20). What do you notice?
<a name="a-note-on-crossvalidation"></a>
A Note on Crossvalidation
The cost curve plotted above is only showing the cost for our "training" dataset. Ideally, we should split our dataset into what are called "train", "validation", and "test" sets. This is done by taking random subsets of the entire dataset. For instance, we partition our dataset by saying we'll only use 80% of it for training, 10% for validation, and the last 10% for testing. Then when training as above, you would only use the 80% of the data you had partitioned, and then monitor accuracy on both the data you have used to train, but also that new 10% of unseen validation data. This gives you a sense of how "general" your network is.
If it is performing just as well on that 10% of data, then you know it is doing a good job. Finally, once you are done training, you would test one last time on your "test" dataset. Ideally, you'd do this a number of times, so that every part of the dataset has a chance to be the test set. This would also give you a measure of the variance of the accuracy on the final test. If it changes a lot, you know something is wrong. If it remains fairly stable, then you know that it is a good representation of the model's accuracy on unseen data.
We didn't get a chance to cover this in class, as it is less useful for exploring creative applications, though it is very useful to know and to use in practice, as it avoids overfitting/overgeneralizing your network to all of the data. Feel free to explore how to do this on the application above!
<a name="part-three---learning-more-than-one-image"></a>
Part Three - Learning More than One Image
<a name="instructions-2"></a>
Instructions
We're now going to make use of our Dataset from Session 1 and apply what we've just learned to try and paint every single image in our dataset. What would you guess is the best way to approach this? We could for instance feed in every possible image by having multiple row, col -> r, g, b values. So for any given row, col, we'd have 100 possible r, g, b values. This likely won't work very well as there are many possible values a pixel could take, not just one. What if we also tell the network which image's row and column we wanted painted? We're going to try and see how that does.
You can execute all of the cells below unchanged to see how this works with the first 100 images of the celeb dataset. But you should replace the images with your own dataset, and vary the parameters of the network to get the best results!
I've placed the same code for running the previous algorithm into two functions, build_model and train.
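The "which image" input described above can be pictured in plain NumPy. This hedged sketch uses made-up 2 x 2 positions for two tiny images; the train function in this notebook does the same thing with np.c_ and np.repeat:

```python
import numpy as np

# Hypothetical (row, col) positions for two tiny 2 x 2 images.
xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Tag each image's positions with a third column: the image index.
all_xs = np.vstack([np.c_[xs, np.repeat(img_i, xs.shape[0])]
                    for img_i in range(2)])

print(all_xs.shape)  # (8, 3): row, col, image index
print(all_xs[:, 2])  # [0 0 0 0 1 1 1 1]
```

Feeding this third column means the network can condition its output color on which image a given (row, col) pair belongs to.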
You can directly call the function train with a 4-d image shaped as N x H x W x C, and it will collect all of the points of every image and try to predict the output colors of those pixels, just like before. The only difference now is that you are able to try this with a few images at a time. There are a few ways we could have tried to handle multiple images. The way I've shown in the train function is to include an additional input neuron for which image it is. So as well as receiving the row and column, the network will also receive as input which image it is as a number. This should help the network to better distinguish the patterns it uses, as it has knowledge that helps it separate its process based on which image is fed as input.
End of explanation
"""
celeb_imgs = utils.get_celeb_imgs()
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(celeb_imgs).astype(np.uint8))
# It doesn't have to be 100 images, explore!
imgs = np.array(celeb_imgs).copy()
"""
Explanation: <a name="code-1"></a>
Code
Below, I've shown code for loading the first 100 celeb files. Run through the next few cells to see how this works with the celeb dataset, and then come back here and replace the imgs variable with your own set of images. For instance, you can try your entire sorted dataset from Session 1 as an N x H x W x C array. Explore!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
"""
# Change the parameters of the train function and
# explore changing the dataset
gifs = train(imgs=imgs)
"""
Explanation: Explore changing the parameters of the train function and your own dataset of images. Note, you do not have to use the dataset from the last assignment! Explore different numbers of images, whatever you prefer.
<h3><font color='red'>TODO!
COMPLETE THIS SECTION!</font></h3> End of explanation """ montage_gifs = [np.clip(utils.montage( (m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in gifs] _ = gif.build_gif(montage_gifs, saveto='multiple.gif') """ Explanation: Now we'll create a gif out of the training process. Be sure to call this 'multiple.gif' for your homework submission: End of explanation """ ipyd.Image(url='multiple.gif?{}'.format(np.random.rand()), height=500, width=500) """ Explanation: And show it in the notebook End of explanation """ final = gifs[-1] final_gif = [np.clip(((m * 127.5) + 127.5), 0, 255).astype(np.uint8) for m in final] gif.build_gif(final_gif, saveto='final.gif') ipyd.Image(url='final.gif?{}'.format(np.random.rand()), height=200, width=200) """ Explanation: What we're seeing is the training process over time. We feed in our xs, which consist of the pixel values of each of our 100 images, it goes through the neural network, and out come predicted color values for every possible input value. We visualize it above as a gif by seeing how at each iteration the network has predicted the entire space of the inputs. We can visualize just the last iteration as a "latent" space, going from the first image (the top left image in the montage), to the last image (the bottom right image). End of explanation """ # Train a network to produce something, storing every few # iterations in the variable gifs, then export the training # over time as a gif. ... gif.build_gif(montage_gifs, saveto='explore.gif') ipyd.Image(url='explore.gif?{}'.format(np.random.rand()), height=500, width=500) """ Explanation: <a name="part-four---open-exploration-extra-credit"></a> Part Four - Open Exploration (Extra Credit) I now want you to explore what other possible manipulations of the network and/or dataset you could imagine. Perhaps a process that does the reverse, trying to guess where a given color should be painted? 
What if it was only taught a certain palette, and had to reason about other colors: how would it interpret those colors? Or what if you fed it pixel locations that weren't part of the training set, or outside the frame of what it was trained on? Or what happens with different activation functions, a different number of layers, or a larger or smaller number of neurons? I leave any of these as an open exploration for you. Try exploring this process with your own ideas, materials, and networks, and submit something you've created as a gif! To aid exploration, be sure to scale the image down quite a bit or it will require a much larger machine, and much more time to train. Then whenever you think you may be happy with the process you've created, try scaling up the resolution and leave the training to happen over a few hours/overnight to produce something truly stunning! Make sure to name the result of your gif: "explore.gif", and be sure to include it in your zip file. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> End of explanation """ utils.build_submission('session-2.zip', ('reference.png', 'single.gif', 'multiple.gif', 'final.gif', 'session-2.ipynb'), ('explore.gif',)) """ Explanation: <a name="assignment-submission"></a> Assignment Submission After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as: <pre> session-2/ session-2.ipynb single.gif multiple.gif final.gif explore.gif* libs/ utils.py * = optional/extra-credit </pre> You'll then submit this zip file for your second assignment on Kadenze for "Assignment 2: Teach a Deep Neural Network to Paint"! If you have any questions, remember to reach out on the forums and connect with your peers or with me. To get assessed, you'll need to be a premium student! 
This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work! End of explanation """
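The repeated train/test evaluation described in the Part Three introduction above — running the test several times so that every part of the dataset gets one turn as the held-out set — can be sketched in a few lines. This is a plain-Python illustration under my own naming, not part of the course's `utils` module; real projects would typically reach for `sklearn.model_selection.KFold`.

```python
# A minimal sketch of the k-fold idea: every sample lands in the held-out
# "test" fold exactly once across the k rounds.

def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs covering every sample once."""
    # distribute any remainder over the first folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train_idx, test_idx
        start += size

folds = list(k_fold_indices(10, 5))
# Flattening all test folds recovers every sample index exactly once:
all_test = sorted(i for _, test in folds for i in test)
print(all_test)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Training once per fold and looking at the spread of the k accuracies gives exactly the variance estimate the text describes: a stable spread suggests the score generalizes, a wildly varying one suggests something is wrong.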
kimkipyo/dss_git_kkp
통계, 머신러닝 복습/160531화_10일차_Scikit-Learn & statsmodels 패키지 소개 Introduction to Scikit-Learn & statsmodels packages/4.Scikit-Learn 패키지의 샘플 데이터 - classification용.ipynb
mit
from sklearn.datasets import load_iris iris = load_iris() print(iris.DESCR) df = pd.DataFrame(iris.data, columns=iris.feature_names) sy = pd.Series(iris.target, dtype="category") sy = sy.cat.rename_categories(iris.target_names) df['species'] = sy df sns.pairplot(df, hue='species') plt.show() """ Explanation: Sample datasets in the Scikit-Learn package - for classification Iris Dataset load_iris() https://en.wikipedia.org/wiki/Iris_flower_data_set Observational data from R. A. Fisher's iris classification study Sepal Length Sepal Width Petal Length Petal Width Species: setosa versicolor virginica End of explanation """ from sklearn.datasets import fetch_20newsgroups newsgroups = fetch_20newsgroups(subset="all") print(newsgroups.description) print(newsgroups.keys()) from pprint import pprint pprint(list(newsgroups.target_names)) print(newsgroups.data[1]) print("="*80) print(newsgroups.target_names[newsgroups.target[1]]) """ Explanation: Newsgroup text fetch_20newsgroups(): 20 News Groups text End of explanation """ from sklearn.datasets import fetch_olivetti_faces olivetti = fetch_olivetti_faces() print(olivetti.DESCR) print(olivetti.keys()) N=2; M=5; fig = plt.figure(figsize=(8,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) klist = np.random.choice(range(len(olivetti.data)), N * M) for i in range(N): for j in range(M): k = klist[i*M+j] ax = fig.add_subplot(N, M, i*M+j+1) ax.imshow(olivetti.images[k], cmap=plt.cm.bone); ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(olivetti.target[k]) plt.tight_layout() plt.show() """ Explanation: Olivetti faces fetch_olivetti_faces() Face recognition images End of explanation """ from sklearn.datasets import fetch_lfw_people lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4) print(lfw_people.DESCR) print(lfw_people.keys()) N=2; M=5; fig = plt.figure(figsize=(8,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0.1, wspace=0.05) klist = np.random.choice(range(len(lfw_people.data)), N * M) for i in range(N): for j in range(M): k = klist[i*M+j] ax = fig.add_subplot(N, M, i*M+j+1) ax.imshow(lfw_people.images[k], cmap=plt.cm.bone); ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(lfw_people.target_names[lfw_people.target[k]]) plt.tight_layout() plt.show() """ Explanation: Labeled Faces in the Wild (LFW) #### fetch_lfw_people() Face images of celebrities Parameters funneled : boolean, optional, default: True Download and use the funneled variant of the dataset. resize : float, optional, default 0.5 Ratio used to resize each face picture. min_faces_per_person : int, optional, default None The extracted dataset will only retain pictures of people that have at least min_faces_per_person different pictures. color : boolean, optional, default False Keep the 3 RGB channels instead of averaging them to a single gray level channel. If color is True the shape of the data has one more dimension than the shape with color = False. End of explanation """ from sklearn.datasets import fetch_lfw_pairs lfw_pairs = fetch_lfw_pairs(resize=0.4) print(lfw_pairs.DESCR) print(lfw_pairs.keys()) N=2; M=5; fig = plt.figure(figsize=(8,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0.01, wspace=0.05) klist = np.random.choice(range(len(lfw_pairs.data)), M) for j in range(M): k = klist[j] ax1 = fig.add_subplot(N, M, j+1) ax1.imshow(lfw_pairs.pairs[k][0], cmap=plt.cm.bone); ax1.grid(False) ax1.xaxis.set_ticks([]) ax1.yaxis.set_ticks([]) plt.title(lfw_pairs.target_names[lfw_pairs.target[k]]) ax2 = fig.add_subplot(N, M, j+1 + M) ax2.imshow(lfw_pairs.pairs[k][1], cmap=plt.cm.bone); ax2.grid(False) ax2.xaxis.set_ticks([]) ax2.yaxis.set_ticks([]) plt.tight_layout() plt.show() """ Explanation: #### fetch_lfw_pairs() Pairs of face images - each pair may or may not show the same person End of explanation """ from sklearn.datasets import load_digits digits = load_digits() print(digits.DESCR) print(digits.keys()) N=2; M=5; fig = plt.figure(figsize=(10,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) for i in range(N): for j in range(M): k = i*M+j ax = fig.add_subplot(N, M, k+1) ax.imshow(digits.images[k], cmap=plt.cm.bone, interpolation="none"); ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(digits.target_names[k]) plt.tight_layout() plt.show() """ Explanation: Digits Handwriting Image load_digits() Images of handwritten digits End of explanation """ from sklearn.datasets.mldata import fetch_mldata mnist = fetch_mldata('MNIST original') mnist.keys() N=2; M=5; fig = plt.figure(figsize=(8,5)) plt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05) klist = np.random.choice(range(len(mnist.data)), N * M) for i in range(N): for j in range(M): k = klist[i*M+j] ax = fig.add_subplot(N, M, i*M+j+1) ax.imshow(mnist.data[k].reshape(28, 28), cmap=plt.cm.bone, interpolation="nearest"); ax.grid(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.title(mnist.target[k]) plt.tight_layout() plt.show() """ Explanation: mldata.org repository fetch_mldata() http://mldata.org public repository for machine learning data, supported by the PASCAL network Search for the data name on the website and use it as the key MNIST handwritten digit recognition data https://en.wikipedia.org/wiki/MNIST_database Mixed National Institute of Standards and Technology (MNIST) database Images of handwritten digits 0-9 28x28 pixel bounding box anti-aliased, grayscale levels 60,000 training images and 10,000 testing images End of explanation """
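One detail worth noting about the image datasets above: scikit-learn estimators consume one flat feature vector per sample, so the 8x8 grids in `digits.images` correspond to 64-element rows in `digits.data`, and each 28x28 MNIST image is stored as a 784-element row that the plotting code reshapes back with `.reshape(28, 28)`. A pure-Python sketch of that row-major flattening:

```python
# Flatten a 2-D pixel grid into the 1-D feature vector a classifier expects.

def flatten_image(image):
    """Concatenate the rows of a 2-D grid into one flat list."""
    return [pixel for row in image for pixel in row]

image = [[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]]
vector = flatten_image(image)
print(vector)       # [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(len(vector))  # 9 -- an 8x8 digit gives 64 features, a 28x28 MNIST image 784
```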
DillonNovak/Programming-for-Chemical-Engineering-Applications
Breast+Cancer+Diagnosis.ipynb
gpl-3.0
%matplotlib inline from sklearn.decomposition import PCA import sys import scipy as sp import numpy as np import matplotlib.pyplot as plt import pandas as pd import sklearn as sk import seaborn as sns sns.set_context('talk') #import PCA models from pandas.plotting import scatter_matrix from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC # load dataset dfall = pd.read_csv('wdbc.data.txt') # drop standard error and largest value for each attribute df = dfall.drop(dfall.columns[[0,3,4,6,7,9,10,12,13,15,16,18,19,21,22,24,25,27,28,30,31]],axis=1) # name columns df.columns = ['diagnosis','radius','texture','perimeter','area','smoothness','compactness','concavity','concave points','symmetry','fractal dimension'] print(df.shape) print(df.describe()) print(df.groupby('diagnosis').size()) df.head() scatter_matrix(df) plt.show() X = df.iloc[:,1:11] X.tail() # plot histogram distribution of each attribute plt.figure(figsize=(16,6)) plt.subplot(2,5,1) k = 1 for c in X.columns: plt.subplot(2,5,k) plt.hist(X[c],density=True,alpha=0.6,bins=20) plt.title(c) k += 1 plt.tight_layout() """ Explanation: Predicting Malignant Tumors Wisconsin Diagnostic Breast Cancer Dataset https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic) Dataset attributes: 0. diagnosis (malignant or benign) 1. radius (mean of distances from center to points on the perimeter) 2. texture (standard deviation of gray-scale values) 3. perimeter 4. area 5. smoothness (local variation in radius lengths) 6. compactness (perimeter^2 / area - 1.0) 7. concavity (severity of concave portions of the contour) 8. concave points (number of concave portions of the contour) 9. symmetry 10. fractal dimension ("coastline approximation" - 1) End of explanation """ from sklearn.preprocessing import StandardScaler X_std = StandardScaler().fit_transform(X) lbls = X.columns plt.figure(figsize=(16,6)) plt.subplot(2,5,1) k = 0 for c in lbls: plt.subplot(2,5,k+1) plt.hist(X_std[:,k],density=True,alpha=0.6,bins=20) plt.title(c) k += 1 plt.tight_layout() """ Explanation: Scaling and Centering End of explanation """ pca = PCA(n_components=3) Y = pca.fit_transform(X_std) w = pca.components_ v = pca.explained_variance_ratio_ print(v) for k in range(0,len(w)): plt.subplot(3,1,k+1) plt.bar(range(0,len(w[k])),w[k],width=.5) plt.xticks(range(0,len(w[k])),lbls) plt.title('explained variance ratio = {0:.3f}'.format(v[k])) plt.tight_layout() k = 0 for n in df['diagnosis']: if(df.iloc[k,0] == 'M'): plt.scatter(Y[k,0],Y[k,1],color='red',alpha=0.4) else: plt.scatter(Y[k,0],Y[k,1],color='green',alpha=0.4) k += 1 """ Explanation: PCA Analysis End of explanation """ # split out validation dataset diagnosisarray = df.values dfvals = df.drop(df.columns[[0]],axis=1) #print(dfvals.head()) array = dfvals.values X = array[:,0:9] Y = diagnosisarray[:,0] validation_size = 0.20 seed = 7 X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed) #check algorithms models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) #evaluate each model results = [] names = [] for name, model in models: kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed) cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) 
names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) # compare algorithms fig = plt.figure() fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) plt.show() """ Explanation: Predictive Analysis Train predictive models to identify malignant tumors and choose the most accurate model to test on a validation set of data End of explanation """ # make predictions on validation dataset lda = LinearDiscriminantAnalysis() lda.fit(X_train, Y_train) predictions = lda.predict(X_validation) print(accuracy_score(Y_validation, predictions)) print(confusion_matrix(Y_validation, predictions)) print(classification_report(Y_validation, predictions)) # make predictions on validation dataset lr = LogisticRegression() lr.fit(X_train, Y_train) predictions = lr.predict(X_validation) print(accuracy_score(Y_validation, predictions)) print(confusion_matrix(Y_validation, predictions)) print(classification_report(Y_validation, predictions)) """ Explanation: LDA is the most accurate. Use LDA model to evaluate the validation dataset End of explanation """
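The `accuracy_score` and `classification_report` values printed above can be reproduced by hand from the confusion matrix. A sketch for the binary case, using made-up counts rather than the notebook's actual output (scikit-learn lays the matrix out with rows as true labels and columns as predicted labels, i.e. `[[TN, FP], [FN, TP]]`):

```python
# Recover accuracy, precision, and recall from a 2x2 confusion matrix.
# The counts below are hypothetical, chosen only to illustrate the arithmetic.

def binary_metrics(cm):
    (tn, fp), (fn, tp) = cm
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)  # of predicted positives, how many were right
    recall = tp / (tp + fn)     # of true positives, how many were found
    return accuracy, precision, recall

cm = [[70, 2], [3, 39]]  # hypothetical 114-sample validation split
acc, prec, rec = binary_metrics(cm)
print(round(acc, 3), round(prec, 3), round(rec, 3))  # 0.956 0.951 0.929
```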
fionapigott/Data-Science-45min-Intros
count-min-101/CountMinSketch.ipynb
unlicense
import sys import random import numpy as np import heapq import json import time BIG_PRIME = 9223372036854775783 def random_parameter(): return random.randrange(0, BIG_PRIME - 1) class Sketch: def __init__(self, delta, epsilon, k): """ Setup a new count-min sketch with parameters delta, epsilon and k The parameters delta and epsilon control the accuracy of the estimates of the sketch Cormode and Muthukrishnan prove that for an item i with count a_i, the estimate from the sketch a_i_hat will satisfy the relation a_hat_i <= a_i + epsilon * ||a||_1 with probability at least 1 - delta, where a is the the vector of all all counts and ||x||_1 is the L1 norm of a vector x Parameters ---------- delta : float A value in the unit interval that sets the precision of the sketch epsilon : float A value in the unit interval that sets the precision of the sketch k : int A positive integer that sets the number of top items counted Examples -------- >>> s = Sketch(10**-7, 0.005, 40) Raises ------ ValueError If delta or epsilon are not in the unit interval, or if k is not a positive integer """ if delta <= 0 or delta >= 1: raise ValueError("delta must be between 0 and 1, exclusive") if epsilon <= 0 or epsilon >= 1: raise ValueError("epsilon must be between 0 and 1, exclusive") if k < 1: raise ValueError("k must be a positive integer") self.w = int(np.ceil(np.exp(1) / epsilon)) self.d = int(np.ceil(np.log(1 / delta))) self.k = k self.hash_functions = [self.__generate_hash_function() for i in range(self.d)] self.count = np.zeros((self.d, self.w), dtype='int32') self.heap, self.top_k = [], {} # top_k => [estimate, key] pairs def update(self, key, increment): """ Updates the sketch for the item with name of key by the amount specified in increment Parameters ---------- key : string The item to update the value of in the sketch increment : integer The amount to update the sketch by for the given key Examples -------- >>> s = Sketch(10**-7, 0.005, 40) >>> s.update('http://www.cnn.com/', 
1) """ for row, hash_function in enumerate(self.hash_functions): column = hash_function(abs(hash(key))) self.count[row, column] += increment self.update_heap(key) def update_heap(self, key): """ Updates the class's heap that keeps track of the top k items for a given key For the given key, it checks whether the key is present in the heap, updating accordingly if so, and adding it to the heap if it is absent Parameters ---------- key : string The item to check against the heap """ estimate = self.get(key) if not self.heap or estimate >= self.heap[0][0]: if key in self.top_k: old_pair = self.top_k.get(key) old_pair[0] = estimate heapq.heapify(self.heap) else: if len(self.top_k) < self.k: heapq.heappush(self.heap, [estimate, key]) self.top_k[key] = [estimate, key] else: new_pair = [estimate, key] old_pair = heapq.heappushpop(self.heap, new_pair) if new_pair[1] != old_pair[1]: del self.top_k[old_pair[1]] self.top_k[key] = new_pair self.top_k[key] = new_pair def get(self, key): """ Fetches the sketch estimate for the given key Parameters ---------- key : string The item to produce an estimate for Returns ------- estimate : int The best estimate of the count for the given key based on the sketch Examples -------- >>> s = Sketch(10**-7, 0.005, 40) >>> s.update('http://www.cnn.com/', 1) >>> s.get('http://www.cnn.com/') 1 """ value = sys.maxsize for row, hash_function in enumerate(self.hash_functions): column = hash_function(abs(hash(key))) value = min(self.count[row, column], value) return value def __generate_hash_function(self): """ Returns a hash function from a family of pairwise-independent hash functions """ a, b = random_parameter(), random_parameter() return lambda x: (a * x + b) % BIG_PRIME % self.w # define a function to return a list of the exact top users, sorted by count def exact_top_users(f, top_n = 10): import operator counts = {} for user in f: user = user.rstrip('\n') try: if user not in counts: counts[user] = 1 else: counts[user] += 1 except ValueError: 
pass except KeyError: pass counter = 0 results = [] for user,count in reversed(sorted(counts.items(), key=operator.itemgetter(1))): if counter >= top_n: break results.append('{},{}'.format(user,str(count))) counter += 1 return results # note that the output format is '[user],[count]' f = open('CM_small.txt') results_exact = sorted(exact_top_users(f)) print(results_exact) # define a function to return a list of the estimated top users, sorted by count def CM_top_users(f, s, top_n = 10): for user_name in f: s.update(user_name.rstrip('\n'),1) results = [] counter = 0 for value in reversed(sorted(s.top_k.values())): if counter >= top_n: break results.append('{1},{0}'.format(str(value[0]),str(value[1]))) counter += 1 return results # note that the output format is '[user],[count]' # instantiate a Sketch object s = Sketch(10**-3, 0.1, 10) f = open('CM_small.txt') results_CM = sorted(CM_top_users(f,s)) print(results_CM) for item in zip(results_exact,results_CM): print(item) """ Explanation: Basic Idea of Count Min sketch We map the input value to multiple points in a relatively small output space. Therefore, the count associated with a given input will be applied to multiple counts in the output space. Even though collisions will occur, the minimum count associated with a given input will have some desirable properties, including the ability to be used to estimate the largest N counts. <img src="files/count_min_2.png"> http://debasishg.blogspot.com/2014/01/count-min-sketch-data-structure-for.html Parameters of the sketch: epsilon delta These parameters are inversely and exponentially (respectively) related to the sketch size parameters, d and w. Implementation of the CM sketch End of explanation """
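The relation between the accuracy parameters and the table dimensions chosen in `Sketch.__init__` above can be checked directly: the width is w = ceil(e / epsilon) and the depth is d = ceil(ln(1 / delta)). A small stdlib-only calculator mirroring the NumPy expressions in the class:

```python
# Table size implied by the accuracy parameters of a count-min sketch.
import math

def sketch_dims(delta, epsilon):
    w = int(math.ceil(math.e / epsilon))      # columns: smaller epsilon -> wider table
    d = int(math.ceil(math.log(1 / delta)))   # hash rows: smaller delta -> deeper table
    return w, d

print(sketch_dims(10**-3, 0.1))    # (28, 7)
print(sketch_dims(10**-7, 0.005))  # (544, 17)
```

Note how the memory cost grows only linearly in 1/epsilon and logarithmically in 1/delta, which is why tightening the failure probability delta is so cheap compared to tightening the error bound epsilon.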
End of explanation """ f = open('CM_large.txt') %time results_exact = exact_top_users(f) print(results_exact) # this could take a few minutes f = open('CM_large.txt') s = Sketch(10**-4, 0.001, 10) %time results_CM = CM_top_users(f,s) print(results_CM) """ Explanation: Yes! (if you try enough) Why? The 'w' parameter goes like ceiling(exp(1)/epsilon), which is always >=~ 3. The 'd' parameter goes like ceiling(log(1/delta), which is always >= 1. So, you're dealing with a space with minimum size 3 x 1. With 10 records, it's possible that all 4 users map their counts to the point. So it's possible to see an estimate as high as 10, in this case. Now for a larger data set. End of explanation """ for item in zip(results_exact,results_CM): print(item) # the CM sketch gets the top entry (an outlier) correct but doesn't do well estimating the order of the more degenerate counts # let's decrease the precision via both the epsilon and delta parameters, and see whether it still gets the "heavy-hitter" correct f = open('CM_large.txt') s = Sketch(10**-3, 0.01, 10) %time results_CM = CM_top_users(f,s) print(results_CM) # nope...sketch is too coarse, too many collisions, and the prominence of user 'Euph0r1a__ 129' is obscured for item in zip(results_exact,results_CM): print(item) """ Explanation: For this precision and dataset size, the CM algo takes much longer than the exact solution. In fact, the crossover point at which the CM sketch can achieve reasonable accuracy in the same time as the exact solution is a very large number of entries. End of explanation """
gvasold/gdp17
basics/funktionen.ipynb
apache-2.0
def say_hello(): print('Hello!') """ Explanation: Funktionen Funktionen sind Prozeduren oder, wenn man so will, Subprogramme, die aus dem Hauptprogramm heraus aufgerufen werden. Die Vorteile der Verwendung von Funktionen sind: Funktionen sind wiederverwendbar: Eine einmal geschriebene Funktion kann in einem Programm mehrfach verwendet werden (oder sogar aus unterschiedlichen Programmen heraus) Funktionen verhindern Redundanz Funktionen gliedern den Code und machen ihn so besser verständlich und wartbar. Eine Funktion schreiben Eine Funktionsdeklaration beginnt in Python mit dem Schlüsselwort def (Viele andere Sprachen verwenden statt dessen function). Nach dem def folgt der Name der Funktion. Dieser sollte idealerweise ein Verb sein, da eine Funktion immer etwas tut. Der Name der Funktion wird durch ein Paar runde Klammern und einen Doppelpunkt abgeschlossen. Danach folgt der eigentliche Funktionskörper mit dem wiederverwendbaren Code. End of explanation """ say_hello() """ Explanation: Damit habe wir eine Funktion mit dem Namen say_hello geschrieben. Die Funktion tut noch nichts, da wir sie noch nicht aufgerufen haben. Der Aufruf der Funktion sieht so aus: End of explanation """ def say_hello(username): print('Hello {}!'.format(username)) """ Explanation: Funktionsparameter Die gerade geschriebene Funktion say_hello() stellt nur die Minimalversion einer Funktion dar, die immer dasselbe tut. Wir können eine Funktion flexibler machen, indem wir ihr Parameter zuweisen: End of explanation """ say_hello('Gunter') say_hello('Anna') """ Explanation: Hier legt die Funktionsdeklaration fest, dass der Funktion beim Aufruf ein Wert übergeben werden muss, der dann innerhalb der Funktion als Variable username verfügbar ist. Wie können diesen Wert als Argument beim Aufruf der Funktion übergeben: End of explanation """ rv = say_hello('Otto') print('Rückgabewert: {}'.format(rv)) """ Explanation: Rückgabewerte Jede Funktion gibt beim Aufruf einen Wert zurück. 
Dieser ist, wenn nicht anders angegeben, None. End of explanation """ def shorten(long_str): rv = long_str if len(long_str) > 2: rv = "{}{}{}".format(long_str[0], len(long_str)-2, long_str[-1]) return rv shorten('Internationalization') shorten('Gunter') """ Explanation: Rückgabewerte sind immer dann sinnvoll, wenn eine Funktion z.B. etwas berechnet und das Resultat der Berechnung im Hauptprogramm verwendet werden soll. End of explanation """ with open('data/vornamen/vornamen_1984.txt') as fh: names_84 = [n.rstrip() for n in fh.readlines()] """ Explanation: Ein echtes Beispiel Erinnern wir uns an die Hausübung, wo wir die beliebtesten Vornamen von 1984 und 2015 verglichen haben. Hier haben wir für jedes der beiden Jahre die entsprechende Datei eingelesen und die Namen in eine Liste eingelesen: End of explanation """ def read_names(filename): with open(filename) as fh: return [n.rstrip() for n in fh.readlines()] names_84 = read_names('data/vornamen/vornamen_1984.txt') names_15 = read_names('data/vornamen/vornamen_2015.txt') """ Explanation: Anstatt diesen Code für jede zu untersuchende Datei neu zu schreiben, können wir eine entsprechende Funktion programmieren und diese mehrfach aufrufen: End of explanation """ def read_names_for_year(year): filename = "data/vornamen/vornamen_{}.txt".format(year) with open(filename) as fh: return [n.rstrip() for n in fh.readlines()] names_84 = read_names_for_year(1984) """ Explanation: Wenn wir davon ausgehen können, dass wir die beliebtesten Vornamen eines jeden Jahres im Verzeichnis data/vornamen/ finden, und die Dateinamen immer gleich aufgebaut sind (vornamen_YYYY.txt), können wir auch eine spezialisiertere Funktion schreiben: End of explanation """ def read_names_for_year(year): filename = "data/popular_firstnames/vornamen_{}.txt".format(year) with open(filename) as fh: return [n.rstrip() for n in fh.readlines()] """ Explanation: Funktionen vermeiden Redundanz Die zweite Lösung ist nicht so allgemein verwendbar wie die erste 
(weil sie davon ausgeht, dass alle Vornamen-Dateien in einem bestimmten Verzeichnis liegen und einem bestimmten Namensschema folgen), bietet aber neben der kompakteren Schreibweise beim Aufruf einen weiteren Vorteil: Falls sich z.B. an der Verzeichnisstruktur etwas ändert, brauchen wir diese Änderung nicht bei jedem einzelnen Aufruf der Funktion nachziehen, sondern an genau einer Stelle: in der Funktion. Nehmen wir an, aus irgendwelchen Gründen müssen wir den Verzeichnisnamen von data/vornamen nach data/popular_firstnames ändern. Während wir im ersten Beispiel alle Aufrufe von read_names() suchen und dort den Verzeichnisnamen ändern müßten (was bei zwei Aufrufen jetzt nicht so aufwändig wäre ;-)), brauchen wir im zweiten Fall die Änderung nur einmal (in der Funktion) zu machen: End of explanation """ def compute_weight(length, width, height): ccm = length * width * height return ccm / 1000.0 """ Explanation: Hier aber auch gleich eine Warnung: Die Wiederverwendbarkeit einer Funktion hängt stark von ihrer Flexibilität (sprich: Parametrisierbarkeit) ab. Je spezialisierter eine Funktion ist, desto weniger einfach kann sie wiederverwendet werden. Ebenso gilt das Gegenteil: Man kann eine Funktion sehr flexibel schreiben, indem man viele Parameter verwendet, aber irgendwann wird die Verwendung der Funktion dadurch so kompliziert, dass man sie nicht mehr verwenden will. Eine Funktion mit mehreren Parametern Grundsätzlich kann eine Funktion beliebig viele Parameter haben. In der Praxis sollte man sich, außer man hat gute Gründe, auf maximal 4 oder 5 Parameter beschränken. End of explanation """ def compute_weight(length, width, height): "Return the weight of a fish tank in kg." 
ccm = length * width * height # we assume a water density of 1000 kg/cbm # so we first convert ccm to cbm and multiply by density # so ccm / 1000000 * 1000 return ccm / 1000 """ Explanation: Kommentare Dokumentation des Source Codes ist wesentlich, weil dadurch der Code erklärt und für spätere Bearbeiter (was auch der ursprüngliche Programmierer sein kann) leichter verständlich wird. Dazu verwendet man Kommentare. Das Kommentarzeichen in Python ist #. Man sollte allerdings nur Dinge kommentieren, die sich nicht ohnehin einfach aus dem Code ableiten lassen: ~~~ ... i += 1 # increase i by 1 ... ~~~ ist ein gutes Beispiel für einen unnötigen Kommentar. Im nächsten Beispiel dokumentieren wir, woher ein bestimmter Wert kommt. Hier macht der Kommentar mehr Sinn. ~~~ def compute_weight(length, width, height): ccm = length * width * height # we assume a water density of 1000 kg/cbm # so we first convert ccm to cbm and multiply by density # so ccm / 1000000 * 1000 return ccm / 1000 ~~~ Docstrings Python bietet mit Docstrings eine Besonderheit. In Form von Docstrings wird die Dokumentation Teil der Funktion (Docstrings können auch für Module und Pakete verwendet werden, aber dazu kommen wir erst später). Ein Docstring muss unmittelbar nach der Funktionsdeklaration in Form eines Strings erscheinen: End of explanation """ compute_weight.__doc__ """ Explanation: Der Docstring in diesem Beispiel wird wirklich Teil der Funktion und kann abgefragt we<rden: End of explanation """ help(compute_weight) """ Explanation: Das machen sich beispielsweise die in Python eingebaute help()-Funktion, aber auch integrierte Entwicklungsumgebungen (IDE) zunutze, um den Benutzern Informationen zu einer Funktion anzubieten. End of explanation """ def compute_weight(length, width, height): """Compute weight of water (in kg) for a fish tank. Args: length -- length in cm. width -- width in cm. height -- height in cm. Returns: Weight in kg as float. 
""" ccm = length * width * height # we assume a water density of 1000 kg/cbm # so we first convert ccm to cbm and multiply by density # so ccm / 1000000 * 1000 return ccm / 1000 help(compute_weight) """ Explanation: Auch in einem Jupyter Notebook steht dies zur Verfügung. Schreiben Sie in der unten stehenden Zelle einen Funktionsnamen (z.B. compute_weight) und drücken Sie dann gleichzeitig die Tasten Shift und Tab. Mehrzeilige Docstrings Zusätzlich zur Basisform können auch ausführlichere Docstrings geschrieben werden: End of explanation """ # fluid densities only # ice has a density of 918, steam of 0.59 densities = {0: 999.84, 1: 999.9, 2: 999.94, 3: 999.96, 4: 999.97, 5: 999.96, 6: 999.94, 7: 999.9, 8: 999.85, 9: 999.78, 10: 999.7, 11: 999.6, 12: 999.5, 13: 999.38, 14: 999.24, 15: 999.1, 16: 998.94, 17: 998.77, 18: 998.59, 19: 998.4, 20: 998.2, 21: 997.99, 22: 997.77, 23: 997.54, 24: 997.29, 25: 997.04, 26: 996.78, 27: 996.51, 28: 996.23, 29: 995.94, 30: 995.64, 31: 995.34, 32: 995.02, 33: 994.7, 34: 994.37, 35: 994.03, 36: 993.68, 37: 993.32, 38: 992.96, 39: 992.59, 40: 992.21, 45: 990.21, 50: 988.03, 55: 985.69, 60: 983.19, 65: 980.55, 70: 977.76, 75: 974.84, 80: 971.79, 85: 968.61, 90: 965.3, 95: 961.88, 100: 958.35} """ Explanation: Übung Die Dichte des Wassers ist abhängig von der Temperatur. Diese Werte stehen im Dictionary densities. Schreiben sie compute_weight so um, dass das Gewicht abhängig von der Temperatur berechnet wird. 
End of explanation """ import random def get_temperature(): "Return temperature in Celsius and Fahrenheit." # as we do not have a thermometer attached, we return a random temperature celsius = random.randint(-30, 45) fahrenheit = celsius * (9/5) + 32 return celsius, fahrenheit celsius, fahrenheit = get_temperature() print('Current temperature: {}° Celsius ({:0.2f}° Fahrenheit)'.format(celsius, fahrenheit)) """ Explanation: Functions with multiple return values In most programming languages, functions can return only a single value (unless, of course, the return values are "wrapped" in a container). In Python, however, two or more values can be returned directly. End of explanation """ # TODO def char_info(string): "Return num of all and num of distinct chars of string." """ Explanation: Exercise Let's write a function char_info() that is passed a string and returns two values: len and distinct_len. In other words, we want to determine the number of characters and the number of distinct characters. End of explanation """ def set_userdata(username, age=None): return username, age print(set_userdata('otto', 20)) print(set_userdata('anna')) """ Explanation: Default parameters Normally, the order of the arguments in a function call has to match the order of the parameters in the function declaration, and all arguments have to be supplied: ~~~ def set_userdata(username, age): ... set_userdata('otto', 20) ~~~ If a parameter is not strictly required, we can define a default value for it. End of explanation """ def set_userdata(age=None, username): return username, age """ Explanation: If the value is passed, it is used inside the function; if not, the default value is used. Such parameters have to come after all other parameters, which is why the following code block does not work. End of explanation """ def set_userdata(username, age=None, weight=None, height=None): return username, age, weight, height print(set_userdata('Otto', height=181)) print(set_userdata('Otto', height=181, age=25)) """ Explanation: If more than one default parameter is defined, keyword arguments can be used when calling the function, which means we no longer have to stick to the given order: End of explanation """ def my_function(*args): print(type(args)) my_function(1, 2, 3) """ Explanation: Functions with an arbitrary number of parameters Sometimes, at the time a function is written, it is not yet known how many parameters to expect. In that case we can write the function declaration like this: ~~~ def my_function(*args): ~~~ All values are then packed into a tuple: End of explanation """ def avg(*args): return sum(args) / len(args) avg(1, 2, 3, 4) """ Explanation: With this we can, for example, write a function that computes the arithmetic mean: End of explanation """ def increase(val): val += 1 return val val = 1 new_val = increase(val) print(val, new_val) """ Explanation: Scope of variables So far we have not given much thought to when and where the value of a variable is visible. In connection with functions, however, we have to deal with this. It should be said up front that this visibility is handled in a rather unusual way in Python. End of explanation """ def increase(): val += 1 return val val = 1 new_val = increase() print(val, new_val) """ Explanation: In this example we have two scopes for the variable val (and therefore two variables): a globally valid one and a second one that is visible only inside the function. When we change the value of val inside the function, this affects only the val inside the function, not the variable valid outside. This goes so far that global variables are not available inside a function if we try to change them: End of explanation """ def print_val(): print(val) val = 1 print_val() """ Explanation: However, this only applies if we try to change the variable. For reading, we can access the global variable: End of explanation """ def compute_final_grade(grades): grades[1] = 1 return sum(grades) / len(grades) grades = [1, 5, 2, 1, 3] print(compute_final_grade(grades)) print(grades) """ Explanation: This prevents the value of a global variable from being changed accidentally inside a function. However, what we just claimed is not true without restriction: End of explanation """ def compute_final_grade(mygrades): mygrades[1] = 1 return sum(mygrades) / len(mygrades) grades = [1, 5, 2, 1, 3] print(compute_final_grade(grades)) print(grades) """ Explanation: As we can see, changing a value of a list (or of any mutable data type) inside a function does not change a local copy of the list but the value of the globally defined list. The reason is that containers such as lists or dictionaries are not passed to the function as a copy of the original value, but as a reference to the global object. grades inside and outside the function therefore points to the same object! The two variables can even have different names and still point to the same object: End of explanation """
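If the caller's list is supposed to stay untouched, the function can explicitly work on a copy — a small sketch building on the example above:

```python
def compute_final_grade(grades):
    grades = list(grades)  # work on a copy; the caller's list stays untouched
    grades[1] = 1
    return sum(grades) / len(grades)

grades = [1, 5, 2, 1, 3]
print(compute_final_grade(grades))  # 1.6
print(grades)                       # [1, 5, 2, 1, 3] -- unchanged
```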
phoebe-project/phoebe2-docs
2.3/examples/minimal_contact_binary.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" """ Explanation: Minimal Contact Binary System Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() """ Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation """ b = phoebe.default_binary(contact_binary=True) """ Explanation: Here we'll initialize a default binary, but ask for it to be created as a contact system. For more details see the contact binary hierarchy tutorial. End of explanation """ b.add_dataset('mesh', compute_times=[0], dataset='mesh01') b.add_dataset('orb', compute_times=np.linspace(0,1,201), dataset='orb01') b.add_dataset('lc', times=np.linspace(0,1,21), dataset='lc01') b.add_dataset('rv', times=np.linspace(0,1,21), dataset='rv01') """ Explanation: Adding Datasets End of explanation """ b.run_compute(irrad_method='none') """ Explanation: Running Compute End of explanation """ print(b['mesh01@model'].components) """ Explanation: Synthetics To ensure compatibility with computing synthetics in detached and semi-detached systems in Phoebe, the synthetic meshes for our overcontact system are attached to each component separately, instead of the contact envelope. End of explanation """ afig, mplfig = b['mesh01@model'].plot(x='ws', show=True) """ Explanation: Plotting Meshes End of explanation """ afig, mplfig = b['orb01@model'].plot(x='ws',show=True) """ Explanation: Orbits End of explanation """ afig, mplfig = b['lc01@model'].plot(show=True) """ Explanation: Light Curves End of explanation """ afig, mplfig = b['rv01@model'].plot(show=True) """ Explanation: RVs End of explanation """
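A small companion step outside of PHOEBE itself: for longer time series it is often useful to fold observation times into orbital phase before comparing them to a model like the one above. A plain NumPy sketch (assuming, as for the default binary here, a period of one day and a zero reference time):

```python
import numpy as np

def fold(times, period=1.0, t0=0.0):
    # Map each time onto orbital phase in [0, 1).
    return ((np.asarray(times, dtype=float) - t0) / period) % 1.0

print(fold([0.25, 1.25, 2.75]))  # [0.25 0.25 0.75]
```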
termanli/CLIOL
External_data_Drive,_Sheets,_and_Cloud_Storage.ipynb
lgpl-3.0
from google.colab import files uploaded = files.upload() for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn]))) """ Explanation: <a href="https://colab.research.google.com/github/termanli/CLIOL/blob/master/External_data_Drive,_Sheets,_and_Cloud_Storage.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> This notebook provides recipes for loading and saving data from external sources. Local file system Uploading files from your local file system files.upload returns a dictionary of the files which were uploaded. The dictionary is keyed by the file name, the value is the data which was uploaded. End of explanation """ from google.colab import files with open('example.txt', 'w') as f: f.write('some content') files.download('example.txt') """ Explanation: Downloading files to your local file system files.download will invoke a browser download of the file to the user's local computer. End of explanation """ from google.colab import drive drive.mount('/content/gdrive') with open('/content/gdrive/My Drive/foo.txt', 'w') as f: f.write('Hello Google Drive!') !cat /content/gdrive/My\ Drive/foo.txt """ Explanation: Google Drive You can access files in Drive in a number of ways, including: 1. Using the native REST API; 1. Using a wrapper around the API such as PyDrive; or 1. Mounting your Google Drive in the runtime's virtual machine. Example of each are below. Mounting Google Drive locally The example below shows how to mount your Google Drive in your virtual machine using an authorization code, and shows a couple of ways to write & read files there. Once executed, observe the new file (foo.txt) is visible in https://drive.google.com/ Note this only supports reading and writing files; to programmatically change sharing settings etc use one of the other options below. 
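Once mounted, the Drive tree behaves like any other local directory, so the standard library works on it as well — a hypothetical helper (the mount point matches the drive.mount call above):

```python
from pathlib import Path

def list_drive(root='/content/gdrive/My Drive', limit=5):
    # Return the names of the first few entries under the mounted Drive root.
    return sorted(p.name for p in Path(root).iterdir())[:limit]
```

After running the mount cell above, calling list_drive() should show foo.txt among the entries.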
End of explanation """ !pip install -U -q PyDrive from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # 1. Authenticate and create the PyDrive client. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # PyDrive reference: # https://gsuitedevs.github.io/PyDrive/docs/build/html/index.html # 2. Create & upload a file text file. uploaded = drive.CreateFile({'title': 'Sample upload.txt'}) uploaded.SetContentString('Sample upload file content') uploaded.Upload() print('Uploaded file with ID {}'.format(uploaded.get('id'))) # 3. Load a file by ID and print its contents. downloaded = drive.CreateFile({'id': uploaded.get('id')}) print('Downloaded content "{}"'.format(downloaded.GetContentString())) """ Explanation: PyDrive The example below shows 1) authentication, 2) file upload, and 3) file download. More examples are available in the PyDrive documentation End of explanation """ from google.colab import auth auth.authenticate_user() """ Explanation: Drive REST API The first step is to authenticate. End of explanation """ from googleapiclient.discovery import build drive_service = build('drive', 'v3') """ Explanation: Now we can construct a Drive API client. End of explanation """ # Create a local file to upload. with open('/tmp/to_upload.txt', 'w') as f: f.write('my sample file') print('/tmp/to_upload.txt contains:') !cat /tmp/to_upload.txt # Upload the file to Drive. 
See: # # https://developers.google.com/drive/v3/reference/files/create # https://developers.google.com/drive/v3/web/manage-uploads from googleapiclient.http import MediaFileUpload file_metadata = { 'name': 'Sample file', 'mimeType': 'text/plain' } media = MediaFileUpload('/tmp/to_upload.txt', mimetype='text/plain', resumable=True) created = drive_service.files().create(body=file_metadata, media_body=media, fields='id').execute() print('File ID: {}'.format(created.get('id'))) """ Explanation: With the client created, we can use any of the functions in the Google Drive API reference. Examples follow. Creating a new Drive file with data from Python End of explanation """ # Download the file we just uploaded. # # Replace the assignment below with your file ID # to download a different file. # # A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz file_id = 'target_file_id' import io from googleapiclient.http import MediaIoBaseDownload request = drive_service.files().get_media(fileId=file_id) downloaded = io.BytesIO() downloader = MediaIoBaseDownload(downloaded, request) done = False while done is False: # _ is a placeholder for a progress object that we ignore. # (Our file is small, so we skip reporting progress.) _, done = downloader.next_chunk() downloaded.seek(0) print('Downloaded file contents are: {}'.format(downloaded.read())) """ Explanation: After executing the cell above, a new file named 'Sample file' will appear in your drive.google.com file list. Your file ID will differ since you will have created a new, distinct file from the example above. Downloading data from a Drive file into Python End of explanation """ !pip install --upgrade -q gspread """ Explanation: Google Sheets Our examples below will use the existing open-source gspread library for interacting with Sheets. First, we'll install the package using pip. 
End of explanation """ from google.colab import auth auth.authenticate_user() import gspread from oauth2client.client import GoogleCredentials gc = gspread.authorize(GoogleCredentials.get_application_default()) """ Explanation: Next, we'll import the library, authenticate, and create the interface to sheets. End of explanation """ sh = gc.create('A new spreadsheet') """ Explanation: Below is a small set of gspread examples. Additional examples are shown on the gspread Github page. Creating a new sheet with data from Python End of explanation """ # Open our new sheet and add some data. worksheet = gc.open('A new spreadsheet').sheet1 cell_list = worksheet.range('A1:C2') import random for cell in cell_list: cell.value = random.randint(1, 10) worksheet.update_cells(cell_list) """ Explanation: After executing the cell above, a new spreadsheet will be shown in your sheets list on sheets.google.com. End of explanation """ # Open our new sheet and read some data. worksheet = gc.open('A new spreadsheet').sheet1 # get_all_values gives a list of rows. rows = worksheet.get_all_values() print(rows) # Convert to a DataFrame and render. import pandas as pd pd.DataFrame.from_records(rows) """ Explanation: After executing the cell above, the sheet will be populated with random numbers in the assigned range. Downloading data from a sheet into Python as a Pandas DataFrame We'll read back to the data that we inserted above and convert the result into a Pandas DataFrame. (The data you observe will differ since the contents of each cell is a random number.) End of explanation """ from google.colab import auth auth.authenticate_user() """ Explanation: Google Cloud Storage (GCS) We'll start by authenticating to GCS and creating the service client. End of explanation """ # Create a local file to upload. 
with open('/tmp/to_upload.txt', 'w') as f: f.write('my sample file') print('/tmp/to_upload.txt contains:') !cat /tmp/to_upload.txt """ Explanation: Upload a file from Python to a GCS bucket We'll start by creating the sample file to be uploaded. End of explanation """ # First, we need to set our project. Replace the assignment below # with your project ID. project_id = 'Your_project_ID_here' !gcloud config set project {project_id} import uuid # Make a unique bucket to which we'll upload the file. # (GCS buckets are part of a single global namespace.) bucket_name = 'colab-sample-bucket-' + str(uuid.uuid1()) # Full reference: https://cloud.google.com/storage/docs/gsutil/commands/mb !gsutil mb gs://{bucket_name} # Copy the file to our new bucket. # Full reference: https://cloud.google.com/storage/docs/gsutil/commands/cp !gsutil cp /tmp/to_upload.txt gs://{bucket_name}/ # Finally, dump the contents of our newly copied file to make sure everything worked. !gsutil cat gs://{bucket_name}/to_upload.txt """ Explanation: Next, we'll upload the file using the gsutil command, which is included by default on Colab backends. End of explanation """ # The first step is to create a bucket in your cloud project. # # Replace the assignment below with your cloud project ID. # # For details on cloud projects, see: # https://cloud.google.com/resource-manager/docs/creating-managing-projects project_id = 'Your_project_ID_here' # Authenticate to GCS. from google.colab import auth auth.authenticate_user() # Create the service client. from googleapiclient.discovery import build gcs_service = build('storage', 'v1') # Generate a random bucket name to which we'll upload the file. 
import uuid bucket_name = 'colab-sample-bucket' + str(uuid.uuid1()) body = { 'name': bucket_name, # For a full list of locations, see: # https://cloud.google.com/storage/docs/bucket-locations 'location': 'us', } gcs_service.buckets().insert(project=project_id, body=body).execute() print('Done') """ Explanation: Using Python This section demonstrates how to upload files using the native Python API rather than gsutil. This snippet is based on a larger example with additional uses of the API. End of explanation """ from googleapiclient.http import MediaFileUpload media = MediaFileUpload('/tmp/to_upload.txt', mimetype='text/plain', resumable=True) request = gcs_service.objects().insert(bucket=bucket_name, name='to_upload.txt', media_body=media) response = None while response is None: # _ is a placeholder for a progress object that we ignore. # (Our file is small, so we skip reporting progress.) _, response = request.next_chunk() print('Upload complete') """ Explanation: The cell below uploads the file to our newly created bucket. End of explanation """ # Download the file. !gsutil cp gs://{bucket_name}/to_upload.txt /tmp/gsutil_download.txt # Print the result to make sure the transfer worked. !cat /tmp/gsutil_download.txt """ Explanation: Once the upload has finished, the data will appear in the cloud console storage browser for your project: https://console.cloud.google.com/storage/browser?project=YOUR_PROJECT_ID_HERE Downloading a file from GCS to Python Next, we'll download the file we just uploaded in the example above. It's as simple as reversing the order in the gsutil cp command. End of explanation """ # Authenticate to GCS. from google.colab import auth auth.authenticate_user() # Create the service client. 
from googleapiclient.discovery import build gcs_service = build('storage', 'v1') from apiclient.http import MediaIoBaseDownload with open('/tmp/downloaded_from_gcs.txt', 'wb') as f: request = gcs_service.objects().get_media(bucket=bucket_name, object='to_upload.txt') media = MediaIoBaseDownload(f, request) done = False while not done: # _ is a placeholder for a progress object that we ignore. # (Our file is small, so we skip reporting progress.) _, done = media.next_chunk() print('Download complete') # Inspect the file we downloaded to /tmp !cat /tmp/downloaded_from_gcs.txt """ Explanation: Using Python We repeat the download example above using the native Python API. End of explanation """
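Whichever route the bytes take (gsutil or the native API), a quick integrity check after a round trip is cheap insurance — a generic sketch, independent of GCS:

```python
import hashlib

def md5_of(path):
    # Stream the file in chunks so large objects need not fit in memory.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()
```

Comparing md5_of('/tmp/to_upload.txt') with md5_of('/tmp/downloaded_from_gcs.txt') should yield identical digests.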
GoogleCloudPlatform/tf-estimator-tutorials
00_Miscellaneous/tf_train_eval_export/Tutorial - Optimising Learning Rate.ipynb
apache-2.0
import math import os import pandas as pd import numpy as np from datetime import datetime import tensorflow as tf from tensorflow import data print "TensorFlow : {}".format(tf.__version__) SEED = 19831060 """ Explanation: TensorFlow: Optimizing Learning Rate End of explanation """ DATA_DIR='data' # !mkdir $DATA_DIR # !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR # !gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv') EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv') TRAIN_DATA_SIZE = 32561 EVAL_DATA_SIZE = 16278 """ Explanation: Download the Data End of explanation """ HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'gender', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'income_bracket'] HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''], [0], [0], [0], [''], ['']] NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week'] CATEGORICAL_FEATURE_NAMES = ['gender', 'race', 'education', 'marital_status', 'relationship', 'workclass', 'occupation', 'native_country'] FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES TARGET_NAME = 'income_bracket' TARGET_LABELS = [' <=50K', ' >50K'] WEIGHT_COLUMN_NAME = 'fnlwgt' NUM_CLASSES = len(TARGET_LABELS) def get_categorical_features_vocabolary(): data = pd.read_csv(TRAIN_DATA_FILE, names=HEADER) return { column: list(data[column].unique()) for column in data.columns if column in CATEGORICAL_FEATURE_NAMES } feature_vocabolary = get_categorical_features_vocabolary() print(feature_vocabolary) """ Explanation: Dataset Metadata End of explanation """ def create_feature_columns(): feature_columns = [] for column in NUMERIC_FEATURE_NAMES: feature_column = tf.feature_column.numeric_column(column) feature_columns.append(feature_column) 
for column in CATEGORICAL_FEATURE_NAMES: vocabolary = feature_vocabolary[column] embed_size = round(math.sqrt(len(vocabolary)) * 1.5) feature_column = tf.feature_column.embedding_column( tf.feature_column.categorical_column_with_vocabulary_list(column, vocabolary), embed_size) feature_columns.append(feature_column) return feature_columns """ Explanation: Building a TensorFlow Custom Estimator Creating feature columns Creating model_fn Create estimator using the model_fn Define data input_fn Define Train and evaluate experiment Run experiment with parameters 1. Create feature columns End of explanation """ from tensorflow.python.ops import math_ops def find_learning_rate(params): training_step = tf.cast(tf.train.get_global_step(), tf.float32) factor = tf.cast(tf.multiply(1.e-5, training_step*training_step), tf.float32) learning_rate = tf.add(params.learning_rate, factor) return learning_rate def update_learning_rate(params): training_step = tf.cast(tf.train.get_global_step(), tf.int32) base_cycle = tf.floordiv(training_step, params.cycle_length) current_cycle = tf.cast(tf.round(tf.sqrt(tf.cast(base_cycle, tf.float32))) + 1, tf.int32) current_cycle_length = tf.cast(tf.multiply(current_cycle, params.cycle_length), tf.int32) cycle_step = tf.mod(training_step, current_cycle_length) learning_rate = tf.cond( tf.equal(cycle_step, 0), lambda: params.learning_rate, lambda: tf.train.cosine_decay( learning_rate=params.learning_rate, global_step=cycle_step, decay_steps=current_cycle_length, alpha=0.0, ) ) tf.summary.scalar('base_cycle', base_cycle) tf.summary.scalar('current_cycle', current_cycle) tf.summary.scalar('current_cycle_length', current_cycle_length) tf.summary.scalar('cycle_step', cycle_step) tf.summary.scalar('learning_rate', learning_rate) return learning_rate def model_fn(features, labels, mode, params): is_training = True if mode == tf.estimator.ModeKeys.TRAIN else False # model body def _inference(features, mode, params): feature_columns = 
create_feature_columns() input_layer = tf.feature_column.input_layer(features=features, feature_columns=feature_columns) dense_inputs = input_layer for i in range(len(params.hidden_units)): dense = tf.keras.layers.Dense(params.hidden_units[i], activation='relu')(dense_inputs) dense_dropout = tf.keras.layers.Dropout(params.dropout_prob)(dense, training=is_training) dense_inputs = dense_dropout fully_connected = dense_inputs logits = tf.keras.layers.Dense(units=1, name='logits', activation=None)(fully_connected) return logits # model head head = tf.contrib.estimator.binary_classification_head( label_vocabulary=TARGET_LABELS, weight_column=WEIGHT_COLUMN_NAME ) learning_rate = find_learning_rate(params) if params.lr_search else update_learning_rate(params) return head.create_estimator_spec( features=features, mode=mode, logits=_inference(features, mode, params), labels=labels, optimizer=tf.train.AdamOptimizer(learning_rate) ) """ Explanation: 2. Create model_fn Use feature columns to create input_layer Use tf.keras.layers to define the model architecutre and output Use binary_classification_head for create EstimatorSpec End of explanation """ def create_estimator(params, run_config): feature_columns = create_feature_columns() estimator = tf.estimator.Estimator( model_fn, params=params, config=run_config ) return estimator """ Explanation: 3. Create estimator End of explanation """ def make_input_fn(file_pattern, batch_size, num_epochs, mode=tf.estimator.ModeKeys.EVAL): def _input_fn(): dataset = tf.data.experimental.make_csv_dataset( file_pattern=file_pattern, batch_size=batch_size, column_names=HEADER, column_defaults=HEADER_DEFAULTS, label_name=TARGET_NAME, field_delim=',', use_quote_delim=True, header=False, num_epochs=num_epochs, shuffle=(mode==tf.estimator.ModeKeys.TRAIN) ) iterator = dataset.make_one_shot_iterator() features, target = iterator.get_next() return features, target return _input_fn """ Explanation: 4. 
Data Input Function End of explanation """ def train_and_evaluate_experiment(params, run_config): # TrainSpec #################################### train_input_fn = make_input_fn( TRAIN_DATA_FILE, batch_size=params.batch_size, num_epochs=None, mode=tf.estimator.ModeKeys.TRAIN ) train_spec = tf.estimator.TrainSpec( input_fn = train_input_fn, max_steps=params.traning_steps ) ############################################### # EvalSpec #################################### eval_input_fn = make_input_fn( EVAL_DATA_FILE, num_epochs=1, batch_size=params.batch_size, ) eval_spec = tf.estimator.EvalSpec( name=datetime.utcnow().strftime("%H%M%S"), input_fn = eval_input_fn, steps=None, start_delay_secs=0, throttle_secs=params.eval_throttle_secs ) ############################################### tf.logging.set_verbosity(tf.logging.INFO) if tf.gfile.Exists(run_config.model_dir): print("Removing previous artefacts...") tf.gfile.DeleteRecursively(run_config.model_dir) print '' estimator = create_estimator(params, run_config) print '' time_start = datetime.utcnow() print("Experiment started at {}".format(time_start.strftime("%H:%M:%S"))) print(".......................................") # tf.estimator.train_and_evaluate( # estimator=estimator, # train_spec=train_spec, # eval_spec=eval_spec # ) estimator.train(train_input_fn, steps=params.traning_steps) time_end = datetime.utcnow() print(".......................................") print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))) print("") time_elapsed = time_end - time_start print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())) """ Explanation: 5. 
Experiment Definition End of explanation """ MODELS_LOCATION = 'models/census' MODEL_NAME = 'dnn_classifier-01' model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME) BATCH_SIZE = 64 NUM_EPOCHS = 10 steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / float(BATCH_SIZE))) training_steps = int(steps_per_epoch * NUM_EPOCHS) print("Training data size: {}".format(TRAIN_DATA_SIZE)) print("Batch size: {}".format(BATCH_SIZE)) print("Steps per epoch: {}".format(steps_per_epoch)) print("Training epochs: {}".format(NUM_EPOCHS)) print("Training steps: {}".format(training_steps)) params = tf.contrib.training.HParams( batch_size=BATCH_SIZE, traning_steps=training_steps, hidden_units=[64, 32], learning_rate=1.e-3, cycle_length=500, dropout_prob=0.1, eval_throttle_secs=0, lr_search=False ) run_config = tf.estimator.RunConfig( tf_random_seed=SEED, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=100, save_summary_steps=1, keep_checkpoint_max=3, model_dir=model_dir, ) train_and_evaluate_experiment(params, run_config) """ Explanation: 6. Run Experiment with Parameters End of explanation """
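As a sanity check outside the TensorFlow graph, the decay applied inside update_learning_rate can be reproduced in plain NumPy (a simplified, hypothetical re-implementation of tf.train.cosine_decay with alpha=0, ignoring the cycle-length growth):

```python
import numpy as np

def cosine_decay(step, decay_steps, base_lr, alpha=0.0):
    # lr = base_lr * ((1 - alpha) * 0.5 * (1 + cos(pi * step / decay_steps)) + alpha)
    t = float(min(step, decay_steps)) / decay_steps
    cosine = 0.5 * (1.0 + np.cos(np.pi * t))
    return base_lr * ((1.0 - alpha) * cosine + alpha)

print(cosine_decay(0, 500, 1.e-3))    # 0.001 at the start of a cycle
print(cosine_decay(250, 500, 1.e-3))  # half the base rate mid-cycle
```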
statsmaths/stat665
lectures/lec20/notebook20.ipynb
gpl-2.0
%pylab inline import copy import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.datasets import imdb, reuters from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.optimizers import SGD, RMSprop from keras.utils import np_utils from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D, AveragePooling1D from keras.callbacks import EarlyStopping from keras.layers.normalization import BatchNormalization from keras.preprocessing import sequence from keras.layers.embeddings import Embedding from gensim.models import word2vec """ Explanation: Word embeddings Import various modules that we need for this notebook (now using Keras 1.0.0) End of explanation """ (X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=500, maxlen=100, test_split=0.2) X_train = sequence.pad_sequences(X_train, maxlen=100) X_test = sequence.pad_sequences(X_test, maxlen=100) """ Explanation: I. Example using word embedding We read in the IMDB dataset, using the 500 most commonly used terms, and pad every review to a length of 100. End of explanation """ print(X_train[0]) print(y_train[:10]) """ Explanation: Let's look at one sample from X_train and the first 10 elements of y_train. The codes give indices for the words in the vocabulary (unfortunately, we do not have access to the vocabulary for this set). 
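The leading zeros in a short sample come from the padding applied at load time; pad_sequences can be sketched by hand (a simplified stand-in assuming the Keras defaults of 'pre' padding and 'pre' truncation):

```python
def pad(seq, maxlen, value=0):
    seq = list(seq)[-maxlen:]  # truncate from the front ('pre')
    return [value] * (maxlen - len(seq)) + seq  # pad at the front ('pre')

print(pad([5, 25, 100], 5))  # [0, 0, 5, 25, 100]
```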
End of explanation """ model = Sequential() model.add(Embedding(500, 32, input_length=100)) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(256)) model.add(Dropout(0.25)) model.add(Activation('relu')) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1, validation_data=(X_test, y_test)) """ Explanation: We now construct a model, the layer of which is a vector embedding. We then have a dense layer and then the activation layer. Notice that the output of the Embedding needs to be Flattened. End of explanation """ print(model.layers[0].get_weights()[0].shape) # Embedding print(model.layers[3].get_weights()[0].shape) # Dense(256) print(model.layers[6].get_weights()[0].shape) # Dense(1) """ Explanation: The accuracy is not terribly, and certainly better than random guessing, but the model is clearly overfitting. To test your understanding, would you have been able to guess the sizes of the weights in these layers? Where does the 3200 comes from the first Dense layer? End of explanation """ model = Sequential() # embedding model.add(Embedding(500, 32, input_length=100)) model.add(Dropout(0.25)) # convolution layers model.add(Convolution1D(nb_filter=32, filter_length=4, border_mode='valid', activation='relu')) model.add(MaxPooling1D(pool_length=2)) # dense layers model.add(Flatten()) model.add(Dense(256)) model.add(Dropout(0.25)) model.add(Activation('relu')) # output layer model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, y_train, batch_size=32, nb_epoch=15, verbose=1, validation_data=(X_test, y_test)) """ Explanation: II. Word embedding with 1D Convolutions We can use 1-dimensional convolutions to learn local associations between words, rather than having to rely on global associations. 
End of explanation """ (X_train, y_train), (X_test, y_test) = reuters.load_data(nb_words=500, maxlen=100, test_split=0.2) X_train = sequence.pad_sequences(X_train, maxlen=100) X_test = sequence.pad_sequences(X_test, maxlen=100) Y_train = np_utils.to_categorical(y_train, 46) Y_test = np_utils.to_categorical(y_test, 46) model = Sequential() # embedding model.add(Embedding(500, 32, input_length=100)) model.add(Dropout(0.25)) # convolution layers model.add(Convolution1D(nb_filter=32, filter_length=4, border_mode='valid', activation='relu')) model.add(MaxPooling1D(pool_length=2)) # dense layers model.add(Flatten()) model.add(Dense(256)) model.add(Dropout(0.25)) model.add(Activation('relu')) # output layer model.add(Dense(46)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=32, nb_epoch=15, verbose=1, validation_data=(X_test, Y_test)) """ Explanation: The performance is significantly improved, and could be much better if we further tweaked the parameters and constructed a deeper model. III. Reuters classification Let's use the same approach to do document classification on the Reuters corpus. 
End of explanation """ loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin" model = word2vec.Word2Vec.load_word2vec_format(loc, binary=True) jobs = ["professor", "teacher", "actor", "clergy", "musician", "philosopher", "writer", "singer", "dancers", "model", "anesthesiologist", "audiologist", "chiropractor", "optometrist", "pharmacist", "psychologist", "physician", "architect", "firefighter", "judges", "lawyer", "biologist", "botanist", "ecologist", "geneticist", "zoologist", "chemist", "programmer", "designer"] print(model[jobs[0]].shape) print(model[jobs[0]][:25]) embedding = np.array([model[x] for x in jobs]) from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(embedding) embedding_pca = np.transpose(pca.transform(embedding)) embedding_pca.shape plt.figure(figsize=(16, 10)) plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0) for index,(x,y) in enumerate(np.transpose(embedding_pca)): plt.text(x,y,jobs[index]) """ Explanation: The results are less impressive than they may at first seem, as the majority of the articles are in one of three categories. IV. word2vec Let's load in the pre-learned word2vec embeddings. 
End of explanation """ country = ["United_States", "Afghanistan", "Albania", "Algeria", "Andorra", "Angola", "Argentina", "Armenia", "Australia", "Austria", "Azerbaijan", "Bahrain", "Bangladesh", "Barbados", "Belarus", "Belgium", "Belize", "Benin", "Bhutan", "Bolivia", "Botswana", "Brazil", "Brunei", "Bulgaria", "Burundi", "Cambodia", "Cameroon", "Canada", "Chad", "Chile", "Colombia", "Comoros", "Croatia", "Cuba", "Cyprus", "Denmark", "Djibouti", "Dominica", "Ecuador", "Egypt", "Eritrea", "Estonia", "Ethiopia", "Fiji", "Finland", "France", "Gabon", "Georgia", "Germany", "Ghana", "Greece", "Grenada", "Guatemala", "Guinea", "Guyana", "Haiti", "Honduras", "Hungary", "Iceland", "India", "Indonesia", "Iran", "Iraq", "Ireland", "Israel", "Italy", "Jamaica", "Japan", "Jordan", "Kazakhstan", "Kenya", "Kiribati", "Kuwait", "Kyrgyzstan", "Laos", "Latvia", "Lebanon", "Lesotho", "Liberia", "Libya", "Liechtenstein", "Lithuania", "Luxembourg", "Macedonia", "Madagascar", "Malawi", "Malaysia", "Maldives", "Mali", "Malta", "Mauritania", "Mauritius", "Mexico", "Micronesia", "Moldova", "Monaco", "Mongolia", "Montenegro", "Morocco", "Mozambique", "Namibia", "Nauru", "Nepal", "Netherlands", "Nicaragua", "Niger", "Nigeria", "Norway", "Oman", "Pakistan", "Palau", "Panama", "Paraguay", "Peru", "Philippines", "Poland", "Portugal", "Qatar", "Romania", "Russia", "Rwanda", "Samoa", "Senegal", "Serbia", "Seychelles", "Singapore", "Slovakia", "Slovenia", "Somalia", "Spain", "Sudan", "Suriname", "Swaziland", "Sweden", "Switzerland", "Syria", "Tajikistan", "Tanzania", "Thailand", "Togo", "Tonga", "Tunisia", "Turkey", "Turkmenistan", "Tuvalu", "Uganda", "Ukraine", "Uruguay", "Uzbekistan", "Vanuatu", "Venezuela", "Vietnam", "Yemen", "Zambia", "Zimbabwe", "Abkhazia", "Somaliland", "Mayotte", "Niue", "Tokelau", "Guernsey", "Jersey", "Anguilla", "Bermuda", "Gibraltar", "Montserrat", "Guam", "Macau", "Greenland", "Guadeloupe", "Martinique", "Reunion", "Aland", "Aruba", "Svalbard", "Ascension"] 
embedding = np.array([model[x] for x in country]) pca = PCA(n_components=2) pca.fit(embedding) embedding_pca = np.transpose(pca.transform(embedding)) embedding_pca.shape plt.figure(figsize=(16, 10)) plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0) for index,(x,y) in enumerate(np.transpose(embedding_pca)): plt.text(x,y,country[index]) """ Explanation: Now, let's repeat this with countries. End of explanation """ city_pairs = ["Afghanistan", "Belarus", "Belgium", "Brazil", "Costa_Rica", "Canada", "Netherlands", "United_Kingdom", "United_States", "Iran", "Kabul", "Minsk", "Brussels", "Brasilia", "San_Jose", "Ottawa", "Amsterdam", "London", "Washington", "Tehran"] embedding = np.array([model[x] for x in city_pairs]) pca = PCA(n_components=2) pca.fit(embedding) embedding_pca = np.transpose(pca.transform(embedding)) embedding_pca.shape plt.figure(figsize=(16, 10)) plt.scatter(embedding_pca[0], embedding_pca[1], alpha=0) for index,(x,y) in enumerate(np.transpose(embedding_pca)): plt.text(x,y,city_pairs[index]) """ Explanation: And, just because I think this is fun, let's run this on a smaller set of countries and their capitals. End of explanation """ these = model.most_similar('Afghanistan', topn=25) for th in these: print("%02.04f - %s" % th[::-1]) """ Explanation: Look how the line between country and capital has roughly the same slope and length for all of the pairs. It is by no means fast (the algorithm is horribly implemented in gensim) but we can also do the reverse, and find the closest words in the embedding space to a given term: End of explanation """
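At heart, `most_similar` is just cosine similarity against every vector in the vocabulary. A self-contained sketch of that mechanism on toy 3-dimensional vectors (the words and vectors below are made up for illustration; the real model uses 300 dimensions):

```python
import numpy as np

# Toy stand-ins for the 300-dimensional GoogleNews vectors.
vocab = {
    "paris":  np.array([0.9, 0.1, 0.0]),
    "france": np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.0, 0.1, 0.9]),
}

def most_similar(word, vocab, topn=2):
    """Rank the other words by cosine similarity to `word`."""
    v = vocab[word]
    scores = {
        w: float(np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u)))
        for w, u in vocab.items() if w != word
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]

print(most_similar("paris", vocab))  # "france" ranks first
```

A vectorized version (one matrix-vector product over the whole vocabulary) is what makes the real implementation tractable for millions of words.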
sdpython/ensae_teaching_cs
_doc/notebooks/1a/recherche_dichotomique.ipynb
mit
from jyquickhelper import add_notebook_menu add_notebook_menu() """ Explanation: 1A.algo - Recherche dichotomique (binary search) Binary search, illustrated. Excerpt from Recherche dichotomique, récursive, itérative et le logarithme. End of explanation """ from pyquickhelper.helpgen import NbImage NbImage("images/dicho.png") """ Explanation: Whenever an algorithm is described, its cost is always mentioned, often as a formula of the form: $$O(n^u(\log_2 n)^v)$$ where $u$ and $v$ are integers. $v$ is usually either 0 or 1. But where does this logarithm come from? The first algorithm that comes to mind, whose cost corresponds to the case $u=0$ and $v=1$, is binary search. It searches for an element in a sorted list. The logarithm comes from the fact that the search space is halved at each iteration, so the element is found very quickly. In most algorithms the logarithm arises the same way: the size of the problem is divided by some integer at each iteration (here, 2). Binary search is quite simple: start from a sorted list T and look for the element v (assumed to be in the list). Proceed as follows: Compare v to the middle element of the list. If it is equal to v, we are done. Otherwise, if it is smaller, search in the first half of the list and go back to step 1 with the reduced list. If it is larger, do the same with the second half of the list. This is what the following figure illustrates, where a denotes the beginning of the list, b its end and m its middle. At each iteration these three positions are moved.
End of explanation """ def recherche_dichotomique(element, liste_triee): a = 0 b = len(liste_triee)-1 m = (a+b)//2 while a < b : if liste_triee[m] == element: return m elif liste_triee[m] > element: b = m-1 else : a = m+1 m = (a+b)//2 return a li = [0, 4, 5, 19, 100, 200, 450, 999] recherche_dichotomique(5, li) """ Explanation: Iterative version End of explanation """ def recherche_dichotomique_recursive( element, liste_triee, a = 0, b = -1 ): if a == b : return a if b == -1 : b = len(liste_triee)-1 m = (a+b)//2 if liste_triee[m] == element: return m elif liste_triee[m] > element: return recherche_dichotomique_recursive(element, liste_triee, a, m-1) else : return recherche_dichotomique_recursive(element, liste_triee, m+1, b) recherche_dichotomique_recursive(5, li) """ Explanation: Recursive version End of explanation """ def recherche_dichotomique_recursive2(element, liste_triee): if len(liste_triee)==1 : return 0 m = len(liste_triee)//2 if liste_triee[m] == element: return m elif liste_triee[m] > element: return recherche_dichotomique_recursive2(element, liste_triee[:m]) else : return m + recherche_dichotomique_recursive2(element, liste_triee[m:]) recherche_dichotomique_recursive2(5, li) """ Explanation: Recursive version 2 Adding the parameters a and b may look a bit heavy. Here is a third implementation in Python (still recursive): End of explanation """
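As an aside (not part of the original notebook), Python's standard library already ships a binary search in the `bisect` module, which avoids writing the loop by hand:

```python
import bisect

li = [0, 4, 5, 19, 100, 200, 450, 999]

# bisect_left returns the leftmost insertion point; for an element that is
# present in the sorted list, this is exactly its index.
i = bisect.bisect_left(li, 5)
print(i)  # 2
```

Like the hand-written versions above, `bisect_left` runs in $O(\log_2 n)$ comparisons on a sorted list.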
nikita-mayorov/math_modelling
pendulum_model.ipynb
gpl-3.0
# Import the required modules. from ipywidgets import * from numpy import sin, cos, sqrt, pi, radians, arange import matplotlib.pyplot as plt from matplotlib import rc font = {'family': 'Verdana', 'weight': 'normal'} rc('font', **font) # Declare the functions. def ar(): return v0*sqrt(m1)*sqrt(l)*sin(sqrt(g)*sqrt(m1+m2)*t/(sqrt(m1)*sqrt(l)))/(sqrt(g)*sqrt(m1+m2))+a0*cos(sqrt(g)*sqrt(m1+m2)*t/(sqrt(m1)*sqrt(l))) def xr(): return -(1.0/(m1+m2))*(sin(sqrt(g)*sqrt(m1+m2)*t/(sqrt(m1)*sqrt(l)))*l**(3.0/2.0)*v0*sqrt(m1)*m2/(sqrt(g)*sqrt(m1+m2))+cos(sqrt(g)*sqrt(m1+m2)*t/(sqrt(m1)*sqrt(l)))*a0*l*m2-(l*v0*m2*t*m1/(m1+m2))-(l*v0*(m2**2.0)*t/(m1+m2))-((a0*l*m2+m1*x0+m2*x0)*m1/(m1+m2))-((a0*l*m2+m1*x0+m2*x0)*m2/(m1+m2))) """ Explanation: Mathematical modelling of a pendulum Problem statement Build a mathematical model that describes the motion of an overhead crane along a rigidly fixed overhead rail. The model must allow the positions of the pivot (trolley) and of the load to be determined at an arbitrary moment of time. Input parameters: * $l$ – length of the suspension cable * $m_1$ – mass of the pivot * $m_2$ – mass of the load Conceptual statement The cable is treated as a massless, inextensible rod rigidly attached at the load and at the pivot. Friction at the pivot and drag from the surrounding medium are neglected. The model is built within classical analytical mechanics. Mathematical statement The task is to find the unknown functions $x_1(t)$, $x_2(t)$, $y_2(t)$ that describe the oscillations of a mathematical pendulum with a moving pivot, where: * $\alpha(t)$ – deflection angle from the equilibrium position * $x_2(t) = x_1(t)+l\sin(\alpha(t))$ * $y_2(t) = -l\cos(\alpha(t))$ To obtain the equations of motion of the swinging system we use the Lagrangian formalism. We form the Lagrangian of the system, $Lg = T-U$, where $T$ is the kinetic energy of the system and $U$ its potential energy.
\begin{equation} \begin{matrix} T = T_1+T_2\\ T_1 = \frac{m_1V_1^2}{2}=\frac{m_1\dot{x}^2_1}{2}\\ T_2 = \frac{m_2V_2^2}{2}=\frac{m_2(\dot{x}^2_2+\dot{y}^2_2)}{2}\\ U = U_1+U_2\\ U_1 = 0\\ U_2 = m_2gy_2\\ \dot{x}_2=\dot{x}_1+l\dot{\alpha}\cos(\alpha)\\ \dot{y}_2=l\dot{\alpha}\sin(\alpha)\\ Lg = \frac{m_1\dot{x}_1^2}{2}+\frac{m_2}{2}(\dot{x}_1^2+l^2\dot{\alpha}^2+2l\dot{x}_1\dot{\alpha}\cos(\alpha))+m_2gl\cos(\alpha) \end{matrix} \end{equation} We write down the equations of motion: \begin{equation} \begin{matrix} \frac{d}{dt}\left(\frac{\partial Lg}{\partial\dot{\alpha}}\right)-\frac{\partial Lg}{\partial\alpha}=0\\ \frac{d}{dt}\left(\frac{\partial Lg}{\partial\dot{x}_1}\right)-\frac{\partial Lg}{\partial x_1}=0\\ \frac{\partial Lg}{\partial\dot{\alpha}}=m_2l^2\dot{\alpha}+m_2l\dot{x}_1\cos(\alpha)\\ \frac{d}{dt}\left(\frac{\partial Lg}{\partial\dot{\alpha}}\right)=m_2l^2\ddot{\alpha}+m_2l\ddot{x}_1\cos(\alpha)-m_2l\dot{x}_1\dot{\alpha}\sin(\alpha)\\ \frac{\partial Lg}{\partial\alpha}=-m_2l\dot{x}_1\dot{\alpha}\sin(\alpha)-m_2gl\sin(\alpha)\\ m_2l^2\ddot{\alpha}+m_2l\ddot{x}_1\cos(\alpha)+m_2gl\sin(\alpha)=0&(1)\\ \frac{\partial Lg}{\partial\dot{x}_1}=m_1\dot{x}_1+m_2\dot{x}_1+m_2l\dot{\alpha}\cos(\alpha)\\ \frac{d}{dt}\left(\frac{\partial Lg}{\partial\dot{x}_1}\right)=(m_1+m_2)\ddot{x}_1+m_2l\ddot{\alpha}\cos(\alpha)-m_2l\dot{\alpha}^2\sin(\alpha)\\ (m_1+m_2)\ddot{x}_1+m_2l\ddot{\alpha}\cos(\alpha)-m_2l\dot{\alpha}^2\sin(\alpha)=0&(2)\\ \end{matrix} \end{equation} After the substitution, the problem is a system of two second-order differential equations, which are nonlinear. This system has no exact elementary solution. Let us define the initial conditions: \begin{equation} \begin{matrix} & \left\{ \begin{matrix} x_1(0)=x_0\\ \alpha(0)=\alpha_0\\ \dot{x}_1(0)=v_0\\ \dot{\alpha}(0)=\omega_0 \end{matrix} \right. \end{matrix} (3) \end{equation} The statement (1), (2) and (3) describes the behaviour of the constructed system at any time $t>0$.
Let us solve the resulting (linearized) system: \begin{equation} \begin{matrix} & \left\{ \begin{matrix} \frac{d^2}{dt^2}x(t)+l\frac{d^2}{dt^2}\alpha(t)+g\alpha(t)=0\\ \frac{d^2}{dt^2}x(t)(m_1+m_2)+lm_2\frac{d^2}{dt^2}\alpha(t)=0\\ x_1(0)=x_0\\ \alpha(0)=\alpha_0\\ \dot{x}_1(0)=v_0\\ \dot{\alpha}(0)=\omega_0 \end{matrix} \right. \end{matrix} \end{equation} We obtain: \begin{equation} \begin{matrix} &\alpha(t)=\frac{v_0\sqrt{m_1}\sqrt{l}\sin\left(\frac{\sqrt{g}\sqrt{m_1+m_2}t}{\sqrt{m_1}\sqrt{l}}\right)}{\sqrt{g}\sqrt{m_1+m_2}}+a_0\cos\left(\frac{\sqrt{g}\sqrt{m_1+m_2}t}{\sqrt{m_1}\sqrt{l}}\right)\\ &x(t)=-\frac{1}{m_1+m_2}\left(\frac{\sin\left(\frac{\sqrt{g}\sqrt{m_1+m_2}t}{\sqrt{m_1}\sqrt{l}}\right)l^\frac{3}{2}v_0\sqrt{m_1}m_2}{\sqrt{g}\sqrt{m_1+m_2}}+\cos\left(\frac{\sqrt{g}\sqrt{m_1+m_2}t}{\sqrt{m_1}\sqrt{l}}\right)la_0m_2-\frac{lv_0m_2tm_1}{m_1+m_2}-\frac{lv_0m^2_2t}{m_1+m_2}-\frac{(a_0lm_2+m_1x_0+m_2x_0)m_1}{m_1+m_2}-\frac{(a_0lm_2+m_1x_0+m_2x_0)m_2}{m_1+m_2}\right)\\ \end{matrix} \end{equation} Having found the solution, we declare the main functions a(t) and x(t): End of explanation """ def x1(): return xr() def x2(): return x1()+l*sin(ar()) def y2(): return -l*cos(ar()) """ Explanation: And also the functions that give the positions of the pivot and of the load at an arbitrary moment of time: $x_1(t) = x(t)$ $x_2(t) = x_1(t)+l\sin(\alpha(t))$ $y_2(t) = -l\cos(\alpha(t))$ End of explanation """ g = 9.8 # Acceleration due to gravity. m1 = 3.0 # Mass of the pivot. m2 = 2.0 # Mass of the load. l = 6.0 # Length of the suspension cable. a0 = pi/4.0 # Initial deflection angle from the equilibrium position. v0 = 2.0 # Initial velocity. x0 = 0.0 # Initial coordinate along the Ox axis. global_length, delta = 15.0, 0.01 # global_length - total simulation time, delta - time step. t = arange(x0, global_length, delta) # t - time interval from 'x0' to 'global_length' with step 'delta'.
""" Explanation: Define the initial conditions: End of explanation """ figure1 = plt.figure() plot1 = figure1.add_subplot(3, 2, 1) plot2 = figure1.add_subplot(3, 2, 2) plot3 = figure1.add_subplot(3, 1, 2) for ax in figure1.axes: ax.grid(True) plt.subplots_adjust(top=2.0, right=2.0, wspace=0.10, hspace=0.25) plot1.plot(t, xr(), 'g') plot2.plot(t, ar(), 'g') plot3.plot(x1(), [0.01 for i in range(len(t))], 'b') plot3.plot(x2(), y2(), 'r') plot1.set_title(u'xr(t)') plot2.set_title(u'ar(t)') plot3.set_title(u'Pendulum trajectory') plt.show() """ Explanation: Plot the graphs: End of explanation """ x = arange(x0, global_length, delta) a = arange(x0, global_length, delta) y = arange(x0, global_length, delta) b = arange(x0, global_length, delta) x[0], a[0], y[0], b[0] = x0, a0, v0, sqrt(g/l) for i in range(0, len(t)-1): x[i+1] = x[i] + delta * y[i] a[i+1] = a[i] + delta * b[i] y[i+1] = y[i] + delta * (m2 / m1) * g * a[i] b[i+1] = b[i] - delta * ((m1 + m2) / m1) * (g/l) * a[i] def aproxy_x1(): return x def aproxy_x2(): return x+l*sin(a) def aproxy_y2(): return -l*cos(a) """ Explanation: Approximating the model with Euler's method Reduction to a system of first-order equations \begin{equation} \left\{ \begin{matrix} l\ddot{\alpha}+\ddot{x}+g\alpha=0\\ (m_1+m_2)\ddot{x}+m_2l\ddot{\alpha}=0\\ \end{matrix} \right. \end{equation} The resulting system contains second-order equations.
Thus, to build an approximate solution with Euler's method, these equations must be reduced to first order by introducing new variables: \begin{equation} \begin{matrix} y=\dot{x}\\ \dot{y}=\ddot{x}\\ \beta=\dot{\alpha}\\ \dot{\beta}=\ddot{\alpha}\\ \end{matrix} \Rightarrow \begin{matrix} l\dot{\beta}+\dot{y}+g\alpha=0\\ (m_1+m_2)\dot{y}+m_2l\dot{\beta}=0\\ \end{matrix} \end{equation} \begin{equation} \begin{matrix} \dot{x}=y&(4)\\ \dot{\alpha}=\beta&(5)\\ (m_1+m_2)\dot{y}+m_2l\dot{\beta}=0\\ \left(\frac{m_1+m_2}{m_2l}\right)\dot{y}+\left(\frac{m_2l\dot{\beta}}{m_2l}\right)=0\\ \dot{\beta}=-\frac{m_1+m_2}{m_2l}\dot{y}&(6)\\ l\dot{\beta}+\dot{y}+g\alpha=0\\ l\left(-\frac{m_1+m_2}{m_2l}\dot{y}\right)+\dot{y}+g\alpha=0\\ -\frac{m_1}{m_2}\dot{y}=-g\alpha\\ \dot{y}=\frac{m_2}{m_1}g\alpha&(7)\\ \end{matrix} \end{equation} We combine equations $(4)$, $(5)$, $(6)$, $(7)$ into a system and define the initial conditions: \begin{equation} \left\{ \begin{matrix} \dot{x}=y\\ \dot{\alpha}=\beta\\ \dot{y}=\frac{m_2}{m_1}g\alpha\\ \dot{\beta}=-\frac{m_1+m_2}{m_1}\frac{g}{l}\alpha\\ \end{matrix} \right. \begin{matrix} x(0)=x_0\\ \alpha(0)=\alpha_0\\ y(0)=v_0\\ \beta(0)=u_0\\ \end{matrix} \end{equation} Building the iterative process of Euler's method: The task is to find the solution of $\dot{y}(t)=f(t,y(t))$ with $y(t_0)=y_0$ on a given interval. To do this, one builds a loop in which every iteration depends on the result of the previous one: $y_{i+1}=y_i+\Delta t f(t_i, y_i)$, where $\Delta t$ is the step of the iteration. \begin{equation} \left\{ \begin{matrix} x_{i+1}=x_i+\Delta ty_i\\ \alpha_{i+1}=\alpha_i+\Delta t\frac{m_2}{m_1}g\alpha_i\quad\text{(for } y_{i+1}\text{)},\ \alpha_{i+1}=\alpha_i+\Delta t\beta_i\\ y_{i+1}=y_i+\Delta t\frac{m_2}{m_1}g\alpha_i\\ \beta_{i+1}=\beta_i-\Delta t\frac{m_1+m_2}{m_1}\frac{g}{l}\alpha_i\\ \end{matrix} \right.
\end{equation} End of explanation """ figure2 = plt.figure() plot4 = figure2.add_subplot(3, 2, 1) plot5 = figure2.add_subplot(3, 2, 2) plot6 = figure2.add_subplot(3, 1, 2) for ax in figure2.axes: ax.grid(True) plt.subplots_adjust(top=2.0, right=2.0, wspace=0.10, hspace=0.25) plot4.plot(t, xr(), 'r') plot4.plot(t, x, 'b') plot5.plot(t, ar(), 'r') plot5.plot(t, a, 'b') plot6.plot(x1(), [0.01 for i in range(len(t))], 'r') plot6.plot(aproxy_x1(), [0.01 for i in range(len(t))], 'b') plot6.plot(x2(), y2(), 'r', label=u'Exact solution') plot6.plot(aproxy_x2(), aproxy_y2(), 'b', label=u"Euler's method") plot4.set_title(u'xr(t)') plot5.set_title(u'ar(t)') plot6.set_title(u'Comparison of the exact and approximate pendulum trajectories') plot6.legend(loc='upper right') plt.show() """ Explanation: Plot the graphs: End of explanation """
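The forward-Euler update used above, $y_{i+1}=y_i+\Delta t\,f(t_i,y_i)$, can be sanity-checked on a scalar ODE with a known solution (a self-contained sketch, not part of the original notebook):

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate dy/dt = f(t, y) with n forward-Euler steps: y_{i+1} = y_i + dt*f(t_i, y_i)."""
    dt = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += dt * f(t, y)
        t += dt
    return y

# dy/dt = -y, y(0) = 1 has the exact solution y(t) = exp(-t).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(abs(approx - math.exp(-1)))  # small O(dt) discretization error
```

Halving the step roughly halves the error, which is the first-order convergence expected from Euler's method.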
Hvass-Labs/TensorFlow-Tutorials
17_Estimator_API.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np """ Explanation: TensorFlow Tutorial #17 Estimator API by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube WARNING! This tutorial does not work with TensorFlow v.2 and it would take too much effort to update this tutorial to the new API. Introduction High-level API's are extremely important in all software development because they provide simple abstractions for doing very complicated tasks. This makes it easier to write and understand your source-code, and it lowers the risk of errors. In Tutorial #03 we saw how to use various builder API's for creating Neural Networks in TensorFlow. However, there was a lot of additional code required for training the models and using them on new data. The Estimator is another high-level API that implements most of this, although it can be debated how simple it really is. Using the Estimator API consists of several steps: Define functions for inputting data to the Estimator. Either use an existing Estimator (e.g. a Deep Neural Network), which is also called a pre-made or Canned Estimator. Or create your own Estimator, in which case you also need to define the optimizer, performance metrics, etc. Train the Estimator using the training-set defined in step 1. Evaluate the performance of the Estimator on the test-set defined in step 1. Use the trained Estimator to make predictions on other data. Imports End of explanation """ tf.__version__ """ Explanation: This was developed using Python 3.6 (Anaconda) and TensorFlow version: End of explanation """ from mnist import MNIST data = MNIST(data_dir="data/MNIST/") """ Explanation: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given dir. 
End of explanation """ print("Size of:") print("- Training-set:\t\t{}".format(data.num_train)) print("- Validation-set:\t{}".format(data.num_val)) print("- Test-set:\t\t{}".format(data.num_test)) """ Explanation: The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. End of explanation """ # The number of pixels in each dimension of an image. img_size = data.img_size # The images are stored in one-dimensional arrays of this length. img_size_flat = data.img_size_flat # Tuple with height and width of images used to reshape arrays. img_shape = data.img_shape # Number of classes, one class for each of 10 digits. num_classes = data.num_classes # Number of colour channels for the images: 1 channel for gray-scale. num_channels = data.num_channels """ Explanation: Copy some of the data-dimensions for convenience. End of explanation """ def plot_images(images, cls_true, cls_pred=None): assert len(images) == len(cls_true) == 9 # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) fig.subplots_adjust(hspace=0.3, wspace=0.3) for i, ax in enumerate(axes.flat): # Plot image. ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() """ Explanation: Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. End of explanation """ # Get the first images from the test-set. 
images = data.x_test[0:9] # Get the true classes for those images. cls_true = data.y_test_cls[0:9] # Plot the images and labels using our helper-function above. plot_images(images=images, cls_true=cls_true) """ Explanation: Plot a few images to see if data is correct End of explanation """ train_input_fn = tf.estimator.inputs.numpy_input_fn( x={"x": np.array(data.x_train)}, y=np.array(data.y_train_cls), num_epochs=None, shuffle=True) """ Explanation: Input Functions for the Estimator Rather than providing raw data directly to the Estimator, we must provide functions that return the data. This allows for more flexibility in data-sources and how the data is randomly shuffled and iterated. Note that we will create an Estimator using the DNNClassifier which assumes the class-numbers are integers so we use data.y_train_cls instead of data.y_train which are one-hot encoded arrays. The function also has parameters for batch_size, queue_capacity and num_threads for finer control of the data reading. In our case we take the data directly from a numpy array in memory, so it is not needed. End of explanation """ train_input_fn """ Explanation: This actually returns a function: End of explanation """ train_input_fn() """ Explanation: Calling this function returns a tuple with TensorFlow ops for returning the input and output data: End of explanation """ test_input_fn = tf.estimator.inputs.numpy_input_fn( x={"x": np.array(data.x_test)}, y=np.array(data.y_test_cls), num_epochs=1, shuffle=False) """ Explanation: Similarly we need to create a function for reading the data for the test-set. Note that we only want to process these images once so num_epochs=1 and we do not want the images shuffled so shuffle=False. End of explanation """ some_images = data.x_test[0:9] predict_input_fn = tf.estimator.inputs.numpy_input_fn( x={"x": some_images}, num_epochs=1, shuffle=False) """ Explanation: An input-function is also needed for predicting the class of new data. 
As an example we just use a few images from the test-set. End of explanation """ some_images_cls = data.y_test_cls[0:9] """ Explanation: The class-numbers are actually not used in the input-function as it is not needed for prediction. However, the true class-number is needed when we plot the images further below. End of explanation """ feature_x = tf.feature_column.numeric_column("x", shape=img_shape) """ Explanation: Pre-Made / Canned Estimator When using a pre-made Estimator, we need to specify the input features for the data. In this case we want to input images from our data-set which are numeric arrays of the given shape. End of explanation """ feature_columns = [feature_x] """ Explanation: You can have several input features which would then be combined in a list: End of explanation """ num_hidden_units = [512, 256, 128] """ Explanation: In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively. End of explanation """ model = tf.estimator.DNNClassifier(feature_columns=feature_columns, hidden_units=num_hidden_units, activation_fn=tf.nn.relu, n_classes=num_classes, model_dir="./checkpoints_tutorial17-1/") """ Explanation: The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). Here we just specify the number of classes and the directory where the checkpoints will be saved. End of explanation """ model.train(input_fn=train_input_fn, steps=2000) """ Explanation: Training We can now train the model for a given number of iterations. This automatically loads and saves checkpoints so we can continue the training later. Note that the text INFO:tensorflow: is printed on every line and makes it harder to quickly read the actual progress. It should have been printed on a single line instead. 
End of explanation """ result = model.evaluate(input_fn=test_input_fn) result print("Classification accuracy: {0:.2%}".format(result["accuracy"])) """ Explanation: Evaluation Once the model has been trained, we can evaluate its performance on the test-set. End of explanation """ predictions = model.predict(input_fn=predict_input_fn) cls = [p['classes'] for p in predictions] cls_pred = np.array(cls, dtype='int').squeeze() cls_pred plot_images(images=some_images, cls_true=some_images_cls, cls_pred=cls_pred) """ Explanation: Predictions The trained model can also be used to make predictions on new data. Note that the TensorFlow graph is recreated and the checkpoint is reloaded every time we make predictions on new data. If the model is very large then this could add a significant overhead. It is unclear why the Estimator is designed this way, possibly because it will always use the latest checkpoint and it can also be distributed easily for use on multiple computers. End of explanation """ def model_fn(features, labels, mode, params): # Args: # # features: This is the x-arg from the input_fn. # labels: This is the y-arg from the input_fn, # see e.g. train_input_fn for these two. # mode: Either TRAIN, EVAL, or PREDICT # params: User-defined hyper-parameters, e.g. learning-rate. # Reference to the tensor named "x" in the input-function. x = features["x"] # The convolutional layers expect 4-rank tensors # but x is a 2-rank tensor, so reshape it. net = tf.reshape(x, [-1, img_size, img_size, num_channels]) # First convolutional layer. net = tf.layers.conv2d(inputs=net, name='layer_conv1', filters=16, kernel_size=5, padding='same', activation=tf.nn.relu) net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2) # Second convolutional layer. net = tf.layers.conv2d(inputs=net, name='layer_conv2', filters=36, kernel_size=5, padding='same', activation=tf.nn.relu) net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2) # Flatten to a 2-rank tensor. 
net = tf.contrib.layers.flatten(net) # Eventually this should be replaced with: # net = tf.layers.flatten(net) # First fully-connected / dense layer. # This uses the ReLU activation function. net = tf.layers.dense(inputs=net, name='layer_fc1', units=128, activation=tf.nn.relu) # Second fully-connected / dense layer. # This is the last layer so it does not use an activation function. net = tf.layers.dense(inputs=net, name='layer_fc2', units=10) # Logits output of the neural network. logits = net # Softmax output of the neural network. y_pred = tf.nn.softmax(logits=logits) # Classification output of the neural network. y_pred_cls = tf.argmax(y_pred, axis=1) if mode == tf.estimator.ModeKeys.PREDICT: # If the estimator is supposed to be in prediction-mode # then use the predicted class-number that is output by # the neural network. Optimization etc. is not needed. spec = tf.estimator.EstimatorSpec(mode=mode, predictions=y_pred_cls) else: # Otherwise the estimator is supposed to be in either # training or evaluation-mode. Note that the loss-function # is also required in Evaluation mode. # Define the loss-function to be optimized, by first # calculating the cross-entropy between the output of # the neural network and the true labels for the input data. # This gives the cross-entropy for each image in the batch. cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits) # Reduce the cross-entropy batch-tensor to a single number # which can be used in optimization of the neural network. loss = tf.reduce_mean(cross_entropy) # Define the optimizer for improving the neural network. optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"]) # Get the TensorFlow op for doing a single optimization step. train_op = optimizer.minimize( loss=loss, global_step=tf.train.get_global_step()) # Define the evaluation metrics, # in this case the classification accuracy. 
metrics = \ { "accuracy": tf.metrics.accuracy(labels, y_pred_cls) } # Wrap all of this in an EstimatorSpec. spec = tf.estimator.EstimatorSpec( mode=mode, loss=loss, train_op=train_op, eval_metric_ops=metrics) return spec """ Explanation: New Estimator If you cannot use one of the built-in Estimators, then you can create an arbitrary TensorFlow model yourself. To do this, you first need to create a function which defines the following: The TensorFlow model, e.g. a Convolutional Neural Network. The output of the model. The loss-function used to improve the model during optimization. The optimization method. Performance metrics. The Estimator can be run in three modes: Training, Evaluation, or Prediction. The code is mostly the same, but in Prediction-mode we do not need to setup the loss-function and optimizer. This is another aspect of the Estimator API that is poorly designed and resembles how we did ANSI C programming using structs in the old days. It would probably have been more elegant to split this into several functions and sub-classed the Estimator-class. End of explanation """ params = {"learning_rate": 1e-4} """ Explanation: Create an Instance of the Estimator We can specify hyper-parameters e.g. for the learning-rate of the optimizer. End of explanation """ model = tf.estimator.Estimator(model_fn=model_fn, params=params, model_dir="./checkpoints_tutorial17-2/") """ Explanation: We can then create an instance of the new Estimator. Note that we don't provide feature-columns here as it is inferred automatically from the data-functions when model_fn() is called. It is unclear from the TensorFlow documentation why it is necessary to specify the feature-columns when using DNNClassifier in the example above, when it is not needed here. End of explanation """ model.train(input_fn=train_input_fn, steps=2000) """ Explanation: Training Now that our new Estimator has been created, we can train it. 
End of explanation """ result = model.evaluate(input_fn=test_input_fn) result print("Classification accuracy: {0:.2%}".format(result["accuracy"])) """ Explanation: Evaluation Once the model has been trained, we can evaluate its performance on the test-set. End of explanation """ predictions = model.predict(input_fn=predict_input_fn) cls_pred = np.array(list(predictions)) cls_pred plot_images(images=some_images, cls_true=some_images_cls, cls_pred=cls_pred) """ Explanation: Predictions The model can also be used to make predictions on new data. End of explanation """
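The prediction path inside `model_fn` above (logits → softmax → argmax) and the accuracy metric can be sketched framework-free with NumPy (an illustrative stand-in, not TensorFlow code):

```python
import numpy as np

def predict_classes(logits):
    """Numerically stable softmax followed by argmax, as in model_fn's prediction branch."""
    z = logits - logits.max(axis=1, keepdims=True)
    y_pred = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return y_pred, y_pred.argmax(axis=1)

logits = np.array([[0.2, 2.5, -1.0],
                   [3.0, 0.1, 0.4]])
y_pred, y_pred_cls = predict_classes(logits)

labels = np.array([1, 2])
# The fraction of matches is what tf.metrics.accuracy accumulates over batches.
accuracy = float((y_pred_cls == labels).mean())
print(y_pred_cls, accuracy)  # [1 0] 0.5
```

Subtracting the row maximum before exponentiating does not change the softmax output but avoids overflow for large logits.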
alepoydes/introduction-to-numerical-simulation
practice/Differentiation of differentiation methods.ipynb
mit
def f(x): return np.sin(x) # The function def dfdx(x): return np.cos(x) # and its derivative. x0 = 1 # The point at which we differentiate. dx = np.logspace(-16, 0, 100) # Increments of the argument. # Compute the increments of the function df = f(x0+dx)-f(x0) # and estimate the derivatives. approx_dfdx = df/dx # Compute the exact value of the derivative exact_dfdx = dfdx(x0) # and the relative errors. relative_error = np.abs(1.0-approx_dfdx/exact_dfdx) # Plot the error against the increment. plt.loglog(dx, relative_error) plt.xlabel("Argument increment") plt.ylabel("Relative error") plt.show() """ Explanation: Differences between differentiation methods Many laws of nature are formulated in terms of derivatives. Far from always can these derivatives be computed analytically. In numerical simulation, functions defined on domains in $\mathbb R^d$ have to be approximated by some subset of functions forming a finite-dimensional space, since a computer can only store a finite number of coefficients. Given such an approximate function, the question arises: how can the derivative of the approximated function be estimated most accurately? In the simplest case a function $f:\mathbb R\to\mathbb R$ is given by its values $f(x)$ and $f(x+h)$ at a pair of points $x$ and $x+h$, the remaining values being obtained approximately by one or another kind of interpolation. Following the definition of the derivative, it can be approximated by the ratio of the increment of the function to the increment of the argument: $$f'(x)\approx\frac{f(x+h)-f(x)}{h}.$$ For any finite increment $h$ of the argument the answer will be inexact, but the error should decrease as the increment tends to zero, $h\to 0$. Let us check what minimal error can be achieved in floating-point arithmetic.
End of explanation """ def experiment(method, f=np.sin, dfdx=np.cos, x0=1, dx = np.logspace(-16, 0, 100)): """ Estimates the derivative of `f` using the function `method`, compares it with the value of the analytic derivative `dfdx`, and plots the relative error against the increment of the argument. The derivative is estimated by the call `method(f, x0, dx)`, which takes the function `f` to be differentiated at the point `x0` using the increments `dx`; `method` returns a vector of derivative values for all the increments `dx` passed in. """ approx_dfdx = method(f, x0, dx) # Estimates of the derivative. exact_dfdx = dfdx(x0) # Exact value of the derivative. relative_error = np.abs(1.0-approx_dfdx/exact_dfdx) # Relative errors. plt.loglog(dx, relative_error, label=method.__name__) plt.xlabel("Argument increment") plt.ylabel("Relative error") return relative_error def forward_divided_difference(f, x0, dx): """ Forward divided difference. """ return (f(x0+dx)-f(x0))/dx def backward_divided_difference(f, x0, dx): """ Backward divided difference. """ return (f(x0)-f(x0-dx))/dx def central_divided_difference(f, x0, dx): """ Central divided difference. """ return (f(x0+dx/2)-f(x0-dx/2))/dx # Plot the error against the increment. experiment(forward_divided_difference) experiment(backward_divided_difference) experiment(central_divided_difference) plt.legend() plt.show() """ Explanation: As we can see, the error does not tend to zero: it reaches its minimum at steps of about $10^{-8}$ and then grows again. Exercise Explain the error plot. Why does the error first decrease and then grow? By what law does the error decrease and grow? For an arbitrary function, estimate the step size at which the error of this derivative approximation is minimal. What is the minimal error of this approximation method? The expression $f(x+h)-f(x)$ is called the forward finite difference of the function $f$ at the point $x$; we denote it $\Delta_+ f$.
One also often considers the backward finite difference $\Delta_- f=f(x)-f(x-h)$ and the central finite difference $\Delta_0 f=f(x+\frac h2)-f(x-\frac h2)$. The divided differences built from these finite differences can be used to estimate the derivative:
$$f'(x)\approx \frac{\Delta_+ f}{h}\approx \frac{\Delta_- f}{h}\approx \frac{\Delta_0 f}{h}.$$
Let us compare the errors of these approximations.
End of explanation
"""

class AG:
    def __init__(self, v, d):
        """Initializes the pair (f, df/dx) = (v, d)."""
        self.v = v
        self.d = d
    # Representation of constants
    @staticmethod
    def const(x):
        return AG(x, 1)
    # Arithmetic operations
    def __add__(self, other):
        return AG(self.v+other.v, self.d+other.d)
    def __sub__(self, other):
        return AG(self.v-other.v, self.d-other.d)
    def __mul__(self, other):
        return AG(self.v*other.v, self.d*other.v+self.v*other.d)
    def __truediv__(self, other):
        return AG(self.v/other.v,
                  (self.d*other.v-self.v*other.d)/(other.v**2) )
    # Exponentiation
    def __pow__(self, other):
        return AG(np.power(self.v, other.v),
                  np.power(self.v,other.v-1.)*other.v*self.d
                  + np.power(self.v,other.v)*np.log(self.v)*other.d )
    # Elementary functions
    @staticmethod
    def sin(x):
        return AG(np.sin(x.v), np.cos(x.v)*x.d)
    @staticmethod
    def cos(x):
        return AG(np.cos(x.v), -np.sin(x.v)*x.d)
    @staticmethod
    def log(x):
        return AG(np.log(x.v), x.d/x.v)

x = AG.const(3)
y = x*x/x
print(f"y({x.v})={y.v} y'({x.v})={y.d}")

# Compare automatic differentiation with the other ways of computing the derivative.
# A complicated function
def f(x):
    return x**AG.sin(x**AG.cos(x))
# and its even more complicated analytic derivative
def dfdx(x):
    return x**AG.sin(x**AG.cos(x))*(
        x**AG.cos(x)*AG.cos(x**AG.cos(x))*AG.log(x)*(AG.cos(x)/x - AG.log(x)*AG.sin(x))
        + AG.sin(x**AG.cos(x))/x )

# Points at which the derivative is estimated.
x0 = np.linspace(1,10,100)
# Step for the finite difference.
h = 1e-8
# Derivative estimate via the central divided difference.
divided_difference = ( f(AG.const(x0+h/2)).v - f(AG.const(x0-h/2)).v )/h
# Analytic answer.
analytic = dfdx( AG.const(x0) ).v
# Automatic differentiation.
autograd = f( AG.const(x0) ).d

def abs_err(x, y):
    """Computes the absolute error."""
    return np.abs(x-y)

# Compare the three results with each other.
plt.semilogy(x0, abs_err(divided_difference, analytic), label="DD - A")
plt.semilogy(x0, abs_err(divided_difference, autograd), '.', label="DD - AG")
plt.semilogy(x0, abs_err(autograd, analytic), label="AG - A")
plt.legend()
plt.show()
"""
Explanation: Exercise

Explain why the forward and backward divided differences give the same error, while the central finite difference gives a more accurate answer. For an arbitrary function, estimate the rate at which the error of the central finite difference decreases. How does this rate depend on the smoothness of the function? What is the minimal error when computing the central finite difference?

In some cases an analytic expression for the differentiated function is known, but the analytic expression for the derivative is too cumbersome to use in computations. In this case it is convenient to use automatic differentiation. The idea of the method is to store, together with the value of the function, the value of the derivative of the function at that point, i.e. all functions compute the pair $(f(x), f'(x))$. The computation starts from the value $(x, 1)$ (the derivative of $x$ with respect to $x$ equals $1$) and then uses the chain rule; for example, when computing $\sin(f(x))$, the already computed values $f(x)$ are transformed as follows:
$$(f(x),f'(x))\mapsto (\sin(f(x)),\cos(f(x))f'(x)).$$
Nowadays there are many packages for automatic differentiation, e.g. autograd; see also the libraries for working with artificial neural networks. For pedagogical purposes, let us implement a simple class for automatic differentiation.
End of explanation
"""

# Compare the errors of the forward and central divided differences on a grid.
def f(x): return np.sin(x**2)          # The function
def dfdx(x): return 2*x*np.cos(x**2)   # and its derivative

# Define the grid
xk = np.linspace(0,10,1000)
# Evaluate the function on it
fk = f(xk)

# Approximate values of the derivative:
central_dfdx = np.empty_like(xk); central_dfdx[:] = np.nan
central_dfdx[1:-1] = (fk[2:]-fk[:-2])/(xk[2:]-xk[:-2])
forward_dfdx = np.empty_like(xk); forward_dfdx[:] = np.nan
forward_dfdx[:-1] = (fk[1:]-fk[:-1])/(xk[1:]-xk[:-1])
# Exact values of the derivative
exact_dfdx = dfdx(xk)

yk = (xk[1:]+xk[:-1])/2  # Staggered grid.
shifted_dfdx = (fk[1:]-fk[:-1])/(xk[1:]-xk[:-1])  # Estimate via the central divided difference.
exact_shifted = dfdx(yk)  # Exact values on the staggered grid

plt.semilogy(xk, abs_err(central_dfdx, exact_dfdx), label="central")
plt.semilogy(xk, abs_err(forward_dfdx, exact_dfdx), label="forward")
plt.semilogy(yk, abs_err(shifted_dfdx, exact_shifted), label="staggered grid")
plt.xlabel("Grid node")
plt.ylabel("Absolute error")
plt.legend()
plt.show()
"""
Explanation: The approximation via finite differences gives, as expected, a larger error. The analytic formula and automatic differentiation give very similar, yet still slightly different, results.

Exercises

Implement automatic differentiation for computing the arctangent. Implement automatic differentiation for two variables (you may restrict yourself to arithmetic only). Which answer is more accurate: the one obtained via automatic differentiation or via the analytic expression for the derivative produced by symbolic algebra (e.g. Wolfram Alpha)? Which one is faster to compute?

When solving differential equations or partial differential equations with grid methods, the function usually cannot be evaluated at arbitrary points, since it is known only at the grid nodes. For example, the function $f(x)$ may be given at the nodes of a uniform grid $x_k=kh$, where $h$ sets the grid density. In this case the derivative must be expressed through the values of the function at the nodes $x_k$.
For example, via the central finite difference
$$f'(x_k) \approx \frac{f(x_{k+1})-f(x_{k-1})}{x_{k+1}-x_{k-1}}$$
or via the forward finite difference:
$$f'(x_k) \approx \frac{f(x_{k+1})-f(x_{k})}{x_{k+1}-x_{k}}.$$
As we know, the central finite difference is more accurate, but the argument step in this case is twice as large. To use the more accurate derivative estimates without increasing the integration step, the function and its derivative can be defined on different grids. For example, the nodes of the new grid can be chosen as the points halfway between the old nodes, at equal distance from their neighbours.
End of explanation
"""
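The first exercise above asks for the step size at which the approximation error is minimal. A minimal numerical sketch of the standard answer (the choices $h\sim\sqrt{\varepsilon}$ for the forward difference and $h\sim\varepsilon^{1/3}$ for the central one come from balancing the $\sim\varepsilon/h$ rounding term against the $\sim h$, respectively $\sim h^2$, truncation terms; the function-dependent constants are dropped here):

```python
import numpy as np

eps = np.finfo(float).eps   # machine epsilon, ~2.2e-16
x0 = 1.0

# Near-optimal steps: sqrt(eps) for the forward difference,
# eps**(1/3) for the central difference.
h_fwd = np.sqrt(eps)
h_cen = eps ** (1.0 / 3.0)

fwd = (np.sin(x0 + h_fwd) - np.sin(x0)) / h_fwd
cen = (np.sin(x0 + h_cen / 2) - np.sin(x0 - h_cen / 2)) / h_cen

err_fwd = abs(1.0 - fwd / np.cos(x0))
err_cen = abs(1.0 - cen / np.cos(x0))
print(err_fwd, err_cen)   # on the order of 1e-8 and 1e-10
```

This matches the plots above: the forward difference cannot do better than about $\sqrt{\varepsilon}$, while the central one reaches roughly $\varepsilon^{2/3}$.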
mkarakoc/aim
examples/01_AIMpy_exp_cos_screened_coulomb_potential.ipynb
gpl-3.0
# Python program to use AIM tools from asymptotic import * """ Explanation: Application of the Asymptotic Iteration Method to <br> the Exponential Cosine Screened Coulomb Potential O. Bayrak, et al. Int. J. Quant. Chem., 107 (2007), p. 1040 http://onlinelibrary.wiley.com/doi/10.1002/qua.21240/epdf Atomic orbitals 1s 2s 2p 2p 2p 3s 3p 3p 3p 4s 3d 3d 3d 3d 3d 4p 4p 4p 5s 4d 4d 4d 4d 4d 5p 5p 5p 6s 4f 4f 4f 4f 4f 4f 4f 5d 5d 5d 5d 5d 6p 6p 6p 7s 5f 5f 5f 5f 5f 5f 5f 6d 6d 6d 6d 6d 7p 7p 7p https://en.wikipedia.org/wiki/Atomic_orbital#Electron_placement_and_the_periodic_table Import AIM library End of explanation """ En, m, hbar, L, r, r0 = se.symbols("En, m, hbar, L, r, r0") beta, delta, A, A1, A2, A3, A4, A5, A6 = se.symbols("beta, delta, A, A1, A2, A3, A4, A5, A6") """ Explanation: Definitions Variables End of explanation """ l0 = 2*(beta - (L+1)/r) s0 = -2*m*En/hbar**2 + A2 - beta**2 + (2*L*beta + 2*beta - A1)/r - A3*r**2 + A4*r**3 - A5*r**4 + A6*r**6 """ Explanation: $\lambda_0$ and $s_0$ End of explanation """ nL = o* 0 ndelta = o* 1/100 nbeta = o* 6/10 nA, nhbar, nm = o* 1, o* 1, o* 1 nr0 = o* (nL+1)/nbeta nA1 = 2*nm*nA/nhbar**2 nA2 = nA1*ndelta nA3 = nA1*ndelta**3/3 nA4 = nA1*ndelta**4/6 nA5 = nA1*ndelta**5/30 nA6 = nA1*ndelta**7/630 pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} """ Explanation: Case: $\delta=0.01$ s states (1s, 2s, 3s, 4s) Numerical values for variables End of explanation """ %%time # pass lambda_0, s_0 and variable values to aim class ecsc_d01L0 = aim(l0, s0, pl0, ps0) ecsc_d01L0.display_parameters() ecsc_d01L0.display_l0s0(0) ecsc_d01L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) """ Explanation: Initialize AIM solver End of explanation """ %%time # create coefficients for improved AIM ecsc_d01L0.c0() ecsc_d01L0.d0() ecsc_d01L0.cndn() """ Explanation: Calculation of Taylor series coefficients of $\lambda_0$ and $s_0$ End of 
explanation """ %%time ecsc_d01L0.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: The solution End of explanation """ %%time nL = o* 1 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d01L1 = aim(l0, s0, pl0, ps0) ecsc_d01L1.display_parameters() ecsc_d01L1.display_l0s0(0) ecsc_d01L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d01L1.c0() ecsc_d01L1.d0() ecsc_d01L1.cndn() # the solution ecsc_d01L1.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: p states End of explanation """ %%time nL = o* 2 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d01L2 = aim(l0, s0, pl0, ps0) ecsc_d01L2.display_parameters() ecsc_d01L2.display_l0s0(0) ecsc_d01L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d01L2.c0() ecsc_d01L2.d0() ecsc_d01L2.cndn() # the solution ecsc_d01L2.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: d states End of explanation """ %%time nL = o* 3 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d01L3 = aim(l0, s0, pl0, ps0) ecsc_d01L3.display_parameters() ecsc_d01L3.display_l0s0(0) ecsc_d01L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d01L3.c0() ecsc_d01L3.d0() ecsc_d01L3.cndn() # the solution ecsc_d01L3.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: f states End of 
explanation """ %%time nL = o* 0 ndelta = o* 2/100 nbeta = o* 6/10 nA, nhbar, nm = o* 1, o* 1, o* 1 nr0 = o* (nL+1)/nbeta nA1 = 2*nm*nA/nhbar**2 nA2 = nA1*ndelta nA3 = nA1*ndelta**3/3 nA4 = nA1*ndelta**4/6 nA5 = nA1*ndelta**5/30 nA6 = nA1*ndelta**7/630 pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d02L0 = aim(l0, s0, pl0, ps0) ecsc_d02L0.display_parameters() ecsc_d02L0.display_l0s0(0) ecsc_d02L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d02L0.c0() ecsc_d02L0.d0() ecsc_d02L0.cndn() # the solution ecsc_d02L0.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: Case: $\delta=0.02$ s states (1s, 2s, 3s, 4s) End of explanation """ %%time nL = o* 1 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d02L1 = aim(l0, s0, pl0, ps0) ecsc_d02L1.display_parameters() ecsc_d02L1.display_l0s0(0) ecsc_d02L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d02L1.c0() ecsc_d02L1.d0() ecsc_d02L1.cndn() # the solution ecsc_d02L1.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: p states End of explanation """ %%time nL = o* 2 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d02L2 = aim(l0, s0, pl0, ps0) ecsc_d02L2.display_parameters() ecsc_d02L2.display_l0s0(0) ecsc_d02L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d02L2.c0() ecsc_d02L2.d0() ecsc_d02L2.cndn() # the solution 
ecsc_d02L2.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: d states End of explanation """ %%time nL = o* 3 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d02L3 = aim(l0, s0, pl0, ps0) ecsc_d02L3.display_parameters() ecsc_d02L3.display_l0s0(0) ecsc_d02L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d02L3.c0() ecsc_d02L3.d0() ecsc_d02L3.cndn() # the solution ecsc_d02L3.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: f states End of explanation """ %%time nL = o* 0 ndelta = o* 6/100 nbeta = o* 6/10 nA, nhbar, nm = o* 1, o* 1, o* 1 nr0 = o* (nL+1)/nbeta nA1 = 2*nm*nA/nhbar**2 nA2 = nA1*ndelta nA3 = nA1*ndelta**3/3 nA4 = nA1*ndelta**4/6 nA5 = nA1*ndelta**5/30 nA6 = nA1*ndelta**7/630 pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d06L0 = aim(l0, s0, pl0, ps0) ecsc_d06L0.display_parameters() ecsc_d06L0.display_l0s0(0) ecsc_d06L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d06L0.c0() ecsc_d06L0.d0() ecsc_d06L0.cndn() # the solution ecsc_d06L0.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: Case: $\delta=0.06$ s states (1s, 2s, 3s) End of explanation """ %%time nL = o* 1 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d06L1 = aim(l0, s0, pl0, ps0) ecsc_d06L1.display_parameters() ecsc_d06L1.display_l0s0(0) ecsc_d06L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create 
coefficients for improved AIM ecsc_d06L1.c0() ecsc_d06L1.d0() ecsc_d06L1.cndn() # the solution ecsc_d06L1.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: p states End of explanation """ %%time nL = o* 2 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d06L2 = aim(l0, s0, pl0, ps0) ecsc_d06L2.display_parameters() ecsc_d06L2.display_l0s0(0) ecsc_d06L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d06L2.c0() ecsc_d06L2.d0() ecsc_d06L2.cndn() # the solution ecsc_d06L2.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: d states End of explanation """ %%time nL = o* 3 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d06L3 = aim(l0, s0, pl0, ps0) ecsc_d06L3.display_parameters() ecsc_d06L3.display_l0s0(0) ecsc_d06L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d06L3.c0() ecsc_d06L3.d0() ecsc_d06L3.cndn() # the solution ecsc_d06L3.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: f states End of explanation """ %%time nL = o* 0 ndelta = o* 10/100 nbeta = o* 6/10 nA, nhbar, nm = o* 1, o* 1, o* 1 nr0 = o* (nL+1)/nbeta nA1 = 2*nm*nA/nhbar**2 nA2 = nA1*ndelta nA3 = nA1*ndelta**3/3 nA4 = nA1*ndelta**4/6 nA5 = nA1*ndelta**5/30 nA6 = nA1*ndelta**7/630 pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d10L0 = aim(l0, s0, pl0, ps0) ecsc_d10L0.display_parameters() ecsc_d10L0.display_l0s0(0) 
ecsc_d10L0.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d10L0.c0() ecsc_d10L0.d0() ecsc_d10L0.cndn() # the solution ecsc_d10L0.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: Case: $\delta=0.1$ s states (1s, 2s) End of explanation """ %%time nL = o* 1 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d10L1 = aim(l0, s0, pl0, ps0) ecsc_d10L1.display_parameters() ecsc_d10L1.display_l0s0(0) ecsc_d10L1.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d10L1.c0() ecsc_d10L1.d0() ecsc_d10L1.cndn() # the solution ecsc_d10L1.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: p states End of explanation """ %%time nL = o* 2 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d10L2 = aim(l0, s0, pl0, ps0) ecsc_d10L2.display_parameters() ecsc_d10L2.display_l0s0(0) ecsc_d10L2.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for improved AIM ecsc_d10L2.c0() ecsc_d10L2.d0() ecsc_d10L2.cndn() # the solution ecsc_d10L2.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: d states End of explanation """ %%time nL = o* 3 nr0 = o* (nL+1)/nbeta pl0 = {beta:nbeta, L:nL} ps0 = {hbar:nhbar, m:nm, delta:ndelta, beta:nbeta, L:nL, r0:nr0, A1:nA1, A2:nA2, A3:nA3, A4:nA4, A5:nA5, A6:nA6} # pass lambda_0, s_0 and variable values to aim class ecsc_d10L3 = aim(l0, s0, pl0, ps0) ecsc_d10L3.display_parameters() ecsc_d10L3.display_l0s0(0) ecsc_d10L3.parameters(En, r, nr0, nmax=101, nstep=10, dprec=500, tol=1e-101) # create coefficients for 
improved AIM ecsc_d10L3.c0() ecsc_d10L3.d0() ecsc_d10L3.cndn() # the solution ecsc_d10L3.get_arb_roots(showRoots='-r', printFormat="{:22.17f}") """ Explanation: f states End of explanation """
atulsingh0/MachineLearning
MasteringML_wSkLearn/03_Feature_Extraction_&_Preprocessing.ipynb
gpl-3.0
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.metrics.pairwise import euclidean_distances
from sklearn import preprocessing
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem import PorterStemmer
from nltk import word_tokenize
from nltk import pos_tag
import numpy as np
"""
Explanation: Feature Extraction and Preprocessing
End of explanation
"""

onehot_encoder = DictVectorizer()
instances = [
    {'city': 'New York'},
    {'city': 'San Francisco'},
    {'city': 'Chapel Hill'}
]
print (onehot_encoder.fit_transform(instances).toarray())
"""
Explanation: DictVectorizer
End of explanation
"""

corpus = [
    'UNC played Duke in basketball',
    'Duke lost the basketball game'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)

# adding one more sentence to the corpus
corpus = [
    'UNC played Duke in basketball',
    'Duke lost the basketball game',
    'This is Atul Singh'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)

# checking the euclidean distance
# converting the sentences with CountVectorizer
counts = vectorizer.fit_transform(corpus).todense()
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
"""
Explanation: CountVectorizer
End of explanation
"""

vectorizer = CountVectorizer(stop_words='english')  # added an option that removes the stop (grammar) words from the corpus
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
"""
Explanation: Stop Word Filtering
End of explanation
"""

corpus = [
    'He ate the sandwiches',
    'Every sandwich was eaten by him'
]
vectorizer = CountVectorizer(stop_words='english')  # added an option that removes the stop (grammar) words from the corpus
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
"""
Explanation: Stemming and Lemmatization

Lemmatization is the process of determining the lemma, or the morphological root, of an inflected word based on its context. Lemmas are the base forms of words that are used to key the word in a dictionary. Stemming has a similar goal to lemmatization, but it does not attempt to produce the morphological roots of words. Instead, stemming removes all patterns of characters that appear to be affixes, resulting in a token that is not necessarily a valid word. Lemmatization frequently requires a lexical resource, like WordNet, and the word's part of speech. Stemming algorithms frequently use rules instead of lexical resources to produce stems and can operate on any token, even without its context.
End of explanation
"""

lemmatizer = WordNetLemmatizer()
print (lemmatizer.lemmatize('gathering', 'v'))
print (lemmatizer.lemmatize('gathering', 'n'))
"""
Explanation: As we can see, both sentences have the same meaning, but their feature vectors have no elements in common.
Let's apply lexical analysis to the data.
End of explanation
"""

stemmer = PorterStemmer()
print (stemmer.stem('gathering'))

wordnet_tags = ['n', 'v']
corpus = [
    'He ate the sandwiches',
    'Every sandwich was eaten by him'
]
stemmer = PorterStemmer()
print ('Stemmed:', [[stemmer.stem(token) for token in word_tokenize(document)] for document in corpus])

def lemmatize(token, tag):
    if tag[0].lower() in ['n', 'v']:
        return lemmatizer.lemmatize(token, tag[0].lower())
    return token

lemmatizer = WordNetLemmatizer()
tagged_corpus = [pos_tag(word_tokenize(document)) for document in corpus]
print ('Lemmatized:', [[lemmatize(token, tag) for token, tag in document] for document in tagged_corpus])
"""
Explanation: The Porter stemmer cannot consider the inflected form's part of speech and returns gather for both documents:
End of explanation
"""

corpus = ['The dog ate a sandwich, the wizard transfigured a sandwich, and I ate a sandwich']
vectorizer = CountVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)

corpus = ['The dog ate a sandwich and I ate a sandwich',
          'The wizard transfigured a sandwich']
vectorizer = TfidfVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)

corpus = ['The dog ate a sandwich and I ate a sandwich',
          'The wizard transfigured a sandwich']
vectorizer = HashingVectorizer(n_features=6)
print (vectorizer.fit_transform(corpus).todense())
"""
Explanation: Extending bag-of-words with TF-IDF weights

It is intuitive that the frequency with which a word appears in a document could indicate the extent to which a document pertains to that word. A long document that contains one occurrence of a word may discuss an entirely different topic than a document that contains many occurrences of the same word.
In this section, we will create feature vectors that encode the frequencies of words, and discuss strategies to mitigate two problems caused by encoding term frequencies. Instead of using a binary value for each element in the feature vector, we will now use an integer that represents the number of times that the words appeared in the document. End of explanation """ X = [[1,2,3], [4,5,1], [3,6,2] ] print(preprocessing.scale(X)) x1 = preprocessing.StandardScaler() print(x1) print(x1.fit_transform(X)) """ Explanation: Data Standardization End of explanation """
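As a cross-check of the Data Standardization cell above, `preprocessing.scale` can be reproduced by hand: it standardizes each column to zero mean and unit (population) standard deviation. A minimal sketch:

```python
import numpy as np

X = np.array([[1., 2., 3.],
              [4., 5., 1.],
              [3., 6., 2.]])

# Column-wise standardization: subtract the mean and divide by the
# population standard deviation, as preprocessing.scale does by default.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0))  # ~0 per column
print(X_scaled.std(axis=0))   # 1 per column
```

`StandardScaler` fits the same statistics once and can then apply them to new data with `transform`.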
liufuyang/deep_learning_tutorial
jizhi-pytorch-2/01_word_embedding/homework.ipynb
mit
# Load the necessary packages
# PyTorch packages
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Packages for numerical computation and plotting
import numpy as np
import matplotlib.pyplot as plt
import matplotlib

# Machine-learning package, used here for the 2D visualization of word vectors
from sklearn.decomposition import PCA

# Word2Vec packages
import gensim as gensim
from gensim.models import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
from gensim.models.word2vec import LineSentence

# Regular-expression package
import re

# Display figures directly in the notebook
%matplotlib inline
"""
Explanation: English-Chinese word translation with word vectors — first assignment for Part II of "Deep Learning on the Torch"

In this assignment you will semi-independently build an English-to-Chinese word translator.

This file is the companion source code for Lesson VI of "Deep Learning on the Torch", produced by Swarma AI Campus (集智AI学园), http://campus.swarma.org
End of explanation
"""

# Load the Chinese word vectors; download link: http://pan.baidu.com/s/1gePQAun, password: kvtg
# This Chinese word-vector collection was provided by Yin Xiangzhi; the training corpus comes
# from Weibo, People's Daily, Shanghai Hotline, Autohome, etc., and contains 1366130 word vectors
word_vectors = KeyedVectors.load_word2vec_format('vectors.bin', binary=True, unicode_errors='ignore')
len(word_vectors.vocab)

# Load the English word vectors; download http://nlp.stanford.edu/data/glove.6B.zip, unzip it,
# and copy the file glove.6B.100d.txt into the same folder as this notebook.
i = 1
# Store all the English word vectors in the following dictionary
word_vectors_en = {}
with open('glove.6B.100d.txt') as f:
    for line in f:
        numbers = line.split()
        word = numbers[0]
        vectors = np.array([float(i) for i in numbers[1 : ]])
        word_vectors_en[word] = vectors
        i += 1
print(len(word_vectors_en))
"""
Explanation: Step 1: load the word vectors

First, let us load word vectors that others have already trained on large corpora.
End of explanation
"""

# List of the Chinese numerals
cn_list = {'一', '二', '三', '四', '五', '六', '七', '八', '九', '零'}
# List of the Arabic digits
en_list = {'1', '2', '3', '4', '5', '6', '7', '8', '9', '0'}
# List of the English number words
en_list = {'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'zero'}

# Store the corresponding word vectors in lists
cn_vectors = []  # list of Chinese word vectors
en_vectors = []  # list of English word vectors
for w in cn_list:
    cn_vectors.append(word_vectors[w])
for w in en_list:
    en_vectors.append(word_vectors_en[w])

# Convert the word-vector lists into matrices
cn_vectors = np.array(cn_vectors)
en_vectors = np.array(en_vectors)

# Reduce dimensionality for visualization
X_reduced = PCA(n_components=2).fit_transform(cn_vectors)
Y_reduced = PCA(n_components = 2).fit_transform(en_vectors)

# Plot the 2D projections of all word vectors
f, (ax1, ax2) = plt.subplots(1, 2, figsize = (10, 8))
ax1.plot(X_reduced[:, 0], X_reduced[:, 1], 'o')
ax2.plot(Y_reduced[:, 0], Y_reduced[:, 1], 'o')
zhfont1 = matplotlib.font_manager.FontProperties(fname='/home/fuyang/.fonts/YaHei.Consolas.1.11b.ttf', size=16)
for i, w in enumerate(cn_list):
    ax1.text(X_reduced[i, 0], X_reduced[i, 1], w, fontproperties = zhfont1, alpha = 1)
for i, w in enumerate(en_list):
    ax2.text(Y_reduced[i, 0], Y_reduced[i, 1], w, alpha = 1)
"""
Explanation: Step 2: visualize the relative positions, in the two languages' vector spaces, of a group of words with the same meaning
End of explanation
"""

original_words = []
with open('dictionary.txt', 'r') as f:
    dataset = []
    for line in f:
        itm = line.split('\t')
        eng = itm[0]
        chn = itm[1].strip()
        if eng in word_vectors_en and chn in word_vectors:
            data = word_vectors_en[eng]
            target = word_vectors[chn]
            # Build the dataset from the English-Chinese word pairs
            dataset.append([data, target])
            original_words.append([eng, chn])
print(len(dataset))  # there are 4962 words in the full dataset

# Build the training, test, and validation sets
# The training set is used to train the network and update its parameters; the validation set
# is used to detect overfitting: when the validation loss exceeds the training loss, the model overfits
# The test set is used to evaluate how good the model is
indx = np.random.permutation(range(len(dataset)))
dataset = [dataset[i] for i in indx]
original_words = [original_words[i] for i in indx]
train_size = 500
train_data = dataset[train_size:]
valid_data = dataset[train_size // 2 : train_size]
test_data = dataset[: train_size // 2]
test_words = original_words[: train_size // 2]
print(len(train_data), len(valid_data), len(test_data))

# Train a multi-layer neural network that maps a 100-dimensional English vector
# to a 200-dimensional Chinese word vector, with 30 hidden units
input_size = 100
output_size = 200
hidden_size = 30

# Build a network with one hidden layer
model = nn.Sequential(nn.Linear(input_size, hidden_size),
                      nn.Tanh(),
                      nn.Linear(hidden_size, output_size)
                      )
print(model)

# Loss function
criterion = torch.nn.MSELoss()
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)

# Total number of epochs
num_epoch = 100

# Train for num_epoch epochs, looping over all the data each time
results = []
for epoch in range(num_epoch):
    train_loss = []
    for data in train_data:
        # Read in the data
        x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
        y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
        # Model prediction
        output = model(x)
        # Training via backpropagation
        optimizer.zero_grad()
        loss = criterion(output, y)
        train_loss.append(loss.data.numpy()[0])
        loss.backward()
        optimizer.step()
    # Evaluate on the validation set
    valid_loss = []
    for data in valid_data:
        x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
        y = Variable(torch.FloatTensor(data[1])).unsqueeze(0)
        output = model(x)
        loss = criterion(output, y)
        valid_loss.append(loss.data.numpy()[0])
    results.append([np.mean(train_loss), np.mean(valid_loss)])
    print('Epoch {}, training loss: {:.2f}, validation loss: {:.2f}'.format(epoch, np.mean(train_loss), np.mean(valid_loss)))

# Plot the losses
a = [i[0] for i in results]
b = [i[1] for i in results]
plt.plot(a, 'o', label = 'Training Loss')
plt.plot(b, 's', label = 'Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Function')
plt.legend()

# Evaluate the accuracy on the test set
# There are two evaluation criteria: exact match of the predicted word against the reference
# answer, and match of a single character
exact_same = 0  # number of exact word matches
one_same = 0    # number of single-character matches
results = []
for i, data in enumerate(test_data):
    x = Variable(torch.FloatTensor(data[0])).unsqueeze(0)
    # Model output
    output = model(x)
    output = output.squeeze().data.numpy()
    # Find the most similar vector among the Chinese word vectors
    most_similar = word_vectors.similar_by_vector(output, 1)
    # Record the reference word together with the word of the most similar vector
    results.append([original_words[i][1], most_similar[0][0]])
    # Exact word match
    if original_words[i][1] == most_similar[0][0]:
        exact_same += 1
    # Match of at least one character
    if list(set(list(original_words[i][1])) & set(list(most_similar[0][0]))) != []:
        one_same += 1

print("Exact match rate: {:.2f}".format(1.0 * exact_same / len(test_data)))
print('One-character match rate: {:.2f}'.format(1.0 * one_same / len(test_data)))
print(results)
"""
Explanation: Conclusion: we can see that the relationships among the Chinese numerals are very similar to the relationships among the English ones.

Step 3: train a neural network that takes the word vector of an English word as input, outputs a Chinese word vector, and translates it into Chinese

First, read in a pre-built dictionary (dictionary.txt). This dictionary was produced by the instructor by calling the Baidu Translate API to automatically translate, word by word, the vocabulary of an English novel into Chinese. We load the dictionary entry by entry and look up the corresponding Chinese word vector; if it is found, the pair is put into original_words and used as the actual training data.
End of explanation
"""
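The test loop above relies on `similar_by_vector`, which ranks the vocabulary by cosine similarity to the predicted vector. A self-contained sketch of that lookup; the three-word vocabulary and its random vectors here are stand-ins for illustration, not real word vectors:

```python
import numpy as np

# Toy stand-in vocabulary with random "word vectors" (illustration only).
rng = np.random.default_rng(0)
vocab = ['一', '二', '三']
vecs = {w: rng.normal(size=5) for w in vocab}

def cos(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def most_similar(query, vecs):
    # Return the vocabulary word whose vector is closest to the query.
    return max(vecs, key=lambda w: cos(query, vecs[w]))

print(most_similar(vecs['二'], vecs))  # '二'
```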
Gezort/YSDA_deeplearning17
Seminar8/VAE_homework.ipynb
mit
#The following line fetches you two datasets: images, usable for autoencoder training and attributes. #Those attributes will be required for the final part of the assignment (applying smiles), so please keep them in mind from lfw_dataset import fetch_lfw_dataset data,attrs = fetch_lfw_dataset() import numpy as np X_train = data[:10000].reshape((10000,-1)) print(X_train.shape) X_val = data[10000:].reshape((-1,X_train.shape[1])) print(X_val.shape) image_h = data.shape[1] image_w = data.shape[2] """ Explanation: Variational Autoencoder (VAE) Useful links: * original paper http://arxiv.org/abs/1312.6114 * helpful videos explaining the topic * https://www.youtube.com/watch?v=P78QYjWh5sM * http://videolectures.net/deeplearning2015_courville_autoencoder_extension/?q=aaron%20courville In this seminalr we will train an autoencoder to model images of faces. For this we take "Labeled Faces in the Wild" dataset (LFW) (http://vis-www.cs.umass.edu/lfw/), deep funneled version of it. (frontal view of all faces) Prepare the data End of explanation """ X_train = np.float32(X_train) X_train = X_train/255 X_val = np.float32(X_val) X_val = X_val/255 %matplotlib inline import matplotlib.pyplot as plt def plot_gallery(images, h, w, n_row=3, n_col=6): """Helper function to plot a gallery of portraits""" plt.figure(figsize=(1.5 * n_col, 1.7 * n_row)) plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35) for i in range(n_row * n_col): plt.subplot(n_row, n_col, i + 1) plt.imshow(images[i].reshape((h, w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest') plt.xticks(()) plt.yticks(()) plot_gallery(X_train, image_h, image_w) import theano import theano.tensor as T """ Explanation: For simplicity we want all values of the data to lie in the interval $[0,1]$: End of explanation """ import lasagne input_X = T.matrix("X") input_shape = [None,image_h*image_w*3] HU_encoder = 2000 #you can play with this values HU_decoder = 2000 dimZ = 100 #considering face 
reconstruction task, which size of representation seems reasonable? # define the network # use ReLU for hidden layers' activations # GlorotUniform initialization for W # zero initialization for biases # it's also convenient to put sigmoid activation on output layer to get nice normalized pics l_input = lasagne.layers.InputLayer(input_shape, input_X) l_enc = lasagne.layers.DenseLayer(l_input, HU_encoder, b=lasagne.init.Constant(0)) l_z = lasagne.layers.DenseLayer(l_enc, dimZ, b=lasagne.init.Constant(0)) l_dec = lasagne.layers.DenseLayer(l_z, HU_decoder, b=lasagne.init.Constant(0)) l_out = lasagne.layers.DenseLayer(l_dec, input_shape[1], b=lasagne.init.Constant(0), nonlinearity=lasagne.nonlinearities.sigmoid) # create prediction variable prediction = lasagne.layers.get_output(l_out) # create loss function loss = lasagne.objectives.squared_error(prediction, input_X).mean() # create parameter update expressions params = lasagne.layers.get_all_params(l_out, trainable=True) updates = lasagne.updates.adam(loss, params, learning_rate=0.001) # compile training function that updates parameters and returns training loss # this will take a while train_fn = theano.function([input_X], loss, updates=updates) test_fn = theano.function([input_X], prediction) test_loss = theano.function([input_X], loss) def iterate_minibatches(inputs, batchsize, shuffle=True): if shuffle: indices = np.arange(len(inputs)) np.random.shuffle(indices) for start_idx in range(0, len(inputs) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield inputs[excerpt] from IPython.display import clear_output val_log = [] train_log = [] # train your autoencoder # visualize progress in reconstruction and loss decay for ep in range(50): n_batches = 0 err = 0 for X_batch in iterate_minibatches(X_train, 200): err += train_fn(X_batch) n_batches += 1 train_log.append(err / n_batches) n_batches = 0 err = 0 for X_batch in 
iterate_minibatches(X_val, 200): err += test_loss(X_batch) n_batches += 1 val_log.append(err / n_batches) clear_output(True) plt.figure(figsize=(15,10)) ax1 = plt.subplot2grid((3, 4), (0, 0), colspan=2) ax2 = plt.subplot2grid((3, 4), (0, 2), colspan=2) ax3 = plt.subplot2grid((3, 4), (1, 0), colspan=4, rowspan=2) ind = np.random.randint(X_val.shape[0]) ax1.imshow(X_val[ind].reshape((image_h, image_w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest') ax2.imshow(test_fn([X_val[ind]]).reshape((image_h, image_w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest') ax3.plot(train_log, 'r') ax3.plot(val_log, 'b') ax3.grid(True) ax3.legend(('Train loss', 'Val loss'), loc=0, fontsize=16); plt.show() plot_gallery(np.vstack((X_val[:10], test_fn(X_val[:10]))), image_h, image_w, n_row=2, n_col=10) """ Explanation: Autoencoder Why use all these complicated formulas and regularizations? What is the need for variational inference? To analyze the difference, let's first train just an autoencoder on the data: <img src="Autoencoder_structure.png" alt="Autoencoder"> End of explanation """ z_sample = T.matrix() # Your code goes here: # generated_x = hidden_state = lasagne.layers.InputLayer([None, 100], z_sample) decoder = lasagne.layers.DenseLayer(hidden_state, HU_decoder, W=l_dec.W, b=l_dec.b) # the output layer needs input_shape[1] units so that it can reuse l_out's weights output_layer = lasagne.layers.DenseLayer(decoder, input_shape[1], W=l_out.W, b=l_out.b, nonlinearity=lasagne.nonlinearities.sigmoid) generated_x = lasagne.layers.get_output(output_layer) gen_fn = theano.function([z_sample], generated_x) z = np.random.randn(25, dimZ)*0.5 output = gen_fn(np.asarray(z, dtype=theano.config.floatX)) plot_gallery(output, image_h, image_w, n_row=5, n_col=5) """ Explanation: Sampling This task requires deeper Lasagne knowledge. You need to perform inference from $z$, reconstruct an image given some random $z$ representations. End of explanation """ import GS """ Explanation: Can you visualize what the distribution of $z$ looks like? Is it dense?
What properties would we expect from it? Can we perform interpolation in $z$ space? Variational Autoencoder The Bayesian approach in deep learning considers everything in terms of distributions. Now our encoder generates not just a vector $z$ but a posterior distribution $q(z|x)$. Technically, the first difference is that you need to split the bottleneck layer in two. One dense layer will generate the vector $\mu$, and another will generate the vector $\sigma$. The reparametrization trick is implemented via the GaussianSampler layer, which generates a random vector $\epsilon$ and returns $\mu+\sigma\epsilon$. The code for this layer is taken from the "recipes" folder of the Lasagne github repo: End of explanation """ # to compare with conventional AE, keep these hyperparameters # or change them for the values that you used before HU_encoder = 2000 HU_decoder = 2000 dimZ = 200 # define the network # you can start from https://github.com/Lasagne/Recipes/blob/master/examples/variational_autoencoder/variational_autoencoder.py # or another example https://github.com/y0ast/Variational-Autoencoder/blob/master/VAE.py # but remember that this is not your ground truth since the data is not MNIST input_X = T.matrix("X") input_shape = [None,image_h*image_w*3] # relu_shift is for numerical stability - if training data has any # dimensions where stdev=0, allowing logsigma to approach -inf # will cause the loss function to become NaN.
So we set the limit # stdev >= exp(-1 * relu_shift) relu_shift = 10 l_input = lasagne.layers.InputLayer(input_shape, input_X) l_enc_hid = lasagne.layers.DenseLayer(l_input, HU_encoder, nonlinearity=T.nnet.softplus) l_enc_mu = lasagne.layers.DenseLayer(l_enc_hid, dimZ, nonlinearity=None) l_enc_logsigma = lasagne.layers.DenseLayer(l_enc_hid, dimZ, nonlinearity=None) l_Z = GS.GaussianSampleLayer(l_enc_mu, l_enc_logsigma) l_dec_hid = lasagne.layers.DenseLayer(l_Z, HU_decoder, nonlinearity=T.nnet.softplus) l_dec_mu = lasagne.layers.DenseLayer(l_dec_hid, input_shape[1], nonlinearity=None) l_dec_logsigma = lasagne.layers.DenseLayer(l_dec_hid, input_shape[1], nonlinearity=lambda a: T.nnet.relu(a + relu_shift) - relu_shift) l_output = GS.GaussianSampleLayer(l_dec_mu, l_dec_logsigma) """ Explanation: Since our decoder is also a function that generates a distribution, we need to do the same splitting for the output layer. When testing the model we will look only at the mean values, so one of the outputs will be the actual autoencoder output. In this homework we only ask for an implementation of the simplest version of the VAE - one $z$ sample per input. You can consider sampling several outputs from one input and averaging them, as is done in the Lasagne recipes. End of explanation """ lasagne.layers.get_all_layers(l_output) # should be ~9 layers total def KL_divergence(mu, logsigma): return -0.5 * T.sum(1 + 2 * logsigma - T.sqr(mu) - T.exp(2 * logsigma)) def log_likelihood(x, mu, logsigma): # the constant -0.5*log(2*pi) term is dropped here: it does not affect the gradients return T.sum(-logsigma - 0.5 * T.sqr(x - mu) / T.exp(2 * logsigma)) hid_mu, hid_ls, dec_mu, dec_ls = lasagne.layers.get_output([l_enc_mu, l_enc_logsigma, l_dec_mu, l_dec_logsigma]) """ Explanation: And last, but not least: the place in the code where most of the formulas go - the optimization objective. The objective for a VAE has its own name - the variational lower bound. And as for any lower bound, our intention is to maximize it.
Here it is (for one sample $z$ per input $x$): $$\mathcal{L} = -D_{KL}(q_{\phi}(z|x)||p_{\theta}(z)) + \log p_{\theta}(x|z)$$ Your next task is to implement two functions that compute the KL divergence and the second term, the log-likelihood of an output. Here is some necessary math for your convenience: $$D_{KL} = -\frac{1}{2}\sum_{i=1}^{dimZ}(1+\log(\sigma_i^2)-\mu_i^2-\sigma_i^2)$$ $$\log p_{\theta}(x|z) = \sum_{i=1}^{dimX}\log p_{\theta}(x_i|z)=\sum_{i=1}^{dimX} \log \Big( \frac{1}{\sigma_i\sqrt{2\pi}}e^{-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}} \Big)=...$$ Don't forget in the code that you are using $\log\sigma$ as the variable. Explain: why not $\sigma$? End of explanation """ loss = KL_divergence(hid_mu, hid_ls) - log_likelihood(input_X, dec_mu, dec_ls) pred = lasagne.layers.get_output(l_output, deterministic=True) params = lasagne.layers.get_all_params(l_output, trainable=True) updates = lasagne.updates.adam(loss, params, learning_rate=1e-4) train_fn = theano.function([input_X], loss, updates=updates) test_fn = theano.function([input_X], loss) prediction = theano.function([input_X], pred) """ Explanation: Now build the loss and training function: End of explanation """ val_log = [] train_log = [] # train your autoencoder # visualize progress in reconstruction and loss decay for ep in range(100): n_batches = 0 err = 0 for X_batch in iterate_minibatches(X_train, 500): err += train_fn(X_batch) n_batches += 1 train_log.append(err / n_batches) n_batches = 0 err = 0 for X_batch in iterate_minibatches(X_val, 500): err += test_fn(X_batch) n_batches += 1 val_log.append(err / n_batches) clear_output(True) plt.figure(figsize=(15,10)) ax1 = plt.subplot2grid((3, 4), (0, 0), colspan=2) ax2 = plt.subplot2grid((3, 4), (0, 2), colspan=2) ax3 = plt.subplot2grid((3, 4), (1, 0), colspan=4, rowspan=2) ind = np.random.randint(X_val.shape[0]) ax1.imshow(X_val[ind].reshape((image_h, image_w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest')
ax2.imshow(prediction([X_val[ind]]).reshape((image_h, image_w, 3)), cmap=plt.cm.gray, vmin=-1, vmax=1, interpolation='nearest') ax3.plot(train_log, 'r') ax3.plot(val_log, 'b') ax3.grid(True) ax3.legend(('Train loss', 'Val loss'), loc=0, fontsize=16); plt.show() """ Explanation: And train the model: End of explanation """ attrs[:10] smiling = attrs.Smiling.values ind = np.argsort(smiling) plot_gallery(np.vstack((data[ind[:10]], data[ind[-10:]])), image_h, image_w, n_row=2, n_col=10) """ Explanation: Congrats! If you managed to tune your autoencoders to converge and learn something about the world, now it's time to have some fun with it. As you may have noticed, there are face attributes in the dataset. We're interested in the "Smiling" column, but feel free to try others as well! Here is the first task: 1) Extract the "Smiling" column as a separate numpy vector and sort this vector. End of explanation """ X = np.float32(data / 255) z_repr = theano.function([input_X], lasagne.layers.get_output(l_enc_mu)) mean_smile = z_repr(X[ind[-50:]].reshape(50, -1)).mean(axis=0) mean_smile -= z_repr(X[ind[:50]].reshape(50, -1)).mean(axis=0) z_sample = T.matrix() inp_layer = lasagne.layers.InputLayer([None, dimZ], z_sample) hid_state = lasagne.layers.DenseLayer(inp_layer, HU_decoder, nonlinearity=T.nnet.softplus, W=l_dec_hid.W, b=l_dec_hid.b) decoder_mu = lasagne.layers.DenseLayer(hid_state, input_shape[1], nonlinearity=None, W=l_dec_mu.W, b=l_dec_mu.b) gen_fn = theano.function([z_sample], lasagne.layers.get_output(decoder_mu)) img = X[ind[10]] smiling_faces = z_repr(img.reshape(1,-1)) generated = [] for c in range(6): smiling_faces += mean_smile * c * 0.1 generated.append(gen_fn(smiling_faces)) plot_gallery(generated, image_h, image_w, n_row=1, n_col=6) """ Explanation: 2) Take z-representations of those top images (you can do it only for positive or for both) and average them to find the "vector representation" of the attribute.
3) Show how "feature arithmetic" works with representations of both the VAE and the conventional autoencoder. Show how to generate an image with a preconditioned attribute. Take some sad faces and make them smile. 4) (If you didn't manage to tune the VAE, just show whether it works for the plain AE.) Discuss the results. End of explanation """
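The two objective terms defined earlier can be sanity-checked outside of Theano. The following is a minimal NumPy sketch of the KL divergence against a standard normal prior, the diagonal-Gaussian log-likelihood, and the reparametrization trick; the function names are my own, and unlike the Theano version this log-likelihood keeps the constant $-\frac{1}{2}\log(2\pi)$ term:

```python
import numpy as np

def kl_divergence(mu, logsigma):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over dimensions:
    # -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    return -0.5 * np.sum(1 + 2 * logsigma - mu ** 2 - np.exp(2 * logsigma))

def gaussian_log_likelihood(x, mu, logsigma):
    # log N(x | mu, sigma^2) summed over dimensions, constant term included
    return np.sum(-logsigma - 0.5 * np.log(2 * np.pi)
                  - 0.5 * (x - mu) ** 2 / np.exp(2 * logsigma))

def reparameterize(mu, logsigma, rng=np.random):
    # the reparametrization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(logsigma) * eps

mu, logsigma = np.zeros(5), np.zeros(5)
print(kl_divergence(mu, logsigma))         # KL of the prior against itself
print(reparameterize(mu, logsigma).shape)  # (5,)
```

A useful check: with $\mu=0$ and $\log\sigma=0$ the posterior equals the prior and the KL term vanishes, while any nonzero $\mu$ makes it strictly positive.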
ExaScience/smurff
docs/notebooks/different_noise_models.ipynb
mit
import smurff import logging logging.basicConfig(level = logging.INFO) ic50_train, ic50_test, ecfp = smurff.load_chembl() """ Explanation: Different noise models In this notebook we look at the different noise models. Prepare train, test and side-info We first need to download and prepare the data files. This can be accomplished using a built-in function in smurff. IC50 is a compound x protein matrix. The ECFP matrix provides features used as side information on the compounds. End of explanation """ trainSession = smurff.TrainSession( priors = ['normal', 'normal'], num_latent=32, burnin=100, nsamples=500) # the following line is equivalent to the default, not specifying noise_model trainSession.addTrainAndTest(ic50_train, ic50_test, smurff.FixedNoise(5.0)) predictions = trainSession.run() print("RMSE = %.2f" % smurff.calc_rmse(predictions)) """ Explanation: Fixed noise The noise model of observed data can be annotated by calling addTrainAndTest with the optional parameter noise_model. The default for this parameter is FixedNoise with precision 5.0. End of explanation """ trainSession = smurff.TrainSession( priors = ['normal', 'normal'], num_latent=32, burnin=100, nsamples=500) trainSession.addTrainAndTest(ic50_train, ic50_test, smurff.AdaptiveNoise(1.0, 10.)) predictions = trainSession.run() print("RMSE = %.2f" % smurff.calc_rmse(predictions)) """ Explanation: Adaptive noise Instead of a fixed precision, we can also allow the model to automatically determine the precision of the noise by using AdaptiveNoise, with signal-to-noise ratio parameters sn_init and sn_max. sn_init is an initial signal-to-noise ratio. sn_max is the maximum allowed signal-to-noise ratio. This means that if the updated precision would imply a higher signal-to-noise ratio than sn_max, then the precision value is set to (sn_max + 1.0) / Yvar where Yvar is the variance of the training dataset Y. End of explanation """ ic50_threshold = 6.
trainSession = smurff.TrainSession( priors = ['normal', 'normal'], num_latent=32, burnin=100, nsamples=100, # Using threshold of 6. to calculate AUC on test data threshold=ic50_threshold) ## using activity threshold pIC50 > 6. to binarize train data trainSession.addTrainAndTest(ic50_train, ic50_test, smurff.ProbitNoise(ic50_threshold)) predictions = trainSession.run() print("RMSE = %.2f" % smurff.calc_rmse(predictions)) print("AUC = %.2f" % smurff.calc_auc(predictions, ic50_threshold)) """ Explanation: Binary matrices SMURFF can also factorize binary matrices (with or without side information). The input matrices can contain arbitrary values, which are converted to 0's and 1's by means of a threshold. To factorize them we employ the probit noise model ProbitNoise, taking this threshold as a parameter. To evaluate binary factorization, we recommend using ROC AUC, which can be enabled by providing a threshold also to the TrainSession. End of explanation """ predictions """ Explanation: The input train and test sets are converted to -1 and +1 values if the original values are below or above the threshold, respectively. Similarly, the resulting predictions will be negative if the model predicts the value to be below the threshold, or positive if the model predicts the value to be above the threshold. End of explanation """ ic50_threshold = 6. trainSession = smurff.TrainSession( priors = ['macau', 'normal'], num_latent=32, burnin=100, nsamples=100, # Using threshold of 6. to calculate AUC on test data threshold=ic50_threshold) ## using activity threshold pIC50 > 6.
to binarize train data trainSession.addTrainAndTest(ic50_train, ic50_test, smurff.ProbitNoise(ic50_threshold)) trainSession.addSideInfo(0, ecfp, direct = True) predictions = trainSession.run() print("RMSE = %.2f" % smurff.calc_rmse(predictions)) print("AUC = %.2f" % smurff.calc_auc(predictions, ic50_threshold)) """ Explanation: Binary matrices with Side Info It is possible to enhance the model for binary matrices by adding side information using the Macau algorithm. Note that "binary" here refers to the train and test data, not to the side information. End of explanation """
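To make the probit setup concrete, here is a small self-contained sketch of the two ingredients discussed in this notebook: thresholding raw values into -1/+1 labels, and scoring predictions with a rank-based ROC AUC. This is illustrative only (it assumes no tied scores) and is not smurff's own implementation of calc_auc:

```python
import numpy as np

def binarize(values, threshold):
    # map raw values to -1 / +1 around the threshold, as the probit model does
    return np.where(values > threshold, 1.0, -1.0)

def roc_auc(labels, scores):
    # rank-based AUC: probability that a random positive outranks a random negative
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels > 0
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

truth = binarize(np.array([5.1, 6.8, 7.2, 4.9]), threshold=6.0)
scores = np.array([-0.4, 0.9, 1.3, 0.2])
print(roc_auc(truth, scores))  # 1.0: every positive outranks every negative
```

An AUC of 0.5 would correspond to random ranking; 1.0 means the predictions separate the two classes perfectly.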
vitojph/2016progpln
examen/progpln-examen-feb.ipynb
mit
tweets = [] RUTA = '' for line in open(RUTA).readlines(): tweets.append(line.split('\t')) """ Explanation: Programming for Natural Language Processing Exam Degree in Linguistics and Applied Languages, UCM February 9, 2017 tl;dr We are going to analyze a collection of tweets in English published during a football game. Context Last Sunday saw the 51st edition of the Super Bowl, the grand final of the NFL American football championship. The game pitted the New England Patriots (the favorites, from the East Coast, led by Tom Brady) against the Atlanta Falcons (the challengers, from the South, led by Matt Ryan). As in any final, the outcome was unpredictable beforehand and, over a single game, either team could win. But that day's match turned out to be unforgettable, because it began with the underdog sweeping aside the favorite and with a Brady who could not get anything right. At halftime, the scoreboard showed an unexpected 3 - 28 and everything suggested the Falcons would win their first ring. But in the second half Brady came back to life... and his team started scoring again and again... with the Falcons knocked out. The Patriots managed to turn the score around and won their fifth Super Bowl, 34 - 28. Brady was named MVP of the game and hailed as the best quarterback in history. As you can imagine, all these swings will give us plenty to work with when analyzing a corpus of Twitter messages. During the first half, you can expect to find messages in favor of Atlanta and jibes at New England and its players, who were far from sharp. But at the end of the game, after the comeback, the opinions and the mockery will switch direction. Since both Tom Brady and his coach, Bill Belichick, had publicly declared their preference for Donald Trump during the presidential election, it is very likely that you will find messages about this and mentions of Democrats and Republicans.
Finally, Lady Gaga performed during the halftime show, and she stirs passions in her own way too, so it is likely that there will be mentions of other queens of music and comparisons with past performances. The data The file 2017-superbowl-tweets.tsv, located in the /opt/textos/ directory, contains a chronologically ordered sample of messages written in English and published before, during, and after the game. All the messages contain the hashtag #superbowl. Make a copy of this file in the notebooks directory of your personal workspace. The file is actually a table with four tab-separated columns, containing lines (one per tweet) with the following format: tweet_id publication_date_and_time tweet_author message_text The next cell lets you open the file for reading and load the messages into the tweets list. Modify the code so that the path points to your local copy of the file. End of explanation """ ultimo_tweet = tweets[-1] print('id =>', ultimo_tweet[0]) print('fecha =>', ultimo_tweet[1]) print('autor =>', ultimo_tweet[2]) print('texto =>', ultimo_tweet[3]) """ Explanation: Note the structure of the list: it is a list of tuples with four elements. You can check whether the file has loaded properly in the following cell: End of explanation """ # write your code below """ Explanation: Down to business From here on you can carry out different kinds of analysis. Add as many cells as you need to try, for example: computing various statistics over the collection: number of messages, message length, presence of hashtags and emojis, etc. number of user mentions, frequency of mentions, frequency of authors computing statistics about users: mentions, messages per user, etc.
computing statistics about the hashtags computing statistics about the URLs present in the messages computing statistics about the emojis and emoticons in the messages automatically extracting the named entities that appear in the messages, and their frequency processing the messages to extract and analyze opinions: computing the subjectivity and polarity of the messages extracting the named entities that stir the most passion - who are the most loved and the most hated - according to the polarity of the messages checking whether the polarity of some entity changes radically as the game goes on anything else you can think of :-P Remember that you have at your disposal the Natural Language Processing libraries we used during the course, and that you can use your class notes and any other material you find on the internet. If you need an extra library, let me know and we will install it right away. You can also use command-line tools (either from this notebook or by connecting over SSH). It's your turn. Good luck! ;-) End of explanation """
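As a concrete starting point for the hashtag statistics suggested above, here is one possible sketch using only the standard library; it assumes tweets is the list of four-field rows built earlier, and the #\w+ regex is a deliberately rough approximation of what counts as a hashtag:

```python
import re
from collections import Counter

def hashtag_counts(tweets):
    # tweets: list of [id, timestamp, author, text] rows, as loaded above
    counter = Counter()
    for row in tweets:
        text = row[3]
        counter.update(tag.lower() for tag in re.findall(r'#\w+', text))
    return counter

# a tiny made-up sample in the same four-column shape as the TSV rows
sample = [
    ['1', '2017-02-05 23:30', 'fan_a', 'What a comeback! #SuperBowl #Patriots'],
    ['2', '2017-02-05 23:31', 'fan_b', 'Heartbroken. #superbowl #Falcons'],
]
print(hashtag_counts(sample).most_common(3))
```

The same Counter pattern works for user mentions (@\w+), authors, or URLs; only the regex and the field you read change.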
hail-is/hail
notebook/worker/resources/Hail-Workshop-Notebook.ipynb
mit
print('Hello, world') """ Explanation: Hail workshop This notebook will introduce the following concepts: Using Jupyter notebooks effectively Loading genetic data into Hail General-purpose data exploration functionality Plotting functionality Quality control of sequencing data Running a Genome-Wide Association Study (GWAS) Rare variant burden tests Hail on Jupyter From https://jupyter.org: "The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more." In the last year, the Jupyter development team released Jupyter Lab, an integrated environment for data, code, and visualizations. If you've used R Studio, this is the closest thing that works in Python (and many other languages!). Why notebooks? Part of what we think is so exciting about Hail is that it has coincided with a larger shift in the data science community. Three years ago, most computational biologists at Broad analyzed genetic data using command-line tools, and took advantage of research compute clusters by explicitly using scheduling frameworks like LSF or Sun Grid Engine. Now, they have the option to use Hail in interactive Python notebooks backed by thousands of cores on public compute clouds like Google Cloud, Amazon Web Services, or Microsoft Azure. Using Jupyter Running cells Evaluate cells using SHIFT + ENTER. Select the next cell and run it. End of explanation """ # This is a code cell my_variable = 5 """ Explanation: Modes Jupyter has two modes, a navigation mode and an editor mode. Navigation mode: <font color="blue"><strong>BLUE</strong></font> cell borders UP / DOWN move between cells ENTER while a cell is selected will move to editing mode. Many letters are keyboard shortcuts! This is a common trap. 
Editor mode: <font color="green"><strong>GREEN</strong></font> cell borders UP / DOWN move within cells before moving between cells. ESC will return to navigation mode. SHIFT + ENTER will evaluate a cell and return to navigation mode. Cell types There are several types of cells in Jupyter notebooks. The two you will see here are Markdown (text) and Code. End of explanation """ print(my_variable) """ Explanation: This is a markdown cell, so even if something looks like code (as below), it won't get executed! my_variable += 1 End of explanation """ import hail as hl from bokeh.io import output_notebook, show """ Explanation: Common gotcha: a code cell turns into markdown This can happen if you are in navigation mode and hit the keyboard shortcut m while selecting a code cell. You can either navigate to Cell &gt; Cell Type &gt; Code through the top menu, or use the keyboard shortcut y to turn it back to code. Tips and tricks Keyboard shortcuts: SHIFT + ENTER to evaluate a cell ESC to return to navigation mode y to turn a markdown cell into code m to turn a code cell into markdown a to add a new cell above the currently selected cell b to add a new cell below the currently selected cell d, d (repeated) to delete the currently selected cell TAB to activate code completion To try this out, create a new cell below this one using b, and print my_variable by starting with print(my and pressing TAB! Common gotcha: the state of your code seems wrong Jupyter makes it easy to get yourself into trouble by executing cells out-of-order, or multiple times. For example, if I declare x: x = 5 Then have a cell that reads: x += 1 And finally: print(x) If you execute these cells in order and once, you'll see the notebook print 6. However, there is nothing stopping you from executing the middle cell ten times, printing 16! Solution If you get yourself into trouble in this way, the solution is to clear the kernel (Python process) and start again from the top.
First, Kernel &gt; Restart &amp; Clear Output &gt; (accept dialog). Second, Cell &gt; Run all above. Set up our Python environment In addition to Hail, we import a few methods from the bokeh plotting library. We'll see examples soon! End of explanation """ hl.init() output_notebook() """ Explanation: Now we initialize Hail and set up Bokeh to display inline in the notebook. End of explanation """ hl.utils.get_1kg('data/') """ Explanation: Download public 1000 Genomes data The workshop materials are designed to work on a small (~20MB) downsampled chunk of the public 1000 Genomes dataset. You can run these same functions on your computer or on the cloud! End of explanation """ ! ls -1 data/ """ Explanation: It is possible to call command-line utilities from Jupyter by prefixing a line with a !: End of explanation """ hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True) """ Explanation: Part 1: Explore genetic data with Hail Import data from VCF The Variant Call Format (VCF) is a common file format for representing genetic data collected on multiple individuals (samples). Hail's import_vcf function can read this format. However, VCF is a text format that is easy for humans but very bad for computers. The first thing we do is write to a Hail native file format, which is much faster! End of explanation """ mt = hl.read_matrix_table('data/1kg.mt') """ Explanation: Read 1KG into Hail We represent genetic data as a Hail MatrixTable, and name our variable mt to indicate this. End of explanation """ mt.describe() """ Explanation: What is a MatrixTable? Let's describe it! The describe method prints the schema, that is, the fields in the dataset and their types. You can see: - numeric types: - integers (int32, int64), e.g. 5 - floating point numbers (float32, float64), e.g. 5.5 or 3e-8 - strings (str), e.g. "Foo" - boolean values (bool) e.g. True - collections: - arrays (array), e.g. [1,1,2,3] - sets (set), e.g. {1,3} - dictionaries (dict), e.g. 
{'Foo': 5, 'Bar': 10} - genetic data types: - loci (locus), e.g. [GRCh37] 1:10000 or [GRCh38] chr1:10024 - genotype calls (call), e.g. 0/2 or 1|0 End of explanation """ mt.count() """ Explanation: count MatrixTable.count returns a tuple with the number of rows (variants) and number of columns (samples). End of explanation """ mt.s.show(5) mt.locus.show(5) """ Explanation: show There is no mt.show() method, but you can show individual fields like the sample ID, s, or the locus. End of explanation """ hl.summarize_variants(mt) """ Explanation: <font color="brightred"><strong>Exercise: </strong></font> show other fields You can see the names of fields above. show() the first few values for a few of them, making sure to include at least one row field and at least one entry field. Capitalization is important. To print fields inside the info structure, you must add another dot, e.g. mt.info.AN. What do you notice being printed alongside some of the fields? Hail has functions built for genetics For example, hl.summarize_variants prints useful statistics about the genetic variants in the dataset. End of explanation """ mt.aggregate_rows(hl.agg.count_where(mt.alleles == ['A', 'T'])) """ Explanation: Most of Hail's functionality is totally general-purpose! Functions like summarize_variants are built out of Hail's general-purpose data manipulation functionality. We can use Hail to ask arbitrary questions about the data: End of explanation """ snp_counts = mt.aggregate_rows( hl.array(hl.agg.counter(mt.alleles))) snp_counts """ Explanation: Or if we had flight data: flight_data.aggregate( hl.agg.count_where(flight_data.departure_city == 'Boston') ) The counter aggregator makes it possible to see distributions of categorical data, like alleles: End of explanation """ sorted(snp_counts, key=lambda x: x[1]) """ Explanation: By sorting the result in Python, we can recover an interesting bit of biology... 
End of explanation """ sorted(snp_counts, key=lambda x: x[1]) """ Explanation: <font color="brightred"><strong>Question: </strong></font> What is interesting about this distribution? <font color="brightred"><strong>Question: </strong></font> Why do the counts come in pairs? A closer look at GQ The GQ field in our dataset is an interesting one to explore further, and we can use various pieces of Hail's functionality to do so. GQ stands for Genotype Quality, and reflects confidence in a genotype call. It is a non-negative integer truncated at 99, and is the phred-scaled probability of the second-most-likely genotype call. Phred-scaling a value is the following transformation: $\quad Phred(x) = -10 * \log_{10}(x)$ Example: $\quad p_{0/0} = 0.9899$ $\quad p_{0/1} = 0.01$ $\quad p_{1/1} = 0.001$ In this case, $\quad GQ = -10 * \log_{10} (0.01) = 20$ Higher GQ values indicate higher confidence. $GQ=10$ is 90% confidence, $GQ=20$ is 99% confidence, $GQ=30$ is 99.9% confidence, and so on. End of explanation """ mt.aggregate_entries(hl.agg.stats(mt.GQ)) """ Explanation: Using our equation above, the mean value indicates about 99.9% confidence. But it's not generally a good idea to draw conclusions just based on a mean and standard deviation... It is possible to build more complicated queries using small pieces. We can use hl.agg.filter to compute conditional statistics: End of explanation """ mt.aggregate_entries(hl.agg.filter(mt.GT.is_het(), hl.agg.stats(mt.GQ))) """ Explanation: To look at GQ at genotypes that are not heterozygous, we need to add only one character (~): End of explanation """ mt.aggregate_entries(hl.agg.filter(~mt.GT.is_het(), hl.agg.stats(mt.GQ))) """ Explanation: There are often many ways to accomplish something in Hail. We could have done these both together (and more efficiently!)
using hl.agg.group_by: End of explanation """ p = hl.plot.histogram( mt.GQ, bins=100) show(p) """ Explanation: Of course, the best way to understand a distribution is to look at it! End of explanation """ ! head data/1kg_annotations.txt """ Explanation: <font color="brightred"><strong>Exercise: </strong></font> What's going on here? Investigate! Hint: try copying some of the cells above and looking at DP, the sequencing depth, as well as GQ. The ratio between the two may also be interesting... Hint: if you want to plot a filtered GQ distribution, you can use something like: p = hl.plot.histogram(mt.filter_entries(mt.GT.is_het()).GQ, bins=100) Remember that you can create a new cell using keyboard shortcuts A or B in navigation mode. Part 2: Annotation and quality control Integrate sample information We're building toward a genome-wide association test in part 3, but we don't just need genetic data to do a GWAS -- we also need phenotype data! Luckily, our hl.utils.get_1kg function also downloaded some simulated phenotype data. This is a text file: End of explanation """ sa = hl.import_table('data/1kg_annotations.txt', impute=True, key='Sample') """ Explanation: We can import it as a Hail Table with hl.import_table. We call it "sa" for "sample annotations". End of explanation """ sa.describe() sa.show() """ Explanation: While we can see the names and types of fields in the logging messages, we can also describe and show this table: End of explanation """ mt = mt.annotate_cols(pheno = sa[mt.s]) """ Explanation: Add sample metadata into our 1KG MatrixTable It's short and easy: End of explanation """ mt.describe() """ Explanation: What's going on here? Understanding what's going on here is a bit more difficult. To understand, we need to understand a few pieces: 1. annotate methods In Hail, annotate methods refer to adding new fields. MatrixTable's annotate_cols adds new column fields. MatrixTable's annotate_rows adds new row fields. 
MatrixTable's annotate_entries adds new entry fields.
Table's annotate adds new row fields.
In the above cell, we are adding a new column field called "pheno". This field should be the values in our table sa associated with the sample ID s in our MatrixTable - that is, this is performing a join.
Python uses square brackets to look up values in dictionaries:
d = {'foo': 5, 'bar': 10}
d['foo']
You should think of this in much the same way - for each column of mt, we are looking up the fields in sa using the sample ID s.
End of explanation
"""

mt.describe()
"""
Explanation: <font color="brightred"><strong>Exercise: </strong></font> Query some of these column fields using mt.aggregate_cols.
Some of the aggregators we used earlier:
 - hl.agg.counter
 - hl.agg.stats
 - hl.agg.count_where
Sample QC
We'll start with examples of sample QC.
Hail has the function hl.sample_qc to compute a list of useful statistics about samples from sequencing data.
Click the link above to see the documentation, which lists the fields and their descriptions.
End of explanation
"""

mt = hl.sample_qc(mt)

mt.sample_qc.describe()

p = hl.plot.scatter(x=mt.sample_qc.r_het_hom_var, y=mt.sample_qc.call_rate)
show(p)
"""
Explanation: <font color="brightred"><strong>Exercise: </strong></font> Plot some other fields!
Modify the cell above.
Remember hl.plot.histogram as well!
If you want to start getting fancy, you can plot more complicated expressions -- the ratio between two fields, for instance. 
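To make concrete what two of these per-sample metrics measure, here is a library-free sketch with toy genotype strings. This only illustrates the definitions; it is not Hail's implementation, and the sample IDs and calls are made up.

```python
# Toy genotype calls per sample: "0/0", "0/1", "1/1", or None for missing.
calls = {
    "s1": ["0/0", "0/1", None, "1/1"],
    "s2": ["0/1", "0/1", "0/0", None],
}

def call_rate(gts):
    """Fraction of genotype calls that are non-missing."""
    return sum(g is not None for g in gts) / len(gts)

def r_het_hom_var(gts):
    """Ratio of heterozygous calls to homozygous-alternate calls."""
    het = sum(g == "0/1" for g in gts)
    hom_var = sum(g == "1/1" for g in gts)
    return het / hom_var if hom_var else float("inf")

per_sample = {s: (call_rate(g), r_het_hom_var(g)) for s, g in calls.items()}
```

Hail computes these (and many more) per column in a single pass; the point of the sketch is only what the numbers mean.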
Filter columns using generated QC statistics
End of explanation
"""

mt = mt.filter_cols(mt.sample_qc.dp_stats.mean >= 4)
mt = mt.filter_cols(mt.sample_qc.call_rate >= 0.97)
"""
Explanation: Entry QC
We explored GQ above, and analysts often set thresholds for GQ to filter entries (genotypes).
Another useful metric is allele read balance.
This value is defined by:
$\quad AB = \dfrac{N_{alt}}{N_{ref} + N_{alt}}$
Where $N_{ref}$ is the number of reference reads and $N_{alt}$ is the number of alternate reads.
We want AB to be close to 0 for homozygous-reference calls, close to 0.5 for heterozygous calls, and close to 1 for homozygous-alternate calls:
End of explanation
"""

# call rate before filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))

ab = mt.AD[1] / hl.sum(mt.AD)

filter_condition_ab = (
    hl.case()
    .when(mt.GT.is_hom_ref(), ab <= 0.1)
    .when(mt.GT.is_het(), (ab >= 0.25) & (ab <= 0.75))
    .default(ab >= 0.9)  # hom-var
)

mt = mt.filter_entries(filter_condition_ab)

# call rate after filtering
mt.aggregate_entries(hl.agg.fraction(hl.is_defined(mt.GT)))
"""
Explanation: Variant QC
Hail has the function hl.variant_qc to compute a list of useful statistics about variants from sequencing data.
Once again, click the link above to see the documentation!
End of explanation
"""

mt = hl.variant_qc(mt)

mt.variant_qc.describe()

mt.variant_qc.AF.show()
"""
Explanation: Remove rare sites:
End of explanation
"""

mt = mt.filter_rows(hl.min(mt.variant_qc.AF) > 1e-6)
"""
Explanation: Remove sites far from Hardy-Weinberg equilibrium:
End of explanation
"""

mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 0.005)

# final variant and sample count
mt.count()
"""
Explanation: Part 3: GWAS!
A GWAS is an independent association test performed per variant of a genetic dataset.
We use the same phenotype and covariates, but test the genotypes for each variant separately.
In Hail, the method we use is hl.linear_regression_rows. 
We use the phenotype CaffeineConsumption as our dependent variable, the number of alternate alleles as our independent variable, and no covariates besides an intercept term (that's the 1.0). End of explanation """ p = hl.plot.manhattan(gwas.p_value) show(p) p = hl.plot.qq(gwas.p_value) show(p) """ Explanation: Two of the plots that analysts generally produce are a Manhattan plot and a Q-Q plot. We'll start with the Manhattan plot: End of explanation """ pca_eigenvalues, pca_scores, pca_loadings = hl.hwe_normalized_pca(mt.GT, compute_loadings=True) """ Explanation: Confounded! The Q-Q plot indicates extreme inflation of p-values. If you've done a GWAS before, you've probably included a few other covariates -- age, sex, and principal components. Principal components are a measure of genetic ancestry, and can be used to control for population stratification. We can compute principal components with Hail: End of explanation """ pca_eigenvalues """ Explanation: The eigenvalues reflect the amount of variance explained by each principal component: End of explanation """ pca_scores.describe() pca_scores.show() """ Explanation: The scores are the principal components themselves, computed per sample. End of explanation """ pca_loadings.describe() """ Explanation: The loadings are the contributions to each component for each variant. 
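Conceptually, a sample's score on a principal component is the dot product of its normalized genotype vector with that component's loadings. A toy sketch with made-up numbers follows; this is an illustration of the projection idea, not the actual hwe_normalized_pca computation.

```python
def score_on_component(genotypes, loadings):
    """Project one sample's normalized genotype vector onto one set of loadings."""
    return sum(g * l for g, l in zip(genotypes, loadings))

sample_genotypes = [1.0, -0.5, 0.0]   # made-up normalized calls at 3 variants
pc1_loadings = [0.6, 0.8, 0.0]        # made-up loadings for PC1

pc1_score = score_on_component(sample_genotypes, pc1_loadings)
```

Repeating this for every sample and every component reproduces the scores table; the eigenvalues then say how much variance each component captures.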
End of explanation """ mt = mt.annotate_cols(pca = pca_scores[mt.s]) """ Explanation: We can annotate the principal components back onto mt: End of explanation """ p = hl.plot.scatter(mt.pca.scores[0], mt.pca.scores[1], label=mt.pheno.SuperPopulation) show(p) """ Explanation: Principal components measure ancestry End of explanation """ gwas = hl.linear_regression_rows( y=mt.pheno.CaffeineConsumption, x=mt.GT.n_alt_alleles(), covariates=[1.0, mt.pheno.isFemale, mt.pca.scores[0], mt.pca.scores[1], mt.pca.scores[2]]) p = hl.plot.qq(gwas.p_value) show(p) p = hl.plot.manhattan(gwas.p_value) show(p) """ Explanation: <font color="brightred"><strong>Question: </strong></font> Does your plot match your neighbors'? If not, how is it different? Control confounders and run another GWAS End of explanation """ ! wget https://storage.googleapis.com/hail-tutorial/ensembl_gene_annotations.txt -O data/ensembl_gene_annotations.txt gene_ht = hl.import_table('data/ensembl_gene_annotations.txt', impute=True) gene_ht.show() gene_ht.count() """ Explanation: Part 4: Burden tests GWAS is a great tool for finding associations between common variants and disease, but a GWAS can't hope to find associations between rare variants and disease. Even if we have sequencing data for 1,000,000 people, we won't have the statistical power to link a mutation found in only a few people to any disease. But rare variation has lots of information - especially because statistical genetic theory dictates that rarer variants have, on average, stronger effects on disease per allele. One possible strategy is to group together rare variants with similar predicted consequence. For example, we can group all variants that are predicted to knock out the function of each gene and test the variants for each gene as a group. We will be running a burden test on our common variant dataset to demonstrate the technical side, but we shouldn't hope to find anything here -- especially because we've only got 10,000 variants! 
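The core mechanic of a burden test — pooling qualifying variants per gene into a single per-sample count — can be sketched without Hail. The variant and gene names below are made up.

```python
# Toy data: per (variant, sample) alt-allele counts, and a variant -> gene map.
n_alt = {
    ("v1", "s1"): 1, ("v1", "s2"): 0,
    ("v2", "s1"): 2, ("v2", "s2"): 0,
    ("v3", "s1"): 0, ("v3", "s2"): 1,
}
gene_of = {"v1": "GENE_A", "v2": "GENE_A", "v3": "GENE_B"}

# Per-gene, per-sample burden: number of variants carrying at least one alt allele.
burden = {}
for (variant, sample), n in n_alt.items():
    key = (gene_of[variant], sample)
    burden[key] = burden.get(key, 0) + (1 if n > 0 else 0)
```

The regression step then treats each gene's per-sample count the way a GWAS treats a single variant's genotypes.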
Import gene data We start by importing data about genes. First, we need to download it: End of explanation """ gene_ht = gene_ht.transmute(interval = hl.locus_interval(gene_ht['Chromosome'], gene_ht['Gene start'], gene_ht['Gene end'])) gene_ht = gene_ht.key_by('interval') """ Explanation: Create an interval key End of explanation """ mt = mt.annotate_rows(gene_info = gene_ht[mt.locus]) mt.gene_info.show() """ Explanation: Annotate variants using these intervals End of explanation """ burden_mt = ( mt .group_rows_by(gene = mt.gene_info['Gene name']) .aggregate(n_variants = hl.agg.count_where(mt.GT.n_alt_alleles() > 0)) ) burden_mt.describe() """ Explanation: Aggregate genotypes per gene There is no hl.burden_test function -- instead, a burden test is the composition of two modular pieces of Hail functionality: group_rows_by / aggregate hl.linear_regression_rows. While this might be a few more lines of code to write than hl.burden_test, it means that you can flexibly specify the genotype aggregation however you like. Using other tools , you may have a few ways to aggregate, but if you want to do something different you are out of luck! End of explanation """ burden_results = hl.linear_regression_rows( y=burden_mt.pheno.CaffeineConsumption, x=burden_mt.n_variants, covariates=[1.0, burden_mt.pheno.isFemale, burden_mt.pca.scores[0], burden_mt.pca.scores[1], burden_mt.pca.scores[2]]) """ Explanation: What is burden_mt? It is a gene-by-sample matrix (compare to mt, a variant-by-sample matrix). It has one row field, the gene. It has one entry field, n_variants. It has all the column fields from mt. Run linear regression per gene This should look familiar! End of explanation """ burden_results.order_by(burden_results.p_value).show() """ Explanation: Sorry, no hl.plot.manhattan for genes! Instead, we can sort by p-value and print: End of explanation """
FederatedAI/FATE
doc/tutorial/pipeline/pipeline_tutorial_upload.ipynb
apache-2.0
!pipeline --help
"""
Explanation: Pipeline Upload Data Tutorial
install
Pipeline is distributed along with fate_client.
bash
pip install fate_client
To use Pipeline, we need to first specify which FATE Flow Service to connect to. Once fate_client is installed, one can find a cmd entry point named pipeline:
End of explanation
"""

!pipeline init --ip 127.0.0.1 --port 9380
"""
Explanation: Assume we have a FATE Flow Service at 127.0.0.1:9380 (the default in standalone mode), then exec
End of explanation
"""

from pipeline.backend.pipeline import PipeLine
"""
Explanation: upload data
Before starting a modeling task, the data to be used should be uploaded. Typically, a party is a cluster which includes multiple nodes. Thus, when we upload the data, it will be allocated across those nodes.
End of explanation
"""

pipeline_upload = PipeLine().set_initiator(role='guest', party_id=9999).set_roles(guest=9999)
"""
Explanation: Make a pipeline instance:
- initiator:
    * role: guest
    * party: 9999
- roles:
    * guest: 9999
note that only local party id is needed. 
End of explanation
"""

partition = 4
"""
Explanation: Define partitions for data storage
End of explanation
"""

dense_data_guest = {"name": "breast_hetero_guest", "namespace": f"experiment"}
dense_data_host = {"name": "breast_hetero_host", "namespace": f"experiment"}
tag_data = {"name": "breast_hetero_host", "namespace": f"experiment"}
"""
Explanation: Define table name and namespace, which will be used in FATE job configuration
End of explanation
"""

import os  # needed for os.path.join below

data_base = "/workspace/FATE/"
pipeline_upload.add_upload_data(file=os.path.join(data_base, "examples/data/breast_hetero_guest.csv"),
                                table_name=dense_data_guest["name"],             # table name
                                namespace=dense_data_guest["namespace"],         # namespace
                                head=1, partition=partition)                     # data info

pipeline_upload.add_upload_data(file=os.path.join(data_base, "examples/data/breast_hetero_host.csv"),
                                table_name=dense_data_host["name"],
                                namespace=dense_data_host["namespace"],
                                head=1, partition=partition)

pipeline_upload.add_upload_data(file=os.path.join(data_base, "examples/data/breast_hetero_host.csv"),
                                table_name=tag_data["name"],
                                namespace=tag_data["namespace"],
                                head=1, partition=partition)
"""
Explanation: Now, we add data to be uploaded
End of explanation
"""

pipeline_upload.upload(drop=1)
"""
Explanation: We can then upload data
End of explanation
"""
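As a side note on the partition setting used earlier: how FATE assigns uploaded rows to partitions is internal to the framework, but the general idea of stable hash partitioning can be sketched in plain Python. The helper below is hypothetical, not FATE's actual scheme.

```python
import hashlib

def partition_of(key, n_partitions=4):
    """Map a row key to a partition index with a stable (run-independent) hash."""
    # hashlib gives the same digest in every process, unlike Python's
    # built-in hash(), which is salted per run for strings.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_partitions

assignment = {k: partition_of(k) for k in ("id_0", "id_1", "id_2", "id_3")}
```

A stable hash guarantees the same row lands in the same partition on every upload, which is what makes distributed joins on the key possible later.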
tensorflow/hub
examples/colab/image_feature_vector.ipynb
apache-2.0
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2018 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ import collections import io import math import os import random from six.moves import urllib from IPython.display import clear_output, Image, display, HTML import tensorflow.compat.v1 as tf tf.disable_v2_behavior() import tensorflow_hub as hub import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn.metrics as sk_metrics import time """ Explanation: Classify Flowers with Transfer Learning <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/image_feature_vector"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a 
href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/image_feature_vector.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> Have you ever seen a beautiful flower and wondered what kind of flower it is? Well, you're not the first, so let's build a way to identify the type of flower from a photo! For classifying images, a particular type of deep neural network, called a convolutional neural network has proved to be particularly powerful. However, modern convolutional neural networks have millions of parameters. Training them from scratch requires a lot of labeled training data and a lot of computing power (hundreds of GPU-hours or more). We only have about three thousand labeled photos and want to spend much less time, so we need to be more clever. We will use a technique called transfer learning where we take a pre-trained network (trained on about a million general images), use it to extract features, and train a new layer on top for our own task of classifying images of flowers. Setup End of explanation """ FLOWERS_DIR = './flower_photos' TRAIN_FRACTION = 0.8 RANDOM_SEED = 2018 def download_images(): """If the images aren't already downloaded, save them to FLOWERS_DIR.""" if not os.path.exists(FLOWERS_DIR): DOWNLOAD_URL = 'http://download.tensorflow.org/example_images/flower_photos.tgz' print('Downloading flower images from %s...' 
% DOWNLOAD_URL) urllib.request.urlretrieve(DOWNLOAD_URL, 'flower_photos.tgz') !tar xfz flower_photos.tgz print('Flower photos are located in %s' % FLOWERS_DIR) def make_train_and_test_sets(): """Split the data into train and test sets and get the label classes.""" train_examples, test_examples = [], [] shuffler = random.Random(RANDOM_SEED) is_root = True for (dirname, subdirs, filenames) in tf.gfile.Walk(FLOWERS_DIR): # The root directory gives us the classes if is_root: subdirs = sorted(subdirs) classes = collections.OrderedDict(enumerate(subdirs)) label_to_class = dict([(x, i) for i, x in enumerate(subdirs)]) is_root = False # The sub directories give us the image files for training. else: filenames.sort() shuffler.shuffle(filenames) full_filenames = [os.path.join(dirname, f) for f in filenames] label = dirname.split('/')[-1] label_class = label_to_class[label] # An example is the image file and it's label class. examples = list(zip(full_filenames, [label_class] * len(filenames))) num_train = int(len(filenames) * TRAIN_FRACTION) train_examples.extend(examples[:num_train]) test_examples.extend(examples[num_train:]) shuffler.shuffle(train_examples) shuffler.shuffle(test_examples) return train_examples, test_examples, classes # Download the images and split the images into train and test sets. download_images() TRAIN_EXAMPLES, TEST_EXAMPLES, CLASSES = make_train_and_test_sets() NUM_CLASSES = len(CLASSES) print('\nThe dataset has %d label classes: %s' % (NUM_CLASSES, CLASSES.values())) print('There are %d training images' % len(TRAIN_EXAMPLES)) print('there are %d test images' % len(TEST_EXAMPLES)) """ Explanation: The flowers dataset The flowers dataset consists of images of flowers with 5 possible class labels. When training a machine learning model, we split our data into training and test datasets. We will train the model on our training data and then evaluate how well the model performs on data it has never seen - the test set. 
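The split itself is a simple, deterministic recipe — shuffle with a fixed seed, then cut at the train fraction. A self-contained sketch of the idea follows (the notebook's make_train_and_test_sets applies the same pattern within each flower class):

```python
import random

def train_test_split(examples, train_fraction=0.8, seed=2018):
    """Shuffle with a fixed seed, then cut into train and test portions."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_fraction)
    return shuffled[:n_train], shuffled[n_train:]

train, test = train_test_split(range(10))
```

Fixing the seed makes the split reproducible across runs, so train and test never leak into each other between sessions.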
Let's download our training and test examples (it may take a while) and split them into train and test sets. Run the following two cells: End of explanation """ #@title Show some labeled images def get_label(example): """Get the label (number) for given example.""" return example[1] def get_class(example): """Get the class (string) of given example.""" return CLASSES[get_label(example)] def get_encoded_image(example): """Get the image data (encoded jpg) of given example.""" image_path = example[0] return tf.gfile.GFile(image_path, 'rb').read() def get_image(example): """Get image as np.array of pixels for given example.""" return plt.imread(io.BytesIO(get_encoded_image(example)), format='jpg') def display_images(images_and_classes, cols=5): """Display given images and their labels in a grid.""" rows = int(math.ceil(len(images_and_classes) / cols)) fig = plt.figure() fig.set_size_inches(cols * 3, rows * 3) for i, (image, flower_class) in enumerate(images_and_classes): plt.subplot(rows, cols, i + 1) plt.axis('off') plt.imshow(image) plt.title(flower_class) NUM_IMAGES = 15 #@param {type: 'integer'} display_images([(get_image(example), get_class(example)) for example in TRAIN_EXAMPLES[:NUM_IMAGES]]) """ Explanation: Explore the data The flowers dataset consists of examples which are labeled images of flowers. Each example contains a JPEG flower image and the class label: what type of flower it is. Let's display a few images together with their labels. End of explanation """ LEARNING_RATE = 0.01 tf.reset_default_graph() # Load a pre-trained TF-Hub module for extracting features from images. We've # chosen this particular module for speed, but many other choices are available. image_module = hub.Module('https://tfhub.dev/google/imagenet/mobilenet_v2_035_128/feature_vector/2') # Preprocessing images into tensors with size expected by the image module. 
encoded_images = tf.placeholder(tf.string, shape=[None]) image_size = hub.get_expected_image_size(image_module) def decode_and_resize_image(encoded): decoded = tf.image.decode_jpeg(encoded, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) return tf.image.resize_images(decoded, image_size) batch_images = tf.map_fn(decode_and_resize_image, encoded_images, dtype=tf.float32) # The image module can be applied as a function to extract feature vectors for a # batch of images. features = image_module(batch_images) def create_model(features): """Build a model for classification from extracted features.""" # Currently, the model is just a single linear layer. You can try to add # another layer, but be careful... two linear layers (when activation=None) # are equivalent to a single linear layer. You can create a nonlinear layer # like this: # layer = tf.layers.dense(inputs=..., units=..., activation=tf.nn.relu) layer = tf.layers.dense(inputs=features, units=NUM_CLASSES, activation=None) return layer # For each class (kind of flower), the model outputs some real number as a score # how much the input resembles this class. This vector of numbers is often # called the "logits". logits = create_model(features) labels = tf.placeholder(tf.float32, [None, NUM_CLASSES]) # Mathematically, a good way to measure how much the predicted probabilities # diverge from the truth is the "cross-entropy" between the two probability # distributions. For numerical stability, this is best done directly from the # logits, not the probabilities extracted from them. cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=labels) cross_entropy_mean = tf.reduce_mean(cross_entropy) # Let's add an optimizer so we can train the network. 
optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE) train_op = optimizer.minimize(loss=cross_entropy_mean) # The "softmax" function transforms the logits vector into a vector of # probabilities: non-negative numbers that sum up to one, and the i-th number # says how likely the input comes from class i. probabilities = tf.nn.softmax(logits) # We choose the highest one as the predicted class. prediction = tf.argmax(probabilities, 1) correct_prediction = tf.equal(prediction, tf.argmax(labels, 1)) # The accuracy will allow us to eval on our test set. accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) """ Explanation: Build the model We will load a TF-Hub image feature vector module, stack a linear classifier on it, and add training and evaluation ops. The following cell builds a TF graph describing the model and its training, but it doesn't run the training (that will be the next step). End of explanation """ # How long will we train the network (number of batches). NUM_TRAIN_STEPS = 100 #@param {type: 'integer'} # How many training examples we use in each step. TRAIN_BATCH_SIZE = 10 #@param {type: 'integer'} # How often to evaluate the model performance. 
EVAL_EVERY = 10 #@param {type: 'integer'} def get_batch(batch_size=None, test=False): """Get a random batch of examples.""" examples = TEST_EXAMPLES if test else TRAIN_EXAMPLES batch_examples = random.sample(examples, batch_size) if batch_size else examples return batch_examples def get_images_and_labels(batch_examples): images = [get_encoded_image(e) for e in batch_examples] one_hot_labels = [get_label_one_hot(e) for e in batch_examples] return images, one_hot_labels def get_label_one_hot(example): """Get the one hot encoding vector for the example.""" one_hot_vector = np.zeros(NUM_CLASSES) np.put(one_hot_vector, get_label(example), 1) return one_hot_vector with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(NUM_TRAIN_STEPS): # Get a random batch of training examples. train_batch = get_batch(batch_size=TRAIN_BATCH_SIZE) batch_images, batch_labels = get_images_and_labels(train_batch) # Run the train_op to train the model. train_loss, _, train_accuracy = sess.run( [cross_entropy_mean, train_op, accuracy], feed_dict={encoded_images: batch_images, labels: batch_labels}) is_final_step = (i == (NUM_TRAIN_STEPS - 1)) if i % EVAL_EVERY == 0 or is_final_step: # Get a batch of test examples. test_batch = get_batch(batch_size=None, test=True) batch_images, batch_labels = get_images_and_labels(test_batch) # Evaluate how well our model performs on the test set. 
      test_loss, test_accuracy, test_prediction, correct_predicate = sess.run(
          [cross_entropy_mean, accuracy, prediction, correct_prediction],
          feed_dict={encoded_images: batch_images, labels: batch_labels})
      print('Test accuracy at step %s: %.2f%%' % (i, (test_accuracy * 100)))

def show_confusion_matrix(test_labels, predictions):
  """Compute confusion matrix and normalize."""
  confusion = sk_metrics.confusion_matrix(
    np.argmax(test_labels, axis=1), predictions)
  # keepdims=True so that each row is divided by its own row sum.
  confusion_normalized = confusion.astype("float") / confusion.sum(axis=1, keepdims=True)
  axis_labels = list(CLASSES.values())
  ax = sns.heatmap(
      confusion_normalized, xticklabels=axis_labels, yticklabels=axis_labels,
      cmap='Blues', annot=True, fmt='.2f', square=True)
  plt.title("Confusion matrix")
  plt.ylabel("True label")
  plt.xlabel("Predicted label")

show_confusion_matrix(batch_labels, test_prediction)
"""
Explanation: Train the network
Now that our model is built, let's train it and see how it performs on our test set.
End of explanation
"""

incorrect = [
    (example, CLASSES[prediction])
    for example, prediction, is_correct in zip(test_batch, test_prediction, correct_predicate)
    if not is_correct
]

display_images(
  [(get_image(example), "prediction: {0}\nlabel:{1}".format(incorrect_prediction, get_class(example)))
   for (example, incorrect_prediction) in incorrect[:20]])
"""
Explanation: Incorrect predictions
Let's take a closer look at the test examples that our model got wrong.
Are there any mislabeled examples in our test set?
Is there any bad data in the test set - images that aren't actually pictures of flowers?
Are there images where you can understand why the model made a mistake?
End of explanation
"""
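As an aside, the row normalization that show_confusion_matrix aims for is just dividing each row of the raw count matrix by that row's total, so each row reads as per-true-class prediction fractions. A NumPy-free sketch with toy counts:

```python
def normalize_rows(confusion):
    """Divide each row of a count matrix by its row sum."""
    normalized = []
    for row in confusion:
        row_sum = sum(row)
        normalized.append([cell / row_sum if row_sum else 0.0 for cell in row])
    return normalized

raw = [[8, 2],   # true class 0: 8 predicted as class 0, 2 as class 1
       [1, 9]]   # true class 1: 1 predicted as class 0, 9 as class 1
fractions = normalize_rows(raw)
```

Each diagonal entry of the normalized matrix is then that class's recall, which is why the heatmap's diagonal is the first thing to look at.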
pastas/pasta
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
mit
import numpy as np import pandas as pd import os import matplotlib.pyplot as plt import pastas as ps ps.set_log_level("ERROR") ps.show_versions() """ Explanation: <IMG SRC="https://raw.githubusercontent.com/pastas/pastas/master/doc/_static/logo.png" WIDTH=250 ALIGN="right"> Example 1: Pastas Cookbook recipe This notebook is supplementary material to the following article in Groundwater: Collenteur, R.A., Bakker, M., Caljé, R., Klop, S.A. and Schaars, F. (2019), Pastas: Open Source Software for the Analysis of Groundwater Time Series. Groundwater, 57: 877-885. doi:10.1111/gwat.12925 Please note that the numbers and figures in this Notebook may slightly differ from those in the original publication due to some minor improvements/changes in the software code. In this notebook the Pastas "cookbook" recipe is shown. In this example it is investigated how well the heads measured in a borehole near Kingstown, Rhode Island, US, can be simulated using rainfall and potential evaporation. A transfer function noise (TFN) model using impulse response function is created to simulate the observed heads. The observed heads are obtained from the Ground-Water Climate Response Network (CRN) of the USGS (https://groundwaterwatch.usgs.gov/). The corresponding USGS site id is 412918071321001. The rainfall data is taken from the Global Summary of the Day dataset (GSOD) available at the National Climatic Data Center (NCDC). The rainfall series is obtained from the weather station in Kingston (station number: NCDC WBAN 54796) located at 41.491$^\circ$, -71.541$^\circ$. The evaporation is calculated from the mean temperature obtained from the same USGS station using Thornthwaite's method (Pereira and Pruitt, 2004). Pereira AR, Pruitt WO (2004), Adaptation of the Thornthwaite scheme for estimating daily reference evapotranspiration, Agricultural Water Management 66(3), 251-257 Step 1. Importing the python packages The first step to creating the TFN model is to import the python packages. 
In this notebook two packages are used: the Pastas package, and the Pandas package to import the time series data. Both packages are imported under short aliases for convenience (ps for the Pastas package and pd for the Pandas package). The other packages that are imported are not needed for the analysis but are needed to make the publication figures.
End of explanation
"""

obs = pd.read_csv('obs.csv', index_col='Date', parse_dates=True, squeeze=True) * 0.3048

rain = pd.read_csv('rain.csv', index_col='Date', parse_dates=True, squeeze=True) * 0.3048
rain = rain.asfreq("D", fill_value=0.0)  # There are some nan-values present

evap = pd.read_csv('evap.csv', index_col='Date', parse_dates=True, squeeze=True) * 0.3048
"""
Explanation: Step 2. Reading the time series
The next step is to import the time series data. Three series are used in this example: the observed groundwater head, the rainfall and the evaporation. The data can be read using different methods; in this case the Pandas read_csv method is used to read the csv files. Each file consists of two columns: a date column called 'Date' and a column containing the values for the time series. The index column is the first column and is read as a date format. The heads series are stored in the variable obs, the rainfall in rain and the evaporation in evap. All variables are transformed to SI-units.
End of explanation
"""

ml = ps.Model(obs.loc[::14], name='Kingstown')
"""
Explanation: Step 3. Creating the model
After reading in the time series, a Pastas Model instance can be created. The Model instance is stored in the variable ml and takes two input arguments: the head time series obs, and a model name: "Kingstown".
End of explanation
"""

rm = ps.RechargeModel(rain, evap, name='recharge', rfunc=ps.Gamma)
ml.add_stressmodel(rm)
"""
Explanation: Step 4. 
Adding stress models A RechargeModel instance is created and stored in the variable rm, taking the rainfall and potential evaporation time series as input arguments, as well as a name and a response function. In this example the Gamma response function is used (the Gamma function is available as ps.Gamma). After creation the recharge stress model instance is added to the model. End of explanation """ ml.solve(tmax="2014"); # Print some information on the model fit for the validation period print("\nThe R2 and the RMSE in the validation period are ", ml.stats.rsq(tmin="2015", tmax="2019").round(2), "and", ml.stats.rmse(tmin="2015", tmax="2019").round(2), ", respectively.") """ Explanation: Step 5. Solving the model The model parameters are estimated by calling the solve method of the Model instance. In this case the default settings are used (for all but the tmax argument) to solve the model. Several options can be specified in the .solve method, for example; a tmin and tmax or the type of solver used (this defaults to a least squares solver, ps.LeastSquares). This solve method prints a fit report with basic information about the model setup and the results of the model fit. End of explanation """ ml.plots.results(tmax="2018"); """ Explanation: Step 6. Visualizing the results The final step of the "cookbook" recipe is to visualize the results of the TFN model. The Pastas package has several build in plotting methods, available through the ml.plots instance. Here the .results plotting method is used. This method plots an overview of the model results, including the simulation and the observations of the groundwater head, the optimized model parameters, the residuals and the noise, the contribution of each stressmodel, and the step response function for each stressmodel. End of explanation """ ml.plots.diagnostics(); """ Explanation: 7. 
Diagnosing the noise series The diagnostics plot can be used to interpret how well the noise follows a normal distribution and suffers from autocorrelation (or not). End of explanation """ rfunc = ps.Gamma(cutoff=0.999) p = [100, 1.5, 15] b = np.append(0, rfunc.block(p)) s = rfunc.step(p) rfunc2 = ps.Hantush(cutoff=0.999) p2 = [-100, 30, 0.7] b2 = np.append(0, rfunc2.block(p2)) s2 = rfunc2.step(p2) # Make a figure of the step and block response fig, [ax1, ax2] = plt.subplots(1, 2, sharex=True, figsize=(8, 4)) ax1.plot(b) ax1.plot(b2) ax1.set_ylabel("block response") ax1.set_xlabel("days") ax1.legend(["Gamma", "Hantush"], handlelength=1.3) ax1.axhline(0.0, linestyle="--", c="k") ax2.plot(s) ax2.plot(s2) ax2.set_xlim(0,100) ax2.set_ylim(-105, 105) ax2.set_ylabel("step response") ax2.set_xlabel("days") ax2.axhline(0.0, linestyle="--", c="k") ax2.annotate('', xy=(95, 100), xytext=(95, 0), arrowprops={'arrowstyle': '<->'}) ax2.annotate('A', xy=(95, 100), xytext=(85, 50)) ax2.annotate('', xy=(95, -100), xytext=(95, 0), arrowprops={'arrowstyle': '<->'}) ax2.annotate('A', xy=(95, 100), xytext=(85, -50)) plt.tight_layout() """ Explanation: Make plots for publication In the next codeblocks the Figures used in the Pastas paper are created. 
The following figures are created: Figure of the impulse and step respons for the scaled Gamma response function Figure of the stresses used in the model Figure of the modelfit and the step response Figure of the model fit as returned by Pastas Figure of the model residuals and noise Figure of the Autocorrelation function Make a plot of the impulse and step response for the Gamma and Hantush functions End of explanation """ fig, [ax1, ax2, ax3] = plt.subplots(3,1, sharex=True, figsize=(8, 7)) ax1.plot(obs, 'k.',label='obs', markersize=2) ax1.set_ylabel('head (m)', labelpad=0) ax1.set_yticks([-4, -3, -2]) plot_rain = ax2.plot(rain * 1000, color='k', label='prec', linewidth=1) ax2.set_ylabel('rain (mm/d)', labelpad=-5) ax2.set_xlabel('Date'); ax2.set_ylim([0,150]) ax2.set_yticks(np.arange(0, 151, 50)) plot_evap = ax3.plot(evap * 1000,'k', label='evap', linewidth=1) ax3.set_ylabel('evap (mm/d)') ax3.tick_params('y') ax3.set_ylim([0,8]) plt.xlim([pd.Timestamp('2003'), pd.Timestamp('2019')]) ax2.set_xlabel("") ax3.set_xlabel("year") """ Explanation: Make a plot of the stresses used in the model End of explanation """ # Create the main plot fig, ax = plt.subplots(figsize=(16,5)) ax.plot(obs, marker=".", c="grey", linestyle=" ") ax.plot(obs.loc[:"2013":14], marker="x", markersize=7, c="C3", linestyle=" ", mew=2) ax.plot(ml.simulate(tmax="2019"), c="k") plt.ylabel('head (m)') plt.xlabel('year') plt.title("") plt.xlim(pd.Timestamp('2003'), pd.Timestamp('2019')) plt.ylim(-4.7, -1.6) plt.yticks(np.arange(-4, -1, 1)) # Create the arrows indicating the calibration and validation period ax.annotate("calibration period", xy=(pd.Timestamp("2003-01-01"), -4.6), xycoords='data', xytext=(300, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("", xy=(pd.Timestamp("2014-01-01"), -4.6), xycoords='data', xytext=(-230, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("validation", 
xy=(pd.Timestamp("2014-01-01"), -4.6), xycoords='data', xytext=(150, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("", xy=(pd.Timestamp("2019-01-01"), -4.6), xycoords='data', xytext=(-85, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") plt.legend(["observed head", "used for calibration","simulated head"], loc=2, numpoints=3) # Create the inset plot with the step response ax2 = plt.axes([0.66, 0.65, 0.22, 0.2]) s = ml.get_step_response("recharge") ax2.plot(s, c="k") ax2.set_ylabel("response") ax2.set_xlabel("days", labelpad=-15) ax2.set_xlim(0, s.index.size) ax2.set_xticks([0, 300]) """ Explanation: Make a custom figure of the model fit and the estimated step response End of explanation """ from matplotlib.font_manager import FontProperties font = FontProperties() #font.set_size(10) font.set_weight('normal') font.set_family('monospace') font.set_name("courier new") plt.text(-1, -1, str(ml.fit_report()), fontproperties=font) plt.axis('off') plt.tight_layout() """ Explanation: Make a figure of the fit report End of explanation """ fig, ax1 = plt.subplots(1,1, figsize=(8, 3)) ml.residuals(tmax="2019").plot(ax=ax1, c="k") ml.noise(tmax="2019").plot(ax=ax1, c="C0") plt.xticks(rotation=0, horizontalalignment='center') ax1.set_ylabel('(m)') ax1.set_xlabel('year') ax1.legend(["residuals", "noise"], ncol=2) ax = ps.stats.plot_acf(ml.noise(), figsize=(9, 2)) ax.set_ylabel('ACF (-)') ax.set_xlabel('lag (days)') plt.legend(["95% confidence interval"], loc=(0.0, 1.05)) plt.xlim(0, 370) plt.ylim(-0.25, 0.25) plt.title("") plt.grid() ml.stats.diagnostics() q, p = ps.stats.stoffer_toloi(ml.noise()) print("The hypothesis that there is significant autocorrelation is:", p) """ Explanation: Make a Figure of the noise, residuals and autocorrelation End of explanation """
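The step response plotted in the inset above is just the running integral of the Gamma impulse response. A minimal numpy sketch of that relation, using hypothetical parameters `A`, `n` and `a` (not the calibrated values of this model):

```python
import numpy as np
from math import gamma

def gamma_impulse(t, A, n, a):
    # Impulse response of the scaled Gamma function:
    # theta(t) = A * t**(n - 1) * exp(-t / a) / (a**n * Gamma(n))
    return A * t ** (n - 1) * np.exp(-t / a) / (a ** n * gamma(n))

# Hypothetical parameters, chosen only for illustration
A, n, a = 800.0, 1.5, 100.0
t = np.linspace(1e-6, 5000.0, 5000)
impulse = gamma_impulse(t, A, n, a)

# Step response = cumulative integral of the impulse response
# (trapezoidal rule); it rises monotonically towards the gain A.
step = np.concatenate(([0.0], np.cumsum((impulse[1:] + impulse[:-1]) / 2 * np.diff(t))))
```

In closed form this integral equals `A` times the regularized lower incomplete gamma function `P(n, t/a)`, which is the shape behind curves like the one returned by `ml.get_step_response`.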
gschivley/Supply-Curve
Supply Curve example.ipynb
mit
%matplotlib inline

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.path as path

from palettable.colorbrewer.qualitative import Paired_11
"""
Explanation: Supply curve figures of coal and petroleum resources
Sources I used for parts of this, and other things that might be helpful:
- Patches to make a histogram
- Text properties and layout
- Add text with arrow line pointing somewhere (annotate) (you can probably get rid of the arrow head for just a line)
I had trouble finding a default color set with 11 colors for the petroleum data. Eventually discovered Palettable, and am using Paired_11 here. It is not my first choice of a color palette, but is one of the few defaults with that many colors. I generally recommend using one of the default seaborn color palettes such as muted or deep. Check out the seaborn tutorial for a basic primer on colors.
End of explanation
"""

fn = 'nature14016-f1.xlsx'
sn = 'Coal data'

coal_df = pd.read_excel(fn, sn)
"""
Explanation: Load coal data
Data is from: McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
Coal data from Figure 1c.
End of explanation
"""

coal_df.head()

coal_df.tail()

names = coal_df['Resource'].values
amount = coal_df['Quantity (ZJ)'].values
cost = coal_df['Cost (2010$/GJ)'].values
"""
Explanation: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
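If the costs were not already sorted, the curve could still be built by sorting on unit cost and accumulating quantities. A small sketch with made-up numbers (not the McGlade & Ekins data):

```python
import numpy as np

# Made-up resources, out of cost order on purpose
cost = np.array([1.8, 0.9, 3.2, 1.2])     # $/GJ
amount = np.array([5.0, 10.0, 2.0, 7.0])  # ZJ

order = np.argsort(cost)               # cheapest resources come first
cost_sorted = cost[order]
amount_sorted = amount[order]
cum_amount = np.cumsum(amount_sorted)  # right edge of each step on the x-axis
```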
End of explanation """ name_set = set(names) name_set color_dict = {} for i, area in enumerate(name_set): color_dict[area] = i #Assigning index position as value to resource name keys color_dict sns.color_palette('deep', n_colors=4, desat=.8) sns.palplot(sns.color_palette('deep', n_colors=4, desat=.8)) """ Explanation: Create a set of names to use for assigning colors and creating the legend I'm not being picky about the order of colors. End of explanation """ def color_match(name): return sns.color_palette('deep', n_colors=4, desat=.8)[color_dict[name]] """ Explanation: Define a function that returns the integer color choice based on the region name Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series. End of explanation """ color = coal_df['Resource'].map(color_match) color.head() """ Explanation: color has rgb values for each resource End of explanation """ # get the corners of the rectangles for the histogram left = np.cumsum(np.insert(amount, 0, 0)) right = np.cumsum(np.append(amount, .01)) bottom = np.zeros(len(left)) top = np.append(cost, 0) """ Explanation: Define the edges of the patch objects that will be drawn on the plot End of explanation """ sns.set_style('whitegrid') fig, ax = plt.subplots(figsize=(10,5)) # we need a (numrects x numsides x 2) numpy array for the path helper # function to build a compound path for i, name in enumerate(names): XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]], [bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T # get the Path object barpath = path.Path.make_compound_path_from_polys(XY) # make a patch out of it (a patch is the shape drawn on the plot) patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.2') ax.add_patch(patch) #Create patch elements for a custom legend #The legend function expects multiple patch elements as a list patch = 
[patches.Patch(color=sns.color_palette('deep', 4, 0.8)[color_dict[i]], label=i) for i in color_dict]

# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/GJ)', size=14)
plt.xlabel('Quantity (ZJ)', size=14)
ax.set_xlim(left[0], right[-1])
ax.set_ylim(bottom.min(), 12)
ax.yaxis.grid(False)
ax.xaxis.grid(False)

# remove top and right spines (box lines around figure)
sns.despine()

# Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12)

plt.savefig('Example Supply Curve (coal).png')
"""
Explanation: Make the figure (coal)
End of explanation
"""

fn = 'nature14016-f1.xlsx'
sn = 'Oil data'

df = pd.read_excel(fn, sn)
"""
Explanation: Load oil data
Data is from: McGlade, C & Ekins, P. The geographical distribution of fossil fuels unused when limiting global warming to 2 °C. Nature 517, 187–190. (2015) doi:10.1038/nature14016
I'm using data from Figure 1a.
End of explanation
"""

df.head()

df.tail()
"""
Explanation: Fortunately the Cost values are already sorted in ascending order. Cost will be on the y-axis, and cumulative recoverable resources will be on the x-axis.
End of explanation
"""

names = df['Resource'].values
amount = df['Quantity (Gb)'].values
cost = df['Cost (2010$/bbl)'].values
"""
Explanation: Create arrays of values with easy-to-type names
End of explanation
"""

name_set = set(names)

name_set

color_dict = {}
for i, area in enumerate(name_set):
    color_dict[area] = i  # Assigning index position as value to resource name keys

color_dict

sns.palplot(Paired_11.mpl_colors)
"""
Explanation: Create a set of names to use for assigning colors and creating the legend
I'm not being picky about the order of colors.
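One caveat worth noting: iterating over a Python set has no guaranteed order, so the name-to-color assignment can change between runs. Sorting the unique names first makes the mapping reproducible; a tiny sketch with hypothetical resource names:

```python
# Hypothetical resource names with duplicates, as they might appear in the data column
resources = ['NGL', 'Arctic', 'NGL', 'Heavy oil', 'Arctic']

# Sorting the unique names fixes the name -> palette-index assignment across runs
color_dict = {name: i for i, name in enumerate(sorted(set(resources)))}
```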
End of explanation
"""

def color_match(name):
    return Paired_11.mpl_colors[color_dict[name]]

color_match('NGL')

color = df['Resource'].map(color_match)
"""
Explanation: Define a function that returns the color for a given resource name
Use the function color_match to create a Series with rgb colors that will be used for each box in the figure. Do this using the map operation, which applies a function to each element in a Pandas Series.
End of explanation
"""

# get the corners of the rectangles for the histogram
left = np.cumsum(np.insert(amount, 0, 0))
right = np.cumsum(np.append(amount, .01))
bottom = np.zeros(len(left))
top = np.append(cost, 0)
"""
Explanation: Define the edges of the patch objects that will be drawn on the plot
End of explanation
"""

sns.set_style('whitegrid')
fig, ax = plt.subplots(figsize=(10,5))

# we need a (numrects x numsides x 2) numpy array for the path helper
# function to build a compound path
for i, name in enumerate(names):
    XY = np.array([[left[i:i+1], left[i:i+1], right[i:i+1], right[i:i+1]],
                   [bottom[i:i+1], top[i:i+1], top[i:i+1], bottom[i:i+1]]]).T

    # get the Path object
    barpath = path.Path.make_compound_path_from_polys(XY)

    # make a patch out of it (a patch is the shape drawn on the plot)
    patch = patches.PathPatch(barpath, facecolor=color[i], ec='0.8')
    ax.add_patch(patch)

# Create patch elements for a custom legend
# The legend function expects multiple patch elements as a list
patch = []
for i in color_dict:
    patch.append(patches.Patch(color=Paired_11.mpl_colors[color_dict[i]], label=i))

# Axis labels/limits, remove horizontal gridlines, etc
plt.ylabel('Cost (2010$/bbl)', size=14)
plt.xlabel('Quantity (Gb)', size=14)
ax.set_xlim(left[0], right[-2])
ax.set_ylim(bottom.min(), 120)
ax.yaxis.grid(False)
ax.xaxis.grid(False)

# remove top and right spines (box lines around figure)
sns.despine()

# Add the custom legend
plt.legend(handles=patch, loc=2, fontsize=12, ncol=2) plt.savefig('Example Supply Curve.png') """ Explanation: Make the figure End of explanation """
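For reference, the same stepped supply curve can be produced without building Path objects by handing matplotlib's bar() precomputed left edges and widths. A sketch of the edge arithmetic with made-up numbers:

```python
import numpy as np

# Hypothetical quantities and costs, already sorted by cost
amount = np.array([4.0, 6.0, 3.0, 2.0])
cost = np.array([10.0, 25.0, 40.0, 70.0])

# Left edge of each bar: cumulative quantity of all cheaper resources
left = np.concatenate(([0.0], np.cumsum(amount)[:-1]))

# ax.bar(left, cost, width=amount, align='edge', edgecolor='0.2')
# would then draw the same stepped curve as the Path/patches approach.
```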
machinelearningnanodegree/stanford-cs231
solutions/kvn219/assignment2/FullyConnectedNets.ipynb
mit
# As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ Element-wise relative error for floating point comparison. Input: - x: a numpy array of type float. - y: a numpy array of type float. Returns: - highest relative error. """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.iteritems(): print '%s: ' % k, v.shape """ Explanation: Fully-Connected Neural Nets In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures. In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. 
The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this: ```python def layer_forward(x, w): """ Receive inputs x and weights w """ # Do some computations ... z = # ... some intermediate value # Do some more computations ... out = # the output cache = (x, w, z, out) # Values we need to compute gradients return out, cache ``` The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this: ```python def layer_backward(dout, cache): """ Receive derivative of loss with respect to outputs and cache, and compute derivative with respect to inputs. """ # Unpack cache values x, w, z, out = cache # Use values in cache to compute derivatives dx = # Derivative of loss with respect to x dw = # Derivative of loss with respect to w return dx, dw ``` After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures. In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks. End of explanation """ # Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. The error should be around 1e-9. 
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
"""

# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)

dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)

_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)

# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
"""

# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)

out, _ = relu_forward(x)
correct_out = np.array([[ 0.,          0.,          0.,          0.,        ],
                        [ 0.,          0.,          0.04545455,  0.13636364,],
                        [ 0.22727273,  0.31818182,  0.40909091,  0.5,       ]])

# Compare your output with ours.
The error should be around 1e-8 print 'Testing relu_forward function:' print 'difference: ', rel_error(out, correct_out) """ Explanation: ReLU layer: forward Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following: End of explanation """ x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be around 1e-12 print 'Testing relu_backward function:' print 'dx error: ', rel_error(dx_num, dx) """ Explanation: ReLU layer: backward Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking: End of explanation """ from cs231n.layer_utils import affine_relu_forward, affine_relu_backward x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) print 'Testing affine_relu_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) """ Explanation: "Sandwich" layers There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py. 
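The composition is mechanical: the sandwich forward calls each layer's forward and bundles the two caches, and the backward unbundles them in reverse order. A self-contained sketch with toy affine and ReLU layers (illustrative only, not the course's implementations):

```python
import numpy as np

def affine_forward(x, w, b):
    # Flatten each example, apply the linear map, keep inputs for backward
    out = x.reshape(x.shape[0], -1).dot(w) + b
    return out, (x, w, b)

def affine_backward(dout, cache):
    x, w, b = cache
    x2d = x.reshape(x.shape[0], -1)
    dx = dout.dot(w.T).reshape(x.shape)
    dw = x2d.T.dot(dout)
    db = dout.sum(axis=0)
    return dx, dw, db

def relu_forward(x):
    return np.maximum(0, x), x

def relu_backward(dout, cache):
    return dout * (cache > 0)

def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return affine_backward(da, fc_cache)

x = np.random.randn(3, 4)
w = np.random.randn(4, 5)
b = np.zeros(5)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(np.ones_like(out), cache)
```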
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass: End of explanation """ num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be 1e-9 print 'Testing svm_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8 print '\nTesting softmax_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) """ Explanation: Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py. You can make sure that the implementations are correct by running the following: End of explanation """ N, D, H, C = 3, 5, 50, 7 X = np.random.randn(N, D) y = np.random.randint(C, size=N) std = 1e-2 model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std) print 'Testing initialization ... ' W1_std = abs(model.params['W1'].std() - std) b1 = model.params['b1'] W2_std = abs(model.params['W2'].std() - std) b2 = model.params['b2'] assert W1_std < std / 10, 'First layer weights do not seem right' assert np.all(b1 == 0), 'First layer biases do not seem right' assert W2_std < std / 10, 'Second layer weights do not seem right' assert np.all(b2 == 0), 'Second layer biases do not seem right' print 'Testing test-time forward pass ... 
' model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H) model.params['b1'] = np.linspace(-0.1, 0.9, num=H) model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C) model.params['b2'] = np.linspace(-0.9, 0.1, num=C) X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T scores = model.loss(X) correct_scores = np.asarray( [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096], [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143], [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]]) scores_diff = np.abs(scores - correct_scores).sum() assert scores_diff < 1e-6, 'Problem with test-time forward pass' print 'Testing training loss (no regularization)' y = np.asarray([0, 5, 1]) loss, grads = model.loss(X, y) correct_loss = 3.4702243556 assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss' model.reg = 1.0 loss, grads = model.loss(X, y) correct_loss = 26.5948426952 assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss' for reg in [0.0, 0.7]: print 'Running numeric gradient check with reg = ', reg model.reg = reg loss, grads = model.loss(X, y) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) """ Explanation: Two-layer network In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations. Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. 
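The net's loss method ends in the softmax loss provided earlier; a numerically stable version can be sketched like this (a sketch of what that layer computes, not the course's source):

```python
import numpy as np

def softmax_loss(scores, y):
    # Shift by the row max for numerical stability, then take log-softmax
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    N = scores.shape[0]
    loss = -log_probs[np.arange(N), y].mean()
    # Gradient: softmax probabilities minus one-hot labels, averaged over N
    dscores = np.exp(log_probs)
    dscores[np.arange(N), y] -= 1
    dscores /= N
    return loss, dscores

# Sanity check: with near-zero scores the loss should be about log(C)
rng = np.random.default_rng(0)
scores = 0.001 * rng.standard_normal((50, 10))
y = rng.integers(0, 10, size=50)
loss, dscores = softmax_loss(scores, y)
```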
You can run the cell below to test your implementation. End of explanation """ model = TwoLayerNet() solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves at least # # 50% accuracy on the validation set. # ############################################################################## model.reg = 0.25 model.hidden_dim = 250 solver_data = {} solver_config = {} solver_data['X_train'] = data['X_train'] solver_data['X_test'] = data['X_test'] solver_data['y_train'] = data['y_train'] solver_data['y_test'] = data['y_test'] solver_config['learning_rate'] = 0.001 solver = Solver(model, data, optim_config=solver_config, print_every=4900, batch_size=1000, lr_decay = 0.975, num_epochs=25) solver.train() ############################################################################## # END OF YOUR CODE # ############################################################################## # Run this cell to visualize training loss and train / val accuracy plt.subplot(2, 1, 1) plt.title('Training loss') plt.plot(solver.loss_history, 'o') plt.xlabel('Iteration') plt.subplot(2, 1, 2) plt.title('Accuracy') plt.plot(solver.train_acc_history, '-o', label='train') plt.plot(solver.val_acc_history, '-o', label='val') plt.plot([0.5] * len(solver.val_acc_history), 'k--') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.gcf().set_size_inches(15, 12) plt.show() """ Explanation: Solver In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class. Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set. 
End of explanation """ N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print 'Running check with reg = ', reg model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64) loss, grads = model.loss(X, y) print 'Initial loss: ', loss for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])) """ Explanation: Multilayer network Next you will implement a fully-connected network with an arbitrary number of hidden layers. Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py. Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon. Initial loss and gradient check As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable? For gradient checking, you should expect to see errors around 1e-6 or less. End of explanation """ # TODO: Use a three-layer Net to overfit 50 training examples. 
num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 5e-3 weight_scale = 50e-3 model = FullyConnectedNet([100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() """ Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs. End of explanation """ # TODO: Use a five-layer Net to overfit 50 training examples. num_train = 50 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } learning_rate = 5e-3 weight_scale = 50e-3 model = FullyConnectedNet([100, 100, 100, 100], weight_scale=weight_scale, dtype=np.float64) solver = Solver(model, small_data, print_every=10, num_epochs=20, batch_size=25, update_rule='sgd', optim_config={ 'learning_rate': learning_rate, } ) solver.train() plt.plot(solver.loss_history, 'o') plt.title('Training loss history') plt.xlabel('Iteration') plt.ylabel('Training loss') plt.show() """ Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs. 
End of explanation """ from cs231n.optim import sgd_momentum N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-3, 'velocity': v} next_w, _ = sgd_momentum(w, dw, config=config) expected_next_w = np.asarray([ [ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789], [ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526], [ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263], [ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]]) expected_velocity = np.asarray([ [ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158], [ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105], [ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053], [ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]]) print 'next_w error: ', rel_error(next_w, expected_next_w) print 'velocity error: ', rel_error(expected_velocity, config['velocity']) """ Explanation: Inline question: Did you notice anything about the comparative difficulty of training the three-layer net vs training the five layer net? Answer: [FILL THIS IN] Update rules So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD. SGD+Momentum Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochstic gradient descent. Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8. 
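The update itself is only two lines; a plain-numpy sketch (the course version keeps the velocity inside a config dict):

```python
import numpy as np

def sgd_momentum(w, dw, v, learning_rate=1e-2, momentum=0.9):
    # Velocity accumulates an exponentially decaying sum of past gradients
    v = momentum * v - learning_rate * dw
    return w + v, v

# Same inputs as the check above
w = np.linspace(-0.4, 0.6, num=20).reshape(4, 5)
dw = np.linspace(-0.6, 0.4, num=20).reshape(4, 5)
v = np.linspace(0.6, 0.9, num=20).reshape(4, 5)
next_w, next_v = sgd_momentum(w, dw, v, learning_rate=1e-3)
```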
End of explanation """ num_train = 4000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } solvers = {} for update_rule in ['sgd', 'sgd_momentum']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={'learning_rate': 1e-2,}, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster. 
End of explanation """ # Test RMSProp implementation; you should see errors less than 1e-7 from cs231n.optim import rmsprop N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'cache': cache} next_w, _ = rmsprop(w, dw, config=config) expected_next_w = np.asarray([ [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247], [-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774], [ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447], [ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]]) expected_cache = np.asarray([ [ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321], [ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377], [ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936], [ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'cache error: ', rel_error(expected_cache, config['cache']) # Test Adam implementation; you should see errors around 1e-7 or less from cs231n.optim import adam N, D = 4, 5 w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D) dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D) m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D) v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D) config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5} next_w, _ = adam(w, dw, config=config) expected_next_w = np.asarray([ [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977], [-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929], [ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969], [ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]]) expected_v = np.asarray([ [ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,], [ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,], [ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,], [ 0.54159906, 
0.53110598, 0.52061845, 0.51013645, 0.49966, ]]) expected_m = np.asarray([ [ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474], [ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316], [ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158], [ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]]) print 'next_w error: ', rel_error(expected_next_w, next_w) print 'v error: ', rel_error(expected_v, config['v']) print 'm error: ', rel_error(expected_m, config['m']) """ Explanation: RMSProp and Adam RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients. In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below. [1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012). [2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015. 
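For reference, the heart of Adam with bias correction can be sketched in a few lines (the course version keeps m, v and t inside a config dict and increments t before applying the update):

```python
import numpy as np

def adam(w, dw, m, v, t, learning_rate=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the 1-based update count used for bias correction
    m = beta1 * m + (1 - beta1) * dw          # first-moment estimate
    v = beta2 * v + (1 - beta2) * dw ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - learning_rate * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# First step from zero moments: the step size is roughly learning_rate * sign(dw)
w0 = np.array([0.0])
dw = np.array([0.5])
w1, m1, v1 = adam(w0, dw, np.zeros(1), np.zeros(1), t=1)
```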
End of explanation """ learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3} for update_rule in ['adam', 'rmsprop']: print 'running with ', update_rule model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2) solver = Solver(model, small_data, num_epochs=5, batch_size=100, update_rule=update_rule, optim_config={ 'learning_rate': learning_rates[update_rule] }, verbose=True) solvers[update_rule] = solver solver.train() print plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') for update_rule, solver in solvers.iteritems(): plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label=update_rule) plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label=update_rule) plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label=update_rule) for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() """ Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules: End of explanation """ best_model = None ################################################################################ # TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might # # batch normalization and dropout useful. Store your best model in the # # best_model variable. 
# ################################################################################ solvers = {} best_val = 0.0 hidden_dims = [100, 100, 100] learning_rates = np.random.uniform(1e-5, 1e-3, 5) # 0.00062226430196974704 learning_rate_decays = [.95, 0.75] for update_rule in learning_rates: for decay in learning_rate_decays: labels = (update_rule, decay) model = FullyConnectedNet(hidden_dims, num_classes=10, use_batchnorm=True, weight_scale=5e-2) solver = Solver(model, data, num_epochs=5, batch_size=200, update_rule='adam', lr_decay = decay, optim_config={'learning_rate': update_rule}, verbose=False) solver.train() solvers[labels] = solver if solver.best_val_acc > best_val: best_val = solver.best_val_acc best_model = solver.model best_solver = solver print("done!") ################################################################################ # END OF YOUR CODE # ################################################################################ """ Explanation: Train a good model! Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net. If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets. You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models. 
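One aside on the search above: np.random.uniform(1e-5, 1e-3, 5) samples learning rates linearly, so on a log scale almost all draws land in the upper part of the range. Sampling log-uniformly usually covers the candidate range more evenly. A small stdlib-only sketch of that idea (illustrative, not part of the assignment code):

```python
import math
import random

def log_uniform(low, high, n, seed=0):
    # Sample exponents uniformly, so the values are uniform in log space.
    rng = random.Random(seed)
    lo, hi = math.log10(low), math.log10(high)
    return [10 ** rng.uniform(lo, hi) for _ in range(n)]

rates = log_uniform(1e-5, 1e-3, 5)
```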
End of explanation
"""

y_test_pred = np.argmax(best_model.loss(data['X_test']), axis=1)
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1)

print 'Validation set accuracy: ', (y_val_pred == data['y_val']).mean()
print 'Test set accuracy: ', (y_test_pred == data['y_test']).mean()
# Validation set accuracy: 0.534
# Test set accuracy: 0.52

# best lr_decay and lr
print(best_solver.lr_decay, best_solver.optim_config)
# (0.95, {'learning_rate': 0.00062226430196974704})
"""
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation
"""
zzsza/Datascience_School
10. 기초 확률론3 - 확률 분포 모형/04. 가우시안 정규 분포 (파이썬 버전).ipynb
mit
mu = 0
std = 1
rv = sp.stats.norm(mu, std)
rv
"""
Explanation: Gaussian Normal Distribution
The Gaussian normal distribution, often simply called the normal distribution, is the probability model used most often when modelling quantities that arise in natural phenomena. The normal distribution is defined by just two parameters, the mean $\mu$ and the variance $\sigma^2$, and its probability density function (pdf) is:
$$ \mathcal{N}(x; \mu, \sigma^2) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right) $$
A normal distribution with mean 0 and variance 1 ($\mu=0$, $\sigma^2=1$) is called the standard normal distribution.
Simulating the normal distribution with SciPy
The norm class in SciPy's stats subpackage represents the normal distribution. The loc argument sets the mean and the scale argument sets the standard deviation.
End of explanation
"""

xx = np.linspace(-5, 5, 100)
plt.plot(xx, rv.pdf(xx))
plt.ylabel("p(x)")
plt.title("pdf of normal distribution")
plt.show()
"""
Explanation: The pdf method computes the probability density function (pdf).
End of explanation
"""

np.random.seed(0)
x = rv.rvs(100)
x

sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
"""
Explanation: To draw samples by simulation, use the rvs method.
End of explanation
"""

np.random.seed(0)
x = np.random.randn(100)
plt.figure(figsize=(7,7))
sp.stats.probplot(x, plot=plt)
plt.axis("equal")
plt.show()
"""
Explanation: Q-Q Plot
The normal distribution has the most useful properties of the various continuous probability distributions and is used very widely. Checking whether a random variable's distribution is normal (a normality test) is therefore one of the most important statistical analyses. Before applying a formal normality test, however, a Q-Q plot gives a simple visual check of normality.
A Q-Q (quantile-quantile) plot is a visual tool for comparing the shape of a sample's distribution with that of the normal distribution. It pairs, at each quantile, the value of the normal distribution with the corresponding value of the given sample, and draws the pairs as a scatter plot.
A Q-Q plot is constructed as follows:

Sort the sample values by size.
Compute the quantile number of each sample value.
Find the normal-distribution value that has the same quantile.
Treat the sample value and the matching normal value as a pair and draw it as a single point in 2D space.
Repeat steps 2 to 4 for every sample value to build up the scatter-plot-like figure.
Draw a 45-degree reference line for comparison.

SciPy's stats subpackage provides the probplot command for computing and drawing Q-Q plots.
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html
By default, probplot only returns the Q-Q information for the data sample passed as an argument and does not draw a chart. To draw the chart, pass the matplotlib.pylab module object, or an Axes class object, via the plot argument.
When a data sample that follows the normal distribution is drawn as a Q-Q plot, the points take the form of a straight line, as follows.
End of explanation
"""

np.random.seed(0)
x = np.random.rand(100)
plt.figure(figsize=(7,7))
sp.stats.probplot(x, plot=plt)
plt.ylim(-0.5, 1.5)
plt.show()
"""
Explanation: When a data sample that does not follow the normal distribution is drawn as a Q-Q plot, the points take a curved shape rather than a straight line.
End of explanation
"""

xx = np.linspace(-2, 2, 100)
plt.figure(figsize=(6,9))
for i, N in enumerate([1, 2, 10]):
    X = np.random.rand(1000, N) - 0.5
    S = X.sum(axis=1)/np.sqrt(N)
    plt.subplot(3, 2, 2*i+1)
    sns.distplot(S, bins=10, kde=False, norm_hist=True)
    plt.xlim(-2, 2)
    plt.yticks([])
    plt.subplot(3, 2, 2*i+2)
    sp.stats.probplot(S, plot=plt)
plt.tight_layout()
plt.show()
"""
Explanation: Central Limit Theorem
Many phenomena occurring in the real world can be modelled with the normal distribution. One reason for this is the Central Limit Theorem.
The Central Limit Theorem describes the fact that, whatever distribution the individual random variables follow, when there are several of them their (scaled) sum has a distribution close to the normal distribution. In more mathematical terms:
Let $X_1, X_2, \ldots, X_n$ be mutually independent random variables with identical distributions, each with mean $\mu$ and variance $\sigma^2$. Their sum
$$ S_n = X_1+\cdots+X_n $$
is likewise a random variable. As $n$ increases, the distribution of this random variable $S_n$ converges to the following normal distribution:
$$ \dfrac{S_n}{\sqrt{n}} \xrightarrow{d}\ N(\mu,\;\sigma^2) $$
Let us use a simulation to see whether the Central Limit Theorem holds.
End of explanation
"""
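The same convergence can also be checked numerically, without plotting. Below is a stdlib-only sketch (no numpy): Uniform(-0.5, 0.5) has mean 0 and variance 1/12, so the scaled sums $S_n/\sqrt{n}$ should have a sample mean near 0 and a sample variance near 1/12.

```python
import math
import random

def scaled_sums(n_vars, n_samples, seed=0):
    # Each sample is S_n / sqrt(n) with X_i ~ Uniform(-0.5, 0.5).
    rng = random.Random(seed)
    return [sum(rng.uniform(-0.5, 0.5) for _ in range(n_vars)) / math.sqrt(n_vars)
            for _ in range(n_samples)]

samples = scaled_sums(30, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Expect mean close to 0 and var close to 1/12, i.e. about 0.0833.
```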
dwaithe/ONBI_image_analysis
day4_machineLearning/2015 clustering with ipython practical.ipynb
gpl-2.0
#This line is very important: (It turns on inline the visuals!) %pylab inline import csv #You will need these also. These functions extract the data from the results file. def load_file_return_data(filepath): data =[] with open(filepath,'r') as f: reader=csv.reader(f,delimiter='\t') headers = reader.next() for line in reader: data.append(line) name_list = list(enumerate(headers)) return data, name_list def return_data_with_header(header,data,name_list): for idx, name in name_list: if name == header: ind_to_take = idx data_col = [] for line in data: data_col.append(float(line[ind_to_take])) return np.array(data_col) """ Explanation: Clustering with ipython You may work with other members of the course if you like. This practical is not assessed although some of the skills will be required for your practical project next week. If you are stuck at any stage please ask a demonstrator. Dominic Waithe 2015 (c) End of explanation """ #You insert the local path where your exported imageJ #where 'Results.txt' is currently written. data,name_list = load_file_return_data('Results.txt') print name_list #Insert the header you wish to extract here: header1 = 'Area' data_col1 = return_data_with_header(header1,data,name_list) #Insert the header you wish to extract here: header2 = 'Mean' data_col2 = return_data_with_header(header2,data,name_list) """ Explanation: Reading the data from the Results.txt The first stage is to read your Fiji exported data into python. End of explanation """ plot(data_col1,data_col2, 'o') title('Area Versuses mean intensity plot') xlabel('Area') ylabel('Mean intensity') """ Explanation: Plotting the data It is always handy to plot relationships End of explanation """ #To cluster the data we start by using the kmeans algorithm. 
from sklearn.cluster import KMeans,SpectralClustering
#We initialise a kmeans model in the variable kmeans:
kmeans = KMeans(n_clusters=2, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='auto')
#For more information:
#http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html

#Now we reorganise the data into a format which is compatible with the kmeans algorithm.
data_arr = np.zeros((data_col2.__len__(),2))
data_arr[:,0] = np.array(data_col1)
data_arr[:,1] = np.array(data_col2)

#Now we use this data to fit the kmeans model in an unsupervised fashion.
kmeans.fit(data_arr)
out = kmeans.predict(data_arr)

#Now we plot the two clusters we have tried to find.
plot(data_col1[out == 1],data_col2[out ==1], 'go')
plot(data_col1[out == 0],data_col2[out ==0], 'ro')
title('Area versus mean intensity clustering plot')
xlabel('Area')
ylabel('Mean intensity')

#The clustering algorithm should have highlighted a good proportion
#of the cells which are both large and green. These represent the
#dendritic cells in the image.
"""
Explanation: Looking for clusters in the data.
Is there structure in the data we can utilise to isolate cells further?
We want to use the intensity of the cells and their area to further isolate the cells which we segmented.
End of explanation
"""
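For reference, the core of what kmeans.fit iterates is Lloyd's algorithm: assign each point to its nearest centre, then move each centre to the mean of its assigned points. A stdlib-only 1-D sketch with hand-picked initial centres (illustrative only; sklearn's KMeans adds k-means++ seeding, multiple restarts and tolerance-based stopping on top of this loop):

```python
def kmeans_1d(points, centres, n_iter=20):
    labels = [0] * len(points)
    for _ in range(n_iter):
        # Assignment step: index of the nearest centre for each point.
        labels = [min(range(len(centres)), key=lambda j: (p - centres[j]) ** 2)
                  for p in points]
        # Update step: move each centre to the mean of its members.
        for j in range(len(centres)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centres[j] = sum(members) / len(members)
    return centres, labels

centres, labels = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 5.0])
# The centres settle near the two obvious groups, around 1.0 and 9.0.
```

Because k-means uses Euclidean distance, features on very different scales (such as area versus mean intensity) can dominate one another, so standardising the columns of data_arr before fitting is often worthwhile.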
michal-hradis/CNN_seminar
03/keras_CNN_architectures.ipynb
bsd-3-clause
from tools import readCIFAR, mapLabelsOneHot
# First run ../data/downloadCIFAR.sh
# This reads the dataset
trnData, tstData, trnLabels, tstLabels = readCIFAR('../data/cifar-10-batches-py')

plt.subplot(1, 2, 1)
img = collage(trnData[:16])
print(img.shape)
plt.imshow(img)

plt.subplot(1, 2, 2)
img = collage(tstData[:16])
plt.imshow(img)
plt.show()

# Convert categorical labels to one-hot encoding which
# is needed by categorical_crossentropy in Keras.
# This is not universal. The loss can be easily implemented
# with category IDs as labels.
trnLabels = mapLabelsOneHot(trnLabels)
tstLabels = mapLabelsOneHot(tstLabels)
print('One-hot trn. labels shape:', trnLabels.shape)
"""
Explanation: Read CIFAR10 dataset
End of explanation
"""

trnData = trnData.astype(np.float32) / 255.0 - 0.5
tstData = tstData.astype(np.float32) / 255.0 - 0.5

from keras.layers import Input, Reshape, Dense, Dropout, Flatten, BatchNormalization
from keras.layers import Activation, Conv2D, MaxPooling2D, PReLU
from keras.models import Model
from keras import regularizers

w_decay = 0.0001
w_reg = regularizers.l2(w_decay)
"""
Explanation: Normalize data
This maps all values in trn. and tst. data to range <-0.5,0.5>. Some kind of value normalization is preferable to provide consistent behavior across different problems and datasets.
End of explanation """ def build_VGG_block(net, channels, layers, prefix): for i in range(layers): net = Conv2D(channels, 3, activation='relu', padding='same', name='{}.{}'.format(prefix, i))(net) net = MaxPooling2D(2, 2, padding="same")(net) return net def build_VGG(input_data, block_channels=[16,32,64], block_layers=[2,2,2], fcChannels=[256,256], p_drop=0.4): net = input_data for i, (cCount, lCount) in enumerate(zip(block_channels, block_layers)): net = build_VGG_block(net, cCount, lCount, 'conv{}'.format(i)) net = Flatten()(net) for i, cCount in enumerate(fcChannels): FC = Dense(cCount, activation='relu', name='fc{}'.format(i)) net = Dropout(rate=p_drop)(FC(net)) net = Dense(10, name='out', activation='softmax')(net) return net def build_VGG_Bnorm_block(net, channels, layers, prefix): for i in range(layers): net = Conv2D(channels, 3, padding='same', name='{}.{}'.format(prefix, i))(net) net = BatchNormalization()(net) net = PReLU()(net) net = MaxPooling2D(2, 2, padding="same")(net) return net def build_VGG_Bnorm(input_data, block_channels=[16,32,64], block_layers=[2,2,2], fcChannels=[256,256], p_drop=0.4): net = input_data for i, (cCount, lCount) in enumerate(zip(block_channels, block_layers)): net = build_VGG_Bnorm_block(net, cCount, lCount, 'conv{}'.format(i)) net = Dropout(rate=0.25)(net) net = Flatten()(net) for i, cCount in enumerate(fcChannels): net = Dense(cCount, name='fc{}'.format(i))(net) net = BatchNormalization()(net) net = PReLU()(net) net = Dropout(rate=p_drop)(net) net = Dense(10, name='out', activation='softmax')(net) return net """ Explanation: VGG net http://www.robots.ox.ac.uk/~vgg/research/very_deep/ End of explanation """ from keras import optimizers from keras.models import Model from keras import losses from keras import metrics input_data = Input(shape=(trnData.shape[1:]), name='data') net = build_VGG_Bnorm(input_data, block_channels=[64,128,256], block_layers=[3,3,3], fcChannels=[320,320], p_drop=0.5) model = Model(inputs=[input_data], 
outputs=[net]) print('Model') model.summary() model.compile( loss=losses.categorical_crossentropy, optimizer=optimizers.Adam(lr=0.001), metrics=[metrics.categorical_accuracy]) """ Explanation: Resnet Inception Build and compile model Create the computation graph of the network and compile a 'model' for optimization inluding loss function and optimizer. End of explanation """ import keras tbCallBack = keras.callbacks.TensorBoard( log_dir='./Graph', histogram_freq=1, write_graph=True, write_images=True) model.fit( x=trnData, y=trnLabels, batch_size=48, epochs=20, verbose=1, validation_data=[tstData, tstLabels], shuffle=True)#, callbacks=[tbCallBack]) """ Explanation: Define TensorBoard callback TensorBoard is able to store network statistics (loss, accuracy, weight histograms, activation histograms, ...) and view them through web interface. To view the statistics, run 'tensorboard --logdir=path/to/log-directory' and go to localhost:6006. End of explanation """ classProb = model.predict(x=tstData[0:2]) print('Class probabilities:', classProb, '\n') loss, acc = model.evaluate(x=tstData, y=tstLabels, batch_size=1024) print() print('loss', loss) print('acc', acc) """ Explanation: Predict and evaluate End of explanation """ classProb = model.predict(x=tstData) print(classProb.shape) correctProb = (classProb * tstLabels).sum(axis=1) wrongProb = (classProb * (1-tstLabels)).max(axis=1) print(correctProb.shape, wrongProb.shape) accuracy = (correctProb > wrongProb).mean() print('Accuracy: ', accuracy) """ Explanation: Compute test accuracy by hand End of explanation """
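In the same spirit as computing accuracy by hand above, the categorical cross-entropy passed to model.compile can be reproduced from class probabilities and one-hot labels. A stdlib-only sketch of the mathematics (Keras additionally clips probabilities away from zero for numerical stability):

```python
import math

def categorical_crossentropy(probs, one_hot):
    # Mean over the batch of -log p(correct class); the dot product with
    # a one-hot row simply selects the probability of the true class.
    per_example = [-math.log(sum(p * t for p, t in zip(row, target)))
                   for row, target in zip(probs, one_hot)]
    return sum(per_example) / len(per_example)

probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1]]
one_hot = [[1, 0, 0],
           [0, 1, 0]]
loss = categorical_crossentropy(probs, one_hot)
# -(log 0.7 + log 0.8) / 2, roughly 0.2899
```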
ES-DOC/esdoc-jupyterhub
notebooks/cams/cmip6/models/cams-csm1-0/toplevel.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cams', 'cams-csm1-0', 'toplevel') """ Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: CAMS Source ID: CAMS-CSM1-0 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:43 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. 
Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.4. Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Coupling ** 5.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. 
Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.5. 
Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt conservation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum conservation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1.
Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Tropospheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.2. 
Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 24.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. 
citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation """
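Every property above follows the same two-call pattern: `set_id` selects a property in the CMIP6 specialisation tree, then one or more `set_value` calls record answers, with ENUM properties restricted to the listed "Valid Choices" and cardinality 1.N properties allowing several values. As a rough pure-Python illustration of that pattern (a sketch only — the real pyesdoc `DOC` client stores and validates values differently, and the mock class and its attributes here are invented for illustration):

```python
class DocMock:
    """Illustrative stand-in for the ES-DOC ``DOC`` object used above.

    Only a sketch of the set_id / set_value pattern; not the pyesdoc API.
    """

    def __init__(self, choices=None):
        # property id -> allowed ENUM values (taken from "Valid Choices")
        self._choices = choices or {}
        self._values = {}    # property id -> list of entered values
        self._current = None

    def set_id(self, prop_id):
        # Select which documented property subsequent values belong to.
        self._current = prop_id

    def set_value(self, value):
        allowed = self._choices.get(self._current)
        if allowed is not None and value not in allowed:
            raise ValueError(f"{value!r} is not a valid choice for {self._current}")
        # Cardinality 1.N properties may be set repeatedly, so accumulate.
        self._values.setdefault(self._current, []).append(value)


# Example: recording one of the listed Valid Choices for CO2 provision.
doc = DocMock(choices={
    'cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision':
        ["N/A", "M", "Y", "E", "ES", "C"],
})
doc.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
doc.set_value("Y")
```

Free-text STRING properties (overview, additional information) have no choices table entry, so any string passes; that mirrors why the template only lists "Valid Choices" comments for ENUM properties.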
rokkamsatyakalyan/Machine_Learning
Stylometry/Bag of Words/BagOfWords.ipynb
gpl-3.0
import pandas as pd df = pd.read_csv('scan1.csv',sep=',', header=None, names=['author_label','ass_num', 'author_writing']) # df = pd.read_csv('bow3.csv',sep=',', header=None, names=['author_label', 'author_writing']) # Print the last 5 rows df = df.drop('ass_num', axis=1) df.tail() # print(len(df['author_writing'][0].split(" "))) """ Explanation: Authorship Attribution In this project, we train our model on the writings of different authors and try to predict the correct author of an unseen text. In this version, we predict authorship using the bag-of-words technique and the Naive Bayes algorithm. Input Data Preprocessing: We received the writing assignments from the soft skills department. This data has many null values (missing assignments) and repeated values (student details and question-related information). We dropped all the unwanted data and missing student assignments, and used the cleaned data to generate a CSV file containing all student information. Reading the data First, we load the data and print the last five data points End of explanation """ print(df.shape) """ Explanation: Check the number of data points. Our dataset contains 1028 tuples and two columns. End of explanation """ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df['author_writing'],df['author_label'],random_state=1) print('Number of rows in the total set: {}'.format(df.shape[0])) print('Number of rows in the training set: {}'.format(X_train.shape[0])) print('Number of rows in the test set: {}'.format(X_test.shape[0])) """ Explanation: Splitting the data into Train and Test sets We need to train the model before testing it, which requires separate training and testing sets. So, we divide the data into train and test sets using the train_test_split function from sklearn.
End of explanation """ from sklearn.feature_extraction.text import CountVectorizer count_vector = CountVectorizer() # Fit the training data and then return the matrix training_data = count_vector.fit_transform(X_train) # Transform the testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer() testing_data = count_vector.transform(X_test) """ Explanation: Applying Bag of Words To apply Naive Bayes to our dataset, we must convert all our data into numeric values, since sklearn can't work with non-numeric values. So, we build a word-frequency matrix of our dataset and apply Bayes' theorem to that, using the CountVectorizer module. End of explanation """ from sklearn.naive_bayes import MultinomialNB naive_bayes = MultinomialNB() naive_bayes.fit(training_data, y_train) # Now that we have trained our model, it's time to test it on the held-out data. predictions = naive_bayes.predict(testing_data) """ Explanation: Now, we apply the Naive Bayes technique to this dataset by importing the MultinomialNB module from sklearn. Here, we use Multinomial Naive Bayes because it works well for classification with discrete features such as word counts. End of explanation """ from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score print('Accuracy score: ', format(accuracy_score(y_test, predictions))) print('Precision score: ', format(precision_score(y_test, predictions,average="weighted"))) print('Recall score: ', format(recall_score(y_test, predictions,average="weighted"))) print('F1 score: ', format(f1_score(y_test, predictions,average="weighted"))) """ Explanation: The performance of our model can be assessed by computing its accuracy, precision, recall and F1 score. End of explanation """
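To make concrete what CountVectorizer is doing under the hood: it learns a vocabulary from the training texts and turns each document into a row of token counts. A rough pure-Python equivalent is sketched below (simplified on purpose — it lowercases and splits on whitespace, whereas sklearn's default tokeniser is regex-based and drops single-character tokens):

```python
from collections import Counter


def bag_of_words(documents):
    """Build a vocabulary and a term-frequency matrix: a simplified
    sketch of what CountVectorizer.fit_transform produces."""
    tokenised = [doc.lower().split() for doc in documents]
    vocabulary = sorted({word for doc in tokenised for word in doc})
    matrix = []
    for doc in tokenised:
        counts = Counter(doc)
        matrix.append([counts.get(word, 0) for word in vocabulary])
    return vocabulary, matrix


vocab, matrix = bag_of_words(["the cat sat", "the cat the dog"])
# vocab  -> ['cat', 'dog', 'sat', 'the']
# matrix -> [[1, 0, 1, 1], [1, 1, 0, 2]]
```

Transforming the test set with the *training* vocabulary (as `count_vector.transform(X_test)` does above) would simply ignore unseen words, which is why the vectorizer is fitted only once, on the training data.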
ajdawson/python_for_climate_scientists
course_content/notebooks/cartopy_intro.ipynb
gpl-3.0
import matplotlib.pyplot as plt import cartopy.crs as ccrs """ Explanation: Cartopy in a nutshell Cartopy is a Python package that provides easy creation of maps, using matplotlib, for the visualisation of geospatial data. In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are usually imported as follows: End of explanation """ ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() plt.show() """ Explanation: Cartopy's matplotlib interface is set up via the projection keyword when constructing a matplotlib Axes / SubAxes instance. The resulting axes instance has new methods, such as the coastlines() method, which are specific to drawing cartographic data: End of explanation """ ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine()) ax.coastlines() plt.show() """ Explanation: Cartopy can draw maps in many different projections, from the very mundane to the more unusual: End of explanation """ ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() ax.set_global() ax.plot([-100, 50], [25, 25], linewidth=4, transform=ccrs.Geodetic()) plt.show() """ Explanation: A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html. To draw cartographic data, we use the standard matplotlib plotting routines with an additional transform keyword argument. The value of the transform argument should be the cartopy coordinate reference system of the data being plotted: End of explanation """ ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() ax.plot([-100, 50], [25, 25], linewidth=4, transform=ccrs.Geodetic()) plt.show() """ Explanation: Notice that unless we specify a map extent (we did so via the set_global() method in this case) the map will zoom into the range of the plotted data.
End of explanation
"""
ax = plt.axes(projection=ccrs.Mercator())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
plt.show()

"""
Explanation: We can add graticule lines and tick labels to the map using the gridlines method (this currently is limited to just a few coordinate reference systems):
End of explanation
"""
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import (LATITUDE_FORMATTER, LONGITUDE_FORMATTER)

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
gl = ax.gridlines(draw_labels=True)
gl.xlocator = mticker.FixedLocator([-180, -60, 60, 180])
gl.yformatter = LATITUDE_FORMATTER
gl.xformatter = LONGITUDE_FORMATTER
plt.show()

"""
Explanation: We can control the specific tick values by using a matplotlib locator object, and the formatting can be controlled with matplotlib formatters:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

x = np.linspace(310, 390, 25)
y = np.linspace(-24, 25, 35)
x2d, y2d = np.meshgrid(x, y)
data = np.cos(np.deg2rad(y2d) * 4) + np.sin(np.deg2rad(x2d) * 4)

"""
Explanation: Cartopy cannot currently label all types of projection, though more work is intended on this functionality in the future.
Exercise 1
The following snippet of code produces coordinate arrays and some data in a rotated pole coordinate system. The coordinate system for the x and y values, which is similar to that found in some limited-area models of Europe, has a projection "north pole" at 177.5°E longitude and 37.5°N latitude.
End of explanation
"""
rotated_pole = ccrs.RotatedPole(pole_latitude=37.5, pole_longitude=177.5)

"""
Explanation: Part 1
Define a cartopy coordinate reference system which represents a rotated pole with a pole latitude of 37.5°N and a pole longitude of 177.5°E.
Part 2
Produce a map, with coastlines, using the coordinate reference system created in Part 1 as the projection. 
Part 3
Produce a map, with coastlines, in a plate carrée (ccrs.PlateCarree()) projection with a pcolormesh of the data generated by the code snippet provided at the beginning of the exercise. Remember that the data is supplied in the rotated coordinate system defined in Part 1.
Understanding projection and transform
It can be easy to get confused about what the projection and transform arguments actually mean. We'll use the rotated pole example to illustrate the effect of each.
End of explanation
"""
ax = plt.axes(projection=rotated_pole)
ax.coastlines()
ax.set_global()
ax.pcolormesh(x, y, data)  # omitted the transform keyword!
plt.show()

"""
Explanation: The core concept here is that the projection of your axes is independent of the coordinate system your data is defined in. The projection argument used when creating plots determines the projection of the resulting plot. The transform argument to plotting functions tells cartopy what coordinate system your data uses.
Let's try making a plot without specifying the transform argument. Since the data happen to be defined in the same coordinate system as we are plotting in, this actually works OK:
End of explanation
"""
ax = plt.axes(projection=rotated_pole)
ax.coastlines()
ax.set_global()
ax.pcolormesh(x, y, data, transform=rotated_pole)
plt.show()

"""
Explanation: Now let's add in the transform keyword when we plot. Notice that the plot doesn't change; this is because the default assumption when the transform argument is not supplied is that the coordinate system matches the projection.
End of explanation
"""
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.set_global()
ax.pcolormesh(x, y, data)  # omitted the transform keyword,
                           # this is now incorrect!
plt.show()

"""
Explanation: Now we'll try this again, omitting the transform argument, but this time using a projection that does not match the coordinate system the data are defined in. 
The data are plotted in the wrong place, because cartopy assumed the coordinate system of the data matched the projection, which is PlateCarree(). End of explanation """ ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() ax.set_global() ax.pcolormesh(x, y, data, transform=rotated_pole) plt.show() """ Explanation: In order to get the correct plot we need to tell cartopy the data are defined in a rotated pole coordinate system: End of explanation """ ax = plt.axes(projection=ccrs.InterruptedGoodeHomolosine()) ax.coastlines() ax.set_global() ax.pcolormesh(x, y, data, transform=rotated_pole) plt.show() """ Explanation: The safest thing to do is always provide the transform keyword regardless of the projection you are using, and avoid letting cartopy make assumptions about your data's coordinate system. Doing so allows you to choose any map projection for your plot and allow cartopy to plot your data where it should be. End of explanation """
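As an aside, the exercise's coordinate arrays can be sanity-checked without cartopy or matplotlib at all — the lines below are just the snippet's own numpy arithmetic, which is handy for confirming which axis is which before calling pcolormesh:

```python
import numpy as np

x = np.linspace(310, 390, 25)
y = np.linspace(-24, 25, 35)
x2d, y2d = np.meshgrid(x, y)  # default 'xy' indexing: x varies along columns
data = np.cos(np.deg2rad(y2d) * 4) + np.sin(np.deg2rad(x2d) * 4)

print(data.shape)  # (35, 25): len(y) rows by len(x) columns
```

Since data is a sum of a cosine and a sine, every value also stays within [-2, 2], which gives a quick check that the degree-to-radian conversion happened as intended.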
landlab/landlab
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
mit
from landlab import RasterModelGrid
import numpy as np

"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Setting watershed boundary conditions on a raster grid
This tutorial illustrates how to set watershed boundary conditions on a raster grid.
Note that a watershed is assumed to have a ring of nodes around the core nodes that are closed boundaries (i.e. no flux can cross these nodes, or more correctly, no flux can cross the faces around the nodes). This means that the nodes on the outer perimeter of the grid will automatically be set to closed boundaries.
By definition, a watershed also has one outlet through which fluxes can pass. Here the outlet is set as the node that has the lowest value, is not a nodata_value node, and is adjacent to at least one closed boundary node. This means that an outlet can be on the outer perimeter of the raster. However, the outlet does not need to be on the outer perimeter of the raster.
The first example uses set_watershed_boundary_condition, which finds the outlet for the user.
First import what we need.
End of explanation
"""
mg1 = RasterModelGrid((5, 5), 1.0)
z1 = mg1.add_ones("topographic__elevation", at="node")
mg1.at_node["topographic__elevation"][2] = 0.0
mg1.at_node["topographic__elevation"]

"""
Explanation: Now we create a 5 by 5 grid with a spacing (dx and dy) of 1. We also create an elevation field with a value of 1. everywhere, except at the outlet, where the elevation is 0. In this case the outlet is in the middle of the bottom row, at location (0,2), and has a node id of 2.
End of explanation
"""
mg1.set_watershed_boundary_condition(z1)

"""
Explanation: The set_watershed_boundary_condition method in RasterModelGrid will find the outlet of the watershed. This method takes the node data, in this case z, and, optionally, the no_data value.
This method sets all nodes that have no_data values to closed boundaries. 
This example does not have any no_data values, which is fine.
In this case, the code will set all of the perimeter nodes as BC_NODE_IS_CLOSED (boundary status 4) in order to create this boundary around the core nodes.
The exception on the perimeter is node 2 (with elevation of 0). Although it is on the perimeter, it has a value and it has the lowest value. So in this case node 2 will be set as BC_NODE_IS_FIXED_VALUE (boundary status 1).
The rest of the nodes are set as CORE_NODE (boundary status 0).
End of explanation
"""
mg1.imshow(mg1.status_at_node, color_for_closed="blue")

"""
Explanation: Check to see that the node statuses were set correctly. imshow will default to not plot the value of BC_NODE_IS_CLOSED nodes, which is why we override this below with the option color_for_closed.
End of explanation
"""
mg2 = RasterModelGrid((5, 5), 10.0)
z2 = mg2.add_ones("topographic__elevation", at="node")
mg2.at_node["topographic__elevation"][1] = 0.0
mg2.at_node["topographic__elevation"]

"""
Explanation: The second example uses set_watershed_boundary_condition_outlet_coords
In this case the user knows the coordinates of the outlet node.
First instantiate a new grid, with new data values.
End of explanation
"""
mg2.set_watershed_boundary_condition_outlet_coords((0, 1), z2)

"""
Explanation: Note that the node with zero elevation, which will be the outlet, is now at location (0,1). Note that even though this grid has a dx & dy of 10., the outlet coords are still (0,1).
Set the boundary conditions.
End of explanation
"""
mg2.imshow(mg2.status_at_node, color_for_closed="blue")

"""
Explanation: Plot grid of boundary status information
End of explanation
"""
mg3 = RasterModelGrid((5, 5), 5.0)
z3 = mg3.add_ones("topographic__elevation", at="node")
mg3.at_node["topographic__elevation"][5] = 0.0
mg3.at_node["topographic__elevation"]

"""
Explanation: The third example uses set_watershed_boundary_condition_outlet_id
In this case the user knows the node id value of the outlet node. 
First instantiate a new grid, with new data values.
End of explanation
"""
mg3.set_watershed_boundary_condition_outlet_id(5, z3)

"""
Explanation: Set boundary conditions with the outlet id. Note that here we know the id of the node that has a value of zero and choose this as the outlet. But the code will not complain if you give it an id value of a node that does not have the smallest data value.
End of explanation
"""
mg3.imshow(mg3.status_at_node, color_for_closed="blue")

"""
Explanation: Another plot to illustrate the results.
End of explanation
"""
from landlab.io import read_esri_ascii

(grid_bijou, z_bijou) = read_esri_ascii("west_bijou_gully.asc", halo=1)

"""
Explanation: The final example uses set_watershed_boundary_condition on a watershed that was exported from Arc.
First import read_esri_ascii and then import the DEM data.
An optional value of halo=1 is used with read_esri_ascii. This puts a perimeter of nodata values around the DEM. This is done just in case there are data values on the edge of the raster. These would have to become closed to set watershed boundary conditions, but in order to avoid that, we add a perimeter to the data.
End of explanation
"""
grid_bijou.imshow(z_bijou)

"""
Explanation: Let's plot the data to see what the topography looks like.
End of explanation
"""
grid_bijou.set_watershed_boundary_condition(z_bijou, 0)

"""
Explanation: In this case the nodata value is zero. This skews the colorbar, but we can at least see the shape of the watershed. Let's set the boundary condition. Remember we don't know the outlet id.
End of explanation
"""
grid_bijou.imshow(grid_bijou.status_at_node, color_for_closed="blue")

"""
Explanation: Now we can look at the boundary status of the nodes to see where the found outlet was.
End of explanation
"""
grid_bijou.imshow(z_bijou)

"""
Explanation: This looks sensible. Now that the boundary conditions are set, we can also look at the topography. 
imshow will default to show boundaries as black, as illustrated below. But that can be overridden as we have been doing all along.
End of explanation
"""
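The outlet-selection rule described in this tutorial — the lowest-valued node that is not a nodata value and sits on the grid edge — can be sketched without landlab at all. find_outlet below is a hypothetical illustration, not part of the landlab API, and it omits landlab's additional requirement that the outlet be adjacent to at least one closed boundary node:

```python
import numpy as np

def find_outlet(z, nodata=-9999.0):
    """Hypothetical helper (not landlab): return the (row, col) of the
    lowest-valued node on the grid perimeter that is not a nodata value.
    Simplified version of the outlet rule described in the tutorial."""
    nrows, ncols = z.shape
    best_val, best_rc = None, None
    for r in range(nrows):
        for c in range(ncols):
            on_edge = r in (0, nrows - 1) or c in (0, ncols - 1)
            if on_edge and z[r, c] != nodata:
                if best_val is None or z[r, c] < best_val:
                    best_val, best_rc = z[r, c], (r, c)
    return best_rc

# Same setup as the first example: 5x5 grid of ones, elevation 0 at node 2.
z = np.ones((5, 5))
z[0, 2] = 0.0
print(find_outlet(z))  # (0, 2)
```

This mirrors why node 2 became the BC_NODE_IS_FIXED_VALUE outlet in the first example: it is on the perimeter, has data, and has the lowest value.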
liufuyang/ManagingBigData_MySQL_DukeUniv
week4/MySQL_Exercise_09_Subqueries_and_Derived_Tables.ipynb
mit
%load_ext sql %sql mysql://studentuser:studentpw@mysqlserver/dognitiondb %sql USE dognitiondb %config SqlMagic.displaylimit=25 """ Explanation: Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 9: Subqueries and Derived Tables Now that you understand how joins work, in this lesson we are going to learn how to incorporate subqueries and derived tables into our queries. Subqueries, which are also sometimes called inner queries or nested queries, are queries that are embedded within the context of another query. The output of a subquery is incorporated into the queries that surround it. Subqueries can be used in SELECT, WHERE, and FROM clauses. When they are used in FROM clauses they create what are called derived tables. The main reasons to use subqueries are: Sometimes they are the most logical way to retrieve the information you want They can be used to isolate each logical part of a statement, which can be helpful for troubleshooting long and complicated queries Sometimes they run faster than joins Some people find subqueries easier to read than joins. However, that is often a result of not feeling comfortable with the concepts behind joins in the first place (I prefer join syntax, so admittedly, that is my preference). Subqueries must be enclosed in parentheses. Subqueries have a couple of rules that joins don't: ORDER BY phrases cannot be used in subqueries (although ORDER BY phrases can still be used in outer queries that contain subqueries). Subqueries in SELECT or WHERE clauses that return more than one row must be used in combination with operators that are explicitly designed to handle multiple values, such as the IN operator. Otherwise, subqueries in SELECT or WHERE statements can output no more than 1 row. So why would you use subqueries? Let's look at some examples. 
Start by loading the sql library and database, and making the Dognition database your default database: End of explanation """ %%sql SELECT AVG(TIMESTAMPDIFF(minute,start_time,end_time)) AS AvgDuration_YWU FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time)>0 AND test_name='Yawn Warm-Up' """ Explanation: 1) "On the fly calculations" (or, doing calculations as you need them) One of the main uses of subqueries is to calculate values as you need them. This allows you to use a summary calculation in your query without having to enter the value outputted by the calculation explicitly. A situation when this capability would be useful is if you wanted to see all the records that were greater than the average value of a subset of your data. Recall one of the queries we wrote in "MySQL Exercise 4: Summarizing your Data" to calculate the average amount of time it took customers to complete all of the tests in the exam_answers table (we had to exclude negative durations from the calculation due to some abnormalities in the data): sql SELECT AVG(TIMESTAMPDIFF(minute,start_time,end_time)) AS AvgDuration FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time)&gt;0; What if we wanted to look at just the data from rows whose durations were greater than the average, so that we could determine whether there are any features that seem to correlate with dogs taking a longer time to finish their tests? We could use a subquery to calculate the average duration, and then indicate in our SELECT and WHERE clauses that we only wanted to retrieve the rows whose durations were greater than the average. 
Here's what the query would look like: sql SELECT * FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time) &gt; (SELECT AVG(TIMESTAMPDIFF(minute,start_time,end_time)) AS AvgDuration FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time)&gt;0); You can see that TIMESTAMPDIFF gets compared to the singular average value outputted by the subquery surrounded by parentheses. You can also see that it's easier to read the query as a whole if you indent and align all the clauses associated with the subquery, relative to the main query. Question 1: How could you use a subquery to extract all the data from exam_answers that had test durations that were greater than the average duration for the "Yawn Warm-Up" game? Start by writing the query that gives you the average duration for the "Yawn Warm-Up" game by itself (and don't forget to exclude negative values; your average duration should be about 9934): End of explanation """ %%sql SELECT test_name, TIMESTAMPDIFF(minute,start_time,end_time) FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time)> (SELECT AVG(TIMESTAMPDIFF(minute,start_time,end_time)) AS AvgDuration_YWU FROM exam_answers WHERE TIMESTAMPDIFF(minute,start_time,end_time)>0 AND test_name='Yawn Warm-Up') """ Explanation: Question 2: Once you've verified that your subquery is written correctly on its own, incorporate it into a main query to extract all the data from exam_answers that had test durations that were greater than the average duration for the "Yawn Warm-Up" game (you will get 11059 rows): End of explanation """ %%sql SELECT COUNT(*) FROM exam_answers WHERE subcategory_name IN ('Puzzles', 'Numerosity', 'Bark Game'); """ Explanation: Now double check the results you just retrieved by replacing the subquery with "9934"; you should get the same results. It is helpful to get into the habit of including these kinds of quality checks into your query-writing process. 
This example shows you how subqueries allow you to retrieve information dynamically, rather than having to hard code in specific numbers or names. This capability is particularly useful when you need to build the output of your queries into reports or dashboards that are supposed to display real-time information.
2) Testing membership
Subqueries can also be useful for assessing whether groups of rows are members of other groups of rows. To use them in this capacity, we need to know about and practice the IN, NOT IN, EXISTS, and NOT EXISTS operators.
Recall from MySQL Exercise 2: Selecting Data Subsets Using WHERE that the IN operator allows you to use a WHERE clause to say how you want your results to relate to a list of multiple values. It's basically a condensed way of writing a sequence of OR statements. The following query would select all the users who live in the state of North Carolina (abbreviated "NC") or New York (abbreviated "NY"):
mysql
SELECT *
FROM users
WHERE state IN ('NC','NY');
Notice the quotation marks around the members of the list referred to by the IN statement. These quotation marks are required since the state names are strings of text. A query that would give an equivalent result would be:
mysql
SELECT *
FROM users
WHERE state ='NC' OR state ='NY';
A query that would select all the users who do NOT live in the state of North Carolina or New York would be:
mysql
SELECT *
FROM users
WHERE state NOT IN ('NC','NY');
Question 3: Use an IN operator to determine how many entries in the exam_answers table are from the "Puzzles", "Numerosity", or "Bark Game" tests. You should get a count of 163022.
End of explanation
"""
%%sql
SELECT COUNT(*)
FROM dogs
WHERE breed_group NOT IN ('Working', 'Sporting', 'Herding');

"""
Explanation: Question 4: Use a NOT IN operator to determine how many unique dogs in the dog table are NOT in the "Working", "Sporting", or "Herding" breed groups. You should get an answer of 7961. 
End of explanation """ %%sql SELECT COUNT(DISTINCT user_guid) FROM users u WHERE NOT EXISTS (SELECT * FROM dogs d WHERE u.user_guid=d.user_guid ) """ Explanation: EXISTS and NOT EXISTS perform similar functions to IN and NOT IN, but EXISTS and NOT EXISTS can only be used in subqueries. The syntax for EXISTS and NOT EXISTS statements is a little different than that of IN statements because EXISTS is not preceded by a column name or any other expression. The most important difference between EXISTS/NOT EXISTS and IN/NOT IN statements, though, is that unlike IN/NOT IN statements, EXISTS/NOT EXISTS are logical statements. Rather than returning raw data, per se, EXISTS/NOT EXISTS statements return a value of TRUE or FALSE. As a practical consequence, EXISTS statements are often written using an asterisk after the SELECT clause rather than explicit column names. The asterisk is faster to write, and since the output is just going to be a logical true/false either way, it does not matter whether you use an asterisk or explicit column names. We can use EXISTS and a subquery to compare the users who are in the users table and dogs table, similar to what we practiced previously using joins. If we wanted to retrieve a list of all the users in the users table who were also in the dogs table, we could write: sql SELECT DISTINCT u.user_guid AS uUserID FROM users u WHERE EXISTS (SELECT d.user_guid FROM dogs d WHERE u.user_guid =d.user_guid); You would get the same result if you wrote: sql SELECT DISTINCT u.user_guid AS uUserID FROM users u WHERE EXISTS (SELECT * FROM dogs d WHERE u.user_guid =d.user_guid); Essentially, both of these queries say give me all the distinct user_guids from the users table that have a value of "TRUE" in my EXISTS clause. The results would be equivalent to an inner join with GROUP BY query. Now... Question 5: How could you determine the number of unique users in the users table who were NOT in the dogs table using a NOT EXISTS clause? 
You should get the 2226, the same result as you got in Question 10 of MySQL Exercise 8: Joining Tables with Outer Joins. End of explanation """ %%sql SELECT COUNT(u.user_guid), d.user_guid FROM users u LEFT JOIN dogs d on u.user_guid = d.user_guid GROUP BY d.user_guid HAVING d.user_guid IS NULL """ Explanation: Similarly, the query above can get result much faster then the query below: End of explanation """ %%sql DESCRIBE SELECT DISTINCT u.user_guid AS uUserID FROM users u WHERE EXISTS (SELECT * FROM dogs d WHERE u.user_guid =d.user_guid); %%sql DESCRIBE SELECT DISTINCT u.user_guid AS uUserID FROM users u JOIN dogs d ON u.user_guid=d.user_guid; %%sql SELECT DISTINCT u.user_guid AS uUserID FROM users u WHERE EXISTS (SELECT * FROM dogs d WHERE u.user_guid =d.user_guid); %%sql SELECT DISTINCT u.user_guid AS uUserID FROM users u JOIN dogs d ON u.user_guid=d.user_guid; """ Explanation: Why the query below one can be much faster than the other one? End of explanation """ %%sql select * from sys.indexes where object_id = (select object_id from sys.objects where name = 'users') """ Explanation: On SQL Server, this will list all the indexes for a specified table: End of explanation """ %%sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC """ Explanation: 3) Accurate logical representations of desired output and Derived Tables A third situation in which subqueries can be useful is when they simply represent the logic of what you want better than joins. We saw an example of this in our last MySQL Exercise. We wanted a list of each dog a user in the users table owns, with its accompanying breed information whenever possible. 
To achieve this, we wrote this query in Question 6: sql SELECT u.user_guid AS uUserID, d.user_guid AS dUserID, d.dog_guid AS dDogID, d.breed FROM users u LEFT JOIN dogs d ON u.user_guid=d.user_guid Once we saw the "exploding rows" phenomenon due to duplicate rows, we wrote a follow-up query in Question 7 to assess how many rows would be outputted per user_id when we left joined the users table on the dogs table: sql SELECT u.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM users u LEFT JOIN dogs d ON u.user_guid=d.user_guid GROUP BY u.user_guid ORDER BY numrows DESC This same general query without the COUNT function could have been used to output a complete list of all the distinct users in the users table, their dogs, and their dogs' breed information. However, the method we used to arrive at this was not very pretty or logically satisfying. Rather than joining many duplicated rows and fixing the results later with the GROUP BY clause, it would be much more elegant if we could simply join the distinct UserIDs in the first place. There is no way to do that with join syntax, on its own. However, you can use subqueries in combination with joins to achieve this goal. 
To complete the join on ONLY distinct UserIDs from the users table, we could write: sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC Try it yourself: End of explanation """ %%sql SELECT u.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC """ Explanation: <mark> Queries that include subqueries always run the innermost subquery first, and then run subsequent queries sequentially in order from the innermost query to the outermost query. </mark> Therefore, the query we just wrote extracts the distinct user_guids from the users table first, and then left joins that reduced subset of user_guids on the dogs table. As mentioned at the beginning of the lesson, since the subquery is in the FROM statement, it actually creates a temporary table, called a derived table, that is then incorporated into the rest of the query. There are several important points to notice about the syntax of this subquery. First, an alias of "DistinctUUsersID" is used to name the results of the subquery. We are required to give an alias to any derived table we create in subqueries within FROM statements. Otherwise there would be no way for the database to refer to the multiple columns within the temporary results we create. Second, we need to use this alias every time we want to execute a function that uses the derived table. Remember that the results in which we are interested require a join between the dogs table and the temporary table, not the dogs table and the original users table with duplicates. 
That means we need to make sure we reference the temporary table alias in the ON, GROUP BY, and SELECT clauses. Third, relatedly, aliases used within subqueries can refer to tables outside of the subqueries. However, outer queries cannot refer to aliases created within subqueries unless those aliases are explicitly part of the subquery output. In other words, if you wrote the first line of the query above as: sql SELECT u.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows ... the query would not execute because the alias "u" is contained inside the subquery, but is not included in the output. Go ahead and try it to see what the error message looks like: End of explanation """ %%sql SELECT DISTINCT d.dog_guid, d.breed_group, u.state, u.zip FROM dogs d, users u WHERE breed_group IN ('Working','Sporting','Herding') AND d.user_guid=u.user_guid; %%sql SELECT DISTINCT dd.dog_guid, dd.breed_group, u.state, u.zip FROM users u, ( SELECT d.user_guid, d.dog_guid, d.breed_group FROM dogs d WHERE breed_group IN ('Working', 'Sporting', 'Herding') ) AS dd WHERE dd.user_guid=u.user_guid %%sql SELECT DISTINCT d.dog_guid FROM dogs d WHERE breed_group IN ('Working', 'Sporting', 'Herding') """ Explanation: A similar thing would happen if you tried to use the alias u in the GROUP BY statement. Another thing to take note of is that when you use subqueries in FROM statements, the temporary table you create can have multiple columns in the output (unlike when you use subqueries in outside SELECT statements). But for that same reason, subqueries in FROM statements can be very computationally intensive. Therefore, it's a good idea to use them sparingly, especially when you have very large data sets. Overall, subqueries and joins can often be used interchangeably. Some people strongly prefer one approach over another, but there is no consensus about which approach is best. 
When you are analyzing very large datasets, it's a good idea to test which approach will likely be faster or easier to troubleshoot for your particular application. Let's practice some more subqueries! Question 6: Write a query using an IN clause and equijoin syntax that outputs the dog_guid, breed group, state of the owner, and zip of the owner for each distinct dog in the Working, Sporting, and Herding breed groups. (You should get 10,254 rows; the query will be a little slower than some of the others we have practiced) End of explanation """ %%sql SELECT DISTINCT d.dog_guid, d.breed_group, u.state, u.zip FROM dogs d JOIN users u ON d.user_guid=u.user_guid WHERE d.breed_group IN ('Working','Sporting','Herding'); """ Explanation: Question 7: Write the same query as in Question 6 using traditional join syntax. End of explanation """ %%sql SELECT d.user_guid FROM dogs d WHERE NOT EXISTS (SELECT * FROM users u WHERE u.user_guid=d.user_guid ) """ Explanation: Question 8: Earlier we examined unique users in the users table who were NOT in the dogs table. Use a NOT EXISTS clause to examine all the users in the dogs table that are not in the users table (you should get 2 rows in your output). End of explanation """ %%sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u WHERE u.user_guid='ce7b75bc-7144-11e5-ba71-058fbc01cf0b' ) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC; """ Explanation: Question 9: We saw earlier that user_guid 'ce7b75bc-7144-11e5-ba71-058fbc01cf0b' still ends up with 1819 rows of output after a left outer join with the dogs table. If you investigate why, you'll find out that's because there are duplicate user_guids in the dogs table as well. 
How would you adapt the query we wrote earlier (copied below) to only join unique UserIDs from the users table with unique UserIDs from the dog table? Join we wrote earlier: sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC; Let's build our way up to the correct query. To troubleshoot, let's only examine the rows related to user_guid 'ce7b75bc-7144-11e5-ba71-058fbc01cf0b', since that's the userID that is causing most of the trouble. Rewrite the query above to only LEFT JOIN distinct user(s) from the user table whose user_guid='ce7b75bc-7144-11e5-ba71-058fbc01cf0b'. The first two output columns should have matching user_guids, and the numrows column should have one row with a value of 1819: End of explanation """ %%sql SELECT DISTINCT dall.user_guid FROM dogs dall """ Explanation: Question 10: Now let's prepare and test the inner query for the right half of the join. Give the dogs table an alias, and write a query that would select the distinct user_guids from the dogs table (we will use this query as a inner subquery in subsequent questions, so you will need an alias to differentiate the user_guid column of the dogs table from the user_guid column of the users table). End of explanation """ %%sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u WHERE u.user_guid='ce7b75bc-7144-11e5-ba71-058fbc01cf0b' ) AS DistinctUUsersID LEFT JOIN (SELECT DISTINCT dall.user_guid FROM dogs dall) AS d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC; """ Explanation: Question 11: Now insert the query you wrote in Question 10 as a subquery on the right part of the join you wrote in question 9. 
The output should return columns that should have matching user_guids, and 1 row in the numrows column with a value of 1. If you are getting errors, make sure you have given an alias to the derived table you made to extract the distinct user_guids from the dogs table, and double-check that your aliases are referenced correctly in the SELECT and ON statements. End of explanation """ %%sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, d.breed FROM (SELECT DISTINCT u.user_guid FROM users u LIMIT 100 ) AS DistinctUUsersID LEFT JOIN (SELECT DISTINCT dall.user_guid, dall.breed FROM dogs dall) AS d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ; """ Explanation: Question 12: Adapt the query from Question 10 so that, in theory, you would retrieve a full list of all the DogIDs a user in the users table owns, with its accompagnying breed information whenever possible. HOWEVER, BEFORE YOU RUN THE QUERY MAKE SURE TO LIMIT YOUR OUTPUT TO 100 ROWS WITHIN THE SUBQUERY TO THE LEFT OF YOUR JOIN. If you run the query without imposing limits it will take a very long time. If you try to limit the output by just putting a limit clause at the end of the outermost query, the database will still have to hold the entire derived tables in memory and join each row of the derived tables before limiting the output. If you put the limit clause in the subquery to the left of the join, the database will only have to join 100 rows of data. End of explanation """ %%sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, d.breed, d.weight, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u LIMIT 100 # Without limit, the query take forever to run... 
) AS DistinctUUsersID LEFT JOIN (SELECT DISTINCT dall.user_guid, dall.breed, dall.weight FROM dogs dall) AS d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid HAVING numrows>10 ORDER BY numrows DESC; %%sql SELECT DistinctUUsersID.user_guid AS userid, d.breed, d.weight, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid HAVING numrows>10 ORDER BY numrows DESC; """ Explanation: Question 13: You might have a good guess by now about why there are duplicate rows in the dogs table and users table, even though most corporate databases are configured to prevent duplicate rows from ever being accepted. To be sure, though, let's adapt this query we wrote above: sql SELECT DistinctUUsersID.user_guid AS uUserID, d.user_guid AS dUserID, count(*) AS numrows FROM (SELECT DISTINCT u.user_guid FROM users u) AS DistinctUUsersID LEFT JOIN dogs d ON DistinctUUsersID.user_guid=d.user_guid GROUP BY DistinctUUsersID.user_guid ORDER BY numrows DESC Add dog breed and dog weight to the columns that will be included in the final output of your query. In addition, use a HAVING clause to include only UserIDs who would have more than 10 rows in the output of the left join (your output should contain 5 rows). End of explanation """
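The row blow-up these questions troubleshoot is ordinary join multiplication: a user_guid that appears m times in users and n times in dogs contributes m × n joined rows. A small sqlite3 sketch (hypothetical toy tables, not the Dognition data) shows the inflation and the DISTINCT-subquery fix:

```python
import sqlite3

# Hypothetical toy tables: the same user duplicated 3 times in users
# and 2 times in dogs.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (user_guid TEXT)")
con.execute("CREATE TABLE dogs (user_guid TEXT)")
con.executemany("INSERT INTO users VALUES (?)", [("u1",)] * 3)
con.executemany("INSERT INTO dogs VALUES (?)", [("u1",)] * 2)

# Plain join: every users copy pairs with every dogs copy -> 3 * 2 rows.
plain = con.execute(
    "SELECT count(*) FROM users u "
    "LEFT JOIN dogs d ON u.user_guid = d.user_guid"
).fetchone()[0]

# De-duplicating both sides first collapses the join back to one row,
# which is what the DistinctUUsersID-style subqueries above accomplish.
dedup = con.execute(
    "SELECT count(*) FROM (SELECT DISTINCT user_guid FROM users) u "
    "LEFT JOIN (SELECT DISTINCT user_guid FROM dogs) d "
    "ON u.user_guid = d.user_guid"
).fetchone()[0]

print(plain, dedup)  # prints: 6 1
```

The same arithmetic explains how a single troublesome user_guid can account for a numrows value as large as 1819.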
dpshelio/2015-EuroScipy-pandas-tutorial
solved - 06 - Reshaping data.ipynb
bsd-2-clause
%matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt try: import seaborn except ImportError: pass pd.options.display.max_rows = 8 """ Explanation: Reshaping data with stack and unstack End of explanation """ !head -1 ./data/BETR8010000800100hour.1-1-1990.31-12-2012 """ Explanation: Case study: air quality data of European monitoring stations (AirBase) Going further with the time series case study test on the AirBase (The European Air quality dataBase) data: the actual data downloaded from the Airbase website did not look like a nice csv file (data/airbase_data.csv). One of the actual downloaded raw data files of AirBase is included in the repo: End of explanation """ data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() """ Explanation: Just reading the tab-delimited data: End of explanation """ colnames = ['date'] + [item for pair in zip(["{:02d}".format(i) for i in range(24)], ['flag']*24) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, na_values=[-999, -9999], names=colnames) data.head() """ Explanation: The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. 
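Before reaching for read_csv options, the cleanup the next exercise asks for can be sketched with the standard library alone — split on tabs and map the -999/-9999 sentinels to missing values (the two-line input here is a hypothetical miniature, not real AirBase data):

```python
import csv
import io

# Hypothetical miniature of a tab-delimited file using -999/-9999 sentinels.
raw = "1990-01-02\t-999\t0\t48.0\t0\n1990-01-03\t55.0\t0\t-9999\t0\n"

rows = []
for record in csv.reader(io.StringIO(raw), delimiter="\t"):
    date, rest = record[0], record[1:]
    # Mirrors na_values=[-999, -9999]: sentinel strings become missing (None).
    values = [None if v in ("-999", "-9999") else float(v) for v in rest]
    rows.append((date, values))
```

pandas performs the same conversion (to NaN) in one pass via the na_values argument.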
<div class="alert alert-success"> <b>EXERCISE</b>: Clean up this dataframe using more options of `read_csv` </div> specify that the values of -999 and -9999 should be regarded as NaN specify our own column names (http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) End of explanation """ data = data.drop('flag', axis=1) data """ Explanation: For now, we disregard the 'flag' columns End of explanation """ df = pd.DataFrame({'A':['one', 'one', 'two', 'two'], 'B':['a', 'b', 'a', 'b'], 'C':range(4)}) df """ Explanation: Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Intermezzo: reshaping your data with stack, unstack and pivot The docs say: Pivot a level of the (possibly hierarchical) column labels, returning a DataFrame (or Series in the case of an object with a single level of column labels) having a hierarchical index with a new inner-most level of row labels. <img src="img/stack.png" width=70%> End of explanation """ df = df.set_index(['A', 'B']) df result = df['C'].unstack() result df = result.stack().reset_index(name='C') df """ Explanation: To use stack/unstack, we need the values we want to shift from rows to columns or the other way around as the index: End of explanation """ df.pivot(index='A', columns='B', values='C') """ Explanation: pivot is similar to unstack, but lets you specify column names: End of explanation """ df = pd.DataFrame({'A':['one', 'one', 'two', 'two', 'one', 'two'], 'B':['a', 'b', 'a', 'b', 'a', 'b'], 'C':range(6)}) df df.pivot_table(index='A', columns='B', values='C', aggfunc='count') #'mean' """ Explanation: pivot_table is similar to pivot, but can work with duplicate indices and lets you specify an aggregation function: End of explanation """ colnames = ['date'] + [item for pair in zip(["{:02d}".format(i) for i in range(24)], ['flag']*24) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t',
header=None, na_values=[-999, -9999], names=colnames) data = data.drop('flag', axis=1) data.head() """ Explanation: Back to our case study We can now use stack and some other functions to create a timeseries from the original dataframe: End of explanation """ data = data.set_index('date') data_stacked = data.stack() data_stacked """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Reshape the dataframe to a timeseries </div> The end result should look like: <div> <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>BETR801</th> </tr> </thead> <tbody> <tr> <th>1990-01-02 09:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 12:00:00</th> <td>48.0</td> </tr> <tr> <th>1990-01-02 13:00:00</th> <td>50.0</td> </tr> <tr> <th>1990-01-02 14:00:00</th> <td>55.0</td> </tr> <tr> <th>...</th> <td>...</td> </tr> <tr> <th>2012-12-31 20:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 21:00:00</th> <td>14.5</td> </tr> <tr> <th>2012-12-31 22:00:00</th> <td>16.5</td> </tr> <tr> <th>2012-12-31 23:00:00</th> <td>15.0</td> </tr> </tbody> </table> <p>170794 rows × 1 columns</p> </div> First, reshape the dataframe so that each row consists of one observation for one date + hour combination: End of explanation """ data_stacked = data_stacked.reset_index(name='BETR801') data_stacked.head() data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked """ Explanation: Now, combine the date and hour columns into a datetime (tip: string columns can be summed to concatenate the strings): End of explanation """ cast = pd.read_csv('data/cast.csv') cast.head() titles = pd.read_csv('data/titles.csv') titles.head() """ Explanation: We can also use this with the movie data End of explanation """ c = cast c = c[(c.character == 'Superman') | (c.character == 'Batman')] c = c.groupby(['year', 'character']).size() c = c.unstack() c =
c.fillna(0) c.head() d = c.Superman - c.Batman print('Superman years:') print(len(d[d > 0.0])) """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Define a year as a "Superman year" if films that year feature more Superman characters than Batman characters. How many years in film history have been Superman years? </div> End of explanation """ c = cast c = c.groupby(['year', 'type']).size() c = c.unstack('type') c.plot() """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Plot the number of actor roles each year and the number of actress roles each year over the history of film. </div> End of explanation """ c = cast c = c.groupby(['year', 'type']).size() c = c.unstack('type') c.plot(kind='area') """ Explanation: <div class="alert alert-success"> <b>EXERCISE</b>: Plot the number of actor roles each year and the number of actress roles each year, but this time as a kind='area' plot. </div> End of explanation """
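The interleaved column-name construction used for the AirBase file earlier is a general trick — zip two lists, then flatten the resulting pairs; isolated in plain Python it looks like this:

```python
# Interleave the 24 hour labels with 'flag' markers, as done for colnames.
hours = ["{:02d}".format(i) for i in range(24)]
colnames = ["date"] + [item for pair in zip(hours, ["flag"] * 24) for item in pair]
```

The outer comprehension iterates over the (hour, 'flag') tuples produced by zip and emits each element in turn, yielding 1 + 24 × 2 = 49 names.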
PythonBootCampIAG-USP/NASA_PBC2015
Day_00/04_Numpy_Matplotlib/szhu_NumpyMatplotlib.ipynb
mit
import numpy as np """ Explanation: Numpy and Matplotlib Reference documents <A HREF="http://wiki.scipy.org/Tentative_NumPy_Tutorial">Tentative Numpy Tutorial</A> <A HREF="http://docs.scipy.org/doc/numpy/reference">NumPy Reference</A> <A HREF="http://mathesaurus.sourceforge.net/matlab-numpy.html">NumPy for MATLAB Users</A> <A HREF="http://mathesaurus.sourceforge.net/r-numpy.html">NumPy for R (and S-Plus) Users</A> <A HREF="http://people.duke.edu/~ccc14/pcfb/numerics.html">NumPy and Matplotlib (Practical Computing for Biologists)</A> <A HREF="http://scipy-lectures.github.io/intro/matplotlib/matplotlib.html">Matplotlib: Plotting</A> What is Numpy? While the Python language is an excellent tool for general-purpose programming, it was not designed specifically for mathematical and scientific computing. Numpy allows for: Efficient array computing in Python Efficient indexing/slicing of arrays Mathematical functions that operate on an entire array The critical thing to know is that Python for loops are very slow! One should try to use array-operations as much as possible. First, let's import the numpy module. There are multiple ways we can do this. 
In the following examples, <code>zeros</code> is a numpy routine which we will see later, and depending on how we import numpy we call it in different ways: <code>'import numpy' </code> imports the entire numpy module --> <code>'numpy.zeros()' </code> <code>'import numpy as np' </code> imports the entire numpy module and renames it --> <code>'np.zeros()' </code> <code>'from numpy import *' </code> imports the entire numpy module (more or less) --> <code>'zeros()' </code> <code>'from numpy import zeros' </code> imports only the <code>zeros()</code> routine After all that preamble, let's get started: End of explanation """ a = np.array((1,2,3)) # or np.array((1,2,3)) print a """ Explanation: Creating numpy arrays You can create an array from scratch, similar to how you might create a list / tuple / whatever: End of explanation """ myList = [0.2, 0.4, 0.6] myArray = np.array(myList) print myList print myArray print type(myArray) print myArray.dtype """ Explanation: You can also convert a list or tuple of elements into an array: End of explanation """ # an array of booleans print np.array([True, False, True]) # an array of characters/strings print np.array(['a', 'b', 'c']) print np.array([2, 3, 'c']) print np.array([2, 3, 0.4]) print np.array([2, 3, 'C'], dtype=int) """ Explanation: Arrays can be created with non-numbers as well, but all the elements of an array have to have the same type. i.e., arrays are homogeneous. Once an array has been created, its dtype is fixed and it can only store elements of the same type. However, the dtype can explicitly be changed (we'll see this later). 
End of explanation """ print myArray print myArray[-2] print myArray[1] print myArray[1:2] """ Explanation: You can access elements of an array in the same way you access elements of a list: End of explanation """ newArray = np.array([ [3, 8, 0, 1], [4, 0, 0, 9], [2, 2, 7, 1], [5, 1, 0, 8]] ) print newArray print newArray.shape print newArray[:,-1] print newArray[1,3] print newArray[3,0:2] print newArray[:,0] # print out an individual column """ Explanation: Multidimensional arrays work like you might expect: End of explanation """ b = np.ones(3) print b print b.shape c = np.zeros((1,3), int) print c print type(c) print c.dtype print [3, 5, 6] print (3, 5, 6) a = [3,5,6] b = (3,5,6) # b[0] = 1 would raise a TypeError: tuples are immutable print b d = np.zeros(3, complex) print d print d.dtype # slightly faster, but be careful to fill it with actual values! f = np.empty(4) f.fill(3.14) f[-1] = 0 print f """ Explanation: If you know the size of the array you want, you can create an array of ones or zeros or an empty array: End of explanation """ print np.eye(5, dtype=int) # default data type is 'float' """ Explanation: Create an identity array: End of explanation """ print np.arange(-5, 5, 0.5) # excludes upper endpoint print np.linspace(-3, 3, 21) # includes both endpoints print np.logspace(1, 4, 4) """ Explanation: Number generation Here are a few ways to generate uniformly spaced numbers over an interval (with or without the endpoints), similar to <code>range()</code>: End of explanation """ print np.random.rand(10) print np.random.rand(2,2) print np.random.rand() # a single random float in [0, 1) print np.random.randint(2,100,5) print np.random.normal(10, 3, (2,4)) print np.random.randn(5) # samples 5 times from the standard normal distribution print np.random.normal(3, 1, 5) # samples 5 times from a Gaussian with mean 3 and std dev 1 """ Explanation: Some examples of random number generation using numpy End of explanation """ print newArray print newArray.reshape(2,8) print newArray.reshape(-1,2) """ Explanation: Working with and manipulating arrays End of
explanation """ print newArray reshapedArray = newArray.reshape(2,8) print reshapedArray print newArray """ Explanation: None of these manipulations have modified the original newArray: End of explanation """ redshifts = np.array((0.2, 1.56, 6.3, 0.003, 0.9, 4.54, 1.1)) print redshifts close = redshifts < 1 print close print redshifts[close] far = np.where(redshifts > 2) print far[0][0] print redshifts[far] middle = np.where( (redshifts >= 1) & (redshifts <= 2) ) print middle print redshifts[middle] print (redshifts >= 1) & (redshifts <= 2) """ Explanation: Above we saw how to index arrays with single numbers and slices, just like Python lists. But arrays allow for a more sophisticated kind of indexing which is very powerful: you can index an array with another array, and in particular with an array of boolean values. This is particularly useful to extract information from an array that matches a certain condition. End of explanation """ myList = [3, 6, 7, 2] print 2*myList myArray = np.array([3, 6, 7, 2]) print 2*myArray """ Explanation: Mathematical operations on arrays Math with arrays is straightforward and easy. For instance, let's say we want to multiply every number in a group by 2. If we try to do that with a list: End of explanation """ arr1 = np.arange(4) arr2 = np.arange(10,14) print arr1 print arr2 print arr1 + arr2 print arr1 - arr2 print arr1 * arr2 print arr1 / arr2 """ Explanation: Notice in particular that multiplication is element-wise and is NOT a dot product or regular matrix multiplication. End of explanation """ print 3.5 + arr1 """ Explanation: In the last example, numpy understood the command to be "add 3.5 to every element in arr1."
That is, it converts the scalar 3.5 into an array of the appropriate size. Since the new array is filled with floats, and arr1 is filled with ints, the summed array is an array of floats. Broadcasting with numpy The broadcasting rules allow numpy to: create new dimensions of length 1 (since this doesn't change the size of the array) 'stretch' a dimension of length 1 that needs to be matched to a dimension of a different size. So in the above example, the scalar 3.5 is effectively: first 'promoted' to a 1-dimensional array of length 1 then, this array is 'stretched' to length 4 to match the dimension of arr1. After these two operations are complete, the addition can proceed as now both operands are one-dimensional arrays of length 4. This broadcasting behavior is in practice enormously powerful, especially because when numpy broadcasts to create new dimensions or to 'stretch' existing ones, it doesn't actually replicate the data. In the example above the operation is carried out as if the 3.5 were a 1-d array with 3.5 in all of its entries, but no actual array was ever created. This can save lots of memory in cases when the arrays in question are large and can have significant performance implications. End of explanation """ arr1 += 10 print arr1 arr1.fill(0) print arr1 print arr2 print np.mean(arr2), arr2.mean() print np.sum(arr2), arr2.sum() print np.min(arr2), arr2.min() """ Explanation: Numpy I/O with arrays Numpy makes it easy to write arrays to files and read them. It can write both text and binary files. In a text file, the number $\pi$ could be written as "3.141592653589793", for example: a string of digits that a human can read, with, in this case, 15 decimal digits.
In contrast, that same number written to a binary file would be encoded as 8 characters (bytes) that are not readable by a human but which contain the exact same data that the variable pi had in the computer's memory. Text mode: occupies more space, precision can be lost (if not all digits are written to disk), but is readable and editable by hand with a text editor. Can only be used for one- and two-dimensional arrays. Binary mode: compact and exact representation of the data in memory, can't be read or edited by hand. Arrays of any size and dimensionality can be saved and read without loss of information. In the following examples, we'll only be talking about reading and writing arrays in text mode. End of explanation """ arr3 = np.random.rand(6,5) print arr3 np.savetxt('arrayFile.txt', arr3, fmt='%.2e', header='') arr4 = np.loadtxt('arrayFile.txt') print arr4 ## what if we want to skip the first and last rows? Or we just want columns 2 and 3? arr5 = np.genfromtxt('arrayFile.txt', skip_header=1, skip_footer=1, usecols=(2,3)) print arr5 """ Explanation: What is Matplotlib? The matplotlib library is a powerful tool that can quickly produce simple plots for data visualization as well as complex publication-quality figures with fine layout control. Here, we will only provide a minimal introduction, but Google is your friend when it comes to creating the perfect plots. The pyplot tutorial (http://matplotlib.org/users/pyplot_tutorial.html) is a great place to get started, and the matplotlib gallery (http://matplotlib.org/gallery.html) has tons of information as well. Just as we typically use the shorthand np for Numpy, we will use plt for the matplotlib.pyplot module where the plotting functions reside.
End of explanation """ x = np.linspace(0, 2*np.pi, 50) y = np.sin(x) plt.plot(x, y) ### If you don't give it x values: plt.plot(np.random.rand(100)) ### Multiple data sets on the same plot # the semicolon at the end suppresses the display of some usually unnecessary information plt.plot(x, np.sin(x), label=r'$\sin(x)$') plt.plot(x, np.cos(x), label='cos(x)') plt.xlabel('x values', size=24) plt.ylabel('y values') plt.title('two functions') plt.legend(); """ Explanation: To plot a collection of x- and y-values: End of explanation """ plt.plot(x, np.sin(x), linewidth=3, color='red', linestyle='--') plt.plot(x, np.sin(x), 'o--', markersize=5, color='k') ### logarithmic plots x = np.linspace(-5, 5) y = np.exp(-x**2) plt.semilogy(x,y, lw=2, color='0.1') plt.loglog(x,y) plt.xlim(1e-1, 1e1) plt.ylim(1e-2, 1e1) # A more complicated example mu, sigma = 5.9, 0.2 measurements = mu + sigma * np.random.randn(10000) # the histogram of the data plt.hist(measurements, 50, normed=False, facecolor='g', alpha=0.75, label='hist') plt.xlabel('Height (feet)') plt.ylabel('Number of people') plt.title('Height distribution') # This will put a text fragment at the position given: plt.text(5.1, 600, r'$\mu=100$, $\sigma=15$', fontsize=14) plt.axis([5,6.7,0,750]) plt.grid(True) plt.legend() ### Error bars and subplots # generate some fake data, with errors in y x = np.arange(0.2, 4, 0.5) y = np.exp(-x) errors = 0.1 + 0.1*np.sqrt(x) fig = plt.figure() noErrs = fig.add_subplot(121) plt.plot(x, y) plt.xticks((0,1,2,3,4)) noErrs.set_title('No errors') noErrs.set_ylim(-0.3,1) withErrs = fig.add_subplot(122) withErrs.errorbar(x, y, yerr=errors) withErrs.set_title('With errors') plt.ylim(-0.3,1) plt.xticks(np.arange(0,4.5,0.5)) fig.suptitle('fake data', size=16, weight='bold') # normal distribution center at x=0 and y=5 x = np.random.randn(100000) y = np.random.randn(100000)+5 # let's make this plot bigger plt.figure(figsize=(12,8)) plt.hist2d(x, y, bins=40) plt.colorbar() """ Explanation: Notice how 
matplotlib automatically assigned different colors to the two data sets. We can finetune what these look like: End of explanation """
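The broadcasting rules covered in the numpy section above can be emulated in plain Python to make explicit what numpy computes — while remembering that numpy never actually materializes the stretched array (a conceptual sketch only, not how numpy is implemented):

```python
def broadcast_add(scalar, vector):
    # Step 1: 'promote' the scalar to a length-1 sequence.
    promoted = [scalar]
    # Step 2: 'stretch' it to the length of the other operand.
    # numpy skips this replication entirely; we do it for illustration.
    stretched = promoted * len(vector)
    # Step 3: ordinary element-wise addition.
    return [a + b for a, b in zip(stretched, vector)]

result = broadcast_add(3.5, [0, 1, 2, 3])  # like 3.5 + np.arange(4)
```

The result matches what `3.5 + arr1` produced earlier: each element shifted by 3.5.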
esa-as/2016-ml-contest
CEsprey - RandomForest/Facies_Feature_Engineering_And_ML.ipynb
apache-2.0
%matplotlib notebook import pandas as pd import numpy as np from sklearn import preprocessing from sklearn.ensemble import RandomForestClassifier from sklearn import cross_validation from sklearn.cross_validation import KFold from sklearn.cross_validation import train_test_split from sklearn import metrics from sklearn.metrics import confusion_matrix from classification_utilities import display_cm, display_adj_cm filename = 'training_data.csv' training_data = pd.read_csv(filename) """ Explanation: 1.0 - Facies Classification Using RandomForestClassifier. Chris Esprey - https://www.linkedin.com/in/christopher-esprey-beng-8aab1078?trk=nav_responsive_tab_profile I have generated two main feature types, namely: The absolute difference between each feature for all feature rows. The difference between each sample and the mean and standard deviation of each facies. I then threw this at a RandomForestClassifier. Possible future improvements: - Perform Univariate feature selection to hone in on the best features - Try out other classifiers e.g. gradient boost, SVM etc. - Use an ensemble of algorithms for classification End of explanation """ ## Create a difference vector for each feature e.g. x1-x2, x1-x3... x2-x3... # order features in depth. 
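The first engineered feature type — the absolute difference between every pair of log measurements within a row — can be sketched without pandas. This toy version (hypothetical values and feature names) keeps only the unique i < j pairs, whereas difference_vector below generates all 25 ordered combinations and drops the redundant ones afterwards:

```python
from itertools import combinations

def pairwise_abs_diffs(row):
    # One |x_i - x_j| feature per unordered pair of measurements.
    return {"diff_%d_%d" % (i, j): abs(row[i] - row[j])
            for i, j in combinations(range(len(row)), 2)}

# Toy row standing in for e.g. (GR, ILD_log10, DeltaPHI) values.
feats = pairwise_abs_diffs([70.0, 0.5, 5.0])
```

For 5 logs this yields 10 unique pairwise features per sample.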
feature_vectors = training_data.drop(['Formation', 'Well Name','Facies'], axis=1) feature_vectors = feature_vectors[['Depth','GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']] def difference_vector(feature_vectors): length = len(feature_vectors['Depth']) df_temp = np.zeros((25, length)) for i in range(0,int(len(feature_vectors['Depth']))): vector_i = feature_vectors.iloc[i,:] vector_i = vector_i[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']] for j, value_j in enumerate(vector_i): for k, value_k in enumerate(vector_i): differ_j_k = value_j - value_k df_temp[5*j+k, i] = np.abs(differ_j_k) return df_temp def diff_vec2frame(feature_vectors, df_temp): heads = feature_vectors.columns[1::] for i in range(0,5): string_i = heads[i] for j in range(0,5): string_j = heads[j] col_head = 'diff'+string_i+string_j df = pd.Series(df_temp[5*i+j, :]) feature_vectors[col_head] = df return feature_vectors df_diff = difference_vector(feature_vectors) feature_vectors = diff_vec2frame(feature_vectors, df_diff) # Drop duplicated columns and column of zeros feature_vectors = feature_vectors.T.drop_duplicates().T feature_vectors.drop('diffGRGR', axis = 1, inplace = True) # Add Facies column back into features vector feature_vectors['Facies'] = training_data['Facies'] # # group by facies, take statistics of each facies e.g. mean, std. 
Take sample difference of each row with def facies_stats(feature_vectors): facies_labels = np.sort(feature_vectors['Facies'].unique()) frame_mean = pd.DataFrame() frame_std = pd.DataFrame() for i, value in enumerate(facies_labels): facies_subframe = feature_vectors[feature_vectors['Facies']==value] subframe_mean = facies_subframe.mean() subframe_std = facies_subframe.std() frame_mean[str(value)] = subframe_mean frame_std[str(value)] = subframe_std return frame_mean.T, frame_std.T def feature_stat_diff(feature_vectors, frame_mean, frame_std): feature_vec_origin = feature_vectors[['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']] for i, column in enumerate(feature_vec_origin): feature_column = feature_vec_origin[column] stat_column_mean = frame_mean[column] stat_column_std = frame_std[column] for j in range(0,9): stat_column_mean_facie = stat_column_mean[j] stat_column_std_facie = stat_column_std[j] feature_vectors[column + '_mean_diff_facies' + str(j)] = feature_column-stat_column_mean_facie feature_vectors[column + '_std_diff_facies' + str(j)] = feature_column-stat_column_std_facie return feature_vectors frame_mean, frame_std = facies_stats(feature_vectors) feature_vectors = feature_stat_diff(feature_vectors, frame_mean, frame_std) """ Explanation: 2.0 - Feature Generation End of explanation """ # A = feature_vectors.sort_values(by='Facies') # A.reset_index(drop=True).plot(subplots=True, style='b', figsize = [12, 400]) """ Explanation: 3.0 - Generate plots of each feature End of explanation """ df = feature_vectors predictors = feature_vectors.columns predictors = list(predictors.drop('Facies')) correct_facies_labels = df['Facies'].values # Scale features df = df[predictors] scaler = preprocessing.StandardScaler().fit(df) scaled_features = scaler.transform(df) # Train test split: X_train, X_test, y_train, y_test = train_test_split(scaled_features, correct_facies_labels, test_size=0.2, random_state=0) alg = RandomForestClassifier(random_state=1, n_estimators=200, 
min_samples_split=8, min_samples_leaf=3, max_features=None) alg.fit(X_train, y_train) predicted_random_forest = alg.predict(X_test) facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] result = predicted_random_forest conf = confusion_matrix(y_test, result) display_cm(conf, facies_labels, hide_zeros=True, display_metrics = True) def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc print(accuracy(conf)) adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) print(accuracy_adjacent(conf, adjacent_facies)) """ Explanation: 4.0 - Train model using RandomForestClassifier End of explanation """ # read in Test data filename = 'validation_data_nofacies.csv' test_data = pd.read_csv(filename) # Reproduce feature generation feature_vectors_test = test_data.drop(['Formation', 'Well Name'], axis=1) feature_vectors_test = feature_vectors_test[['Depth','GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE']] df_diff_test = difference_vector(feature_vectors_test) feature_vectors_test = diff_vec2frame(feature_vectors_test, df_diff_test) # Drop duplicated columns and column of zeros feature_vectors_test = feature_vectors_test.T.drop_duplicates().T feature_vectors_test.drop('diffGRGR', axis = 1, inplace = True) # Create statistical feature differences using previously calculated mean and std values from train data.
feature_vectors_test = feature_stat_diff(feature_vectors_test, frame_mean, frame_std) feature_vectors_test = feature_vectors_test[predictors] scaler = preprocessing.StandardScaler().fit(feature_vectors_test) scaled_features = scaler.transform(feature_vectors_test) predicted_random_forest = alg.predict(scaled_features) predicted_random_forest test_data['Facies'] = predicted_random_forest test_data.to_csv('test_data_prediction_CE.csv') """ Explanation: 5.0 - Predict on test data End of explanation """
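The two scoring functions above can be reproduced without numpy. On a toy 3×3 confusion matrix (hypothetical counts), giving credit for predictions that land in an adjacent class lifts the score from 0.60 to 0.85:

```python
def accuracy(conf):
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(len(conf)))
    return correct / total

def accuracy_adjacent(conf, adjacent):
    total = sum(sum(row) for row in conf)
    correct = sum(conf[i][i] for i in range(len(conf)))
    # Additionally credit predictions falling in an adjacent class,
    # mirroring accuracy_adjacent above.
    correct += sum(conf[i][j] for i in range(len(conf)) for j in adjacent[i])
    return correct / total

# Hypothetical counts; class 0 is adjacent to 1, 1 to 0 and 2, 2 to 1.
toy_conf = [[4, 1, 1],
            [1, 3, 1],
            [2, 2, 5]]
toy_adjacent = [[1], [0, 2], [1]]
plain = accuracy(toy_conf)                           # 12 / 20
lenient = accuracy_adjacent(toy_conf, toy_adjacent)  # (12 + 5) / 20
```

This leniency makes geological sense because adjacent facies are easy to confuse even for human interpreters.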
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/sdk/sdk_automl_tabular_classification_online_explain.ipynb
apache-2.0
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG """ Explanation: Vertex SDK: AutoML training tabular classification model for online explanation <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_tabular_classification_online_explain.ipynb"> Open in Google Cloud Notebooks </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK to create tabular classification models and do online prediction with explanation using a Google Cloud AutoML model. Dataset The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor. Objective In this tutorial, you create an AutoML tabular classification model and deploy for online prediction with explainability from a Python script using the Vertex SDK. 
You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. View the model evaluation. Deploy the Model resource to a serving Endpoint resource. Make a prediction request with explainability. Undeploy the Model. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python. End of explanation """ ! pip3 install -U google-cloud-storage $USER_FLAG """ Explanation: Install the latest GA version of the google-cloud-storage library as well. End of explanation """ ! pip3 install -U tabulate $USER_FLAG if os.getenv("IS_TESTING"): !
pip3 install --upgrade tensorflow $USER_FLAG """ Explanation: Install the latest GA version of Tabulate library as well. End of explanation """ import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime This tutorial does not require a GPU runtime. Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. End of explanation """ REGION = "us-central1" # @param {type: "string"} """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. 
Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. 
Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_NAME """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ ! gsutil ls -al $BUCKET_NAME """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import google.cloud.aiplatform as aip """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation """ aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) """ Explanation: Initialize Vertex SDK for Python Initialize the Vertex SDK for Python for your project and corresponding bucket. 
End of explanation """ IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv" """ Explanation: Tutorial Now you are ready to start creating your own AutoML tabular classification model. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation """ count = ! gsutil cat $IMPORT_FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $IMPORT_FILE | head heading = ! gsutil cat $IMPORT_FILE | head -n1 label_column = str(heading).split(",")[-1].split("'")[0] print("Label Column Name", label_column) if label_column is None: raise Exception("label column missing") """ Explanation: Quick peek at your data This tutorial uses a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. You also need for training to know the heading name of the label column, which is save as label_column. For this dataset, it is the last column in the CSV file. End of explanation """ dataset = aip.TabularDataset.create( display_name="Iris" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE] ) print(dataset.resource_name) """ Explanation: Create the Dataset Next, create the Dataset resource using the create method for the TabularDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. bq_source: Alternatively, import data items from a BigQuery table into the Dataset resource. This operation may take several minutes. 
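The earlier label-column lookup shells out to gsutil and then splits the repr of the captured output, which is fragile. A plain-Python sketch of the same idea, assuming the header row is already available as a string (`pick_label_column` is a hypothetical helper, not part of the Vertex SDK):

```python
# Hypothetical helper: extract the label column (last column) from a CSV header.
def pick_label_column(header_line: str) -> str:
    """Return the last column name of a CSV header row."""
    return header_line.strip().split(",")[-1]

# Header copied from the Iris CSV schema used in this tutorial.
header = "sepal_length,sepal_width,petal_length,petal_width,species"
label_column = pick_label_column(header)
print(label_column)  # species
```

This avoids depending on how the notebook shell magic stringifies its output.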
End of explanation """ dag = aip.AutoMLTabularTrainingJob( display_name="iris_" + TIMESTAMP, optimization_prediction_type="classification", optimization_objective="minimize-log-loss", ) print(dag) """ Explanation: Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLTabularTrainingJob class, with the following parameters: display_name: The human readable name for the TrainingJob resource. optimization_prediction_type: The type of task to train the model for. classification: A tabular classification model. regression: A tabular regression model. column_transformations: (Optional): Transformations to apply to the input columns optimization_objective: The optimization objective to minimize or maximize. binary classification: minimize-log-loss maximize-au-roc maximize-au-prc maximize-precision-at-recall maximize-recall-at-precision multi-class classification: minimize-log-loss regression: minimize-rmse minimize-mae minimize-rmsle The instantiated object is the DAG (directed acyclic graph) for the training pipeline. End of explanation """ model = dag.run( dataset=dataset, model_display_name="iris_" + TIMESTAMP, training_fraction_split=0.6, validation_fraction_split=0.2, test_fraction_split=0.2, budget_milli_node_hours=8000, disable_early_stopping=False, target_column=label_column, ) """ Explanation: Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for training. test_fraction_split: The percentage of the dataset to use for test (holdout data). validation_fraction_split: The percentage of the dataset to use for validation.
target_column: The name of the column to train as the label. budget_milli_node_hours: (optional) Maximum training time specified in units of millihours (1000 = 1 hour). disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. The run method, when completed, returns the Model resource. The execution of the training pipeline will take up to 20 minutes. End of explanation """ # Get model resource ID models = aip.Model.list(filter="display_name=iris_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation) """ Explanation: Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project. End of explanation """ endpoint = model.deploy(machine_type="n1-standard-4") """ Explanation: Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameter: machine_type: The type of compute machine. End of explanation """ INSTANCE = { "petal_length": "1.4", "petal_width": "1.3", "sepal_length": "5.1", "sepal_width": "2.8", } """ Explanation: Send an online prediction request with explainability Send an online prediction with explainability to your deployed model. In this method, the predicted response will include an explanation of how the features contributed to the prediction.
Make test item You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. End of explanation """ instances_list = [INSTANCE] prediction = endpoint.explain(instances_list) print(prediction) """ Explanation: Make the prediction with explanation Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource. Request The format of each instance is: [feature_list] Since the explain() method can take multiple items (instances), send your single test item as a list of one test item. Response The response from the explain() call is a Python dictionary with the following entries: ids: The internal assigned unique identifiers for each prediction request. displayNames: The class names for each class label. confidences: For classification, the predicted confidence, between 0 and 1, per class label. values: For regression, the predicted value. deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions. explanations: The feature attributions End of explanation """ import numpy as np try: label = np.argmax(prediction[0][0]["scores"]) cls = prediction[0][0]["classes"][label] print("Predicted Value:", cls, prediction[0][0]["scores"][label]) except: pass """ Explanation: Understanding the explanations response First, you will look what your model predicted and compare it to the actual value. 
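For reference, the response fields described above (ids, displayNames, confidences) can be navigated with plain Python. The dictionary below is a hand-built stand-in shaped like that description, not real Vertex SDK output:

```python
# Hand-built stand-in mirroring the documented response fields
# (ids, displayNames, confidences); NOT real Vertex SDK output.
mock_prediction = {
    "ids": ["1"],
    "displayNames": ["setosa", "versicolor", "virginica"],
    "confidences": [0.02, 0.91, 0.07],
}

# Pick the class label with the highest confidence.
best = max(range(len(mock_prediction["confidences"])),
           key=lambda i: mock_prediction["confidences"][i])
predicted_class = mock_prediction["displayNames"][best]
print(predicted_class, mock_prediction["confidences"][best])  # versicolor 0.91
```

The same index-of-maximum pattern is what the numpy `argmax` cell below performs on the real response.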
End of explanation """ from tabulate import tabulate feature_names = ["sepal_length", "sepal_width", "petal_length", "petal_width"] attributions = prediction.explanations[0].attributions[0].feature_attributions rows = [] for i, val in enumerate(feature_names): rows.append([val, INSTANCE[val], attributions[val]]) print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"])) """ Explanation: Examine feature attributions Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values. End of explanation """ import random # Prepare 10 test examples to your model for prediction using a random distribution to generate # test instances instances = [] for i in range(10): pl = str(random.uniform(1.0, 2.0)) pw = str(random.uniform(1.0, 2.0)) sl = str(random.uniform(4.0, 6.0)) sw = str(random.uniform(2.0, 4.0)) instances.append( {"petal_length": pl, "petal_width": pw, "sepal_length": sl, "sepal_width": sw} ) response = endpoint.explain(instances) """ Explanation: Check your explanations and baselines To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline. In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method. 
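The additivity property described above — baseline plus the sum of the feature attributions should land close to the predicted score — can be illustrated numerically. All numbers below are made up for illustration:

```python
# Made-up numbers illustrating attribution additivity:
# baseline + sum(attributions) should approximate the predicted score.
baseline_score = 0.33
feature_attributions = {
    "sepal_length": 0.05,
    "sepal_width": -0.02,
    "petal_length": 0.31,
    "petal_width": 0.24,
}
predicted_score = 0.91

reconstructed = baseline_score + sum(feature_attributions.values())
gap = abs(predicted_score - reconstructed)
print(round(reconstructed, 2), round(gap, 2))  # 0.91 0.0
```

A large gap in this check on real responses usually signals a poorly chosen baseline.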
Get explanations End of explanation """ import numpy as np def sanity_check_explanations( explanation, prediction, mean_tgt_value=None, variance_tgt_value=None ): passed_test = 0 total_test = 1 # `attributions` is a dict where keys are the feature names # and values are the feature attributions for each feature baseline_score = explanation.attributions[0].baseline_output_value print("baseline:", baseline_score) # Sanity check 1: if the prediction at the input equals the prediction at the # baseline, use a different baseline. Some suggestions are: random input, # training set mean. if abs(prediction - baseline_score) <= 0.05: print("Warning: example score and baseline score are too close.") print("You might not get attributions.") else: passed_test += 1 print("Sanity Check 1: Passed") print(passed_test, " out of ", total_test, " sanity checks passed.") i = 0 for explanation in response.explanations: try: prediction = np.max(response.predictions[i]["scores"]) except TypeError: prediction = np.max(response.predictions[i]) sanity_check_explanations(explanation, prediction) i += 1 """ Explanation: Sanity check In the function below you perform a sanity check on the explanations. End of explanation """ endpoint.undeploy_all() """ Explanation: Undeploy the model When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation """ delete_all = True if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) # Delete the endpoint using the Vertex endpoint object try: if "endpoint" in globals(): endpoint.delete() except Exception as e: print(e) # Delete the AutoML or Pipeline training job try: if "dag" in globals(): dag.delete() except Exception as e: print(e) # Delete the custom training job try: if "job" in globals(): job.delete() except Exception as e: print(e) # Delete the batch prediction job using the Vertex batch prediction object try: if "batch_predict_job" in globals(): batch_predict_job.delete() except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object try: if "hpt_job" in globals(): hpt_job.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint AutoML Training Job Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation """
ryan-leung/PHYS4650_Python_Tutorial
notebooks/Jan2018/python-matplotlib.ipynb
bsd-3-clause
import matplotlib.pyplot as plt %matplotlib inline import numpy as np """ Explanation: Matplotlib <img src="images/matplotlib.svg" alt="matplotlib" style="width: 600px;"/> Using matplotlib in a Jupyter notebook End of explanation """ x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval y = np.sin(x) # Apply sin function on all x plt.plot(x,y) plt.plot(y) """ Explanation: File Reading Line Plots plt.plot Plot lines and/or markers: * plot(x, y) * plot x and y using default line style and color * plot(x, y, 'bo') * plot x and y using blue circle markers * plot(y) * plot y using x as index array 0..N-1 * plot(y, 'r+') * Similar, but with red plusses run %pdoc plt.plot for more details End of explanation """ x = np.arange(0,10,1) # x = 0,1,2,...,9 y = x*x # Squared x plt.plot(x,y,'bo') # plot x and y using blue circle markers plt.plot(x,y,'r+') # plot x and y using red plusses """ Explanation: Scatter Plots plt.plot can also plot markers. End of explanation """ x = np.arange(-np.pi,np.pi,0.001) plt.plot(x,np.sin(x)) plt.title('y = sin(x)') # title plt.xlabel('x (radians)') # x-axis label plt.ylabel('y') # y-axis label # To plot the axis labels in LaTeX, we can run from matplotlib import rc ## For sans-serif font: rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) rc('text', usetex=True) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) plt.plot(x,np.sin(x)) plt.title(r'T = sin($\theta$)') # title, the `r` in front of the string means raw string plt.xlabel(r'$\theta$ (radians)') # x-axis label, LaTeX syntax should be enclosed in $...$ plt.ylabel('T') # y-axis label """ Explanation: Plot properties Add x-axis and y-axis labels End of explanation """ x1 = np.linspace(0.0, 5.0) x2 = np.linspace(0.0, 2.0) y1 = np.cos(2 * np.pi * x1) * np.exp(-x1) y2 = np.cos(2 * np.pi * x2) plt.subplot(2, 1, 1) plt.plot(x1, y1, '.-') plt.title('Plot 2 graphs at the same time') plt.ylabel('Amplitude (Damped)') plt.subplot(2, 1, 2) plt.plot(x2, y2, '.-') plt.xlabel('time (s)') plt.ylabel('Amplitude (Undamped)') """ Explanation: Multiple plots End of explanation """ plt.plot(x,np.sin(x)) plt.savefig('plot.pdf') plt.savefig('plot.png') # To load the image into this Jupyter notebook from IPython.display import Image Image("plot.png") """ Explanation: Save figure End of explanation """
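The same two-panel figure can also be written with the object-oriented `plt.subplots` interface, which scales better than repeated `plt.subplot` calls. This is a sketch; the Agg backend is selected only so it runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

x1 = np.linspace(0.0, 5.0)
x2 = np.linspace(0.0, 2.0)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 4))
ax1.plot(x1, np.cos(2 * np.pi * x1) * np.exp(-x1), '.-')
ax1.set_title('Plot 2 graphs at the same time')
ax1.set_ylabel('Amplitude (Damped)')
ax2.plot(x2, np.cos(2 * np.pi * x2), '.-')
ax2.set_xlabel('time (s)')
ax2.set_ylabel('Amplitude (Undamped)')
fig.tight_layout()
fig.savefig('plot_subplots.png')
```

Holding explicit `Axes` handles avoids the hidden "current axes" state that `plt.subplot` relies on.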
leonardodaniel/quant-project
analysis/stock_analysis.ipynb
gpl-3.0
import numpy as np import pandas as pd import datetime as dt import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(12,12)) """ Explanation: Stock analysis: returns and volatility This notebook aims to explore the Markowitz theory on modern portfolios with a little code and a little math. Modern portfolio theory seeks to build a portfolio of different assets in such a way as to increase the returns and reduce the risk of holding the portfolio. In almost any treatment of risk I have read, the risk is represented by the standard deviation of the returns and is called volatility. Let's assume that the random walk for stocks is of the form $$ S_{t+1} = S_t + S_t \mu \Delta t + S_t \sigma \epsilon \sqrt{\Delta t} $$ where $S$ is the stock price at time $t$, $\mu$ and $\sigma$ are mean and standard deviation, $\Delta t$ the time step and $\epsilon$ a normally distributed random variable with mean zero and variance one. Under this assumption, $\sigma$ indicates how scattered the returns will be in the future and, under certain conditions, this scattering can lead to a loss. So an investor seeks to maximize returns, but keeping volatility at bay, so as to reduce the chance of losing money. For this analysis we will use only basic numerical libraries, to show how some algorithms work without much of a black box, and we will work with real stock data. End of explanation """ # beware! it takes a numpy array def calculate_volatility(df, t=10): # default timespan for volatility rets = np.diff(np.log(df)) vol = np.zeros(rets.size - t) for i in range(vol.size): vol[i] = np.std(rets[i:(t+i)]) return np.sqrt(260)*vol """ Explanation: The volatility calculation uses the log return for each day, computed from the close price of the stock.
As we are interested in the annual volatility, this needs to be scaled by a factor $\sqrt{n}$, where $n$ is the number of working days in a year (I will assume 260) End of explanation """ symbols = np.loadtxt("../utils/symbols", dtype=str) n_sym = symbols.size data = {} for s in map(lambda x: x[2:-1], symbols): data[str(s)] = pd.read_csv("../data/{}.csv".format(str(s))) data[str(s)].sort_values(by="Date", ascending=True, inplace=True) t = data["AAPL"].Date[64:] t = [dt.datetime.strptime(d,'%Y-%m-%d').date() for d in t] plt.subplot(411) plt.plot(t, calculate_volatility(data["AAPL"].Close.values, 63)) plt.subplot(412) plt.plot(t, calculate_volatility(data["AMZN"].Close.values, 63)) plt.subplot(413) plt.plot(t, calculate_volatility(data["GOOG"].Close.values, 63)) plt.subplot(414) plt.plot(t, calculate_volatility(data["MSFT"].Close.values, 63)) plt.tight_layout() """ Explanation: Using the scripts on utils it's possible to download the data from Yahoo! Finance for the following stocks (by default): AAPL AMZN GOOG MSFT We will use the function defined above to plot the running volatility of those stocks on a 63 days window. End of explanation """ def repair_split(df, info): # info is a list of tuples [(date, split-ratio)] temp = df.Close.values.copy() for i in info: date, ratio = i mask = np.array(df.Date >= date) temp[mask] = temp[mask]*ratio return temp aapl_info = [("2005-02-28", 2), ("2014-06-09", 7)] plt.figure() plt.subplot(411) plt.plot(t, calculate_volatility(repair_split(data["AAPL"], aapl_info), 63)) plt.subplot(412) plt.plot(t, calculate_volatility(data["AMZN"].Close.values, 63)) plt.subplot(413) plt.plot(t, calculate_volatility(repair_split(data["GOOG"], [("2014-03-27", 2)]), 63)) plt.subplot(414) plt.plot(t, calculate_volatility(data["MSFT"].Close.values, 63)) plt.tight_layout() """ Explanation: I forgot to check if in the time in which those stocks are defined there was any splitting (that's why it's important explorative analysis). 
AAPL splitted on Febbruary 28 2005 by 2:1 and June 9 2014 by 7:1, while GOOG splitted on April 2 2014 (not a real split, it generated another kind of stock, splitting by 2:1) End of explanation """ rets = {key:np.diff(np.log(df.Close.values)) for key, df in data.items()} corr = np.corrcoef(list(rets.values())) rets.keys(), corr plt.xticks(range(4), rets.keys(), rotation=45) plt.yticks(range(4), rets.keys()) plt.imshow(corr, interpolation='nearest') """ Explanation: Now let's build a data structure with only daily return for every stock. The model for returns is given by a compounding of the daily return, like $$ S_{n+1} = S_{n}e^{r} $$ End of explanation """ def normalize(v): return v/v.sum() portfolios = np.random.uniform(0, 1, size=(1000, 4)) # 1000 random portfolios portfolios = np.apply_along_axis(normalize, 1, portfolios) # normalize so that they sum 1 # total returns per dollar per portfolio total_returns = np.dot(portfolios, list(rets.values())) mean = 260*total_returns.mean(axis=1) std = np.sqrt(260)*total_returns.std(axis=1) plt.scatter(std, mean) plt.xlabel("Annual volatility") plt.ylabel("Annual returns") """ Explanation: Too bad there arent negative correlations, after all, they all play more or less on the same areas. So, given those stocks, what combination yields the best portfolio (bigger return, lower risk) with a given ammount of investment capital? The optimization problem isn't so hard numericaly, but its possible to derive an analytical expression. What follows is know as modern portfolio theory. First, let's explore what I think is a pretty beautifull result (because I didn't expected it). We will allocate randomly those stocks in order to generate a lot of random portfolios with the same initial value. 
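Before running on more data, `calculate_volatility` can be sanity-checked on synthetic prices built exactly by this compounding rule, with a known daily sigma (the 0.02 below is an assumed value, not fitted to anything; the helper is restated so the sketch is self-contained):

```python
import numpy as np

def calculate_volatility(prices, t=10):
    # Same helper as defined earlier; takes a numpy array of close prices.
    rets = np.diff(np.log(prices))
    vol = np.zeros(rets.size - t)
    for i in range(vol.size):
        vol[i] = np.std(rets[i:(t + i)])
    return np.sqrt(260) * vol

rng = np.random.default_rng(0)
daily_sigma = 0.02                              # assumed "true" daily volatility
log_rets = rng.normal(0.0, daily_sigma, 2000)   # the r in S_{n+1} = S_n * e^r
prices = 100.0 * np.exp(np.cumsum(log_rets))    # compounded price path

vol = calculate_volatility(prices, t=63)
# The annualized estimates should hover around daily_sigma * sqrt(260).
print(round(vol.mean(), 3), round(daily_sigma * np.sqrt(260), 3))
```

If the running estimate drifted far from the known input sigma here, the real-data plots above could not be trusted either.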
End of explanation """ rets = {key:np.diff(np.log(df.Close.values)) for key, df in data.items()} corr = np.corrcoef(list(rets.values())) rets.keys(), corr plt.xticks(range(4), rets.keys(), rotation=45) plt.yticks(range(4), rets.keys()) plt.imshow(corr, interpolation='nearest') """ Explanation: Too bad there aren't negative correlations; after all, they all play in more or less the same areas. So, given those stocks, what combination yields the best portfolio (bigger return, lower risk) with a given amount of investment capital? The optimization problem isn't so hard numerically, but it's possible to derive an analytical expression. What follows is known as modern portfolio theory. First, let's explore what I think is a pretty beautiful result (because I didn't expect it). We will randomly allocate those stocks in order to generate a lot of random portfolios with the same initial value. End of explanation """ def normalize(v): return v/v.sum() portfolios = np.random.uniform(0, 1, size=(1000, 4)) # 1000 random portfolios portfolios = np.apply_along_axis(normalize, 1, portfolios) # normalize so that they sum to 1 # total returns per dollar per portfolio total_returns = np.dot(portfolios, list(rets.values())) mean = 260*total_returns.mean(axis=1) std = np.sqrt(260)*total_returns.std(axis=1) plt.scatter(std, mean) plt.xlabel("Annual volatility") plt.ylabel("Annual returns") """ Explanation: It's easy to see that there is a hard upper limit on the points for a given volatility. That curve is called the efficient frontier and it represents the best portfolio allocation (less risk for an expected return, or bigger returns for a chosen risk). It looks somewhat like a parabola. As said before, it's possible to derive some analytical expressions for that curve. Let $w_i$ be a weight vector that represents the amount of stock $i$, and let its sum be one (scale isn't important here, so it could be set to the initial investment, but the number one is easy and I don't carry more symbols). The expected return of the i-th stock is $r_i$, so the total return for a portfolio is $$ r = w_i r^i $$ (summation over repeated indices is implied). In the same way, the total volatility (standard deviation) of the portfolio is $$ \sigma^2 = K_i^j w_j w^i $$ where $K$ is the covariance matrix, and the condition on the weights is expressed as $$ w_i 1^i = 1 $$ where $1^i$ is a vector of ones. If we choose an expected return, we can build an optimal portfolio by minimizing the standard deviation. So the problem becomes $$ min\left( K_i^j w_j w^i \,\,\,|\,\,\, w_i 1^i = 1,\,\,\, r = w_i r^i \right) $$ I think the right side may bring some confusion: the $w_i$ aren't bounded, only the $r$. In fact, if $r^i$ is an n-dimensional vector, for a given $r$ there is a full subspace of dimension $n-1$ of weights.
The Lagrange multiplier problem can be solved by minimizing $$ \Lambda(w, \lambda) = K_j^i w_i w^j + \lambda_1 \left( w_i 1^i - 1 \right) + \lambda_2 \left( w_i r^i - r \right) $$ $$ \frac{\partial\Lambda}{\partial w_i} = 2 K_j^i w^j + \lambda_1 1^i + \lambda_2 r^i = 0 $$ and solving for $w^j$ yields $$ w^j = -\frac{1}{2} (K_j^i)^{-1} \left( \lambda_1 1^i + \lambda_2 r^i \right) $$ The term between parentheses can be put in a concise way as $$ (\lambda \cdot R)^T $$ where $\lambda$ is a 2-dimensional row vector and R a $2 \times q$ matrix (with q the number of stocks) $$ \lambda = (\lambda_1 \,\,\,\,\,\lambda_2) $$ $$ R = (1^i\,\,\,\,\,r^i)^T $$ This way, the constraint conditions can also be put as $$ R w^j = (1\,\,\,\,\,r)^T $$ In this last expression, the weight can be replaced with the solution above, returning $$ -\frac{1}{2}\lambda \cdot \left[ R (K_j^i)^{-1} R^T \right] = (1\,\,\,\,\,r) $$ Calling $M$ that messy $2\times 2$ matrix in brackets, it's possible to solve $\lambda$ as $$ \lambda = -(2\,\,\,\,\,2r) \cdot M^{-1} $$ It's easy to check that the matrix $M$, and hence also its inverse, are symmetric. And with this, the variance can be (finally) solved: $$ \sigma^2 = K_i^j w_j w^i = \frac{1}{4}\lambda R K^{-1} K K^{-1} R^T \lambda^T $$ $$ = \frac{1}{4}\lambda R K^{-1} R^T \lambda^T = \frac{1}{4}\lambda M \lambda^T $$ $$ = (1\,\,\,\,\,r) M^{-1} (1\,\,\,\,\,r)^T $$ That will be a very long calculation. I will just put the final result (remember, that formula is a scalar). The elements of M are $$ M_{00} = 1_i (K_j^i)^{-1} 1^i $$ $$ M_{11} = r_i (K_j^i)^{-1} r^i $$ $$ M_{10} = M_{01} = 1_i (K_j^i)^{-1} r^i $$ and the minimal variance, as a function of the desired return, is $$ \sigma^2(r) = \frac{M_{00} r^2 - 2M_{01} r + M_{11}}{M_{00}M_{11} - M_{01}^2} $$ and the weights are $$ w^j = (K_j^i)^{-1} R^T M^{-1} (1\,\,\,\,\,r)^T $$ I was wrong, the plot of variance-mean is a parabola, but it seems that volatility-mean is a hyperbola.
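The closed-form solution above can be verified numerically on a small made-up 3-asset problem (the covariance matrix and expected returns below are illustrative, not the real stock data): the analytic weights must satisfy both constraints, and no other feasible portfolio should achieve lower variance.

```python
import numpy as np

K = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.12]])      # made-up covariance matrix
r_vec = np.array([0.08, 0.12, 0.15])    # made-up expected returns
r_target = 0.11

R = np.vstack([np.ones(3), r_vec])
Kinv = np.linalg.inv(K)
M = R @ Kinv @ R.T
x = np.array([1.0, r_target])

w = Kinv @ R.T @ np.linalg.inv(M) @ x    # the closed-form weights
var_analytic = x @ np.linalg.inv(M) @ x  # the closed-form minimal variance

assert np.isclose(w.sum(), 1.0)          # budget constraint holds
assert np.isclose(w @ r_vec, r_target)   # return constraint holds

# Any other feasible portfolio (same budget, same target return) is
# w + z * n_dir, where n_dir spans the null space of R; its variance
# can never drop below the analytic minimum.
rng = np.random.default_rng(1)
_, _, Vt = np.linalg.svd(R)
n_dir = Vt[-1]
for _ in range(200):
    w_other = w + rng.normal() * n_dir
    assert w_other @ K @ w_other >= var_analytic - 1e-12
```

Since the problem is a convex quadratic with linear constraints, the brute-force feasible directions can only add variance, which is exactly what the loop confirms.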
So, returning to code: End of explanation """ K = 260*np.cov(list(rets.values())) # annual covariance R = np.array([np.ones(4), 260*np.mean(list(rets.values()), axis=1)]) x = np.array([1, 0.15]) # target a 15% annual return M = np.dot(R, np.dot(np.linalg.inv(K), R.transpose())) variance = np.dot(x, np.dot(np.linalg.inv(M), x.transpose())) volatility = np.sqrt(variance) weights = np.dot(np.linalg.inv(K), np.dot(R.transpose(), np.dot(np.linalg.inv(M), x.transpose()))) volatility, weights """ Explanation: For example, as of today (24 January 2017) the market closed with the following prices for the stocks in this list: GOOG: 823.87 MSFT: 63.52 AMZN: 822.44 AAPL: 119.97 With the asset allocation suggested by the optimum, if I have $10000 to invest, I will need to: Buy 2 shares of GOOG (2.5) Buy 66 shares of MSFT (66.5) Buy 4 shares of AMZN (4.4) Don't buy AAPL (0.4) Put the remaining $870.18 to take the LIBOR rate? or to rebalance the portfolio? options? Another possible kind of optimization is to maximize the Sharpe ratio, defined as the ratio of the expected return to the volatility. One can think of it as the return per unit of risk, so maximizing it is indeed a sensible optimization. We know that any optimal portfolio is on the efficient frontier, so having an expression for this curve we only need to maximize $$ S = \frac{r}{\sigma} $$ The expression for the volatility as a function of the desired return can be put as $$ \sigma^2 = ar^2 + br + c $$ As we are interested only in the optimal curve, we will consider only the right side of this parabola. This way, we have the additional advantage of an invertible function on its domain. The return then has the solution $$ r = \frac{-b + \sqrt{b^2 - 4a(c - \sigma^2)}}{2a} $$ (seems like cheating, I know...), so the Sharpe ratio becomes $$ S = \frac{-b + \sqrt{b^2 - 4a(c - \sigma^2)}}{2a \sigma} $$ Do note that the part under the square root is always positive thanks to the Bessel inequality, at least for this problem, so the Sharpe ratio will always be positive.
Taking the derivative of S with respect to $\sigma$ and solving for the maximum, the solutions are $$ \sigma = \pm \sqrt{\frac{4ac^2 - b^2 c}{b^2}} $$ and we take the positive value. Substituting the original quantities back, we obtain a very simple expression for the volatility: $$ \sigma = \sqrt{\frac{M_{11}}{M_{01}^2}} $$ and the return: $$ r = \frac{M_{11}}{M_{10}} $$ With the volatility, the other quantities can be calculated as well using the formulas, so let's return again to code: End of explanation """ volatility = np.sqrt(M[1,1]/(M[0,1]*M[0,1])) returns = M[1,1]/M[1,0] x = np.array([1, returns]) weights = np.dot(np.linalg.inv(K), np.dot(R.transpose(), np.dot(np.linalg.inv(M), x.transpose()))) returns, volatility, weights """ Explanation: This time the algorithm asks us to buy a lot of AMZN, some GOOG, and to sell a little of the others. With the same $10000 the portfolio distribution would be: Buy 2 shares of GOOG (2.5) Sell 9 shares of MSFT (8.9) Buy 11 shares of AMZN (11.3) Sell 6 shares of AAPL (6.5) With the remaining $596.92 to play with. It's possible to visualize the Sharpe factor for different volatilities, rewriting the equation of the returns as a function of the volatility as $$ r = \frac{M_{10} + \sqrt{det(M)}\sqrt{M_{00}\sigma^2 - 1}}{M_{00}} $$ and hence the Sharpe ratio as $$ S = \frac{M_{10} + \sqrt{det(M)}\sqrt{M_{00}\sigma^2 - 1}}{\sigma M_{00}} $$ End of explanation """ sig = np.linspace(0.28, 0.6, 100) sharpe = (M[1,0] + np.sqrt(np.linalg.det(M))*np.sqrt(sig*sig*M[0,0] - 1))/(sig*M[0,0]) plt.plot(sig, sharpe) """ Explanation: End of explanation """
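The tangency formulas can be checked numerically on the same kind of made-up 3-asset inputs (again illustrative, not the real data): the point $r = M_{11}/M_{01}$, $\sigma = \sqrt{M_{11}/M_{01}^2}$ should lie on the frontier and have a Sharpe ratio at least as large as any other frontier point.

```python
import numpy as np

K = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.12]])      # made-up covariance matrix
r_vec = np.array([0.08, 0.12, 0.15])    # made-up expected returns
R = np.vstack([np.ones(3), r_vec])
M = R @ np.linalg.inv(K) @ R.T

r_star = M[1, 1] / M[0, 1]              # tangency return, r = M11 / M01
sig_star = np.sqrt(M[1, 1]) / M[0, 1]   # tangency volatility, sqrt(M11 / M01^2)

def frontier_sigma(r):
    """Minimal volatility achievable for a target return r."""
    x = np.array([1.0, r])
    return np.sqrt(x @ np.linalg.inv(M) @ x)

# The tangency point lies on the frontier...
assert np.isclose(frontier_sigma(r_star), sig_star)
# ...and its Sharpe ratio dominates every other frontier point's.
for r in np.linspace(0.5 * r_star, 1.5 * r_star, 50):
    assert r / frontier_sigma(r) <= r_star / sig_star + 1e-9
```

A convenient by-product of the derivation is the identity $S^{*2} = M_{11}$ at the tangency point, which the test below also exploits.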
georgetown-analytics/machine-learning
archive/notebook/Clustering Flag Data - Pipeline.ipynb
mit
import os import requests import numpy as np import pandas as pd import matplotlib.cm as cm import matplotlib.pyplot as plt from sklearn import manifold from sklearn.cluster import KMeans, AgglomerativeClustering from sklearn.decomposition import PCA from sklearn.metrics import silhouette_samples, silhouette_score from sklearn.metrics.pairwise import euclidean_distances from sklearn.preprocessing import OneHotEncoder from sklearn.pipeline import Pipeline from time import time %matplotlib inline pd.set_option('max_columns', 500) """ Explanation: This notebook duplicates the process from Clustering Flag Data.ipynb, but implements a pipeline with OneHotEncoder instead. Predicting Religion from Country Flags Professor Bengfort put together a notebook using the UCI Machine Learning Repository flags dataset to predict the religion of a country based on the attributes of their flags. What if we had the same data, without the religion column? Can we used unsupervised machine learning to draw some conclusions about the data? 🇦🇫🇦🇽🇦🇱🇩🇿🇦🇸🇦🇩🇦🇴🇦🇮🇦🇶🇦🇬🇦🇷🇦🇲🇦🇼🇦🇺🇦🇹🇦🇿🇧🇸🇧🇭🇧🇩🇧🇧🇧🇾🇧🇪🇧🇿🇧🇯🇧🇲🇧🇹🇧🇴🇧🇶🇧🇦🇧🇼🇧🇷🇮🇴 Here is some infomation about our dataset: Data Set Information: This data file contains details of various nations and their flags. In this file the fields are separated by spaces (not commas). With this data you can try things like predicting the religion of a country from its size and the colours in its flag. 10 attributes are numeric-valued. The remainder are either Boolean- or nominal-valued. 
Attribute Information: name: Name of the country concerned landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 5=Asia, 6=Oceania zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW area: in thousands of square km population: in round millions language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others bars: Number of vertical bars in the flag stripes: Number of horizontal stripes in the flag colours: Number of different colours in the flag red: 0 if red absent, 1 if red present in the flag green: same for green blue: same for blue gold: same for gold (also yellow) white: same for white black: same for black orange: same for orange (also brown) mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue) circles: Number of circles in the flag crosses: Number of (upright) crosses saltires: Number of diagonal crosses quarters: Number of quartered sections sunstars: Number of sun or star symbols crescent: 1 if a crescent moon symbol present, else 0 triangle: 1 if any triangles present, 0 otherwise icon: 1 if an inanimate image present (e.g., a boat), otherwise 0 animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise topleft: colour in the top-left corner (moving right to decide tie-breaks) botright: colour in the bottom-right corner (moving left to decide tie-breaks) End of explanation """ # You should recognize this from the Wheat notebook URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data" def fetch_data(fname='flags.txt'): """ Helper method to retrieve the ML Repository dataset.
""" response = requests.get(URL) outpath = os.path.abspath(fname) with open(outpath, 'wb') as f: f.write(response.content) return outpath # Fetch the data if required DATA = fetch_data() # Load data and do some simple data management # We are going to define the names from the features and build a dictionary to convert our categorical features. FEATS = [ "name", "landmass", "zone", "area", "population", "language", "religion", "bars", "stripes", "colours", "red", "green", "blue", "gold", "white", "black", "orange", "mainhue", "circles", "crosses", "saltires", "quarters", "sunstars", "crescent", "triangle", "icon", "animate", "text", "topleft", "botright", ] COLOR_MAP = {"red": 1, "blue": 2, "green": 3, "white": 4, "gold": 5, "black": 6, "orange": 7, "brown": 8} # Load Data df = pd.read_csv(DATA, header=None, names=FEATS) df.head() # Now we will use the dictionary to convert categoricals into int values for k,v in COLOR_MAP.items(): df.ix[df.mainhue == k, 'mainhue'] = v for k,v in COLOR_MAP.items(): df.ix[df.topleft == k, 'topleft'] = v for k,v in COLOR_MAP.items(): df.ix[df.botright == k, 'botright'] = v df.mainhue = df.mainhue.apply(int) df.topleft = df.topleft.apply(int) df.botright = df.botright.apply(int) df.describe() """ Explanation: Let's grab the data and set it up for analysis. End of explanation """ feature_names = [ "landmass", "zone", "area", "population", "language", "bars", "stripes", "colours", "red", "green", "blue", "gold", "white", "black", "orange", "mainhue", "circles", "crosses", "saltires", "quarters", "sunstars", "crescent", "triangle", "icon", "animate", "text", "topleft", "botright", ] X = df[feature_names] y = df.religion """ Explanation: Clustering Clustering is an unsupervised machine learning method. This means we don't have to have a value we are predicting. You can use clustering when you know this information as well. Scikit-learn provides a number of metrics you can employ with a "known ground truth" (i.e. 
the values you are predicting). We won't cover them here, but you can use this notebook to add some cells, create your "y" value, and explore the metrics described here. In the case of the flags data, we do have our "known ground truth". However, for the purpose of this exercise we are going to drop that information out of our data set. We will use it later with Agglomerative Clustering. End of explanation """ K = range(1,10) meandistortions = [] for k in K: clf = Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=k, n_jobs=-1, random_state=1)) ]) Z = clf.fit_transform(X) meandistortions.append(sum(np.min(euclidean_distances(Z, clf.steps[1][1].cluster_centers_.T), axis=1)) / X.shape[0]) plt.plot(K, meandistortions, 'bx-') plt.xlabel('k') plt.ylabel('Average distortion') plt.title('Selecting k with the Elbow Method') plt.show() """ Explanation: KMeans Clustering Let's look at KMeans clustering first. "K-means is a simple unsupervised machine learning algorithm that groups a dataset into a user-specified number (k) of clusters. The algorithm is somewhat naive--it clusters the data into k clusters, even if k is not the right number of clusters to use. Therefore, when using k-means clustering, users need some way to determine whether they are using the right number of clusters." One way to determine the number of clusters is through the "elbow" method. Using this method, we try a range of values for k and evaluate the "variance explained as a function of the number of clusters".
End of explanation """ pipeline =Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=3, n_jobs=-1, random_state=1)) ]) pipeline.fit(X) labels = pipeline.steps[1][1].labels_ silhouette_score(X, labels, metric='euclidean') pipeline =Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=5, n_jobs=-1, random_state=1)) ]) pipeline.fit(X) labels = pipeline.steps[1][1].labels_ silhouette_score(X, labels, metric='euclidean') pipeline =Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=4, n_jobs=-1, random_state=1)) ]) pipeline.fit(X) labels = pipeline.steps[1][1].labels_ silhouette_score(X, labels, metric='euclidean') """ Explanation: If the line chart looks like an arm, then the "elbow" on the arm is the value of k that is the best. Our goal is to choose a small value of k that still has a low variance. The elbow usually represents where we start to have diminishing returns by increasing k. However, the elbow method doesn't always work well; especially if the data is not very clustered. Based on our plot, it looks like k=3 is worth looking at. How do we measure which might be better? We can use the Silhouette Coefficient. A higher Silhouette Coefficient score relates to a model with better defined clusters. Look at the silhouette score based on some different k values. End of explanation """ pipeline =Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=8, n_jobs=-1, random_state=1)) ]) pipeline.fit(X) labels = pipeline.steps[1][1].labels_ silhouette_score(X, labels, metric='euclidean') """ Explanation: We can see above, k=3 has the better score. As implemented in scikit-learn, KMeans will use 8 clusters by default. 
Given our data, it makes sense to try this out since our data actually has 8 potential labels (look at "religion" in the data description above). Based on the plot above, we should expect the silhouette score for k=8 to be less than for k=4. End of explanation """ # Code adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html def silhouette_plot(X, range_n_clusters = range(2, 12, 2)): for n_clusters in range_n_clusters: # Create a subplot with 1 row and 2 columns fig, (ax1, ax2) = plt.subplots(1, 2) fig.set_size_inches(18, 7) # The 1st subplot is the silhouette plot # The silhouette coefficient can range from -1, 1 ax1.set_xlim([-.1, 1]) # The (n_clusters+1)*10 is for inserting blank space between silhouette # plots of individual clusters, to demarcate them clearly. ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10]) # Initialize the clusterer with n_clusters value and a random generator # seed of 10 for reproducibility. '''clusterer = KMeans(n_clusters=n_clusters, random_state=10) cluster_labels = clusterer.fit_predict(X)''' clusterer = Pipeline([ ('encoder', OneHotEncoder(categorical_features=[0, 1, 4, 7, 15, 26, 27])), ('estimator', KMeans(n_clusters=n_clusters, n_jobs=-1, random_state=1)) ]) cluster_labels = clusterer.fit_predict(X) # The silhouette_score gives the average value for all the samples.
# This gives a perspective into the density and separation of the formed # clusters silhouette_avg = silhouette_score(X, cluster_labels) print("For n_clusters =", n_clusters, "The average silhouette_score is :", silhouette_avg) # Compute the silhouette scores for each sample sample_silhouette_values = silhouette_samples(X, cluster_labels) y_lower = 10 for i in range(n_clusters): # Aggregate the silhouette scores for samples belonging to # cluster i, and sort them ith_cluster_silhouette_values = \ sample_silhouette_values[cluster_labels == i] ith_cluster_silhouette_values.sort() size_cluster_i = ith_cluster_silhouette_values.shape[0] y_upper = y_lower + size_cluster_i color = cm.spectral(float(i) / n_clusters) ax1.fill_betweenx(np.arange(y_lower, y_upper), 0, ith_cluster_silhouette_values, facecolor=color, edgecolor=color, alpha=0.7) # Label the silhouette plots with their cluster numbers at the middle ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i)) # Compute the new y_lower for next plot y_lower = y_upper + 10 # 10 for the 0 samples ax1.set_title("The silhouette plot for the various clusters.") ax1.set_xlabel("The silhouette coefficient values") ax1.set_ylabel("Cluster label") # The vertical line for average silhouette score of all the values ax1.axvline(x=silhouette_avg, color="red", linestyle="--") ax1.set_yticks([]) # Clear the yaxis labels / ticks ax1.set_xticks([0, 0.2, 0.4, 0.6, 0.8, 1]) # 2nd Plot showing the actual clusters formed colors = cm.spectral(cluster_labels.astype(float) / n_clusters) ax2.scatter(X.ix[:, 0], X.ix[:, 1], marker='.', s=30, lw=0, alpha=0.7, c=colors) # Labeling the clusters centers = clusterer.steps[1][1].cluster_centers_ # Draw white circles at cluster centers ax2.scatter(centers[:, 0], centers[:, 1], marker='o', c="white", alpha=1, s=200) for i, c in enumerate(centers): ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1, s=50) ax2.set_title("The visualization of the clustered data.") ax2.set_xlabel("Feature space for the 1st 
feature") ax2.set_ylabel("Feature space for the 2nd feature") plt.suptitle(("Silhouette analysis for KMeans clustering on sample data " "with n_clusters = %d" % n_clusters), fontsize=14, fontweight='bold') plt.show() silhouette_plot(X) """ Explanation: We can also visualize what our clusters look like. The function below will plot the clusters and visulaize their silhouette scores. End of explanation """ # Code adapted from http://scikit-learn.org/stable/auto_examples/cluster/plot_digits_linkage.html # Visualize the clustering def plot_clustering(X_red, X, labels, title=None): x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0) X_red = (X_red - x_min) / (x_max - x_min) plt.figure(figsize=(6, 4)) for i in range(X_red.shape[0]): plt.text(X_red[i, 0], X_red[i, 1], str(y[i]), color=plt.cm.spectral(labels[i] / 10.), fontdict={'weight': 'bold', 'size': 9}) plt.xticks([]) plt.yticks([]) if title is not None: plt.title(title, size=17) plt.axis('off') plt.tight_layout() print("Computing embedding") X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X) print("Done.") for linkage in ('ward', 'average', 'complete'): clustering = AgglomerativeClustering(linkage=linkage, n_clusters=8) t0 = time() clustering.fit(X_red) print("%s : %.2fs" % (linkage, time() - t0)) plot_clustering(X_red, X, clustering.labels_, "%s linkage" % linkage) plt.show() """ Explanation: If we had just used silhouette scores, we would have missed that a lot of our data is actually not clustering very well. The plots above should make us reevaluate whether clustering is the right thing to do on our data. Hierarchical clustering Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. 
See the Wikipedia page for more details. Agglomerative Clustering The AgglomerativeClustering object performs a hierarchical clustering using a bottom up approach: each observation starts in its own cluster, and clusters are successively merged together. The linkage criteria determines the metric used for the merge strategy: * Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. * Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters. * Average linkage minimizes the average of the distances between all observations of pairs of clusters. AgglomerativeClustering can also scale to large number of samples when it is used jointly with a connectivity matrix, but is computationally expensive when no connectivity constraints are added between samples: it considers at each step all the possible merges. End of explanation """
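The merge tree that agglomerative clustering builds can also be drawn as a dendrogram. A minimal sketch on a synthetic two-blob sample (the notebook's `X_red` could be passed to `linkage` in the same way); `method='ward'` here matches the `linkage='ward'` criterion used above:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

# Two well-separated synthetic blobs stand in for the flag data here
rng = np.random.RandomState(0)
sample = np.vstack([rng.normal(0, 1, (10, 2)),
                    rng.normal(10, 1, (10, 2))])

# Build the full merge tree with Ward's variance-minimizing criterion
Z = linkage(sample, method='ward')
dendrogram(Z)
plt.title('Ward linkage dendrogram')
plt.show()
```

Each of the N-1 rows of `Z` records one merge (the two clusters joined, their distance, and the new cluster size), so the dendrogram lets you read off a clustering at any cut height rather than committing to one `n_clusters` up front.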
amanahuja/adaptive_resonance_networks
ipynb/ART2_demo_01.ipynb
mit
%load_ext autoreload %autoreload 2 import os import numpy as np from IPython.display import Image # make sure we're in the root directory pwd = os.getcwd() if pwd.endswith('ipynb'): os.chdir('..') #print os.getcwd() """ Explanation: ART2 demo Adaptive Resonance Theory Neural Networks by Aman Ahuja | github.com/amanahuja | twitter: @amanqa Overview In this example: * We'll use 10x10 binary ASCII blocks to demonstrate ART2 * Yes, this is the same data we used for ART1 End of explanation """ Image("data/architecture_art2.png", width=500) """ Explanation: ART2 Architecture End of explanation """ from ART2 import ART2 # This is my data! idata = np.array([0.8, 0.6]) nn = ART2(n=len(idata), m=2, rho=0.9, theta=0.1) nn.start_logging(to_file=False, to_console=True) nn.learning_trial(idata = idata) nn.stop_logging() """ Explanation: Mini ART2 tests From the book End of explanation """ print nn.Bij.T print print nn.Tji nn.start_logging() # second pattern idata = np.array([0.6, 0.8]) nn.learning_trial(idata = idata) nn.stop_logging() """ Explanation: F2 activations $y_j$ Cluster unit activations are: $$y_j = \sum_{i}{b_{ij}}{p_i}$$ given by np.dot(art2.Bij.T, art2.pi) End of explanation """ # data directory data_dir = 'data' print os.listdir(data_dir) # ASCII data file data_file = 'ASCII_01.txt' with open(os.path.join(data_dir, data_file), 'r') as f: raw_data = f.read() # Get data into a usable form here data = [d.strip() for d in raw_data.split('\n\n')] data = [d for d in data if d is not ''] data = [d.replace('\n', '') for d in data] # print the data data """ Explanation: [Load data] Data is a series of png images pixelated drawings of letters End of explanation """ def format_output(raw): out = "{}\n{}\n{}\n{}\n{}".format( raw[:5], raw[5:10], raw[10:15], raw[15:20], raw[20:25], ) return out from collections import Counter import numpy as np def preprocess_data(data): """ Convert to numpy array Convert to 1s and 0s """ # Get useful information from first row if data[0]: irow 
= data[0] # get size idat_size = len(irow) # get unique characters chars = False while not chars: chars = get_unique_chars(irow, reverse=True) char1, char2 = chars outdata = [] idat = np.zeros(idat_size, dtype=bool) #convert to boolean using the chars identified for irow in data: assert len(irow) == idat_size, "data row lengths not consistent" idat = [x==char1 for x in irow] # note: idat is a list of bools idat =list(np.array(idat).astype(int)) outdata.append(idat) outdata = np.array(outdata) return outdata.astype(int) def get_unique_chars(irow, reverse=False): """ Get unique characters in data Helper function ---- reverse: bool Reverses order of the two chars returned """ chars = Counter(irow) if len(chars) > 2: raise Exception("Data is not binary") elif len(chars) < 2: # first row doesn't contain both chars return False, False # Reorder here? if reverse: char2, char1 = chars.keys() else: char1, char2 = chars.keys() return char1, char2 # Examine one print format_output(data[0]) """ Explanation: Helper functions End of explanation """ from ART2 import ART2 from collections import defaultdict # create network input_row_size = 25 max_categories = 10 rho = 0.20 network = ART2(n=input_row_size, m=max_categories, rho=rho) # preprocess data data_cleaned = preprocess_data(data) # shuffle data? 
np.random.seed(1221) np.random.shuffle(data_cleaned) # learn data array, row by row for row in data_cleaned: network.learn(row) print print "n rows of data: ", len(data_cleaned) print "max categories allowed: ", max_categories print "rho: ", rho print "n categories used: ", network.n_cats print # output results, row by row output_dict = defaultdict(list) for row, row_cleaned in zip (data, data_cleaned): pred = network.predict(row_cleaned) output_dict[pred].append(row) for k,v in output_dict.iteritems(): print "category: {}, ({} members)".format(k, len(v)) print '-'*20 for row in v: print format_output(row) print print # \ print "'{}':{}".format( # row, # network.predict(row_cleaned)) """ Explanation: DO End of explanation """
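To make the F2 activation rule from earlier in this notebook ($y_j = \sum_i b_{ij} p_i$) concrete, here is a standalone toy computation of the winner-take-all step. The weights and F1 signal below are made up; on a trained network they would correspond to `nn.Bij` and the `p` activity, respectively.

```python
import numpy as np

# Made-up bottom-up weights: 2 input units x 2 F2 cluster units
Bij = np.array([[0.8, 0.1],
                [0.6, 0.9]])
p = np.array([0.8, 0.6])    # toy F1 signal

y = Bij.T @ p               # y_j = sum_i b_ij * p_i
winner = int(np.argmax(y))  # F2 unit selected for the resonance test
print(y, winner)
```

In the full algorithm this winner is only provisional: it is then checked against the vigilance parameter rho, and reset if the match is too poor.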
mne-tools/mne-tools.github.io
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
bsd-3-clause
import os.path as op import numpy as np import matplotlib.pyplot as plt from itertools import compress import mne fnirs_data_folder = mne.datasets.fnirs_motor.data_path() fnirs_cw_amplitude_dir = op.join(fnirs_data_folder, 'Participant-1') raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True) raw_intensity.load_data() """ Explanation: Preprocessing functional near-infrared spectroscopy (fNIRS) data This tutorial covers how to convert functional near-infrared spectroscopy (fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and deoxyhaemoglobin (HbR) concentration, view the average waveform, and topographic representation of the response. Here we will work with the fNIRS motor data <fnirs-motor-dataset>. End of explanation """ raw_intensity.annotations.set_durations(5) raw_intensity.annotations.rename({'1.0': 'Control', '2.0': 'Tapping/Left', '3.0': 'Tapping/Right'}) unwanted = np.nonzero(raw_intensity.annotations.description == '15.0') raw_intensity.annotations.delete(unwanted) """ Explanation: Providing more meaningful annotation information First, we attribute more meaningful names to the trigger codes which are stored as annotations. Second, we include information about the duration of each stimulus, which was 5 seconds for all conditions in this experiment. Third, we remove the trigger code 15, which signaled the start and end of the experiment and is not relevant to our analysis. End of explanation """ subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects') brain = mne.viz.Brain( 'fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5') brain.add_sensors( raw_intensity.info, trans='fsaverage', fnirs=['channels', 'pairs', 'sources', 'detectors']) brain.show_view(azimuth=20, elevation=60, distance=400) """ Explanation: Viewing location of sensors over brain surface Here we validate that the location of sources-detector pairs and channels are in the expected locations.
Source-detector pairs are shown as lines between the optodes, channels (the mid point of source-detector pairs) are optionally shown as orange dots. Source are optionally shown as red dots and detectors as black. End of explanation """ picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True) dists = mne.preprocessing.nirs.source_detector_distances( raw_intensity.info, picks=picks) raw_intensity.pick(picks[dists > 0.01]) raw_intensity.plot(n_channels=len(raw_intensity.ch_names), duration=500, show_scrollbars=False) """ Explanation: Selecting channels appropriate for detecting neural responses First we remove channels that are too close together (short channels) to detect a neural response (less than 1 cm distance between optodes). These short channels can be seen in the figure above. To achieve this we pick all the channels that are not considered to be short. End of explanation """ raw_od = mne.preprocessing.nirs.optical_density(raw_intensity) raw_od.plot(n_channels=len(raw_od.ch_names), duration=500, show_scrollbars=False) """ Explanation: Converting from raw intensity to optical density The raw intensity values are then converted to optical density. End of explanation """ sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od) fig, ax = plt.subplots() ax.hist(sci) ax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1]) """ Explanation: Evaluating the quality of the data At this stage we can quantify the quality of the coupling between the scalp and the optodes using the scalp coupling index. This method looks for the presence of a prominent synchronous signal in the frequency range of cardiac signals across both photodetected signals. In this example the data is clean and the coupling is good for all channels, so we will not mark any channels as bad based on the scalp coupling index. 
End of explanation """ raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5)) """ Explanation: In this example we will mark all channels with a SCI less than 0.5 as bad (this dataset is quite clean, so no channels are marked as bad). End of explanation """ raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1) raw_haemo.plot(n_channels=len(raw_haemo.ch_names), duration=500, show_scrollbars=False) """ Explanation: At this stage it is appropriate to inspect your data (for instructions on how to use the interactive data visualisation tool see tut-visualize-raw) to ensure that channels with poor scalp coupling have been removed. If your data contains lots of artifacts you may decide to apply artifact reduction techniques as described in ex-fnirs-artifacts. Converting from optical density to haemoglobin Next we convert the optical density data to haemoglobin concentration using the modified Beer-Lambert law. End of explanation """ fig = raw_haemo.plot_psd(average=True) fig.suptitle('Before filtering', weight='bold', size='x-large') fig.subplots_adjust(top=0.88) raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2, l_trans_bandwidth=0.02) fig = raw_haemo.plot_psd(average=True) fig.suptitle('After filtering', weight='bold', size='x-large') fig.subplots_adjust(top=0.88) """ Explanation: Removing heart rate from signal The haemodynamic response has frequency content predominantly below 0.5 Hz. An increase in activity around 1 Hz can be seen in the data that is due to the person's heart beat and is unwanted. So we use a low pass filter to remove this. A high pass filter is also included to remove slow drifts in the data. 
End of explanation """ events, event_dict = mne.events_from_annotations(raw_haemo) fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw_haemo.info['sfreq']) fig.subplots_adjust(right=0.7) # make room for the legend """ Explanation: Extract epochs Now that the signal has been converted to relative haemoglobin concentration, and the unwanted heart rate component has been removed, we can extract epochs related to each of the experimental conditions. First we extract the events of interest and visualise them to ensure they are correct. End of explanation """ reject_criteria = dict(hbo=80e-6) tmin, tmax = -5, 15 epochs = mne.Epochs(raw_haemo, events, event_id=event_dict, tmin=tmin, tmax=tmax, reject=reject_criteria, reject_by_annotation=True, proj=True, baseline=(None, 0), preload=True, detrend=None, verbose=True) epochs.plot_drop_log() """ Explanation: Next we define the range of our epochs, the rejection criteria, baseline correction, and extract the epochs. We visualise the log of which epochs were dropped. End of explanation """ epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30, ts_args=dict(ylim=dict(hbo=[-15, 15], hbr=[-15, 15]))) """ Explanation: View consistency of responses across trials Now we can view the haemodynamic response for our tapping condition. We visualise the response for both the oxy- and deoxyhaemoglobin, and observe the expected peak in HbO at around 6 seconds consistently across trials, and the consistent dip in HbR that is slightly delayed relative to the HbO peak. End of explanation """ epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30, ts_args=dict(ylim=dict(hbo=[-15, 15], hbr=[-15, 15]))) """ Explanation: We can also view the epoched data for the control condition and observe that it does not show the expected morphology. 
End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6)) clims = dict(hbo=[-20, 20], hbr=[-20, 20]) epochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims) epochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims) for column, condition in enumerate(['Control', 'Tapping']): for ax in axes[:, column]: ax.set_title('{}: {}'.format(condition, ax.get_title())) """ Explanation: View consistency of responses across channels Similarly we can view how consistent the response is across the optode pairs that we selected. All the channels in this data are located over the motor cortex, and all channels show a similar pattern in the data. End of explanation """ evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'), 'Tapping/HbR': epochs['Tapping'].average(picks='hbr'), 'Control/HbO': epochs['Control'].average(picks='hbo'), 'Control/HbR': epochs['Control'].average(picks='hbr')} # Rename channels until the encoding of frequency in ch_name is fixed for condition in evoked_dict: evoked_dict[condition].rename_channels(lambda x: x[:-4]) color_dict = dict(HbO='#AA3377', HbR='b') styles_dict = dict(Control=dict(linestyle='dashed')) mne.viz.plot_compare_evokeds(evoked_dict, combine="mean", ci=0.95, colors=color_dict, styles=styles_dict) """ Explanation: Plot standard fNIRS response image Next we generate the most common visualisation of fNIRS data: plotting both the HbO and HbR on the same figure to illustrate the relation between the two signals. End of explanation """ times = np.arange(-3.5, 13.2, 3.0) topomap_args = dict(extrapolate='local') epochs['Tapping'].average(picks='hbo').plot_joint( times=times, topomap_args=topomap_args) """ Explanation: View topographic representation of activity Next we view how the topographic activity changes throughout the response. 
End of explanation """ times = np.arange(4.0, 11.0, 1.0) epochs['Tapping/Left'].average(picks='hbo').plot_topomap( times=times, **topomap_args) epochs['Tapping/Right'].average(picks='hbo').plot_topomap( times=times, **topomap_args) """ Explanation: Compare tapping of left and right hands Finally we generate topo maps for the left and right conditions to view the location of activity. First we visualise the HbO activity. End of explanation """ epochs['Tapping/Left'].average(picks='hbr').plot_topomap( times=times, **topomap_args) epochs['Tapping/Right'].average(picks='hbr').plot_topomap( times=times, **topomap_args) """ Explanation: And we also view the HbR activity for the two conditions. End of explanation """ fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5), gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1])) vmin, vmax, ts = -8, 8, 9.0 evoked_left = epochs['Tapping/Left'].average() evoked_right = epochs['Tapping/Right'].average() evoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1], vmin=vmin, vmax=vmax, colorbar=False, **topomap_args) evoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1]) evoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:], vmin=vmin, vmax=vmax, colorbar=True, **topomap_args) evoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:], vmin=vmin, vmax=vmax, colorbar=True, **topomap_args) for column, condition in enumerate( ['Tapping Left', 'Tapping Right', 'Left-Right']): for row, chroma in enumerate(['HbO', 'HbR']): axes[row, column].set_title('{}: {}'.format(chroma, condition)) fig.tight_layout() """ Explanation: And we can 
plot the comparison at a single time point for two conditions. End of explanation """ fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4)) mne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b', axes=axes, legend=False) mne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r', axes=axes, legend=False) # Tidy the legend: leg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1] leg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0]) fig.legend(leg_lines, ['Left', 'Right'], loc='lower right') """ Explanation: Lastly, we can also look at the individual waveforms to see what is driving the topographic plot above. End of explanation """
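Per channel pair, the Beer-Lambert step used earlier amounts to a small linear solve: the optical-density changes measured at the two wavelengths are inverted through an extinction-coefficient matrix to obtain the HbO/HbR concentration changes. A conceptual sketch only — every number below (extinction coefficients, separation, OD changes) is a placeholder for illustration, not a real value; MNE supplies the actual coefficients internally.

```python
import numpy as np

# Rows: two measurement wavelengths; columns: (HbO, HbR) extinction
# coefficients. These are made-up placeholders, not real coefficients.
E = np.array([[1.49, 3.84],
              [2.53, 1.80]])
d = 0.03     # assumed source-detector separation (m)
ppf = 0.1    # partial pathlength factor (same ppf value as used above)
delta_od = np.array([0.01, 0.02])  # made-up optical-density changes

# Solve delta_od = (E * d * ppf) @ [dHbO, dHbR] for the concentrations
delta_c = np.linalg.solve(E * d * ppf, delta_od)
print(delta_c)  # [delta HbO, delta HbR]
```

The key point is that two wavelengths give two equations in two unknowns, which is why fNIRS systems always record each source-detector pair at a wavelength pair.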
slowvak/MachineLearningForMedicalImages
notebooks/Module 6.ipynb
mit
# This is used to display images within the browser %matplotlib inline import os import numpy as np import matplotlib.pyplot as plt import dicom as pydicom # library to load dicom images try: import cPickle as pickle except: import pickle from sklearn.preprocessing import StandardScaler import nibabel as nib """ Explanation: Application Example Step 1: Load basic python libraries End of explanation """ with open('RBF SVM.pkl', 'rb') as fid: classifier = pickle.load(fid) print (dir(classifier)) """ Explanation: Step 2: Load the classifier and the images # load a classifier that has been saved in pickle form with open('my_dumped_classifier.pkl', 'rb') as fid: gnb_loaded = cPickle.load(fid) End of explanation """ CurrentDir = os.getcwd() # Print current directory print (CurrentDir) # Get parent directory print(os.path.abspath(os.path.join(CurrentDir, os.pardir))) # Create the file paths. The images are contained in a subfolder called Data. PostName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'POST.nii.gz') ) PreName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'PRE.nii.gz') ) FLAIRName = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'FLAIR.nii.gz') ) GT = os.path.abspath(os.path.join(os.path.abspath(os.path.join(CurrentDir, os.pardir)), "Data", 'GroundTruth.nii.gz') ) # read Pre in--we assume that all images are same x,y dims Pre = nib.load(PreName) # Pre is a class containing the image data among other information Pre=Pre.get_data() xdim = np.shape(Pre)[0] ydim = np.shape(Pre)[1] zdim = np.shape(Pre)[2] # Printing the dimensions of an image print ('Dimensions') print (xdim,ydim,zdim) # make space in a numpy array for the images ArrayDicom = np.zeros((xdim, ydim, 2), dtype=Pre.dtype) # copy normalized Pre pixels into z=0 Pre=Pre[:,:,55] ArrayDicom[:, :, 0] = Pre/ np.mean(Pre[np.nonzero(Pre)]) # Post Post = nib.load(PostName) # Post is a class containing the image data among other information Post=Post.get_data() Post= Post[:,:,55] ArrayDicom[:, :, 1] = Post/ np.mean(Post[np.nonzero(Post)]) """ Explanation: Step 3: Load the unknown image and perform the segmentation End of explanation """ print ('Shape before reshape') print (np.shape(ArrayDicom)) ArrayDicom=ArrayDicom.reshape(-1,2) print ('Shape after reshape') print (np.shape(ArrayDicom)) """ Explanation: Step 4: Use the pretrained classifier to perform segmentation Reshape the data End of explanation """ # ArrayDicom = StandardScaler().fit_transform(ArrayDicom) Labels=classifier.predict(ArrayDicom) print (Labels) """ Explanation: Apply trained classifier End of explanation """ print (np.mean(Labels[np.nonzero(Labels)])) print (np.shape(Labels)) # reshape to image Labels=Labels.reshape(240,240) Post=Post.reshape(240,240) Pre=Pre.reshape(240,240) f, (ax1,ax2,ax3)=plt.subplots(1,3) ax1.imshow(np.rot90(Post[:, :],3), cmap=plt.cm.gray) ax1.axis('off') ax2.imshow(np.rot90(Pre[:, :],3), cmap=plt.cm.gray) ax2.axis('off') ax3.imshow(np.rot90(Labels[:, :],3), cmap=plt.cm.jet) ax3.axis('off') """ Explanation: Visualize results End of explanation """
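The reshape-predict-reshape pattern above generalises to any per-pixel classifier. A self-contained toy version with a stand-in SVM (the notebook instead unpickles a pre-trained RBF SVM and works on 240x240 slices):

```python
import numpy as np
from sklearn.svm import SVC

h, w = 4, 4
rng = np.random.RandomState(0)
features = rng.rand(h, w, 2)          # two "images" stacked as channels

# Stand-in two-class classifier trained on two toy points
clf = SVC(kernel='rbf').fit([[0.1, 0.1], [0.9, 0.9]], [0, 1])

flat = features.reshape(-1, 2)             # (H*W, 2) feature matrix
labels = clf.predict(flat).reshape(h, w)   # per-pixel labels, back in image space
print(labels.shape)
```

The only requirement is that the channel order in the flattened feature matrix matches the order the classifier was trained on.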
kadrlica/destools
notebook/intervals.ipynb
mit
%matplotlib inline
import numpy as np
import pylab as plt
import scipy.stats as stats
from scipy.stats import multivariate_normal as mvn

try:
    import emcee
    got_emcee = True
except ImportError:
    got_emcee = False

try:
    import corner
    got_corner = True
except ImportError:
    got_corner = False

plt.rcParams['axes.labelsize'] = 16
"""
Explanation: Likelihood Functions and Confidence Intervals
by Alex Drlica-Wagner
Introduction
This notebook attempts to pragmatically address several questions about deriving uncertainty intervals from a likelihood analysis.
End of explanation
"""
mean = 2.0; cov = 1.0
rv = mvn(mean, cov)
lnlfn = lambda x: rv.logpdf(x)

x = np.linspace(-2, 6, 5000)
lnlike = lnlfn(x)

plt.plot(x, lnlike, '-k');
plt.xlabel(r'$x$'); plt.ylabel('$\log \mathcal{L}$');
"""
Explanation: 1D Likelihood
As a simple and straightforward starting example, we begin with a 1D Gaussian likelihood function.
End of explanation
"""
# You can use any complicated optimizer that you want (i.e. scipy.optimize)
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
mle = x[np.argmax(lnlike)]
print "Maximum Likelihood Estimate: %.2f"%mle
print "Maximum Likelihood Value: %.2f"%maxlike
"""
Explanation: For this simple likelihood function, we could analytically compute the maximum likelihood estimate and confidence intervals. However, for more complicated likelihoods an analytic solution may not be possible. As an introduction to these cases it is informative to proceed numerically.
End of explanation
"""
def interval(x, lnlike, delta=1.0):
    maxlike = np.max(lnlike)
    ts = -2 * (lnlike - maxlike)
    lower = x[np.argmax(ts < delta)]
    upper = x[len(ts) - np.argmax((ts < delta)[::-1]) - 1]
    return lower, upper

intervals = [(68, 1.0), (90, 2.706), (95, 3.841)]

plt.plot(x, lnlike, '-k');
plt.xlabel(r'$x$'); plt.ylabel('$\log \mathcal{L}$');

kwargs = dict(ls='--', color='k')
plt.axhline(maxlike - intervals[0][1]/2., **kwargs)

print "Confidence Intervals:"
for cl, delta in intervals:
    lower, upper = interval(x, lnlike, delta)
    print " %i%% CL: x = %.2f [%+.2f,%+.2f]"%(cl, mle, lower-mle, upper-mle)
    plt.axvline(lower, **kwargs); plt.axvline(upper, **kwargs);
"""
Explanation: To find the 68% confidence intervals, we can calculate the delta-log-likelihood. The test statistic (TS) is defined as ${\rm TS} = -2\Delta \log \mathcal{L}$ and is $\chi^2$-distributed. Therefore, the confidence intervals on a single parameter can be read off of a $\chi^2$ table with 1 degree of freedom (dof).
| 2-sided Interval | p-value | $\chi^2_{1}$ | Gaussian $\sigma$ |
|------|------|------|------|
| 68% | 32% | 1.000 | 1.00 |
| 90% | 10% | 2.706 | 1.64 |
| 95% | 5% | 3.841 | 1.96 |
| 99% | 1% | 6.635 | 2.58 |
End of explanation
"""
for cl, d in intervals:
    sigma = stats.norm.isf((100.-cl)/2./100.)
    print " %i%% = %.2f sigma"%(cl, sigma)
"""
Explanation: These numbers might look familiar. They are the number of standard deviations that you need to go out in the standard normal distribution to contain the requested fraction of the distribution (i.e., 68%, 90%, 95%).
End of explanation
"""
mean = [2.0, 1.0]
cov = [[1, 1], [1, 2]]
rv = stats.multivariate_normal(mean, cov)
lnlfn = lambda x: rv.logpdf(x)
print "Mean:", rv.mean.tolist()
print "Covariance", rv.cov.tolist()

xx, yy = np.mgrid[-4:6:.01, -4:6:.01]
values = np.dstack((xx, yy))
lnlike = lnlfn(values)

fig2 = plt.figure(figsize=(8,6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:,:,0], values[:,:,1], lnlike, aspect='auto');
plt.colorbar(im, label='$\log \mathcal{L}$')
plt.xlabel('$x$'); plt.ylabel('$y$');
plt.show()

# You can use any complicated optimizer that you want (i.e. scipy.optimize)
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
maxidx = np.unravel_index(np.argmax(lnlike), lnlike.shape)
mle_x, mle_y = mle = values[maxidx]
print "Maximum Likelihood Estimate:", mle
print "Maximum Likelihood Value:", maxlike
"""
Explanation: 2D Likelihood
Now we extend the example above to a 2D likelihood function. We define the likelihood with the same multivariate_normal function, but now add a second dimension and a covariance between the two dimensions. These parameters are adjustable if you would like to play around with them.
End of explanation
"""
lnlike -= maxlike
x = xx[:,maxidx[1]]
delta = 2.706

# This is the loglike projected at y = mle[1]
plt.plot(x, lnlike[:,maxidx[1]], '-r');
lower, upper = max_lower, max_upper = interval(x, lnlike[:,maxidx[1]], delta)
plt.axvline(lower, ls='--', c='r'); plt.axvline(upper, ls='--', c='r')
y_max = yy[:,maxidx[1]]

# This is the profile likelihood where we maximize over the y-dimension
plt.plot(x, lnlike.max(axis=1), '-k')
lower, upper = profile_lower, profile_upper = interval(x, lnlike.max(axis=1), delta)
plt.axvline(lower, ls='--', c='k'); plt.axvline(upper, ls='--', c='k')

plt.xlabel('$x$'); plt.ylabel('$\log \mathcal{L}$')
y_profile = yy[lnlike.argmax(axis=0), lnlike.argmax(axis=1)]

print "Projected Likelihood (red):\t %.1f [%+.2f,%+.2f]"%(mle[0], max_lower-mle[0], max_upper-mle[0])
print "Profile Likelihood (black):\t %.1f [%+.2f,%+.2f]"%(mle[0], profile_lower-mle[0], profile_upper-mle[0])
"""
Explanation: The case now becomes a bit more complicated. If you want to set a confidence interval on a single parameter, you cannot simply project the likelihood onto the dimension of interest. Doing so would ignore the correlation between the two parameters.
End of explanation
"""
for cl, d in intervals:
    lower, upper = interval(x, lnlike[:,maxidx[1]], d)
    print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl, mle[0], lower-mle[0], upper-mle[0])
"""
Explanation: In the plot above we are showing two different 1D projections of the 2D likelihood function. The red curve shows the projected likelihood, scanning in values of $x$ and always assuming the value of $y$ that maximized the likelihood. On the other hand, the black curve shows the 1D likelihood derived by scanning in values of $x$ and at each value of $x$ maximizing the value of the likelihood with respect to the $y$-parameter. In other words, the red curve is ignoring the correlation between the two parameters while the black curve is accounting for it.
As you can see from the values printed above the plot, the intervals derived from the red curve underestimate the analytically derived values, while the intervals on the black curve properly reproduce the analytic estimate.
Just to verify the result quoted above, we derive intervals on $x$ at several different confidence levels. We start with the projected likelihood with $y$ fixed at $y_{\rm max}$.
End of explanation
"""
for cl, d in intervals:
    lower, upper = interval(x, lnlike.max(axis=1), d)
    print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl, mle[0], lower-mle[0], upper-mle[0])
"""
Explanation: Below are the confidence intervals in $x$ derived from the profile likelihood technique. As you can see, these values match the analytically derived values.
End of explanation
"""
fig2 = plt.figure(figsize=(8,6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:,:,0], values[:,:,1], lnlike, aspect='auto');
plt.colorbar(im, label='$\log \mathcal{L}$')
im = ax2.contour(values[:,:,0], values[:,:,1], lnlike, levels=[-delta/2], colors=['k'], aspect='auto', zorder=10, lw=2);
plt.axvline(mle[0], ls='--', c='k'); plt.axhline(mle[1], ls='--', c='k');
plt.axvline(max_lower, ls='--', c='r'); plt.axvline(max_upper, ls='--', c='r')
plt.axvline(profile_lower, ls='--', c='k'); plt.axvline(profile_upper, ls='--', c='k')
plt.plot(x, y_max, '-r'); plt.plot(x, y_profile, '-k')
plt.xlabel('$x$'); plt.ylabel('$y$');
plt.show()
"""
Explanation: By plotting the likelihood contours, it is easy to see why the profile likelihood technique performs correctly while naively slicing through the likelihood plane does not. The profile likelihood is essentially tracing the ridgeline of the 2D likelihood function, thus intersecting the contour of delta-log-likelihood at its most distant point. This can be seen from the black lines in the 2D likelihood plot below.
End of explanation
"""
# Remember, the posterior probability is the likelihood times the prior
lnprior = lambda x: 0

def lnprob(x):
    return lnlfn(x) + lnprior(x)

if got_emcee:
    ndim, nwalkers = len(mle), 100
    pos0 = [np.random.rand(ndim) for i in range(nwalkers)]
    sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, threads=2)
    # This takes a while...
    sampler.run_mcmc(pos0, 5000)

samples = sampler.chain[:, 100:, :].reshape((-1, ndim))
x_samples, y_samples = samples.T

for cl in [68, 90, 95]:
    x_lower, x_mle, x_upper = np.percentile(x_samples, q=[(100-cl)/2., 50, 100-(100-cl)/2.])
    print " %i%% CL:"%cl, "x = %.2f [%+.2f,%+.2f]"%(x_mle, x_lower-x_mle, x_upper-x_mle)
"""
Explanation: MCMC Posterior Sampling
One way to explore the posterior distribution is through MCMC sampling. This gives an alternative method for deriving confidence intervals. Now, rather than maximizing the likelihood as a function of the other parameter, we marginalize (integrate) over that parameter. This is more computationally intensive, but is more robust in the case of complex likelihood functions.
End of explanation
"""
if got_corner:
    fig = corner.corner(samples, labels=["$x$","$y$"], truths=mle, quantiles=[0.05, 0.5, 0.95], range=[[-4,6],[-4,6]])
"""
Explanation: These results aren't perfect since they are subject to random variations in the sampling, but they are pretty close. Plotting the distribution of samples, we see something very similar to the plots we generated for the likelihood alone (which is good since our prior was flat).
End of explanation
"""
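The chi-squared/sigma correspondence used throughout this notebook can be checked directly with scipy. This is a small sketch added for reference; note that the "68%" row of the interval table is really the rounded 68.27% (1-sigma) interval, so the exact 68% two-sided p-value gives a threshold slightly below 1.0.

```python
from scipy import stats

for cl in (90, 95, 99):
    p = (100. - cl) / 100.             # 2-sided p-value
    chi2_1 = stats.chi2.isf(p, df=1)   # TS threshold with 1 dof
    sigma = stats.norm.isf(p / 2.)     # equivalent Gaussian sigma
    print('%i%%: chi2_1 = %.3f, sigma = %.2f' % (cl, chi2_1, sigma))
```

Since ${\rm TS} > t$ is equivalent to $|Z| > \sqrt{t}$ for a chi-squared variable with 1 dof, `chi2_1` is exactly `sigma**2` at every confidence level.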
SeverTopan/AdjSim
tutorial/tutorial.ipynb
gpl-3.0
import adjsim
import numpy as np # AdjSim also heavily relies on numpy. Its usage is recommended.
from matplotlib import pyplot

# Magic function to display matplotlib plots inline in the Jupyter Notebook. Not crucial for AdjSim.
%matplotlib inline
"""
Explanation: AdjSim Tutorial
AdjSim is an agent-based modelling engine. It allows users to define simulation environments through which agents interact through ability casting and timestep iteration. It is tailored towards allowing agents to behave intelligently through the employment of Reinforcement Learning techniques such as Q-Learning. This tutorial will enumerate many of AdjSim's features through the construction of a simulation where a group of agents will play a game of tag. Let's dive in!
Simulation: The Stuff Container
AdjSim simulations take place inside the fittingly named Simulation object. Let's begin by importing our libraries.
End of explanation
"""
sim = adjsim.core.Simulation()

# Interactive simulation method.
sim.start() # Begin the simulation.
sim.step() # Take one step.
sim.step(10) # Take 10 steps.
sim.end() # End the simulation.

# Batch simulation method.
sim.simulate(100) # Start, Simulate for 100 timesteps, End. All in one go.
"""
Explanation: We will make a trivial empty simulation to display its usage.
End of explanation
"""
sim.agents.add(adjsim.core.Agent())
sim.agents.add(adjsim.core.Agent())

sim.simulate(100)
"""
Explanation: It's a little lonely in that simulation, let's add an agent.
End of explanation
"""
sim = adjsim.core.VisualSimulation()
sim.simulate(15)
"""
Explanation: Things are happening, but it's a little hard to see. Let's visualize our simulation space.
Simulations and agents have a hierarchical class structure. VisualSimulation inherits from Simulation, and can be used to display a 2D simulation environment.
End of explanation
"""
sim.agents.add(adjsim.core.VisualAgent(pos=np.array([1,1])))
sim.simulate(15)
"""
Explanation: Similarly, an appropriate VisualAgent must be present in the simulation space if it is to be seen. The agent inheritance hierarchy is as follows:
Agent: The base agent class. Contains the attributes that are minimally needed in order for an agent to be simulated.
SpatialAgent: Adds the presence of the pos attribute, allowing for a 2D position to be associated with an agent.
VisualAgent: Adds the attributes needed for visualization on top of those of SpatialAgent. This is the minimum class needed for an agent to appear inside a VisualSimulation.
End of explanation
"""
# Constants.
ARENA_BOUND = 100
MOVE_DIST = 20

# Actions must follow the following signature.
# The Simulation object will be passed in as the 'simulation' parameter.
# The agent calling the action will be passed as the 'source' parameter.
def move(simulation, source):
    movement = (np.random.rand(2) - 0.5) * MOVE_DIST
    source.pos = np.clip(source.pos + movement, -ARENA_BOUND, ARENA_BOUND)
    source.step_complete = True
"""
Explanation: Now we should see our agent (shown in the image below).
<img src="https://raw.githubusercontent.com/SeverTopan/AdjSim/master/tutorial/images/1.PNG" alt="Single Agent" style="width: 200px;"/>
Next, let's make it do things.
Agents: Making This Space Less Lonely
Agents are simulation artifacts that manipulate the simulation environment. In the adjsim model, within each timestep agents take turns acting out their environment manipulations. One iteration of an agent's environment manipulations is known as an agent's step. There are two major aspects to an agent's step:
A set of actions. An action defines one distinct user-defined set of computations that the agent manipulates its environment with. It is simply a Python function. The actions we will be defining in our simulation of the game of tag will be a move and a tag action.
A decision module.
Decision modules are AdjSim objects that an agent uses to choose which actions to perform, and in what order.
We'll start by making an agent that can move, but that can't tag. The move action will allow an agent to move in a random direction bounded by a square arena.
End of explanation
"""
class Mover(adjsim.core.VisualAgent):
    def __init__(self, x, y):
        super().__init__(pos=np.array([x, y]))

        # Set the decision module.
        self.decision = adjsim.decision.RandomSingleCastDecision()

        # Populate the agent's action list.
        self.actions["move"] = move
"""
Explanation: The final line, source.step_complete = True, lets the decision module know that no further actions can be completed in the agent's step after the current one.
Now let's create an agent that uses this action. The agent we will be creating will have a RandomSingleCastDecision. This decision module will randomly select an action to invoke during the agent's step. This should result in one cast of the above-defined move action.
End of explanation
"""
sim = adjsim.core.VisualSimulation()
sim.agents.add(Mover(0, 0))
sim.simulate(15)
"""
Explanation: Let's take it out for a spin.
End of explanation
"""
class MoverSimulation(adjsim.core.VisualSimulation):
    def __init__(self):
        super().__init__()

        for i in range(5):
            for j in range(5):
                self.agents.add(Mover(20*i, 20*j))
"""
Explanation: We observe an agent moving in random directions in each timestep.
<img src="https://raw.githubusercontent.com/SeverTopan/AdjSim/master/tutorial/images/2.gif" alt="Single Agent" style="width: 400px;"/>
Nice! Let's give it some friends.
End of explanation
"""
sim = MoverSimulation()
sim.simulate(15)
"""
Explanation: We should now see a group of 25 agents moving in a similar fashion to our first one.
<img src="https://raw.githubusercontent.com/SeverTopan/AdjSim/master/tutorial/images/3.gif" alt="Single Agent" style="width: 400px;"/>
The basics are now covered. Let's put together our simulation of a game of tag.
Basic AdjSim Usage: Putting It All Together
The following will describe our simulation of a game of tag.
End of explanation
"""
# Reiterate imports.
import adjsim
import numpy as np
import sys

# Constants.
ARENA_BOUND = 100
TAG_DIST_SQUARE = 100
MOVE_DIST = 20

def move(simulation, source):
    movement = (np.random.rand(2) - 0.5) * MOVE_DIST
    source.pos = np.clip(source.pos + movement, -ARENA_BOUND, ARENA_BOUND)
    source.step_complete = True

def tag(simulation, source):
    if not source.is_it:
        return

    if source.cooldown > 0:
        source.cooldown -= 1
        source.step_complete = True
        return

    # Find nearest neighbour.
    closest_distance = sys.float_info.max
    nearest_neighbour = None
    for agent in simulation.agents:
        if agent.id == source.id:
            continue

        distance = adjsim.utility.distance_square(agent, source)
        if distance < closest_distance:
            nearest_neighbour = agent
            closest_distance = distance

    if closest_distance > TAG_DIST_SQUARE:
        return

    assert nearest_neighbour

    # Perform Tag.
    nearest_neighbour.is_it = True
    nearest_neighbour.color = adjsim.color.RED_DARK # This will change the agent's visual color.
    nearest_neighbour.order = 1 # Order describes what order the agents will take their steps in the simulation loop.
    nearest_neighbour.cooldown = 5
    source.is_it = False
    source.order = 0 # Order describes what order the agents will take their steps in the simulation loop.
    source.color = adjsim.color.BLUE_DARK # This will change the agent's visual color.

class Tagger(adjsim.core.VisualAgent):
    def __init__(self, x, y, is_it):
        super().__init__()
        self.is_it = is_it
        self.cooldown = 5 if is_it else 0
        self.color = adjsim.color.RED_DARK if is_it else adjsim.color.BLUE_DARK
        self.pos = np.array([x, y])

        self.decision = adjsim.decision.RandomSingleCastDecision()

        self.actions["move"] = move
        self.actions["tag"] = tag

        if is_it:
            self.order = 1 # Order describes what order the agents will take their steps in the simulation loop.
class TaggerSimulation(adjsim.core.VisualSimulation):
    def __init__(self):
        super().__init__()

        for i in range(5):
            for j in range(5):
                self.agents.add(Tagger(20*i, 20*j, False))

        self.agents.add(Tagger(10, 10, True))

sim = TaggerSimulation()
sim.simulate(15)
"""
Explanation: And that's it! We should observe a game of tag being played where the 'it' agent is red. It will tag the agents around it when it has the chance.
<img src="https://raw.githubusercontent.com/SeverTopan/AdjSim/master/tutorial/images/4.gif" alt="Single Agent" style="width: 400px;"/>
Trackers: Introspection Into The Simulation
We can procedurally extract information from the simulation using a Tracker. Trackers are functors that log information at runtime.
Built-In Trackers
We'll start by taking a look at some built-in AdjSim Trackers. They can be found within the adjsim.analysis module. Let's keep track of the global agent count.
End of explanation
"""
sim = TaggerSimulation()
sim.trackers["agent_count"] = adjsim.analysis.AgentCountTracker()
sim.simulate(15)

sim.trackers["agent_count"].plot()
"""
Explanation: Not incredibly insightful, eh? (as expected, since our agent population is not changing in this particular simulation). Let's see if we can do better.
Custom Trackers
Let's make our own tracker that tracks when tags take place. A Tracker must implement a __call__(self, simulation) method, and store relevant data that it would like to store within its self.data attribute. It is also recommended but optional to implement a plot method.
End of explanation
"""
class TagCountTracker(adjsim.analysis.Tracker):
    def __init__(self):
        super().__init__()
        self.data = []
        self.tag_count = 0
        self.last_it_agent = None

    def __call__(self, simulation):
        for agent in simulation.agents:
            if agent.is_it:
                # Trackers will be called once before the first simulation loop, so we ignore the first
                # changing of agents since it resembles the agent that is starting in the 'it' state.
                if agent != self.last_it_agent:
                    self.last_it_agent = agent
                    if simulation.time > 0:
                        self.tag_count += 1

        # Append the data.
        self.data.append(self.tag_count)

    def plot(self):
        pyplot.style.use('ggplot')
        line, = pyplot.plot(self.data, label="Global Tag Count")
        line.set_antialiased(True)
        pyplot.xlabel('Timestep')
        pyplot.ylabel('Tag Count')
        pyplot.title('Global Tag Count Over Time')
        pyplot.legend()
        pyplot.show()
"""
Explanation: Now let's test it out.
End of explanation
"""
sim = TaggerSimulation()
sim.trackers["tag_count"] = TagCountTracker()
sim.simulate(50)

sim.trackers["tag_count"].plot()
"""
Explanation: That covers the basics. Essentially any data structure can be stored in a tracker and any user code can be run. It is meant to be an architectural handle for obtaining data about the simulation in between timesteps.
Decisions: Filling the Noggin'
The agents so far act randomly. Let's fix that.
Using Reinforcement Learning-Enabled Decision Models
In this part of the tutorial we will be exploring AdjSim's Functional Decision model. This is the architecture that is used by the decision modules that employ Reinforcement Learning within AdjSim.
The model asserts that there are two additional pieces of information that need to be given to an agent in order for it to be able to make intelligent decisions:
A Loss Function: This function essentially gives an agent a score as to how well it's doing. Functional decision modules attempt to minimize their associated loss function. For example, taggers try to minimize the amount of time they spend tagged, or bacteria try to maximize their calories.
A Perception Function: A function that processes the simulation from the perspective of an agent, and returns an object that is then used as input to the decision module for the purposes of invoking intelligent actions. For example, a given Tagger should be aware of where the Tagger that is 'it' is located relative to itself. The perception acts as a filter between the omniscient data present within a simulation and the data that an agent may use to make its decisions. An invocation of a Perception at a particular timestep is known as an Observation.
Overall, the processing workflow in an agent step is the following.
$$ \text{Simulation} \rightarrow \text{Perception Function} \rightarrow \text{(Observation)} \rightarrow \text{Decision Module} $$
Then the decision module is responsible for selecting and invoking the appropriate actions.
$$ \text{Decision Module} \rightarrow \text{Actions} \rightarrow \text{(Environment Manipulations)} $$
Finally, the decision module evaluates the efficacy of its actions through the Loss Function evaluation. It uses these results to train itself to make better future decisions.
$$ \text{Decision Module} \rightarrow \text{Loss Function} \rightarrow \text{(Efficacy Evaluation Score)} \rightarrow \text{Decision Module} $$
We're going to start by making the agents of our simulation choose the right action between moving and tagging. Note that the direction of movement is still random. Parameterized functions will be covered in the next section.
For our Loss Function, we will be using the is_it attribute to discourage being tagged.
End of explanation
"""
# The call signature must be the following, same as an action.
def tagger_loss(simulation, source):
    return 10 if source.is_it else 0
"""
Explanation: Our Perception Function will return two parameters:
Whether or not the agent is 'it'
The relative location of the nearest agent if the agent is 'it', otherwise the relative location of the 'it' agent. This will be done in polar coordinates.
Since we will be using Q-Learning for this simulation, each Observation will define a state of the agent. Since relative location involves continuous values (floats), we find ourselves with an infinite number of possible agent states. In order to make the algorithm tractable we will discretize the values that would otherwise be returned from the perception function.
This need to discretize states is an idiosyncrasy of the Q-Learning algorithm, and may not be needed when other decision modules are used.
End of explanation
"""
import math

def find_it_tagger(simulation, source):
    for agent in simulation.agents:
        if agent.is_it:
            return agent

    # Raise an error if not found, this should never happen.
    raise Exception("'it' agent not found.")

def find_closest_non_it_tagger(simulation, source):
    closest_distance = sys.float_info.max
    nearest_neighbour = None
    for agent in simulation.agents:
        if agent.id == source.id:
            continue

        distance = adjsim.utility.distance_square(agent, source)
        if distance < closest_distance:
            nearest_neighbour = agent
            closest_distance = distance

    return nearest_neighbour

def tagger_perception(simulation, source):
    agent = None

    # Find the appropriate agent and distance.
    if source.is_it:
        agent = find_closest_non_it_tagger(simulation, source)
    else:
        agent = find_it_tagger(simulation, source)

    # Obtain theta value.
    delta = agent.pos - source.pos
    theta = math.atan(delta[1]/delta[0]) if delta[0] != 0 else np.sign(delta[1])*math.pi/2
    distance = np.sum(delta**2)**0.5

    # Discretize the observation to reduce the number of possible states.
    rounded_theta = round(theta/(math.pi/20))*(math.pi/20)
    rounded_distance = round(distance/10)*10

    return (rounded_theta, rounded_distance, source.is_it)
"""
Explanation: Now, let's put together our simulation. It is important to note that we will be sharing our decision module object across all our Taggers in the same way that actions are shared. This will allow training to occur collectively, and allow all agents to learn from each other's enterprises.
End of explanation
"""
class SomewhatCleverTagger(adjsim.core.VisualAgent):
    def __init__(self, x, y, is_it, _decision):
        super().__init__()
        self.is_it = is_it
        self.cooldown = 5 if is_it else 0
        self.color = adjsim.color.RED_DARK if is_it else adjsim.color.BLUE_DARK
        self.pos = np.array([x, y])

        self.decision = _decision

        self.actions["move"] = move
        self.actions["tag"] = tag

        if is_it:
            self.order = 1

class SomewhatCleverTaggerSimulation(adjsim.core.VisualSimulation):
    def __init__(self):
        super().__init__()

        # Let's create the collective decision module. We will load and save our progress to the same file.
        io_file_name = "somewhat_clever_tagger.qlearning.pkl"
        self.tagger_decision = adjsim.decision.QLearningDecision(perception=tagger_perception, loss=tagger_loss,
                                                                 simulation=self, input_file_name=io_file_name,
                                                                 output_file_name=io_file_name)

        for i in range(5):
            for j in range(5):
                self.agents.add(SomewhatCleverTagger(20*i, 20*j, False, self.tagger_decision))

        self.agents.add(SomewhatCleverTagger(10, 10, True, self.tagger_decision))

sim = SomewhatCleverTaggerSimulation()
sim.simulate(15)
"""
Explanation: So now we are prepared to run the simulation a series of times, and after training the taggers will be able to choose which of move or tag they should perform.
We will go into more detail regarding the training/testing cycle in the next example, where the results will be more flashy.
Parameterizing Actions Using Decision-Mutable Values
We not only want our taggers to know which action to perform (between move and tag), but also what direction to move when they perform their move action. We accomplish this using Decision-Mutable Values. These are variables that a decision module will try to optimize for when performing its training cycle.
These values are never explicitly set by the user. The decision module will set its value during runtime, before calling each action. These values are effectively read-only for the consumer of the API.
Let's take a look. We will begin by re-writing our move function.
End of explanation
"""
def clever_move(simulation, source):
    # We need to convert polar movement coordinates to Cartesian.
    move_rho = source.move_rho.value
    move_theta = math.radians(source.move_theta.value)

    dx = math.cos(move_theta) * move_rho
    dy = math.sin(move_theta) * move_rho
    movement = np.array([dx, dy])

    source.pos = np.clip(source.pos + movement, -ARENA_BOUND, ARENA_BOUND)
    source.step_complete = True

class ProperlyCleverTagger(adjsim.core.VisualAgent):
    def __init__(self, x, y, is_it, _decision):
        super().__init__()
        self.is_it = is_it
        self.cooldown = 5 if is_it else 0
        self.color = adjsim.color.RED_DARK if is_it else adjsim.color.BLUE_DARK
        self.pos = np.array([x, y])

        self.decision = _decision

        # This is where the new magic lies. The decision module chooses these
        # values (move_theta is given in degrees) before each call to clever_move.
        self.move_rho = adjsim.decision.DecisionMutableFloat(0, MOVE_DIST)
        self.move_theta = adjsim.decision.DecisionMutableFloat(0, 360)

        self.actions["move"] = clever_move
        self.actions["tag"] = tag

        if is_it:
            self.order = 1
"""
Explanation: Now we have properly clever taggers. Let's train them. We will make a train simulation and a test simulation, and run the training for a number of epochs. Let's see how they do!
End of explanation
"""
class ProperlyCleverTaggerTrainSimulation(adjsim.core.Simulation): # Note that the simulation is not visual for training.
    def __init__(self):
        super().__init__()

        io_file_name = "properly_clever_tagger.qlearning.pkl"
        self.tagger_decision = adjsim.decision.QLearningDecision(perception=tagger_perception, loss=tagger_loss,
                                                                 simulation=self, input_file_name=io_file_name,
                                                                 output_file_name=io_file_name)

        for i in range(5):
            for j in range(5):
                self.agents.add(ProperlyCleverTagger(20*i, 20*j, False, self.tagger_decision))

        self.agents.add(ProperlyCleverTagger(10, 10, True, self.tagger_decision))

class ProperlyCleverTaggerTestSimulation(adjsim.core.VisualSimulation): # The test simulation is visual so that we can watch the trained agents.
    def __init__(self):
        super().__init__()

        io_file_name = "properly_clever_tagger.qlearning.pkl"
        self.tagger_decision = adjsim.decision.QLearningDecision(perception=tagger_perception, loss=tagger_loss,
                                                                 simulation=self, input_file_name=io_file_name,
                                                                 output_file_name=io_file_name,
                                                                 nonconformity_probability=0)

        for i in range(5):
            for j in range(5):
                self.agents.add(ProperlyCleverTagger(20*i, 20*j, False, self.tagger_decision))

        self.agents.add(ProperlyCleverTagger(10, 10, True, self.tagger_decision))

# Training cycle.
EPOCHS = 10
for i in range(EPOCHS):
    print("Epoch:", i)
    sim = ProperlyCleverTaggerTrainSimulation()
    sim.simulate(700)
"""
Explanation: After completing our training cycles, we expect to have intelligently acting agents! Let's take a look with our testing simulation. We will also plot a graph of agent loss over time (for anecdotal introspection into the simulation). It will show how each individual agent perceives how well it's doing (the higher the value, the worse the performance). We will observe graph spikes when an agent is tagged.
End of explanation
"""
# Testing Cycle.
sim = ProperlyCleverTaggerTestSimulation()
sim.simulate(500)
"""
Explanation: Our agents now make intelligent decisions.
<img src="https://raw.githubusercontent.com/SeverTopan/AdjSim/master/tutorial/images/5.gif" alt="Single Agent" style="width: 400px;"/>
A short commentary about the observed behaviour: Notice how the "non-it" agents don't unconditionally flee from the "it" one; instead, several of the closer agents seem to taunt it back and forth. Since the "it" agent only perceives the closest non-it agent, the "non-it" agents have actually learned to exploit the vulnerability of the "it" agent's perception limitations, forcing it to switch back and forth between its targets. Pretty cool!
Indices: Finding Friends Efficiently
Indices provide data structures that allow for lower complexity look-up of agents. There is currently one index defined in Canonical AdjSim. It is the GridIndex, and it allows us to perform neighbour searches upon grid-based simulations with ease. We won't go through a full example here; if that is desired, please see the Conway's Game Of Life example. Here is a brief overview.
GridIndex: Being Discrete
Before starting the simulation an index needs to be initialized. The parameter passed in is the size of one side of the grid (the grid is always square). The grid is infinitely spanning, as it is implemented using hash tables, so no global bounds need be provided. Multiple agents can inhabit one cell.
End of explanation
"""
grid_sim = adjsim.core.Simulation()
grid_sim.indices.grid.initialize(5) # We now have a 5x5 infinitely spanning grid index.

# Add some agents. Their positions should lie in grid cells.
# Otherwise the positions are rounded to the nearest cell.
grid_sim.agents.add(adjsim.core.SpatialAgent(pos=np.array([5, 5])))
grid_sim.agents.add(adjsim.core.SpatialAgent(pos=np.array([5, 5])))
grid_sim.agents.add(adjsim.core.SpatialAgent(pos=np.array([0, 0])))
"""
Explanation: Now let's take a look at the lookup functions that the grid provides:
get_inhabitants O(1) lookup of agents inhabiting a single cell, or a list of cell coordinates.
End of explanation
"""
grid_sim.indices.grid.get_inhabitants(np.array([5, 5]))
"""
Explanation: get_neighbour_coordinates returns the coordinates of the 8 cells neighbouring any particular cell.
End of explanation
"""
grid_sim.indices.grid.get_neighbour_coordinates(np.array([-5, 0]))
"""
Explanation: get_neighbours O(1) lookup of agents inhabiting the 8 cells surrounding a target coordinate.
End of explanation
"""
grid_sim.indices.grid.get_neighbours(np.array([5, 0]))
"""
Explanation: More indices to come, stay posted :).
Callbacks: Systematic User Code Invocation
Callbacks allow user code to be called at specific points during the simulation. Callbacks provide the following interface:
register: Register a function with a given callback.
unregister: Unregister a function from a callback.
is_registered: Returns whether or not a given function is registered.
There are currently 7 callbacks in an AdjSim simulation. The first are agent callbacks, and pass in a single parameter to the callback function that is registered. This parameter is an Agent.
* agent_added: Fires when an Agent is added to the agent set.
* agent_removed: Fires when an Agent is removed from the agent set.
* agent_moved: Fires when a SpatialAgent's pos attribute is set.
The next set of callbacks also pass a single parameter to their respective functions, this time a Simulation object.
* simulation_step_started: Fires when a Simulation step is started.
* simulation_step_complete: Fires when a Simulation step is ended.
* simulation_started: Fires when the Simulation starts.
* simulation_complete: Fires when the Simulation ends.
We'll demonstrate by creating a function that counts the number of agent movements.
End of explanation
"""
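The hash-table design behind the GridIndex described above can be sketched independently of AdjSim. The `ToyGridIndex` below is purely illustrative (its class name and call signatures are assumptions, not AdjSim's actual API), but it shows why an infinitely spanning grid needs no global bounds: cells are created lazily as dictionary keys, and a lookup touches only a constant number of cells.

```python
# Illustrative sketch of a hash-based grid index (not AdjSim's internals).
# Cells are dictionary keys, so the grid spans infinitely without
# pre-allocating storage.

class ToyGridIndex:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}  # (col, row) -> list of agents in that cell

    def _cell_of(self, pos):
        # Round a position down to the key of its containing cell.
        return (int(pos[0] // self.cell_size), int(pos[1] // self.cell_size))

    def add(self, agent, pos):
        self.cells.setdefault(self._cell_of(pos), []).append(agent)

    def get_inhabitants(self, pos):
        return self.cells.get(self._cell_of(pos), [])

    def get_neighbour_coordinates(self, pos):
        c = self._cell_of(pos)
        return [(c[0] + dx, c[1] + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    def get_neighbours(self, pos):
        # Gather agents from the 8 cells surrounding the target's cell.
        found = []
        for key in self.get_neighbour_coordinates(pos):
            found.extend(self.cells.get(key, []))
        return found

idx = ToyGridIndex(5)
idx.add("a", (5, 5))
idx.add("b", (5, 5))
idx.add("c", (0, 0))
print(idx.get_inhabitants((5, 5)))  # ['a', 'b']
print(idx.get_neighbours((5, 5)))   # ['c']
```

Because every operation is a handful of dictionary accesses, cost is independent of how far the grid extends, which is the property the tutorial's real GridIndex exploits.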
softEcon/course
lectures/basics/python_overview/lecture.ipynb
mit
# This is an inline comment:

# Python3
print('hello world')

# Python2
print 'hello world'
"""
Explanation: This was our first Python command.
Basic Python Explorations
There are some minor differences between Python2 and Python3. Let us consider an example:
End of explanation
"""
1 * 1.0

a = 3
type(a)

b = 3 > 5
print(b), type(b)
"""
Explanation: Basic Types
We now look at the different types of objects that Python offers: floats, integers, booleans, lists, dictionaries, etc.
End of explanation
"""
L = ['red', 'blue', 'green', 'black', 'white']
print(L)

L[1], L[3:], L[3:15]
"""
Explanation: Let us now turn to containers: lists, dictionaries.
End of explanation
"""
L[1] = 'yellow'
print(L)
"""
Explanation: Lists are mutable objects, i.e. they can be changed.
End of explanation
"""
T = ('red', 'black')
T[1] = 'yellow'
"""
Explanation: What is an example of an immutable object?
End of explanation
"""
print(L)
G = L
print(G, L)

# But now:
L[1] = 'blue'
print(G, L)

# Let us now create an independent copy:
G = L[:]
print('Independent: ', G, L)

L[1] = 'yellow'
print('Independent: ', G, L)
"""
Explanation: Let us now turn to the distinction between independent copies and references to objects.
End of explanation
"""
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1K5apRH').read())
"""
Explanation: Miscellaneous Python Resources
Formatting
End of explanation
"""
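Returning to the copies-versus-references point above: the slice copy `G = L[:]` is only a shallow copy. For nested lists, the inner lists are still shared, and the standard-library `copy.deepcopy` is needed for full independence. A quick illustration:

```python
import copy

nested = [['red'], ['blue']]
shallow = nested[:]            # new outer list, but shared inner lists
deep = copy.deepcopy(nested)   # fully independent copy

nested[0][0] = 'yellow'
print(shallow[0][0])  # 'yellow' -- the inner list is shared
print(deep[0][0])     # 'red'    -- unaffected
```

The same distinction applies to any container of mutable objects, such as dictionaries holding lists.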
google/rba
Regularized Regression.ipynb
apache-2.0
########################################################################### # # Copyright 2021 Google Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This solution, including any related sample code or data, is made available # on an “as is,” “as available,” and “with all faults” basis, solely for # illustrative purposes, and without warranty or representation of any kind. # This solution is experimental, unsupported and provided solely for your # convenience. Your use of it is subject to your agreements with Google, as # applicable, and may constitute a beta feature as defined under those # agreements. To the extent that you make any data available to Google in # connection with your use of the solution, you represent and warrant that you # have all necessary and appropriate rights, consents and permissions to permit # Google to use and process that data. By using any portion of this solution, # you acknowledge, assume and accept all risks, known and unknown, associated # with its usage, including with respect to your deployment of any portion of # this solution in your systems, or usage in connection with your business, # if at all. 
########################################################################### """ Explanation: Regularized Regression End of explanation """ ################################################################################ ######################### CHANGE BQ PROJECT NAME BELOW ######################### ################################################################################ project_name = '' #add proj name and dataset # Google credentials authentication libraries from google.colab import auth auth.authenticate_user() # data processing libraries import numpy as np from numpy.core.numeric import NaN import pandas as pd import pandas_gbq import datetime # modeling and metrics from scipy.optimize import least_squares from statsmodels.tools.tools import add_constant import statsmodels.api as sm from sklearn.model_selection import train_test_split, cross_val_score, LeaveOneOut, KFold, LeavePOut from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error from statsmodels.stats.stattools import durbin_watson # Visualization import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns # Calculating Relative Importance !pip install relativeImp from relativeImp import relativeImp # BigQuery Magics ''' BigQuery magics are used to run BigQuery SQL queries in a python environment. 
These queries can also be run in the BigQuery UI ''' from google.cloud import bigquery from google.cloud.bigquery import magics magics.context.project = project_name #update your project name client = bigquery.Client(project=magics.context.project) %load_ext google.cloud.bigquery bigquery.USE_LEGACY_SQL = False """ Explanation: 0) Dependencies End of explanation """ ################################################################################ ######################### CHANGE BQ PROJECT NAME BELOW ######################### ################################################################################ %%bigquery df SELECT * FROM `.RBA_demo.cleaned_data`; #update with project name. df.head() """ Explanation: 1) Import dataset End of explanation """ KPI_COL = "y1" y = df[KPI_COL] X = df[df.columns[df.columns != KPI_COL]].values """ Explanation: 1.1) Define KPI column and feature set End of explanation """ reg = Ridge().fit(X,y) #reg = Lasso().fit(X,y) #reg = ElasticNet().fit(X,y) #reg = LinearRegression().fit(X,y) """ Explanation: 2) Build RBA Model Create a linear model to measure the impact of digital media (x variables) on conversions (y variable). Different regularization techniques, such as Ridge or Lasso, can be implemented to adjust for highly correlated features. 
End of explanation
"""
reg.intercept_

coefficients = reg.coef_.tolist()
coefficients
"""
Explanation: 2.1) Print the model coefficient results
End of explanation
"""
# R-squared
reg.score(X,y)

# Generate predictions to calculate MAE, MSE, RMSE
Y_prediction = reg.predict(X)

mean_absolute_error(y,Y_prediction)

mean_squared_error(y,Y_prediction)

rmse = np.sqrt(mean_squared_error(y,Y_prediction))
rmse
"""
Explanation: 2.2) Print the model evaluation metrics
End of explanation
"""
yName = 'y1'
xNames = df[df.columns[df.columns != KPI_COL]].columns.to_list()

df_results = relativeImp(df, outcomeName = yName, driverNames = xNames)
df_results
"""
Explanation: 3) Calculate contribution of each digital media channel
Use the relativeImp package to conduct key driver analysis and generate relative importance values by feature in the model. The relativeImp function produces a raw relative importance and a normalized relative importance value.
- Raw relative importance sums to the r-squared of the linear model.
- Normalized relative importance is scaled to sum to 1.
End of explanation
"""
residuals = Y_prediction - y
"""
Explanation: 4) Validate Linear Regression Model Assumptions
4.1) Generate model residuals
End of explanation
"""
'''
Visually inspect linearity between target variable (y1) and predictions
'''
plt.plot(Y_prediction,y,'o',alpha=0.5)
plt.show()
"""
Explanation: 4.2) Linearity
End of explanation
"""
'''
Visually inspect the residuals to confirm normality
'''
fig = sm.qqplot(residuals)
sns.kdeplot(residuals, label = '', shade = True)
plt.xlabel('Model Residuals');
plt.ylabel('Density');
plt.title('Distribution of Residuals');
"""
Explanation: 4.3) Normality of Errors
End of explanation
"""
'''
Visually inspect residuals to confirm constant variance
'''
plt.plot(residuals,'o',alpha=0.5)
plt.show()
"""
Explanation: 4.4) Absence of Multicollinearity
Tested and checked during the data processing stage.
4.5) Homoscedasticity
End of explanation
"""
'''
The Durbin Watson test is a statistical
test for detecting autocorrelation of the model residuals ''' dw = durbin_watson(residuals) print('Durbin-Watson',dw) if dw < 1.5: print('Positive autocorrelation', '\n') elif dw > 2.5: print('Negative autocorrelation', '\n') else: print('Little to no autocorrelation', '\n') """ Explanation: 4.6) Residual Autocorrelation Check End of explanation """
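For intuition, the Durbin–Watson statistic used above can also be computed by hand from the textbook formula DW = Σ(e_t − e_{t−1})² / Σ e_t², which is what makes its [0, 4] range and the "near 2 means little autocorrelation" rule concrete. A plain-Python sketch (illustrative, not the statsmodels implementation):

```python
def durbin_watson_manual(resid):
    # DW = (sum of squared successive differences) / (sum of squared residuals)
    diffs = [(b - a) ** 2 for a, b in zip(resid, resid[1:])]
    return sum(diffs) / sum(e ** 2 for e in resid)

# Identical residuals: strong positive autocorrelation, DW = 0.
print(durbin_watson_manual([1, 1, 1, 1]))    # 0.0
# Perfectly alternating residuals: strong negative autocorrelation, DW approaches 4.
print(durbin_watson_manual([1, -1, 1, -1]))  # 3.0
```

For uncorrelated residuals the successive differences have roughly twice the variance of the residuals themselves, which is why the statistic hovers around 2.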
tsaqib/bike-sharing-time-series-nn-numpy
weather-forecasting-auto-reg/weather-forecasting-auto-reg.ipynb
mit
#!/usr/bin/python
# -*- coding: utf-8 -*-

import numpy as np
import pandas as pd
import matplotlib.pyplot as mp
from statsmodels.tsa.arima_model import ARMA, ARIMA
from statsmodels.tsa.stattools import adfuller, arma_order_select_ic
import warnings
from IPython.display import HTML

# At the time of writing, statsmodels didn't catch up with the latest pandas,
# so there are a few unwanted warnings we don't want to see here.
# Follow this instruction to ignore warnings: https://stackoverflow.com/questions/9031783/hide-all-warnings-in-ipython

np.set_printoptions(threshold=np.inf)

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

FILE_NAME = 'GlobalLandTemperaturesByCountry.csv'
"""
Explanation: Weather Forecasting using an Auto-regressive Model
In this project, an auto-regressive model (a stochastic process) is applied to 248 years of Canadian weather data to capture trend and seasonality and to forecast temperatures for future dates.
End of explanation
"""
df = pd.read_csv(FILE_NAME, sep=',', skipinitialspace=True, encoding='utf-8')
df = df[df['Country'] == 'Canada']
"""
Explanation: Load and prepare the data
This project works with a dataset saved in a .csv file. Let us load it and perform the required pre-processing.
End of explanation
"""
print(df.shape)
df = df[df.AverageTemperature.notnull()]
print(df.shape)
"""
Explanation: Are we dealing with missing values?
End of explanation
"""
df.index = pd.to_datetime(df.dt)
df = df.drop(['dt', 'Country', 'AverageTemperatureUncertainty'], axis=1)
df = df.sort_index()
df.AverageTemperature.fillna(method='pad', inplace=True)

mp.plot(df.AverageTemperature)
mp.show()
"""
Explanation: Clearly, there are 437 rows without a value. This can be handled in many ways. For simplicity's sake, padding is used, which essentially means continuing with the last non-NaN value.
End of explanation
"""
df = df.loc['1900-01-01':]
mp.plot(df.AverageTemperature)
mp.show()
"""
Explanation: Now the questions we need to ask are:
- Do we need more than 248 years of data? The past 113 years of data should be more than enough.
- Do the flat NaN values across many initial years contribute positively to our dataset? No, because we miss both trend and seasonality.
So, we settle on data points from 1900 onwards.
End of explanation
"""
df_5yrs = df.loc['2008-05-01':]
mp.plot(df_5yrs.AverageTemperature)
mp.show()
"""
Explanation: Seasonality
If we plot the last 5 years instead of all the years as plotted above, it is clearly seen that the data points are seasonal. Every year there is a pattern from the beginning of the year to the end, due to the recurring seasons throughout the year.
End of explanation
"""
df_5yrs.AverageTemperature.plot.line(style='b', legend=True, label='Avg. Temperature (AT)')
ax = df_5yrs.AverageTemperature.rolling(window=12).mean().plot.line(style='r', legend=True, label='Mean AT')
mp.show()
"""
Explanation: Rolling Mean/Moving Average
Running a rolling mean (a.k.a. moving average) through the data is almost obligatory to get a more comprehensive insight into it. The rolling-mean window is kept at 12 to signify that we are interested in 12 monthly data points per year.
End of explanation
"""
def test_stationarity(df):
    print('Results of Dickey-Fuller Test:')
    dftest = adfuller(df)
    indices = ['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used']
    output = pd.Series(dftest[0:4], index=indices)
    for key, value in dftest[4].items():
        output['Critical Value (%s)' % key] = value
    print(output)

test_stationarity(df.AverageTemperature)
"""
Explanation: Stationarity
If a dataset has its key statistical properties, such as the mean, variance and autocorrelation, unchanged over time, it is said to be stationary.
There is a test called the Augmented Dickey–Fuller test, which helps detect stationarity in a time series.
End of explanation
"""
print(arma_order_select_ic(df.AverageTemperature, ic=['aic', 'bic'], trend='nc',
                           max_ar=4, max_ma=4, fit_kw={'method': 'css-mle'}))
"""
Explanation: One way to interpret the results is that the Test Statistic here is lower than the Critical Value (1%), so it can be concluded with 99% confidence that the time series is stationary.
What to do if the time series is non-stationary? One way is to apply an Autoregressive Integrated Moving Average (ARIMA) model, whose differencing step transforms the series into a stationary one.
Autoregressive–moving-average (ARMA) Model
A more detailed discussion on what the parameters are and how they are chosen can be found here.
End of explanation
"""
# Fit the model
ts = pd.Series(df.AverageTemperature, index=df.index)
model = ARMA(ts, order=(3, 3))
results = model.fit(trend='nc', method='css-mle')
print(results.summary2())

# Plot the model
fig, ax = mp.subplots(figsize=(10, 8))
fig = results.plot_predict('01/01/2003', '12/01/2023', ax=ax)
ax.legend(loc='lower left')
mp.title('Weather Time Series prediction')
mp.show()

predictions = results.predict('01/01/2003', '12/01/2023')
# You can manipulate/print the predictions after this
"""
Explanation: Fitting the Model and Forecasting the Next 10 Years of Weather
End of explanation
"""
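The "AR" part of the ARMA model fitted above can be illustrated in isolation: an AR(1) model regresses each value on its predecessor. The toy functions below are a hypothetical sketch (pure Python, no intercept or MA terms), not what statsmodels does internally:

```python
def fit_ar1(series):
    # Least-squares estimate of phi in x_t = phi * x_{t-1} + noise:
    # phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2).
    num = sum(curr * prev for prev, curr in zip(series, series[1:]))
    den = sum(prev * prev for prev in series[:-1])
    return num / den

def forecast_ar1(phi, last_value, steps):
    # Iterate the recurrence forward to forecast future values.
    preds, x = [], last_value
    for _ in range(steps):
        x = phi * x
        preds.append(x)
    return preds

# A series that exactly halves each step is AR(1) with phi = 0.5.
series = [16.0, 8.0, 4.0, 2.0, 1.0]
phi = fit_ar1(series)
print(phi)                        # 0.5
print(forecast_ar1(phi, 1.0, 3))  # [0.5, 0.25, 0.125]
```

The full ARMA(3, 3) model above does the same thing with three lags plus a moving-average component over past errors, but the forecasting loop follows the same recurrence idea.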
probml/pyprobml
deprecated/linreg_bayes_svi_hmc_pyro.ipynb
mit
#!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro !pip3 install pyro-ppl import os from functools import partial import torch import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import pyro import pyro.distributions as dist from pyro.nn import PyroSample from pyro.infer.autoguide import AutoDiagonalNormal from pyro.infer import Predictive from pyro.infer import SVI, Trace_ELBO from pyro.infer import MCMC, NUTS pyro.set_rng_seed(1) from torch import nn from pyro.nn import PyroModule """ Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/linreg_bayes_svi_hmc_pyro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Bayesian linear regression in Pyro We compare stochastic variational inference with HMC for Bayesian linear regression. We use the example from sec 8.1 of Statistical Rethinking ed 2. The code is modified from https://pyro.ai/examples/bayesian_regression.html and https://pyro.ai/examples/bayesian_regression_ii.html. For a NumPyro version (that uses Laplace approximation instead of SVI/ HMC), see https://fehiepsi.github.io/rethinking-numpyro/08-conditional-manatees.html. 
End of explanation
"""
DATA_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/rugged_data.csv"
data = pd.read_csv(DATA_URL, encoding="ISO-8859-1")
df = data[["cont_africa", "rugged", "rgdppc_2000"]]
df = df[np.isfinite(df.rgdppc_2000)]
df["rgdppc_2000"] = np.log(df["rgdppc_2000"])

fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True)
african_nations = df[df["cont_africa"] == 1]
non_african_nations = df[df["cont_africa"] == 0]
sns.scatterplot(non_african_nations["rugged"], non_african_nations["rgdppc_2000"], ax=ax[0])
ax[0].set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)", title="Non African Nations")
sns.scatterplot(african_nations["rugged"], african_nations["rgdppc_2000"], ax=ax[1])
ax[1].set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)", title="African Nations");

# Dataset: add a feature to capture the interaction between "cont_africa" and "rugged".
# The coefficients are: beta_a, beta_r, beta_ar.
df["cont_africa_x_rugged"] = df["cont_africa"] * df["rugged"]
data = torch.tensor(df[["cont_africa", "rugged", "cont_africa_x_rugged", "rgdppc_2000"]].values, dtype=torch.float)
x_data, y_data = data[:, :-1], data[:, -1]
"""
Explanation: Data
The dataset has 3 variables: $A$ (whether a country is in Africa or not), $R$ (its terrain ruggedness), and $G$ (the log GDP per capita in 2000). We want to predict $G$ from $A$, $R$, and $A \times R$. The response variable is very skewed, so we log transform it.
End of explanation """ pyro.set_rng_seed(1) # linear_reg_model = PyroModule[nn.Linear](3, 1) linear_reg_model = nn.Linear(3, 1) print(type(linear_reg_model)) # Define loss and optimize loss_fn = torch.nn.MSELoss(reduction="sum") optim = torch.optim.Adam(linear_reg_model.parameters(), lr=0.05) num_iterations = 1500 def train(): # run the model forward on the data y_pred = linear_reg_model(x_data).squeeze(-1) # calculate the mse loss loss = loss_fn(y_pred, y_data) # initialize gradients to zero optim.zero_grad() # backpropagate loss.backward() # take a gradient step optim.step() return loss for j in range(num_iterations): loss = train() if (j + 1) % 200 == 0: print("[iteration %04d] loss: %.4f" % (j + 1, loss.item())) # Inspect learned parameters print("Learned parameters:") for name, param in linear_reg_model.named_parameters(): print(name, param.data.numpy()) mle_weights = linear_reg_model.weight.data.numpy().squeeze() print(mle_weights) mle_bias = linear_reg_model.bias.data.numpy().squeeze() print(mle_bias) mle_params = [mle_weights, mle_bias] print(mle_params) fit = df.copy() fit["mean"] = linear_reg_model(x_data).detach().cpu().numpy() fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True) african_nations = fit[fit["cont_africa"] == 1] non_african_nations = fit[fit["cont_africa"] == 0] fig.suptitle("Regression Fit", fontsize=16) ax[0].plot(non_african_nations["rugged"], non_african_nations["rgdppc_2000"], "o") ax[0].plot(non_african_nations["rugged"], non_african_nations["mean"], linewidth=2) ax[0].set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)", title="Non African Nations") ax[1].plot(african_nations["rugged"], african_nations["rgdppc_2000"], "o") ax[1].plot(african_nations["rugged"], african_nations["mean"], linewidth=2) ax[1].set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)", title="African Nations"); """ Explanation: Ordinary least squares We define the linear model as a simple neural network with no hidden layers. 
We fit it by using maximum likelihood, optimized by (full batch) gradient descent, as is standard for DNNs. End of explanation """ class BayesianRegression(PyroModule): def __init__(self, in_features, out_features): super().__init__() self.linear = PyroModule[nn.Linear](in_features, out_features) self.linear.weight = PyroSample(dist.Normal(0.0, 1.0).expand([out_features, in_features]).to_event(2)) self.linear.bias = PyroSample(dist.Normal(0.0, 10.0).expand([out_features]).to_event(1)) def forward(self, x, y=None): sigma = pyro.sample("sigma", dist.Uniform(0.0, 10.0)) mean = self.linear(x).squeeze(-1) mu = pyro.deterministic("mu", mean) # save this variable so we can access it later with pyro.plate("data", x.shape[0]): obs = pyro.sample("obs", dist.Normal(mean, sigma), obs=y) return mean """ Explanation: Bayesian model To make a Bayesian version of the linear neural network, we need to use a Pyro module instead of a torch.nn.module. This lets us replace torch tensors containg the parameters with random variables, defined by PyroSample commands. We also specify the likelihood function by using a plate over the multiple observations. 
End of explanation """ def summary_np_scalars(samples): site_stats = {} for site_name, values in samples.items(): marginal_site = pd.DataFrame(values) describe = marginal_site.describe(percentiles=[0.05, 0.25, 0.5, 0.75, 0.95]).transpose() site_stats[site_name] = describe[["mean", "std", "5%", "25%", "50%", "75%", "95%"]] return site_stats def summary_torch(samples): site_stats = {} for k, v in samples.items(): site_stats[k] = { "mean": torch.mean(v, 0), "std": torch.std(v, 0), "5%": v.kthvalue(int(len(v) * 0.05), dim=0)[0], "95%": v.kthvalue(int(len(v) * 0.95), dim=0)[0], } return site_stats def plot_param_post_helper(samples, label, axs): ax = axs[0] sns.distplot(samples["linear.bias"], ax=ax, label=label) ax.set_title("bias") for i in range(0, 3): ax = axs[i + 1] sns.distplot(samples["linear.weight"][:, 0, i], ax=ax, label=label) ax.set_title(f"weight {i}") def plot_param_post(samples_list, label_list): fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(12, 10)) axs = axs.reshape(-1) fig.suptitle("Marginal Posterior density - Regression Coefficients", fontsize=16) n_methods = len(samples_list) for i in range(n_methods): plot_param_post_helper(samples_list[i], label_list[i], axs) ax = axs[-1] handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, loc="upper right") def plot_param_post2d_helper(samples, label, axs, shade=True): ba = samples["linear.weight"][:, 0, 0] # africa indicator br = samples["linear.weight"][:, 0, 1] # ruggedness bar = samples["linear.weight"][:, 0, 2] # africa*ruggedness sns.kdeplot(ba, br, ax=axs[0], shade=shade, label=label) axs[0].set(xlabel="bA", ylabel="bR", xlim=(-2.5, -1.2), ylim=(-0.5, 0.1)) sns.kdeplot(br, bar, ax=axs[1], shade=shade, label=label) axs[1].set(xlabel="bR", ylabel="bAR", xlim=(-0.45, 0.05), ylim=(-0.15, 0.8)) def plot_param_post2d(samples_list, label_list): fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 6)) axs = axs.reshape(-1) fig.suptitle("Cross-section of the Posterior Distribution", 
fontsize=16) n_methods = len(samples_list) shades = [False, True] # first method is contour, second is shaded for i in range(n_methods): plot_param_post2d_helper(samples_list[i], label_list[i], axs, shades[i]) ax = axs[-1] handles, labels = ax.get_legend_handles_labels() fig.legend(handles, labels, loc="upper right"); # fig.legend() """ Explanation: Utilities Summarize posterior End of explanation """ def plot_pred_helper(predictions, africa, ax): nations = predictions[predictions["cont_africa"] == africa] nations = nations.sort_values(by=["rugged"]) ax.plot(nations["rugged"], nations["mu_mean"], color="k") ax.plot(nations["rugged"], nations["true_gdp"], "o") # uncertainty about mean ax.fill_between(nations["rugged"], nations["mu_perc_5"], nations["mu_perc_95"], alpha=0.2, color="k") # uncertainty about observations ax.fill_between(nations["rugged"], nations["y_perc_5"], nations["y_perc_95"], alpha=0.15, color="k") ax.set(xlabel="Terrain Ruggedness Index", ylabel="log GDP (2000)") return ax def make_post_pred_df(samples_pred): pred_summary = summary_torch(samples_pred) mu = pred_summary["_RETURN"] # mu = pred_summary["mu"] y = pred_summary["obs"] predictions = pd.DataFrame( { "cont_africa": x_data[:, 0], "rugged": x_data[:, 1], "mu_mean": mu["mean"], "mu_perc_5": mu["5%"], "mu_perc_95": mu["95%"], "y_mean": y["mean"], "y_perc_5": y["5%"], "y_perc_95": y["95%"], "true_gdp": y_data, } ) return predictions def plot_pred(samples_pred): predictions = make_post_pred_df(samples_pred) fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(12, 6), sharey=True) plot_pred_helper(predictions, 0, axs[0]) axs[0].set_title("Non-African nations") plot_pred_helper(predictions, 1, axs[1]) axs[1].set_title("African nations") """ Explanation: Plot posterior predictions End of explanation """ pyro.set_rng_seed(1) model = BayesianRegression(3, 1) nuts_kernel = NUTS(model) mcmc = MCMC(nuts_kernel, num_samples=1000, warmup_steps=200) mcmc.run(x_data, y_data) print(mcmc.get_samples().keys()) 
print(mcmc.get_samples()["linear.weight"].shape) print(mcmc.get_samples()["linear.bias"].shape) hmc_samples_torch = mcmc.get_samples() summary_torch(hmc_samples_torch) """ Explanation: HMC inference End of explanation """ hmc_samples_params = {k: v.detach().cpu().numpy() for k, v in hmc_samples_torch.items()} plot_param_post([hmc_samples_params], ["HMC"]) hmc_samples_params["linear.weight"].shape plot_param_post2d([hmc_samples_params], ["HMC"]) """ Explanation: Parameter posterior End of explanation """ predictive = Predictive(model, mcmc.get_samples()) hmc_samples_pred = predictive(x_data) print(hmc_samples_pred.keys()) print(hmc_samples_pred["obs"].shape) print(hmc_samples_pred["mu"].shape) # predictive = Predictive(model, mcmc.get_samples(), return_sites=("obs", "_RETURN")) predictive = Predictive(model, mcmc.get_samples(), return_sites=("obs", "mu", "_RETURN")) hmc_samples_pred = predictive(x_data) print(hmc_samples_pred.keys()) print(hmc_samples_pred["obs"].shape) print(hmc_samples_pred["mu"].shape) print(hmc_samples_pred["_RETURN"].shape) plot_pred(hmc_samples_pred) plt.savefig("linreg_africa_post_pred_hmc.pdf", dpi=300) """ Explanation: Predictive posterior End of explanation """ pyro.set_rng_seed(1) model = BayesianRegression(3, 1) guide = AutoDiagonalNormal(model) adam = pyro.optim.Adam({"lr": 0.03}) svi = SVI(model, guide, adam, loss=Trace_ELBO()) pyro.clear_param_store() num_iterations = 1000 for j in range(num_iterations): # calculate the loss and take a gradient step loss = svi.step(x_data, y_data) if j % 100 == 0: print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data))) """ Explanation: Diagonal Gaussian variational posterior Fit End of explanation """ post = guide.get_posterior() nsamples = 800 samples = post.sample(sample_shape=(nsamples,)) print(samples.shape) # [800,5] print(torch.mean(samples, dim=0)) # transform(sigma), weights 0:2, bias weights = np.reshape(samples[:, 1:4].detach().cpu().numpy(), (-1, 1, 3)) bias = samples[:, 
4].detach().cpu().numpy() diag_samples_params = {"linear.weight": weights, "linear.bias": bias} print(diag_samples_params["linear.weight"].shape) plot_param_post( [ hmc_samples_params, diag_samples_params, ], ["HMC", "Diag"], ) plt.savefig("linreg_africa_post_marginals_hmc_diag.pdf", dpi=300) plot_param_post2d([hmc_samples_params, diag_samples_params], ["HMC", "Diag"]) plt.savefig("linreg_africa_post_2d_hmc_diag.pdf", dpi=300) """ Explanation: Parameter posterior End of explanation """ predictive = Predictive(model, guide=guide, num_samples=800, return_sites=("obs", "_RETURN")) diag_samples_pred = predictive(x_data) print(diag_samples_pred.keys()) print(diag_samples_pred["_RETURN"].shape) plot_pred(diag_samples_pred) """ Explanation: Posterior predictive We extract posterior predictive distribution for obs, and the return value of the model (which is the mean prediction). End of explanation """ print(pyro.get_param_store().keys()) guide.requires_grad_(False) for name, value in pyro.get_param_store().items(): print(name, pyro.param(name)) # derive posterior quantiles for model parameters from the variational parameters # note that we transform to the original parameter domain (eg sigma is in [0,10]) quant = guide.quantiles([0.5]) print(quant) post = guide.get_posterior() print(type(post)) print(post) print(post.support) print(post.mean) # transform(sigma), weights 0:2, bias # gaussian approx for sigma contains mean and variance of t=logit(sigma/10) # so to get posterior for sigma, we need to apply sigmoid(t) s = torch.tensor(-2.2916) print(torch.sigmoid(s) * 10) nsamples = 800 samples = post.sample(sample_shape=(nsamples,)) print(samples.shape) print(torch.mean(samples, dim=0)) # E[transform(sigma)]=-2.2926, s = torch.tensor(-2.2926) print(torch.sigmoid(s) * 10) transform = guide.get_transform() trans_samples = transform(samples) print(torch.mean(trans_samples, dim=0)) trans_samples = transform.inv(samples) print(torch.mean(trans_samples, dim=0)) sample = 
guide.sample_latent() print(sample) sample = list(guide._unpack_latent(sample)) print(sample) predictive = Predictive(model, guide=guide, num_samples=800) samples = predictive(x_data) print(samples.keys()) print(samples["linear.bias"].shape) print(samples["linear.weight"].shape) """ Explanation: Scratch Experiments with the log(sigma) term. End of explanation """ pyro.set_rng_seed(1) model = BayesianRegression(3, 1) from pyro.infer.autoguide import AutoMultivariateNormal, init_to_mean guide = AutoMultivariateNormal(model, init_loc_fn=init_to_mean) adam = pyro.optim.Adam({"lr": 0.03}) svi = SVI(model, guide, adam, loss=Trace_ELBO()) pyro.clear_param_store() num_iterations = 1000 for j in range(num_iterations): # calculate the loss and take a gradient step loss = svi.step(x_data, y_data) if j % 100 == 0: print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data))) """ Explanation: Full Gaussian variational posterior Fit End of explanation """ post = guide.get_posterior() nsamples = 800 samples = post.sample(sample_shape=(nsamples,)) print(samples.shape) # [800,5] print(torch.mean(samples, dim=0)) # transform(sigma), weights 0:2, bias weights = np.reshape(samples[:, 1:4].detach().cpu().numpy(), (-1, 1, 3)) bias = samples[:, 4].detach().cpu().numpy() full_samples_params = {"linear.weight": weights, "linear.bias": bias} print(full_samples_params["linear.weight"].shape) plot_param_post( [ hmc_samples_params, full_samples_params, ], ["HMC", "Full"], ) plt.savefig("linreg_africa_post_marginals_hmc_full.pdf", dpi=300) plot_param_post2d([hmc_samples_params, full_samples_params], ["HMC", "Full"]) plt.savefig("linreg_africa_post_2d_hmc_full.pdf", dpi=300) """ Explanation: Parameter posterior End of explanation """ predictive = Predictive(model, guide=guide, num_samples=800, return_sites=("obs", "_RETURN")) full_samples_pred = predictive(x_data) plot_pred(full_samples_pred) """ Explanation: Predictive posterior End of explanation """
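The scratch experiments with the log(sigma) term earlier rely on the bijection between the guide's unconstrained latent space and sigma's Uniform(0, 10) support: the stored Gaussian lives on the real line, and a scaled sigmoid maps it back to (0, 10). The helpers below are a sketch of that mapping only (the function names are ours, not Pyro's):

```python
import math

def to_sigma(t, upper=10.0):
    # Unconstrained real line -> (0, upper) via a scaled sigmoid.
    return upper / (1.0 + math.exp(-t))

def to_unconstrained(sigma, upper=10.0):
    # Inverse mapping: logit(sigma / upper).
    p = sigma / upper
    return math.log(p / (1.0 - p))

s = to_sigma(-2.2916)
print(round(s, 3))                    # 0.918, matching sigmoid(-2.2916) * 10
print(round(to_unconstrained(s), 4))  # recovers -2.2916
```

This is why the mean of the raw variational samples for the sigma slot (about -2.29 above) corresponds to a posterior sigma near 0.92 after transformation.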
musketeer191/job_analytics
getStats.ipynb
gpl-3.0
import my_util as my_util; from my_util import * HOME_DIR = 'd:/larc_projects/job_analytics/' DATA_DIR = HOME_DIR + 'data/clean/' title_df = pd.read_csv(DATA_DIR + 'new_titles_2posts_up.csv') """ Explanation: This script is dedicated to querying all needed statistics for the project. End of explanation """ def distTitle(agg_df, for_domain=False, for_func=False): fig = plt.figure() plt.hist(agg_df.n_title) mean_n_title = round(agg_df.n_title.mean(), 1) xl = '# job titles' + r'$(\mu = {})$'.format(mean_n_title) plt.xlabel(xl, fontsize=16); if for_domain: plt.ylabel('# domains', fontsize=16) if for_func: plt.ylabel('# functions', fontsize=16) plt.grid(True) return fig def aggBy(col, title_df): by_col = title_df.groupby(col) print('# {}: {}'.format(col, by_col.ngroups) ) agg_df = by_col.agg({'title': 'nunique','non_std_title': 'nunique','n_post': sum}) agg_df = agg_df.rename(columns={'title': 'n_title', 'std_title': 'n_std_title'}).reset_index() return agg_df """ Explanation: Helpers End of explanation """ title_stats = pd.read_csv(DATA_DIR + 'stats_job_titles.csv') titles = title_stats['title'] print('# titles: %d' %len(titles)) by_n_post = pd.read_csv(DATA_DIR + 'stats_job_post_dist.csv') by_n_post.head() """ Explanation: Distribution of job posts among job titles End of explanation """ by_n_post_after_std = title_stats.groupby('n_post').agg({'title': len}) by_n_post_after_std = by_n_post_after_std.rename(columns={'title': 'n_title_after_std'}).reset_index() quantile(by_n_post_after_std.n_post) fig = vizJobPostDist(by_n_post) plt.savefig(RES_DIR + 'fig/dist_job_post_by_title.pdf') plt.show(); plt.close() print('# job titles with >= 2 posts: {}'.format(title_df.shape[0]) ) """ Explanation: Job posts distribution among standard job titles End of explanation """ by_domain_agg = aggBy('domain', title_df) by_domain_agg.sort_values('n_title', ascending=False, inplace=True) by_domain_agg.to_csv(DATA_DIR + 'stats_domains.csv', index=False) 
by_domain_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/domain_desc.csv')
by_domain_agg.describe().round(1)

plt.close('all')
fig = distTitle(by_domain_agg, for_domain=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_domain.pdf')
plt.show(); plt.close()
"""
Explanation: Statistics for Domains
Note: The domains are domains of job titles with >= 2 posts.
End of explanation
"""
title_df.query('domain == "information technology"').sort_values('std_title')
"""
Explanation: Why is the number of job titles in IT reduced so much after standardization?
End of explanation
"""
by_func_agg = aggBy('pri_func', title_df)
by_func_agg.sort_values('n_title', ascending=False, inplace=True)
by_func_agg.to_csv(DATA_DIR + 'stats_pri_funcs.csv', index=False)

by_func_agg.describe().round(1).to_csv(DATA_DIR + 'tmp/func_desc.csv')
by_func_agg.describe().round(1)

by_func_agg.head(10)

fig = distTitle(by_func_agg, for_func=True)
fig.set_tight_layout(True)
plt.savefig(DATA_DIR + 'title_dist_by_func.pdf')
plt.show(); plt.close()

sum(title_df.domain == 'information technology')

title_df.std_title[title_df.pri_func == 'technician'].nunique()

job_df = pd.read_csv(DATA_DIR + 'jobs.csv')
print(job_df.shape)
job_df.head(1)

full_job_df = pd.read_csv(DATA_DIR + 'job_posts.csv')
print(full_job_df.shape)
full_job_df.head(1)

full_job_df = pd.merge(full_job_df, job_df[['job_id', 'doc']])
print(full_job_df.shape)

print('# job ids including dups: %d' % len(full_job_df.job_id))
print('# unique job ids: %d' % full_job_df.job_id.nunique())
full_job_df.head(1)

full_job_df.to_csv(DATA_DIR + 'job_posts.csv', index=False)
"""
Explanation: Statistics for functions
Note: Functions are limited to those of job titles with >= 2 posts.
End of explanation
"""
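The aggBy helper above is essentially a group-by with a distinct count and a sum. The same logic in plain Python (an illustrative re-implementation, not the notebook's pandas path) makes the n_title/n_post semantics explicit:

```python
from collections import defaultdict

def agg_by(records, key, title_field='title', post_field='n_post'):
    # Group records by `key`, counting distinct titles and summing posts,
    # mirroring groupby(col).agg({'title': 'nunique', 'n_post': sum}).
    titles = defaultdict(set)
    posts = defaultdict(int)
    for rec in records:
        titles[rec[key]].add(rec[title_field])
        posts[rec[key]] += rec[post_field]
    return {k: {'n_title': len(titles[k]), 'n_post': posts[k]}
            for k in titles}

rows = [
    {'domain': 'it', 'title': 'engineer', 'n_post': 3},
    {'domain': 'it', 'title': 'analyst', 'n_post': 2},
    {'domain': 'it', 'title': 'engineer', 'n_post': 1},
    {'domain': 'finance', 'title': 'analyst', 'n_post': 5},
]
print(agg_by(rows, 'domain'))
```

Using a set per group gives the 'nunique' behaviour: duplicate titles within a domain are counted once, while their post counts still accumulate.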
tpin3694/tpin3694.github.io
machine-learning/blurring_images.ipynb
mit
# Load libraries
import cv2
import numpy as np
from matplotlib import pyplot as plt
"""
Explanation: Title: Blurring Images
Slug: blurring_images
Summary: How to blur images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Load image as grayscale
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
"""
Explanation: Load Image As Greyscale
End of explanation
"""
# Blur image with a 5x5 averaging (box) kernel
image_blurry = cv2.blur(image, (5,5))
"""
Explanation: Blur Image
End of explanation
"""
# Show image
plt.imshow(image_blurry, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()
"""
Explanation: View Image
End of explanation
"""
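cv2.blur with a (5, 5) kernel is an averaging (box) filter: each output pixel is the mean of its kernel-sized neighbourhood. A minimal NumPy sketch of the same idea, restricted to positions where the window fits entirely inside the image (OpenCV additionally handles the border, by replication by default), could look like this:

```python
import numpy as np

def box_blur_valid(img, k=3):
    """Mean filter computed only where the k x k window fits fully inside img."""
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0
# every valid 3x3 window contains the single 9, so all outputs are 9/9 = 1.0
print(box_blur_valid(img, k=3))
```

The nested loops are for clarity only; a production version would use a vectorised convolution instead.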
nathania/pysal
pysal/network/Network Usage.ipynb
bsd-3-clause
ntw.pointpatterns
dir(ntw.pointpatterns['crimes'])
"""
Explanation: A network is composed of a single topological representation of a road network and $n$ point patterns which are snapped to the network.
End of explanation
"""
counts = ntw.count_per_edge(ntw.pointpatterns['crimes'].obs_to_edge, graph=False)
sum(counts.values()) / float(len(counts.keys()))
"""
Explanation: Attributes for every point pattern
dist_to_node dict keyed by pointid with the value being a dict in the form {node: distance to node, node: distance to node}
obs_to_edge dict keyed by edge with the value being a dict in the form {pointID:(x-coord, y-coord), pointID:(x-coord, y-coord), ... }
obs_to_node
points geojson like representation of the point pattern. Includes properties if read with attributes=True
snapped_coordinates dict keyed by pointid with the value being (x-coord, y-coord)
Counts per edge are important, but should not be precomputed since we have different representations of the network (digitized and graph currently). (Relatively) uniform segmentation still needs to be done.
End of explanation
"""
n200 = ntw.segment_edges(200.0)
counts = n200.count_per_edge(n200.pointpatterns['crimes'].obs_to_edge, graph=False)
sum(counts.values()) / float(len(counts.keys()))
"""
Explanation: Segmentation
End of explanation
"""
import networkx as nx
figsize(10,10)  # figsize is provided by the pylab/notebook environment
g = nx.Graph()
for e in ntw.edges:
    g.add_edge(*e)
for n, p in ntw.node_coords.items():  # .items() (Python 3); the original used the Python 2 .iteritems()
    g.node[n] = p
nx.draw(g, ntw.node_coords, node_size=300, alpha=0.5)

g = nx.Graph()
for e in n200.edges:
    g.add_edge(*e)
for n, p in n200.node_coords.items():
    g.node[n] = p
nx.draw(g, n200.node_coords, node_size=25, alpha=1.0)
"""
Explanation: Visualization of the shapefile derived, unsegmented network with nodes in a larger, semi-opaque form and the distance segmented network with small, fully opaque nodes.
End of explanation
"""
#Binary Adjacency
#ntw.contiguityweights(graph=False)
w = ntw.contiguityweights(graph=False)

#Build the y vector
#edges = ntw.w.neighbors.keys()
edges = list(w.neighbors.keys())
y = np.zeros(len(edges))
for i, e in enumerate(edges):
    if e in counts.keys():
        y[i] = counts[e]

#Moran's I
#res = ps.esda.moran.Moran(y, ntw.w, permutations=99)
res = ps.esda.moran.Moran(y, w, permutations=99)

print(dir(res))
"""
Explanation: Moran's I using the digitized network
End of explanation
"""
counts = ntw.count_per_edge(ntw.pointpatterns['crimes'].obs_to_edge, graph=True)

#Binary Adjacency
#ntw.contiguityweights(graph=True)
w = ntw.contiguityweights(graph=True)

#Build the y vector
#edges = ntw.w.neighbors.keys()
edges = list(w.neighbors.keys())
y = np.zeros(len(edges))
for i, e in enumerate(edges):
    if e in counts.keys():
        y[i] = counts[e]

#Moran's I
#res = ps.esda.moran.Moran(y, ntw.w, permutations=99)
res = ps.esda.moran.Moran(y, w, permutations=99)

print(dir(res))
"""
Explanation: Moran's I using the graph representation to generate the W
Note that we have to regenerate the counts per edge, since the graph will have fewer edges.
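For reference, Moran's I itself is a simple cross-product statistic. A minimal NumPy version of the formula — a sketch of the definition on a made-up toy graph, not PySAL's implementation, and without the permutation-based inference used above — is:

```python
import numpy as np

def morans_i(y, W):
    """Moran's I for values y and a binary spatial weights matrix W."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()                  # deviations from the mean
    s0 = W.sum()                      # sum of all weights
    num = z @ W @ z                   # sum_ij w_ij * z_i * z_j
    den = (z * z).sum()
    return (len(y) / s0) * (num / den)

# Four observations on a path graph 0-1-2-3 (hypothetical toy data).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1, 2, 3, 4], W))  # smoothly increasing values -> positive autocorrelation (1/3 here)
```

PySAL adds the permutation machinery on top of this statistic to obtain pseudo p-values.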
End of explanation """ #Binary Adjacency #n200.contiguityweights(graph=False) w = n200.contiguityweights(graph=False) #Compute the counts counts = n200.count_per_edge(n200.pointpatterns['crimes'].obs_to_edge, graph=False) #Build the y vector and convert from raw counts to intensities #edges = n200.w.neighbors.keys() edges = w.neighbors.keys() y = np.zeros(len(edges)) for i, e in enumerate(edges): if e in counts.keys(): length = n200.edge_lengths[e] y[i] = counts[e] / length #Moran's I #res = ps.esda.moran.Moran(y, n200.w, permutations=99) res = ps.esda.moran.Moran(y, w, permutations=99) print dir(res) """ Explanation: Moran's I using the segmented network and intensities instead of counts End of explanation """ import time t1 = time.time() n0 = ntw.allneighbordistances(ntw.pointpatterns['crimes']) print time.time()-t1 import time t1 = time.time() n1 = n200.allneighbordistances(n200.pointpatterns['crimes']) print time.time()-t1 """ Explanation: Timings for distance based methods, e.g. G-function End of explanation """ import time t1 = time.time() n0 = ntw.allneighbordistances(ntw.pointpatterns['crimes']) print time.time()-t1 import time t1 = time.time() n1 = n200.allneighbordistances(n200.pointpatterns['crimes']) print time.time()-t1 """ Explanation: Note that the first time these methods are called, the underlying node-to-node shortest path distance matrix has to be calculated. Subsequent calls will not require this, and will be much faster: End of explanation """ npts = ntw.pointpatterns['crimes'].npoints sim = ntw.simulate_observations(npts) sim """ Explanation: Simulate a point pattern on the network Need to supply a count of the number of points and a distirbution (default is uniform). Generally this will not be called by the user, since the simulation will be used for Monte Carlo permutation. 
End of explanation """ gres = ps.NetworkG(ntw, ntw.pointpatterns['crimes'], permutations = 99) figsize(5,5) plot(gres.xaxis, gres.observed, 'b-', linewidth=1.5, label='Observed') plot(gres.xaxis, gres.upperenvelope, 'r--', label='Upper') plot(gres.xaxis, gres.lowerenvelope, 'k--', label='Lower') legend(loc='best') """ Explanation: Create a nearest neighbor matrix using the crimes point pattern Right now, both the G and K functions generate a full distance matrix. This is because, I know that the full generation is correct and I believe that the truncated generated, e.g. nearest neighbor, has a big. G-function End of explanation """ kres = ps.NetworkK(ntw, ntw.pointpatterns['crimes'], permutations=99) figsize(5,5) plot(kres.xaxis, kres.observed, 'b-', linewidth=1.5, label='Observed') plot(kres.xaxis, kres.upperenvelope, 'r--', label='Upper') plot(kres.xaxis, kres.lowerenvelope, 'k--', label='Lower') legend(loc='best') """ Explanation: K-function End of explanation """
google/starthinker
colabs/dynamic_costs.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker
"""
Explanation: Dynamic Costs Reporting
Calculate DV360 cost at the dynamic creative combination level.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
  - Command: "python starthinker_ui/manage.py colab"
  - Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes; this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration

CONFIG = Configuration(
  project="",
  client={},
  service={},
  user="/content/user.json",
  verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service: Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
  'dcm_account':'',
  'auth_read':'user',  # Credentials used for reading data.
  'configuration_sheet_url':'',
  'auth_write':'service',  # Credentials used for writing data.
  'bigquery_dataset':'dynamic_costs',
}

print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter Dynamic Costs Reporting Recipe Parameters
Add a sheet URL. This is where you will enter advertiser and campaign level details.
Specify the CM network ID.
Click run now once, and a tab called Dynamic Costs will be added to the sheet with instructions.
Follow the instructions on the sheet; this will be your configuration.
StarThinker will create two or three (depending on the case) reports in CM named Dynamic Costs - ....
Wait for BigQuery->->->Dynamic_Costs_Analysis to be created or click Run Now.
Copy Dynamic Costs Sample Data.
Click Edit Connection, and Change to BigQuery->->->Dynamic_Costs_Analysis.
Copy Dynamic Costs Sample Report.
When prompted, choose the new data source you just created.
Edit the table to include or exclude columns as desired.
Or, give the dashboard connection instructions to the client.
Modify the values below for your use case (this can be done multiple times), then click play.
End of explanation """ from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'dynamic_costs':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}}, 'account':{'field':{'name':'dcm_account','kind':'string','order':0,'default':''}}, 'sheet':{ 'template':{ 'url':'https://docs.google.com/spreadsheets/d/19J-Hjln2wd1E0aeG3JDgKQN9TVGRLWxIEUQSmmQetJc/edit?usp=sharing', 'tab':'Dynamic Costs', 'range':'A1' }, 'url':{'field':{'name':'configuration_sheet_url','kind':'string','order':1,'default':''}}, 'tab':'Dynamic Costs', 'range':'A2:B' }, 'out':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'dataset':{'field':{'name':'bigquery_dataset','kind':'string','order':2,'default':'dynamic_costs'}} } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) """ Explanation: 4. Execute Dynamic Costs Reporting This does NOT need to be modified unless you are changing the recipe, click play. End of explanation """
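json_set_fields walks the TASKS structure and replaces each {'field': {...}} placeholder with the matching entry from FIELDS, falling back to the declared default. The mechanism can be sketched as a small recursive substitution — a simplification that returns a new structure instead of editing in place, unlike StarThinker's actual helper:

```python
def set_fields(node, values):
    """Recursively replace {'field': {...}} placeholders with concrete values."""
    if isinstance(node, dict):
        if set(node.keys()) == {'field'}:          # a placeholder node
            spec = node['field']
            return values.get(spec['name'], spec.get('default'))
        return {k: set_fields(v, values) for k, v in node.items()}
    if isinstance(node, list):
        return [set_fields(v, values) for v in node]
    return node                                    # plain value, keep as-is

# Hypothetical mini-recipe mirroring the placeholder shape used above.
recipe = {'task': {'account': {'field': {'name': 'dcm_account', 'default': ''}},
                   'dataset': {'field': {'name': 'bigquery_dataset', 'default': 'dynamic_costs'}}}}
print(set_fields(recipe, {'dcm_account': '12345'}))
```

This is why editing FIELDS and re-running the cell is enough to reconfigure the whole recipe.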
DamienIrving/ocean-analysis
development/Frolicher2015_validation.ipynb
mit
import re
import glob
import numpy
import iris
import iris.coord_categorisation
from iris.experimental.equalise_cubes import equalise_attributes

import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Heat budget validation
I'm doing an analysis of the CMIP5 single forcing experiments (historicalGHG and historicalAA) to explore the separate influence of anthropogenic aerosols (AAs) and greenhouse gases (GHGs) on historical ocean change. To make sure that my heat budget calculations are correct, I'm first trying to reproduce the heat uptake values in Table 2 of Frolicher et al (2015). In particular, the following is my attempt to reproduce the heat uptake south of 30S for the GISS-E2-R model.
In summary, the steps I take to calculate the cumulative oceanic heat uptake south of 30S between 1870 (1861-80) and 1995 (1986-2005) are as follows:
1. Load the monthly timescale hfds (surface downward heat flux) data for the region south of 30S
2. Convert the units from $W m^{-2}$ to $J m^{-2}$ by multiplying by the number of seconds in each timestep
3. Convert to an annual timescale by summing the 12 monthly values for each year
4. Convert the units from $J m^{-2}$ to $J$ by multiplying each value by the corresponding grid-cell area
5. Calculate the spatial sum (i.e. collapse the latitude and longitude dimensions)
6. Calculate the 20-year sum for the time periods of interest
7. Calculate the final result
My result differs from Frolicher et al (2015) by a factor of 3. I haven't regridded the data to a $1 \times 1$ grid like they did, but I doubt that explains such a discrepancy. I'm stumped as to what the issue could be.
End of explanation """ lat_constraint = iris.Constraint(latitude=lambda cell: cell <= -30) def read_hfds_data(file_list, lat_constraint): """Read in data for a given latitude constraint""" cube = iris.load(file_list, 'surface_downward_heat_flux_in_sea_water' & lat_constraint) equalise_attributes(cube) iris.util.unify_time_units(cube) cube = cube.concatenate_cube() return cube hfds_control_files = glob.glob('/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/piControl/mon/ocean/r1i1p1/hfds/latest/hfds_Omon_GISS-E2-R_piControl_r1i1p1_*.nc') hfds_control_cube = read_hfds_data(hfds_control_files, lat_constraint) hfds_control_cube hfds_historical_files = glob.glob('/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/historical/mon/ocean/r1i1p1/hfds/latest/hfds_Omon_GISS-E2-R_historical_r1i1p1_*.nc') hfds_historical_cube = read_hfds_data(hfds_historical_files, lat_constraint) hfds_historical_cube """ Explanation: Step 1: Read data The code below simply loads all data south of 30S. End of explanation """ def broadcast_array(array, axis_index, shape): """Broadcast an array to a target shape. Args: array (numpy.ndarray) axis_index (int or tuple): Postion in the target shape that the axis/axes of the array corresponds to e.g. if array corresponds to (depth, lat, lon) in (time, depth, lat, lon) then axis_index = [1, 3] e.g. if array corresponds to (lat) in (time, depth, lat, lon) then axis_index = 2 shape (tuple): shape to broadcast to For a one dimensional array, make start_axis_index = end_axis_index """ if type(axis_index) in [float, int]: start_axis_index = end_axis_index = axis_index else: assert len(axis_index) == 2 start_axis_index, end_axis_index = axis_index dim = start_axis_index - 1 while dim >= 0: array = array[numpy.newaxis, ...] 
array = numpy.repeat(array, shape[dim], axis=0) dim = dim - 1 dim = end_axis_index + 1 while dim < len(shape): array = array[..., numpy.newaxis] array = numpy.repeat(array, shape[dim], axis=-1) dim = dim + 1 return array def convert_to_joules(cube): """Convert units to Joules""" assert 'W' in str(cube.units) assert 'days' in str(cube.coord('time').units) time_span_days = cube.coord('time').bounds[:, 1] - cube.coord('time').bounds[:, 0] time_span_seconds = time_span_days * 60 * 60 * 24 cube.data = cube.data * broadcast_array(time_span_seconds, 0, cube.shape) cube.units = str(cube.units).replace('W', 'J') return cube hfds_control_cube = convert_to_joules(hfds_control_cube) hfds_control_cube hfds_historical_cube = convert_to_joules(hfds_historical_cube) hfds_historical_cube """ Explanation: Step 2: Convert from W m-2 to J m-2 End of explanation """ def annual_sum(cube): """Calculate the annual sum.""" iris.coord_categorisation.add_year(cube, 'time') cube = cube.aggregated_by(['year'], iris.analysis.SUM) cube.remove_coord('year') return cube hfds_control_cube = annual_sum(hfds_control_cube) hfds_control_cube hfds_historical_cube = annual_sum(hfds_historical_cube) hfds_historical_cube """ Explanation: Step 3: Calculate the annual sum End of explanation """ def multiply_by_area(cube, area_cube): """Multiply each cell of cube by its area.""" area_data = broadcast_array(area_cube.data, [1, 2], cube.shape) cube.data = cube.data * area_data units = str(cube.units) cube.units = units.replace('m-2', '') return cube area_file = '/g/data/ua6/DRSv2/CMIP5/GISS-E2-R/piControl/fx/ocean/r0i0p0/areacello/latest/areacello_fx_GISS-E2-R_piControl_r0i0p0.nc' area_cube = iris.load_cube(area_file, 'cell_area' & lat_constraint) area_cube hfds_control_cube = multiply_by_area(hfds_control_cube, area_cube) hfds_control_cube hfds_historical_cube = multiply_by_area(hfds_historical_cube, area_cube) hfds_historical_cube """ Explanation: Step 4: Convert from J m-2 to J (i.e. 
multiply by area) End of explanation """ hfds_control_cube = hfds_control_cube.collapsed(['latitude', 'longitude'], iris.analysis.SUM) hfds_control_cube hfds_historical_cube = hfds_historical_cube.collapsed(['latitude', 'longitude'], iris.analysis.SUM) hfds_historical_cube """ Explanation: Step 5: Calculate the spatial sum End of explanation """ def get_time_constraint(time_list): """Get the time constraint used for subsetting an iris cube.""" start_date, end_date = time_list date_pattern = '([0-9]{4})-([0-9]{1,2})-([0-9]{1,2})' assert re.search(date_pattern, start_date) assert re.search(date_pattern, end_date) start_year, start_month, start_day = start_date.split('-') end_year, end_month, end_day = end_date.split('-') time_constraint = iris.Constraint(time=lambda t: iris.time.PartialDateTime(year=int(start_year), month=int(start_month), day=int(start_day)) <= t.point <= iris.time.PartialDateTime(year=int(end_year), month=int(end_month), day=int(end_day))) return time_constraint def get_control_time_constraint(control_cube, hist_cube, time_bounds): """Define the time constraints for the control data.""" iris.coord_categorisation.add_year(control_cube, 'time') iris.coord_categorisation.add_year(hist_cube, 'time') branch_time = hist_cube.attributes['branch_time'] index = 0 for bounds in control_cube.coord('time').bounds: lower, upper = bounds if lower <= branch_time < upper: break else: index = index + 1 branch_year = control_cube.coord('year').points[index] hist_start_year = hist_cube.coord('year').points[0] start_gap = int(time_bounds[0].split('-')[0]) - hist_start_year end_gap = int(time_bounds[1].split('-')[0]) - hist_start_year control_start_year = branch_year + start_gap control_end_year = branch_year + end_gap control_start_date = str(control_start_year).zfill(4)+'-01-01' control_end_date = str(control_end_year).zfill(4)+'-01-01' time_constraint = get_time_constraint([control_start_date, control_end_date]) control_cube.remove_coord('year') 
hist_cube.remove_coord('year') return time_constraint def temporal_sum(cube, time_constraint): """Calculate temporal sum over a given time period.""" cube = cube.copy() temporal_subset = cube.extract(time_constraint) result = temporal_subset.collapsed('time', iris.analysis.SUM) return float(result.data) period_1870 = ['1861-01-01', '1880-12-31'] period_1995 = ['1986-01-01', '2005-12-31'] hist_1870_constraint = get_time_constraint(period_1870) hist_1995_constraint = get_time_constraint(period_1995) hist_1870 = temporal_sum(hfds_historical_cube, hist_1870_constraint) hist_1995 = temporal_sum(hfds_historical_cube, hist_1995_constraint) print('historial, 1870: ', hist_1870, 'J') print('historial, 1995: ', hist_1995, 'J') control_1870_constraint = get_control_time_constraint(hfds_control_cube, hfds_historical_cube, period_1870) control_1995_constraint = get_control_time_constraint(hfds_control_cube, hfds_historical_cube, period_1995) control_1870 = temporal_sum(hfds_control_cube, control_1870_constraint) control_1995 = temporal_sum(hfds_control_cube, control_1995_constraint) print('control, 1870: ', control_1870, 'J') print('control, 1995: ', control_1995, 'J') """ Explanation: Step 6: Calculate the 20-year sum for the time periods of interest End of explanation """ change = (hist_1995 - hist_1870) - (control_1995 - control_1870) print('Cumulative oceanic heat uptake south of 30S between 1870 (1861-80) and 1995 (1986-2005):', change) """ Explanation: Final result End of explanation """
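The unit conversions in steps 2–5 above reduce to simple arithmetic, which is worth sanity-checking on a single made-up grid cell before trusting the full pipeline:

```python
# One hypothetical grid cell: constant flux over one 30-day month.
flux = 10.0                    # W m-2, i.e. J s-1 m-2
seconds = 30 * 24 * 60 * 60    # seconds in a 30-day month (step 2)
area = 2.0                     # grid-cell area in m2 (step 4)

energy_per_m2 = flux * seconds  # J m-2
energy = energy_per_m2 * area   # J
print(energy)  # 51840000.0 J
```

Any factor-of-N discrepancy in the full calculation should, in principle, be traceable to one of these multiplications being applied with the wrong time span or area.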
LorenzoBi/courses
OODE/.ipynb_checkpoints/introduction_hints-checkpoint.ipynb
mit
from __future__ import print_function, division import numpy as np # numpy will be used a lot, thus it is convenient to address it with np instead of numpy """ Explanation: Installation The goal is to have working python with the following packages and bindings: numpy (commonly used data types and algorithms in numerical computation) scipy (commonly used algorithms in scientific computation) matplotlib ipython notebook or jupyter notebook Ubuntu/Debian Linux In the following we will explain how to get a native installation on your system. Beside that it is also possible to use the prepared VirtualBox image, follow the Windows/macOS section for this. Install the packages that are provided by your linux distribution, for Ubuntu up to 16.10: sudo apt install python-numpy python-scipy python-matplotlib python-pip cython ipython-notebook build-essential libblas-dev liblapack-dev while for Ubuntu 17.04: sudo apt install python-numpy python-scipy python-matplotlib python-pip cython jupyter-notebook build-essential libblas-dev liblapack-dev This gives you numpy, scipy, matplotlib and the ipython notebook respective jupyter notebook. macOS and Windows We recommend using Anaconda which bundles all required packages. Installation instructions are provided on the Anaconda website. Some python snippets We suggest to start an ipython/jupyter notebook and do some interactive experiments. Execute on the command line (Ubuntu <= 16.10, macOS): ipython notebook respective (Ubuntu 17.04) jupyter notebook and click on New Notebook. It is good practice to first include all python packages that are going to be used. We will likely always use numpy that provides some numerical datatypes and standard numerical algorithms as well as the print function from Python 3 replacing the print statement and the / function from Python 3 that automatically type converts integer to floats to avoid counterintuitive behaviour. 
After you have entered commands into a cell, press Shift + Enter to execute it. End of explanation """ a = 1 # new integer variable a that is assigned the value 1 b = 2 # new integer variable b that is assigned the value 2 c = 2.0 # new floating point variable c that is assigned the value 2.0 d = True # new boolean variable d that is assignes the value True print(b*c) # automatic type conversion to float print(a/b) # what you expect with automatic type conversion, beware: this is different in standard python 2! print(a//b) # integer division """ Explanation: Primitive Datatypes Similar to Matlab, python uses dynamical typing with implicit type conversion. Primitive datatypes in python are integers, floating point numbers, boolean and characters. Here are some examples using these primitive and the implicit type conversion: End of explanation """ a = [1, 2, 14.0, 3] # declaration of a list, note that members of the collection may have different types print(a[0], a[2], a[-1]) # indexing is zero based: this gives the first, third and last element of the list print(len(a)) # len gives the length of a list b = list(range(10)) # range is a function that constructs a generator giving a list with integers c = list(range(3,7)) d = [x*x for x in b if x % 2 == 0] # list comprehension is a method to construct new lists out of a given one, # here all even squares < 100 print(b, c, d) cd = c + d # concatenate lists c and d print(cd) print(c) c.append(5) # extend the list c with another element 5 print(c) e = 'hello world' # declares a string which is a list of characters print(e, e[3]) f = { # declare a dictionary with the keys 'algorithm', 'tolerance', 'suspicious iterates' 'algorithm': 'downhill', 'tolerance': 1e-6, 'suspicious iterates': [3,7,11] } print(f['algorithm'], f.keys()) # access mapped elements of dictionary by their key, the function keys gives a list of keys """ Explanation: Note that all these datatypes are immutable, this will be explained in the functions 
paragraph.
Composed datatypes: lists and dictionaries
Furthermore there are the composed datatypes list and dict (short for dictionary). A list is an ordered collection of arbitrary python objects, a dictionary a mapping between arbitrary python objects (the objects on the source side have to be hashable). In contrast to the previously encountered datatypes, list and dict are mutable, explained in the functions paragraph.
End of explanation
"""
a = [1, 2, 14.0, 3] # declaration of a list, note that members of the collection may have different types
print(a[0], a[2], a[-1]) # indexing is zero based: this gives the first, third and last element of the list
print(len(a)) # len gives the length of a list
b = list(range(10)) # range constructs an iterable of integers, list() turns it into a list
c = list(range(3,7))
d = [x*x for x in b if x % 2 == 0] # list comprehension is a method to construct new lists out of a given one,
                                   # here all even squares < 100
print(b, c, d)
cd = c + d # concatenate lists c and d
print(cd)
print(c)
c.append(5) # append another element 5 to the list c
print(c)
e = 'hello world' # declares a string, which is an immutable sequence of characters
print(e, e[3])
f = { # declare a dictionary with the keys 'algorithm', 'tolerance', 'suspicious iterates'
    'algorithm': 'downhill',
    'tolerance': 1e-6,
    'suspicious iterates': [3,7,11]
}
print(f['algorithm'], f.keys()) # access mapped elements of dictionary by their key, the function keys gives the keys
"""
Explanation: Iterators
Lists can be used as iterators, some examples:
End of explanation
"""
for elem in a:
    print(elem, type(elem))
"""
Explanation: Note that the second line that constitutes the inner block of the for loop has been indented by 4 characters. This is one syntax rule in python, which you will also encounter in if-else and while statements: Blocks are announced by a : and have to be indented. End of indentation indicates end of block. In Java/C++ you would use { to indicate the begin of a block and } to indicate its end.
End of explanation
"""
# less elegant, but same functionality as in previous example
for ii in range(len(a)):
    print(a[ii], type(a[ii]))

for key in f:
    print(key, ':', f[key])

# sum all cubes of integers from 0,...,99
cubesum = 0
for ii in range(100):
    cubesum += ii*ii*ii
print(cubesum)
"""
Explanation: Booleans
Comparison for equality is done with == in python, for numerical datatypes also <, <=, >, >= exist.
Membership in lists and dictionaries can be tested by in:
End of explanation
"""
Membership in lists and dictionaries can be tested by in: End of explanation """ print(cubesum % 100 == 0 and cubesum < 0) print(cubesum % 100 == 0 or cubesum < 0) print(not cubesum in ['apple', 'pear']) """ Explanation: Booleans can be composed with and and or and negated with not: End of explanation """ a = 2.0 x = 2.0 # Newton iteration to compute x = sqrt(a) with a residual <= 1e-8 while( abs(x*x-a) > 1e-8 ): x = .5*(x+a/x) print(x, x*x-a) """ Explanation: Loops Using a boolean expression it is also possible to write a while loop that continues as long as a condition is satisfied: End of explanation """ a = 2.0 x = 2.0 count = 0 # Newton iteration to compute x = sqrt(a) with a residual that cannot be achieved in double precision while True: if abs(x*x-a) <= 1e-30: # exit loop because precision is reached print("Very precise square root computed") break if count > 100: # exit loop because iteration limit is reached print("Iteration limit exceeded") break x = .5*(x+a/x) count += 1 print(count, x, x*x-a) # sum all primes from 10,...,99 (this is inefficient!) 
primesum = 0 for ii in range(10,100): # test if ii is prime prime = True for jj in range(2,ii): if ii % jj == 0: prime = False # continue with next iteration if ii is not a prime number if not prime: continue primesum += ii print(primesum) """ Explanation: Customary with loops are the break and continue statements that exit the loop execution if a certain condition is satisfied or jump to the next loop execution: End of explanation """ def newton_squareroot(a, x0): # declare a function newton_squareroot that takes two arguments a and x0 """ Compute square root of a by Newton iteration, with initial guess x0 """ x = 1.0 it = 0 itmax = 100 tol = 1e-12 while (abs(x*x-a) > tol and it < itmax): x = .5*(x+a/x) it += 1 return x # return the computed square root x print(newton_squareroot(2.0, 1.0), newton_squareroot(9.0, 1.0)) entropy = lambda x: -x*np.log(x) # declare a function entropy with one argument by a lambda expression print(entropy(0.2)) # declare a function density with three arguments by a lambda expression density = lambda x, m, s: np.exp(-(x-m)*(x-m)/(2.0*s*s))/(np.sqrt(2.0*np.pi)*s) print(density(0.3,0.1,1.2)) """ Explanation: Functions There are two different ways to declare functions. With the def statement and as lambda expression: End of explanation """ def newton_squareroot(a, x0 = 1.0, tol = 1e-12, itmax = 100): """ Compute square root of a by Newton iteration, with initial guess x0 """ x = x0 it = 0 while (abs(x*x-a) > tol and it < itmax): x = .5*(x+a/x) it += 1 return x, it # return the computed square root x and the number of iterations print(newton_squareroot(3.4)) """ Explanation: In functions declared with def it is also possible to have optional arguments. 
They have to appear after nonoptional arguments: End of explanation """ def double(x): x = x + x def appendzero(x): x.append(0) # for an immutable argument x = 1.0 double(x) # inside the function x = x+x, so x = 2.0, but as x is immutable the value does not change outside print(x) # for a mutable argument x = [1] appendzero(x) # inside the function x will be appended with 0, so x = [1, 0] print(x) """ Explanation: Now the difference between mutable and immutable can be explained: arguments passed to a function that are mutable can be changed during function execution, while immutable arguments can not: End of explanation """ a = np.array([3.0, 4.2, -1.0]) # converting an python array of floats to a numpy vector b = np.zeros([4]) # a numpy vector containing 4 zeros c = 3.0*np.ones([13]) # a numpy vector containing 13 times the entry 3.0 print(a, b, c) d = np.zeros_like(a) # a numpy vector containing as many zeros as a has entries e = np.ones_like(b) # a numpy vector containing as many ones as b has entries print(d, e) f = np.linspace(3.0, 7.0, 101) # a numpy vector with 101 elements, spaced equidistantly over [3.0, 7.0] print(f) """ Explanation: Numpy In scientific computing, it is required to efficiently handle vector and matrix data. This is done by the numpy package, that we have included as np. 
Different methods to create some vectors: End of explanation """ A = np.array([ # convert a python array of arrays of floats to a numpy matrix [3.0, 0.5], [3.7, 1.2], [5.0, -3.0] ]) print(A) B = np.diag(a) # create a diagonal matrix with diagonal given by the array a print(B) """ Explanation: Matrices can be constructed in a similar way: End of explanation """ # get dimensions print(a.shape) print(A.shape) # transposition print(A.transpose()) print(A.transpose().shape) # matrix vector product print(B.dot(a)) # access first row of matrix A: print(A[0,:]) # access second column of matrix A: print(A[:,1]) """ Explanation: some operations with matrices and vectors End of explanation """ print(f[:4]) # the first four elements of f, equivalent to f[0:4] print(f[12:18]) # elements f[12], ..., f[17] print(f[::20]) # every twentieth element of f print(f[1::20]) # every twentieth element of f, but starting at the second position print(f[[0,5,8]]) # the first, sixth, ninth element of f print(f[85:-4]) # all elements of f starting with the 86st and omitting the four last elements """ Explanation: The previous examples are a special case of so called slicing in numpy to obtain subvectors and submatrices, further examples: End of explanation """ vecA = A.reshape(-1) # vectorize A print(vecA) reshapedA = vecA.reshape([3,2]) print(reshapedA) # change shape of A to 3 x 2 instead 3 x 2, order of elements is preserved # compute trace of matrix print(B.trace()) # compute sum and norms of vector print(u'\u03a3\u2096a\u2096 =', a.sum()) print(u'\u2016a\u2016\u2082 =', np.linalg.norm(a)) print(u'\u2016a\u2016\u2081 =', np.linalg.norm(a, 1)) print(u'\u2016a\u2016\u221e =', np.linalg.norm(a, np.inf) ) # solve linear system (backslash in matlab) np.linalg.solve(B, np.ones([3])) # find x with B*x = [1,1,1] # compute inverse of A (you should always prefer solve or getting a decomposition) print(np.linalg.inv(B)) """ Explanation: It is sometimes necessary to "vectorize" matrices, forming the 
vector with all matrix rows appended. End of explanation """ ab = np.concatenate([a, b]) # vector obtained by stacking a and b print(ab) K = np.bmat([[np.eye(3), A], [A.transpose(), np.zeros([2,2])]]) # block matrix, eye(n): identity in R^n print(K) """ Explanation: It is often necessary to construct vectors and matrices from vectors and matrix of smaller dimensions by stacking or forming block matrices. This can be done with the concatenate and bmat in numpy. End of explanation """ print(np.sin(a)) print(np.array([np.sin(x) for x in a])) # same result, but slower """ Explanation: Many of the standard functions are available in numpy and vectorized: They can be applied to a numpy vector and return a vector with the function applied to the components. This is very efficiently implemented: End of explanation """ %timeit test = np.sin(np.linspace(0.0, 1.0, 1000)) %timeit test = np.array([np.sin(ii*1e-3) for ii in range(1000)]) """ Explanation: To verify that the first variant is faster, we use %timeit, that determines the average execution time of a python statement. A command starting with % is a IPython magic command that only works in ipython and ipython notebooks, but not in python scripts. 
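In plain python scripts, the closest standard-library equivalents to the %timeit magic are the timeit module or a pair of time.perf_counter() calls:

```python
import timeit
import time

# timeit runs the statement many times and returns the total elapsed seconds.
t = timeit.timeit('sum(x * x for x in range(1000))', number=100)

# Manual alternative with a monotonic clock.
start = time.perf_counter()
_ = sum(x * x for x in range(1000))
elapsed = time.perf_counter() - start
print(t, elapsed)
```

Unlike %timeit, timeit.timeit reports the total time for all repetitions, so divide by number to get a per-call figure.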
End of explanation """ def fib(n): if n in [0, 1]: return 1 return fib(n-1) + fib(n-2) import time zerotime = time.time() print('30th fibonacci number:', fib(30)) executiontime = time.time() - zerotime print('took {:.2f} s to compute using recursion'.format(executiontime)) """ Explanation: To get the execution time of a code block in python, you can use the time.time() of the time module that gives you the current system time: End of explanation """ newton_squareroot_vectorized = np.vectorize(newton_squareroot) print(newton_squareroot_vectorized(np.linspace(1.0, 10.0, 5))) %timeit test = newton_squareroot_vectorized(np.linspace(1.0, 10.0, 100)) %timeit test = [newton_squareroot_vectorized(x) for x in np.linspace(1.0, 10.0, 100)] """ Explanation: You can also create a custom vectorized function out of a function that is defined for scalar values: End of explanation """ import matplotlib.pyplot as plt %matplotlib notebook xs = np.linspace(-5.0, 10.0, 1000) ys = np.sin(xs) xs2 = np.linspace(-2.0, 8.0, 25) ys2 = np.cos(xs2) plt.figure() plt.title('A sample plot') plt.plot(xs, ys) plt.plot(xs2, ys2, 'o') plt.show() # Compute iterates in Newton iteration for x^2 = 2 iterates = [1.0] a = 2.0 while abs(iterates[-1]*iterates[-1] - a) > 1e-15: iterates.append(.5*(iterates[-1]+a/iterates[-1])) residuals = [abs(it*it-a) for it in iterates] plt.figure() plt.title(r'Convergence Newton Iteration $ x^2 = 2 $') plt.xlabel(r'iteration $k$') plt.ylabel(r'residual $\vert x_k^2 - 2 \vert$') plt.semilogy([0, len(iterates)-1], [1e-15, 1e-15], '--') plt.semilogy(range(len(iterates)), residuals, '-o') plt.show() """ Explanation: Plotting with Matplotlib First load the plotting module from matplotlib as plt and activate the notebook plotting extension for ipython notebook (not necessary in scripts). 
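In a plain script, where the notebook plotting extension is unavailable, one can instead select a non-interactive backend and write figures to disk rather than showing them; a minimal sketch (an addition, not part of the original notebook):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for scripts without a display
import matplotlib.pyplot as plt
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 200)
fig = plt.figure()
plt.plot(xs, np.sin(xs))
plt.title("sine")
fig.savefig("sine_example.png", dpi=150)  # write the figure to a file instead of plt.show()
plt.close(fig)
```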
End of explanation """ import scipy.integrate class IVPResult: pass def solve_ivp(f, ts, x0, p=None, integrator='dopri5', store_trajectory=False): """ Solve initial value problem d/dt x = f(t, x, p); x(t0) = x0. Evaluate Solution at time points specified in ts, with ts[0] = t0. """ ivp = scipy.integrate.ode(f) ivp.set_integrator(integrator) if store_trajectory: times = [] points = [] def solout(t, x): if len(times) == 0 or t != times[-1]: times.append(t) points.append(np.copy(x[:x0.shape[0]])) ivp.set_solout(solout) ivp.set_initial_value(x0, ts[0]) ivp.set_f_params(p) result = IVPResult() result.ts = ts result.xs = np.zeros([ts.shape[0], x0.shape[0]]) result.success = True result.xs[0,:] = x0 for ii in range(1,ts.shape[0]): ivp.integrate(ts[ii]) result.xs[ii,:] = ivp.y[:x0.shape[0]] if not ivp.successful(): result.success = False break if store_trajectory: result.trajectory_t = np.array(times) result.trajectory_x = np.array(points) return result """ Explanation: Solving an initial value problems with scipy.integrate Initial value problems as considered in the lecture are of the form $$ \dot x(t) = f(t, x(t), p), \quad x(t_0) = x_0 $$ where $ p $ is a parameter. Scipy provides with ODE a class to solve ordinary differential equations. The following function provides a convenience interface to solve an initial value problem as specified above: End of explanation """ def harm_osc(t, x, p): k = p[0] omega = p[1] return np.array([x[1], -k*k*x[0]+np.sin(omega*t)]) """ Explanation: As an example, we consider a harmonic oscillator that is driven by a periodic external force with frequency $ \omega $. The system is described by the differential equation $$ \ddot x + k^2 x = F \sin(\omega t) $$ with $ k = \sqrt{D/m} $ where $ D $ is the spring constant and $ m $ mass, furthermore we set $ F = 1 $. Reformulation as a system of first order is given by $$ \begin{pmatrix} \dot x_0 \ \dot x_1 \end{pmatrix} = \begin{pmatrix} x_1 \ -k^2 x_0 + \sin(\omega t) \end{pmatrix}. 
$$ We have to define the function $ f $ for the right hand side in python where we set $ p = \begin{pmatrix} k \ \omega \end{pmatrix} $. End of explanation """ result = solve_ivp(harm_osc, np.array([0., 25.]), np.array([1., 0.]), p=np.array([1., 2.]), store_trajectory=True) print(result.xs) fig = plt.figure() plt.plot(result.trajectory_t, result.trajectory_x) plt.show() """ Explanation: First we use $ k = 1, \omega = 2 $ and solve the initial value problem with $ x(0) = 1, \dot x(0) = 0 $ as initial value on the time horizon $ [0, 25] $ and store the trajectory as computed by the initial value solver: End of explanation """ result = solve_ivp(harm_osc, np.array([0., 25.]), np.array([1., 0.]), p=np.array([1., 1.]), store_trajectory=True) print(result.xs) fig = plt.figure() plt.plot(result.trajectory_t, result.trajectory_x) plt.show() """ Explanation: Now the case of resonance disaster: we set $ k = \omega = 1 $: End of explanation """ ffcn = lambda x: x[0]*x[3]*np.sum(x[:3]) + x[2] dfcn = lambda x: np.array([x[3]*np.sum(x[:3])+x[0]*x[3], x[0]*x[3], x[0]*x[3]+1, x[0]*np.sum(x[:3])]) cfcn_ieq = lambda x: np.array([np.prod(x)-25.0]) cfcn_eq = lambda x: np.array([np.inner(x,x)-40]) def jfcn_ieq(x): return np.array([ [np.prod(x[1:]), np.prod(x[[0,2,3]]), np.prod(x[[0,1,3]]), np.prod(x[:3])] ]) jfcn_eq = lambda x: np.array([2*x]) """ Explanation: Solving an optimization problem using the SQP solver SLSQP provided by scipy We use the problem HS71 from the Hock-Schittkowski collection of optimization problems. This is given by $$ \begin{array}{ll} \min_{x \in \mathbb R^4} & x_0 x_3 (x_0+x_1+x_2) \ \text{s.t.} & x_0 x_1 x_2 x_3 -25 \ge 0 \ & \Vert x \Vert_2^2 - 40 = 0 \ & 1 \le x_i \le 5 \quad (i=0,\ldots,3) \end{array}. 
$$ Now we define functions
- ffcn that computes $ f(x) := x_0 x_3 (x_0+x_1+x_2) + x_2 $,
- dfcn that computes $ \nabla f(x) = \begin{pmatrix} x_3 (x_0+x_1+x_2) + x_0 x_3 \ x_0 x_3 \ x_0 x_3 + 1 \ x_0 (x_0+x_1+x_2) \end{pmatrix} $
- cfcn_ieq that computes $ c_I(x) := x_0 x_1 x_2 x_3 - 25 $
- cfcn_eq that computes $ c_E(x) := \Vert x \Vert_2^2 - 40 $
- jfcn_ieq that computes $ Jc_I(x) = \nabla c_I(x)^T = \begin{pmatrix} x_1 x_2 x_3 & x_0 x_2 x_3 & x_0 x_1 x_3 & x_0 x_1 x_2 \end{pmatrix}$
- jfcn_eq that computes $ Jc_E(x) = \nabla c_E(x)^T = \begin{pmatrix} 2 x_0 & 2 x_1 & 2 x_2 & 2 x_3 \end{pmatrix}$ End of explanation
"""
x0 = np.random.randn(4) # x0 is an array with 4 entries chosen from a normal distribution
h = 1e-8 # perturbation in finite differences

# get unit vectors
unit = np.eye(4)

# compute finite difference approximation to gradient in x0
fx0 = ffcn(x0) # store f(x0) to compute it only once
dfx0 = np.array([ffcn(x0+h*unit[ii,:])-fx0 for ii in range(4)])/h # approximation to gradient
print(np.linalg.norm(dfx0 - dfcn(x0), np.inf))

# compute finite difference approximation to inequality jacobian in x0
cix0 = cfcn_ieq(x0)
jcix0 = (np.array([cfcn_ieq(x0+h*unit[ii,:])-cix0 for ii in range(4)])/h).transpose()
print(np.linalg.norm(jcix0 - jfcn_ieq(x0), np.inf))

# compute finite difference approximation to equality jacobian in x0
cex0 = cfcn_eq(x0)
jcex0 = (np.array([cfcn_eq(x0+h*unit[ii,:])-cex0 for ii in range(4)])/h).transpose()
print(np.linalg.norm(jcex0 - jfcn_eq(x0), np.inf))
"""
Explanation: For sanity reasons, we first test whether our implementation has any obvious bugs by comparing it with finite differences at a random point.
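As an aside (an addition, not part of the original notebook): the one-sided differences used here are only first-order accurate in h, while a central difference reduces the truncation error to O(h²). A small illustration on sin, whose derivative is known:

```python
import numpy as np

def forward_diff(g, x, h=1e-8):
    # one-sided difference: truncation error O(h)
    return (g(x + h) - g(x)) / h

def central_diff(g, x, h=1e-6):
    # central difference: truncation error O(h**2)
    return (g(x + h) - g(x - h)) / (2.0 * h)

x = 0.7
exact = np.cos(x)  # derivative of sin
print("forward error:", abs(forward_diff(np.sin, x) - exact))
print("central error:", abs(central_diff(np.sin, x) - exact))
```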
End of explanation
"""
import scipy.optimize

n = 4 # number of variables
m = 2 # number of constraints
l = np.ones([n]) # lower bound variables
u = 5.0*np.ones([n]) # upper bound variables
x0 = np.array([1., 5., 5., 1.]) # initial guess
result = scipy.optimize.minimize(ffcn, x0, method='SLSQP', jac=dfcn,
           bounds=[(l[i], u[i]) for i in range(l.shape[0])],
           constraints=[
               {'type': 'eq', 'fun': cfcn_eq, 'jac': jfcn_eq},
               {'type': 'ineq', 'fun': cfcn_ieq, 'jac': jfcn_ieq}
           ],
           options={'disp': True, 'iprint': 2})
"""
Explanation: The resulting deviations from the finite difference approximations give some confidence that we have implemented the gradient and the Jacobians correctly. Let us now solve the optimization problem with the initial guess $ x^0 = \begin{pmatrix} 1 \ 5 \ 5 \ 1 \end{pmatrix} $. Using the specified options prints a one-line summary per SQP iteration. End of explanation
"""
print(result)
"""
Explanation: result is an OptimizeResult object, a dictionary-like container holding the solution vector and some status information: End of explanation
"""
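As a final consistency check (an addition, not part of the original notebook), the returned point can be tested against the HS71 constraints and bounds; the optimizer reported in the literature is approximately (1, 4.743, 3.821, 1.379):

```python
import numpy as np

def hs71_feasible(x, tol=1e-3):
    # check the HS71 inequality, equality and bound constraints at a point x
    ineq_ok = np.prod(x) - 25.0 >= -tol        # x0*x1*x2*x3 - 25 >= 0
    eq_ok = abs(np.inner(x, x) - 40.0) <= tol  # ||x||_2^2 - 40 == 0
    bounds_ok = bool(np.all(x >= 1.0 - tol) and np.all(x <= 5.0 + tol))
    return bool(ineq_ok) and bool(eq_ok) and bounds_ok

x_star = np.array([1.0, 4.743, 3.82115, 1.37941])  # approximate reference optimum
print(hs71_feasible(x_star))
```

A point returned by the SLSQP run above should pass the same check (within the tolerance).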
robertoalotufo/ia898
master/tutorial_numpy_1_7.ipynb
mit
import numpy as np r,c = np.indices( (5, 10) ) print('r=\n', r) print('c=\n', c) """ Explanation: <a href="https://colab.research.google.com/github/robertoalotufo/ia898/blob/master/master/tutorial_numpy_1_7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Table of Contents <p><div class="lev1 toc-item"><a href="#Numpy:-Funções-indices-e-meshgrid" data-toc-modified-id="Numpy:-Funções-indices-e-meshgrid-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Numpy: Funções indices e meshgrid</a></div><div class="lev2 toc-item"><a href="#Operador-indices-em-pequenos-exemplos-numéricos" data-toc-modified-id="Operador-indices-em-pequenos-exemplos-numéricos-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Operador indices em pequenos exemplos numéricos</a></div><div class="lev2 toc-item"><a href="#Operador-indices-em-exemplo-de-imagens-sintéticas" data-toc-modified-id="Operador-indices-em-exemplo-de-imagens-sintéticas-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Operador indices em exemplo de imagens sintéticas</a></div><div class="lev2 toc-item"><a href="#Soma" data-toc-modified-id="Soma-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Soma</a></div><div class="lev2 toc-item"><a href="#Subtração" data-toc-modified-id="Subtração-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Subtração</a></div><div class="lev2 toc-item"><a href="#Xadrez" data-toc-modified-id="Xadrez-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Xadrez</a></div><div class="lev2 toc-item"><a href="#Reta" data-toc-modified-id="Reta-16"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Reta</a></div><div class="lev2 toc-item"><a href="#Parábola" data-toc-modified-id="Parábola-17"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Parábola</a></div><div class="lev2 toc-item"><a href="#Círculo" data-toc-modified-id="Círculo-18"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Círculo</a></div><div class="lev2 toc-item"><a 
href="#Meshgrid" data-toc-modified-id="Meshgrid-19"><span class="toc-item-num">1.9&nbsp;&nbsp;</span>Meshgrid</a></div><div class="lev2 toc-item"><a href="#Gerando-os-vetores-com-linspace" data-toc-modified-id="Gerando-os-vetores-com-linspace-110"><span class="toc-item-num">1.10&nbsp;&nbsp;</span>Gerando os vetores com linspace</a></div><div class="lev2 toc-item"><a href="#Exemplo-na-geração-da-imagem-sinc-com-meshgrid" data-toc-modified-id="Exemplo-na-geração-da-imagem-sinc-com-meshgrid-111"><span class="toc-item-num">1.11&nbsp;&nbsp;</span>Exemplo na geração da imagem sinc com meshgrid</a></div><div class="lev2 toc-item"><a href="#Exemplo-na-geração-da-imagem-sinc-com-indices" data-toc-modified-id="Exemplo-na-geração-da-imagem-sinc-com-indices-112"><span class="toc-item-num">1.12&nbsp;&nbsp;</span>Exemplo na geração da imagem sinc com indices</a></div><div class="lev2 toc-item"><a href="#Documentação-Oficial-Numpy" data-toc-modified-id="Documentação-Oficial-Numpy-113"><span class="toc-item-num">1.13&nbsp;&nbsp;</span>Documentação Oficial Numpy</a></div><div class="lev2 toc-item"><a href="#Referências" data-toc-modified-id="Referências-114"><span class="toc-item-num">1.14&nbsp;&nbsp;</span>Referências</a></div><div class="lev2 toc-item"><a href="#Para-usuários-avançados" data-toc-modified-id="Para-usuários-avançados-115"><span class="toc-item-num">1.15&nbsp;&nbsp;</span>Para usuários avançados</a></div><div class="lev2 toc-item"><a href="#Exemplos-com-Imagem" data-toc-modified-id="Exemplos-com-Imagem-116"><span class="toc-item-num">1.16&nbsp;&nbsp;</span>Exemplos com Imagem</a></div> # Numpy: Funções indices e meshgrid As funções ``indices`` e ``meshgrid`` são extremamente úteis na geração de imagens sintéticas e o seu aprendizado permite também entender as vantagens de programação matricial, evitando-se a varredura seqüencial da imagem muito usual na programação na linguagem C. 
## The indices operator in small numeric examples
The ``indices`` function takes as its parameter a tuple with the dimensions (H,W) of the matrices to be created. In the following example we generate matrices with 5 rows and 10 columns. The function returns a tuple of two matrices, which can be unpacked by assignment as in the example below, where we create the matrices ``r`` and ``c``, both of shape (5,10), i.e. 5 rows and 10 columns: End of explanation
"""
f = r + c
print('f=\n', f)
"""
Explanation: Note that r is a matrix in which each element equals its own row coordinate, and c is a matrix in which each element equals its own column coordinate. Therefore, any matrix operation performed on r and c is in fact an operation on the coordinates of the matrix, which makes it possible to generate many synthetic images as functions of their coordinates. Since NumPy processes the matrices directly, without an explicit for loop, the notation stays very simple and the code is efficient as well. The only drawback is the memory used to store the index matrices r and c; we will see later that this cost can be reduced. For example, consider the function given by the sum of the coordinates, $f(r,c) = r + c$: End of explanation
"""
f = r - c
print('f=\n', f)
"""
Explanation: Or the difference between the row and column coordinates, $f(r,c) = r - c$: End of explanation
"""
f = (r + c) % 2
print('f=\n', f)
"""
Explanation: Or the function $f(r,c) = (r + c) \% 2$, where % is the modulo operator. This function returns 1 if the sum of the coordinates is odd and 0 otherwise.
It is a checkerboard-style image of values 0 and 1: End of explanation
"""
f = (r == c//2)
print('f=\n', f)
print('f=\n',f.astype(int))
"""
Explanation: Or the function of a straight line, $ f(r,c) = (r = \frac{1}{2}c)$: End of explanation
"""
f = r**2 + c**2
print('f=\n', f)
"""
Explanation: Or the parabolic function given by the sum of the squares of the coordinates
$$ f(r,c) = r^2 + c^2 $$: End of explanation
"""
f = ((r**2 + c**2) < 4**2)
print('f=\n', f * 1)
"""
Explanation: Or the circle of radius 4 centered at (0,0), $f(r,c) = (r^2 + c^2 < 4^2)$: End of explanation
"""
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
    sys.path.append(ia898path)
import ia898.src as ia

nb = ia.nbshow(3)
r,c = np.indices( (200, 300) )
rn = ia.normalize(r)
cn = ia.normalize(c)
nb.nbshow(rn,'rn:linhas')
nb.nbshow(cn,'cn:colunas',flush=True)
"""
Explanation: The indices operator in a synthetic image example
Let us revisit the examples above, now generated as images. The differences are the matrix size, here (200,300), and the way we visualize it, with adshow instead of printing the values as we did above. Since the results of the functions will often fall outside the 0-255 range accepted by adshow, we always normalize the final values of the computed image to the 0-255 range using the function ia636:ianormalize ianormalize from the ia636 toolbox. Generating the coordinates with indices: note that the parameter of indices is a tuple.
Check the number of parentheses used: End of explanation
"""
f = r + c
ia.adshow(ia.normalize(f),'r + c')
"""
Explanation: Sum
The sum function $f(r,c) = r + c$: End of explanation
"""
f = r - c
ia.adshow(ia.normalize(f),'r - c')

%matplotlib inline
import matplotlib.pyplot as plt

plt.imshow(f,cmap='gray')
plt.colorbar()
"""
Explanation: Subtraction
The subtraction function $f(r,c) = r - c$: End of explanation
"""
f = (r//8 + c//8) % 2
ia.adshow(ia.normalize(f),'(r//8 + c//8) % 2')
"""
Explanation: Checkerboard
The checkerboard function $f(r,c) = (\lfloor r/8 \rfloor + \lfloor c/8 \rfloor) \% 2$. The division by 8 makes the squares of the board 8 x 8 pixels; otherwise the checkerboard effect would be very hard to see, since the image has many pixels: End of explanation
"""
f = (r == c//2)
ia.adshow(f,'r == c//2')
"""
Explanation: Line
Again the function of a straight line, $f(r,c) = (r = \frac{1}{2} c)$: End of explanation
"""
f = r**2 + c**2
ia.adshow(ia.normalize(f),'r**2 + c**2')
"""
Explanation: Parabola
The parabolic function $f(r,c) = r^2 + c^2$: End of explanation
"""
f = (r**2 + c**2 < 190**2)
ia.adshow(ia.normalize(f),'(r**2 + c**2) < 190**2')
"""
Explanation: Circle
The circle of radius 190, $f(r,c) = (r^2 + c^2 < 190^2)$: End of explanation
"""
import numpy as np
r, c = np.meshgrid( np.array([-1.5, -1.0, -0.5, 0.0, 0.5]),
                    np.array([ -20, -10, 0, 10, 20, 30]), indexing='ij')
print('r=\n',r)
print('c=\n',c)
"""
Explanation: Meshgrid
The meshgrid function is similar to the indices function seen above; however, while indices generates the non-negative integer coordinates from a shape (H,W), meshgrid builds the matrices from two vectors of arbitrary real values, one for the rows and one for the columns. A small numeric example follows. For meshgrid to be compatible with our (rows,columns) convention, the parameter indexing='ij' must be used.
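The relation between the two constructors can be verified directly: with indexing='ij' and np.arange vectors, meshgrid reproduces exactly what np.indices generates (a small sketch, not part of the original tutorial):

```python
import numpy as np

H, W = 3, 4
r1, c1 = np.indices((H, W))
r2, c2 = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')

print(np.array_equal(r1, r2), np.array_equal(c1, c2))  # both grids agree
```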
End of explanation
"""
rows = np.linspace(-1.5, 0.5, 5)
cols = np.linspace(-20, 30, 6)
print('rows:', rows)
print('cols:', cols)
"""
Explanation: Generating the vectors with linspace
The linspace function generates a floating-point vector from a start value, an end value and the number of points. It is therefore frequently used to generate the parameters for meshgrid. We repeat the same values as in the previous example, now using linspace. Note that the first vector has 5 points, starting at -1.5 with final value 0.5 (inclusive). The second vector has 6 points, going from -20 to 30: End of explanation
"""
r, c = np.meshgrid(rows, cols, indexing='ij')
print('r = \n', r)
print('c = \n', c)
"""
Explanation: Using the two vectors generated by linspace in meshgrid: End of explanation
"""
f = r * c
print('f=\n', f)
"""
Explanation: We can now generate a matrix or image that is a function of these values, for example their product: End of explanation
"""
e = np.spacing(1)                  # epsilon to avoid 0/0
rows = np.linspace(-5.0, 5.0, 150) # row coordinates
cols = np.linspace(-6.0, 6.0, 180) # column coordinates
r, c = np.meshgrid(rows, cols, indexing='ij') # numpy-style coordinate grid
z = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
ia.adshow(ia.normalize(z),'sinc function: sin(r² + c²)/(r²+c²) in two dimensions')
plt.imshow(z)
plt.colorbar()
"""
Explanation: Example generating the sinc image with meshgrid
In this example we generate the image of the function $ sinc(r,c)$ in two dimensions, on the interval -5 to 5 vertically and -6 to 6 horizontally. The sinc function is a trigonometric function that can be used for filtering. Its equation is
$$ sinc(r,c) = \frac{\sin(r^2 + c^2)}{r^2 + c^2}, \text{for } -5 \leq r \leq 5, -6 \leq c \leq 6 $$
At the origin both r and c are zero, resulting in a division by zero.
However, by the theory of limits, $\frac{\sin(x)}{x}$ equals 1 when $x$ equals zero. One way to obtain this in floating point is to add to both the numerator and the denominator an epsilon, the smallest representable floating-point increment, which can be obtained with the np.spacing function. End of explanation
"""
n_rows = len(rows)
n_cols = len(cols)
r,c = np.indices((n_rows,n_cols))
r = -5. + 10.*r.astype(float)/(n_rows-1)
c = -6. + 12.*c.astype(float)/(n_cols-1)
zi = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e) # epsilon is added to avoid 0/0
ia.adshow(ia.normalize(zi),'sinc function: sin(r² + c²)/(r²+c²) in two dimensions')
"""
Explanation: Example generating the sinc image with indices
Another way to generate the same image, using the indices function, is to transform the indices so that they produce the same values as the regularly spaced grid above, as illustrated below: End of explanation
"""
print('Maximum difference between z and zi?', abs(z - zi).max())
np.exp(-1/2.)
"""
Explanation: Verifying that the two functions are equal: End of explanation
"""
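As mentioned earlier, the memory spent on the full coordinate matrices can be reduced: np.ogrid (or meshgrid with sparse=True) returns open grids of shapes (H,1) and (1,W) that broadcast to the full image without ever materializing r and c (a sketch, not part of the original tutorial):

```python
import numpy as np

e = np.spacing(1)
# open grids: the complex step 150j/180j requests that many points, endpoints included
r, c = np.ogrid[-5.0:5.0:150j, -6.0:6.0:180j]
z = np.sin(r**2 + c**2 + e) / (r**2 + c**2 + e)  # broadcasting builds the (150, 180) image

print(r.shape, c.shape, z.shape)
```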
xysmas/music_genre_classifier
src/report.ipynb
gpl-2.0
%load_ext autoreload
%autoreload 2

import numpy as np
import sklearn.metrics as metrics
import utils as utils
from LogisticRegressionClassifier import LogisticRegressionClassifier
%pylab inline
"""
Explanation: Logistic Regression
Aaron Gonzales
CS529, Machine Learning Project 3
Instructor: Trilce Estrada
Overview of the project
I have a working logistic regression classifier built in python and numpy. It seems to work somewhat quickly but may not be as robust or as stable as I'd like it to be. There are a great number of things to look at with respect to optimizing gradient descent and the various fiddly bits of the program.
This is an ipython notebook; code can be executed directly from here. When you run the program, navigate to the root of this project and then to /src. python3 ./main.py will make the program work. End of explanation
"""
fft_dict, fft_labels, ffts = utils.read_features(feature='fft')
mfc_dict, mfc_labels, mfcs = utils.read_features(feature='mfc')
"""
Explanation: I had previously extracted the FFTs and MFCCs from the data; all of them live in the /data/ folder of this package, so loading them in is easy. Methods to extract them are in the fft.py and mfcc.py files; both were run from an IPython environment.
The FFT data was scaled to {0, 1} and the MFCCs were scaled via z-score. The data is in the variables (fft, mfcs), the labels are hopefully labeled clearly enough, and the dictionary is just a mapping of label ID to the actual English word.
Note: I used the full 1000 song dataset for this; not the reduced set from Trilce. End of explanation
"""
lrc_fft = LogisticRegressionClassifier(ffts, fft_labels, fft_dict)
lrc_mfc = LogisticRegressionClassifier(mfcs, mfc_labels, mfc_dict)
"""
Explanation: The classifier is implemented as a class and holds its metrics and data information internally after calls to its methods.
It is initialized with the data as given: End of explanation """ lrc_fft.cross_validate() """ Explanation: FFT components Now that we have our data loaded, we can go ahead and fit the logistic regression model to it. Internally it is performing 10-fold cross validation with shuffling via Sklearn's cross_validated module. Gradient Descent I went with the vectorized version of gradient descent discussed in Piazza. My learning rate was adaptive to the custom 'error' rate defined as the max value from the dot product between the $$\Delta End of explanation """ from sklearn.feature_selection import VarianceThreshold sel = VarianceThreshold(0.01150) a = sel.fit_transform(ffts) a.shape lr = LogisticRegressionClassifier(a, fft_labels, fft_dict) lr.cross_validate() """ Explanation: These are not the best scores i've ever seen. Selecting only the features that have moderate variance gives us a set of 200 to test. End of explanation """ from sklearn.decomposition import PCA p = PCA(n_components=200) pcad = p.fit_transform(ffts) pcalrc = LogisticRegressionClassifier(pcad, fft_labels, fft_dict) pcalrc.cross_validate(3) """ Explanation: Those scores went down, so I presume that I did something wrong or that ther is incredible bias or multicoliniarty in this model. I'll try PCA and see how that goes. End of explanation """ # this was already fit lrc_mfc.cross_validate(10) _ = utils.plot_confusion_matrix(lrc_mfc.metrics['cv_average']) """ Explanation: Not much better. Results are holding steady around 30%. On to the MFC features. End of explanation """
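The LogisticRegressionClassifier source is not included in this report; as a reference, here is a minimal sketch of the kind of vectorized multinomial gradient-descent update it describes (all names below are illustrative, not the project's actual API):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gd_step(W, X, Y, eta=0.1, lam=1e-3):
    # one vectorized gradient-descent step for multinomial logistic regression
    # X: (n, d) features, Y: (n, k) one-hot labels, W: (d, k) weights
    P = softmax(X @ W)
    grad = X.T @ (P - Y) / X.shape[0] + lam * W  # cross-entropy gradient + L2 term
    return W - eta * grad

rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a linearly separable toy target
Y = np.eye(2)[y]
W = np.zeros((5, 2))
for _ in range(300):
    W = gd_step(W, X, Y)

loss = -np.mean(np.sum(Y * np.log(softmax(X @ W) + 1e-12), axis=1))
print(loss)
```

On this toy data the loss drops well below the ln 2 ≈ 0.693 of the uninformed initial weights.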
GoogleCloudPlatform/mlops-on-gcp
skew_detection/01_covertype_training_serving.ipynb
apache-2.0
!pip install -q -U tensorflow==2.1 !pip install -U -q google-api-python-client !pip install -U -q pandas # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Serving a Keras Model on AI Platform Prediction with request-response logging to BigQuery This tutorial shows how to train a TensorFlow classification model using the Keras API, and then deploy the model to AI Platform Prediction for online prediction. The tutorial also shows how to enable AI Platform Prediction request-response logging to BigQuery. The tutorial covers the following tasks: Prepare the data and generate metadata. Train and evaluate a TensorFlow classification model using the Keras API. Export the trained model as a SavedModel for serving. Deploy the trained model to AI Platform Prediction. Enable request-response logging to send logs to BigQuery. Query logs from BigQuery. Note: This example uses TensorFlow 2.x Setup Install packages and dependencies End of explanation """ PROJECT_ID = '[your-google-project-id]' BUCKET = '[your-bucket-name]' REGION = '[your-region-id]' !gcloud config set project $PROJECT_ID """ Explanation: Configure Google Cloud environment settings End of explanation """ try: from google.colab import auth auth.authenticate_user() print("Colab user is authenticated.") except: pass """ Explanation: Authenticate your Google Cloud account This step is required if you run the notebook in Colab. 
End of explanation """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import tensorflow as tf import pandas as pd from google.cloud import bigquery print("TF version: {}".format(tf.__version__)) """ Explanation: Import libraries End of explanation """ LOCAL_WORKSPACE = './workspace' LOCAL_DATA_DIR = os.path.join(LOCAL_WORKSPACE, 'data') BQ_DATASET_NAME = 'prediction_logs' BQ_TABLE_NAME = 'covertype_classifier_logs' MODEL_NAME = 'covertype_classifier' VERSION_NAME = 'v1' TRAINING_DIR = os.path.join(LOCAL_WORKSPACE, 'training') MODEL_DIR = os.path.join(TRAINING_DIR, 'exported_model') """ Explanation: Define constants You can change the default values for the following constants: End of explanation """ if tf.io.gfile.exists(LOCAL_WORKSPACE): print("Removing previous workspace artifacts...") tf.io.gfile.rmtree(LOCAL_WORKSPACE) print("Creating a new workspace...") tf.io.gfile.makedirs(LOCAL_WORKSPACE) tf.io.gfile.makedirs(LOCAL_DATA_DIR) print("Workspace created.") """ Explanation: Create a local workspace End of explanation """ LOCAL_TRAIN_DATA = os.path.join(LOCAL_DATA_DIR, 'train.csv') LOCAL_EVAL_DATA = os.path.join(LOCAL_DATA_DIR, 'eval.csv') !gsutil cp gs://workshop-datasets/covertype/data_validation/training/dataset.csv {LOCAL_TRAIN_DATA} !gsutil cp gs://workshop-datasets/covertype/data_validation/evaluation/dataset.csv {LOCAL_EVAL_DATA} !wc -l {LOCAL_TRAIN_DATA} """ Explanation: 1. Preparing the dataset and defining the metadata The data in this tutorial is based on the covertype dataset from UCI Machine Learning Repository. The notebook uses a version of the dataset that has been preprocessed, split, and uploaded to a public Cloud Storage bucket at the following location: gs://workshop-datasets/covertype For more information, see Cover Type Dataset The task in this tutorial is to predict forest cover type from cartographic variables only. 
The aim is to build and deploy a minimal model to showcase the AI Platform Prediction request-response logging capabilities. Such logs let you perform further analysis for detecting data skews. 1.1. Download the data End of explanation """ pd.read_csv(LOCAL_TRAIN_DATA).head().T """ Explanation: View a sample of the downloaded data: End of explanation """ HEADER = ['Elevation', 'Aspect', 'Slope','Horizontal_Distance_To_Hydrology', 'Vertical_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways', 'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm', 'Horizontal_Distance_To_Fire_Points', 'Wilderness_Area', 'Soil_Type', 'Cover_Type'] TARGET_FEATURE_NAME = 'Cover_Type' TARGET_FEATURE_LABELS = ['0', '1', '2', '3', '4', '5', '6'] NUMERIC_FEATURE_NAMES = ['Aspect', 'Elevation', 'Hillshade_3pm', 'Hillshade_9am', 'Hillshade_Noon', 'Horizontal_Distance_To_Fire_Points', 'Horizontal_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways','Slope', 'Vertical_Distance_To_Hydrology'] CATEGORICAL_FEATURES_WITH_VOCABULARY = { 'Soil_Type': ['2702', '2703', '2704', '2705', '2706', '2717', '3501', '3502', '4201', '4703', '4704', '4744', '4758', '5101', '6101', '6102', '6731', '7101', '7102', '7103', '7201', '7202', '7700', '7701', '7702', '7709', '7710', '7745', '7746', '7755', '7756', '7757', '7790', '8703', '8707', '8708', '8771', '8772', '8776'], 'Wilderness_Area': ['Cache', 'Commanche', 'Neota', 'Rawah'] } FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) + NUMERIC_FEATURE_NAMES HEADER_DEFAULTS = [[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ['NA'] for feature_name in HEADER] NUM_CLASSES = len(TARGET_FEATURE_LABELS) """ Explanation: 1.2 Define the metadata The following code shows the metadata of the dataset, which is used to create the data input function, the feature columns, and the serving function. 
End of explanation """ RANDOM_SEED = 19830610 import multiprocessing def create_dataset(file_pattern, batch_size=128, num_epochs=1, shuffle=False): dataset = tf.data.experimental.make_csv_dataset( file_pattern=file_pattern, batch_size=batch_size, column_names=HEADER, column_defaults=HEADER_DEFAULTS, label_name=TARGET_FEATURE_NAME, field_delim=',', header=True, num_epochs=num_epochs, shuffle=shuffle, shuffle_buffer_size=(5 * batch_size), shuffle_seed=RANDOM_SEED, num_parallel_reads=multiprocessing.cpu_count(), sloppy=True, ) return dataset.cache() """ Explanation: 2. Training and evaluating the model 2.1. Implement the data input pipeline End of explanation """ index = 1 for batch in create_dataset(LOCAL_TRAIN_DATA, batch_size=5, shuffle=False).take(2): print("Batch: {}".format(index)) print("========================") record, target = batch print("Input features:") for key in record: print(" - {}:{}".format(key, record[key].numpy())) print("Target: {}".format(target)) index += 1 print() """ Explanation: The following code performs a test by reading some batches of data using the data input function: End of explanation """ import math def create_feature_columns(): feature_columns = [] for feature_name in FEATURE_NAMES: # Categorical features if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY: vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name] vocab_size = len(vocabulary) # Create embedding column for categorical feature column with vocabulary embedding_feature_column = tf.feature_column.embedding_column( categorical_column = tf.feature_column.categorical_column_with_vocabulary_list( key=feature_name, vocabulary_list=vocabulary), dimension=int(math.sqrt(vocab_size) + 1)) feature_columns.append(embedding_feature_column) # Numeric features else: numeric_column = tf.feature_column.numeric_column(feature_name) feature_columns.append(numeric_column) return feature_columns """ Explanation: 2.2. 
Create feature columns End of explanation """ feature_columns = create_feature_columns() for column in feature_columns: print(column) """ Explanation: The following code tests the feature columns to be created: End of explanation """ def create_model(params): feature_columns = create_feature_columns() layers = [] layers.append(tf.keras.layers.DenseFeatures(feature_columns)) for units in params.hidden_units: layers.append(tf.keras.layers.Dense(units=units, activation='relu')) layers.append(tf.keras.layers.BatchNormalization()) layers.append(tf.keras.layers.Dropout(rate=params.dropout)) layers.append(tf.keras.layers.Dense(units=NUM_CLASSES, activation='softmax')) model = tf.keras.Sequential(layers=layers, name='classifier') adam_optimzer = tf.keras.optimizers.Adam(learning_rate=params.learning_rate) model.compile( optimizer=adam_optimzer, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()], loss_weights=None, sample_weight_mode=None, weighted_metrics=None, ) return model """ Explanation: 2.3. 
Create and compile the model End of explanation """ def run_experiment(model, params): # TensorBoard callback LOG_DIR = os.path.join(TRAINING_DIR, 'logs') tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR) # Early stopping callback earlystopping_callback = tf.keras.callbacks.EarlyStopping( monitor='val_sparse_categorical_accuracy', patience=3, restore_best_weights=True ) callbacks = [ tensorboard_callback, earlystopping_callback] # Train dataset train_dataset = create_dataset( LOCAL_TRAIN_DATA, batch_size=params.batch_size, shuffle=True) # Eval dataset eval_dataset = create_dataset( LOCAL_EVAL_DATA, batch_size=params.batch_size) # Prep training directory if tf.io.gfile.exists(TRAINING_DIR): print("Removing previous training artifacts...") tf.io.gfile.rmtree(TRAINING_DIR) print("Creating training directory...") tf.io.gfile.mkdir(TRAINING_DIR) print("Experiment started...") print(".......................................") # Run train and evaluate history = model.fit( x=train_dataset, epochs=params.epochs, callbacks=callbacks, validation_data=eval_dataset, ) print(".......................................") print("Experiment finished.") print("") return history """ Explanation: 2.4. 
Train and evaluate the experiment Define the experiment End of explanation """ class Parameters(): pass TRAIN_DATA_SIZE = 431010 params = Parameters() params.learning_rate = 0.01 params.hidden_units = [128, 128] params.dropout = 0.15 params.batch_size = 265 params.steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / params.batch_size)) params.epochs = 10 """ Explanation: Define hyperparameters End of explanation """ model = create_model(params) example_batch, _ = list( create_dataset(LOCAL_TRAIN_DATA, batch_size=2, shuffle=True).take(1))[0] model(example_batch) model.summary() import logging logger = tf.get_logger() logger.setLevel(logging.ERROR) history = run_experiment(model, params) """ Explanation: Run the experiment End of explanation """ import matplotlib.pyplot as plt fig, (ax1, ax2) = plt.subplots(1, 2) fig.set_size_inches(w=(10, 5)) # Plot training & validation accuracy values ax1.plot(history.history['sparse_categorical_accuracy']) ax1.plot(history.history['val_sparse_categorical_accuracy']) ax1.set_title('Model Accuracy') ax1.set(xlabel='Iteration', ylabel='accuracy') ax1.legend(['Train', 'Eval'], loc='upper left') # Plot training & validation loss values ax2.plot(history.history['loss']) ax2.plot(history.history['val_loss']) ax2.set_title('Model Loss') ax2.set(xlabel='Iteration', ylabel='loss') ax2.legend(['Train', 'Eval'], loc='upper left') """ Explanation: Visualize training history End of explanation """ LABEL_KEY = 'predicted_label' SCORE_KEY = 'confidence' PROBABILITIES_KEY = 'probabilities' SIGNATURE_NAME = 'serving_default' """ Explanation: 3. 
Export the model for serving End of explanation """ def make_features_serving_fn(model): @tf.function def serve_features_fn(features): probabilities = model(features) labels = tf.constant(TARGET_FEATURE_LABELS, dtype=tf.string) predicted_class_indices = tf.argmax(probabilities, axis=1) predicted_class_label = tf.gather( params=labels, indices=predicted_class_indices) prediction_confidence = tf.reduce_max(probabilities, axis=1) return { LABEL_KEY: predicted_class_label, SCORE_KEY:prediction_confidence, PROBABILITIES_KEY: probabilities} return serve_features_fn """ Explanation: 3.1. Implement serving input receiver functions Create the serving function The notebook creates a serving input function that expects a features dictionary and returns the following: - Predicted class label - Prediction confidence - Prediction probabilities of all the classes End of explanation """ feature_spec = {} for feature_name in FEATURE_NAMES: if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY: feature_spec[feature_name] = tf.io.FixedLenFeature( shape=[None], dtype=tf.string) else: feature_spec[feature_name] = tf.io.FixedLenFeature( shape=[None], dtype=tf.float32) for key, value in feature_spec.items(): print("{}: {}".format(key, value)) """ Explanation: Create the feature spec dictionary The code creates the feature_spec dictionary for the input features with respect to the dataset metadata: End of explanation """ features_input_signature = { feature: tf.TensorSpec(shape=spec.shape, dtype=spec.dtype, name=feature) for feature, spec in feature_spec.items()} signatures = { SIGNATURE_NAME: make_features_serving_fn(model).get_concrete_function( features_input_signature)} model.save(MODEL_DIR, save_format='tf', signatures=signatures) print("Model is exported to: {}.".format(MODEL_DIR)) """ Explanation: 3.2. 
Export the model End of explanation """ !saved_model_cli show --dir {MODEL_DIR} --tag_set serve --signature_def {SIGNATURE_NAME} """ Explanation: Verify the signature (inputs and outputs) of the exported model using thesaved_model_cli function: End of explanation """ instances = [ { 'Soil_Type': ['7202'], 'Wilderness_Area': ['Commanche'], 'Aspect': [61], 'Elevation': [3091], 'Hillshade_3pm': [129], 'Hillshade_9am': [227], 'Hillshade_Noon': [223], 'Horizontal_Distance_To_Fire_Points': [2868], 'Horizontal_Distance_To_Hydrology': [134], 'Horizontal_Distance_To_Roadways': [0], 'Slope': [8], 'Vertical_Distance_To_Hydrology': [10], } ] """ Explanation: 3.3. Test the exported model locally Create a sample instance for prediction: End of explanation """ import numpy as np def create_tf_features(instance): new_instance = {} for key, value in instance.items(): if key in CATEGORICAL_FEATURES_WITH_VOCABULARY: new_instance[key] = tf.constant(value, dtype=tf.string) else: new_instance[key] = tf.constant(value, dtype=tf.float32) return new_instance """ Explanation: Prepare the sample instance in the format that's expected by the model signature: End of explanation """ features_predictor = tf.saved_model.load(MODEL_DIR).signatures[SIGNATURE_NAME] def local_predict(instance): features = create_tf_features(instance) outputs = features_predictor(**features) return outputs """ Explanation: Load the SavedModel for prediction, and then create a function that generates the prediction probabilities from the model to return the class label that has the highest probability: End of explanation """ outputs = local_predict(instances[0]) predictions = list( zip(outputs[LABEL_KEY].numpy().tolist(), outputs[SCORE_KEY].numpy().tolist())) for prediction in predictions: print("Predicted label: {} - Prediction confidence: {}".format( prediction[0], round(prediction[1], 3))) """ Explanation: Perform a prediction using the local SavedModel: End of explanation """ !gsutil rm -r 
gs://{BUCKET}/models/{MODEL_NAME} !gsutil cp -r {MODEL_DIR} gs://{BUCKET}/models/{MODEL_NAME} """ Explanation: 3.4 Upload the exported model to Cloud Storage End of explanation """ !gcloud ai-platform models create {MODEL_NAME} \ --project {PROJECT_ID} \ --regions {REGION} # List the models !gcloud ai-platform models list --project {PROJECT_ID} """ Explanation: 4. Deploy the model to AI Platform Prediction 4.1. Create the model in AI Platform Prediction End of explanation """ !gcloud ai-platform versions create {VERSION_NAME} \ --model={MODEL_NAME} \ --origin=gs://{BUCKET}/models/{MODEL_NAME} \ --runtime-version=2.1 \ --framework=TENSORFLOW \ --python-version=3.7 \ --project={PROJECT_ID} # List the model versions !gcloud ai-platform versions list --model={MODEL_NAME} --project={PROJECT_ID} """ Explanation: 4.2. Create a model version End of explanation """ import googleapiclient.discovery service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME) print("Service name: {}".format(name)) def caip_predict(instances): request_body={ 'signature_name': SIGNATURE_NAME, 'instances': instances} response = service.projects().predict( name=name, body=request_body ).execute() if 'error' in response: raise RuntimeError(response['error']) outputs = response['predictions'] return outputs """ Explanation: 4.3. 
Test the deployed model Create a function to call the AI Platform Prediction model version: End of explanation """ outputs = caip_predict(instances) for output in outputs: print("Predicted label: {} - Prediction confidence: {}".format( output[LABEL_KEY], round(output[SCORE_KEY], 3))) """ Explanation: Perform a prediction using AI Platform Prediction: End of explanation """ client = bigquery.Client(PROJECT_ID) dataset_names = [dataset.dataset_id for dataset in client.list_datasets(PROJECT_ID)] dataset = bigquery.Dataset("{}.{}".format(PROJECT_ID, BQ_DATASET_NAME)) dataset.location = "US" if BQ_DATASET_NAME not in dataset_names: dataset = client.create_dataset(dataset) print("Created dataset {}.{}".format(client.project, dataset.dataset_id)) print("BigQuery dataset is ready.") """ Explanation: 5. Preparing logging for a BigQuery dataset 5.1. Create the BigQuery dataset End of explanation """ import json table_schema_json = [ { "name": "model", "type": "STRING", "mode": "REQUIRED" }, { "name":"model_version", "type": "STRING", "mode":"REQUIRED" }, { "name":"time", "type": "TIMESTAMP", "mode": "REQUIRED" }, { "name":"raw_data", "type": "STRING", "mode": "REQUIRED" }, { "name":"raw_prediction", "type": "STRING", "mode": "NULLABLE" }, { "name":"groundtruth", "type": "STRING", "mode": "NULLABLE" }, ] json.dump( table_schema_json, open('table_schema.json', 'w')) """ Explanation: 5.2. 
Create the BigQuery table to store the logs Define the table schema End of explanation """ table = bigquery.Table( "{}.{}.{}".format(PROJECT_ID, BQ_DATASET_NAME, BQ_TABLE_NAME)) table_names = [table.table_id for table in client.list_tables(dataset)] if BQ_TABLE_NAME in table_names: print("Deleting BQ table: {} ...".format(BQ_TABLE_NAME)) client.delete_table(table) TIME_PARTITION_EXPERIATION = int(60 * 60 * 24 * 7) !bq mk --table \ --project_id={PROJECT_ID} \ --time_partitioning_field=time \ --time_partitioning_type=DAY \ --time_partitioning_expiration={TIME_PARTITION_EXPERIATION} \ {PROJECT_ID}:{BQ_DATASET_NAME}.{BQ_TABLE_NAME} \ 'table_schema.json' """ Explanation: Create a table that's partitioned on ingestion time End of explanation """ sampling_percentage = 1.0 bq_full_table_name = '{}.{}.{}'.format(PROJECT_ID, BQ_DATASET_NAME, BQ_TABLE_NAME) logging_config = { "requestLoggingConfig":{ "samplingPercentage": sampling_percentage, "bigqueryTableName": bq_full_table_name } } service.projects().models().versions().patch( name=name, body=logging_config, updateMask="requestLoggingConfig" ).execute() """ Explanation: 5.3. Configure the AI Platform Prediction model version to enable request-response logging to BigQuery In order to enable the request-response logging to an existing AI Platform Prediction model version, you need to call the patch API and populate the requestLoggingConfig field. End of explanation """ import time for i in range(5): caip_predict(instances) print('.', end='') time.sleep(1) """ Explanation: 5.4. Test request-response logging Send sample prediction requests to the model version on AI Platform Prediction: End of explanation """ query = ''' SELECT * FROM `{}.{}` WHERE model_version = '{}' ORDER BY time desc LIMIT {} '''.format(BQ_DATASET_NAME, BQ_TABLE_NAME, VERSION_NAME, 3) pd.io.gbq.read_gbq( query, project_id=PROJECT_ID).T """ Explanation: Query the logged request-response entries in BigQuery: End of explanation """
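Each logged row stores the request and response bodies as JSON strings in the raw_data and raw_prediction columns. The stdlib sketch below shows one way to decode a fetched row back into paired (instance, prediction) records; the sample payloads are hypothetical, shaped like the serving signature used earlier in this notebook.

```python
import json

def parse_logged_row(raw_data, raw_prediction):
    """Decode the JSON strings from the raw_data / raw_prediction
    columns of the request-response log table into paired records."""
    instances = json.loads(raw_data).get("instances", [])
    predictions = []
    if raw_prediction:
        predictions = json.loads(raw_prediction).get("predictions", [])
    return list(zip(instances, predictions))

# A hypothetical logged row, mirroring the request shape used above.
pairs = parse_logged_row(
    '{"signature_name": "serving_default", "instances": [{"Slope": [8]}]}',
    '{"predictions": [{"predicted_label": "Commanche", "confidence": 0.91}]}')
```

Pairing instances with predictions this way makes it easy to join the log entries back against ground-truth labels later.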
stevesimmons/pydata-berlin2017-pandas-and-dask-from-the-inside
pandas-from-the-inside.ipynb
gpl-3.0
# Sample code from the tutorial 'Pandas from the Inside' # Stephen Simmons - mail@stevesimmons.com # PyData Amsterdam, Fri 7 April 2017 # # Requires python3, pandas and numpy. # Jupyter/IPython are also useful. # Best with pandas > 0.18.1. # Pandas 0.18.0 requires a workaround for an indexing bug. import csv import os import numpy as np import pandas as pd print("numpy=%s; pandas=%s" % (np.__version__, pd.__version__)) # Don't wrap tables pd.options.display.max_rows = 20 pd.options.display.width = 200 """ Explanation: Pandas from the Inside PyData Berlin tutorial, 30 June 2017 Stephen Simmons - mail@stevesimmons.com http://github.com/stevesimmons Imports and setting sensible defaults End of explanation """ def main(name='bg3.txt'): # Download sample data from www.afltables.com if not present if not os.path.exists(name): download_sample_data(names=[name]) # Part 1 - Load sample data as a DataFrame (1 game => 1 row) raw_df = load_data(name) # Part 2 - Reshape to give team scores (1 game => 2 rows) scores_df = prepare_game_scores(raw_df) # Parts 3 and 4 - GroupBy to get Wins/Draws/Losses/Points ladder_df = calc_team_ladder(scores_df) return ladder_df #if __name__ == '__main__': # main() """ Explanation: Overall program flow If run as a standalone script, a main() function mirroring the structure of the slides would look like this: End of explanation """ def download_sample_data(names=('bg3.txt', 'bg7.txt')): ''' Download results and attendance stats for every AFL match since 1897 from www.afltables.com into files 'bg3.txt' and 'bg7.txt' in the current directory. 
'''
    import urllib.request
    base_url = 'http://afltables.com/afl/stats/biglists/'
    for filename in names:
        url = base_url + filename
        print("Downloading from %s" % url)
        txt = urllib.request.urlopen(url).read()
        with open(filename, 'wb') as f:
            f.write(txt)
        print("Wrote %d bytes to %s" % (len(txt), filename))
name = 'bg3.txt'
if not os.path.exists(name):
    download_sample_data([name])
"""
Explanation: Download sample data
The sample data here is historical Australian Rules Football (AFL) game results, going back to the start of the first competition in 1897. This data can be downloaded from the web site http://www.afltables.com. The site's focus is week/game-level HTML tables formatted for human footy enthusiasts to read. We want something easier for pandas to consume, so instead grab the large text file bg3.txt.
End of explanation
"""
def load_data(name='bg3.txt'):
    '''
    Pandas DataFrames from loading csv files bg3.txt (games)
    or bg7.txt (attendance) csvs downloaded from www.afltables.com.
    '''
    if name == 'bg3.txt':
        # Scores with rounds
        # - GameNum ends with '.', single space for nums > 100k
        # - Rounds are 'R1'-'R22' or 'QF', 'PF', 'GF'.
        # - Three grand finals were drawn and replayed the next week
        # - Scores are strings '12.5.65' with goals/behinds/points
        # - Venue may end with a '.', e.g. 'M.C.G.' though always at EOL
        cols = 'GameNum Date Round HomeTeam HomeScore AwayTeam AwayScore Venue'
        sep = '[. ] +'
    elif name == 'bg7.txt':
        # Attendance stats
        # - RowNum ends with '.', single space for nums > 100k
        # - Spectators ends with '*' for finals games
        # - Venue may end with a '.', e.g. 'M.C.G.'
        # - Dates are 'dd-Mmm-yyyy'.
        # - Date/Venue unique, except for two days in 1980s, when
        #   M.C.G. hosted games at 2pm and 5pm with same num of spectators.
        cols = 'RowNum Spectators HomeTeam HomeScore AwayTeam AwayScore Venue Date'
        sep = '(?:(?<=[0-9])[.*] +)|(?: +)'
    else:
        raise ValueError("Unexpected data file")
    df = pd.read_csv(name, skiprows=2, sep=sep, names=cols.split(),
                     parse_dates=['Date'],
                     quoting=csv.QUOTE_NONE, engine='python')
    return df
import os
os.getcwd()
raw_df = load_data('bg3.txt')
raw_df.info()
"""
Explanation: Note I prepared the slides in today's tutorial last year for PyData London, when the 2016 season was only part-way through. If you download the data now, it includes results for the whole of the 2016 season. The last 2016 match is the Grand Final on Saturday 1 October, where the Western Bulldogs beat the Sydney Swans 13.11.89 to 10.7.67.
Load raw data into a pandas DataFrame
The pandas function pd.read_csv() is surprisingly powerful. Here we use its ability to split text into columns using a regular expression. This is a little tricky because most columns are separated by two or more spaces. The exception is the row numbers, where the separator is a dot (from the row number) plus a single space, once the row numbers are above 99,999.
End of explanation """ def prepare_game_scores(df): ''' DataFrame with rows giving each team's results in a game (1 game -> 2 rows for home and away teams) ''' scores_raw = df.drop('GameNum', axis=1).set_index(['Date', 'Venue', 'Round']) # Convert into sections for both teams home_teams = scores_raw['HomeTeam'].rename('Team') away_teams = scores_raw['AwayTeam'].rename('Team') # Split the score strings into Goals/Behinds, and points For and Against regex = '(?P<G>\d+).(?P<B>\d+).(?P<F>\d+)' home_scores = scores_raw['HomeScore'].str.extract(regex, expand=True).astype(int) away_scores = scores_raw['AwayScore'].str.extract(regex, expand=True).astype(int) home_scores['A'] = away_scores['F'] away_scores['A'] = home_scores['F'] home_games = pd.concat([home_teams, home_scores], axis=1) away_games = pd.concat([away_teams, away_scores], axis=1) scores = home_games.append(away_games).sort_index().set_index('Team', append=True) # scores = pd.concat([home_games, away_games], axis=0).sort_index() # Rather than moving Team to MultiIndex with scores.set_index('Team', append=True), # keep it as a data column so we can see what an inhomogeneous DataFrame looks like. return scores scores_df = prepare_game_scores(raw_df) scores_df """ Explanation: Reformat the raw table This function illustrates some powerful ways to process the raw data. In particular, notice how Series.str.extract(regex, expand=True) can split a Series of strings into a DataFrame with columns given by a regular expression. End of explanation """ #scores_df.loc(axis=0)[str(year), :, 'R1':'R9', :] scores_df.info() scores_df.index.is_lexsorted() scores_df.sort_index(inplace=True) scores_df.index.is_lexsorted() """ Explanation: Calculate season ladder To calculate the ladder for a season, we need to pull out all round robin games for that year (i.e. excluding the finals games SF, QF, PF, GF). A quick way of doing this is noting there are never more than 23 rounds in a season (i.e. R1 to R23). 
So we can select Rounds in the range 'R1':'R9' inclusive. Recent versions of pandas can get this directly using multidimensional slicing, so long as the index is sorted.
End of explanation
"""
#scores_df.loc(axis=0)[str(year), :, 'R1':'R9', :]
scores_df.info()
scores_df.index.is_lexsorted()
scores_df.sort_index(inplace=True)
scores_df.index.is_lexsorted()
"""
Explanation: Now the scores DataFrame is sorted, we can easily select subsets of the rows:
End of explanation
"""
scores_df.loc(axis=0)['2016', :, 'R1':'R9', :]
"""
Explanation: Here is the complete code to produce the ladder. Note there is a bug in pandas 0.18.0's multidimensional slicing which we have to work around.
End of explanation
"""
def calc_team_ladder(scores_df, year=2016):
    '''
    DataFrame with championship ladder with round-robin games for the given year.
    Wins, draws and losses are worth 4, 2 and 0 points respectively.
    '''
    # Select a subset of the rows
    # df.loc[] matches dates as strings like '20160506' or '2016'.
    # Note here rounds are simple strings so sort with R1 < R10 < R2 < .. < R9
    # (we could change this with a CategoricalIndex)
    if pd.__version__ > '0.18.0':
        # MultiIndex slicing works ok
        scores2 = scores_df.sort_index()
        x = scores2.loc(axis=0)[str(year), :, 'R1':'R9', :]
    else:
        # pandas 0.18.0 has a bug with .loc on MultiIndexes
        # if dates are the first level. It works as expected if we
        # move the dates to the end before slicing
        scores2 = scores_df.reorder_levels([1, 2, 3, 0]).sort_index()
        x = scores2.loc(axis=0)[:, 'R1':'R9', :, str(year):str(year)]
        # Don't need to put levels back in order as we are about to drop 3 of them
        # x = x.reorder_levels([3, 0, 1, 2]).sort_index()
    # Just keep Team. This does a copy too, avoiding SettingWithCopy warning
    y = x.reset_index(['Date', 'Venue', 'Round'], drop=True)
    # Add cols with 0/1 for number of games played, won, drawn and lost
    y['P'] = 1
    y['W'] = (y['F'] > y['A']).astype(int)
    y['D'] = 0
    y.loc[y['F'] == y['A'], 'D'] = 1
    y.eval('L = 1*(A>F)', inplace=True)
    #print(y)
    # Subtotal by team and then sort by Points/Percentage
    t = y.groupby(level='Team').sum()
    t['PCT'] = 100.0 * t.F / t.A
    t['PTS'] = 4 * t['W'] + 2 * t['D']
    ladder = t.sort_values(['PTS', 'PCT'], ascending=False)
    # Add ladder position (note: assumes no ties!)
ladder['Pos'] = pd.RangeIndex(1, len(ladder) + 1) #print(ladder) return ladder ladder_df = calc_team_ladder(scores_df, 2015) ladder_df """ Explanation: Here is the complete code to produce the ladder. Note there is a bug in pandas 0.18.0's multidimensional slicing which we have to work around. End of explanation """
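As a quick sanity check on the scoring rules inside calc_team_ladder, the points and percentage arithmetic reduce to two tiny pure functions. This is a stdlib-only sketch of that arithmetic, not part of the original notebook:

```python
def ladder_points(wins, draws, losses):
    # AFL ladder points: 4 per win, 2 per draw, 0 per loss.
    return 4 * wins + 2 * draws

def percentage(points_for, points_against):
    # The tie-break "percentage": 100 * points scored / points conceded.
    return 100.0 * points_for / points_against
```

A team with 16 wins, 1 draw and 5 losses therefore sits on 66 premiership points, with percentage used to break ties.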
juloliveira/ipython
Apache Spark/Hello World Apache Spark.ipynb
gpl-2.0
from pyspark import SparkContext
from pyspark.sql import Row
from pyspark.mllib.clustering import KMeans, KMeansModel
from sklearn import datasets
from numpy import array, sqrt
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Getting started with PySpark
This document gives a basic walkthrough of Apache Spark, Hadoop and PySpark. It also demonstrates a clustering algorithm (KMeans) and how to draw scatter plots with the Matplotlib module.
Data
Our data source is the CSV from Kaggle's Titanic Challenge. We made one small modification, changing the separator from a comma (,) to a semicolon (;), because split(',') was breaking on passenger names, which contain commas ("Surname, First Name").
Environment
The scripts were run on a laptop running Ubuntu 14.04. Hadoop: basic Single Node installation as described on the official site. Spark: basic installation as described on the official site. Python: version 2.7.6. IPython: version 3.2.1 with a custom profile to integrate with PySpark.
Let the fun begin
Loading modules
End of explanation
"""
sc = SparkContext('local', 'master')
train = sc.textFile('hdfs://localhost/train.csv').map(lambda x: x.split(';'))
"""
Explanation: In the lines below we initialize the SparkContext and then load the file from Hadoop HDFS, splitting the lines inside the "map" call as we go.
End of explanation
"""
header = train.first()
train_filter = train.filter(lambda item: item != header)
"""
Explanation: Our file comes with the header as its first row, so we apply a "filter" to remove that line. This is not the most elegant approach and could be improved (suggestions welcome!).
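To see exactly what that filter does, here is the same logic on a plain Python list, with no Spark needed (the two-column sample below is a toy, not the real file). Note the caveat: because every row is compared against the first one, a data row identical to the header would also be dropped.

```python
# Plain-Python sketch of the header-dropping logic used above.
rows = [["PassengerId", "Survived"], ["1", "0"], ["2", "1"]]
header = rows[0]
body = [r for r in rows if r != header]
```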
End of explanation
"""
train_norm = train_filter.map(lambda item: Row(
    PassengerId=item[0],Survived=item[1],Pclass=item[2],Name=item[3],Sex=item[4],
    Age=item[5],SibSp=item[6],Parch=item[7],Ticket=item[8],Fare=item[9],Cabin=item[10],Embarked=item[11]))
"""
Explanation: Next, we build a new dataset to make the attributes of our list easier to access. This way, instead of reaching for, say, item[5], we can simply write item.Age.
End of explanation
"""
_with_age = train_norm.filter(lambda item: item.Age != '')
sum_age = _with_age.map(lambda item: (float(item.Age), 1)).fold((0, 0), (lambda x,y: (x[0]+y[0],x[1]+y[1])))
avg_age = int(sum_age[0] / sum_age[1])
print sum_age
print "Average passenger age: %s" % avg_age
"""
Explanation: We will work with the "Age" field, and in our data some passengers are missing it. To deal with this, we compute the average age over the passengers that do have "Age" and use it to fill in the records that don't. To do that, we apply another "filter", then a "map" returning just the age, and then a fold (a flavour of reduce) summing the values.
End of explanation
"""
_no_age = train_norm.filter(lambda item: item.Age == '')
_no_age_fix = _no_age.map(lambda item: Row(PassengerId=item[0],Survived=item[1],Pclass=item[2],
                Name=item[3],Sex=item[4],SibSp=item[6],Parch=item[7],
                Ticket=item[8],Fare=item[9],Cabin=item[10],Embarked=item[11],
                Age=avg_age))
train_clean = _with_age.union(_no_age_fix)
"""
Explanation: Now that we have the average age of the passengers with a known age, we build a new dataset with the corrected values and then union everything back together into one dataset with all the records.
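The fold((0, 0), ...) call above is just a distributed running (sum, count) pair. Here is the same computation in plain Python on a toy sample (the ages below are made up for illustration):

```python
# Running (sum, count) over the non-missing ages, as in the fold above.
ages = ["22", "", "38", "", "26"]   # toy sample with missing values
acc = (0.0, 0)
for a in ages:
    if a != "":
        acc = (acc[0] + float(a), acc[1] + 1)
avg_age = int(acc[0] / acc[1])
```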
End of explanation
"""
y_ages = train_clean.map(lambda item: float(item.Age))
x_fares = train_clean.map(lambda item: float(item.Fare))
plt.scatter(x_fares.collect(), y_ages.collect())
plt.grid()
"""
Explanation: Purely for teaching purposes, we will plot Age x Fare, find the clustering with the lowest error, and then plot Age x Fare again, this time colouring each point according to its cluster. So we start with a simple Age x Fare scatter plot.
End of explanation
"""
train_cluster_data = train_clean.map(lambda item: array([float(item.Age), float(item.Fare)]))
train_cluster_data.take(3)
"""
Explanation: Now we create the dataset we will use to build our cluster model.
End of explanation
"""
clustering_errors=[]
for i in range(2, 16):
    clusters = KMeans.train(train_cluster_data, i, maxIterations=10, runs=10, initializationMode="random")
    def error(point):
        center = clusters.centers[clusters.predict(point)]
        return sqrt(sum([x**2 for x in (point - center)]))
    WSSSE = train_cluster_data.map(lambda item: error(item)).reduce(lambda x, y: x+y)
    clustering_errors.append(array([i, WSSSE]))
plt.plot(clustering_errors)
plt.grid()
"""
Explanation: Next, we run the KMeans algorithm to find the K with the lowest error. To do this, we iterate over values of K from 2 to 15 and plot a chart of the errors.
End of explanation
"""
clusters_errors_set = pd.DataFrame(clustering_errors)
clusters_errors_set
"""
Explanation: And here is the same data as a table:
End of explanation
"""
cluster = KMeans.train(train_cluster_data, 15, maxIterations=10, runs=10, initializationMode="random")
"""
Explanation: We found that the K with the lowest error is a cluster with 15 groups, so we build our final model with 15 groups.
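The per-point error() used in the loop above is the Euclidean distance to the point's assigned (nearest) centre, summed over the dataset. A small stdlib version of that metric follows; it mirrors the logic, not the PySpark API, and the points and centres in the tests are made up:

```python
from math import sqrt

def total_error(points, centers):
    # Sum over all points of the Euclidean distance to the nearest
    # centre -- the same quantity accumulated as WSSSE above.
    total = 0.0
    for p in points:
        total += min(sqrt(sum((x - c) ** 2 for x, c in zip(p, ctr)))
                     for ctr in centers)
    return total
```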
End of explanation
"""
x = train_clean.map(lambda item: float(item.Fare)).collect()
y = train_clean.map(lambda item: float(item.Age)).collect()
c = train_clean.map(lambda item: cluster.predict([float(item.Age), float(item.Fare)])).collect()
plt.figure(figsize=(15,10))
plt.scatter(x, y, c=c)
plt.grid()
"""
Explanation: Now we just tidy everything up, plot the data, and take a bow!
End of explanation
"""
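The colour assigned to each point above comes from cluster.predict, which conceptually just picks the index of the nearest centre. Here is that idea as a stdlib sketch (the centres in the tests are made up; this is not the PySpark implementation):

```python
def predict_cluster(point, centers):
    # Index of the nearest centre, by squared Euclidean distance --
    # what KMeansModel.predict returns for each (age, fare) pair.
    best, best_d2 = 0, float("inf")
    for i, (cx, cy) in enumerate(centers):
        d2 = (point[0] - cx) ** 2 + (point[1] - cy) ** 2
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best
```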
Caranarq/01_Dmine
Datasets/CFE/.ipynb_checkpoints/Usuarios Electricos (P0609)-checkpoint.ipynb
gpl-3.0
descripciones = {
    'P0609': 'Usuarios Electricos'
}
# Libraries used
import pandas as pd
import sys
import urllib.request
import os
import csv
import zipfile
# System configuration
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
"""
Explanation: Electricity Users
Parameters obtained from this source
ID |Description
---|:----------
P0609|Electricity users
End of explanation
"""
url = r'http://datos.cfe.gob.mx/Datos/Usuariosyconsumodeelectricidadpormunicipio.csv'
archivo_local = r'D:\PCCS\00_RawData\01_CSV\CFE\UsuariosElec.csv'
if os.path.isfile(archivo_local):
    print('File already exists: {}'.format(archivo_local))
else:
    print('Downloading {} ... ... ... ... ... '.format(archivo_local))
    urllib.request.urlretrieve(url, archivo_local)
    print('Downloaded {}'.format(archivo_local))
"""
Explanation: 2. Downloading the data
End of explanation
"""
dtypes = {   # Numeric values in the CSV are stored as " 000,000 " and need cleaning
'Cve Mun':'str',
'2010':'str',
'2011':'str',
'2012':'str',
'2013':'str',
'2014':'str',
'2015':'str',
'2016':'str',
'ene-17':'str',
'feb-17':'str',
'mar-17':'str',
'abr-17':'str',
'may-17':'str',
'jun-17':'str',
'jul-17':'str',
'ago-17':'str',
'sep-17':'str',
'oct-17':'str',
'nov-17':'str',
'dic-17':'str'}
# Read the dataset
dataset = pd.read_csv(archivo_local, skiprows = 2, nrows = 82236, na_values = ' - ', dtype=dtypes)
dataset['CVE_EDO'] = dataset['Cve Inegi'].apply(lambda x: '{0:0>2}'.format(x))   # 2-digit CVE_EDO
dataset['CVE_MUN'] = dataset['CVE_EDO'].map(str) + dataset['Cve Mun']
dataset.head()
# Strip blanks and commas from columns that should be numeric
columnums = ['2010', '2011', '2012', '2013', '2014', '2015', '2016',
             'ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17',
             'jul-17', 'ago-17', 'sep-17', 'oct-17',
             'nov-17', 'dic-17']
for columna in columnums:
    dataset[columna] = dataset[columna].str.replace(' ','')
    dataset[columna] = dataset[columna].str.replace(',','')
dataset.head()
# Convert columns to numeric
columnasanios = ['2010', '2011', '2012', '2013', '2014', '2015', '2016',
                 'ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17',
                 'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17']
for columna in columnasanios:
    dataset[columna] = pd.to_numeric(dataset[columna], errors='coerce', downcast = 'integer')
dataset.head()
# Drop columns that are no longer needed
dropcols = ['Cve Edo', 'Cve Inegi', 'Cve Mun', 'Entidad Federativa', 'Municipio', 'Unnamed: 25', 'CVE_EDO']
dataset = dataset.drop(dropcols, axis = 1)
# Set CVE_MUN as the index
dataset = dataset.set_index('CVE_MUN')
dataset.head()
# Sum the monthly 2017 columns
columnas2017 = ['ene-17', 'feb-17', 'mar-17', 'abr-17', 'may-17', 'jun-17',
                'jul-17', 'ago-17', 'sep-17', 'oct-17', 'nov-17', 'dic-17']
dataset['2017'] = dataset[columnas2017].sum(axis = 1)
# Drop the monthly 2017 columns
dataset = dataset.drop(columnas2017, axis = 1)
dataset.head()
"""
Explanation: 3. Standardizing the parameter data
End of explanation
"""
len(dataset)
dataset.head(40)
dataset_total = dataset[dataset['Tarifa'] == 'TOTAL']
dataset_total.head()
len(dataset_total)
"""
Explanation: Exporting the dataset
Before exporting the dataset I will reduce its size, because it has 82,236 rows split out by tariff. I will keep only the TOTAL rows across all tariffs.
End of explanation
"""
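The CVE_MUN key built above is the 2-digit zero-padded state code concatenated with the municipality code. As a stdlib sketch of that construction: the 2-digit state padding matches the notebook; padding the municipality part to 3 digits is an assumption made here for illustration (the notebook takes 'Cve Mun' as already formatted).

```python
def make_cve_mun(cve_edo, cve_mun):
    # INEGI-style key: 2-digit state code + 3-digit municipality code.
    # The 3-digit padding of the municipality part is an assumption.
    return '{0:0>2}'.format(cve_edo) + '{0:0>3}'.format(cve_mun)
```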
turbomanage/training-data-analyst
quests/serverlessml/01_explore/labs/explore_data.ipynb
apache-2.0
from google.cloud import bigquery import seaborn as sns import matplotlib.pyplot as plt import pandas as pd import numpy as np import shutil """ Explanation: Explore and create ML datasets In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Learning Objectives Access and explore a public BigQuery dataset on NYC Taxi Cab rides Visualize your dataset using the Seaborn library Inspect and clean-up the dataset for future ML model training Create a benchmark to judge future ML model performance off of Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Let's start off with the Python imports that we need. End of explanation """ %%bigquery SELECT FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount # TODO 1: Specify the correct BigQuery public dataset for nyc-tlc yellow taxi cab trips # Tip: For projects with hyphens '-' be sure to escape with backticks `` FROM LIMIT 10 """ Explanation: <h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows. Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format. 
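The FORMAT_TIMESTAMP pattern used in the query has a direct strftime analogue, which is a handy way to preview what the formatted pickup_datetime strings will look like (a plain-Python sketch, independent of BigQuery):

```python
from datetime import datetime, timezone

# Same pattern as FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", ...).
ts = datetime(2015, 3, 14, 9, 26, 53, tzinfo=timezone.utc)
formatted = ts.strftime("%Y-%m-%d %H:%M:%S %Z")
```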
End of explanation """ %%bigquery trips SELECT FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 print(len(trips)) # We can slice Pandas dataframes as if they were arrays trips[:10] """ Explanation: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this. We will also store the BigQuery result in a Pandas dataframe named "trips" End of explanation """ # TODO 2: Visualize your dataset using the Seaborn library. Plot the distance of the trip as X and the fare amount as Y ax = sns.regplot(x="", y="", fit_reg=False, ci=None, truncate=True, data=trips) ax.figure.set_size_inches(10, 8) """ Explanation: <h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering. 
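One note on the sampling query above before we start plotting: hashing the pickup time and keeping rows whose hash falls in a fixed bucket gives a repeatable sample, unlike random sampling. The stdlib sketch below illustrates the idea; BigQuery's FARM_FINGERPRINT is a different hash function, so md5 here is only a stand-in.

```python
import hashlib

def keep_row(key, one_in=100000):
    # Repeatable sampling: hash a stable key, keep rows whose hash
    # lands in one bucket. Stand-in for FARM_FINGERPRINT, same idea.
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
    return h % one_in == 1
```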
End of explanation """ %%bigquery trips SELECT FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 # TODO 3: Filter the data to only include non-zero distance trips and fares above $2.50 AND print(len(trips)) ax = sns.regplot(x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips) ax.figure.set_size_inches(10, 8) """ Explanation: Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Note the extra WHERE clauses. End of explanation """ tollrides = trips[trips['tolls_amount'] > 0] tollrides[tollrides['pickup_datetime'] == '2012-02-27 09:19:10 UTC'] """ Explanation: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable. Let's also examine whether the toll amount is captured in the total amount. End of explanation """ trips.describe() """ Explanation: Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool. 
Let's also look at the distribution of values within the columns. End of explanation """ def showrides(df, numlines): lats = [] lons = [] for iter, row in df[:numlines].iterrows(): lons.append(row['pickup_longitude']) lons.append(row['dropoff_longitude']) lons.append(None) lats.append(row['pickup_latitude']) lats.append(row['dropoff_latitude']) lats.append(None) sns.set_style("darkgrid") plt.figure(figsize=(10,8)) plt.plot(lons, lats) showrides(trips, 10) showrides(tollrides, 10) """ Explanation: Hmm ... The min, max of longitude look strange. Finally, let's actually look at the start and end of a few of the trips. End of explanation """ def preprocess(trips_in): trips = trips_in.copy(deep=True) trips.fare_amount = trips.fare_amount + trips.tolls_amount del trips['tolls_amount'] del trips['total_amount'] del trips['trip_distance'] # we won't know this in advance! qc = np.all([\ trips['pickup_longitude'] > -78, \ trips['pickup_longitude'] < -70, \ trips['dropoff_longitude'] > -78, \ trips['dropoff_longitude'] < -70, \ trips['pickup_latitude'] > 37, \ trips['pickup_latitude'] < 45, \ trips['dropoff_latitude'] > 37, \ trips['dropoff_latitude'] < 45, \ trips['passenger_count'] > 0, ], axis=0) return trips[qc] tripsqc = preprocess(trips) tripsqc.describe() """ Explanation: As you'd expect, rides that involve a toll are longer than the typical ride. 
<h3> Quality control and other preprocessing </h3> We need to do some clean-up of the data: <ol> <li>New York city longitudes are around -74 and latitudes are around 41.</li> <li>We shouldn't have zero passengers.</li> <li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li> <li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li> <li>Discard the timestamp</li> </ol> We could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data. This sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic. End of explanation """ shuffled = tripsqc.sample(frac=1) trainsize = int(len(shuffled['fare_amount']) * 0.70) validsize = int(len(shuffled['fare_amount']) * 0.15) df_train = shuffled.iloc[:trainsize, :] df_valid = shuffled.iloc[trainsize:(trainsize+validsize), :] df_test = shuffled.iloc[(trainsize+validsize):, :] df_train.head(n=1) df_train.describe() df_valid.describe() df_test.describe() """ Explanation: The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable. Let's move on to creating the ML datasets. <h3> Create ML datasets </h3> Let's split the QCed data randomly into training, validation and test sets. Note that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale. 
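One approach that does scale is the same trick the sampling query used: hash a stable column and assign each row to a split based on its hash bucket. A small Python sketch of the idea, with hashlib standing in for BigQuery's FARM_FINGERPRINT:

```python
import hashlib

def assign_split(key):
    # Deterministic: the same key always hashes to the same bucket, so the
    # split is reproducible and can be computed one row at a time at any scale.
    bucket = int(hashlib.md5(str(key).encode()).hexdigest(), 16) % 100
    if bucket < 70:
        return 'train'
    elif bucket < 85:
        return 'valid'
    return 'test'

print(assign_split('2014-07-21 11:00:00 UTC'))
print(assign_split('2014-07-21 11:00:00 UTC'))  # same key, same split
```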
End of explanation """ def to_csv(df, filename): outdf = df.copy(deep=False) outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key # reorder columns so that target is first column cols = outdf.columns.tolist() cols.remove('fare_amount') cols.insert(0, 'fare_amount') print (cols) # new order of columns outdf = outdf[cols] outdf.to_csv(filename, header=False, index_label=False, index=False) to_csv(df_train, 'taxi-train.csv') to_csv(df_valid, 'taxi-valid.csv') to_csv(df_test, 'taxi-test.csv') !head -10 taxi-valid.csv """ Explanation: Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data. End of explanation """ !ls -l *.csv """ Explanation: <h3> Verify that datasets exist </h3> End of explanation """ %%bash head taxi-train.csv """ Explanation: We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data. End of explanation """ def distance_between(lat1, lon1, lat2, lon2): # haversine formula to compute distance "as the crow flies". Taxis can't fly of course. 
dist = np.degrees(np.arccos(np.minimum(1,np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1))))) * 60 * 1.515 * 1.609344 return dist def estimate_distance(df): return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon']) def compute_rmse(actual, predicted): return np.sqrt(np.mean((actual-predicted)**2)) def print_rmse(df, rate, name): print ("{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate*estimate_distance(df)), name)) # TODO 4: Create a benchmark to judge future ML model performance off of # Specify the five feature columns FEATURES = ['','','','',''] # Specify the one target column for prediction TARGET = '' columns = list([TARGET]) columns.append('pickup_datetime') columns.extend(FEATURES) # in CSV, target is the first column, after the features columns.append('key') df_train = pd.read_csv('taxi-train.csv', header=None, names=columns) df_valid = pd.read_csv('taxi-valid.csv', header=None, names=columns) df_test = pd.read_csv('taxi-test.csv', header=None, names=columns) rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean() print ("Rate = ${0}/km".format(rate)) print_rmse(df_train, rate, 'Train') print_rmse(df_valid, rate, 'Valid') print_rmse(df_test, rate, 'Test') """ Explanation: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them. <h3> Benchmark </h3> Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark. My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model. 
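The benchmark logic is easy to check by hand on a few made-up numbers before applying it to the real data:

```python
import numpy as np

dist = np.array([1.0, 2.0, 4.0])   # made-up trip distances
fare = np.array([5.0, 9.0, 17.0])  # made-up fares

rate = fare.mean() / dist.mean()   # one global rate, $ per unit distance
pred = rate * dist                 # benchmark prediction
rmse = np.sqrt(np.mean((fare - pred) ** 2))
print(round(rate, 4), round(rmse, 4))  # → 4.4286 0.5345
```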
End of explanation """ validation_query = """ SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ client = bigquery.Client() df_valid = client.query(validation_query).to_dataframe() print_rmse(df_valid, 2.59988, 'Final Validation Set') """ Explanation: <h2>Benchmark on same dataset</h2> The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs: End of explanation """
SJSlavin/phys202-2015-work
assignments/assignment09/IntegrationEx02.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
"""
Explanation: Integration Exercise 2
Imports
End of explanation
"""

def integrand(x, a):
    return 1.0/(x**2 + a**2)

def integral_approx(a):
    # Use the args keyword argument to feed extra arguments to your integrand
    I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
    return I

def integral_exact(a):
    return 0.5*np.pi/a

print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))

assert True # leave this cell to grade the above integral
"""
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:

Typeset the integral using LaTeX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$
I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a}
$$
End of explanation
"""

# YOUR CODE HERE
def integrand1(x, a, b, c):
    return np.exp(-(a*x**2 + b*x + c))

def integral1_approx(a, b, c):
    I, e = integrate.quad(integrand1, -np.inf, np.inf, args=(a,b,c,))
    return I

def integral1_exact(a, b, c):
    return np.sqrt(np.pi/a) * np.exp((b**2 - 4*a*c)/(4*a))

print("Numerical: ", integral1_approx(1.0, 1.0, 1.0))
print("Exact: ", integral1_exact(1.0, 1.0, 1.0))

assert True # leave this cell to grade the above integral
"""
Explanation: Integral 1
$$ I = \int_{-\infty}^\infty e^{-(ax^2 + bx + c)} dx = \sqrt{\frac{\pi}{a}} e^\frac{b^2-4ac}{4a} $$
End of explanation
"""

# YOUR CODE HERE
def integrand2(x, p):
    return (np.sin(p*x))**2

def integral2_approx(p):
    I, e = integrate.quad(integrand2, 0, 0.5*np.pi, args=(p,))
    return I

def integral2_exact(p):
    return 0.25*np.pi

print("Numerical: ", integral2_approx(1.0))
print("Exact: ", integral2_exact(1.0))
#numerical result is around 10* actual result

assert True # leave this cell to grade the above integral
"""
Explanation: Integral 2
$$ I = \int_0^\frac{\pi}{2} \sin^2 px \, dx = \frac{\pi}{4} $$
End of explanation
"""

def integrand3(x, p):
    return (1 - np.cos(p*x))/x**2

def integral3_approx(p):
    I, e = integrate.quad(integrand3, 0, np.inf, args=(p,))
    return I

def integral3_exact(p):
    return 0.5*p*np.pi

print("Numerical: ", integral3_approx(3.0))
print("Exact: ", integral3_exact(3.0))

assert True # leave this cell to grade the above integral
"""
Explanation: Integral 3
$$ I = \int_0^\infty \frac{1 - \cos px}{x^2} dx = \frac{\pi p}{2} $$
End of explanation
"""

def integrand4(x):
    return np.log(x)/(1 + x)

def integral4_approx():
    I, e = integrate.quad(integrand4, 0, 1)
    return I

def integral4_exact():
    return -(np.pi**2)/12

print("Numerical: ", integral4_approx())
print("Exact: ", integral4_exact())

assert True # leave this cell to
grade the above integral """ Explanation: Integral 4 $$ I = \int_0^1 \frac{\ln x}{1+x} dx = - \frac{\pi^2}{12} $$ End of explanation """ def integrand5(x, a): return x/np.sinh(a*x) def integral5_approx(a): I, e = integrate.quad(integrand5, 0, np.inf, args=(a,)) return I def integral5_exact(a): return (np.pi**2)/(4*a**2) print("Numerical: ", integral5_approx(2.0)) print("Exact: ", integral5_exact(2.0)) assert True # leave this cell to grade the above integral """ Explanation: Integral 5 $$ I = \int_0^\infty \frac{x}{\sinh ax} dx = \frac{\pi^2}{4a^2} $$ End of explanation """
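A closing note on the pattern used in every cell above: the second value returned by scipy.integrate.quad, assigned to e and then ignored, is quad's estimate of the absolute error, and it can serve as a quick sanity check on the result. A sketch using Integral 5:

```python
import numpy as np
from scipy import integrate

def integrand(x, a):
    return x / np.sinh(a * x)

I, err = integrate.quad(integrand, 0, np.inf, args=(2.0,))
exact = np.pi**2 / (4 * 2.0**2)
# err bounds (roughly) how far I can be from the true value
print(round(I, 6), round(exact, 6))  # → 0.61685 0.61685
```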
daneschi/berkeleytutorial
tutorial/02_runOptimizeModel/runOptimizeModel.ipynb
mit
# Import tulip
import sys
sys.path.insert(0, '../tuliplib/tulipBin/py')
import numpy as np
import matplotlib.pyplot as plt
# Import UQ Library
import tulipUQ as uq
# Import Computational Model Library
import tulipCM as cm
# Import Data Library
import tulipDA as da
# Import Action Library
import tulipAC as ac

# READ DATASET
data = da.daData_multiple_Table()
data.readFromFile('heartFailure.dat')
print (np.loadtxt('heartFailure.dat',delimiter=',',dtype=str))
"""
Explanation: Optimize 0D adult physiology model under heart failure conditions
Author: Daniele E. Schiavazzi
References:
- Tran J., Schiavazzi D., Ramachandra B.A., Kahn A. and Marsden A.L., Automated tuning for parameter identification in multi-scale coronary simulations, Computer and Fluids, 142(5):128-138, 2017. Link
- Schiavazzi D., Baretta A., Pennati G., Hsia T.Y. and Marsden A.L., Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty, International Journal of Numerical Methods in Biomedical Engineering, 33(3):e02799, 2017. Link

Date: June 28th, 2017
Objectives of this tutorial:
- Learn how to create data objects in tulip.
- Learn how to create 0D models and ODE integrators in tulip.
- Learn how to create actions in tulip to optimize models according to collected data.
First, let's create a daData_multiple_Table tulip data object. This simply means that patient data will be stored by row where each patient occupies a separate column.
End of explanation
"""

# CREATE ODE MODEL
ode = cm.odeNormalAdultSimplePA()
"""
Explanation: The columns in the above data table represent patients affected by diastolic left ventricular dysfunction. Patients with normal, mild and severe left ventricular diastolic dysfunction are represented in columns 4, 3 and 2, respectively. As expected, the systemic indicators (Systemic Blood Pressures "SBP","DBP" and Cardiac Output "CO") are not significantly affected by this condition.
Let's start by creating an ODE circuit model. tulip provides a library of existing models; the odeNormalAdultSimplePA is an adult model with normal physiology and is characterized by a simple RC pulmonary compartment.
<img src="Circuit_Normal_Simple.png" width="50%" height="50%">
End of explanation
"""

# CREATE ODE INTEGRATOR
timeStep = 0.01
totalCycles = 10
rk4 = cm.odeIntegratorRK4(ode,timeStep,totalCycles)
"""
Explanation: We now create a Runge-Kutta 4 ODE integrator and initialize it by providing the ODE model as well as the integration time step and total number of heart cycles. Note that the ODE model knows about its own heart rate (one of the model parameters) and therefore the total number of time steps is automatically selected to consistently generate the same number of heart cycles.
End of explanation
"""

# Create new LPN model
lpnModel = cm.cmLPNModel(rk4)
"""
Explanation: An LPN (lumped parameter network) model is initialized by providing the selected ODE integrator.
End of explanation
"""

# ASSIGN DATA OBJECT TO MODEL
lpnModel.setData(data,0)
"""
Explanation: The cmLPNModel contains a data object that knows how to compute cost functions and likelihood. This object needs to be assigned before we can proceed. The second argument represents the specific column of the patient data file that we would like to use for optimization.
End of explanation
"""

# SET OPTIMIZER PARAMETERS
# Total Number of iterations
totIterations = 3
# Convergence Tolerance
convTol = 1.0e-6
# Check Convergence every convUpdateIt iterations
convUpdateIt = 1
# Maximum Iterations
maxOptIt = 200
# Coefficient for Step increments
stepCoefficient = 0.1

# INIT ACTION
nm = ac.acActionOPT_NM(convTol,convUpdateIt,maxOptIt,stepCoefficient)
"""
Explanation: Once we are happy with the model and data, we need to decide what to do with them. One possibility is to do optimization.
We first define some parameters and then initialize the acActionOPT_NM to perform optimization using the Nelder-Mead simplex algorithm. The following parameters are used:
- totIterations is the total number of restarts for the Nelder-Mead optimizer. While the Nelder-Mead algorithm is often used for its robustness, restarts are sometimes necessary to recover from situations where the simplex degenerates. In practice we notice that restarts help in our case, leading to progressively smaller and smaller log-likelihood.
- convTol is the convergence tolerance.
- convUpdateIt is the number of iterations between convergence checks.
- maxOptIt is the maximum number of function evaluations.
- stepCoefficient determines the size and shape of the initial simplex. The relative magnitudes of its elements should reflect the units of the variables.
This algorithm has been modified from the implementation at this link.
References:
- Nelder J., Mead R., A simplex method for function minimization, Computer Journal, Volume 7, 1965, pages 308-313.
- O'Neill R., Algorithm AS 47: Function Minimization Using a Simplex Procedure, Applied Statistics, Volume 20, Number 3, 1971, pages 338-345.
End of explanation
"""

# ASSIGN MODEL TO ACTION
nm.setModel(lpnModel)
"""
Explanation: At this point, we assign the LPN model to the acAction object.
End of explanation
"""

# SET INITIAL GUESS - DEFAULT PARAMETER VALUES
useStartingParameterFromFile = False
startFromCentre = False
startParameterFile = ''
nm.setInitialParamGuess(useStartingParameterFromFile,startFromCentre,startParameterFile)
"""
Explanation: With this assignment we have completely defined the object hierarchy in tulip. Specifically, the model has a data member that, based on the model result, evaluates the likelihood or cost function, and the optimizer knows about the model to run. This hierarchy is summarized in the picture below:
<img src="tulip.png" width="50%" height="50%">
We set the initial guess to the default parameter set.
This is a hardcoded set of parameters defined within each odeModel.
End of explanation
"""

# RUN RESTARTED NELDER-MEAD
for loopA in range(totIterations):
  # PERFORM ACTION
  nm.go()
  # SET RESTART CONDITION
  if(loopA == 0):
    nm.setInitialPointFromFile(True)
    nm.setInitialPointFile('optParams.txt')
  print ('--- Iteration Completed!!')
"""
Explanation: We run totIterations restarts using this loop. Note that the optimization is started by calling nm.go() and the iterations after the first will read the initial parameter guess from the file optParams.txt that contains the optimal set of parameters from the previous optimization.
End of explanation
"""

print (np.loadtxt('outputTargets.out',delimiter=',',dtype=str))
"""
Explanation: Check the agreement between the patient data and the outputs of the optimized model. Let's open the outputTargets.out file that contains this information.
End of explanation
"""

res = np.loadtxt('allData.dat')
ini = 500

# PLOT CONTENT
plt.figure(figsize=(20,5))
# Pressure Curves
plt.subplot(1,1,1)
plt.plot(res[ini:,11],res[ini:,19]/1333.3,'r-',lw=5,label='Left Ventricle') # Left Ventricular Pressure
plt.plot(res[ini:,11],res[ini:,17]/1333.3,'g-',lw=5,label='Left Atrium') # Left Atrial Pressure
plt.plot(res[ini:,11],res[ini:,18]/1333.3,'r--',lw=5,label='Right Ventricle') # Right Ventricular Pressure
plt.plot(res[ini:,11],res[ini:,16]/1333.3,'g--',lw=5,label='Right Atrium') # Right Atrial Pressure
plt.plot(res[ini:,11],res[ini:,8]/1333.3,'b-',lw=5,label='Aortic') # Aortic Pressure
plt.plot(res[ini:,11],res[ini:,5]/1333.3,'k-',lw=5,label='Pulmonary') # Pulmonary Pressure
plt.legend(fontsize=14)
plt.xlabel('Time [s]',fontsize=14)
plt.ylabel('Pressure [mmHg]',fontsize=14)
plt.tick_params(axis='both', which='major', labelsize=12)
plt.tick_params(axis='both', which='minor', labelsize=12)
plt.tight_layout()
plt.show()

# Flow Curves
plt.figure(figsize=(20,5))
plt.subplot(1,1,1)
plt.plot(res[ini:,11],res[ini:,4],'r-',lw=5,label='Right AV')
plt.plot(res[ini:,11],res[ini:,6],'g-',lw=5,label='Pulmonary') plt.plot(res[ini:,11],res[ini:,7],'b-',lw=5,label='Left AV') plt.plot(res[ini:,11],res[ini:,9],'k-',lw=5,label='Aortic') plt.legend(fontsize=14) plt.xlabel('Time [s]',fontsize=14) plt.ylabel('Flow [cm$^3$/s]',fontsize=14) plt.tick_params(axis='both', which='major', labelsize=12) plt.tick_params(axis='both', which='minor', labelsize=12) plt.tight_layout() plt.show() # Ventricular Pressure-Volume plt.figure(figsize=(20,5)) plt.subplot(1,2,1) plt.plot(res[ini:,3],res[ini:,19]/1333.3, lw=5) # Left Ventricular PV Loop plt.xlabel('Left Ventricular Volume [cm$^3$]',fontsize=14) plt.ylabel('Left Ventricular Pressure [mmHg]',fontsize=14) plt.tick_params(axis='both', which='major', labelsize=12) plt.tick_params(axis='both', which='minor', labelsize=12) plt.subplot(1,2,2) plt.plot(res[ini:,2],res[ini:,18]/1333.3, lw=5) # Right Ventricular PV Loop plt.xlabel('Right Ventricular Volume [cm$^3$]',fontsize=14) plt.ylabel('Right Ventricular Pressure [mmHg]',fontsize=14) plt.tick_params(axis='both', which='major', labelsize=12) plt.tick_params(axis='both', which='minor', labelsize=12) plt.tight_layout() plt.show() """ Explanation: The allData.dat file has also been generated with the model outputs for all the time steps. Lets plot some pressures, flows and pressure-volume loops. End of explanation """
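For readers without access to tulip, the restarted Nelder-Mead strategy used in this tutorial can be sketched with SciPy's implementation, using a toy quadratic cost standing in for the LPN model evaluation (illustrative only, not tulip code):

```python
import numpy as np
from scipy.optimize import minimize

def cost(p):  # toy stand-in for the LPN model cost function
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

x0 = np.array([5.0, 5.0])
for restart in range(3):
    res = minimize(cost, x0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8})
    x0 = res.x  # restart the simplex from the previous optimum
print(np.round(x0, 4))  # → [ 1. -2.]
```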
juditacs/snippets
misc/function_map.ipynb
lgpl-3.0
d = {'a': 'AB', 'b': 'C'}

funcs = {}

for key, value in d.items():
    funcs[key] = lambda v: v in value

# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
"""
Explanation: Create a function map for substring comparison
~~~
d = {'a': 'AB', 'b': 'C'}
funcs = ....
funcs['a'](...) --> True
funcs['b'](...) --> False
funcs['b'](...) --> False
~~~
Solution 1 - naive solution doesn't work as expected
End of explanation
"""
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
"""
Explanation: Changing value changes the function behavior as well
End of explanation
"""
d = {'a': 'AB', 'b': 'C'}

def foo(value):
    def bar(v):
        return v in value
    return bar

funcs = {}

for key, value in d.items():
    funcs[key] = foo(value)

# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
"""
Explanation: Solution 2 - function factory using closure
End of explanation
"""
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
"""
Explanation: Changing value doesn't affect the functions
End of explanation
"""
from functools import partial

d = {'a': 'AB', 'b': 'C'}

funcs = {}

for key, value in d.items():
    funcs[key] = partial(lambda v, value: v in value, value=value)

# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
"""
Explanation: Solution 3 - partial
End of explanation
"""
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
"""
Explanation: Changing value
End of explanation
"""
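A fourth idiom worth knowing, not shown above, binds the loop variable as a default argument. Default values are evaluated once at definition time, so each lambda freezes its own copy of value:

```python
d = {'a': 'AB', 'b': 'C'}

funcs = {}

for key, value in d.items():
    # value=value captures the value at this iteration, not the final one
    funcs[key] = lambda v, value=value: v in value

value = "ABCD"  # rebinding the name no longer affects the stored functions
print(funcs['a']('AB'), funcs['a']('C'), funcs['b']('C'))  # → True False True
```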
5x5x5x5/Machine_Learning_Life_Science
scikit-learn classification pipeline.ipynb
mit
import numpy as np import pandas as pd from sklearn.linear_model import LogisticRegression from sklearn import metrics from rdkit import Chem from rdkit.Chem import Draw %matplotlib inline """ Explanation: Train-Validation-Test Set Split Train models parameter search over random space Evaluation of fit Setup Computational Environment End of explanation """ m = Chem.MolFromSmiles('Cc1ccccc1') Chem.Kekulize(m) Chem.MolToSmiles(m,kekuleSmiles=True) fig = Draw.MolToMPL(m) m2 = Chem.MolFromSmiles('C1=C2C(=CC(=C1Cl)Cl)OC3=CC(=C(C=C3O2)Cl)Cl') fig2 = Draw.MolToMPL(m2) m3 = Chem.MolFromSmiles('O=C1OC2=C(C=C1)C1=C(C=CCO1)C=C2') fig3 = Draw.MolToMPL(m3) """ Explanation: Look at Chemicals End of explanation """ smiles = ("O=C(NCc1cc(OC)c(O)cc1)CCCC/C=C/C(C)C", "CC(C)CCCCCC(=O)NCC1=CC(=C(C=C1)O)OC", "c1(C(=O)O)cc(OC)c(O)cc1") mols = [Chem.MolFromSmiles(x) for x in smiles] Draw.MolsToGridImage(mols) suppl = Chem.SDMolSupplier('data/cdk2.sdf') d_train = pd.read_csv("train-0.1m.csv") d_test = pd.read_csv("test.csv") d_train_test = d_train.append(d_test) vars_categ = ["Month","DayofMonth","DayOfWeek","UniqueCarrier", "Origin", "Dest"] vars_num = ["DepTime","Distance"] def get_dummies(d, col): dd = pd.get_dummies(d.ix[:, col]) dd.columns = [col + "_%s" % c for c in dd.columns] return(dd) %time X_train_test_categ = pd.concat([get_dummies(d_train_test, col) for col in vars_categ], axis = 1) X_train_test = pd.concat([X_train_test_categ, d_train_test.ix[:,vars_num]], axis = 1) y_train_test = np.where(d_train_test["dep_delayed_15min"]=="Y", 1, 0) X_train = X_train_test[0:d_train.shape[0]] y_train = y_train_test[0:d_train.shape[0]] X_test = X_train_test[d_train.shape[0]:] y_test = y_train_test[d_train.shape[0]:] md = LogisticRegression(tol=0.00001, C=1000) %time md.fit(X_train, y_train) phat = md.predict_proba(X_test)[:,1] metrics.roc_auc_score(y_test, phat) """ Explanation: Look at a grid of chemicals End of explanation """
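One detail in the pipeline above deserves a comment: train and test are concatenated before get_dummies is applied, so that both splits end up with identical dummy columns. A minimal sketch of the failure mode this avoids (using pd.concat, the modern replacement for append):

```python
import pandas as pd

train = pd.DataFrame({'carrier': ['AA', 'UA']})
test = pd.DataFrame({'carrier': ['DL']})

# Encoded separately, the two splits disagree on columns...
sep_train = set(pd.get_dummies(train['carrier']).columns)
sep_test = set(pd.get_dummies(test['carrier']).columns)
print(sep_train == sep_test)  # → False

# ...encoded together, every category gets a column in both splits.
both = pd.get_dummies(pd.concat([train, test])['carrier'])
print(sorted(both.columns))  # → ['AA', 'DL', 'UA']
```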
nagordon/mechpy
tutorials/sundries.ipynb
mit
import numpy as np y = '''62606.53409 59989.34659 62848.01136 80912.28693 79218.03977 81242.1875 59387.27273 73027.5974 69470.69805 66843.99351 82758.44156 81647.72727 77519.96753''' y = [float(x) for x in np.array(y.replace('\n',',').split(','))] print(y, end=" ") """ Explanation: Mechpy Tutorials a mechanical engineering toolbox source code - https://github.com/nagordon/mechpy documentation - https://nagordon.github.io/mechpy/web/ Neal Gordon 2017-02-20 Sundries this is just various codes for doing silly things with data processing and engineering stuff Convert a string of numbers to a list or array End of explanation """ from sympy import symbols, exp, pi, cos from sympy.plotting import textplot x = symbols('x') textplot( exp(-x) * cos(2*pi*x) ,0,5, 75,20) """ Explanation: Ascii Plot End of explanation """ # This is the same example as in the scitools.aplotter doc string import numpy as np x = np.linspace(0, 5, 81) #y = np.exp(-0.5*x**2)*np.cos(np.pi*x) y=np.exp(-x)*np.cos(2*np.pi*x) from scitools.aplotter import plot plot(x, y) plot(x, y, draw_axes=False) # Plot symbols (the dot argument) at data points plot(x, y, plot_slope=False) # Drop axis labels plot(x, y, plot_labels=False) plot(x, y, dot='o', plot_slope=False) # Store plot in a string p = plot(x, y, output=str) print(p) """ Explanation: another example of an Ascii Plot End of explanation """
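A note on the conversion above: str.split() with no argument splits on any whitespace, newlines included, so the replace/np.array round trip can be skipped. An equivalent sketch:

```python
y = '''62606.53409 59989.34659
62848.01136 80912.28693'''
# split() handles spaces and newlines alike, no replace() needed
vals = [float(x) for x in y.split()]
print(vals)  # → [62606.53409, 59989.34659, 62848.01136, 80912.28693]
```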
brian-rose/ClimateModeling_courseware
Lectures/Lecture25 -- Water, water everywhere!.ipynb
mit
# Ensure compatibility with Python 2 and 3 from __future__ import print_function, division """ Explanation: ATM 623: Climate Modeling Brian E. J. Rose, University at Albany Lecture 25: Water, water everywhere! A brief look at the effects of evaporation on global climate Warning: content out of date and not maintained You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date. Here you are likely to find broken links and broken code. About these notes: This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways: The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware The latest versions can be viewed as static web pages rendered on nbviewer A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website. Also here is a legacy version from 2015. Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt import xarray as xr import climlab from climlab import constants as const def inferred_heat_transport( energy_in, lat_deg ): '''Returns the inferred heat transport (in PW) by integrating the net energy imbalance from pole to pole.''' from scipy import integrate from climlab import constants as const lat_rad = np.deg2rad( lat_deg ) return ( 1E-15 * 2 * np.math.pi * const.a**2 * integrate.cumtrapz( np.cos(lat_rad)*energy_in, x=lat_rad, initial=0. ) ) # A two-dimensional domain num_lev = 50 state = climlab.column_state(num_lev=num_lev, num_lat=60, water_depth=10.) 
lev = state.Tatm.domain.axes['lev'].points
"""
Explanation: Contents
Imagine a world with reduced efficiency of evaporation
Reduced evaporation experiment in a simple model with climlab
Reduced evaporation efficiency experiment in an aquaplanet GCM
Conclusion
<a id='section1'></a>
1. Imagine a world with reduced efficiency of evaporation
Recall from last lecture that the bulk formula for surface evaporation (latent heat flux) is
$$ \text{LE} = L ~\rho ~ C_D ~ U \left( q_s - q_a \right) $$
which we approximated in terms of temperatures for a wet surface as
$$ \text{LE} \approx L ~\rho ~ C_D ~ U \left( (1-r) ~ q_s^* + r \frac{\partial q^*}{\partial T} \left( T_s - T_a \right) \right) $$
The drag coefficient $C_D$ determines the flux for a given set of temperatures, relative humidity, and wind speed.
Now suppose that the drag coefficient is reduced by a factor of two (for evaporation only, not for sensible heat flux). i.e. all else being equal, there will be half as much evaporation.
Reasoning through the effects of this perturbation (and calculating the effects in models) will give us some insight into several different roles played by water in the climate system.
In-class exercise:
What is the effect of the reduced evaporation efficiency on surface temperature?
Form small groups. Each group should formulate a hypothesis about how and why the surface temperature will change when $C_D$ is reduced by a factor of 2.
Draw a sketch of the surface temperature anomaly as a function of latitude.
Be prepared to explain your sketch and your hypothesis.
<a id='section2'></a>
2. Reduced evaporation experiment in a simple model with climlab
We can use climlab to construct a model for the zonal-average climate. The model will be on a pressure-latitude grid.
It will include the following processes:

Seasonally varying insolation as a function of latitude
RRTMG radiation, including water vapor dependence and prescribed clouds
Fixed relative humidity
Shortwave absorption by ozone
Meridional heat transport, implemented as a horizontal down-gradient temperature diffusion at every vertical level
Sensible and Latent heat fluxes at the surface using the bulk formulas
Convective adjustment of the atmospheric lapse rate (not surface)

This model basically draws together all the process models we have developed throughout the course, and adds the surface flux parameterizations.
Note that since we are using explicit surface flux parameterizations, we will now use the convective adjustment only on the atmospheric air temperatures. Previously, our adjustment also modified the surface temperature, which implicitly took account of the turbulent heat fluxes.
End of explanation
"""

# Define two types of cloud, high and low
cldfrac = np.zeros_like(state.Tatm)
r_liq = np.zeros_like(state.Tatm)
r_ice = np.zeros_like(state.Tatm)
clwp = np.zeros_like(state.Tatm)
ciwp = np.zeros_like(state.Tatm)

# indices
high = 10  # corresponds to 210 hPa
low = 40   # corresponds to 810 hPa

# A high, thin ice layer (cirrus cloud)
r_ice[:,high] = 14.  # Cloud ice crystal effective radius (microns)
ciwp[:,high] = 10.   # in-cloud ice water path (g/m2)
cldfrac[:,high] = 0.322

# A low, thick, water cloud layer (stratus)
r_liq[:,low] = 14.   # Cloud water drop effective radius (microns)
clwp[:,low] = 100.
# in-cloud liquid water path (g/m2)
cldfrac[:,low] = 0.21

# wrap everything up in a dictionary
mycloud = {'cldfrac': cldfrac,
           'ciwp': ciwp,
           'clwp': clwp,
           'r_ice': r_ice,
           'r_liq': r_liq}

plt.plot(cldfrac[0,:], lev)
plt.gca().invert_yaxis()
plt.ylabel('Pressure (hPa)')
plt.xlabel('Cloud fraction')
plt.title('Prescribed cloud fraction in the column model')
plt.show()

# The top-level model
model = climlab.TimeDependentProcess(state=state,
                                     name='Radiative-Convective-Diffusive Model')
# Specified relative humidity distribution
h2o = climlab.radiation.ManabeWaterVapor(state=state)
# Hard convective adjustment for ATMOSPHERE ONLY (not surface)
conv = climlab.convection.ConvectiveAdjustment(state={'Tatm':model.state['Tatm']},
                                               adj_lapse_rate=6.5,
                                               **model.param)
# Seasonally varying insolation as a function of latitude and time of year
sun = climlab.radiation.DailyInsolation(domains=model.Ts.domain)
# Couple the radiation to insolation and water vapor processes
rad = climlab.radiation.RRTMG(state=state,
                              specific_humidity=h2o.q,
                              albedo=0.125,
                              insolation=sun.insolation,
                              coszen=sun.coszen,
                              **mycloud)
model.add_subprocess('Radiation', rad)
model.add_subprocess('Insolation', sun)
model.add_subprocess('WaterVapor', h2o)
model.add_subprocess('Convection', conv)
print(model)
"""
Explanation: Here we specify cloud properties. The combination of the two cloud layers defined below was found to reproduce the global, annual mean energy balance in a single-column model. We will specify the same clouds everywhere for simplicity. A more thorough investigation would incorporate some meridional variations in cloud properties.
End of explanation """ from climlab.dynamics import MeridionalDiffusion # thermal diffusivity in W/m**2/degC D = 0.04 # meridional diffusivity in m**2/s K = D / model.Tatm.domain.heat_capacity[0] * const.a**2 d = MeridionalDiffusion(state={'Tatm': model.state['Tatm']}, K=K, **model.param) model.add_subprocess('Diffusion', d) """ Explanation: Here we add a diffusive heat transport process. The climlab code is set up to handle meridional diffusion level-by-level with a constant coefficient. End of explanation """ # Add surface heat fluxes shf = climlab.surface.SensibleHeatFlux(state=model.state, Cd=0.5E-3) lhf = climlab.surface.LatentHeatFlux(state=model.state, Cd=0.5E-3) # set the water vapor input field for LHF lhf.q = h2o.q model.add_subprocess('SHF', shf) model.add_subprocess('LHF', lhf) """ Explanation: Now we will add the surface heat flux processes. We have not used these before. Note that the drag coefficient $C_D$ is passed as an input argument when we create the process. It is also stored as an attribute of the process and can be modified (see below). The bulk formulas depend on a wind speed $U$. In this model, $U$ is specified as a constant. In a model with more complete dynamics, $U$ would be interactively calculated from the equations of motion. End of explanation """ print( model) """ Explanation: The complete model, ready to use! End of explanation """ model.integrate_years(4.) # One more year to get annual-mean diagnostics model.integrate_years(1.) 
ticks = [-90, -60, -30, 0, 30, 60, 90] fig, axes = plt.subplots(2,2,figsize=(14,10)) ax = axes[0,0] ax.plot(model.lat, model.timeave['Ts']) ax.set_title('Surface temperature (reference)') ax.set_ylabel('K') ax2 = axes[0,1] field = (model.timeave['Tatm']).transpose() cax = ax2.contourf(model.lat, model.lev, field) ax2.invert_yaxis() fig.colorbar(cax, ax=ax2) ax2.set_title('Atmospheric temperature (reference)'); ax2.set_ylabel('hPa') ax3 = axes[1,0] ax3.plot(model.lat, model.timeave['LHF'], label='LHF') ax3.plot(model.lat, model.timeave['SHF'], label='SHF') ax3.set_title('Surface heat flux (reference)') ax3.set_ylabel('W/m2') ax3.legend(); ax4 = axes[1,1] Rtoa = np.squeeze(model.timeave['ASR'] - model.timeave['OLR']) ax4.plot(model.lat, inferred_heat_transport(Rtoa, model.lat)) ax4.set_title('Meridional heat transport (reference)'); ax4.set_ylabel('PW') for ax in axes.flatten(): ax.set_xlim(-90,90); ax.set_xticks(ticks) ax.set_xlabel('Latitude'); ax.grid(); """ Explanation: <div class="alert alert-warning"> Although this is a "simple" model, it has a 60 x 30 point grid and is **by far the most complex model** we have built so far in these notes. These runs will probably take 15 minutes or more to execute, depending on the speed of your computer. </div> End of explanation """ model2 = climlab.process_like(model) model2.subprocess['LHF'].Cd *= 0.5 model2.integrate_years(4.) model2.integrate_years(1.) 
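The `inferred_heat_transport` helper used in the reference plots above turns the net top-of-atmosphere energy imbalance into an implied poleward heat transport by integrating from the south pole. A minimal stand-alone sketch (a trapezoid-rule cumulative integral using Earth's radius; the actual helper in these notes may differ in numerical details):

```python
import math

def inferred_heat_transport(energy_in, lat_deg, a=6.373e6):
    """Cumulative poleward heat transport (PW) implied by net energy input (W/m2)."""
    phi = [math.radians(l) for l in lat_deg]
    integrand = [e * math.cos(p) for e, p in zip(energy_in, phi)]
    transport = [0.0]
    for i in range(1, len(phi)):
        step = 0.5 * (integrand[i] + integrand[i - 1]) * (phi[i] - phi[i - 1])
        transport.append(transport[-1] + step)
    return [2 * math.pi * a**2 * t * 1e-15 for t in transport]  # W -> PW

lats = list(range(-90, 91, 10))
net_in = [10.0] * len(lats)   # uniform 10 W/m2 imbalance, purely illustrative
HT = inferred_heat_transport(net_in, lats)
```

For a uniform positive imbalance the implied transport grows monotonically toward the north pole; for a realistic climate in global energy balance it returns to zero there.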
fig, axes = plt.subplots(2,2,figsize=(14,10)) ax = axes[0,0] ax.plot(model.lat, model2.timeave['Ts'] - model.timeave['Ts']) ax.set_title('Surface temperature anomaly') ax.set_ylabel('K') ax2 = axes[0,1] field = (model2.timeave['Tatm'] - model.timeave['Tatm']).transpose() cax = ax2.contourf(model.lat, model.lev, field) ax2.invert_yaxis() fig.colorbar(cax, ax=ax2) ax2.set_title('Atmospheric temperature anomaly'); ax2.set_ylabel('hPa') ax3 = axes[1,0] for field in ['LHF','SHF']: ax3.plot(model2.lat, model2.timeave[field] - model.timeave[field], label=field) ax3.set_title('Surface heat flux anomalies') ax3.set_ylabel('W/m2') ax3.legend(); ax4 = axes[1,1] Rtoa = np.squeeze(model.timeave['ASR'] - model.timeave['OLR']) Rtoa2 = np.squeeze(model2.timeave['ASR'] - model2.timeave['OLR']) ax4.plot(model.lat, inferred_heat_transport(Rtoa2-Rtoa, model.lat)) ax4.set_title('Meridional heat transport anomaly'); ax4.set_ylabel('PW') for ax in axes.flatten(): ax.set_xlim(-90,90); ax.set_xticks(ticks) ax.set_xlabel('Latitude'); ax.grid(); print ('The global mean surface temperature anomaly is %0.2f K.' %np.average(model2.timeave['Ts'] - model.timeave['Ts'], weights=np.cos(np.deg2rad(model.lat)), axis=0) ) """ Explanation: Reducing the evaporation efficiency Just need to clone our model, and modify $C_D$ in the latent heat flux subprocess. 
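The global mean printed above uses `np.average` with cosine-of-latitude weights, because grid boxes near the equator cover more area than boxes near the poles. The same computation in isolation, with a made-up temperature profile:

```python
import math

lats = list(range(-85, 86, 10))   # latitude band centres (degrees)
# hypothetical surface temperatures: warm tropics, cold poles
temps = [240.0 + 60.0 * math.cos(math.radians(l))**2 for l in lats]
weights = [math.cos(math.radians(l)) for l in lats]

global_mean = sum(w * t for w, t in zip(weights, temps)) / sum(weights)
simple_mean = sum(temps) / len(temps)
```

The weighted mean exceeds the simple mean here because the (warmer) low latitudes carry more weight.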
End of explanation
"""

# Load the climatologies from the CAM4 aquaplanet runs
datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/BrianRose/CESM_runs/"
endstr = "/entry.das"
ctrl = xr.open_dataset(datapath + 'aquaplanet_som/QAqu_ctrl.cam.h0.clim.nc' + endstr,
                       decode_times=False).mean(dim='time')
halfEvap = xr.open_dataset(datapath + 'aquaplanet_som/QAqu_halfEvap.cam.h0.clim.nc' + endstr,
                           decode_times=False).mean(dim='time')
lat = ctrl.lat
lon = ctrl.lon
lev = ctrl.lev
TS_anom = halfEvap.TS - ctrl.TS
Tatm_anom = halfEvap['T'] - ctrl['T']
"""
Explanation: This model predicts the following: The surface temperature warms slightly in the tropics, and cools at high latitudes The atmosphere gets colder everywhere! There is a substantial reduction in surface latent heat flux, especially in the tropics where it is dominant. There is also a substantial increase in sensible heat flux. This is consistent with the cooler air temperatures and warmer surface. Colder tropical atmosphere leads to a decrease in the poleward heat transport. This helps explain the high-latitude cooling. Notice that the heat transport responds to the atmospheric temperature gradient, which changes in the opposite direction of the surface temperature gradient. Basically, this model predicts that by inhibiting evaporation in the tropics, we force the tropical surface to warm and the tropical atmosphere to cool. This cooling signal is then communicated globally by atmospheric heat transport. The result is a small positive global surface temperature anomaly. Discussion: what is this model missing? We could list many things, but as we will see below, two key climate components that are not included in this model are changes in relative humidity cloud feedback We will compare this result to an analogous experiment in a GCM. <a id='section3'></a> 3. Reduced evaporation efficiency experiment in an aquaplanet GCM The model is the familiar CESM but in simplified "aquaplanet" setup.
The surface is completely covered by a shallow slab ocean. This model setup (with CAM4 model physics) is described in detail in this paper: Rose, B. E. J., Armour, K. C., Battisti, D. S., Feldl, N., and Koll, D. D. B. (2014). The dependence of transient climate sensitivity and radiative feedbacks on the spatial pattern of ocean heat uptake. Geophys. Res. Lett., 41, doi:10.1002/2013GL058955 Here we will compare a control simulation with a perturbation simulation in which we have once again reduced the drag coefficient by a factor of 2. End of explanation """ fig, (ax1,ax2) = plt.subplots(1,2,figsize=(14,5)) ax1.plot(lat, TS_anom.mean(dim='lon')); ax1.set_title('Surface temperature anomaly') cax2 = ax2.contourf(lat, lev, Tatm_anom.mean(dim='lon'), levels=np.arange(-7, 8., 2.), cmap='seismic') ax2.invert_yaxis(); fig.colorbar(cax2,ax=ax2); ax2.set_title('Atmospheric temperature anomaly'); for ax in (ax1, ax2): ax.set_xlim(-90,90); ax.set_xticks(ticks); ax.grid(); print ('The global mean surface temperature anomaly is %0.2f K.' 
%((TS_anom*ctrl.gw).mean(dim=('lat','lon'))/ctrl.gw.mean(dim='lat'))) """ Explanation: Temperature anomalies End of explanation """ energy_budget = {} for name, run in zip(['ctrl','halfEvap'],[ctrl,halfEvap]): budget = xr.Dataset() # TOA radiation budget['OLR'] = run.FLNT budget['OLR_clr'] = run.FLNTC budget['ASR'] = run.FSNT budget['ASR_clr'] = run.FSNTC budget['Rtoa'] = budget.ASR - budget.OLR # net downwelling radiation # surface fluxes (all positive UP) budget['LHF'] = run.LHFLX budget['SHF'] = run.SHFLX budget['LWsfc'] = run.FLNS budget['LWsfc_clr'] = run.FLNSC budget['SWsfc'] = -run.FSNS budget['SWsfc_clr'] = -run.FSNSC budget['SnowFlux'] = ((run.PRECSC+run.PRECSL) *const.rho_w*const.Lhfus) # net upward radiation from surface budget['SfcNetRad'] = budget['LWsfc'] + budget['SWsfc'] budget['SfcNetRad_clr'] = budget['LWsfc_clr'] + budget['SWsfc_clr'] # net upward surface heat flux budget['SfcNet'] = (budget['SfcNetRad'] + budget['LHF'] + budget['SHF'] + budget['SnowFlux']) # net heat flux in to atmosphere budget['Fatmin'] = budget['Rtoa'] + budget['SfcNet'] # hydrological cycle budget['Evap'] = run['QFLX'] # kg/m2/s or mm/s budget['Precip'] = (run['PRECC']+run['PRECL'][:])*const.rho_w # kg/m2/s or mm/s budget['EminusP'] = budget.Evap - budget.Precip # kg/m2/s or mm/s energy_budget[name] = budget # Here we take advantage of xarray! # We can simply subtract the two xarray.Dataset objects # to get anomalies for every term # And also take the zonal averages for all anomaly fields in one line of code anom = energy_budget['halfEvap'] - energy_budget['ctrl'] zonanom = anom.mean(dim='lon') """ Explanation: In this model, reducing the evaporation efficiency leads to a much warmer climate. The largest warming occurs in mid-latitudes. The warming is not limited to the surface but in fact extends deeply through the troposphere. Both the spatial pattern and the magnitude of the warming are completely different than what our much simpler model predicted. Why? 
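The sign conventions in the budget code above (TOA radiation positive down, surface fluxes positive up) make the net heat input to the atmosphere simply `Fatmin = Rtoa + SfcNet`. A toy consistency check with illustrative numbers (not model output; the snowfall term is omitted only to keep the arithmetic minimal):

```python
# Toy check of the atmospheric budget bookkeeping (illustrative values, W/m2)
ASR, OLR = 240.0, 238.0        # absorbed shortwave and outgoing longwave at TOA
Rtoa = ASR - OLR               # net downwelling radiation at TOA

LWsfc, SWsfc = 60.0, -170.0    # surface radiation, positive up
LHF, SHF = 90.0, 20.0          # turbulent fluxes, positive up
SfcNet = LWsfc + SWsfc + LHF + SHF   # net upward surface heat flux

Fatmin = Rtoa + SfcNet         # net heat input to the atmospheric column
```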
Compute all the terms in the TOA and surface energy and water budget anomalies End of explanation """ fig, (ax1,ax2) = plt.subplots(1,2,figsize=(14,5)) ax1.plot(lat, zonanom.ASR, color='b', label='ASR') ax1.plot(lat, zonanom.OLR, color='r', label='OLR') ax1.plot(lat, zonanom.ASR_clr, color='b', linestyle='--') ax1.plot(lat, zonanom.OLR_clr, color='r', linestyle='--') ax1.set_title('TOA radiation anomalies') ax2.plot(lat, zonanom.SWsfc, color='b', label='SW') ax2.plot(lat, zonanom.SWsfc_clr, color='b', linestyle='--') ax2.plot(lat, zonanom.LWsfc, color='g', label='LW') ax2.plot(lat, zonanom.LWsfc_clr, color='g', linestyle='--') ax2.plot(lat, zonanom.LHF, color='r', label='LHF') ax2.plot(lat, zonanom.SHF, color='c', label='SHF') ax2.plot(lat, zonanom.SfcNet, color='m', label='Net') ax2.set_title('Surface energy budget anomalies') for ax in [ax1, ax2]: ax.set_ylabel('W/m2'); ax.set_xlabel('Latitude') ax.set_xlim(-90,90); ax.set_xticks(ticks); ax.legend(); ax.grid(); """ Explanation: Energy budget anomalies at TOA and surface End of explanation """ fig, (ax1,ax2) = plt.subplots(1,2,figsize=(12,5)) RH = (halfEvap.RELHUM - ctrl.RELHUM).mean(dim='lon'); CLOUD = (halfEvap.CLOUD - ctrl.CLOUD).mean(dim='lon') contours = np.arange(-15, 16., 2.) cax1 = ax1.contourf(lat, lev, RH, levels=contours, cmap='seismic'); fig.colorbar(cax1, ax=ax1); ax1.set_title('Relative Humidity (%)') cax2 = ax2.contourf(lat, lev, 100*CLOUD, levels=contours, cmap='seismic'); ax2.set_title('Cloud fraction (%)') for ax in [ax1, ax2]: ax.invert_yaxis(); ax.set_xlim(-90,90); ax.set_xticks(ticks); """ Explanation: Dashed lines are clear-sky radiation anomalies. Looking at the TOA budget: Reducing evaporation efficiency leads to very large increase in ASR, especially in mid-latitudes This increase is almost entirely due to clouds! Accompanied by a (mostly) clear-sky OLR increase, consistent with the warmer temperatures. This is very suggestive of an important role for low-level cloud changes. [Why?] 
From the surface budget: Notice that the decrease in evaporation is much weaker than we found in the simple model. Here, the decreased evaporation efficiency is competing against the warmer temperatures which tend to strongly increase evaporation, all else being equal. The surface (ocean) gains a lot of excess heat by solar radiation. As noted from the TOA budget, this is due to changes in cloudiness. The clear-sky SW anomaly is actually positive, consistent with a warmer, moister atmosphere (but this effect is small). The LW anomaly is positive, indicating increased radiative cooling of the surface. This is also largely a cloud effect, and consistent with a decrease in low-level cloudiness. [Why?] As in the simple model, there is an increase in the sensible heat flux (though weaker). According to bulk formula, should be driven by one or both of increased wind speed increased air-sea temperature difference Vertical structure of relative humidity and cloud changes End of explanation """ HT = {} HT['total'] = inferred_heat_transport(anom.Rtoa.mean(dim='lon'), lat) HT['atm'] = inferred_heat_transport(anom.Fatmin.mean(dim='lon'), lat) HT['latent'] = inferred_heat_transport(anom.EminusP.mean(dim='lon') * const.Lhvap, lat) HT['dse'] = HT['atm'] - HT['latent'] fig, ax = plt.subplots() ax.plot(lat, HT['total'], 'k-', label='total', linewidth=2) ax.plot(lat, HT['dse'], 'b', label='dry') ax.plot(lat, HT['latent'], 'r', label='latent') ax.set_xlim(-90,90); ax.set_xticks(ticks); ax.grid() ax.legend(loc='upper left'); ax.set_ylabel('PW'); ax.set_xlabel('Latitude') """ Explanation: Meridional heat transport anomalies End of explanation """ %load_ext version_information %version_information numpy, scipy, matplotlib, xarray, climlab """ Explanation: <a id='section4'></a> 4. Conclusion We have forced a climate change NOT by adding any kind of radiative forcing, but just by changing the efficiency of evaporation at the sea surface. 
The climate system then finds a new equilibrium in which the radiative fluxes, surface temperature, air-sea temperature difference, boundary layer relative humidity, and wind speeds all change simultaneously. Reasoning our way through such a problem from first principles is practically impossible. This is particularly true because in this example, the dominant driver of the climate change is an increase in SW absorption due to a substantial decrease in low-level clouds across the subtropics and mid-latitudes. A comprehensive theory to explain these cloud changes does not yet exist. Understanding changes in low-level cloudiness under climate change is enormously important -- because these clouds, which have an unambiguous cooling effect, are a key determinant of climate sensitivity. There is lots of work left to do. Water is intimately involved in just about every aspect of the planetary energy budget. Here we have highlighted the role of water in: Cooling of the surface by evaporation Water vapor greenhouse effect Poleward latent heat transport Cloud formation <div class="alert alert-success"> [Back to ATM 623 notebook home](../index.ipynb) </div> Version information
End of explanation
"""
flightcom/freqtrade
freqtrade/templates/strategy_analysis_example.ipynb
gpl-3.0
from pathlib import Path from freqtrade.configuration import Configuration # Customize these according to your needs. # Initialize empty configuration object config = Configuration.from_files([]) # Optionally, use existing configuration file # config = Configuration.from_files(["config.json"]) # Define some constants config["timeframe"] = "5m" # Name of the strategy class config["strategy"] = "SampleStrategy" # Location of the data data_location = Path(config['user_data_dir'], 'data', 'binance') # Pair to analyze - Only use one pair here pair = "BTC/USDT" # Load data using values set above from freqtrade.data.history import load_pair_history candles = load_pair_history(datadir=data_location, timeframe=config["timeframe"], pair=pair, data_format = "hdf5", ) # Confirm success print("Loaded " + str(len(candles)) + f" rows of data for {pair} from {data_location}") candles.head() """ Explanation: Strategy analysis example Debugging a strategy can be time-consuming. Freqtrade offers helper functions to visualize raw data. The following assumes you work with SampleStrategy, data for 5m timeframe from Binance and have downloaded them into the data directory in the default location. Setup End of explanation """ # Load strategy using values set above from freqtrade.resolvers import StrategyResolver from freqtrade.data.dataprovider import DataProvider strategy = StrategyResolver.load_strategy(config) strategy.dp = DataProvider(config, None, None) # Generate buy/sell signals using strategy df = strategy.analyze_ticker(candles, {'pair': pair}) df.tail() """ Explanation: Load and run strategy Rerun each time the strategy file is changed End of explanation """ # Report results print(f"Generated {df['buy'].sum()} buy signals") data = df.set_index('date', drop=False) data.tail() """ Explanation: Display the trade details Note that using data.head() would also work, however most indicators have some "startup" data at the top of the dataframe. 
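Those "startup" rows come from indicators with lookback windows: a 20-period moving average, for example, has no value for the first 19 candles. A plain-Python sketch of counting the unusable leading rows (a real dataframe column would hold NaN rather than None):

```python
# Hypothetical indicator column: None marks the indicator's warm-up period
sma20 = [None, None, None, 1.02, 1.03, None, 1.05, 1.06]

startup_rows = 0
for value in sma20:
    if value is None:
        startup_rows += 1
    else:
        break  # stop at the first computed value; later gaps are a different problem
```

With pandas, `Series.first_valid_index()` gives the same information directly.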
Some possible problems Columns with NaN values at the end of the dataframe Columns used in crossed*() functions with completely different units Comparison with full backtest having 200 buy signals as output for one pair from analyze_ticker() does not necessarily mean that 200 trades will be made during backtesting. Assuming you use only one condition such as df['rsi'] < 30 as buy condition, this will generate multiple "buy" signals for each pair in sequence (until rsi returns > 29). The bot will only buy on the first of these signals (and also only if a trade-slot ("max_open_trades") is still available), or on one of the middle signals, as soon as a "slot" becomes available.
End of explanation
"""

from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats

# if backtest_dir points to a directory, it'll automatically load the last backtest file.
backtest_dir = config["user_data_dir"] / "backtest_results"
# backtest_dir can also point to a specific file
# backtest_dir = config["user_data_dir"] / "backtest_results/backtest-result-2020-07-01_20-04-22.json"

# You can get the full backtest statistics by using the following command.
# This contains all information used to generate the backtest result.
stats = load_backtest_stats(backtest_dir)

strategy = 'SampleStrategy'
# All statistics are available per strategy, so if `--strategy-list` was used during backtest, this will be reflected here as well.
# Example usages: print(stats['strategy'][strategy]['results_per_pair']) # Get pairlist used for this backtest print(stats['strategy'][strategy]['pairlist']) # Get market change (average change of all pairs from start to end of the backtest period) print(stats['strategy'][strategy]['market_change']) # Maximum drawdown () print(stats['strategy'][strategy]['max_drawdown']) # Maximum drawdown start and end print(stats['strategy'][strategy]['drawdown_start']) print(stats['strategy'][strategy]['drawdown_end']) # Get strategy comparison (only relevant if multiple strategies were compared) print(stats['strategy_comparison']) # Load backtested trades as dataframe trades = load_backtest_data(backtest_dir) # Show value-counts per pair trades.groupby("pair")["sell_reason"].value_counts() """ Explanation: Load existing objects into a Jupyter notebook The following cells assume that you have already generated data using the cli. They will allow you to drill deeper into your results, and perform analysis which otherwise would make the output very difficult to digest due to information overload. 
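As the earlier note on consecutive signals explains, a run of adjacent buy signals yields at most one entry, so the raw signal count overstates tradable opportunities. Counting signal runs (0-to-1 transitions) gives a tighter upper bound; a sketch over a hypothetical signal column:

```python
buy = [0, 1, 1, 1, 0, 0, 1, 1, 0, 1]   # hypothetical buy-signal column

signal_count = sum(buy)                 # raw number of buy signals
# a run starts wherever the signal flips from 0 to 1
runs = sum(1 for prev, cur in zip([0] + buy, buy) if cur == 1 and prev == 0)
```

Here 6 raw signals collapse into 3 distinct entry opportunities.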
Load backtest results to pandas dataframe Analyze a trades dataframe (also used below for plotting) End of explanation """ # Plotting equity line (starting with 0 on day 1 and adding daily profit for each backtested day) from freqtrade.configuration import Configuration from freqtrade.data.btanalysis import load_backtest_data, load_backtest_stats import plotly.express as px import pandas as pd # strategy = 'SampleStrategy' # config = Configuration.from_files(["user_data/config.json"]) # backtest_dir = config["user_data_dir"] / "backtest_results" stats = load_backtest_stats(backtest_dir) strategy_stats = stats['strategy'][strategy] dates = [] profits = [] for date_profit in strategy_stats['daily_profit']: dates.append(date_profit[0]) profits.append(date_profit[1]) equity = 0 equity_daily = [] for daily_profit in profits: equity_daily.append(equity) equity += float(daily_profit) df = pd.DataFrame({'dates': dates,'equity_daily': equity_daily}) fig = px.line(df, x="dates", y="equity_daily") fig.show() """ Explanation: Plotting daily profit / equity line End of explanation """ from freqtrade.data.btanalysis import load_trades_from_db # Fetch trades from database trades = load_trades_from_db("sqlite:///tradesv3.sqlite") # Display results trades.groupby("pair")["sell_reason"].value_counts() """ Explanation: Load live trading results into a pandas dataframe In case you did already some trading and want to analyze your performance End of explanation """ from freqtrade.data.btanalysis import analyze_trade_parallelism # Analyze the above parallel_trades = analyze_trade_parallelism(trades, '5m') parallel_trades.plot() """ Explanation: Analyze the loaded trades for trade parallelism This can be useful to find the best max_open_trades parameter, when used with backtesting in conjunction with --disable-max-market-positions. analyze_trade_parallelism() returns a timeseries dataframe with an "open_trades" column, specifying the number of open trades for each candle. 
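The open-trades count that `analyze_trade_parallelism` produces can be reproduced in miniature: a trade contributes to every candle between its open and its close. A sketch with hypothetical integer candle indices in place of timestamps:

```python
# (open, close) candle indices of hypothetical trades
trades = [(0, 4), (2, 6), (3, 5), (8, 9)]

candles = range(10)
# number of trades open at each candle
open_trades = [sum(1 for o, c in trades if o <= t < c) for t in candles]
max_parallel = max(open_trades)
```

Here at most 3 trades are open at once, so a backtest of these trades with `max_open_trades` below 3 could not have entered all of them.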
End of explanation
"""

from freqtrade.plot.plotting import generate_candlestick_graph
# Limit graph period to keep plotly quick and reactive

# Filter trades to one pair
trades_red = trades.loc[trades['pair'] == pair]

data_red = data['2019-06-01':'2019-06-10']
# Generate candlestick graph
graph = generate_candlestick_graph(pair=pair,
                                   data=data_red,
                                   trades=trades_red,
                                   indicators1=['sma20', 'ema50', 'ema55'],
                                   indicators2=['rsi', 'macd', 'macdsignal', 'macdhist']
                                   )
# Show graph inline
# graph.show()

# Render graph in a separate window
graph.show(renderer="browser")
"""
Explanation: Plot results Freqtrade offers interactive plotting capabilities based on plotly.
End of explanation
"""

import plotly.figure_factory as ff

hist_data = [trades.profit_ratio]
group_labels = ['profit_ratio']  # name of the dataset

fig = ff.create_distplot(hist_data, group_labels, bin_size=0.01)
fig.show()
"""
Explanation: Plot average profit per trade as distribution graph
End of explanation
"""
IanHawke/maths-with-python
04-basic-plotting.ipynb
mit
from matplotlib import pyplot %matplotlib inline from matplotlib import rcParams rcParams['figure.figsize']=(12,9) from math import sin, pi x = [] y = [] for i in range(201): x_point = 0.01*i x.append(x_point) y.append(sin(pi*x_point)**2) pyplot.plot(x, y) pyplot.show() """ Explanation: Plotting There are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface. End of explanation """ from math import sin, pi x = [] y = [] for i in range(201): x_point = 0.01*i x.append(x_point) y.append(sin(pi*x_point)**2) pyplot.plot(x, y, marker='+', markersize=8, linestyle=':', linewidth=3, color='b', label=r'$\sin^2(\pi x)$') pyplot.legend(loc='lower right') pyplot.xlabel(r'$x$') pyplot.ylabel(r'$y$') pyplot.title('A basic plot') pyplot.show() """ Explanation: We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(&lt;filename&gt;)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot. If using the notebook you can include the command %matplotlib inline or %matplotlib notebook before plotting to make the plots appear automatically inside the notebook. If code is included in a program which is run inside spyder through an IPython console, the figures may appear in the console automatically. Either way, it is good practice to always include the show command to explicitly display the plot. This plotting interface is straightforward, but the results are not particularly nice. 
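The loop above that fills `x` and `y` can be written more compactly with list comprehensions; the result is identical:

```python
from math import sin, pi

# equivalent to the explicit for-loop above
x = [0.01 * i for i in range(201)]
y = [sin(pi * xi)**2 for xi in x]
```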
The following commands illustrate some of the ways of improving the plot: End of explanation """ from math import sin, pi, exp, log x = [] y1 = [] y2 = [] for i in range(201): x_point = 1.0 + 0.01*i x.append(x_point) y1.append(exp(sin(pi*x_point))) y2.append(log(pi+x_point*sin(x_point))) pyplot.loglog(x, y1, linestyle='--', linewidth=4, color='k', label=r'$y_1=e^{\sin(\pi x)}$') pyplot.loglog(x, y2, linestyle='-.', linewidth=4, color='r', label=r'$y_2=\log(\pi+x\sin(x))$') pyplot.legend(loc='lower right') pyplot.xlabel(r'$x$') pyplot.ylabel(r'$y$') pyplot.title('A basic logarithmic plot') pyplot.show() from math import sin, pi, exp, log x = [] y1 = [] y2 = [] for i in range(201): x_point = 1.0 + 0.01*i x.append(x_point) y1.append(exp(sin(pi*x_point))) y2.append(log(pi+x_point*sin(x_point))) pyplot.semilogy(x, y1, linestyle='None', marker='o', color='g', label=r'$y_1=e^{\sin(\pi x)}$') pyplot.semilogy(x, y2, linestyle='None', marker='^', color='r', label=r'$y_2=\log(\pi+x\sin(x))$') pyplot.legend(loc='lower right') pyplot.xlabel(r'$x$') pyplot.ylabel(r'$y$') pyplot.title('A different logarithmic plot') pyplot.show() """ Explanation: Whilst most of the commands are self-explanatory, a note should be made of the strings line r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be "raw": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \pi and \sin. Most basic symbols can be easily guessed (eg \theta or \int), but there are useful lists of symbols, and a reverse search site available. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms. 
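The importance of the `r` prefix mentioned above can be checked without any plotting: in a normal string Python interprets `\t` as a tab character, so `'\theta'` is not the six characters LaTeX needs, while the raw string keeps the backslash intact.

```python
plain = '\theta'   # '\t' collapses to a single TAB character, then 'heta'
raw = r'\theta'    # raw string: the backslash survives

tab_first = plain[0] == '\t'   # the non-raw string starts with a tab
```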
By combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly. Here are some more examples: End of explanation """
watsonyanghx/CS231n
assignment1/.ipynb_checkpoints/features-checkpoint.ipynb
mit
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
"""
Explanation: Image features exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels. All of your work for this exercise will be done in this notebook.
End of explanation
"""
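The subsampling in the next cell is plain index slicing: the first `num_training` rows form the training set and the following `num_validation` rows the validation set. A toy version with a list standing in for the image array (sizes made up):

```python
data = list(range(100))          # stand-in for 100 stacked images
num_training, num_validation, num_test = 70, 20, 10

train = data[:num_training]
val = data[num_training:num_training + num_validation]
test = data[:num_test]           # in the real code, test rows come from a separate array
```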
End of explanation """ from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))]) """ Explanation: Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. 
The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image. End of explanation """ # Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [1e5, 1e6, 1e7] results = {} best_val = -1 best_svm = None pass ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for lr in learning_rates: for reg in regularization_strengths: svm = LinearSVM() loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500, verbose=True) training_accuracy = np.mean(svm.predict(X_train_feats) == y_train) validation_accuracy = np.mean(svm.predict(X_val_feats) == y_val) if best_val < validation_accuracy: best_val = validation_accuracy best_svm = svm results[(lr, reg)] = (training_accuracy, validation_accuracy) ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. 
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)

# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)

# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()

"""
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation
"""

print(X_train_feats.shape)

from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 50
num_classes = 10

net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.
#                                                                              #
################################################################################

## Identical to visualization code above
def visualize(stats):
    plt.subplot(2, 1, 1)
    plt.plot(stats['loss_history'])
    plt.title('Loss history')
    plt.xlabel('Iteration')
    plt.ylabel('Loss')

    plt.subplot(2, 1, 2)
    plt.plot(stats['train_acc_history'], label='train')
    plt.plot(stats['val_acc_history'], label='val')
    plt.title('Classification accuracy history')
    plt.xlabel('Epoch')
    plt.ylabel('Classification accuracy')
    plt.show()

## Train the network
stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                  num_iters=15000, batch_size=200,
                  learning_rate=1e-3, learning_rate_decay=0.91,
                  reg=0.028, verbose=True)
best_net = net

## Best accuracy on the validation set
stats['best_val_acc'] = max(stats['val_acc_history'])
visualize(stats)

test_acc = np.mean(net.predict(X_test_feats) == y_test)
print('Validation accuracy: ', stats['best_val_acc'], 'Test accuracy: ', test_acc)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)

"""
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
Answer:
The mistakes encountered make sense when we consider the features we are using. The color histogram feature creates a bias towards marking images with similar backgrounds as the same class. Thus, for example, deer and other animals photographed against a green/brown background are confused. Similarly, the HOG features cause us to put images with similar edges into the same category. The hard edges between car, truck, plane, and ship are quite similar, so the errors make sense.
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation
"""
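The feature preprocessing performed earlier in this notebook (mean-centering, scaling to unit variance, and appending a bias dimension, always with statistics computed on the training split) can be collected into a single helper. A minimal NumPy sketch; the function name is mine, not part of the cs231n code:

```python
import numpy as np

def preprocess_features(train, val, test):
    # Standardize every split with statistics from the training split only,
    # then append a constant bias column, mirroring the steps above.
    mean = train.mean(axis=0, keepdims=True)
    std = train.std(axis=0, keepdims=True)
    processed = []
    for feats in (train, val, test):
        feats = (feats - mean) / std
        processed.append(np.hstack([feats, np.ones((feats.shape[0], 1))]))
    return processed
```

Using the training-set mean and standard deviation for all three splits avoids leaking information from the validation and test sets into the preprocessing.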
AllenDowney/ModSim
python/soln/examples/hiv_model_soln.ipynb
gpl-2.0
# install Pint if necessary try: import pint except ImportError: !pip install pint # download modsim.py if necessary from os.path import exists filename = 'modsim.py' if not exists(filename): from urllib.request import urlretrieve url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/' local, _ = urlretrieve(url+filename, filename) print('Downloaded ' + local) # import functions from modsim from modsim import * """ Explanation: Modeling HIV infection Modeling and Simulation in Python Copyright 2021 Allen Downey License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International End of explanation """ init = State(R=200, L=0, E=0, V=4e-7) """ Explanation: During the initial phase of HIV infection, the concentration of the virus in the bloodstream typically increases quickly and then decreases. The most obvious explanation for the decline is an immune response that destroys the virus or controls its replication. However, at least in some patients, the decline occurs even without any detectable immune response. In 1996 Andrew Phillips proposed another explanation for the decline ("Reduction of HIV Concentration During Acute Infection: Independence from a Specific Immune Response", available from https://people.math.gatech.edu/~weiss/uploads/5/8/6/1/58618765/phillips1996.pdf). Phillips presents a system of differential equations that models the concentrations of the HIV virus and the CD4 cells it infects. The model does not include an immune response; nevertheless, it demonstrates behavior that is qualitatively similar to what is seen in patients during the first few weeks after infection. His conclusion is that the observed decline in the concentration of HIV might not be caused by an immune response; it could be due to the dynamic interaction between HIV and the cells it infects. In this notebook, we'll implement Phillips's model and consider whether it does the work it is meant to do. 
The Model
The model has four state variables, R, L, E, and V. Read the paper to understand what they represent.
Here are the initial conditions we can glean from the paper.
End of explanation
"""

gamma = 1.36
mu = 1.36e-3
tau = 0.2
beta = 0.00027
p = 0.1
alpha = 3.6e-2
sigma = 2
delta = 0.33
pi = 100

"""
Explanation: The behavior of the system is controlled by 9 parameters. That might seem like a lot, but they are not entirely free parameters; their values are constrained by measurements and background knowledge (although some are more constrained than others). Here are the values from Table 1. Note: the parameter $\rho$ (the Greek letter "rho") in the table appears as $p$ in the equations. Since it represents a proportion, we'll use $p$.
End of explanation
"""

system = System(init=init, t_end=120, num=481)

"""
Explanation: Here's a System object with the initial conditions and the duration of the simulation (120 days). Normally we would store the parameters in the System object, but the code will be less cluttered if we leave them as global variables.
End of explanation
"""

# Solution

def slope_func(t, state, system):
    R, L, E, V = state

    infections = beta * R * V
    conversions = alpha * L

    dRdt = gamma * tau - mu * R - infections
    dLdt = p * infections - mu * L - conversions
    dEdt = (1-p) * infections + conversions - delta * E
    dVdt = pi * E - sigma * V

    return dRdt, dLdt, dEdt, dVdt

"""
Explanation: Exercise: Use the equations in the paper to write a slope function that takes a State object with the current values of R, L, E, and V, and returns their derivatives in the corresponding order.
End of explanation
"""

# Solution

slope_func(0, init, system)

"""
Explanation: Test your slope function with the initial conditions.
The results should be approximately -2.16e-08, 2.16e-09, 1.944e-08, -8e-07
End of explanation
"""

# Solution

results, details = run_solve_ivp(system, slope_func)
details.message

# Solution

results.head()

"""
Explanation: Exercise: Now use run_solve_ivp to simulate the system of equations.
End of explanation
"""

results.V.plot(label='V')

decorate(xlabel='Time (days)',
         ylabel='Free virions V',
         yscale='log',
         ylim=[0.1, 1e4])

results.R.plot(label='R', color='C1')

decorate(xlabel='Time (days)',
         ylabel='Number of cells',
         )

results.L.plot(color='C2', label='L')
results.E.plot(color='C4', label='E')

decorate(xlabel='Time (days)',
         ylabel='Number of cells',
         yscale='log',
         ylim=[0.1, 100])

"""
Explanation: The next few cells plot the results on the same scale as the figures in the paper.
Exercise: Compare your results to the results in the paper. Are they consistent?
End of explanation
"""
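For readers without the modsim library, the same system can be integrated with nothing but NumPy. A sketch with a hand-rolled fixed-step RK4 integrator; the function names and the step size are my choices, not values from the book:

```python
import numpy as np

# Parameter values from Table 1 of Phillips (1996), matching the values above.
gamma, mu, tau, beta = 1.36, 1.36e-3, 0.2, 0.00027
p, alpha, sigma, delta, pi = 0.1, 3.6e-2, 2.0, 0.33, 100.0

def phillips_slope(t, y):
    # Right-hand side of the Phillips system; same equations as slope_func.
    R, L, E, V = y
    infections = beta * R * V
    conversions = alpha * L
    return np.array([
        gamma * tau - mu * R - infections,
        p * infections - mu * L - conversions,
        (1 - p) * infections + conversions - delta * E,
        pi * E - sigma * V,
    ])

def rk4(f, y0, t_end, dt):
    # Fixed-step fourth-order Runge-Kutta integrator.
    y = np.asarray(y0, dtype=float)
    ys, t = [y], 0.0
    while t < t_end:
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2 * k1)
        k3 = f(t + dt/2, y + dt/2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
        ys.append(y)
    return np.array(ys)

trajectory = rk4(phillips_slope, [200, 0, 0, 4e-7], t_end=120, dt=0.01)
```

At t = 0 this reproduces the derivative values quoted above, and V rises many orders of magnitude above its initial value before declining again, which is the qualitative behavior Phillips reports.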
josef-pkt/statsmodels
examples/notebooks/recursive_ls.ipynb
bsd-3-clause
%matplotlib inline

import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

from pandas_datareader.data import DataReader

np.set_printoptions(suppress=True)

"""
Explanation: Recursive least squares
Recursive least squares is an expanding window version of ordinary least squares. In addition to availability of regression coefficients computed recursively, the recursively computed residuals allow the construction of statistics to investigate parameter instability.
The RLS class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
End of explanation
"""

print(sm.datasets.copper.DESCRLONG)

dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range('1951-01-01', '1975-01-01', freq='AS')
endog = dta['WORLDCONSUMPTION']

# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(dta[['COPPERPRICE', 'INCOMEINDEX', 'ALUMPRICE', 'INVENTORYINDEX']])

"""
Explanation: Example 1: Copper
We first consider parameter stability in the copper dataset (description below).
End of explanation
"""

mod = sm.RecursiveLS(endog, exog)
res = mod.fit()

print(res.summary())

"""
Explanation: First, construct and fit the model, and print a summary. Although the RLS model computes the regression parameters recursively (so there are as many estimates as there are datapoints), the summary table only presents the regression parameters estimated on the entire sample; except for small effects from initialization of the recursions, these estimates are equivalent to OLS estimates.
End of explanation
"""

print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10,6));

"""
Explanation: The recursive coefficients are available in the recursive_coefficients attribute. Alternatively, plots can be generated using the plot_recursive_coefficient method.
End of explanation
"""

print(res.cusum)
fig = res.plot_cusum();

"""
Explanation: The CUSUM statistic is available in the cusum attribute, but usually it is more convenient to visually check for parameter stability using the plot_cusum method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""

res.plot_cusum_squares();

"""
Explanation: Another related statistic is the CUSUM of squares. It is available in the cusum_squares attribute, but it is similarly more convenient to check it visually, using the plot_cusum_squares method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
End of explanation
"""

start = '1959-12-01'
end = '2015-01-01'
m2 = DataReader('M2SL', 'fred', start=start, end=end)
cpi = DataReader('CPIAUCSL', 'fred', start=start, end=end)

def ewma(series, beta, n_window):
    nobs = len(series)
    scalar = (1 - beta) / (1 + beta)
    ma = []
    k = np.arange(n_window, 0, -1)
    weights = np.r_[beta**k, 1, beta**k[::-1]]
    for t in range(n_window, nobs - n_window):
        window = series.iloc[t - n_window:t + n_window+1].values
        ma.append(scalar * np.sum(weights * window))
    return pd.Series(ma, name=series.name, index=series.iloc[n_window:-n_window].index)

m2_ewma = ewma(np.log(m2['M2SL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)
cpi_ewma = ewma(np.log(cpi['CPIAUCSL'].resample('QS').mean()).diff().iloc[1:], 0.95, 10*4)

"""
Explanation: Quantity theory of money
The quantity theory of money suggests that "a given change in the rate of change in the quantity of money induces ... an equal change in the rate of price inflation" (Lucas, 1980). Following Lucas, we examine the relationship between double-sided exponentially weighted moving averages of money growth and CPI inflation. Although Lucas found the relationship between these variables to be stable, more recently it appears that the relationship is unstable; see e.g. Sargent and Surico (2010).
End of explanation
"""

fig, ax = plt.subplots(figsize=(13,3))

ax.plot(m2_ewma, label='M2 Growth (EWMA)')
ax.plot(cpi_ewma, label='CPI Inflation (EWMA)')
ax.legend();

endog = cpi_ewma
exog = sm.add_constant(m2_ewma)
exog.columns = ['const', 'M2']

mod = sm.RecursiveLS(endog, exog)
res = mod.fit()

print(res.summary())

res.plot_recursive_coefficient(1, alpha=None);

"""
Explanation: After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for part of the sample, after 1990 they appear to diverge.
End of explanation
"""

res.plot_cusum();

"""
Explanation: The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""

res.plot_cusum_squares();

"""
Explanation: Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
End of explanation
"""
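The two-sided exponential weighting in the ewma function above can be studied in isolation. A plain-NumPy sketch of the same filter (the function name is mine): its gain on a constant series works out to 1 - 2*beta**(n+1)/(1 + beta), so the (1 - beta)/(1 + beta) prefactor yields unit gain only in the limit of a long window; with beta = 0.95 and the 40-quarter window used above, the gain is roughly 0.87, a mild attenuation of levels.

```python
import numpy as np

def two_sided_ewma(x, beta, n):
    # Double-sided exponentially weighted moving average over a NumPy
    # array, mirroring the pandas-based ewma function above.
    scalar = (1 - beta) / (1 + beta)
    k = np.arange(n, 0, -1)
    weights = np.r_[beta**k, 1, beta**k[::-1]]
    return np.array([scalar * np.dot(weights, x[t - n:t + n + 1])
                     for t in range(n, len(x) - n)])
```

The attenuation is harmless for the analysis here, since both series pass through the same filter.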
qinwf-nuan/keras-js
notebooks/layers/recurrent/GRU.ipynb
mit
data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid') layer_0 = Input(shape=data_in_shape) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3200 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } [w.shape for w in model.get_weights()] """ Explanation: GRU [recurrent.GRU.0] units=4, activation='tanh', recurrent_activation='hard_sigmoid' Note dropout_W and dropout_U are only applied during training phase End of explanation """ data_in_shape = (8, 5) rnn = GRU(5, activation='sigmoid', recurrent_activation='sigmoid') layer_0 = Input(shape=data_in_shape) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3300 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} 
shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.1] units=5, activation='sigmoid', recurrent_activation='sigmoid' Note dropout_W and dropout_U are only applied during training phase End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True) layer_0 = Input(shape=data_in_shape) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3400 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) 
DATA['recurrent.GRU.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.2] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True Note dropout_W and dropout_U are only applied during training phase End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True) layer_0 = Input(shape=data_in_shape) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3410 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.3'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.3] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True Note dropout_W and dropout_U are only applied during training phase End of 
explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=True) layer_0 = Input(shape=data_in_shape) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3420 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.4'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.4] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=True Note dropout_W and dropout_U are only applied during training phase End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=False, stateful=True) layer_0 = Input(batch_shape=(1, *data_in_shape)) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3430 + i) weights.append(2 * 
np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.5'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.5] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=False, stateful=True Note dropout_W and dropout_U are only applied during training phase To test statefulness, model.predict is run twice End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=False, stateful=True) layer_0 = Input(batch_shape=(1, *data_in_shape)) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3440 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = 
model.predict(np.array([data_in])) result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.6'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.6] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=True, go_backwards=False, stateful=True Note dropout_W and dropout_U are only applied during training phase To test statefulness, model.predict is run twice End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True, stateful=True) layer_0 = Input(batch_shape=(1, *data_in_shape)) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3450 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U', 'b'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) 
print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.7'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': data_out_shape} } """ Explanation: [recurrent.GRU.7] units=4, activation='tanh', recurrent_activation='hard_sigmoid', return_sequences=False, go_backwards=True, stateful=True Note dropout_W and dropout_U are only applied during training phase To test statefulness, model.predict is run twice End of explanation """ data_in_shape = (3, 6) rnn = GRU(4, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=False, return_sequences=True, go_backwards=True, stateful=True) layer_0 = Input(batch_shape=(1, *data_in_shape)) layer_1 = rnn(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) weights = [] for i, w in enumerate(model.get_weights()): np.random.seed(3460 + i) weights.append(2 * np.random.random(w.shape) - 1) model.set_weights(weights) weight_names = ['W', 'U'] for w_i, w_name in enumerate(weight_names): print('{} shape:'.format(w_name), weights[w_i].shape) print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist())) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['recurrent.GRU.8'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights], 'expected': {'data': data_out_formatted, 'shape': 
data_out_shape} } """ Explanation: [recurrent.GRU.8] units=4, activation='tanh', recurrent_activation='hard_sigmoid', use_bias=False, return_sequences=True, go_backwards=True, stateful=True Note dropout_W and dropout_U are only applied during training phase To test statefulness, model.predict is run twice End of explanation """ print(json.dumps(DATA)) """ Explanation: export for Keras.js tests End of explanation """
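The weights exported above can be checked against a reference implementation of the GRU recurrence. A NumPy sketch of a single step, assuming the Keras 2 layout tested here (kernel columns packed in z, r, h order, hard_sigmoid recurrent activation, reset_after=False semantics); the function names are mine:

```python
import numpy as np

def hard_sigmoid(x):
    # Keras-style hard sigmoid: piecewise-linear approximation of sigmoid.
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

def gru_step(x, h_prev, W, U, b):
    # One GRU step; W has shape (input_dim, 3*units), U (units, 3*units),
    # b (3*units,), with gate blocks ordered z, r, h along the last axis.
    units = h_prev.shape[-1]
    z = hard_sigmoid(x @ W[:, :units] + h_prev @ U[:, :units] + b[:units])
    r = hard_sigmoid(x @ W[:, units:2*units]
                     + h_prev @ U[:, units:2*units] + b[units:2*units])
    hh = np.tanh(x @ W[:, 2*units:] + (r * h_prev) @ U[:, 2*units:] + b[2*units:])
    return z * h_prev + (1.0 - z) * hh
```

Iterating gru_step over the time axis (reversed when go_backwards=True) can serve as a reference when validating the expected outputs recorded in DATA.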
pacoqueen/ginn
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Beyond Plain Python.ipynb
gpl-2.0
print("Hi") """ Explanation: IPython: beyond plain Python When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient. First things first: running code, getting help In the notebook, to run a cell of code, hit Shift-Enter. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use: Alt-Enter to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook). Control-Enter executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently. End of explanation """ ? """ Explanation: Getting help: End of explanation """ import collections collections.namedtuple? collections.Counter?? *int*? """ Explanation: Typing object_name? will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes. End of explanation """ %quickref """ Explanation: An IPython quick reference card: End of explanation """ collections. """ Explanation: Tab completion Tab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type object_name.&lt;TAB&gt; to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names. 
End of explanation
"""

2+10

_+10

"""
Explanation: The interactive workflow: input, output, history
End of explanation
"""

10+20;

_

"""
Explanation: You can suppress the storage and rendering of output if you append ; to the last cell (this comes in handy when plotting with matplotlib, for example):
End of explanation
"""

_10 == Out[10]

"""
Explanation: The output is stored in _N and Out[N] variables:
End of explanation
"""

from __future__ import print_function
print('last output:', _)
print('next one   :', __)
print('and next   :', ___)

In[11]

_i

_ii

print('last input:', _i)
print('next one  :', _ii)
print('and next  :', _iii)

%history -n 1-5

"""
Explanation: And the last three have shorthands for convenience:
End of explanation
"""

!pwd

files = !ls
print("My current directory's files:")
print(files)

!echo $files

!echo {files[0].upper()}

"""
Explanation: Exercise
Write the last 10 lines of history to a file named log.py.
Accessing the underlying operating system
End of explanation
"""

import os
for i,f in enumerate(files):
    if f.endswith('ipynb'):
        !echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
    else:
        print('--')

"""
Explanation: Note that all this is available even in multiline blocks:
End of explanation
"""

%magic

"""
Explanation: Beyond Python: magic functions
The IPython 'magic' functions are a set of commands, invoked by prepending one or two % signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with -- and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:
To provide an orthogonal namespace for controlling IPython itself and exposing other system-oriented functionality.
To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration is taken from the classic Unix shell style for commands.
End of explanation """ %timeit list(range(1000)) %%timeit list(range(10)) list(range(100)) """ Explanation: Line vs cell magics: End of explanation """ for i in range(1, 5): size = i*100 print('size:', size, end=' ') %timeit list(range(size)) """ Explanation: Line magics can be used even inside code blocks: End of explanation """ %%bash echo "My shell is:" $SHELL echo "My disk usage is:" df -h """ Explanation: Magics can do anything they want with their input, so it doesn't have to be valid Python: End of explanation """ %%writefile test.txt This is a test file! It can contain anything I want... And more... !cat test.txt """ Explanation: Another interesting cell magic: create any file you want locally from the notebook: End of explanation """ %lsmagic """ Explanation: Let's see what other magics are currently defined in the system: End of explanation """ >>> # Fibonacci series: ... # the sum of two elements defines the next ... a, b = 0, 1 >>> while b < 10: ... print(b) ... a, b = b, a+b In [1]: for i in range(10): ...: print(i, end=' ') ...: """ Explanation: Running normal Python code: execution and errors Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session: End of explanation """ %%writefile mod.py def f(x): return 1.0/(x-1) def g(y): return f(y+1) """ Explanation: And when your code produces errors, you can control how they are displayed with the %xmode magic: End of explanation """ import mod mod.g(0) %xmode plain mod.g(0) %xmode verbose mod.g(0) """ Explanation: Now let's call the function g with an argument that would produce an error: End of explanation """ %xmode context """ Explanation: The default %xmode is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session. End of explanation """ %%perl @months = ("July", "August", "September"); print $months[0]; %%ruby name = "world" puts "Hello #{name.capitalize}!" 
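The tracebacks that %xmode formats are built from the same exception information the standard-library traceback module exposes, so a comparable plain-mode display can be reproduced outside IPython. A minimal sketch — the failing functions here mirror the f and g defined in mod.py above:

```python
import traceback

def f(x):
    return 1.0 / (x - 1)

def g(y):
    return f(y + 1)  # g(0) calls f(1), which divides by zero

try:
    g(0)
except ZeroDivisionError:
    # format_exc() returns the same text a default traceback would print
    tb_text = traceback.format_exc()

print("ZeroDivisionError" in tb_text)  # → True
```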
""" Explanation: Running code in other languages with special %% magics End of explanation """ mod.g(0) %debug """ Explanation: Raw Input in the notebook Since 1.0 the IPython notebook web application support raw_input which for example allow us to invoke the %debug magic in the notebook: End of explanation """ enjoy = input('Are you enjoying this tutorial? ') print('enjoy is:', enjoy) """ Explanation: Don't foget to exit your debugging session. Raw input can of course be use to ask for user input: End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 2*np.pi, 300) y = np.sin(x**2) plt.plot(x, y) plt.title("A little chirp") fig = plt.gcf() # let's keep the figure object around for later... """ Explanation: Plotting in the notebook This magic configures matplotlib to render its figures inline: End of explanation """ %connect_info """ Explanation: The IPython kernel/client model End of explanation """ %qtconsole """ Explanation: We can connect automatically a Qt Console to the currently running kernel with the %qtconsole magic, or by typing ipython console --existing &lt;kernel-UUID&gt; in any terminal: End of explanation """
ShorensteinCenter/Shorenstein-Center-Notebooks
Shorenstein_Center_Notebook_2.ipynb
mit
# set colors
c1='#18a45f' # subs
c2='#ec3038' # unsubs
c3='#3286ec' # cleaned
c4='#fecf5f' # pending
c_ev= '#cccccc'
c_nev='#000000'
c12m = '#016d2c'#'12 months'
c9m ='#31a354' #'9 months '
c6m = '#74c476' #'6 months'
c3m= '#bae4b3' #'3 months'
c1m = '#edf8e9'#'1 month'
# import libraries
%matplotlib inline
import os
from mailchimp3 import MailChimp # import your wrapper of choice for your email service provider - in this case mailchimp3; learn more about mailchimp3: https://github.com/charlesthk/python-mailchimp/blob/master/README.md
import pandas as pd # standard code for importing the pandas library and aliasing it as pd - if you want to learn all about pandas read 'Python for Data Analysis', 2nd Edition, by Wes McKinney, the creator of pandas
import time # allows you to time things
import matplotlib.pyplot as plt # allows you to plot data
import seaborn as sns # makes the plots look nicer
import numpy as np
import glob
"""
Explanation: Table of Contents:
0 import libraries
1 Pull Data from the API
2 Turn the Data into a pandas DataFrame
3 Explore the Data
- 3.1 Basic Engagement by Individual User
- 3.2 Last Active by Individual User
- 3.3 Two-Dimensional Distributions
- 3.4 Time on List for Unsubscribed Users
0. 
Import libraries and set global variables <a class="anchor" id="0-bullet"></a>
End of explanation
"""
# run this cell to initialize variables to pull data from the API of your email service provider, in this case MailChimp
# replace the variable values in quotes in red caps with the unique values for your MailChimp account
# if this request times out, pull via batch request -- which is slower but the recommended method by MailChimp
LIST_NAME='YOURLISTNAME'
NAME='YOURUSERNAME' # your MailChimp user name (used at login)
SECRET_KEY='YOURAPIKEY' # your MailChimp API Key
LIST_ID='YOURLISTID' # the ID for the individual list you want to look at
# OUT_FILE='OUTFILENAME.csv' # if you want to export your data, you can specify the outfile name and type, in this case CSV
# make an output directory to export the results and images from this notebook
oupt_dir='Shorenstein_Notebook_2_'+str(LIST_NAME)
try:
    os.mkdir(oupt_dir)
except OSError:
    pass # the directory already exists
# initializes client - creates a connection with the API; calling that connection client
client=MailChimp(NAME,SECRET_KEY)
lists_endpoint=client.lists.get(LIST_ID)
# read in data from Shorenstein Notebook 1, or pull it again
# GET request pulling data from the MailChimp API - see documentation
# you can also read in a pkl or other file type if you already have this information from running Notebook 1
member_data=client.lists.members.all(LIST_ID,get_all=True, fields='members.status,members.email_address,members.timestamp_opt,members.timestamp_signup,members.member_rating,members.stats,members.id, members.last_changed, members.action, members.timestamp, members.unsubscribe_reason')
# this is a function that gets the last 50 actions for each user on your list
# if it times out do a batch request
def last_user_actions(userid):
    """user id is a string that is the md5 hash of the lower case email. 
    this function gets the last 50 user actions and returns a dataframe of user actions"""
    member_act_api=client.lists.members.activity.all(list_id=LIST_ID, subscriber_hash=userid)
    member_act=pd.DataFrame(member_act_api['activity'])
    member_act['id']=userid
    return member_act
# create member list of unique member ids in your member data frame
memb_list=list(pd.DataFrame(member_data['members'])['id'].unique())
member_actions=pd.concat(map(last_user_actions,memb_list))
# parse the timestamp
member_actions['timestamp']=member_actions.timestamp.apply(pd.to_datetime)
"""
Explanation: 1. Pull data from API
End of explanation
"""
# turns the member_data returned by the API into a pandas data frame
member_data_frame=pd.DataFrame(member_data['members'])
# unpack open rate and click rate from stats for each record, add the value to a new column named open and click respectively
# create a column for those who never opened or clicked:
# False = the subscriber has opened (or clicked) at least once
# True = the subscriber has never opened (or clicked)
member_data_frame['open']=member_data_frame.stats.apply(lambda x: x['avg_open_rate'])
member_data_frame['click']=member_data_frame.stats.apply(lambda x: x['avg_click_rate'])
member_data_frame['never_opened']=member_data_frame.open.apply(lambda x:x==0)
member_data_frame['never_clicked']=member_data_frame.click.apply(lambda x:x==0)
# preparing the data by calculating the month joined, and for each of those months, what % of those people are subscribed, unsubscribed, cleaned or pending.
# NOTE: There is no output from this cell but you need to run it to see the graphs below. 
member_data_frame['timestamp_opt']=member_data_frame.timestamp_opt.apply(pd.to_datetime) member_data_frame['timestamp_signup']=member_data_frame.timestamp_signup.apply(pd.to_datetime) # records missing signup_time print sum(member_data_frame.timestamp_signup.isnull()) # make sure index is unique because we are about to do some manipulations based on it member_data_frame.reset_index(drop=True,inplace=True) # index of members where we don't know when the signed up but we have opt in time guess_time_ix=member_data_frame[(member_data_frame.timestamp_signup.isnull())& (member_data_frame.timestamp_opt.isnull()!=True)].index # when we don't have signup time use opt in time member_data_frame.loc[guess_time_ix,'timestamp_signup']=member_data_frame.loc[guess_time_ix,'timestamp_opt'] # use integer division to break down people in to groups by the month they joined member_data_frame['join_month']=member_data_frame.timestamp_signup.apply(lambda x:pd.to_datetime(2592000*int((x.value/1e9)/2592000),unit='s')) # represent the joined month as an interger of ms since epoch time 0 # this format is not nice for people but very nice for computers member_data_frame['jv']=member_data_frame.join_month.apply(lambda x: x.value) member_data_frame['jv']=member_data_frame.join_month.apply(lambda x: x.value) member_actions['timestamp']=member_actions.timestamp.apply(pd.to_datetime) # slice to only look at opens memb_open=member_actions[member_actions.action=='open'] # get last open last_open=memb_open.groupby('id').timestamp.max().reset_index() # get oldest open old_open=memb_open.groupby('id').timestamp.min().reset_index() # clean name last_open.columns=['id','last'] old_open.columns=['id','old'] # merge open_time=pd.merge(last_open,old_open, how='left',on='id') # get ms time open_time['lv']=open_time['last'].apply(lambda x: x.value) open_time['ov']=open_time['old'].apply(lambda x: x.value) # add member open times to member data frame member_data_frame=pd.merge(member_data_frame,open_time, 
how='left',on='id') member_data_frame['latestv']=member_data_frame['last'].apply(lambda x: x.value) member_last_not_null=member_data_frame[member_data_frame['last'].isnull()!=True] import time # get todays date a=pd.to_datetime(time.time(),unit='s') # slices dataframe into 12 months, 9 months, 6 months, 3 months and 1 month active subscribers member_data_frame['12m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('365D'))) member_data_frame['9m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('274D'))) member_data_frame['6m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('183D'))) member_data_frame['3m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('91D'))) member_data_frame['1m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('30D'))) # get monthly activity as a fraction of all users who joined that month monthly_act=member_data_frame.groupby('join_month').agg({'12m':sum, '9m':sum, '6m':sum, '3m':sum, '1m':sum, 'id':lambda x: x.size}).reset_index() monthly_act.rename(columns={'id':'tot'},inplace=True) monthly_act['1m_per']=monthly_act.apply(lambda x: x['1m']/float(x['tot']),axis=1) monthly_act['3m_per']=monthly_act.apply(lambda x: x['3m']/float(x['tot']),axis=1) monthly_act['6m_per']=monthly_act.apply(lambda x: x['6m']/float(x['tot']),axis=1) monthly_act['9m_per']=monthly_act.apply(lambda x: x['9m']/float(x['tot']),axis=1) monthly_act['12m_per']=monthly_act.apply(lambda x: x['12m']/float(x['tot']),axis=1) unsubscribe_times=member_actions[member_actions.action=='unsub'][['id','timestamp']].copy() unsubscribe_times.rename(columns={'timestamp':'unsub_time'},inplace=True) unsubscribe_times['unsubv']=unsubscribe_times.unsub_time.apply(lambda x: x.value/1e9) member_data_frame=pd.merge(member_data_frame,unsubscribe_times, how='left',on='id') member_data_frame['life']=member_data_frame.apply(lambda x:x['unsub_time']-x['timestamp_opt'],axis=1) """ Explanation: 2. 
Turn the Data into a pandas Data Frame
End of explanation
"""
list_open_rate=lists_endpoint['stats']['open_rate']
print list_open_rate
"""
Explanation: 3. Explore the data 3.1 Basic Engagement by Individual User We go from asking, 'What is the (unique) open rate for your list?'
End of explanation
"""
# user unique open rate = unique open rate for an individual on your list
# calculation: (number of unique opens by a user / number of campaigns received by that user) x 100
plt.figure(figsize=(20,10))
plt.hist([member_data_frame[member_data_frame.status=='subscribed'].open,
         member_data_frame[member_data_frame.status=='unsubscribed'].open],
         stacked=True, normed=False,label=['Subscribed', 'Unsubscribed'],
         color=[c1,c2])
plt.title('Distribution of User Unique Open Rate, Subscribed vs Unsubscribed',fontdict={'fontsize':25})
plt.xlabel("User Unique Open Rate",fontdict={'fontsize':20})
plt.ylabel("Counts",fontdict={'fontsize':20})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.legend(loc='best', prop={'size': 20})
plt.savefig(oupt_dir+'/3.1_dist_open_sub_vs_unsub.png')
plt.show()
"""
Explanation: To asking 'What is the distribution of user unique open rates for current subscribers vs. unsubscribed users?' 
End of explanation
"""
plt.figure(figsize=(20,10))
plt.hist([member_data_frame[member_data_frame.status=='subscribed'].click,
         member_data_frame[member_data_frame.status=='unsubscribed'].click],
         stacked=True, normed=False,label=['Subscribed', 'Unsubscribed'],
         color=[c1,c2])
plt.title('Distribution of User Unique Click Rate, Subscribed vs Unsubscribed',fontdict={'fontsize':25})
plt.xlabel("User Unique Click Rate",fontdict={'fontsize':20})
plt.ylabel("Counts",fontdict={'fontsize':20})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.legend(loc='best', prop={'size': 20})
plt.savefig(oupt_dir+'/3.1_dist_click_sub_vs_unsub.png')
plt.show()
plt.figure(figsize=(20,10))
ax=plt.hist([member_last_not_null[member_last_not_null.status=='subscribed'].latestv,
         member_last_not_null[member_last_not_null.status=='unsubscribed'].latestv],
         stacked=True, normed=False,label=['Subscribed', 'Unsubscribed'],
         color=[c1,c2])
plt.xticks(ax[1],map(lambda x: pd.to_datetime(x).date(),ax[1]), rotation=35)
plt.title('Distribution of Time of Last Opened Email, Subscribed vs Unsubscribed',fontdict={'fontsize':25})
plt.ylabel("Counts",fontdict={'fontsize':20})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.xlabel('Last Email Opened',fontdict={'fontsize':20})
plt.legend(loc='best', prop={'size': 20})
plt.savefig(oupt_dir+'/3.1_distlast_opened_sub_vs_unsub.png')
plt.show()
plt.figure(figsize=(20,10))
plt.title('Distribution of Time of Last Opened Email, Subscribers',fontdict={'fontsize':25})
member_data_frame[member_data_frame.status=='subscribed']['last'].hist(label='SUBSCRIBED',color=c1)
plt.xlabel('Date of Last Email Opened by Subscriber',fontdict={'fontsize':20})
plt.ylabel("Subscriber Counts",fontdict={'fontsize':20})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.savefig(oupt_dir+'/4_last_opens_sub.png')
plt.figure(figsize=(20,10))
plt.title('Distribution of Time of Last Opened Email, Unsubscribed',fontdict={'fontsize':25})
member_data_frame[member_data_frame.status=='unsubscribed']['last'].hist(label='UNSUBSCRIBED',color=c2) plt.xlabel('Date of Last Email Opened by Unsubscribed Users',fontdict={'fontsize':20}) plt.ylabel("Unsubscriber Counts",fontdict={'fontsize':20}) plt.yticks(fontsize=15) plt.xticks(fontsize=15) plt.savefig(oupt_dir+'/5_last_opens_unsub.png') """ Explanation: What is the distribution of user unique click rates for current subscribers vs. unsubscribed users? End of explanation """ m12_act=member_data_frame['12m'].sum() print m12_act m9_act=member_data_frame['9m'].sum() print m9_act m6_act=member_data_frame['6m'].sum() print m6_act m3_act=member_data_frame['3m'].sum() print m3_act m1_act=member_data_frame['1m'].sum() print m1_act # stacked histogram showing number of members active in the last 12 months, 9 months, 6 months, 3 months, 1 month plt.figure(figsize=(20,10)) member_data_frame[member_data_frame['12m']==True].join_month.hist(label='12 MONTHS',color=c12m) member_data_frame[member_data_frame['9m']==True].join_month.hist(label='9 MONTHS',color=c9m) member_data_frame[member_data_frame['6m']==True].join_month.hist(label='6 MONTHS',color=c6m) member_data_frame[member_data_frame['3m']==True].join_month.hist(label='3 MONTHS',color=c3m) member_data_frame[member_data_frame['1m']==True].join_month.hist(label='1 MONTH',color=c1m) plt.legend(loc='best', prop={'size': 20}) plt.xlabel('Time Joined',fontsize=20) plt.ylabel('Counts',fontdict={'fontsize':20}) plt.title('''Number of Members Active in the Last 12 Months, 9 Months, 6 Months, 3 Months 1 Month by Time Joined''',fontdict={'fontsize':25}) plt.yticks(fontsize=15) plt.xticks(fontsize=15) plt.savefig(oupt_dir+'/7_memb_active.png') fig, ax = plt.subplots(figsize=(20,10)) ax.stackplot(list(monthly_act.join_month),monthly_act['1m_per'], monthly_act['3m_per']-monthly_act['1m_per'], monthly_act['6m_per']-monthly_act['3m_per'], monthly_act['9m_per']-monthly_act['6m_per'], 
monthly_act['12m_per']-monthly_act['9m_per'],labels=['1 MONTH','3 MONTHS','6 MONTHS', '9 MONTHS', '12 MONTHS'],
            colors=[c1m,c3m,c6m,c9m,c12m])
plt.legend(loc='upper left', prop={'size': 15})
plt.xlabel('Time Joined',fontdict={'fontsize':20})
plt.ylabel('Percent Active',fontdict={'fontsize':20})
plt.title('''Percent Active 12 Months, 9 Months, 6 Months, 3 Months
1 Month by Time Joined - for Current Subscribers''',fontdict={'fontsize':25})
plt.yticks(fontsize=15)
plt.xticks(fontsize=15)
plt.savefig(oupt_dir+'/6_percent_memb_active.png')
plt.show()
"""
Explanation: 3.2 Last Active by Individual User Number of current subscribers who have opened an email in the last: 12 months, 9 months, 6 months, 3 months, 1 month
End of explanation
"""
# open rate vs. when someone joined, for subscribers only
# x axis = month they joined, plotted as seconds since 1970 (jv is in nanoseconds, so jv/1e9 gives seconds)
# y axis is the user's unique open rate
nxt=3 #number of x ticks
xrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].jv.min(),
               member_data_frame[member_data_frame.status=='subscribed'].jv.max(),num=nxt)
g=sns.jointplot(member_data_frame[member_data_frame.status=='subscribed'].jv/1e9,
             member_data_frame[member_data_frame.status=='subscribed'].open,
             kind="kde",size=10, space=0,ylim=(0,1))
j=g.ax_joint
mx=g.ax_marg_x
my=g.ax_marg_y
mx.set_xticks(map(lambda x:x/1e9,xrt))
mx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))
mx.set_title("Open Rate vs Join Time for Subscribers",fontdict={'fontsize':25})
plt.rcParams["axes.labelsize"] = 25
g.set_axis_labels(xlabel='Time Joined',ylabel='User Unique Open Rate',fontdict={'fontsize':25})
g.savefig(oupt_dir+'/8_open_vs_join_sub.png')
"""
Explanation: 3.3 Two-Dimensional Distributions User unique open rate vs. 
when they joined, for subscribers
End of explanation
"""
# x axis = month they joined, plotted as seconds since 1970 (jv/1e9)
# y axis = user unique open rate
nxt=3 #number of x ticks
xrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(),
               member_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt)
g = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9,
             member_data_frame[member_data_frame.status=='unsubscribed'].open,
             ylim=(0,1),kind="kde", size=10, space=0)
j=g.ax_joint
mx=g.ax_marg_x
my=g.ax_marg_y
mx.set_xticks(map(lambda x:x/1e9,xrt))
mx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))
mx.set_title("Open Rate vs Join Time for Unsubscribed Users",fontdict={'fontsize':25})
g.set_axis_labels(xlabel='Time Joined',ylabel='User Unique Open Rate')
g.savefig(oupt_dir+'/9_open_vs_join_unsub.png')
"""
Explanation: User unique open rate vs. time joined, for unsubscribers
End of explanation
"""
# x axis is distribution of when joined - farther to the right is joining more recently.
# y axis is the most recent email open in the last 50 recorded actions.
# upper left is a longtime engaged person: someone who joined awhile ago and is still active
nxt=3 #number of x ticks
nyt=3 #number of y ticks
xrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].jv.min(),
               member_data_frame[member_data_frame.status=='subscribed'].jv.max(),num=nxt)
yrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].lv.min(),
               member_data_frame[member_data_frame.status=='subscribed'].lv.max(),num=nyt)
g = sns.jointplot(member_data_frame[member_data_frame.status=='subscribed'].jv/1e9,
             member_data_frame[member_data_frame.status=='subscribed'].lv/1e9,
             kind="kde", size=10, space=0)
j=g.ax_joint
mx=g.ax_marg_x
my=g.ax_marg_y
mx.set_xticks(map(lambda x:x/1e9,xrt))
mx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))
my.set_yticks(map(lambda x:x/1e9,yrt))
my.set_yticklabels(map(lambda x: pd.to_datetime(x).date(),yrt))
mx.set_title("Latest Opened Email vs Join Time for Subscribers",fontdict={'fontsize':25})
g.set_axis_labels(xlabel='Time Joined',ylabel='Latest Open')
g.savefig(oupt_dir+'/9_last_open_vs_join_sub.png')
"""
Explanation: Time of the last email opened vs. time joined for subscribers
End of explanation
"""
# x axis is distribution of when joined - farther to the right is joining more recently.
# y axis is the time the user unsubscribed.
# upper left is longtime engaged person, person who joined awhile ago who is still active nxt=3 # number of x ticks nyt=3 # number of y ticks xrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(), member_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt) yrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].unsubv.min(), member_data_frame[member_data_frame.status=='unsubscribed'].unsubv.max(),num=nyt) g = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9, member_data_frame[member_data_frame.status=='unsubscribed'].unsubv/1e9, kind="kde", size=10, space=0) j=g.ax_joint mx=g.ax_marg_x my=g.ax_marg_y mx.set_xticks(map(lambda x:x/1e9,xrt)) mx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt)) my.set_yticks(map(lambda x:x/1e9,yrt)) my.set_yticklabels(map(lambda x: pd.to_datetime(x,unit='s').date(),yrt)) mx.set_title("Unsubscribe Time vs Join Time",fontdict={'fontsize':25}) g.set_axis_labels(xlabel='Time Joined',ylabel='Time Unsubscribed') g.savefig(oupt_dir+'/10_unusub_vs_join_unsub.png') """ Explanation: Joined Date vs Unsubscribed Date End of explanation """ # x axis is distribution of when joined - farther to the right is joining more recently # y axis is the oldest email record of opening in last 50 actions # upper left is longtime engaged user # upper right newer user recently opened nxt=3 #number of x ticks nyt=3 #number of x ticks xrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(), member_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt) yrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].lv.min(), member_data_frame[member_data_frame.status=='unsubscribed'].lv.max(),num=nxt) g = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9, member_data_frame[member_data_frame.status=='unsubscribed'].lv/1e9, kind="kde", size=10, space=0) j=g.ax_joint 
mx=g.ax_marg_x my=g.ax_marg_y mx.set_xticks(map(lambda x:x/1e9,xrt)) mx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt)) my.set_yticks(map(lambda x:x/1e9,yrt)) my.set_yticklabels(map(lambda x: pd.to_datetime(x).date(),yrt)) mx.set_title("Latest Opened Email vs Join Time for Unsubscribed Users",fontdict={'fontsize':25}) g.set_axis_labels(xlabel='Time Joined',ylabel='Latest Open') g.savefig(oupt_dir+'/12_last_open_vs_join_unsub.png') """ Explanation: Time of the last email opened vs. time joined for unsubscribers End of explanation """ # shortest time on the list before unsubscribing # from the individual who unsubscribed the fastest member_data_frame[member_data_frame.life.isnull()!=True].life.min() # longest time on the list before unsubscribing # from the individual who stayed on the longest before unsubscribing member_data_frame[member_data_frame.life.isnull()!=True].life.max() # histogram of time range each unsubscriber was on the list before they unsubscribed # depending on how granular you want to go and the lifetime of your list you may want to update bin size plt.figure(figsize=(20,10)) ax=plt.hist([member_data_frame.dropna(subset=['life']).life.apply(lambda x:x.value)],label=['life time'],color=c2) plt.xticks(ax[1],map(lambda x: pd.to_timedelta(x).floor('D'),ax[1]), rotation=30) plt.title('Distribution of Lifetime of Unsubscribed Users',fontdict={'fontsize':25}) plt.xlabel('Lifetime on List',fontdict={'fontsize':20}) plt.ylabel('Counts',fontdict={'fontsize':20}) plt.yticks(fontsize=15) plt.xticks(fontsize=15) plt.legend(loc='best') plt.savefig(oupt_dir+'/11_life_unsub.png') plt.show() """ Explanation: 3.4 Time on List for Unsubscribers End of explanation """
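The life column computed above is plain timestamp subtraction. A standard-library sketch of the same arithmetic on a few hypothetical opt-in/unsubscribe pairs (the dates are invented for illustration):

```python
from datetime import datetime

# Hypothetical (opt-in, unsubscribe) date pairs, invented for illustration
records = [
    ("2017-01-05", "2017-01-12"),  # unsubscribed after one week
    ("2016-06-01", "2017-06-01"),  # stayed a full year
    ("2017-03-10", "2017-03-10"),  # same-day unsubscribe
]

# lifetime on the list = unsubscribe time - opt-in time, as a timedelta
lifetimes = [
    datetime.strptime(unsub, "%Y-%m-%d") - datetime.strptime(opt, "%Y-%m-%d")
    for opt, unsub in records
]

print(min(lifetimes).days)  # shortest lifetime on the list → 0
print(max(lifetimes).days)  # longest lifetime on the list → 365
```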
ContextLab/hypertools
docs/tutorials/normalize.ipynb
mit
import hypertools as hyp import numpy as np %matplotlib inline """ Explanation: Normalization The normalize is a helper function to z-score your data. This is useful if your features (columns) are scaled differently within or across datasets. By default, hypertools normalizes across the columns of all datasets passed, but also affords the option to normalize columns within individual lists. Alternatively, you can also normalize each row. The function returns an array or list of arrays where the columns or rows are z-scored (output type same as input type). Import packages End of explanation """ x1 = np.random.randn(10,10) x2 = np.random.randn(10,10) c1 = np.dot(x1, x1.T) c2 = np.dot(x2, x2.T) m1 = np.zeros([1,10]) m2 = 10 + m1 data1 = np.random.multivariate_normal(m1[0], c1, 100) data2 = np.random.multivariate_normal(m2[0], c2, 100) data = [data1, data2] """ Explanation: Generate synthetic data First, we generate two sets of synthetic data. We pull points randomly from a multivariate normal distribution for each set, so the sets will exhibit unique statistical properties. End of explanation """ geo = hyp.plot(data, '.') """ Explanation: Visualize the data End of explanation """ norm = hyp.normalize(data, normalize = 'across') geo = hyp.plot(norm, '.') """ Explanation: Normalizing (Specified Cols or Rows) Or, to specify a different normalization, pass one of the following arguments as a string, as shown in the examples below. 'across' - columns z-scored across passed lists (default) 'within' - columns z-scored within passed lists 'row' - rows z-scored Normalizing 'across' When you normalize 'across', all of the data is stacked/combined, and the normalization is done on the columns of the full dataset. Then the data is split back into separate elements. 
End of explanation """ norm = hyp.normalize(data, normalize = 'within') geo = hyp.plot(norm, '.') """ Explanation: Normalizing 'within' When you normalize 'within', normalization is done on the columns of each element of the data, separately. End of explanation """ norm = hyp.normalize(data, normalize = 'row') geo = hyp.plot(norm, '.') """ Explanation: Normalizing by 'row' End of explanation """
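Under the hood, z-scoring a column is just (value − column mean) / column standard deviation. A standard-library sketch of that arithmetic on one invented column — shown here with the population standard deviation; a library implementation may use the sample variant instead:

```python
import statistics

def zscore(column):
    # (x - mean) / population standard deviation, applied per column
    mu = statistics.mean(column)
    sigma = statistics.pstdev(column)
    return [(x - mu) / sigma for x in column]

col = [2.0, 4.0, 6.0, 8.0]  # invented sample column
z = zscore(col)

print([round(v, 3) for v in z])  # → [-1.342, -0.447, 0.447, 1.342]
```

The z-scored column has mean 0 and unit spread, which is what puts differently scaled features on a common footing before plotting.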
julienchastang/unidata-python-workshop
notebooks/Bonus/Downloading GFS with Siphon.ipynb
mit
%matplotlib inline from siphon.catalog import TDSCatalog best_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/' 'Global_0p25deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p25deg/Best') best_gfs.datasets """ Explanation: <div style="width:1000 px"> <div style="float:right; width:98 px; height:98px;"> <img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;"> </div> <h1>Using Siphon to query the NetCDF Subset Service</h1> <h3>Unidata Python Workshop</h3> <div style="clear:both"></div> </div> <hr style="height:2px;"> Objectives Learn what Siphon is Employ Siphon's NCSS class to retrieve data from a THREDDS Data Server (TDS) Plot a map using numpy arrays, matplotlib, and cartopy! Introduction: Siphon is a python package that makes downloading data from Unidata data technologies a breeze! In our examples, we'll focus on interacting with the netCDF Subset Service (NCSS) as well as the radar server to retrieve grid data and radar data. But first! Bookmark these resources for when you want to use Siphon later! + latest Siphon documentation + Siphon github repo + TDS documentation + netCDF subset service documentation Let's get started! First, we'll import the TDSCatalog class from Siphon and put the special 'matplotlib' line in so our map will show up later in the notebook. Let's construct an instance of TDSCatalog pointing to our dataset of interest. In this case, I've chosen the TDS' "Best" virtual dataset for the GFS global 0.25 degree collection of GRIB files. This will give us a good resolution for our map. This catalog contains a single dataset. End of explanation """ best_ds = list(best_gfs.datasets.values())[0] ncss = best_ds.subset() """ Explanation: We pull out this dataset and call subset() to set up requesting a subset of the data. 
End of explanation """ query = ncss.query() """ Explanation: We can then use the ncss object to create a new query object, which facilitates asking for data from the server. End of explanation """ ncss.variables """ Explanation: We can look at the ncss.variables object to see what variables are available from the dataset: End of explanation """ from datetime import datetime query.lonlat_box(north=43, south=35, east=260, west=249).time(datetime.utcnow()) query.accept('netcdf4') query.variables('Temperature_surface') """ Explanation: We construct a query asking for data corresponding to a latitude and longitude box where 43 lat is the northern extent, 35 lat is the southern extent, 260 long is the western extent and 249 is the eastern extent. Note that longitude values are the longitude distance from the prime meridian. We request the data for the current time. This request will return all surface temperatures for points in our bounding box for a single time. Note the string representation of the query is a properly encoded query string. End of explanation """ from xarray.backends import NetCDF4DataStore import xarray as xr data = ncss.get_data(query) data = xr.open_dataset(NetCDF4DataStore(data)) list(data) """ Explanation: We now request data from the server using this query. The NCSS class handles parsing this NetCDF data (using the netCDF4 module). If we print out the variable names, we see our requested variables, as well as a few others (more metadata information) End of explanation """ temp_3d = data['Temperature_surface'] """ Explanation: We'll pull out the temperature variable. 
End of explanation """ # Helper function for finding proper time variable def find_time_var(var, time_basename='time'): for coord_name in var.coords: if coord_name.startswith(time_basename): return var.coords[coord_name] raise ValueError('No time variable found for ' + var.name) time_1d = find_time_var(temp_3d) lat_1d = data['lat'] lon_1d = data['lon'] time_1d """ Explanation: We'll pull out the useful variables for latitude, and longitude, and time (which is the time, in hours since the forecast run). Notice the variable names are labeled to show how many dimensions each variable is. This will come in to play soon when we prepare to plot. Try printing one of the variables to see some info on the data! End of explanation """ import numpy as np from netCDF4 import num2date from metpy.units import units # Reduce the dimensions of the data and get as an array with units temp_2d = temp_3d.metpy.unit_array.squeeze() # Combine latitude and longitudes lon_2d, lat_2d = np.meshgrid(lon_1d, lat_1d) """ Explanation: Now we make our data suitable for plotting. We'll import numpy so we can combine lat/longs (meshgrid) and remove one-dimensional entities from our arrays (squeeze). Also we'll use netCDF4's num2date to change the time since the model run to an actual date. 
End of explanation """ import matplotlib.pyplot as plt import cartopy.crs as ccrs import cartopy.feature as cfeature from metpy.plots import ctables # Create a new figure fig = plt.figure(figsize=(15, 12)) # Add the map and set the extent ax = plt.axes(projection=ccrs.PlateCarree()) ax.set_extent([-100.03, -111.03, 35, 43]) # Retrieve the state boundaries using cFeature and add to plot ax.add_feature(cfeature.STATES, edgecolor='gray') # Contour temperature at each lat/long contours = ax.contourf(lon_2d, lat_2d, temp_2d.to('degF'), 200, transform=ccrs.PlateCarree(), cmap='RdBu_r') #Plot a colorbar to show temperature and reduce the size of it fig.colorbar(contours) # Make a title with the time value ax.set_title(f'Temperature forecast (\u00b0F) for {time_1d[0].values}Z', fontsize=20) # Plot markers for each lat/long to show grid points for 0.25 deg GFS ax.plot(lon_2d.flatten(), lat_2d.flatten(), linestyle='none', marker='o', color='black', markersize=2, alpha=0.3, transform=ccrs.PlateCarree()); """ Explanation: Now we can plot these up using matplotlib. We import cartopy and matplotlib classes, create our figure, add a map, then add the temperature data and grid points. End of explanation """
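The meshgrid/squeeze reshaping used above can be exercised offline on synthetic arrays. This is a standalone sketch with made-up coordinate values (no THREDDS server involved), and the manual kelvin-to-Fahrenheit arithmetic stands in for the .to('degF') conversion that pint/metpy provides:

```python
import numpy as np

# Made-up 1-D coordinate arrays standing in for data['lat'] / data['lon']
lat_vals = np.linspace(35, 43, 5)
lon_vals = np.linspace(249, 260, 7)

# A (time, lat, lon) temperature cube with a singleton time axis, in kelvin
temps = 280.0 + np.zeros((1, 5, 7))

# Same pattern as above: 2-D coordinate arrays for contourf, squeeze out time
lon_grid, lat_grid = np.meshgrid(lon_vals, lat_vals)
temp_grid = temps.squeeze()
assert lon_grid.shape == lat_grid.shape == temp_grid.shape == (5, 7)

# Manual kelvin -> Fahrenheit, mirroring temp_2d.to('degF')
temp_f = (temp_grid - 273.15) * 9 / 5 + 32
```

The key invariant is that the 2-D coordinate arrays and the 2-D field share one shape, which is exactly what contourf expects.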
4dsolutions/Python5
Computing Volumes.ipynb
mit
import math xyz_volume = math.sqrt(2)**3 ivm_volume = 3 print("XYZ units:", xyz_volume) print("IVM units:", ivm_volume) print("Conversion constant:", ivm_volume/xyz_volume) """ Explanation: Synergetics<br/>Oregon Curriculum Network <h3 align="center">Computing Volumes in XYZ and IVM units</h3> <h4 align="center">by Kirby Urner, July 2016</h4> A cube is composed of 24 identical not-regular tetrahedrons, each with a corner at the cube's center, an edge from cube's center to a face center, and two more to adjacent cube corners on that face, defining six edges in all (Fig. 1). If we define the cube's edges to be √2 then the whole cube would have volume √2 * √2 * √2 in XYZ units. However, in IVM units, the very same cube has a volume of 3, owing to the differently-shaped volume unit, a tetrahedron of edges 2, inscribed in this same cube. Fig. 986.210 from Synergetics: Those lengths would be in R-units, where R is the radius of a unit sphere. In D-units, twice as long (D = 2R), the tetrahedron has edges 1 and the cube has edges √2/2. By XYZ we mean the XYZ coordinate system of René Descartes (1596 – 1650). By IVM we mean the "octet-truss", a space-frame consisting of tetrahedrons and octahedrons in a space-filling matrix, with twice as many tetrahedrons as octahedrons. The tetrahedron and octahedron have relative volumes of 1:4. The question then becomes, how to superimpose the two. The canonical solution is to start with unit-radius balls (spheres) of radius R. R = 1 in other words, whereas D, the diameter, is 2. Alternatively, we may set D = 1 and R = 0.5, keeping the same 2:1 ratio for D:R. The XYZ cube has edges R, whereas the IVM tetrahedron has edges D. That relative sizing convention brings their respective volumes fairly close together, with the cube's volume exceeding the tetrahedron's by about six percent. 
End of explanation """ from math import sqrt as rt2 from qrays import Qvector, Vector R =0.5 D =1.0 S3 = pow(9/8, 0.5) root2 = rt2(2) root3 = rt2(3) root5 = rt2(5) root6 = rt2(6) PHI = (1 + root5)/2.0 class Tetrahedron: """ Takes six edges of tetrahedron with faces (a,b,d)(b,c,e)(c,a,f)(d,e,f) -- returns volume in ivm and xyz units """ def __init__(self, a,b,c,d,e,f): self.a, self.a2 = a, a**2 self.b, self.b2 = b, b**2 self.c, self.c2 = c, c**2 self.d, self.d2 = d, d**2 self.e, self.e2 = e, e**2 self.f, self.f2 = f, f**2 def ivm_volume(self): ivmvol = ((self._addopen() - self._addclosed() - self._addopposite())/2) ** 0.5 return ivmvol def xyz_volume(self): xyzvol = rt2(8/9) * self.ivm_volume() return xyzvol def _addopen(self): a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2 sumval = f2*a2*b2 sumval += d2 * a2 * c2 sumval += a2 * b2 * e2 sumval += c2 * b2 * d2 sumval += e2 * c2 * a2 sumval += f2 * c2 * b2 sumval += e2 * d2 * a2 sumval += b2 * d2 * f2 sumval += b2 * e2 * f2 sumval += d2 * e2 * c2 sumval += a2 * f2 * e2 sumval += d2 * f2 * c2 return sumval def _addclosed(self): a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2 sumval = a2 * b2 * d2 sumval += d2 * e2 * f2 sumval += b2 * c2 * e2 sumval += a2 * c2 * f2 return sumval def _addopposite(self): a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2 sumval = a2 * e2 * (a2 + e2) sumval += b2 * f2 * (b2 + f2) sumval += c2 * d2 * (c2 + d2) return sumval def make_tet(v0,v1,v2): """ three edges from any corner, remaining three edges computed """ tet = Tetrahedron(v0.length(), v1.length(), v2.length(), (v0-v1).length(), (v1-v2).length(), (v2-v0).length()) return tet.ivm_volume(), tet.xyz_volume() tet = Tetrahedron(D, D, D, D, D, D) print(tet.ivm_volume()) """ Explanation: The Python code below encodes a Tetrahedron type based solely on its six edge lengths. The code makes no attempt to determine the consequent angles. 
A complicated volume formula, mined from the history books and streamlined by mathematician Gerald de Jong, outputs the volume of said tetrahedron in both IVM and XYZ units. <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> The unittests that follow assure it's producing the expected results. The formula bears great resemblance to the one by Piero della Francesca. End of explanation """ import unittest from qrays import Vector, Qvector class Test_Tetrahedron(unittest.TestCase): def test_unit_volume(self): tet = Tetrahedron(D, D, D, D, D, D) self.assertEqual(tet.ivm_volume(), 1, "Volume not 1") def test_e_module(self): e0 = D e1 = root3 * PHI**-1 e2 = rt2((5 - root5)/2) e3 = (3 - root5)/2 e4 = rt2(5 - 2*root5) e5 = 1/PHI tet = Tetrahedron(e0, e1, e2, e3, e4, e5) self.assertTrue(1/23 > tet.ivm_volume()/8 > 1/24, "Wrong E-mod") def test_unit_volume2(self): tet = Tetrahedron(R, R, R, R, R, R) self.assertAlmostEqual(float(tet.xyz_volume()), 0.117851130) def test_phi_edge_tetra(self): tet = Tetrahedron(D, D, D, D, D, PHI) self.assertAlmostEqual(float(tet.ivm_volume()), 0.70710678) def test_right_tetra(self): e = pow((root3/2)**2 + (root3/2)**2, 0.5) # right tetrahedron tet = Tetrahedron(D, D, D, D, D, e) self.assertAlmostEqual(tet.xyz_volume(), 1) def test_quadrant(self): qA = Qvector((1,0,0,0)) qB = Qvector((0,1,0,0)) qC = Qvector((0,0,1,0)) tet = make_tet(qA, qB, qC) self.assertAlmostEqual(tet[0], 0.25) def test_octant(self): x = Vector((0.5, 0, 0)) y = Vector((0 , 0.5, 0)) z = Vector((0 , 0 , 0.5)) tet = make_tet(x,y,z) self.assertAlmostEqual(tet[1], 1/6, 5) # good to 5 places def test_quarter_octahedron(self): a = Vector((1,0,0)) b = Vector((0,1,0)) c = Vector((0.5,0.5,root2/2)) tet = 
make_tet(a, b, c) self.assertAlmostEqual(tet[0], 1, 5) # good to 5 places def test_xyz_cube(self): a = Vector((0.5, 0.0, 0.0)) b = Vector((0.0, 0.5, 0.0)) c = Vector((0.0, 0.0, 0.5)) R_octa = make_tet(a,b,c) self.assertAlmostEqual(6 * R_octa[1], 1, 4) # good to 4 places def test_s3(self): D_tet = Tetrahedron(D, D, D, D, D, D) a = Vector((0.5, 0.0, 0.0)) b = Vector((0.0, 0.5, 0.0)) c = Vector((0.0, 0.0, 0.5)) R_cube = 6 * make_tet(a,b,c)[1] self.assertAlmostEqual(D_tet.xyz_volume() * S3, R_cube, 4) def test_martian(self): p = Qvector((2,1,0,1)) q = Qvector((2,1,1,0)) r = Qvector((2,0,1,1)) result = make_tet(5*q, 2*p, 2*r) self.assertAlmostEqual(result[0], 20, 7) def test_phi_tet(self): "edges from common vertex: phi, 1/phi, 1" p = Vector((1, 0, 0)) q = Vector((1, 0, 0)).rotz(60) * PHI r = Vector((0.5, root3/6, root6/3)) * 1/PHI result = make_tet(p, q, r) self.assertAlmostEqual(result[0], 1, 7) def test_phi_tet_2(self): p = Qvector((2,1,0,1)) q = Qvector((2,1,1,0)) r = Qvector((2,0,1,1)) result = make_tet(PHI*q, (1/PHI)*p, r) self.assertAlmostEqual(result[0], 1, 7) def test_phi_tet_3(self): T = Tetrahedron(PHI, 1/PHI, 1.0, root2, root2/PHI, root2) result = T.ivm_volume() self.assertAlmostEqual(result, 1, 7) def test_koski(self): a = 1 b = PHI ** -1 c = PHI ** -2 d = (root2) * PHI ** -1 e = (root2) * PHI ** -2 f = (root2) * PHI ** -1 T = Tetrahedron(a,b,c,d,e,f) result = T.ivm_volume() self.assertAlmostEqual(result, PHI ** -3, 7) a = Test_Tetrahedron() R =0.5 D =1.0 suite = unittest.TestLoader().loadTestsFromModule(a) unittest.TextTestRunner().run(suite) """ Explanation: The make_tet function takes three vectors from a common corner, in terms of vectors with coordinates, and computes the remaining missing lengths, thereby getting the information it needs to use the Tetrahedron class as before. 
End of explanation """ a = 2 b = 4 c = 5 d = 3.4641016151377544 e = 4.58257569495584 f = 4.358898943540673 tetra = Tetrahedron(a,b,c,d,e,f) print("IVM volume of tetra:", round(tetra.ivm_volume(),5)) """ Explanation: <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/41211295565/in/album-72157624750749042/" title="Martian Multiplication"><img src="https://farm1.staticflickr.com/907/41211295565_59145e2f63.jpg" width="500" height="312" alt="Martian Multiplication"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> The above tetrahedron has a=2, b=2, c=5, for a volume of 20. The remaining three lengths have not been computed as it's sufficient to know only a, b, c if the angles between them are those of the regular tetrahedron. That's how IVM volume is computed: multiply a * b * c from a regular tetrahedron corner, then "close the lid" to see the volume. End of explanation """ b = rt2(2)/4 a = c = rt2(3/8) d = e = 0.5 f = rt2(2)/2 mite = Tetrahedron(a, b, c, d, e, f) print("IVM volume of Mite:", round(mite.ivm_volume(),5)) print("XYZ volume of Mite:", round(mite.xyz_volume(),5)) """ Explanation: Lets define a MITE, one of these 24 identical space-filling tetrahedrons, with reference to D=1, R=0.5, as this is how our Tetrahedron class is calibrated. The cubes 12 edges will all be √2/2. Edges 'a' 'b' 'c' fan out from the cube center, with 'b' going up to a face center, with 'a' and 'c' to adjacent ends of the face's edge. From the cube's center to mid-face is √2/4 (half an edge), our 'b'. 'a' and 'c' are both half the cube's body diagonal of √(3/2)/2 or √(3/8). Edges 'd', 'e' and 'f' define the facet opposite the cube's center. 'd' and 'e' are both half face diagonals or 0.5, whereas 'f' is a cube edge, √2/2. 
This gives us our tetrahedron: End of explanation """ regular = Tetrahedron(0.5, 0.5, 0.5, 0.5, 0.5, 0.5) print("MITE volume in XYZ units:", round(regular.xyz_volume(),5)) print("XYZ volume of 24-Mite Cube:", round(24 * regular.xyz_volume(),5)) """ Explanation: Allowing for floating point error, this space-filling right tetrahedron has a volume of 0.125 or 1/8. Since 24 of them form a cube, said cube has a volume of 3. The XYZ volume, on the other hand, is what we'd expect from a regular tetrahedron of edges 0.5 in the current calibration system. End of explanation """ from math import sqrt as rt2 from tetravolume import make_tet, Vector ø = (rt2(5)+1)/2 e0 = Black_Yellow = rt2(3)*ø**-1 e1 = Black_Blue = 1 e3 = Yellow_Blue = (3 - rt2(5))/2 e6 = Black_Red = rt2((5 - rt2(5))/2) e7 = Blue_Red = 1/ø # E-mod is a right tetrahedron, so xyz is easy v0 = Vector((Black_Blue, 0, 0)) v1 = Vector((Black_Blue, Yellow_Blue, 0)) v2 = Vector((Black_Blue, 0, Blue_Red)) # assumes R=0.5 so computed result is 8x needed # volume, ergo divide by 8. ivm, xyz = make_tet(v0,v1,v2) print("IVM volume:", round(ivm/8, 5)) print("XYZ volume:", round(xyz/8, 5)) """ Explanation: The MITE (minimum tetrahedron) further dissects into component modules, a left and right A module, then either a left or right B module. Outwardly, the positive and negative MITEs look the same. Here are some drawings from R. Buckminster Fuller's research, the chief popularizer of the A and B modules. In a different Jupyter Notebook, we could run these tetrahedra through our volume computer to discover both As and Bs have a volume of 1/24 in IVM units. Instead, lets take a look at the E-module and compute its volume. <br /> The black hub is at the center of the RT, as shown here... 
<br /> <div style="text-align: center"> <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24971714468/in/dateposted-public/" title="E module with origin"><img src="https://farm5.staticflickr.com/4516/24971714468_46e14ce4b5_z.jpg" width="640" height="399" alt="E module with origin"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <b>RT center is the black hub (Koski with vZome)</b> </div> End of explanation """
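For quick numeric checks, the de Jong formula at the heart of the Tetrahedron class can be condensed into a single function. This is a restatement of the same arithmetic with the same face convention (a,b,d)(b,c,e)(c,a,f)(d,e,f), meant as a sketch for spot checks rather than a replacement for the unit-tested class above:

```python
from math import sqrt

def ivm_volume(a, b, c, d, e, f):
    # Gerald de Jong's edge-based volume formula, restated compactly.
    # Face convention matches the Tetrahedron class: (a,b,d)(b,c,e)(c,a,f)(d,e,f).
    A, B, C, D, E, F = (x * x for x in (a, b, c, d, e, f))
    open_sum = (F*A*B + D*A*C + A*B*E + C*B*D + E*C*A + F*C*B
                + E*D*A + B*D*F + B*E*F + D*E*C + A*F*E + D*F*C)
    closed_sum = A*B*D + D*E*F + B*C*E + A*C*F
    opposite_sum = A*E*(A + E) + B*F*(B + F) + C*D*(C + D)
    return sqrt((open_sum - closed_sum - opposite_sum) / 2)

assert abs(ivm_volume(1, 1, 1, 1, 1, 1) - 1) < 1e-12   # unit-edge regular tetrahedron
assert abs(ivm_volume(2, 2, 2, 2, 2, 2) - 8) < 1e-12   # doubling edges scales volume by 8
```

Doubling all six edges multiplies the volume by 8, a handy sanity check in either unit system.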
pmgbergen/porepy
tutorials/mpfa.ipynb
gpl-3.0
import numpy as np
import porepy as pp

# Create grid
n = 5
g = pp.CartGrid([n,n])
g.compute_geometry()

# Define boundary type
dirich = np.ravel(np.argwhere(g.face_centers[1] < 1e-10))
bound = pp.BoundaryCondition(g, dirich, 'dir')

# Create permeability matrix
k = np.ones(g.num_cells)
perm = pp.SecondOrderTensor(k)
"""
Explanation: Multi-point flux approximation (MPFA)
PorePy supports MPFA discretization of the Darcy flow problem:
\begin{equation}
q = -K \nabla p,\quad \nabla \cdot q = f
\end{equation}
We can write this as a single equation for the pressure:
\begin{equation}
-\nabla \cdot K\nabla p = f
\end{equation}
For each control volume $\Omega_k$ we then have
\begin{equation}
\int_{\Omega_k} f \, dv = -\int_{\partial\Omega_k} (K\nabla p)\cdot n \, dA,
\end{equation}
To solve this system we first have to create the grid. Then we need to define the boundary conditions. We set the bottom boundary as a Dirichlet boundary; the other boundaries are set to Neumann. We also need to create the permeability tensor:
End of explanation
"""
top_faces = np.ravel(np.argwhere(g.face_centers[1] > n - 1e-10))
bot_faces = np.ravel(np.argwhere(g.face_centers[1] < 1e-10))

p_b = np.zeros(g.num_faces)
p_b[top_faces] = -1 * g.face_areas[top_faces]
p_b[bot_faces] = 0
"""
Explanation: We now define the boundary conditions. We set zero pressure on the bottom boundary, and a constant inflow on the top boundary. Note that the value imposed on a Neumann boundary is the discharge, not the flux. To get the discharge for each boundary face we scale the flux by the face areas.
End of explanation
"""
mpfa_solver = pp.Mpfa("flow")

f = np.zeros(g.num_cells)
specified_parameters = {"second_order_tensor": perm, "source": f, "bc": bound, "bc_values": p_b}
data = pp.initialize_default_data(g, {}, "flow", specified_parameters)

mpfa_solver.discretize(g, data)
A, b = mpfa_solver.assemble_matrix_rhs(g, data)

p_class = np.linalg.solve(A.A, b)
pp.plot_grid(g, cell_value=p_class, figsize=(15, 12))
"""
Explanation: We can now solve this problem using the Mpfa class. We assume no sources or sinks, $f=0$:
End of explanation
"""
flux = data[pp.DISCRETIZATION_MATRICES]["flow"]["flux"]
bound_flux = data[pp.DISCRETIZATION_MATRICES]["flow"]["bound_flux"]
print("flux matrix shape: {}".format(flux.shape))
print("bound_flux matrix shape: {} ".format(bound_flux.shape))
"""
Explanation: To understand what goes on under the hood of the Mpfa class, we can also create the lhs and rhs manually. Mpfa.discretize() stores the flux discretization as two sparse matrices "flux" and "bound_flux" in the data dictionary:
End of explanation
"""
div = pp.fvutils.scalar_divergence(g)
A = div * flux
b = f - div * bound_flux * p_b

p = np.linalg.solve(A.A, b)
"""
Explanation: They give the discretization of the fluxes over each face:
\begin{equation}
F = \text{flux} \cdot p + \text{bound_flux} \cdot p_b
\end{equation}
Here $p$ is a vector of cell-center pressures and has length g.num_cells. The vector $p_b$ holds the boundary condition values. It is the pressure for Dirichlet boundaries and the flux for Neumann boundaries, and has length g.num_faces. We are now ready to set up the linear system of equations and solve it. We assume no sources or sinks, $f = 0$.
Each row in the discretized system is now
\begin{equation}
\int_{\Omega_k} f dv = \int_{\partial\Omega_k} F dA = [div \cdot \text{flux} \cdot p + div\cdot\text{bound_flux}\cdot p_b]_k,
\end{equation}
We move the known boundary variable $p_b$ over to the right hand side and solve the system:
End of explanation
"""
assert np.allclose(p, p_class)
"""
Explanation: This gives us the same results as using the Mpfa class:
End of explanation
"""
pp.plot_grid(g, cell_value=p, figsize=(15, 12))
"""
Explanation: We can also plot the pressure.
End of explanation
"""
F = flux * p + bound_flux * p_b

neumann_faces = np.argwhere(bound.is_neu)
assert np.allclose(np.abs(p_b[neumann_faces]), np.abs(F[neumann_faces]))

F_n = F * g.face_normals

pp.plot_grid(g, vector_value=F_n, figsize=(15, 12))

# We now test the solution (mostly for debugging purposes)
p_exact = g.cell_centers[1]
assert np.allclose(p, p_exact)
"""
Explanation: We can also retrieve the flux for each face.
End of explanation
"""
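For intuition about the div * flux assembly used above: on a Cartesian grid with a scalar permeability the multi-point stencil reduces in practice to two-point fluxes, and the same pattern can be sketched in one dimension with plain NumPy. This is a standalone illustration, independent of PorePy, with hypothetical grid values:

```python
import numpy as np

# 1-D grid: n cells of width h, unit permeability, Dirichlet pressures at both ends
n, k = 5, 1.0
h = 1.0 / n
p_left, p_right = 0.0, 1.0

A = np.zeros((n, n))
b = np.zeros(n)

# Interior faces: two-point transmissibility k/h between neighbouring cells
T = k / h
for i in range(n - 1):
    A[i, i] += T
    A[i + 1, i + 1] += T
    A[i, i + 1] -= T
    A[i + 1, i] -= T

# Dirichlet boundary faces: half-cell distance h/2 gives transmissibility 2k/h
Tb = 2 * k / h
A[0, 0] += Tb
b[0] += Tb * p_left
A[-1, -1] += Tb
b[-1] += Tb * p_right

p = np.linalg.solve(A, b)
centers = (np.arange(n) + 0.5) * h
assert np.allclose(p, centers)  # pressure varies linearly between the boundary values
```

The assert holds because a two-point scheme reproduces linear pressure fields exactly when the permeability is uniform.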
scottprahl/miepython
docs/09_backscattering.ipynb
mit
#!pip install --user miepython import numpy as np import matplotlib.pyplot as plt try: import miepython except ModuleNotFoundError: print('miepython not installed. To install, uncomment and run the cell above.') print('Once installation is successful, rerun this cell again.') """ Explanation: Backscattering Efficiency Validation Scott Prahl Apr 2021 If miepython is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter) End of explanation """ print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(1.55, 0.0) x = 2*3.1415926535*0.525/0.6328 ref = 2.92534 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) m=complex(0.0, -1000.0) x=0.099 ref = (4.77373E-07*4.77373E-07 + 1.45416E-03*1.45416E-03)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=0.101 ref = (5.37209E-07*5.37209E-07 + 1.54399E-03*1.54399E-03)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=100 ref = (4.35251E+01*4.35251E+01 + 2.45587E+01*2.45587E+01)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=10000 ref = abs(2.91013E+03-4.06585E+03*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.2f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) print() """ Explanation: Wiscombe tests Since the backscattering efficiency is $|2S_1(-180^\circ)/x|^2$, it is easy to see that that backscattering should be the best comparison. For example, the asymmetry factor for this test case only has three significant digits and the scattering efficiency only has two! 
A typical test result looks like this: ``` MIEV0 Test Case 12: Refractive index: real 1.500 imag -1.000E+00, Mie size parameter = 0.055 NUMANG = 7 angles symmetric about 90 degrees Angle Cosine S-sub-1 S-sub-2 Intensity Deg of Polzn 0.00 1.000000 7.67526E-05 8.34388E-05 7.67526E-05 8.34388E-05 1.28530E-08 0.0000 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 30.00 0.866025 7.67433E-05 8.34349E-05 6.64695E-05 7.22517E-05 1.12447E-08 -0.1428 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 60.00 0.500000 7.67179E-05 8.34245E-05 3.83825E-05 4.16969E-05 8.02857E-09 -0.5999 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 90.00 0.000000 7.66833E-05 8.34101E-05 3.13207E-08 -2.03740E-08 6.41879E-09 -1.0000 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 120.00 -0.500000 7.66486E-05 8.33958E-05 -3.83008E-05 -4.17132E-05 8.01841E-09 -0.6001 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 150.00 -0.866025 7.66233E-05 8.33853E-05 -6.63499E-05 -7.22189E-05 1.12210E-08 -0.1429 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 180.00 -1.000000 7.66140E-05 8.33814E-05 -7.66140E-05 -8.33814E-05 1.28222E-08 0.0000 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) Angle S-sub-1 T-sub-1 T-sub-2 0.00 7.67526E-05 8.34388E-05 3.13207E-08 -2.03740E-08 7.67213E-05 8.34592E-05 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) 180.00 7.66140E-05 8.33814E-05 3.13207E-08 -2.03740E-08 7.66453E-05 8.33611E-05 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) Efficiency factors for Asymmetry Extinction Scattering Absorption Factor 0.101491 0.000011 0.101480 0.000491 ( 1.000000) ( 1.000000) ( 1.000000) ( 1.000000) ``` Perfectly conducting spheres End of explanation """ print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(0.75, 0.0) x=0.099 ref = (1.81756E-08*1.81756E-08 + 1.64810E-04*1.64810E-04)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) 
print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=0.101 ref = (2.04875E-08*2.04875E-08 + 1.74965E-04*1.74965E-04)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=10.0 ref = (1.07857E+00*1.07857E+00 + 3.60881E-02*3.60881E-02)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=1000.0 ref = (1.70578E+01*1.70578E+01 + 4.84251E+02* 4.84251E+02)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) print() """ Explanation: Spheres with a smaller refractive index than their environment End of explanation """ print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(1.5, 0) x=10 ref = abs(4.322E+00 + 4.868E+00*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=100 ref = abs(4.077E+01 + 5.175E+01*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=1000 ref = abs(5.652E+02 + 1.502E+03*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) print() """ Explanation: Non-absorbing spheres End of explanation """ print(" old") print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(1.33, -0.00001) x=1 ref = (2.24362E-02*2.24362E-02 + 1.43711E-01*1.43711E-01)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=100 ref = (5.65921E+01*5.65921E+01 + 4.65097E+01*4.65097E+01)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) 
x=10000 ref = abs(-1.82119E+02 -9.51912E+02*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.5f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) print() """ Explanation: Water droplets End of explanation """ print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(1.5, -1.0) x=0.055 ref = abs(7.66140E-05 + 8.33814E-05*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=0.056 ref = (8.08721E-05*8.08721E-05 + 8.80098E-05*8.80098E-05)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=1.0 ref = (3.48844E-01*3.48844E-01 + 1.46829E-01*1.46829E-01)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=100.0 ref = (2.02936E+01*2.02936E+01 + 4.38444E+00*4.38444E+00)/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=10000 ref = abs(-2.18472E+02 -2.06461E+03*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) print() """ Explanation: Moderately absorbing spheres End of explanation """ print(" miepython Wiscombe") print(" X m.real m.imag Qback Qback ratio") m=complex(10, -10.0) x=1 ref = abs(4.48546E-01 + 7.91237E-01*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=100 ref = abs(-4.14538E+01 -1.82181E+01*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % (x,m.real,m.imag,qback,ref,qback/ref)) x=10000 ref = abs(2.25248E+03 -3.92447E+03*1j)**2/x/x*4 qext, qsca, qback, g = miepython.mie(m,x) print("%9.3f % 8.4f % 8.4f % 8e % 8e %8.5f" % 
(x,m.real,m.imag,qback,ref,qback/ref))
"""
Explanation: Spheres with really big index of refraction
End of explanation
"""
x = np.logspace(1, 5, 20)  # also in microns

kappa=1
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)

qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')

kappa=0.001
m = 1.5 - kappa*1j
R = abs(m-1)**2/abs(m+1)**2
Qbig = R * np.ones_like(x)

qext, qsca, qback, g = miepython.mie(m,x)
plt.semilogx(x, qback, '+')
plt.semilogx(x, Qbig, ':')
plt.text(x[-1],Qbig[-1],"$\kappa$=%.3f" % kappa,va="bottom",ha='right')

plt.ylim(0,0.2)
plt.title("Backscattering Efficiency for m=1.5 - i $\kappa$")
plt.xlabel("Size Parameter")
plt.ylabel("$Q_{back}$")
plt.grid()
"""
Explanation: Backscattering Efficiency for Large Absorbing Spheres
For large spheres with absorption, the backscattering efficiency should just equal the Fresnel reflectance for normally incident light on a planar surface, $R = |m-1|^2/|m+1|^2$.
End of explanation
"""
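The dotted reference lines in the last plot come from the normal-incidence reflectance, the same expression assigned to R in the code; it is cheap to evaluate on its own as a standalone check:

```python
# Normal-incidence reflectance of a planar surface, the limit that
# Qback approaches for large absorbing spheres.
def fresnel_normal(m):
    return abs(m - 1) ** 2 / abs(m + 1) ** 2

R_strong = fresnel_normal(1.5 - 1.0j)    # kappa = 1
R_weak = fresnel_normal(1.5 - 0.001j)    # kappa = 0.001
assert abs(R_strong - 1.25 / 7.25) < 1e-12   # about 0.1724
assert 0.039 < R_weak < 0.041                # close to the lossless value 0.04
```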
joonasfo/python
Assignment_05.ipynb
mit
# Initial import statements
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
import numpy as np
from matplotlib.pyplot import *
from numpy import *
from numpy.linalg import *
"""
Explanation: Assignment: 05 LU decomposition etc.
Introduction to Numerical Problem Solving, Spring 2017
19.2.2017, Joonas Forsberg<br />
Helsinki Metropolia University of Applied Sciences
End of explanation
"""
from scipy.linalg import lu_factor, lu_solve

# Create a function which can be used later if needed
def lu_decomp1(A, b):
    # Solve by using lu_factor and lu_solve
    PLU = lu_factor(A)
    x = lu_solve(PLU, b)
    return x

# Create variables
A = np.matrix(((1, 4, 1), (1, 6, -1), (2, -1, 2)))
b = np.array(([7, 13, 5]))

x = lu_decomp1(A, b)
print(dot(inv(A), b))
print("Result = {}".format(x))
"""
Explanation: Problem 3
Write a Python program that solves $Ax = b$ using LU decomposition. Use the functions <i>lu_factor</i> and <i>lu_solve</i> from the <i>scipy.linalg</i> package.
$$ A = \begin{bmatrix} 1 & 4 & 1 \ 1 & 6 & -1 \ 2 & -1 & 2 \end{bmatrix}B = \begin{bmatrix} 7 \ 13 \ 5 \end{bmatrix}$$
Solution
We can use the functions lu_factor and lu_solve from scipy.linalg to solve the problem by LU decomposition. The function lu_factor returns lu (N,N), a matrix containing U in its upper triangle and L in its lower triangle. The function also returns piv (N,), which represents the permutation matrix P. In the function <i>lu_decomp1(A, b)</i>, the result of lu_factor is saved into a variable (PLU) which is later passed to the lu_solve(PLU, b) function call, which gives us the solution of $Ax = b$.
An alternative method for solving the problem would be to use the "lu" function, which unpacks the matrices into separate variables; this could be useful if you need to modify the variables or you don't want to use lu_solve to calculate the end result.
The expected result is: $[5, 1, -2]$
The expected result is: $[5.5,0.9,-2.1]$ End of explanation """ A = np.array([[5, -3, -1, 0], [-2, 1, 1, 1], [3, -5, 1, 2], [0, 8, -4, -3]]) B = np.array(([1, 3, -9, 6, 4], [2, -1, 6, 7, 1], [3, 2, -3, 15, 5], [8, -1, 1, 4, 2], [11, 1, -2, 18, 7])) ainv = inv(A) binv = inv(B) print("Inverse of A:\n {}".format(ainv)) print("\nInverse of B:\n {}".format(binv)) """ Explanation: Problem 6 Invert the following matrices with any method $$ A = \begin{bmatrix} 5 & -3 & -1 & 0 \ -2 & 1 & 1 & 1 \ 3 & -5 & 1 & 2 \ 0 & 8 & -4 & -3 \end{bmatrix} B = \begin{bmatrix} 1 & 3 & -9 & 6 & 4 \ 2 & -1 & 6 & 7 & 1 \ 3 & 2 & -3 & 15 & 5 \ 8 & -1 & 1 & 4 & 2 \ 11 & 1 & -2 & 18 & 7 \end{bmatrix}$$ Comment on the reliability of the results. Solution Probably the simplest way to inverse the given matrices is to use inv() function from the numpy.linalg package. Inv() function returns inverse of the matrix given as a parameter in the function call. End of explanation """ print("Determinant of A: {}".format(np.linalg.det(A))) print("Determinant of B: {}".format(np.linalg.det(B))) """ Explanation: Reliability The result of matrix A is correct and the results have 16 decimal precision. In this case, the matrix B is also correctly inverted. However, the determinent is close to zero and if we would be reducing the precision to be less than it's now, we would not be able to invert the matrix. 
End of explanation
"""
import math

def gaussSeidel(A, b):
    omega = 1.1

    # Amount of iterations
    p = 1000

    # Define tolerance
    tol = 1.0e-9

    n = len(b)
    x = np.zeros(n)

    # Generate array based on starting vector
    for y in range(n):
        x[y] = b[y]/A[y, y]

    # Iterate p times
    for k in range(p):
        xOld = x.copy()
        for i in range(n):
            s = 0
            for j in range(n):
                if j != i:
                    s = s + A[i, j] * x[j]
            x[i] = omega/A[i, i] * (b[i] - s) + (1 - omega)*x[i]
        # Break execution if we are within the tolerance needed
        dx = math.sqrt(np.dot(x-xOld,x-xOld))
        if dx < tol:
            return x
    return x

A = np.array(([4.0, -1, 0, 0],
              [-1, 4, -1, 0],
              [0, -1, 4, -1],
              [0, 0, -1, 3]))

b = np.array(([15.0, 10, 10, 10]))

x = gaussSeidel(A, b)
print("Result = {}".format(x))
"""
Explanation: If you want to invert matrices with small determinant, the solution is to ensure the tolerances are low enough so that the inv() function can invert the matrix.
Problem 9
Use the Gauss-Seidel with relaxation to solve $Ax = b$, where
$$A = \begin{bmatrix} 4 & -1 & 0 & 0 \ -1 & 4 & -1 & 0 \ 0 & -1 & 4 & -1 \ 0 & 0 & -1 & 3 \end{bmatrix} B = \begin{bmatrix} 15 \ 10 \ 10 \ 10 \ \end{bmatrix}$$
Take $x_i = b_i/A_{ii}$ as the starting vector, and use $ω = 1.1$ for the relaxation factor.
Solution
We can use the sample code created during class as a baseline for the exercise. We need to make a couple of modifications to the source code in order to take the value of omega into account. We also want to stop iterating once good enough accuracy is achieved. The required accuracy is defined in the tol variable: the Euclidean norm of the difference between x (the current iterate) and xOld (the previous iterate) is compared against tol, and if the difference is smaller, the value of x has essentially stopped changing, which indicates we are close to the level of accuracy we need.
Expected result is: [ 5. 5. 5. 5.]
End of explanation
"""
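The same relaxation scheme can also be written without NumPy, which makes the sweep order and the over-relaxed update explicit. This is a minimal sketch reusing the Problem 9 system; gaussSeidel above remains the reference implementation:

```python
def sor(A, b, omega=1.1, tol=1.0e-9, max_iter=1000):
    # Gauss-Seidel with relaxation, written with plain Python lists so the
    # sweep order and the over-relaxed update are explicit.
    n = len(b)
    x = [b[i] / A[i][i] for i in range(n)]          # same starting vector as above
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = omega / A[i][i] * (b[i] - s) + (1 - omega) * x[i]
        # Stop once the update is smaller than the tolerance
        if sum((xi - xo) ** 2 for xi, xo in zip(x, x_old)) ** 0.5 < tol:
            break
    return x

A = [[4, -1, 0, 0], [-1, 4, -1, 0], [0, -1, 4, -1], [0, 0, -1, 3]]
b = [15, 10, 10, 10]
x = sor(A, b)
assert all(abs(xi - 5) < 1e-6 for xi in x)    # known solution [5, 5, 5, 5]
```

With ω between 1 and 2 the method over-relaxes each Gauss-Seidel update; ω = 1 recovers plain Gauss-Seidel.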
ML4DS/ML4all
TM2.Topic_Models/TM_py3_NSF/notebook/TM2_TopicModels_student.ipynb
mit
# Common imports
%matplotlib inline
import matplotlib.pyplot as plt
import pylab
import numpy as np
# import pandas as pd
# import os
from os.path import isfile, join
# import scipy.io as sio
# import scipy
import zipfile as zp
# import shutil
# import difflib
import gensim
"""
Explanation: Exploring and understanding documental databases with topic models
Version 1.0
Date: Nov 23, 2017
Authors: Jerónimo Arenas-García (jeronimo.arenas@uc3m.es)
Jesús Cid-Sueiro (jcid@tsc.uc3m.es)
End of explanation
"""
xmlfile = '../data/1600057.xml'

with open(xmlfile,'r') as fin:
    print(fin.read())
"""
Explanation: 1. Corpus acquisition
In this block we will work with collections of text documents. The objectives will be:
Find the most important topics in the collection and assign documents to topics
Analyze the structure of the collection by means of graph analysis
We will work with a collection of research projects funded by the US National Science Foundation, which you can find under the ./data directory. These files are publicly available from the NSF website.
1.1. Exploring file structure
NSF project information is provided in XML files. Projects are grouped by year in .zip files, and each project is saved in a different XML file. To explore the structure of such files, we will use the file 1600057.xml. Parsing XML files in Python is rather easy using the ElementTree module.
1.1.1. File format
To start with, you can have a look at the contents of the example file. We are interested in the following information for each project:
Project identifier
Project Title
Project Abstract
Budget
Starting Year (we will ignore project duration)
Institution (name, zipcode, and state)
End of explanation
"""
import xml.etree.ElementTree as ET

root = ET.fromstring(open(xmlfile,'r').read())
"""
Explanation: 1.1.2. Parsing XML
XML is an inherently hierarchical data format, and the most natural way to represent it is with a tree.
The ElementTree module has two classes for this purpose: ElementTree represents the whole XML document as a tree Element represents a single node in this tree We can import XML data by reading an XML file: End of explanation """ def parse_xmlproject(xml_string): """This function processes the specified XML file, and outputs a dictionary with the desired project information :xml_string: String with XML content :Returns: Dictionary with the indicated fields """ root = ET.fromstring(xml_string) dictio = {} for child in root[0]: if child.tag.lower() == 'awardtitle': dictio['title'] = child.text elif child.tag.lower() == 'awardeffectivedate': dictio['year'] = str(child.text[-4:]) elif child.tag.lower() == 'awardamount': dictio['budget'] = float(child.text) elif child.tag.lower() == 'abstractnarration': dictio['abstract'] = child.text elif child.tag.lower() == 'awardid': dictio['project_code'] = child.text elif child.tag.lower() == 'institution': #For the institution we have to access the children elements #and search for the name, zipcode, and statecode only name = '' zipcode = '' statecode = '' for child2 in child: if child2.tag.lower() == 'name': name = child2.text elif child2.tag.lower() == 'zipcode': zipcode = child2.text elif child2.tag.lower() == 'statecode': statecode = child2.text dictio['institution'] = (name, zipcode, statecode) return dictio parse_xmlproject(open(xmlfile,'r').read()) """ Explanation: The code below implements a function that parses the XML files and provides as its output a dictionary with fields: project_code (string) title (string) abstract (string) budget (float) year (string) institution (tuple with elements: name, zipcode, and statecode) End of explanation """ # Construct an iterator (or a list) for the years you want to work with years = [2015, 2016] datafiles_path = '../data/' NSF_data = [] for year in years: zpobj = zp.ZipFile(join(datafiles_path, str(year)+'.zip')) for fileinzip in zpobj.namelist(): if fileinzip.endswith('xml'): #Some
files seem to be incorrectly parsed try: project_dictio = parse_xmlproject(zpobj.read(fileinzip)) if project_dictio['abstract']: NSF_data.append(project_dictio) except: pass """ Explanation: 1.2. Building the dataset Now, we will use the function you just implemented to create a database that we will use throughout this module. For simplicity, and given that the dataset is not too large, we will keep all projects in the RAM. The dataset will consist of a list containing the dictionaries associated with each of the considered projects in a time interval. End of explanation """ print('Number of projects in dataset:', len(NSF_data)) # Budget budget_data = list(map(lambda x: x['budget'], NSF_data)) print('Average budget of projects in dataset:', np.mean(budget_data)) # Institutions insti_data = list(map(lambda x: x['institution'], NSF_data)) print('Number of unique institutions in dataset:', len(set(insti_data))) # Counts per year counts = dict() for project in NSF_data: counts[project['year']] = counts.get(project['year'],0) + 1 print('Breakdown of projects by starting year:') for el in counts: print(el, ':', counts[el]) """ Explanation: We will extract some characteristics of the constructed dataset: End of explanation """ corpus_raw = list(map(lambda x: x['abstract'], NSF_data)) abstractlen_data = list(map(lambda x: len(x), corpus_raw)) print('Average length of project abstracts (in characters):', np.mean(abstractlen_data)) """ Explanation: For the rest of this notebook, we will work with the abstracts only. The list of all abstracts will be the corpus we will work with. End of explanation """ from nltk import download # You should comment this code fragment if the package is already available. # download('punkt') # download('stopwords') """ Explanation: 2. Corpus Processing Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation.
To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection. Thus, we will proceed with the following steps: Tokenization Homogenization, which includes: Removing capitalization. Removing non-alphanumeric tokens (e.g. punctuation signs) Stemming/Lemmatization. Cleaning Vectorization For the first steps, we will use some of the powerful methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). You must select option "d) Download", and identifier "punkt" End of explanation """ from nltk.tokenize import word_tokenize from nltk.stem import SnowballStemmer, WordNetLemmatizer from nltk.corpus import stopwords # Maybe you can try the stemmer too. # stemmer = SnowballStemmer('english') wnl = WordNetLemmatizer() stopwords_en = stopwords.words('english') # Initialize output corpus corpus_clean = [] ndocs = len(corpus_raw) for n, text in enumerate(corpus_raw): if not n%100: print('\rTokenizing document', n, 'out of', ndocs, end='', flush=True) # Tokenize each text entry. # tokens = <FILL IN> # tokens_filtered = <FILL IN> # tokens_lemmatized = <FILL IN> # tokens_clean = <FILL IN> # Add the new token list as a new element to corpus_clean (that will be a list of lists) # corpus_clean.<FILL IN> print('\n\n The corpus has been tokenized. Check the result for the first abstract:') print(corpus_raw[0]) print(corpus_clean[0]) """ Explanation: 2.1. Corpus Processing We will create a list that contains just the abstracts in the dataset. As the order of the elements in a list is fixed, it will later be straightforward to match the processed abstracts to the metadata associated with their corresponding projects. Exercise 1: Generate a corpus of processed documents. For each document in corpus_raw complete the following steps: 1. Tokenize. 2.
Remove capitalization and non-alphanumeric tokens. 3. Lemmatize. 4. Remove the stopwords using the NLTK stopwords list. End of explanation """ # Create dictionary of tokens D = gensim.corpora.Dictionary(corpus_clean) n_tokens = len(D) print('The dictionary contains', n_tokens, 'terms') print('First terms in the dictionary:') for n in range(10): print(str(n), ':', D[n]) """ Explanation: 2.4. Vectorization Up to this point, we have transformed the raw text collection into a list of documents, where each document is a collection of the words that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library. As a first step, we create a dictionary containing all tokens in our text corpus, assigning an integer identifier to each one of them. End of explanation """ no_below = 5 #Minimum number of documents to keep a term in the dictionary no_above = .75 #Maximum proportion of documents in which a term can appear to be kept in the dictionary D.filter_extremes(no_below=no_below, no_above=no_above, keep_n=25000) n_tokens = len(D) print('The dictionary contains', n_tokens, 'terms') print('First terms in the dictionary:') for n in range(10): print(str(n), ':', D[n]) """ Explanation: We can also filter out terms that appear in too few or too many of the documents in the dataset: End of explanation """ # corpus_bow = <FILL IN> """ Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one for each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
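Since doc2bow is essentially a per-document count of dictionary tokens, its behavior is easy to check on a toy example first; the sketch below reimplements the same idea with the standard library only (toy_doc2bow and the small token2id mapping are illustrative helpers, not part of gensim):

```python
from collections import Counter

def toy_doc2bow(token2id, token_list):
    # Mimics gensim's doc2bow: returns (token_id, count) tuples, sorted by id,
    # ignoring tokens that are not in the dictionary.
    counts = Counter(token_list)
    return sorted((token2id[t], n) for t, n in counts.items() if t in token2id)

token2id = {'data': 0, 'funding': 1, 'project': 2, 'science': 3}
print(toy_doc2bow(token2id, ['science', 'science', 'funding', 'unknown']))
# -> [(1, 1), (3, 2)]  ('unknown' is dropped, 'science' counted twice)
```

The real doc2bow does the same counting against the ids stored in the gensim dictionary D.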
Exercise 2: Apply the doc2bow method from gensim dictionary D to all tokens in every document in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences). End of explanation """ print('Original document (after cleaning):') print(corpus_clean[0]) print('Sparse vector representation (first 10 components):') print(corpus_bow[0][:10]) print('Word counts for the first project (first 10 components):') print(list(map(lambda x: (D[x[0]], x[1]), corpus_bow[0][:10]))) """ Explanation: At this point, it is good to make sure you understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus. After that, we have transformed each article (in corpus_clean) into a list of tuples (id, n). End of explanation """ # SORTED TOKEN FREQUENCIES (I): # Create a "flat" corpus with all tuples in a single list corpus_bow_flat = [item for sublist in corpus_bow for item in sublist] # Initialize a numpy array that we will use to count tokens. # token_count[n] should store the number of occurrences of the n-th token, D[n] token_count = np.zeros(n_tokens) # Count the number of occurrences of each token. for x in corpus_bow_flat: # Update the proper element in token_count # scode: <FILL IN> # Sort by decreasing number of occurrences ids_sorted = np.argsort(- token_count) tf_sorted = token_count[ids_sorted] """ Explanation: Note that we can interpret each element of corpus_bow as a sparse vector. For example, a list of tuples [(0, 1), (3, 3), (5, 2)] for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of the positions must be zero. [1, 0, 0, 3, 0, 2, 0, 0, 0, 0] These sparse vectors will be the inputs to the topic modeling algorithms.
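Expanding such a sparse vector into its dense form takes only a few lines (bow_to_dense is an illustrative helper written here for clarity; gensim.matutils provides equivalent conversion utilities):

```python
def bow_to_dense(sparse_vec, n_tokens):
    # Expand a gensim-style list of (token_id, count) tuples into a dense vector;
    # positions not mentioned in the sparse vector stay at zero.
    dense = [0] * n_tokens
    for token_id, count in sparse_vec:
        dense[token_id] = count
    return dense

print(bow_to_dense([(0, 1), (3, 3), (5, 2)], 10))
# -> [1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
```

This reproduces exactly the worked example above.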
As a summary, the following variables will be relevant for the next chapters: * D: A gensim dictionary. Term strings can be accessed using the numeric identifiers. For instance, D[0] contains the string corresponding to the first position in the BoW representation. * corpus_bow: BoW corpus. A list containing an entry per project in the dataset, and consisting of the (sparse) BoW representation for the abstract of that project. * NSF_data: A list containing an entry per project in the dataset, and consisting of metadata for the projects in the dataset. The way we have constructed the corpus_bow variable guarantees that the order is preserved, so that the projects are listed in the same order in the lists corpus_bow and NSF_data. Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus. End of explanation """ print(D[ids_sorted[0]]) """ Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus.
For instance, the most frequent term is End of explanation """ print("{0} times in the whole corpus".format(tf_sorted[0])) """ Explanation: which appears End of explanation """ # SORTED TOKEN FREQUENCIES (II): plt.rcdefaults() # Example data n_art = len(NSF_data) n_bins = 25 hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]] y_pos = np.arange(len(hot_tokens)) z = tf_sorted[n_bins-1::-1]/n_art plt.figure() plt.barh(y_pos, z, align='center', alpha=0.4) plt.yticks(y_pos, hot_tokens) plt.xlabel('Average number of occurrences per article') plt.title('Token distribution') plt.show() # SORTED TOKEN FREQUENCIES: # Example data plt.figure() plt.semilogy(tf_sorted) plt.ylabel('Total number of occurrences') plt.xlabel('Token rank') plt.title('Token occurrences') plt.show() # cold_tokens = <FILL IN> print("There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format( len(cold_tokens), float(len(cold_tokens))/n_tokens*100)) """ Explanation: In the following we plot the most frequent terms in the corpus. End of explanation """ all_counts = [(D[el], D.dfs[el]) for el in D.dfs] all_counts = sorted(all_counts, key=lambda x: x[1]) """ Explanation: 2.5. Dictionary properties As a final comment, note that gensim dictionaries contain a method dfs to compute the word counts automatically. In the code below we build a list all_counts that contains tuples (terms, document_counts). End of explanation """ num_topics = 50 # This might take some time... # ldag = gensim.models.ldamodel.LdaModel(<FILL IN>) """ Explanation: 3. Topic Modeling There are several implementations of the LDA topic model in python: Python library lda. Gensim module: gensim.models.ldamodel.LdaModel Sci-kit Learn module: sklearn.decomposition In the following sections we explore the use of gensim 3.1. 
Training a topic model using Gensim LDA Since we have already computed the dictionary and the documents' BoW representation using Gensim, computing the topic model is straightforward using the LdaModel() function. Please refer to the Gensim API documentation for more information on the different parameters accepted by the function: Exercise 3: Create an LDA model with 50 topics using corpus_bow and the dictionary, D. End of explanation """ ldag.print_topics(num_topics=-1, num_words=10) """ Explanation: 3.2. LDA model visualization Gensim provides a basic visualization of the obtained topics: End of explanation """ import pyLDAvis.gensim as gensimvis import pyLDAvis vis_data = gensimvis.prepare(ldag, corpus_bow, D) pyLDAvis.display(vis_data) """ Explanation: A more useful visualization is provided by the python LDA visualization library, pyLDAvis. Before executing the next code fragment you might need to install pyLDAvis: >> pip install (--user) pyLDAvis End of explanation """ # <SOL> # </SOL> """ Explanation: 3.3. Gensim utility functions In addition to visualization purposes, topic models are useful to obtain a semantic representation of documents that can later be used for some other purpose: In document classification problems In content-based recommendation systems Essentially, the idea is that the topic model provides a (semantic) vector representation of documents, and uses probability divergences to measure document similarity. The following functions of the LdaModel class will be useful in this context: get_topic_terms(topic_id): Gets the vector of the probability distribution among words for the indicated topic get_document_topics(bow_vector): Gets a (sparse) vector with the probability distribution among topics for the provided document Exercise 4: Show the probability distribution over words for topic 0.
End of explanation """ print(ldag.get_document_topics(corpus_bow[0])) """ Explanation: Exercise 5: Show the probability distribution over topics for document 0. End of explanation """ print(ldag[corpus_bow[0]]) print('When applied to a dataset it will provide an iterator') print(ldag[corpus_bow[:3]]) print('We can rebuild the list from the iterator with a one liner') print([el for el in ldag[corpus_bow[:3]]]) """ Explanation: An alternative to the use of the get_document_topics() function is to directly transform a dataset using the ldag object as follows. You can apply this transformation to several documents at once, but then the result is an iterator from which you can build the corresponding list if necessary End of explanation """ reduced_corpus = [el for el in ldag[corpus_bow[:3]]] reduced_corpus = gensim.matutils.corpus2dense(reduced_corpus, num_topics).T print(reduced_corpus) """ Explanation: Finally, Gensim provides some useful functions to convert between formats, and to simplify interaction with numpy and scipy. 
The following code fragment converts a corpus in sparse format to a full numpy matrix. End of explanation """ def most_relevant_projects(ldag, topicid, corpus_bow, nprojects=10): """This function returns the most relevant projects in corpus_bow : ldag: The trained topic model object provided by gensim : topicid: The topic for which we want to find the most relevant documents : corpus_bow: The BoW representation of documents in Gensim format : nprojects: Number of most relevant projects to identify : Returns: A list with the identifiers of the most relevant projects """ print('Computing most relevant projects for Topic', topicid) print('Topic composition is:') print(ldag.show_topic(topicid)) #<SOL> #</SOL> #To test the function we will find the most relevant projects for a subset of the NSF dataset project_id = most_relevant_projects(ldag, 17, corpus_bow[:10000]) #Print titles of selected projects for idproject in project_id: print(NSF_data[idproject]['title']) """ Explanation: Exercise 6: Build a function that returns the most relevant projects for a given topic End of explanation """ def pairwise_dist(doc1, doc2): """This function returns the Jensen-Shannon distance between the corresponding vectors of the documents : doc1: Semantic representation for doc1 (a vector of length ntopics) : doc2: Semantic representation for doc2 (a vector of length ntopics) : Returns: The JS distance between doc1 and doc2 (a number) """ #<SOL> #</SOL> """ Explanation: Exercise 7: Build a function that computes the semantic distance between two documents. For this, you can use the functions (or code fragments) provided in the library dist_utils.py.
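For Exercise 7, dist_utils.py is the intended starting point; as a reference, one possible self-contained sketch of the Jensen-Shannon distance between two topic distributions is shown below (base-2 logarithms, so the distance is bounded by 1; the small eps smoothing constant is an assumption added here to avoid log(0)):

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    # Jensen-Shannon distance: square root of the JS divergence.
    # With base-2 logarithms the divergence lies in [0, 1].
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # Kullback-Leibler divergence
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

print(js_distance([1, 0, 0], [0, 1, 0]))    # disjoint supports: close to 1
print(js_distance([0.2, 0.8], [0.2, 0.8]))  # identical distributions: 0
```

Unlike the KL divergence, this quantity is symmetric and always finite, which makes it a convenient metric between the (sparse) topic vectors returned by get_document_topics().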
End of explanation """ #print(NSF_data[0].keys()) #print(NSF_data[0]['institution']) def strNone(str_to_convert): if str_to_convert is None: return '' else: return str_to_convert with open('NSF_nodes.csv','w') as fout: fout.write('Id;Title;Year;Budget;UnivName;UnivZIP;State\n') for project in NSF_data: fout.write(project['project_code']+';'+project['title']+';') fout.write(project['year']+';'+str(project['budget'])+';') fout.write(project['institution'][0]+';') fout.write(strNone(project['institution'][1])+';') fout.write(strNone(project['institution'][2])+'\n') """ Explanation: Exercise 8: Explore the influence of the concentration parameters, $alpha$ and $eta$. In particular observe how do topic and document distributions change as these parameters increase. Exercise 9: Note that we have not used the terms in the article titles, though the can be expected to contain relevant words for the topic modeling. Include the title words in the analyisis. In order to give them a special relevante, insert them in the corpus several times, so as to make their words more significant. 4. Saving data. The following function creates the Node CSV file that could be usefult for further processing. In particular, the output files could be used to visualize the document corpus as a graph using a visualization software like Gephi. End of explanation """
alias-org/alias
examples/demonstration-notebook.ipynb
gpl-3.0
import alias as al example = al.ArgumentationFramework('Example') """ Explanation: First, import the library and create a blank abstract argumentation framework: Welcome to the ALIAS Demonstration Notebook! This Ipython Notebook aims to demonstrate the key functionality of the ALIAS library. End of explanation """ example.add_argument('a') example.add_argument('b') example.add_argument('c') # Arguments can also be passed as a list or tuple # e.g: example.add_argument(['a','b,'c']) """ Explanation: Lets add a few arguments to the framework we've called 'Example': End of explanation """ example.add_attack(('a','b')) example.add_attack(('b','c')) # Attacks can also be passed as a list or a tuple # by using the optional parameter 'atts' # e.g: example.add_attack(atts=[('a', 'b'), ('b', 'c')]) """ Explanation: Now, lets create some attacks between these arguments: End of explanation """ print example """ Explanation: We have created an Argumentation Framework called example which contains three arguments and two attacks. For a string representation of the Framework, simply call print on it: End of explanation """ arga = example['a'] print example.get_attackers('b') print arga """ Explanation: Argument objects belonging to a framework can be referenced by name like so: End of explanation """ # Creating drawings requires # matplotlib and NetworkX (NetworkX is imported by ALIAS internally) %matplotlib inline import matplotlib import numpy as np import matplotlib.pyplot as plt # Call ALIAS' drawing facility on our AF al.draw_framework(example) """ Explanation: ALIAS utilises NetworkX's drawing library to provide visual representations of Argumentation Frameworks. 
Let's draw our AF 'example' using the draw_framework() function: End of explanation """ # Call the generate_all_in() function on the example AF allin = example.generate_all_in() print allin # Labellings are updated dynamically when the AF is updated example.add_argument('d') print allin example.remove_argument('d') print allin """ Explanation: Let's apply a labelling to our AF. We can use ALIAS' built-in generate_all_in() function to generate an "All-In labelling" for our framework. End of explanation """ al.draw_framework(example, labelling=allin) """ Explanation: draw_framework() can also take a labelling as an argument to colour code the arguments in the representation: End of explanation """ preflab = al.labelling_preferred(example) print preflab """ Explanation: Let's apply some semantics to our AF. ALIAS can handle both extension-based and labelling-based semantics. End of explanation """ print preflab[0] al.draw_framework(example, labelling=preflab[0]) """ Explanation: Labelling semantic functions return a list of all labellings for the given framework that satisfy the constraints of the chosen semantics. End of explanation """ stabext = example.extension_stable() print stabext """ Explanation: Let's try some extension-based semantics. These functions take the power set of all arguments in the framework and apply filtering to find the chosen semantics.
(Labellings can also be converted to extensions using the function labelling.lab2ext()) End of explanation """ exampleapx = al.read_apx('example-apx.apx') print exampleapx al.draw_framework(exampleapx) grounded = al.labelling_grounded(exampleapx) al.draw_framework(exampleapx, labelling=grounded) pref = al.labelling_preferred(exampleapx) print pref print pref[0] al.draw_framework(exampleapx, labelling=pref[0]) al.to_neo4j(af=exampleapx, u='neo4j', p='test') f = al.from_neo4j(framework='exampleapx', u='neo4j', p='test') print f db = al.Dbwrapper() db.to_sqlite(exampleapx) """ Explanation: ALIAS can perform file input and output operations on .apx, .dot and .tgf files. Let's load an ASPARTIX file to demonstrate this functionality. End of explanation """
google-aai/sc17
cats/step_5_to_8_part2.ipynb
apache-2.0
# Enter your username: YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address # Libraries for this section: import os import datetime import numpy as np import pandas as pd import cv2 import matplotlib.pyplot as plt import matplotlib.image as mpimg import tensorflow as tf from tensorflow.contrib.learn import RunConfig, Experiment from tensorflow.contrib.learn.python.learn import learn_runner # Directory settings: TRAIN_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_small/') # Where the subset training dataset lives. DEBUG_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/debugging_small/') # Where the debugging dataset lives. VALID_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/validation_images/') # Where the full validation dataset lives. OUTPUT_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/output_cnn_small/') # Where we store our logging and models. # TensorFlow setup: NUM_CLASSES = 2 # This code can be generalized beyond 2 classes (binary classification). QUEUE_CAP = 5000 # Number of images the TensorFlow queue can store during training. # For debugging, QUEUE_CAP is ignored in favor of using all images available. TRAIN_BATCH_SIZE = 128 # Number of images processed every training iteration. DEBUG_BATCH_SIZE = 64 # Number of images processed every debugging iteration. TRAIN_STEPS = 100 # Number of batches to use for training. DEBUG_STEPS = 2 # Number of batches to use for debugging. # Example: If dataset is 5 batches ABCDE, train_steps = 2 uses AB, train_steps = 7 uses ABCDEAB). # Monitoring setup: TRAINING_LOG_PERIOD_SECS = 60 # How often we want to log training metrics (from training hook in our model_fn). CHECKPOINT_PERIOD_SECS = 60 # How often we want to save a checkpoint. # Hyperparameters we'll tune in the tutorial: DROPOUT = 0.8 # Regularization parameter for neural networks - must be between 0 and 1. # Additional hyperparameters: LEARNING_RATE = 0.001 # Rate at which weights update. 
CNN_KERNEL_SIZE = 3 # Receptive field will be square window with this many pixels per side. CNN_STRIDES = 2 # Distance between consecutive receptive fields. CNN_FILTERS = 16 # Number of filters (new receptive fields to train, i.e. new channels) in first convolutional layer. FC_HIDDEN_UNITS = 512 # Number of hidden units in the fully connected layer of the network. """ Explanation: Feline Neural Network - Part 1 Author(s): bfoo@google.com, kozyr@google.com Let's train a basic convolutional neural network to recognize cats. Setup Ensure that you have downloaded the data into your VM, i.e. you've already run these commands in your shell: mkdir -p ~/data/training_small gsutil -m cp gs://$BUCKET/catimages/training_images/000*.png ~/data/training_small/ gsutil -m cp gs://$BUCKET/catimages/training_images/001*.png ~/data/training_small/ mkdir -p ~/data/debugging_small gsutil -m cp gs://$BUCKET/catimages/training_images/002*.png ~/data/debugging_small mkdir -p ~/data/training_images gsutil -m cp gs://$BUCKET/catimages/training_images/*.png ~/data/training_images/ mkdir -p ~/data/validation_images gsutil -m cp gs://$BUCKET/catimages/validation_images/*.png ~/data/validation_images/ If you are going through this notebook more than once, please delete the output folder in your VM to start afresh each time by running the following so that you can start over: rm -r ~/data/output_cnn_small End of explanation """ def show_inputs(dir, filelist=None, img_rows=1, img_cols=3, figsize=(20, 10)): """Display the first few images. 
Args: dir: directory where the files are stored filelist: list of filenames to pull from, if left as default, all files will be used img_rows: number of rows of images to display img_cols: number of columns of images to display figsize: sizing for inline plots Returns: pixel_dims: pixel dimensions (height and width) of the image """ if filelist is None: filelist = os.listdir(dir) # Grab all the files in the directory filelist = np.array(filelist) plt.close('all') fig = plt.figure(figsize=figsize) print('File names:') for i in range(img_rows * img_cols): print(str(filelist[i])) a=fig.add_subplot(img_rows, img_cols,i + 1) img = mpimg.imread(os.path.join(dir, str(filelist[i]))) plt.imshow(img) plt.show() return np.shape(img) pixel_dim = show_inputs(TRAIN_DIR) print('Images have ' + str(pixel_dim[0]) + 'x' + str(pixel_dim[1]) + ' pixels.') pixels = pixel_dim[0] * pixel_dim[1] """ Explanation: Let's visualize what we're working with and get the pixel count for our images. They should be square for this to work, but luckily we padded them with black pixels where needed back in Step 2. End of explanation """ # Input function: def generate_input_fn(dir, batch_size, queue_capacity): """Return _input_fn for use with TF Experiment. Will be called in the Experiment section below (see _experiment_fn). Args: dir: directory we're taking our files from, code is written to collect all files in this dir. batch_size: number of rows ingested in each training iteration. queue_capacity: number of images the TF queue can store. Returns: _input_fn: a function that returns a batch of images and labels. """ file_pattern = os.path.join(dir, '*') # We're pulling in all files in the directory. def _input_fn(): """A function that returns a batch of images and labels. Args: None Returns: image_batch: 4-d tensor collection of images. label_batch: 1-d tensor of corresponding labels. 
""" height, width, channels = [pixel_dim[0], pixel_dim[1], 3] # [height, width, 3] because there are 3 channels per image. filenames_tensor = tf.train.match_filenames_once(file_pattern) # Collect the filenames # Queue that periodically reads in images from disk: # When ready to run iteration, TF will take batch_size number of images out of filename_queue. filename_queue = tf.train.string_input_producer( filenames_tensor, shuffle=False) # Do not shuffle order of the images ingested. # Convert filenames from queue into contents (png images pulled into memory): reader = tf.WholeFileReader() filename, contents = reader.read(filename_queue) # Decodes contents pulled in into 3-d tensor per image: image = tf.image.decode_png(contents, channels=channels) # If dimensions mismatch, pad with zeros (black pixels) or crop to make it fit: image = tf.image.resize_image_with_crop_or_pad(image, height, width) # Parse out label from filename: label = tf.string_to_number(tf.string_split([tf.string_split([filename], '_').values[-1]], '.').values[0]) # All your filenames should be in this format number_number_label.extension where label is 0 or 1. # Execute above in a batch of batch_size to create a 4-d tensor of collection of images: image_batch, label_batch = tf.train.batch( [image, label], batch_size, num_threads=1, # We'll decline the multithreading option so that everything stays in filename order. capacity=queue_capacity) # Normalization for better training: # Change scale from pixel uint8 values between 0 and 255 into normalized float32 values between 0 and 1: image_batch = tf.to_float(image_batch) / 255 # Rescale from (0,1) to (-1,1) so that the "center" of the image range is 0: image_batch = (image_batch * 2) - 1 return image_batch, label_batch return _input_fn """ Explanation: Step 5 - Get tooling for training convolutional neural networks Here is where we enable training convolutional neural networks on data inputs like ours. We'll build it using a TensorFlow estimator. 
TensorFlow (TF) is designed for scale, which means it doesn't pull all our data into memory all at once, but instead it's all about lazy execution. We'll write functions which it will run when it's efficient to do so. TF will pull in batches of our image data and run the functions we wrote. In order to make this work, we need to write code for the following: Input function: generate_input_fn() Neural network architecture: cnn() Model function: generate_model_fn() Estimator: tf.estimator.Estimator() Experiment: generate_experiment_fn() Prediction generator: cat_finder() Input function The input function tells TensorFlow what format of feature and label data to expect. We'll set ours up to pull in all images in a directory we point it at. It expects images with filenames in the following format: number_number_label.extension, so if your file naming scheme is different, please edit the input function. End of explanation """ # CNN architecture: def cnn(features, dropout, reuse, is_training): """Defines the architecture of the neural network. Will be called within generate_model_fn() below. Args: features: feature data as 4-d tensor (of batch_size) pulled in when_input_fn() is executed. dropout: regularization parameter in last layer (between 0 and 1, exclusive). reuse: a scoping safeguard. First time training: set to False, after that, set to True. is_training: if True then fits model and uses dropout, if False then doesn't consider the dropout Returns: 2-d tensor: each image's "logit", [logit(1-p), logit(p)] where p=Pr(1) i.e. probability that class is 1 (cat in our case). In CNN terminology, "logit" doesn't always mean the logit function you might have encountered studying statistics: logit(p) = logodds(p) = log(p / (1-p)) Instead of converting using the inverse of the logodds function, use softmax. """ # Next, we define a scope for reusing our variables, choosing our network architecture and naming our layers. 
with tf.variable_scope('cnn', reuse=reuse): layer_1 = tf.layers.conv2d( # 2-d convolutional layer; size of output image is (pixels/stride) a side with channels = filters. inputs=features, # previous layer (inputs) is features argument to the main function kernel_size=CNN_KERNEL_SIZE, # 3x3(x3 because we have 3 channels) receptive field (only square ones allowed) strides=CNN_STRIDES, # distance between consecutive receptive fields filters=CNN_FILTERS, # number of receptive fields to train; think of this as a CNN_FILTERS-channel image which is input to next layer) padding='SAME', # SAME uses zero padding if not all CNN_KERNEL_SIZE x CNN_KERNEL_SIZE positions are filled, VALID will ignore missing activation=tf.nn.relu) # activation function is ReLU which is f(x) = max(x, 0) # For simplicity, this neural network doubles the number of receptive fields (filters) with each layer. # By using more filters, we are able to preserve the spatial dimensions better by storing more information. # # To determine how much information is preserved by each layer, consider that with each layer, # the output width and height is divided by the `strides` value. # When strides=2 for example, the input width W and height H is reduced by 2x, resulting in # an "image" (formally, an activation field) for each filter output with dimensions W/2 x H/2. # By doubling the number of filters compared to the input number of filters, the total output # dimension becomes W/2 x H/2 x CNN_FILTERS*2, essentially compressing the input of the layer # (W x H x CNN_FILTERS) to half as many total "pixels" (hidden units) at the output. # # On the other hand, increasing the number of filters will also increase the training time proportionally, # as there are more weights and biases to train and convolutions to perform. # # As an exercise, you can play around with different numbers of filters, strides, and kernel_sizes. 
# To avoid very long training time, make sure to keep kernel sizes small (under 5), # strides at least 2 but no larger than kernel sizes (or you will skip pixels), # and cap the number of filters at each level (no more than 512). # # When modifying these values, it is VERY important to keep track of the size of your layer outputs, # i.e. the number of hidden units, since the final layer will need to be flattened into a 1D vector with size # equal to the total number of hidden units. For this reason, using strides that are divisible by the width # and height of the input may be the easiest way to avoid miscalculations from rounding. layer_2 = tf.layers.conv2d( inputs=layer_1, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 1), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_3 = tf.layers.conv2d( inputs=layer_2, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 2), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_4 = tf.layers.conv2d( inputs=layer_3, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 3), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_5 = tf.layers.conv2d( inputs=layer_4, kernel_size=CNN_KERNEL_SIZE, strides=CNN_STRIDES, filters=CNN_FILTERS * (2 ** 4), # Double the number of filters from previous layer padding='SAME', activation=tf.nn.relu) layer_5_flat = tf.reshape( # Flattening to 2-d tensor (1-d per image row for feedforward fully-connected layer) layer_5, shape=[-1, # Reshape final layer to 1-d tensor per image. CNN_FILTERS * (2 ** 4) * # Number of filters (depth), times... pixels / (CNN_STRIDES ** 5) / (CNN_STRIDES ** 5)]) # ... 
number of hidden units per filter (input pixels / width decimation / height decimation)
dense_layer = tf.layers.dense(  # fully connected layer
inputs=layer_5_flat,
units=FC_HIDDEN_UNITS,  # number of hidden units
activation=tf.nn.relu)
dropout_layer = tf.layers.dropout(  # Dropout randomly drops dropout*100% of the dense layer's hidden units during training (rescaling the kept units), and is a no-op during prediction.
inputs=dense_layer,
rate=dropout,
training=is_training)
return tf.layers.dense(inputs=dropout_layer, units=NUM_CLASSES)  # 2-d tensor of "logits" for each image in batch.
""" Explanation: Neural network architecture
This is where we define the architecture of the neural network we're using, such as the number of hidden layers and units.
End of explanation """
# Model function:
def generate_model_fn(dropout):
"""Return a function that determines how the TF estimator operates.
The estimator has 3 modes of operation:
* train (fitting and updating the model)
* eval (collecting and returning validation metrics)
* predict (using the model to label unlabeled images)
The returned function _cnn_model_fn below determines what to do depending on the mode of operation, and returns specs telling the estimator what to execute for that mode.
Args:
dropout: regularization parameter in last layer (between 0 and 1, exclusive)
Returns:
_cnn_model_fn: a function that returns specs for use with the TF estimator
"""
def _cnn_model_fn(features, labels, mode):
"""A function that determines specs for the TF estimator based on the mode of operation.
Args:
features: actual data (which goes into scope within the estimator function) as 4-d tensor (of batch_size), pulled in via TF executing _input_fn(), which is the output of generate_input_fn() and is in memory
labels: 1-d tensor of 0s and 1s
mode: TF object indicating whether we're in train, eval, or predict mode.
Returns:
estim_specs: collections of metrics and tensors that are required for training (e.g.
prediction values, loss value, train_op tells model weights how to update) """ # Use the cnn() to compute logits: logits_train = cnn(features, dropout, reuse=False, is_training=True) logits_eval = cnn(features, dropout, reuse=True, is_training=False) # We'll be evaluating these later. # Transform logits into predictions: pred_classes = tf.argmax(logits_eval, axis=1) # Returns 0 or 1, whichever has larger logit. pred_prob = tf.nn.softmax(logits=logits_eval)[:, 1] # Applies softmax function to return 2-d probability vector. # Note: we're not outputting pred_prob in this tutorial, that line just shows you # how to get it if you want it. Softmax[i] = exp(logit[i]) / sum(exp((logit[:])) # If we're in prediction mode, early return predicted class (0 or 1): if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode, predictions=pred_classes) # If we're not in prediction mode, define loss function and optimizer. # Loss function: # This is what the algorithm minimizes to learn the weights. # tf.reduce_mean() just takes the mean over a batch, giving back a scalar. # Inside tf.reduce_mean() we'll select any valid binary loss function we want to use. loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits( logits=logits_train, labels=tf.cast(labels, dtype=tf.int32))) # Optimizer: # This is the scheme the algorithm uses to update the weights. # AdamOptimizer is adaptive moving average, feel free to replace with one you prefer. optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE) # The minimize method below doesn't minimize anything, it just takes a step. train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step()) # Performance metric: # Should be whatever we chose as we defined in Step 1. This is what you said you care about! # This output is for reporting only, it is not optimized directly. 
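As an aside, the quantity that `tf.nn.sparse_softmax_cross_entropy_with_logits` computes per example can be reproduced in plain NumPy: softmax the logits, then take minus the log of the true class's probability. The logit values below are hypothetical.

```python
import numpy as np

# Plain-NumPy sketch of sparse softmax cross-entropy for one example.
def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 0.5])  # hypothetical [class 0, class 1] logits for one image
label = 0
p = softmax(logits)
loss = -np.log(p[label])
print(float(loss))
```

A confident, correct prediction drives this loss toward zero; a confident, wrong one makes it large, which is exactly the pressure the optimizer uses to update the weights.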
acc = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
# Hooks - pick what to log and show:
# Hooks are designed for monitoring; every time TF writes a summary, it'll append these.
logging_hook = tf.train.LoggingTensorHook({
'x-entropy loss': loss,
'training accuracy': acc[0],
}, every_n_secs=TRAINING_LOG_PERIOD_SECS)
# Stitch everything together into the estimator specs, which we'll output here so it can
# later be passed to tf.estimator.Estimator()
estim_specs = tf.estimator.EstimatorSpec(
mode=mode,
predictions=pred_classes,
loss=loss,
train_op=train_op,
training_hooks=[logging_hook],
eval_metric_ops={  # This bit is Step 7!
'accuracy': acc,
}
)
# estim_specs bundles the metrics and operations for use by the TF Estimator.
# This gives you the interaction between your architecture in cnn() and the weights, etc. in the current iteration, which
# will be used as input in the next iteration.
return estim_specs
return _cnn_model_fn
""" Explanation: Model function
The model function tells TensorFlow how to call the model we designed above and what to do when we're in training vs evaluation vs prediction mode. This is where we define the loss function, the optimizer, and the performance metric (which we picked in Step 1).
End of explanation """
# TF Estimator:
# WARNING: Don't run this block of code more than once without first changing OUTPUT_DIR.
estimator = tf.estimator.Estimator(
model_fn=generate_model_fn(DROPOUT),  # Call our generate_model_fn to create the model function
model_dir=OUTPUT_DIR,  # Where model checkpoints and summaries are written (and read back from).
config=RunConfig( save_checkpoints_secs=CHECKPOINT_PERIOD_SECS, keep_checkpoint_max=20, save_summary_steps=100, log_step_count_steps=100) ) """ Explanation: TF Estimator This is where it all comes together: TF Estimator takes in as input everything we've created thus far and when executed it will output everything that is necessary for training (fits a model), evaluation (outputs metrics), or prediction (outputs predictions). End of explanation """ # TF Experiment: def experiment_fn(output_dir): """Create _experiment_fn which returns a TF experiment To be used with learn_runner, which we imported from tf. Args: output_dir: which is where we write our models to. Returns: a TF Experiment """ return Experiment( estimator=estimator, train_input_fn=generate_input_fn(TRAIN_DIR, TRAIN_BATCH_SIZE, QUEUE_CAP), # Generate input function above. eval_input_fn=generate_input_fn(DEBUG_DIR, DEBUG_BATCH_SIZE, QUEUE_CAP), train_steps=TRAIN_STEPS, # Number of batches to use for training. eval_steps=DEBUG_STEPS, # Number of batches to use for eval. min_eval_frequency=1 # Run eval once every min_eval_frequency number of checkpoints. ) """ Explanation: TF Experiment A TF Experiment defines how to run your TF estimator during training and debugging only. TF Experiments are not necessary for prediction once training is complete. TERMINOLOGY WARNING: The word "experiment" here is not used the way it is used by typical scientists and statisticians. End of explanation """ # Enable TF verbose output: tf.logging.set_verbosity(tf.logging.INFO) start_time = datetime.datetime.now() print('It\'s {:%H:%M} in London'.format(start_time) + ' --- Let\'s get started!') # Let the learning commence! Run the TF Experiment here. learn_runner.run(experiment_fn, OUTPUT_DIR) # Output lines using the word "Validation" are giving our metric on the non-training dataset (from DEBUG_DIR). 
end_time = datetime.datetime.now() print('\nIt was {:%H:%M} in London when we started.'.format(start_time)) print('\nWe\'re finished and it\'s {:%H:%M} in London'.format(end_time)) print('\nCongratulations! Training is complete!') """ Explanation: Step 6 - Train a model! Let's run our lovely creation on our training data. In order to train, we need learn_runner(), which we imported from TensorFlow above. For prediction, we will only need estimator.predict(). End of explanation """ # Observed labels from filenames: def get_labels(dir): """Get labels from filenames. Filenames must be in the following format: number_number_label.png Args: dir: directory containing image files Returns: labels: 1-d np.array of binary labels """ filelist = os.listdir(dir) # Use all the files in the directory labels = np.array([]) for f in filelist: split_filename = f.split('_') label = int(split_filename[-1].split('.')[0]) labels = np.append(labels, label) return labels # Cat_finder function for getting predictions: def cat_finder(dir, model_version): """Get labels from model. Args: dir: directory containing image files Returns: predictions: 1-d np array of binary labels """ num_predictions = len(os.listdir(dir)) predictions = [] # Initialize array. # Estimator.predict() returns a generator g. Call next(g) to retrieve the next value. 
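The generator behavior mentioned in the comment above (for `estimator.predict()`) is just standard Python: values are produced lazily and retrieved one at a time with `next()`. A minimal stand-in:

```python
# Minimal sketch of the generator pattern: estimator.predict() behaves like
# this -- predictions are yielded lazily, one per call to next().
def prediction_stream(values):
    for v in values:
        yield v

gen = prediction_stream([0, 1, 1, 0])       # hypothetical class predictions
predictions = [next(gen) for _ in range(4)]
print(predictions)
```

Consuming the generator in a loop, as `cat_finder()` does, preserves the order of the inputs, which is what lets us line predictions up against the labels later.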
prediction_gen = estimator.predict( input_fn=generate_input_fn(dir=dir, batch_size=TRAIN_STEPS, queue_capacity=QUEUE_CAP ), checkpoint_path=model_version ) # Use generator to ensure ordering is preserved and predictions match order of validation_labels: i = 1 for pred in range(0, num_predictions): predictions.append(next(prediction_gen)) #Append the next value of the generator to the prediction array i += 1 if i % 1000 == 0: print('{:d} predictions completed (out of {:d})...'.format(i, len(os.listdir(dir)))) print('{:d} predictions completed (out of {:d})...'.format(len(os.listdir(dir)), len(os.listdir(dir)))) return np.array(predictions) """ Explanation: Get predictions and performance metrics Create functions for outputting observed labels, predicted labels, and accuracy. Filenames must be in the following format: number_number_label.extension End of explanation """ def get_accuracy(truth, predictions, threshold=0.5, roundoff = 2): """Compares labels with model predictions and returns accuracy. Args: truth: can be bool (False, True), int (0, 1), or float (0, 1) predictions: number between 0 and 1, inclusive threshold: we convert the predictions to 1s if they're above this value roundoff: report accuracy to how many decimal places? 
Returns:
accuracy: number correct divided by total predictions
"""
truth = np.array(truth) == 1  # True == 1 and 1.0 == 1, so this handles bool, int, and float labels
predicted = np.array(predictions) >= threshold
matches = sum(predicted == truth)
accuracy = float(matches) / len(truth)
return round(accuracy, roundoff)
files = os.listdir(TRAIN_DIR)
model_version = os.path.join(OUTPUT_DIR, 'model.ckpt-' + str(TRAIN_STEPS))
predicted = cat_finder(TRAIN_DIR, model_version)
observed = get_labels(TRAIN_DIR)
print('Accuracy is ' + str(get_accuracy(observed, predicted)))
""" Explanation: Get training accuracy
End of explanation """
files = os.listdir(DEBUG_DIR)
model_version = os.path.join(OUTPUT_DIR, 'model.ckpt-' + str(TRAIN_STEPS))
predicted = cat_finder(DEBUG_DIR, model_version)
observed = get_labels(DEBUG_DIR)
print('Debugging accuracy is ' + str(get_accuracy(observed, predicted)))
df = pd.DataFrame({'files': files, 'predicted': predicted, 'observed': observed})
hit = df.files[df.observed == df.predicted]
miss = df.files[df.observed != df.predicted]
# Show successful classifications:
show_inputs(DEBUG_DIR, hit, 3)
# Show unsuccessful classifications:
show_inputs(DEBUG_DIR, miss, 3)
""" Explanation: Step 7 - Debugging and Tuning
Debugging
It's worth taking a look to see if there's something special about the images we misclassified.
End of explanation """
# Disable TF verbose output:
tf.logging.set_verbosity(tf.logging.FATAL)
# Get output:
dropouts = np.array([])
accuracies = np.array([])
for i in range(9):
tune_output_dir = os.path.join(OUTPUT_DIR, 'dropout0.' + str(i + 1) + '/')
tune_dropout = (float(i) + 1) / 10
print('It\'s {:%H:%M} in London'.format(datetime.datetime.now()) + ' --- Dropout setting is ' + str(tune_dropout))
# Try a new dropout setting for the TF Estimator:
estimator = tf.estimator.Estimator(
model_fn=generate_model_fn(tune_dropout),
model_dir=tune_output_dir,
config=RunConfig(
save_checkpoints_secs=CHECKPOINT_PERIOD_SECS,
keep_checkpoint_max=20,
save_summary_steps=100,
log_step_count_steps=100
)
)
# Train it!
learn_runner.run(experiment_fn, tune_output_dir) # Identify the model version: tuned_model = os.path.join(tune_output_dir, 'model.ckpt-' + str(TRAIN_STEPS)) # Output predicted and observed labels: predicted = cat_finder(DEBUG_DIR, model_version=tuned_model) observed = get_labels(DEBUG_DIR) # Compute performance metric: accuracy = get_accuracy(truth=observed, predictions=predicted) print('Accuracy is: ' + str(accuracy)) # Append to array: dropouts = np.append(dropouts, tune_dropout) accuracies = np.append(accuracies, accuracy) best_dropout = dropouts[np.argmax(accuracies)] print("Dropout tuning complete! Set hyperparameter to " + str(best_dropout) + ".") """ Explanation: Tuning In this demo, we'll see extremely basic tuning of the dropout rate. There are plenty of more sophisticated options (check out Cloud ML Engine!) but I'd like to show you that the core principles are simple. We'll just step through an array of options and see which dropout setting gets you the best accuracy, then we'll select that one when we train with more data. End of explanation """ files = os.listdir(VALID_DIR) predicted = cat_finder(VALID_DIR, model_version) observed = get_labels(VALID_DIR) print('\nValidation accuracy is ' + str(get_accuracy(observed, predicted))) """ Explanation: Step 8 - Validation Imagine we had skipped Step 7 (don't worry, we'll use the tuned hyperparameter when we train again in the next notebook) and we considered launching the CNN model we just trained in Step 6. The training accuracy looked good, right? Never fear, validation will keep you safe! It's important to validate before trusting a model, so let's do that now and apply cat_finder() to our validation dataset. Since this is validation, we'll only look at the final performance metric (accuracy) and nothing else. End of explanation """
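The thresholding logic inside get_accuracy() is easy to verify on toy labels before trusting it on a full validation run; the values below are made up for illustration:

```python
import numpy as np

# Toy check of get_accuracy()'s logic: predictions at or above the threshold
# count as class 1, and accuracy is the fraction that agrees with the truth.
truth = np.array([0, 1, 1, 0])
predictions = np.array([0.2, 0.9, 0.4, 0.1])  # hypothetical model outputs
predicted = predictions >= 0.5
accuracy = float(np.sum(predicted == truth.astype(bool))) / len(truth)
print(round(accuracy, 2))
```

Note how the third example (truth 1, prediction 0.4) is the only miss at this threshold; sweeping the threshold is another knob you could tune alongside dropout.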
IntelPNI/brainiak
examples/reprsimil/bayesian_rsa_example.ipynb
apache-2.0
%matplotlib inline
import scipy.stats
import scipy.spatial.distance as spdist
import numpy as np
from brainiak.reprsimil.brsa import BRSA, prior_GP_var_inv_gamma, prior_GP_var_half_cauchy
from brainiak.reprsimil.brsa import GBRSA
import brainiak.utils.utils as utils
import matplotlib.pyplot as plt
import logging
np.random.seed(10)
""" Explanation: This demo shows how to use the Bayesian Representational Similarity Analysis method in brainiak with a simulated dataset.
The brainiak.reprsimil.brsa module has two estimators named BRSA and GBRSA. Both of them can be used to estimate representational similarity from a single participant, but with some differences in the assumptions of the models and the fitting procedure. The basic usage is similar. We now generally recommend using GBRSA over BRSA in most cases. This document shows how to use BRSA for the most part; at the end of the document, the usage of GBRSA is shown as well. You are encouraged to go through the example and try both estimators on your data. The group_brsa_example.ipynb in the same directory demonstrates how to use GBRSA to estimate shared representational structure from multiple participants.
Please note that the model assumes that the covariance matrix U which all $\beta_i$ follow describes a zero-mean multivariate Gaussian distribution. This assumption does not imply that there must be both positive and negative responses across voxels. However, it means that (Group) Bayesian RSA treats the task-evoked activity against the baseline BOLD level as signal, while in other RSA tools the deviation of task-evoked activity in each voxel from the average task-evoked activity level across voxels may be considered as the signal of interest. Due to this assumption in (G)BRSA, a relatively high degree of similarity may be expected when the activity patterns of two task conditions share a strong sensory-driven component.
When two task conditions elicit exactly the same activity pattern but only differ in their global magnitudes, under the assumption in (G)BRSA, their similarity is 1; under the assumption that only deviation of pattern from average patterns is signal of interest (which is currently not supported by (G)BRSA), their similarity would be -1 because the deviations of the two patterns from their average pattern are exactly opposite. Load some package which we will use in this demo. If you see error related to loading any package, you can install that package. For example, if you use Anaconda, you can use "conda install matplotlib" to install matplotlib. Notice that due to current implementation, you need to import either prior_GP_var_inv_gamma or prior_GP_var_half_cauchy from brsa module, in order to use the smooth prior imposed onto SNR in BRSA (see below). They are forms of priors imposed on the variance of Gaussian Process prior on log(SNR). (If you think these sentences are confusing, just import them like below and forget about this). End of explanation """ logging.basicConfig( level=logging.DEBUG, filename='brsa_example.log', format='%(relativeCreated)6d %(threadName)s %(message)s') """ Explanation: You might want to keep a log of the output. End of explanation """ design = utils.ReadDesign(fname="example_design.1D") n_run = 3 design.n_TR = design.n_TR * n_run design.design_task = np.tile(design.design_task[:,:-1], [n_run, 1]) # The last "condition" in design matrix # codes for trials subjects made and error. # We ignore it here. fig = plt.figure(num=None, figsize=(12, 3), dpi=150, facecolor='w', edgecolor='k') plt.plot(design.design_task) plt.ylim([-0.2, 0.4]) plt.title('hypothetic fMRI response time courses ' 'of all conditions\n' '(design matrix)') plt.xlabel('time') plt.show() n_C = np.size(design.design_task, axis=1) # The total number of conditions. 
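As an aside, the convolution that produces one column of such a design matrix can be sketched with plain NumPy. The gamma-shaped HRF and the event onsets below are toy assumptions for illustration only; the actual design here comes from AFNI via ReadDesign.

```python
import numpy as np

# Toy sketch: convolve delta (stick) functions at event onsets with a crude
# HRF-like bump to form one design-matrix column.
tr = 2.0
t = np.arange(0, 16, tr)
hrf = (t / 5.0) ** 2 * np.exp(-t / 2.0)   # illustrative gamma-like shape, not a calibrated HRF
events = np.zeros(40)
events[[3, 15, 27]] = 1.0                 # three hypothetical event onsets (in TRs)
column = np.convolve(events, hrf)[:len(events)]
print(column.shape)
```

Each condition gets one such column; stacking them side by side gives the {time points} x {conditions} array that BRSA expects as its design argument.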
ROI_edge = 15  # We simulate an "ROI" of a rectangular shape
n_V = ROI_edge**2 * 2  # The total number of simulated voxels
n_T = design.n_TR  # The total number of time points,
# after concatenating all fMRI runs
""" Explanation: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure.
Load an example design matrix. The user should prepare their design matrix with their favorite software, such as 3dDeconvolve of AFNI, or SPM or FSL. The design matrix reflects your belief of how the fMRI signal should respond to a task (if a voxel does respond). The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, you typically convolve a time series A, composed of delta (stick) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetical response of any voxel to that type of neural event. For each type of event, such a convolved time course can be generated. These time courses, put together, are called the design matrix, reflecting what we believe the temporal signal would look like, if it exists in any voxel.
Our goal is to figure out how the (spatial) response patterns of a population of voxels (in a Region of Interest, ROI) are similar or dissimilar across different types of tasks (e.g., watching faces vs. houses, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested in. We can use the utility called ReadDesign in brainiak.utils to read a design matrix generated from AFNI.
For a design matrix saved as a Matlab data file by SPM or another toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array of size {time points} * {conditions}. You can also generate a design matrix using the function gen_design, which is in brainiak.utils. It takes in (names of) event timing files in AFNI or FSL format (denoting onset, duration, and weight for each event belonging to the same condition) and outputs the design matrix as a numpy array.
In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into the regression. When using our method, you should not include such regressors in the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task-related signal. Including such nuisance regressors in the design matrix might influence the pseudo-SNR map, which in turn influences the estimation of the shared covariance matrix.
We concatenate the design matrix n_run (here, 3) times, mimicking multiple runs with identical timing.
End of explanation """
noise_bot = 0.5
noise_top = 5.0
noise_level = np.random.rand(n_V) * \
(noise_top - noise_bot) + noise_bot
# The standard deviation of the noise is in the range of [noise_bot, noise_top].
# In fact, we simulate autocorrelated noise with an AR(1) model.
So the noise_level reflects # the independent additive noise at each time point (the "fresh" noise) # AR(1) coefficient rho1_top = 0.8 rho1_bot = -0.2 rho1 = np.random.rand(n_V) \ * (rho1_top - rho1_bot) + rho1_bot noise_smooth_width = 10.0 coords = np.mgrid[0:ROI_edge, 0:ROI_edge*2, 0:1] coords_flat = np.reshape(coords,[3, n_V]).T dist2 = spdist.squareform(spdist.pdist(coords_flat, 'sqeuclidean')) # generating noise K_noise = noise_level[:, np.newaxis] \ * (np.exp(-dist2 / noise_smooth_width**2 / 2.0) \ + np.eye(n_V) * 0.1) * noise_level # We make spatially correlated noise by generating # noise at each time point from a Gaussian Process # defined over the coordinates. plt.pcolor(K_noise) plt.colorbar() plt.xlim([0, n_V]) plt.ylim([0, n_V]) plt.title('Spatial covariance matrix of noise') plt.show() L_noise = np.linalg.cholesky(K_noise) noise = np.zeros([n_T, n_V]) noise[0, :] = np.dot(L_noise, np.random.randn(n_V))\ / np.sqrt(1 - rho1**2) for i_t in range(1, n_T): noise[i_t, :] = noise[i_t - 1, :] * rho1 \ + np.dot(L_noise,np.random.randn(n_V)) # For each voxel, the noise follows AR(1) process: # fresh noise plus a dampened version of noise at # the previous time point. # In this simulation, we also introduced spatial smoothness resembling a Gaussian Process. # Notice that we simulated in this way only to introduce spatial noise correlation. # This does not represent the assumption of the form of spatial noise correlation in the model. # Instead, the model is designed to capture structured noise correlation manifested # as a few spatial maps each modulated by a time course, which appears as spatial noise correlation. 
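The AR(1) recursion used above has a simple signature worth remembering: its lag-1 autocorrelation approaches rho. A quick self-contained check (with an assumed rho of 0.6 and a fixed seed, purely for illustration):

```python
import numpy as np

# Sanity check of the AR(1) recursion: for x[t] = rho * x[t-1] + e[t],
# the empirical lag-1 autocorrelation of x should be close to rho.
rng = np.random.RandomState(0)
rho = 0.6
x = np.zeros(20000)
for t in range(1, len(x)):
    x[t] = rho * x[t - 1] + rng.randn()
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(lag1, 2))
```

This is the same structure the simulation gives each voxel's noise, except that there rho1 varies voxel by voxel and the "fresh" innovations are spatially correlated.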
fig = plt.figure(num=None, figsize=(12, 2), dpi=150, facecolor='w', edgecolor='k')
plt.plot(noise[:, 0])
plt.title('noise in an example voxel')
plt.show()
""" Explanation: simulate data: noise + signal
First, we start with noise, which is a Gaussian Process in space and AR(1) in time.
End of explanation """
# import nibabel
# ROI = nibabel.load('ROI.nii')
# I,J,K = ROI.shape
# all_coords = np.zeros((I, J, K, 3))
# all_coords[...,0] = np.arange(I)[:, np.newaxis, np.newaxis]
# all_coords[...,1] = np.arange(J)[np.newaxis, :, np.newaxis]
# all_coords[...,2] = np.arange(K)[np.newaxis, np.newaxis, :]
# ROI_coords = nibabel.affines.apply_affine(
#     ROI.affine, all_coords[ROI.get_data().astype(bool)])
""" Explanation: Then, we simulate signals, assuming the magnitude of the response to each condition follows a common covariance matrix. Our model allows imposing a Gaussian Process prior on the log(SNR) of each voxel. What this means is that SNR tends to be smooth and local, but betas (response amplitudes of each voxel to each condition) are not necessarily correlated in space. Intuitively, this is based on the assumption that voxels coding for related aspects of a task tend to be clustered (instead of isolated). Our Gaussian Process is defined on both the coordinates of a voxel and its mean intensity. This means that voxels that are close together AND have similar intensity should have a similar SNR level. Therefore, voxels of white matter adjacent to gray matter do not necessarily have a high SNR level.
If you have an ROI saved as a binary Nifti file, say, with the name 'ROI.nii', then you can use the nibabel package to load the ROI and the following example code to retrieve the coordinates of its voxels. Note: the following code won't work if you just installed Brainiak and try this demo, because ROI.nii does not exist. It just serves as an example for you to retrieve coordinates of voxels in an ROI.
You can use the ROI_coords for the coords argument of BRSA.fit().
End of explanation """
# ideal covariance matrix
ideal_cov = np.zeros([n_C, n_C])
ideal_cov = np.eye(n_C) * 0.6
ideal_cov[8:12, 8:12] = 0.6
for cond in range(8, 12): ideal_cov[cond, cond] = 1
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_cov)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal covariance matrix')
plt.show()
std_diag = np.diag(ideal_cov)**0.5
ideal_corr = ideal_cov / std_diag / std_diag[:, None]
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_corr)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal correlation matrix')
plt.show()
""" Explanation: Let's keep in mind the pattern of the ideal covariance / correlation below and see how well BRSA can recover it. End of explanation """
L_full = np.linalg.cholesky(ideal_cov)
# generating signal
snr_level = 1.0
# Notice that, strictly speaking, this is not SNR.
# The magnitude of the signal depends not only on beta but also on x.
# (noise_level*snr_level)**2 is the factor multiplied
# with ideal_cov to form the covariance matrix from which
# the response amplitudes (beta) of a voxel are drawn.
tau = 1.0  # magnitude of the Gaussian Process from which the log(SNR) is drawn
smooth_width = 3.0  # spatial length scale of the Gaussian Process, unit: voxel
inten_kernel = 4.0  # intensity length scale of the Gaussian Process
# Slightly counter-intuitively, if this parameter is very large,
# say, much larger than the range of intensities of the voxels,
# then the smoothness has a much smaller dependency on the intensity.
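The role of the Cholesky factor here (and for K below) deserves a one-line justification: if K = L @ L.T, then L applied to white noise yields samples whose covariance is K. A small self-contained check with a toy 2x2 covariance:

```python
import numpy as np

# Why np.linalg.cholesky appears in the simulation: L turns white noise
# z ~ N(0, I) into samples L @ z with covariance K = L @ L.T.
rng = np.random.RandomState(1)
K = np.array([[1.0, 0.8],
              [0.8, 1.0]])          # toy covariance for illustration
L = np.linalg.cholesky(K)
samples = L @ rng.randn(2, 100000)
emp_cov = np.cov(samples)
print(np.round(emp_cov, 2))
```

The same trick is used twice in this notebook: once with L_noise to give the noise its spatial covariance, and once with L_full to give the betas the ideal covariance structure.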
inten = np.random.rand(n_V) * 20.0
# For simplicity, we just assume that the intensity
# of all voxels is uniformly distributed between 0 and 20.
# parameters of the Gaussian process to generate pseudo-SNR
# For the curious user: you can also try the following command
# to see what an example SNR map might look like if the intensity
# grows linearly in one spatial direction
# inten = coords_flat[:,0] * 2
inten_tile = np.tile(inten, [n_V, 1])
inten_diff2 = (inten_tile - inten_tile.T)**2
K = np.exp(-dist2 / smooth_width**2 / 2.0
           - inten_diff2 / inten_kernel**2 / 2.0) * tau**2 \
    + np.eye(n_V) * tau**2 * 0.001
# A tiny amount is added to the diagonal of
# the GP covariance matrix to make sure it can be inverted
L = np.linalg.cholesky(K)
snr = np.abs(np.dot(L, np.random.randn(n_V))) * snr_level
sqrt_v = noise_level * snr
betas_simulated = np.dot(L_full, np.random.randn(n_C, n_V)) * sqrt_v
signal = np.dot(design.design_task, betas_simulated)
Y = signal + noise + inten  # The data to be fed to the program.
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(np.reshape(snr, [ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('pseudo-SNR in a rectangular "ROI"')
plt.show()
idx = np.argmin(np.abs(snr - np.median(snr)))
# choose a voxel of medium-level SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[:,idx],'g')
signal_plot, = plt.plot(signal[:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in an example voxel'
          ' with pseudo-SNR of {}'.format(snr[idx]))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
idx = np.argmin(np.abs(snr - np.max(snr)))
# display the voxel with the highest SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[:,idx],'g')
signal_plot, = plt.plot(signal[:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in the voxel with the highest'
          ' pseudo-SNR of {}'.format(snr[idx]))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150, facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
""" Explanation: The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller, is that we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in the design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflect how much signal the data have, but is rather a map indicating the relative strength of signal in different voxels.
When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA the onset of each scan. Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space.
End of explanation """
scan_onsets = np.int32(np.linspace(0, design.n_TR, num=n_run + 1)[:-1])
print('scan onsets: {}'.format(scan_onsets))
brsa = BRSA(GP_space=True, GP_inten=True)  # Initiate an instance, telling it
# that we want to impose a Gaussian Process prior
# over both space and intensity.
brsa.fit(X=Y, design=design.design_task,
         coords=coords_flat, inten=inten, scan_onsets=scan_onsets)
# The data to fit should be given to the argument X.
# The design matrix goes to design. And so on.
""" Explanation: Fit Bayesian RSA to our simulated data The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise. If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as nuisance argument to BRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True. End of explanation """ fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(brsa.C_, vmin=-0.1, vmax=1) plt.xlim([0, n_C]) plt.ylim([0, n_C]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Estimated correlation structure\n shared between voxels\n' 'This constitutes the output of Bayesian RSA\n') plt.show() fig = plt.figure(num=None, figsize=(4, 4), dpi=100) plt.pcolor(brsa.U_) plt.xlim([0, 16]) plt.ylim([0, 16]) plt.colorbar() ax = plt.gca() ax.set_aspect(1) plt.title('Estimated covariance structure\n shared between voxels\n') plt.show() """ Explanation: We can have a look at the estimated similarity in matrix brsa.C_. 
We can also compare the ideal covariance above with the one recovered, brsa.U_
End of explanation
"""
regressor = np.insert(design.design_task, 0, 1, axis=1)
betas_point = np.linalg.lstsq(regressor, Y)[0]
point_corr = np.corrcoef(betas_point[1:, :])
point_cov = np.cov(betas_point[1:, :])
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_corr, vmin=-0.1, vmax=1)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Correlation structure estimated\n'
          'based on point estimates of betas\n')
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_cov)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Covariance structure of\n'
          'point estimates of betas\n')
plt.show()
"""
Explanation: In contrast, we can have a look at the similarity matrix based on the Pearson correlation between point estimates of the betas of different conditions. This is what vanilla RSA might give
End of explanation
"""
fig = plt.figure(num=None, figsize=(5, 5), dpi=100)
plt.pcolor(np.reshape(brsa.nSNR_, [ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
ax.set_title('estimated pseudo-SNR')
plt.show()
fig = plt.figure(num=None, figsize=(5, 5), dpi=100)
plt.pcolor(np.reshape(snr / np.exp(np.mean(np.log(snr))), [ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
ax.set_title('true normalized pseudo-SNR')
plt.show()
RMS_BRSA = np.mean((brsa.C_ - ideal_corr)**2)**0.5
RMS_RSA = np.mean((point_corr - ideal_corr)**2)**0.5
print('RMS error of Bayesian RSA: {}'.format(RMS_BRSA))
print('RMS error of standard RSA: {}'.format(RMS_RSA))
print('Recovered spatial smoothness length scale: '
      '{}, vs. true value: {}'.format(brsa.lGPspace_, smooth_width))
print('Recovered intensity smoothness length scale: '
      '{}, vs. true value: {}'.format(brsa.lGPinten_, inten_kernel))
print('Recovered standard deviation of GP prior: '
      '{}, vs. true value: {}'.format(brsa.bGP_, tau))
"""
Explanation: We can make a comparison between the estimated SNR map and the true SNR map (normalized)
End of explanation
"""
plt.scatter(rho1, brsa.rho_)
plt.xlabel('true AR(1) coefficients')
plt.ylabel('recovered AR(1) coefficients')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
plt.scatter(np.log(snr) - np.mean(np.log(snr)), np.log(brsa.nSNR_))
plt.xlabel('true normalized log SNR')
plt.ylabel('recovered log pseudo-SNR')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
"""
Explanation: Empirically, the smoothness turns out to be over-estimated when the signal is weak. We can also look at how other parameters are recovered.
End of explanation
"""
plt.scatter(betas_simulated, brsa.beta_)
plt.xlabel('true betas (response amplitudes)')
plt.ylabel('recovered betas by Bayesian RSA')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
plt.scatter(betas_simulated, betas_point[1:, :])
plt.xlabel('true betas (response amplitudes)')
plt.ylabel('recovered betas by simple regression')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
"""
Explanation: Even though the variation is reduced in the estimated pseudo-SNR (due to overestimation of the smoothness of the GP prior in the low-SNR situation), the betas recovered by the model have a higher correlation with the true betas than those from simple regression, shown below. Obviously there is shrinkage of the estimated betas, as a result of the bias-variance tradeoff. But we think such shrinkage does preserve the patterns of the betas, and therefore the result is suitable for further use, e.g. for decoding purposes.
End of explanation
"""
u, s, v = np.linalg.svd(noise + inten)
plt.plot(s)
plt.xlabel('principal component')
plt.ylabel('singular value of unnormalized noise')
plt.show()
plt.pcolor(np.reshape(v[0,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the first principal component in unnormalized noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(brsa.beta0_[0,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the DC component in noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(inten, [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('The baseline intensity of the ROI')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(v[1,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the second principal component in unnormalized noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(brsa.beta0_[1,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the first recovered noise pattern\n not related to DC component in noise')
plt.colorbar()
plt.show()
"""
Explanation: The singular value decomposition of the noise, and the comparison between the first two principal components of the noise and the patterns of the first two nuisance regressors returned by the model. The principal components may not look exactly the same. The first principal components both capture the baseline image intensities (although they may sometimes appear counter-phase).
One can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you just choose 1 or 2, perhaps only the global drift would be captured. But including too many nuisance regressors would slow down the fitting and might risk overfitting. Users might consider starting in the range of 5-20. We do not have automatic cross-validation built in.
But you can use the score() function to do cross-validation and select the appropriate number. The idea here is similar to that in GLMdenoise (http://kendrickkay.net/GLMdenoise/) End of explanation """ noise_new = np.zeros([n_T, n_V]) noise_new[0, :] = np.dot(L_noise, np.random.randn(n_V))\ / np.sqrt(1 - rho1**2) for i_t in range(1, n_T): noise_new[i_t, :] = noise_new[i_t - 1, :] * rho1 \ + np.dot(L_noise,np.random.randn(n_V)) Y_new = signal + noise_new + inten ts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets) # ts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets) recovered_plot, = plt.plot(ts[:200, 8], 'b') design_plot, = plt.plot(design.design_task[:200, 8], 'g') plt.legend([design_plot, recovered_plot], ['design matrix for one condition', 'recovered time course for the condition']) plt.show() # We did not plot the whole time series for the purpose of seeing closely how much the two # time series overlap c = np.corrcoef(design.design_task.T, ts.T) # plt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1) plt.pcolor(c[0:16, 16:],vmin=-0.5,vmax=1) ax = plt.gca() ax.set_aspect(1) plt.title('correlation between true design matrix \nand the recovered task-related activity') plt.colorbar() plt.xlabel('recovered task-related activity') plt.ylabel('true design matrix') plt.show() # plt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1) plt.pcolor(c[16:, 16:],vmin=-0.5,vmax=1) ax = plt.gca() ax.set_aspect(1) plt.title('correlation within the recovered task-related activity') plt.colorbar() plt.show() """ Explanation: "Decoding" from new data Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of brsa to estimate the "design matrix" in this new dataset. 
End of explanation
"""
[score, score_null] = brsa.score(X=Y_new, design=design.design_task,
                                 scan_onsets=scan_onsets)
print("Score of full model based on the correct design matrix, assuming {} nuisance"
      " components in the noise: {}".format(brsa.n_nureg_, score))
print("Score of a null model with the same assumption except that there is no task-related response: {}".format(
    score_null))
plt.bar([0,1],[score, score_null], width=0.5)
plt.ylim(np.min([score, score_null])-100, np.max([score, score_null])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on new data')
plt.show()
[score_noise, score_noise_null] = brsa.score(X=noise_new+inten, design=design.design_task,
                                             scan_onsets=scan_onsets)
print("Score of full model for noise, based on the correct design matrix, assuming {} nuisance"
      " components in the noise: {}".format(brsa.n_nureg_, score_noise))
print("Score of a null model for noise: {}".format(
    score_noise_null))
plt.bar([0,1],[score_noise, score_noise_null], width=0.5)
plt.ylim(np.min([score_noise, score_noise_null])-100, np.max([score_noise, score_noise_null])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on noise')
plt.show()
"""
Explanation: Model selection by cross-validation: You can compare different models by cross-validating the parameters of one model, learnt from training data, on testing data. BRSA provides a score() function, which returns a pair of cross-validated log likelihoods for the testing data. The first value is the cross-validated log likelihood of the model you have specified. The second value is for a null model which assumes everything else is the same except that there is no task-related activity. Notice that comparing the score of your model of interest against its corresponding null model is not the only way to compare models.
You might also want to compare against a model using the same design matrix, but a different rank (especially rank 1, which means all task conditions have the same response pattern, differing only in magnitude). In general, in the context of BRSA, a model means the timing of each event and the way these events are grouped, together with other trivial parameters such as the rank of the covariance matrix and the number of nuisance regressors. All these parameters can influence model performance. In the future, we will provide an interface to test the performance of a model with a predefined similarity matrix or covariance matrix.
End of explanation
"""
gbrsa = GBRSA(nureg_method='PCA', auto_nuisance=True, logS_range=1, anneal_speed=20, n_iter=50)
# Initiate an instance of the marginalized version,
# which integrates out SNR and AR(1) parameters per voxel
# instead of imposing a Gaussian Process prior on them.
gbrsa.fit(X=Y, design=design.design_task,scan_onsets=scan_onsets)
# The data to fit should be given to the argument X.
# Design matrix goes to design. And so on.
plt.pcolor(np.reshape(gbrsa.nSNR_, (ROI_edge, ROI_edge*2)))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('SNR map estimated by marginalized BRSA')
plt.show()
plt.pcolor(np.reshape(snr, (ROI_edge, ROI_edge*2)))
ax = plt.gca()
ax.set_aspect(1)
plt.colorbar()
plt.title('true SNR map')
plt.show()
plt.scatter(snr, gbrsa.nSNR_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated pseudo-SNR')
plt.ylabel('estimated pseudo-SNR')
plt.show()
plt.scatter(np.log(snr), np.log(gbrsa.nSNR_))
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated log(pseudo-SNR)')
plt.ylabel('estimated log(pseudo-SNR)')
plt.show()
plt.pcolor(gbrsa.U_)
plt.colorbar()
plt.title('covariance matrix estimated by marginalized BRSA')
plt.show()
plt.pcolor(ideal_cov)
plt.colorbar()
plt.title('true covariance matrix')
plt.show()
plt.scatter(betas_simulated, gbrsa.beta_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated betas')
plt.ylabel('betas estimated by marginalized BRSA')
plt.show()
plt.scatter(rho1, gbrsa.rho_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated AR(1) coefficients')
plt.ylabel('AR(1) coefficients estimated by marginalized BRSA')
plt.show()
"""
Explanation: As can be seen above, the model with the correct design matrix explains new data with signals generated from the true model better than the null model, but explains pure noise worse than the null model.
We can also try the version which marginalizes SNR and rho for each voxel. This version is intended for analyzing data of a group of participants and estimating their shared similarity matrix, but it also allows analyzing a single participant.
End of explanation
"""
# "Decoding"
ts, ts0 = gbrsa.transform([Y_new],scan_onsets=[scan_onsets])
recovered_plot, = plt.plot(ts[0][:200, 8], 'b')
design_plot, = plt.plot(design.design_task[:200, 8], 'g')
plt.legend([design_plot, recovered_plot], ['design matrix for one condition',
                                           'recovered time course for the condition'])
plt.show()
# We did not plot the whole time series, so that it is easier to see
# how much the two time series overlap
c = np.corrcoef(design.design_task.T, ts[0].T)
plt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation between true design matrix \nand the recovered task-related activity')
plt.colorbar()
plt.xlabel('recovered task-related activity')
plt.ylabel('true design matrix')
plt.show()
plt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation within the recovered task-related activity')
plt.colorbar()
plt.show()
# cross-validation
[score, score_null] = gbrsa.score(X=[Y_new], design=[design.design_task],
                                  scan_onsets=[scan_onsets])
print("Score of full model based on the correct design matrix, assuming {} nuisance"
      " components in the noise: {}".format(gbrsa.n_nureg_, score))
print("Score of a null model with the same assumption except that there is no task-related response: {}".format(
    score_null))
plt.bar([0,1],[score[0], score_null[0]], width=0.5)
plt.ylim(np.min([score[0], score_null[0]])-100, np.max([score[0], score_null[0]])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on new data')
plt.show()
[score_noise, score_noise_null] = gbrsa.score(X=[noise_new+inten], design=[design.design_task],
                                              scan_onsets=[scan_onsets])
print("Score of full model for noise, based on the correct design matrix, assuming {} nuisance"
      " components in the noise: {}".format(gbrsa.n_nureg_, score_noise))
print("Score of a null model for noise: {}".format(
    score_noise_null))
plt.bar([0,1],[score_noise[0], score_noise_null[0]], width=0.5) plt.ylim(np.min([score_noise[0], score_noise_null[0]])-100, np.max([score_noise[0], score_noise_null[0]])+100) plt.xticks([0,1],['Model','Null model']) plt.ylabel('cross-validated log likelihood') plt.title('cross validation on noise') plt.show() """ Explanation: We can also do "decoding" and cross-validating using the marginalized version in GBRSA End of explanation """
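The model-versus-null comparison that score() performs can be illustrated in miniature on an ordinary regression problem: fit the parameters on training data only, then compare held-out Gaussian log likelihoods with and without the task regressor. This is only a toy stand-in for what BRSA computes, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 200, 200
X = rng.standard_normal((n_train + n_test, 1))   # a single task regressor
y = 1.5 * X[:, 0] + rng.standard_normal(n_train + n_test)

# Fit the response amplitude and noise variance on training data only.
X_tr, y_tr = X[:n_train], y[:n_train]
beta_hat = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
sigma2 = np.var(y_tr - X_tr @ beta_hat)

def gauss_loglik(y, mu, sigma2):
    """Log likelihood of y under N(mu, sigma2), summed over samples."""
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + (y - mu) ** 2 / sigma2)

# Score both models on held-out data, analogous to BRSA's score().
X_te, y_te = X[n_train:], y[n_train:]
score_full = gauss_loglik(y_te, X_te @ beta_hat, sigma2)
score_null = gauss_loglik(y_te, 0.0, np.var(y_tr))  # null: no task response
print(score_full > score_null)  # True: the regressor carries real signal
```

As in the BRSA example above, the full model should win on data containing real signal and lose to the null model on pure noise.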
henrysky/astroNN
demo_tutorial/VAE/.ipynb_checkpoints/variational_autoencoder_demo-checkpoint.ipynb
mit
%matplotlib inline %config InlineBackend.figure_format='retina' import numpy as np import pylab as plt from scipy.stats import norm from tensorflow.keras.layers import Input, Dense, Lambda, Layer, Add, Multiply from tensorflow.keras.models import Model, Sequential from tensorflow.keras import regularizers import tensorflow as tf from astroNN.nn.layers import KLDivergenceLayer from astroNN.nn.losses import nll tf.compat.v1.disable_eager_execution() """ Explanation: Variational Autoencoder demo with 1D data Here is astroNN, please take a look if you are interested in astronomy or how neural network applied in astronomy * Henry Leung - Astronomy student, University of Toronto - henrysky * Project advisor: Jo Bovy - Professor, Department of Astronomy and Astrophysics, University of Toronto - jobovy * Contact Henry: henrysky.leung [at] utoronto.ca * This tutorial is created on 13/Jan/2018 with Keras 2.1.2, Tensorflow 1.4.0, Nvidia CuDNN 6.1 for CUDA 8.0 (Optional), Python 3.6.3 Win10 x64 * Updated on 31/Jan/2020 with Tensorflow 2.1.0 Import everything we need first End of explanation """ original_dim = 4000 # Our 1D images dimension, each image has 4000 pixel intermediate_dim = 256 # Number of neurone our fully connected neural net has batch_size = 50 epochs = 15 epsilon_std = 1.0 def blackbox_image_generator(pixel, center, sigma): return norm.pdf(pixel, center, sigma) def model_vae(latent_dim): """ Main Model + Encoder """ x = Input(shape=(original_dim,)) h = Dense(intermediate_dim, activation='relu')(x) z_mu = Dense(latent_dim, kernel_regularizer=regularizers.l2(1e-4))(h) z_log_var = Dense(latent_dim)(h) z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var]) z_sigma = Lambda(lambda t: tf.exp(.5*t))(z_log_var) eps = Input(tensor=tf.random.normal(mean=0, stddev=epsilon_std, shape=(tf.shape(x)[0], latent_dim))) z_eps = Multiply()([z_sigma, eps]) z = Add()([z_mu, z_eps]) decoder = Sequential() decoder.add(Dense(intermediate_dim, input_dim=latent_dim, activation='relu')) 
    decoder.add(Dense(original_dim, activation='sigmoid'))
    x_pred = decoder(z)
    vae = Model(inputs=[x, eps], outputs=x_pred)
    encoder = Model(x, z_mu)
    return vae, encoder
"""
Explanation: Then we define the basic constants and functions, and define our neural network
End of explanation
"""
s_1 = np.random.normal(30, 1.5, 900)
s_2 = np.random.normal(15, 1, 900)
s_3 = np.random.normal(10, 1, 900)
s = np.concatenate([s_1, s_2, s_3])
plt.figure(figsize=(12, 12))
plt.hist(s[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(s[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(s[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
"""
Explanation: Now we will generate some true latent variables so we can pass them to a blackbox image generator to generate some 1D images. The blackbox image generator (which is deterministic) will take two numbers and generate images in a predictable way. This is important because if the generator generated images in a random way, there would be nothing for the neural network to learn. But for simplicity, we will fix the first latent variable of the blackbox image generator to a constant and only use the second one to generate images.
End of explanation
"""
# We have some images, each with 4000 pixels
x_train = np.zeros((len(s), original_dim))
for counter, S in enumerate(s):
    xs = np.linspace(0, 40, original_dim)
    x_train[counter] = blackbox_image_generator(xs, 20, S)
# Prevent NaNs from causing errors
x_train[np.isnan(x_train.astype(float))] = 0
x_train *= 10
# Add some noise to our images
x_train += np.random.normal(0, 0.2, x_train.shape)
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 1', fontsize=15)
plt.plot(x_train[500])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show()
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 2', fontsize=15)
plt.plot(x_train[1000])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show()
plt.figure(figsize=(8, 8))
plt.title('Example image from Population 3', fontsize=15)
plt.plot(x_train[1600])
plt.xlabel('Pixel', fontsize=15)
plt.ylabel('Flux', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.show()
"""
Explanation: Now we will pass the true latent variables to the blackbox image generator to generate some images. Below are example images from the three populations. They may seem to show no differences, but the neural network will usually pick up subtle features.
End of explanation
"""
latent_dim = 1  # Dimension of our latent space
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None)
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
plt.hist(z_test[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(z_test[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(z_test[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of latent variable value from neural net', fontsize=15)
plt.xlabel('Latent Variable Value from Neural Net', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
"""
Explanation: Now we will pass the images to the neural network and train with them.
End of explanation
"""
m_1A = np.random.normal(28, 2, 300)
m_1B = np.random.normal(19, 2, 300)
m_1C = np.random.normal(12, 1, 300)
m_2A = np.random.normal(28, 2, 300)
m_2B = np.random.normal(19, 2, 300)
m_2C = np.random.normal(12, 1, 300)
m_3A = np.random.normal(28, 2, 300)
m_3B = np.random.normal(19, 2, 300)
m_3C = np.random.normal(12, 1, 300)
m = np.concatenate([m_1A, m_1B, m_1C, m_2A, m_2B, m_2C, m_3A, m_3B, m_3C])
x_train = np.zeros((len(s), original_dim))
for counter in range(len(s)):
    xs = np.linspace(0, 40, original_dim)
    x_train[counter] = blackbox_image_generator(xs, m[counter], s[counter])
# Prevent NaNs from causing errors
x_train[np.isnan(x_train.astype(float))] = 0
x_train *= 10
# Add some noise to our images
x_train += np.random.normal(0, 0.1, x_train.shape)
plt.figure(figsize=(12, 12))
plt.hist(s[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(s[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(s[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable 1 used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
plt.figure(figsize=(12, 12))
plt.hist(m[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
plt.hist(m[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
plt.hist(m[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.title('Distribution of hidden variable 2 used to generate data', fontsize=15)
plt.xlabel('True Latent Variable Value', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
"""
Explanation: Yay!!
It seems the neural network recovered the three populations successfully. Although the recovered latent variable is not exactly the same as the original one we generated (at the very least, the scale isn't the same), you usually shouldn't expect the neural network to learn the real physics. In this case, the latent variable is just some transformation of the original one. You should still remember that we have fixed the first latent variable of the blackbox image generator. What happens if we also generate 3 populations for the first latent variable, with no correlation between the first and second latent variables (meaning that knowing the first latent value of an object gives you no information about its second latent value, because the two have nothing to do with each other)?
End of explanation
"""
latent_dim = 1  # Dimension of our latent space
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None)
epochs = 15
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
# plt.hist(z_test[:900], 70, density=1, facecolor='green', alpha=0.75, label='Population 1')
# plt.hist(z_test[900:1800], 70, density=1, facecolor='red', alpha=0.75, label='Population 2')
# plt.hist(z_test[1800:], 70, density=1, facecolor='blue', alpha=0.75, label='Population 3')
plt.hist(z_test[:300], 70, density=1, alpha=0.75, label='Population 1A')
plt.hist(z_test[300:600], 70, density=1, alpha=0.75, label='Population 1B')
plt.hist(z_test[600:900], 70, density=1, alpha=0.75, label='Population 1C')
plt.hist(z_test[900:1200], 70, density=1, alpha=0.75, label='Population 2A')
plt.hist(z_test[1200:1500], 70, density=1, alpha=0.75, label='Population 2B')
plt.hist(z_test[1500:1800], 70, density=1, alpha=0.75, label='Population 2C')
plt.hist(z_test[1800:2100], 70, density=1, alpha=0.75, label='Population 3A')
plt.hist(z_test[2100:2400], 70, density=1, alpha=0.75, label='Population 3B')
plt.hist(z_test[2400:2700], 70, density=1, alpha=0.75, label='Population 3C')
plt.title('Distribution of latent variable value from neural net', fontsize=15)
plt.xlabel('Latent Variable Value from Neural Net', fontsize=15)
plt.ylabel('Probability Density', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15)
plt.show()
"""
Explanation: Since we used two independent variables to generate our images, what happens if we still try to force the neural network to explain the images with just one variable? Before we run the training, we should think about what to expect. Let's denote the first latent variable's populations as 1, 2 and 3, and the second latent variable's populations as A, B and C. If we know an object is in population 2, it has an equal chance of being in population A, B or C. With this logic, we should have 9 unique populations in total (1A, 1B, 1C, 2A, 2B, 2C, 3A, 3B, 3C). If the neural network wants to explain the images with 1 latent variable, there should be 9 peaks in the plot.
End of explanation
"""
latent_dim = 2  # Dimension of our latent space
epochs = 40
vae, encoder = model_vae(latent_dim)
vae.compile(optimizer='rmsprop', loss=nll, weighted_metrics=None, loss_weights=None, sample_weight_mode=None)
vae.fit(x_train, x_train, shuffle=True, epochs=epochs, batch_size=batch_size, verbose=0)
z_test = encoder.predict(x_train, batch_size=batch_size)
plt.figure(figsize=(12, 12))
plt.scatter(z_test[:300, 0], z_test[:300, 1], s=4, label='Population 1A')
plt.scatter(z_test[300:600, 0], z_test[300:600, 1], s=4, label='Population 1B')
plt.scatter(z_test[600:900, 0], z_test[600:900, 1], s=4, label='Population 1C')
plt.scatter(z_test[900:1200, 0], z_test[900:1200, 1], s=4, label='Population 2A')
plt.scatter(z_test[1200:1500, 0], z_test[1200:1500, 1], s=4, label='Population 2B')
plt.scatter(z_test[1500:1800, 0], z_test[1500:1800, 1], s=4, label='Population 2C')
plt.scatter(z_test[1800:2100, 0], z_test[1800:2100, 1], s=4, label='Population 3A')
plt.scatter(z_test[2100:2400, 0], z_test[2100:2400, 1], s=4, label='Population 3B')
plt.scatter(z_test[2400:2700, 0], z_test[2400:2700, 1], s=4, label='Population 3C')
plt.title('Latent Space (Middle layer of Neurones)', fontsize=15)
plt.xlabel('Second Latent Variable (Neurone)', fontsize=15)
plt.ylabel('First Latent Variable (Neurone)', fontsize=15)
plt.tick_params(labelsize=12, width=1, length=10)
plt.legend(loc='best', fontsize=15, markerscale=6)
plt.show()
"""
Explanation: By visual inspection, it seems the neural network only recovered 6 populations :( What will happen if we increase the latent space of the neural network to 2?
End of explanation
"""
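The KLDivergenceLayer used in model_vae above adds the usual VAE regularization term to the loss. For a diagonal Gaussian posterior against a standard-normal prior that term has a closed form, sketched here in plain NumPy (this is the standard analytic expression, assumed to match what the layer computes):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Analytic KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=-1)

# The KL term vanishes when the posterior equals the prior (mu=0, log_var=0)...
print(kl_to_standard_normal(np.array([[0.0]]), np.array([[0.0]])))
# ...and penalizes posteriors that drift away from it.
print(kl_to_standard_normal(np.array([[2.0]]), np.array([[0.0]])))  # [2.]
```

This penalty is what keeps the encoder's latent codes concentrated near the origin, which is why the recovered populations above appear on a scale different from the true latent variables.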
mgeier/jupyter-presentation
jupyter-presentation.ipynb
cc0-1.0
import soundfile as sf sig, fs = sf.read('data/singing.wav') """ Explanation: Using Jupyter/IPython for Teaching <p xmlns:dct="http://purl.org/dc/terms/"> <a rel="license" href="http://creativecommons.org/publicdomain/zero/1.0/"> <img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" /> </a> </p> This notebook is meant to be a slide show. If it doesn't look like a slide show, you probably have to install RISE: python3 -m pip install rise --user python3 -m notebook.nbextensions install --python rise --user python3 -m notebook.nbextensions enable --python rise --user If it doesn't work, you might have to use python instead of python3. After the installation (and after re-loading the Jupyter notebook), you will have a new item in the toolbar which allows you to start the presentation. What is Jupyter? formerly known as IPython ("interactive Python") an interactive terminal and a browser-based notebook https://jupyter.org/ can be used with different programming languages: Julia (http://julialang.org/) Python (https://www.python.org/) R (http://www.r-project.org/) and many others ... What's so great about the Jupyter notebook? mix of text, code and results media images, audio, video anything a web browser can display equations One notebook, many uses interactive local use static online HTML pages on http://nbviewer.jupyter.org/ interactive online use at https://mybinder.org/ nbconvert HTML $\mathrm{\LaTeX}$ $\to$ PDF .py files ... slide shows! HTML5 &lt;audio&gt; tag <audio src="data/singing.wav" controls>Your browser does not support the audio element.</audio> singing.wav by www.openairlib.net; CC BY-SA. 
Loading Audio Data in Python End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import numpy as np t = np.arange(len(sig)) / fs plt.plot(t, sig) plt.xlabel('time / seconds') plt.grid() """ Explanation: Plotting End of explanation """ plt.specgram(sig, Fs=fs) plt.ylabel('frequency / Hz') plt.xlabel('time / seconds') plt.ylim(0, 10000); """ Explanation: Spectrogram Squared magnitude of the Short Time Fourier Transform (STFT) $$|\text{STFT}{x[n]}(m, \omega)|^2 = \left| \sum_{n=-\infty}^\infty x[n]w[n-m] \text{e}^{-j \omega n}\right|^2$$ End of explanation """ %matplotlib inline import sympy as sp sp.init_printing() t, sigma, omega = sp.symbols(('t', 'sigma', 'omega')) sigma = -2 omega = 10 s = sigma + sp.I * omega x = sp.exp(s * t) x sp.plotting.plot(sp.re(x),(t, 0, 2 * sp.pi), ylim=[-2, 2], ylabel='Re{$e^{st}$}') sp.plotting.plot(sp.im(x),(t, 0, 2 * sp.pi), ylim=[-2, 2], ylabel='Im{$e^{st}$}'); """ Explanation: Symbolic Math End of explanation """
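The spectrogram formula quoted above (the squared magnitude of the STFT) can also be computed directly. A minimal NumPy sketch, with window and hop lengths chosen arbitrarily rather than matching plt.specgram's defaults:

```python
import numpy as np

def stft_power(x, win_len=256, hop=128):
    """Squared-magnitude STFT with a Hann window, mirroring the formula above.

    A sketch only: the window, hop and FFT length are arbitrary choices, not
    the parameters matplotlib's specgram uses.
    """
    window = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    frames = np.stack([x[s:s + win_len] * window for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2  # (n_frames, win_len // 2 + 1)

fs = 8000
t = np.arange(fs) / fs                    # one second of samples
sig = np.sin(2 * np.pi * 440 * t)         # a pure 440 Hz tone
power = stft_power(sig)
peak_bin = power.mean(axis=0).argmax()
print(peak_bin * fs / 256)                # 437.5 -- the FFT bin nearest 440 Hz
```

For a pure tone, the power concentrates in the frequency bin closest to the tone, just as the spectrogram of the singing sample concentrates energy along its harmonics.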