A cool feature of GanjaScene is the ability to use + to draw both scenes together:
```python
draw(sc + sc_refl, sig=layout.sig, scale=0.5)
```
*Source: docs/tutorials/cga/visualization-tools.ipynb — arsenovic/clifford (bsd-3-clause)*
mpl_toolkits.clifford
While ganja.js produces great diagrams, it's hard to combine them with other plotting tools.
mpl_toolkits.clifford works within matplotlib.
```python
from matplotlib import pyplot as plt
plt.ioff()  # we'll ask for plotting when we want it

# if you're editing this locally, you'll get an interactive UI if you uncomment the following
#
# %matplotlib notebook

from mpl_toolkits.clifford import plot
import mpl_toolkits.clifford; mpl_toolkits.clifford.__version__
```
Assembling the plot is a lot more work, but we also get much more control:
```python
# standard matplotlib stuff - construct empty plots side-by-side, and set the scaling
fig, (ax_before, ax_both) = plt.subplots(1, 2, sharex=True, sharey=True)
ax_before.set(xlim=[-4, 4], ylim=[-4, 4], aspect='equal')
ax_both.set(xlim=[-4, 4], ylim=[-4, 4], aspect='equal')

# plot the objects before reflection on both plots
for ax in (ax_before, ax_both):
    plot(ax, [point], color='tab:blue', label='point', marker='x', linestyle=' ')
    plot(ax, [line], color='tab:green', label='line')
    plot(ax, [circle], color='tab:red', label='circle')

# plot the objects after reflection, with thicker lines
plot(ax_both, [point_refl], color='tab:blue', label='point_refl', marker='x', linestyle=' ', markeredgewidth=2)
plot(ax_both, [line_refl], color='tab:green', label='line_refl', linewidth=2)

fig.tight_layout()
ax_both.legend()

# show the figure
fig
```
G3C
Let's repeat the above, but with 3D Conformal Geometric Algebra.
Note that if you're viewing these docs in a Jupyter notebook, the lines below will replace all your 2D variables with 3D ones.
```python
from clifford.g3c import *

point = up(2*e1 + e2)
line = up(3*e1 + 2*e2) ^ up(3*e1 - 2*e2) ^ einf
circle = up(e1) ^ up(-e1 + 1.6*e2 + 1.2*e3) ^ up(-e1 - 1.6*e2 - 1.2*e3)
sphere = up(3*e1) ^ up(e1) ^ up(2*e1 + e2) ^ up(2*e1 + e3)

# note that due to floating point rounding, we need to truncate back to a single grade here, with ``(grade)``
point_refl = homo((circle * point.gradeInvol() * ~circle)(1))
line_refl = (circle * line.gradeInvol() * ~circle)(3)
sphere_refl = (circle * sphere.gradeInvol() * ~circle)(4)
```
pyganja
Once again, we can create a pair of scenes exactly as before
```python
sc = GanjaScene()
sc.add_object(point, color=(255, 0, 0), label='point')
sc.add_object(line, color=(0, 255, 0), label='line')
sc.add_object(circle, color=(0, 0, 255), label='circle')
sc.add_object(sphere, color=(0, 255, 255), label='sphere')

sc_refl = GanjaScene()
sc_refl.add_object(point_refl, color=(128, 0, 0), label='point_refl')
sc_refl.add_object(line_refl.normal(), color=(0, 128, 0), label='line_refl')
sc_refl.add_object(sphere_refl.normal(), color=(0, 128, 128), label='sphere_refl')
```
But this time, when we draw them we don't need to pass sig.
Better yet, we can rotate the 3D world around using left click, pan with right click, and zoom with the scroll wheel.
```python
draw(sc + sc_refl, scale=0.5)
```
Some more examples of using pyganja to visualize 3D CGA can be found in the interpolation and clustering notebooks.
mpl_toolkits.clifford
The 3D approach for matplotlib is much the same.
Note that due to poor handling of rounding errors in clifford.tools.classify, a call to .normal() is needed.
Along with explicit grade selection, this is a useful trick to try and get something to render which otherwise would not.
```python
# standard matplotlib stuff - construct empty plots side-by-side, and set the scaling
fig, (ax_before, ax_both) = plt.subplots(1, 2, subplot_kw=dict(projection='3d'), figsize=(8, 4))
ax_before.set(xlim=[-4, 4], ylim=[-4, 4], zlim=[-4, 4])
ax_both.set(xlim=[-4, 4], ylim=[-4, 4], zlim=[-4, 4])

# plot the objects before reflection on both plots
for ax in (ax_before, ax_both):
    plot(ax, [point], color='tab:red', label='point', marker='x', linestyle=' ')
    plot(ax, [line], color='tab:green', label='line')
    plot(ax, [circle], color='tab:blue', label='circle')
    plot(ax, [sphere], color='tab:cyan')  # labels do not work for spheres: pygae/mpl_toolkits.clifford#5

# plot the objects after reflection
plot(ax_both, [point_refl], color='tab:red', label='point_refl', marker='x', linestyle=' ', markeredgewidth=2)
plot(ax_both, [line_refl.normal()], color='tab:green', label='line_refl', linewidth=2)
plot(ax_both, [sphere_refl], color='tab:cyan')

fig.tight_layout()
ax_both.legend()

# show the figure
fig
```
Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features, a split-apply-combine
workflow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
Note: equivalent methods are available for raw and evoked data objects.
Short Pandas Primer
Pandas Data Frames
~~~~~~~~~~~~~~~~~~
A data frame can be thought of as a combination of matrix, list and dict:
it knows about linear algebra and element-wise operations, but is size-mutable
and allows labeled access to its data. In addition, the pandas DataFrame
class provides many useful methods for restructuring, reshaping and visualizing
data. As most methods return DataFrame instances, operations can be chained
with ease; this allows writing efficient one-liners. Technically, a DataFrame
can be seen as a high-level container for numpy arrays, so switching
back and forth between numpy arrays and DataFrames is very easy.
Taken together, these features qualify data frames for interoperation with
databases and for interactive data exploration and analysis.
Additionally, pandas interfaces with the R statistical computing language, which
covers a huge amount of statistical functionality.
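That numpy interoperability can be sketched in a couple of lines (a minimal example with made-up values and a hypothetical channel name, not data from this tutorial):

```python
import numpy as np
import pandas as pd

# wrap a plain numpy array in a DataFrame with labeled columns ...
arr = np.arange(6.0).reshape(3, 2)
df = pd.DataFrame(arr, columns=['MEG 0111', 'MEG 0112'])

# ... and recover the raw array via the .values attribute
round_trip = df.values
print(np.array_equal(arr, round_trip))
```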
Export Options
~~~~~~~~~~~~~~
The pandas exporter comes with a few options worth commenting on.
Pandas DataFrame objects use a so-called hierarchical index. This can be
thought of as an array of unique tuples, in our case representing the higher-dimensional
MEG data in a 2D data table. The column names are the channel names
from the epochs object. The channels can be accessed like entries of a
dictionary:
df['MEG 2333']
Epochs and time slices can be accessed with the .ix method:
epochs_df.ix[(1, 2), 'MEG 2333']
However, it is also possible to include this index as regular categorical data
columns, which yields the long table format typically used for repeated-measures
designs. To take control of this feature, on export you can specify which
of the three dimensions 'condition', 'epoch' and 'time' is passed to the pandas
index using the index parameter. Note that this decision is reversible at any
time, as demonstrated below.
Similarly, for convenience, it is possible to scale the times, e.g. from
seconds to milliseconds.
Some Instance Methods
~~~~~~~~~~~~~~~~~~~~~
Most numpy methods and many ufuncs are available as instance methods, e.g.
mean, median, var, std, mul, max, argmax, etc.
Below is an incomplete listing of additional useful DataFrame instance methods:
apply : apply function to data.
Any kind of custom function can be applied to the data. In combination with
lambda this can be very useful.
describe : quickly generate summary stats
Very useful for exploring data.
groupby : generate subgroups and initialize a 'split-apply-combine' operation.
Creates a group object. Subsequently, methods like apply, agg, or transform
can be used to manipulate the underlying data separately but
simultaneously. Finally, reset_index can be used to combine the results
back into a data frame.
plot : wrapper around plt.plot
However, it comes with some special options. For examples see below.
shape : shape attribute
gets the dimensions of the data frame.
values :
return underlying numpy array.
to_records :
export data as numpy record array.
to_dict :
export data as dict of arrays.
Reference
~~~~~~~~~
More information and additional introductory materials can be found at the
pandas doc sites: http://pandas.pydata.org/pandas-docs/stable/
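The split-apply-combine pattern described under groupby can be sketched on a toy frame (hypothetical condition/latency values, not the MEG data used below):

```python
import pandas as pd

df = pd.DataFrame({
    'condition': ['auditory', 'auditory', 'visual', 'visual'],
    'epoch': [0, 1, 0, 1],
    'latency': [0.12, 0.10, 0.20, 0.18],
})

# split by condition, apply a mean to each group, combine back into a frame
mean_latency = df.groupby('condition')['latency'].mean().reset_index()
print(mean_latency)
```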
```python
# Author: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)

import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'

raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)  # set EEG average reference

# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]

# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
                       stim=False, exclude='bads')

tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)

epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
                    baseline=baseline, preload=True, reject=reject)
```
*Source: 0.15/_downloads/plot_epochs_to_data_frame.ipynb — mne-tools/mne-tools.github.io (bsd-3-clause)*
Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images that belong to one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
```python
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np

# Explore the dataset
batch_id = 2
sample_id = 18
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
```
*Source: Deep_learning_ND/dlnd_image_classification.ipynb — sarathid/Learning (gpl-3.0)*
Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
```python
import problem_unittests as tests  # unit-test helpers shipped with the project (assumed module name)


def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # pixel values are 8-bit, so dividing by the maximum value 255 maps them into [0, 1]
    return x / 255.0


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
```
One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible label values are 0 to 9. The one-hot encoding function should return the same encoding for each value between calls to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
```python
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    # index an identity matrix with the labels: row i is the one-hot vector for label i
    total_classes = 10
    return np.eye(total_classes)[x]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
```
Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
```python
import tensorflow as tf


def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # a batch dimension of None allows a dynamic number of images per batch
    shape = (None, image_shape[0], image_shape[1], image_shape[2])
    return tf.placeholder(tf.float32, shape=shape, name='x')


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    shape = (None, n_classes)
    return tf.placeholder(tf.float32, shape=shape, name='y')


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    return tf.placeholder(tf.float32, name='keep_prob')


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
```
Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
```python
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # Weight and bias: the filter depth must match the input's channel dimension
    in_dim = x_tensor.get_shape().as_list()[3]
    weight_shape = [conv_ksize[0], conv_ksize[1], in_dim, conv_num_outputs]
    # (a smaller stddev, e.g. 0.05, usually trains more reliably than 1.0)
    weight = tf.Variable(tf.truncated_normal(weight_shape, mean=0.0, stddev=1.0))
    bias = tf.Variable(tf.zeros(conv_num_outputs))

    # Apply convolution
    strides = [1, conv_strides[0], conv_strides[1], 1]
    conv_layer = tf.nn.conv2d(x_tensor, weight, strides=strides, padding='SAME')

    # Add bias
    conv_layer = tf.nn.bias_add(conv_layer, bias)

    # Apply activation function
    conv_layer = tf.nn.relu(conv_layer)

    # Apply max pooling
    ksize = [1, pool_ksize[0], pool_ksize[1], 1]
    pool_stride = [1, pool_strides[0], pool_strides[1], 1]
    conv_layer = tf.nn.max_pool(conv_layer, ksize=ksize, strides=pool_stride, padding='SAME')
    return conv_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
```
Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```python
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # shortcut option: use the contrib layers implementation
    return tf.contrib.layers.flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
```
Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
```python
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # no activation here - the layer must return raw logits (see the note above)
    return tf.layers.dense(inputs=x_tensor, units=num_outputs, activation=None)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
```
Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
```python
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # Two Convolution and Max Pool layers
    # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_num_outputs = 64
    conv_ksize = (int(x.shape[1].value / 2), int(x.shape[2].value / 2))
    conv_strides = (4, 4)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    conv = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)

    conv_num_outputs = 10
    conv_ksize = (2, 2)
    conv_strides = (4, 4)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    conv = conv2d_maxpool(conv, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv = tf.nn.dropout(conv, keep_prob)

    # Flatten Layer
    conv = flatten(conv)

    # Two Fully Connected Layers with dropout in between
    # (fully_conn(x_tensor, num_outputs) is implemented in an earlier notebook cell, not shown here)
    conv = fully_conn(conv, conv_num_outputs)
    conv = tf.nn.dropout(conv, keep_prob)
    conv = fully_conn(conv, conv_num_outputs)

    # Output Layer sized to the number of classes
    conv = output(conv, conv_num_outputs)
    return conv


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc.
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
```
Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
```python
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # keep_prob is 1.0 so that dropout is disabled while evaluating
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Loss: {:.6f} Accuracy: {:.6f}'.format(loss, valid_acc))
```
Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people use common memory sizes:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
```python
# TODO: Tune Parameters
epochs = 10
batch_size = 64
keep_probability = 0.5
```
It is thus very easy to program this function in Python. But this has nothing to do with machine learning; it is the classical programming paradigm:<br>
- there is a rule
- there is an input
- the algorithm produces an output<br>
In German this principle is often called EVA - Eingabe/Verarbeitung/Ausgabe (input/processing/output).<br>
In machine learning the underlying rule is not known.
We only know the input and the output, and the ML algorithm is supposed to learn the underlying parameters.<br>
For this problem that means the parameters 1.8 and 32 first have to be learned.<br>
So we want to train an ML model based on TensorFlow and Keras to learn these parameters.<br>
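The classical rule-based version described above can be sketched in a few lines (a minimal example; the function name is illustrative):

```python
def celsius_to_fahrenheit(celsius):
    # the fixed rule: multiply by 1.8 and add 32
    return celsius * 1.8 + 32

print(celsius_to_fahrenheit(100.0))  # 212.0
```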
<h2>Importing the required libraries</h2>
```python
# Import numpy as np
import numpy as np
# Import tensorflow as tf
# Import the Sequential model
# from keras.models import Sequential
# Import the Dense layer
# from keras.layers import Dense
# Import matplotlib.pyplot as plt
# Set the magic command: %matplotlib inline

# Hand the values to learn and their conversion results over to numpy arrays
celsius_i = np.array([-40, -10, 0, 8, 12.5, 15, 22, 38, 49.5], dtype=float)
fahrenheit_o = np.array([-40, 14, 32, 46.4, 54.5, 59, 71.6, 100.4, 121.1], dtype=float)

# Print the values to learn
print("The values to learn are:")
for i, c in enumerate(celsius_i):
    print("{} degrees Celsius = {} degrees Fahrenheit".format(c, fahrenheit_o[i]))
```
*Source: 20-10-14-ml-workcamp/wc-arbeiten-tf-11-aufgabe.ipynb — mediagit2016/workcamp-maschinelles-lernen-grundlagen (gpl-3.0)*
<h2>Setting up the neural network as nn</h2>
```python
# Initialize a neural network nn with Sequential()
# nn = Sequential()
# Add a Dense layer with one neuron, (units=1, input_dim=1)
#
# How many parameters does this layer have?
#
# nn.add(Dense(units=1, input_dim=1))

# We want to use the adam optimizer with a specific learning rate;
# with optimizer='adam' only the default values would be used.
# Define the optimizer with a learning rate of 0.1
optimizer_adam = tf.keras.optimizers.Adam(0.1)

# Compile the model
nn.compile(optimizer=optimizer_adam, loss='mean_squared_error')

# Check the configuration
# nn.summary()
# How many parameters are trainable?

# Train the model with 1000 epochs
epoch_num = #value#
history = nn.fit(celsius_i, fahrenheit_o, epochs=epoch_num, verbose=1)
print("Training is finished")

# Plot how the loss developed
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
```
<h2>Predicting values</h2>
```python
# Prediction for the value 100.0
ergebnis = nn.predict([#value#])
print(ergebnis)

# Prediction for the value 85.5
ergebnis = nn.predict([#value#])
print(ergebnis)
```
<h3 style="color:orange; font-weight:bold;">This is quite an astonishing result, isn't it?</h3>
We therefore want to check which values were learned<br>
in this single layer.
<h2>Reading out the learned parameters</h2>
```python
# Print the layer parameters that were learned
print("These are the layer variables: {}".format(nn.get_weights()))
# expected result: [1.8, 32]
```
In this single-layer neural network the conversion formula<br>
can thus be reproduced exactly, and the parameters are learned.<br>
<br>
But what does it look like if we choose a different structure for the neural network?<br>
<h2>Extending the neural network: nn2</h2>
```python
# Initialize the neural network nn2
# nn2 = Sequential()
#
# Add the layers (assigning names)
#
# nn2.add(Dense(units=4, input_dim=1, name='a'))
# nn2.add(Dense(units=4, name='b'))
# nn2.add(Dense(units=1, name='c'))

# Think about it:
# How many parameters does layer a have?
# How many parameters does layer b have?
# How many parameters does layer c have?
# How many parameters does the model have in total?
#
# Define the optimizer with a learning rate of 0.1
optimizer_a = tf.keras.optimizers.Adam(0.1)

# Compile the model
nn2.compile(optimizer=optimizer_a, loss='mean_squared_error')
#
# Check the configuration
# How many parameters does layer a have?
# How many parameters does layer b have?
# How many parameters does layer c have?
#
# nn2.summary()

# Train the model nn2 with 1000 epochs
epoch_num = #value#
history = nn2.fit(celsius_i, fahrenheit_o, epochs=epoch_num, verbose=1)
print("Training is finished")
```
<h2>Predicting values</h2>
```python
# Prediction for the value 100.0
ergebnis = nn2.predict([#value#])
print(ergebnis)

# Prediction for the value 85.5
ergebnis = nn2.predict([#value#])
print(ergebnis)
```
<h3 style="color:orange;">This model thus delivers nearly identical predictions</h3>
```python
print("Model nn2 determines that 100 degrees Celsius is: {} degrees Fahrenheit.".format(nn2.predict([100.0])))
```
But what do the parameters in the layers look like?
```python
# Show the layers
# nn2.layers
nn2.layers
```
All parameters of a layer can be accessed via its methods<br>
get_weights()<br>
set_weights()<br>
For a Dense layer this includes both the<br>
weights and the bias values.<br>
<br>
We read the layer values into the variables input1, input2 and input3.<br>
The layers have the indices 0-2.
```python
# Read layers 1-3 into variables - indices 0-2
input1 = nn2.layers[0]
input2 = nn2.layers[1]
input3 = nn2.layers[2]

# Read the weights and bias values from layer a - it has index 0
weights, biases = input1.get_weights()
# Show the weights of layer a
weights
# Show the bias values of layer a
biases
```
<h2>Reading out the values of layer b</h2>
```python
# Read the weights and bias values from layer b - it has index 1
# and was read into the variable input2
weights1, biases1 = input2.get_weights()
# Show the weights of layer b
weights1
# Show the bias values of layer b
biases1
```
<h2>Reading the values for layer c</h2>
|
# Read the weights and bias values from layer c - it has index 2 and
# was stored in the variable input3
weights2, biases2 = input3.get_weights()
# Print the weights in layer c
weights2
# Print the bias values in layer c
biases2
print("These are the parameters in layer a: {}".format(input1.get_weights()))
print("These are the parameters in layer b: {}".format(input2.get_weights()))
print("These are the parameters in layer c: {}".format(input3.get_weights()))
#
# Now try to determine the values of the layer
# in the model nn from our exercise 10!
#
|
20-10-14-ml-workcamp/wc-arbeiten-tf-11-aufgabe.ipynb
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen
|
gpl-3.0
|
<h2>Setting up a neural network nn3 and checking
the parameters before and after training</h2>
|
# Initialize the model nn3 as Sequential
nn3 = Sequential()
# Add the layers
nn3.add(Dense(units=16, input_dim=1))
nn3.add(Dense(units=8))
nn3.add(Dense(units=1))
#
# Work out the number of parameters in your head first
#
# Check the configuration
nn3.summary()
# Define the optimizer with a learning rate of 0.1
optimizer_a = tf.keras.optimizers.Adam(0.1)
# Compile the model
nn3.compile(optimizer=optimizer_a, loss='mean_squared_error')
# Print the layers
nn3.layers
# Read layers 1-3 into variables - indices 0-2
inp1 = nn3.layers[0]
inp2 = nn3.layers[1]
inp3 = nn3.layers[2]
print("These are the parameters in layer 0: {}".format(inp1.get_weights()))
|
20-10-14-ml-workcamp/wc-arbeiten-tf-11-aufgabe.ipynb
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen
|
gpl-3.0
|
Why are there values already? Why are the bias values 0?
|
print("These are the parameters in layer 1: {}".format(inp2.get_weights()))
print("These are the parameters in layer 2: {}".format(inp3.get_weights()))
# Train the model nn3 with 800 epochs
epoch_num = #Wert#
history = nn3.fit(celsius_i, fahrenheit_o, epochs=epoch_num, verbose=1)
print("Training finished")
# Plot how the loss develops over the epochs
plt.xlabel('Epoch Number')
plt.ylabel("Loss Magnitude")
plt.plot(history.history['loss'])
# Prediction for the value 100.0
ergebnis = nn3.predict([#Wert#])
print(ergebnis)
#
# Now, after training, try to print the parameters
# of the layers of the model nn3!
#
i1 = nn3.layers[0]
i2 = nn3.layers[1]
i3 = nn3.layers[2]
print("These are the parameters in layer 2 after training: {}".format(i3.get_weights()))
#
# How has the bias value changed after the training?
#
|
20-10-14-ml-workcamp/wc-arbeiten-tf-11-aufgabe.ipynb
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen
|
gpl-3.0
|
Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account a threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
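A minimal sketch of the sigmoid activation and its derivative asked for above (the real implementation belongs in `my_answers.py`; this is only illustrative):

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation: squashes any input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, needed for backpropagation
    s = sigmoid(x)
    return s * (1 - s)

# The output activation is f(x) = x, whose derivative is simply 1
print(sigmoid(0.0))        # 0.5
print(sigmoid_prime(0.0))  # 0.25
```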
|
#############
# In the my_answers.py file, fill out the TODO sections as specified
#############
from my_answers import NeuralNetwork
def MSE(y, Y):
return np.mean((y-Y)**2)
|
first-neural-network/Your_first_neural_network.ipynb
|
fnakashima/deep-learning
|
mit
|
Unfortunately, making the queries directly in Django is complicated and takes a long time to run. Instead, we query the database directly using Postico and export the results to CSV for further processing.
PostgreSQL command
You can use the following sql command to retrieve the data corresponding to one day.
SELECT
agency.id,
service_date.id, service_date.date,
route.id, route.short_name, route.long_name,
trip.id, trip.headsign, trip.short_name,
stop_time.id, stop_time.arrival_time, stop_time.departure_time, stop_time.stop_sequence,
stop.id, stop.stop_id, stop.name,
capacity_path.id, capacity_path.path,
capacity_capacity.id, capacity_capacity.capacity1st, capacity_capacity.capacity2nd
FROM service_date
LEFT OUTER JOIN trip ON (service_date.service_id = trip.service_id)
LEFT OUTER JOIN route ON (route.id = trip.route_id)
LEFT OUTER JOIN agency ON (agency.id = route.agency_id)
LEFT OUTER JOIN stop_time ON (stop_time.trip_id = trip.id)
LEFT OUTER JOIN stop ON (stop.id = stop_time.stop_id)
LEFT OUTER JOIN capacity_path ON (capacity_path.trip_id = trip.id AND capacity_path.stop_id = stop.id)
LEFT OUTER JOIN capacity_capacity ON (capacity_capacity.trip_id = trip.id AND capacity_capacity.stop_id = stop.id AND capacity_capacity.service_date_id = service_date.id)
WHERE
(agency.id = 31)
AND service_date.date = '2017-01-30'
AND stop.stop_id NOT IN ('132','133','134','135','136','137','138', '139', '140', '141','142','174','175', '176')
ORDER BY
trip.id ASC,
stop_time.stop_sequence ASC
Helper
|
def strip_id(s):
try:
index = s.index(':')
except ValueError:
index = len(s)
return s[:index]
columns = [
'agency_id',
'service_date_id', 'service_date_date',
'route_id', 'route_short_name', 'route_long_name',
'trip_id', 'trip_headsign', 'trip_short_name',
'stop_time_id', 'stop_time_arrival_time', 'stop_time_departure_time', 'stop_time_stop_sequence',
'stop_id', 'stop_stop_id', 'stop_name',
'capacity_path_id', 'capacity_path_path',
'capacity_capacity_id', 'capacity_capacity_capacity1st', 'capacity_capacity_capacity2nd'
]
in_dir = "in_data/"
out_dir = "out_data/"
|
preprocessing/process_csv.ipynb
|
tOverney/ADA-Project
|
apache-2.0
|
We process the CSV files to stem the stop_id values, as they are currently not in the official form: the geops dataset adds a suffix to each stop_id when it corresponds to a different route.
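For example, the `strip_id` helper defined above stems a suffixed id like this (the id value below is made up for illustration):

```python
def strip_id(s):
    # Keep only the part of the id before the first ':' suffix
    try:
        index = s.index(':')
    except ValueError:
        index = len(s)
    return s[:index]

print(strip_id('8503000:0:1'))  # 8503000
print(strip_id('8503000'))      # 8503000 (no suffix: unchanged)
```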
|
dates = ['2017-01-30','2017-01-31','2017-02-01','2017-02-02','2017-02-03','2017-02-04','2017-02-05']
for date in dates:
file = in_dir + date + '.csv'
df = pd.read_csv(file)
df.columns = columns
df['stop_stop_id'] = df['stop_stop_id'].apply(lambda x: strip_id(x))
df.to_csv(out_dir + date + '_processed.csv')
|
preprocessing/process_csv.ipynb
|
tOverney/ADA-Project
|
apache-2.0
|
Load data
To download data go to https://map.ox.ac.uk/explorer/#/ and select the layer Plasmodium falciparum parasite rate in 2-10 year olds in Africa and click download. Select the zip file option. You should then have a zip file called 2015_Nature_Africa_PR.2000.zip.
Unzip the folder and enter the folder location below.
|
import os
import numpy as np
import pandas as pd
data_folder_location = '~/Downloads/2015_Nature_Africa_PR.2000/'
name_lf = '2015_Nature_Africa_PR.2005.tif'
name_hf = '2015_Nature_Africa_PR.2015.tif'
if not os.path.exists(name_lf[:-3] + 'csv'):
import georaster
def get_map_as_df(path):
my_image = georaster.SingleBandRaster(path, load_data=False)
return pd.DataFrame(data=np.stack([my_image.coordinates()[1].flatten(), my_image.coordinates()[0].flatten(),
my_image.read_single_band(1).flatten()], axis=1), columns=['latitude', 'longitude', 'value'])
lf_data = get_map_as_df(os.path.join(data_folder_location, name_lf))
hf_data = get_map_as_df(os.path.join(data_folder_location, name_hf))
else:
lf_data = pd.read_csv(name_lf[:-3] + 'csv')
hf_data = pd.read_csv(name_hf[:-3] + 'csv')
|
emukit/examples/multi_fidelity_dgp/malaria_data_example.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Change paths to where your data is stored
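The logit transform used in the next cell maps values in the unit interval onto the real line; its exact inverse is the logistic (`expit`) function, which lets us map model predictions back to parasite rates:

```python
import numpy as np
from scipy.special import logit, expit

# Parasite-rate values live in (0, 1); logit maps them to the real line
p = np.array([0.1, 0.5, 0.9])
z = logit(p)      # approximately [-2.197, 0.0, 2.197]
print(z)

# expit is the exact inverse, recovering the original rates
print(expit(z))
```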
|
import scipy.special
# Discard points where we have no data
lf_valid = lf_data.value > 0
hf_valid = hf_data.value > 0
y_lf = lf_data.value.values[lf_valid, None]
y_hf = hf_data.value.values[hf_valid, None]
# Transform data so it lies on real line
y_lf_transformed = scipy.special.logit(y_lf)
y_hf_transformed = scipy.special.logit(y_hf)
# Construct features
x_lf = np.stack([lf_data.latitude.values[lf_valid], lf_data.longitude.values[lf_valid]], axis=1)
x_hf = np.stack([hf_data.latitude.values[hf_valid], hf_data.longitude.values[hf_valid]], axis=1)
# Choose a random subset of high fidelity points for training
np.random.seed(0)
i_train = np.random.choice(x_hf.shape[0], 1000, replace=False)
x_hf_train = x_hf[i_train, :]
y_hf_train = y_hf_transformed[i_train, :]
|
emukit/examples/multi_fidelity_dgp/malaria_data_example.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Initialize inducing points to a subset of the data
|
i_z = np.random.choice(x_lf.shape[0], 1000, replace=False)
z_low = x_lf[i_z, :]
i_z_low = np.random.choice(x_lf.shape[0], 1000, replace=False)
z_high = np.concatenate([x_lf[i_z_low, :], y_lf_transformed[i_z_low, :]], axis=1)
dgp = make_dgpMF_model([x_lf, x_hf_train], [y_lf_transformed, y_hf_train], [z_low, z_high])
|
emukit/examples/multi_fidelity_dgp/malaria_data_example.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Training loop + some printing
|
class PrintAction(Action):
def __init__(self, model, text):
self.model = model
self.text = text
def run(self, ctx):
if ctx.iteration % 500 == 0:
likelihood = ctx.session.run(self.model.likelihood_tensor)
objective = ctx.session.run(self.model.objective)
print('ELBO {:.4f}; KL {:,.4f}'.format(ctx.session.run(self.model.L), ctx.session.run(self.model.KL)))
print('{}: iteration {} objective {:,.4f}'.format(self.text, ctx.iteration, objective))
def run_adam(model, lr, iterations, callback=None):
adam = AdamOptimizer(lr).make_optimize_action(model)
actions = [adam] if callback is None else [adam, callback]
loop = Loop(actions, stop=iterations)()
model.anchor(model.enquire_session())
dgp.layers[0].feature.Z.trainable = False
dgp.layers[1].feature.Z.trainable = False
dgp.layers[0].q_sqrt = dgp.layers[0].q_sqrt.value * 1e-6
dgp.layers[0].q_sqrt.trainable = False
dgp.likelihood.likelihood.variance = y_hf_train.var() * .01
dgp.likelihood.likelihood.variance.trainable = False
dgp.run_adam(0.01, 15000)
dgp.likelihood.likelihood.variance.trainable = True
dgp.layers[0].q_sqrt.trainable = True
dgp.run_adam(3e-3, 10000)
|
emukit/examples/multi_fidelity_dgp/malaria_data_example.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Test against a subset of high fidelity data
|
n_test = 10000
idxs = np.arange(0, y_hf.shape[0])
idxs_minus_train = np.array(list(set(idxs) - set(i_train)))
np.random.seed(123)
i_test = np.random.choice(idxs_minus_train, n_test, replace=False)
x_test = x_hf[i_test, :]
y_test = y_hf[i_test, :]
import scipy
# batch predict
batch_size = 1000
n_batches = int(np.ceil(n_test/batch_size))
y_result = np.zeros(n_test)
for i in range(n_batches):
i_start = i*batch_size
i_end = np.min([(i+1) * batch_size, n_test])
transformed_predictions = dgp.predict_f(x_test[i_start:i_end, :], 50)[0].mean(axis=0)
y_result[i_start:i_end] = scipy.special.expit(transformed_predictions)[:, 0]
plt.figure(figsize=(12, 12))
plt.scatter(y_test, y_result, alpha=0.1)
min_max = [y_hf.min(), y_hf.max()]
plt.plot(min_max, min_max, color='r')
plt.xlabel('Truth')
plt.ylabel('Prediction');
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
print(r2_score(y_test, y_result))
print(np.sqrt(mean_squared_error(y_test, y_result)))
print(mean_absolute_error(y_test, y_result))
|
emukit/examples/multi_fidelity_dgp/malaria_data_example.ipynb
|
EmuKit/emukit
|
apache-2.0
|
Importing from HDF5
The openmc.data module can read OpenMC's HDF5-formatted data into Python objects. The easiest way to do this is with the openmc.data.IncidentNeutron.from_hdf5(...) factory method. Replace the filename variable below with a valid path to an HDF5 data file on your computer.
|
# Get filename for Gd-157
filename ='/home/romano/openmc/scripts/nndc_hdf5/Gd157.h5'
# Load HDF5 data into object
gd157 = openmc.data.IncidentNeutron.from_hdf5(filename)
gd157
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
There is also a summed_reactions attribute for cross sections (like total) which are built by summing up other cross sections.
|
pprint(list(gd157.summed_reactions.values()))
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
Note that the cross sections for these reactions are represented by the Sum class rather than Tabulated1D. They do not support the x and y attributes.
|
gd157[27].xs
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
Converting ACE to HDF5
The openmc.data package can also read ACE files and output HDF5 files. ACE files can be read with the openmc.data.IncidentNeutron.from_ace(...) factory method.
|
filename = '/opt/data/ace/nndc/293.6K/Gd_157_293.6K.ace'
gd157_ace = openmc.data.IncidentNeutron.from_ace(filename)
gd157_ace
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
We can store the data from this ACE file as HDF5 with the export_to_hdf5() method.
|
gd157_ace.export_to_hdf5('gd157.h5', 'w')
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
With few exceptions, the HDF5 file encodes the same data as the ACE file.
|
gd157_reconstructed = openmc.data.IncidentNeutron.from_hdf5('gd157.h5')
gd157_ace[16].xs['294K'].y - gd157_reconstructed[16].xs['294K'].y
|
docs/source/pythonapi/examples/nuclear-data.ipynb
|
samuelshaner/openmc
|
mit
|
0. Motivation
The typical economics graduate student places great faith in the analytical mathematical tools that he or she was taught as an undergraduate. In particular this student is likely under the impression that virtually all economic models have closed-form solutions. At best the typical student believes that if he or she were to encounter an economic model without a closed-form solution, then simplifying assumptions could be made that would render the model analytically tractable without sacrificing important economic content.
The typical economics student is, of course, wrong about the general existence of closed-form solutions to economic models. In fact the opposite is true: most economic models, particularly dynamic, non-linear models with meaningful constraints (i.e., most any interesting model), will fail to have an analytic solution. The objective of this notebook is to demonstrate this fact, and thereby motivate the use of numerical methods in economics more generally, using the Solow model of economic growth.
Economics graduate students are very familiar with the Solow growth model. For many students, the Solow model will have been one of the first macroeconomic models taught to them as undergraduates. Indeed, Greg Mankiw's Macroeconomics, the dominant macroeconomics textbook for first and second year undergraduates, devotes two full chapters to motivating and deriving the Solow model. The first few chapters of David Romer's Advanced Macroeconomics, one of the most widely used final year undergraduate and first-year graduate macroeconomics textbook, are also devoted to the Solow growth model and its descendants.
0.1 The basic Solow growth model
The Solow model can be reduced down to a single non-linear differential equation and associated initial condition describing the time evolution of capital stock (per unit effective labor), $k(t)$.
$$ \dot{k}(t) = sf(k(t)) - (n + g + \delta)k(t),\ k(t) = k_0 \tag {0.1.1} $$
The parameter $0 < s < 1$ is the fraction of output invested and the parameters $n, g, \delta$ are the rates of population growth, technological progress, and depreciation of physical capital. The intensive form of the production function $f$ is assumed to be strictly concave with
$$ f(0) = 0,\ lim_{k\rightarrow 0}\ f' = \infty,\ lim_{k\rightarrow \infty}\ f' = 0. \tag{0.1.2} $$
A common choice for the function $f$ which satisfies the above conditions is known as the Cobb-Douglas production function.
$$ f(k(t)) = k(t)^{\alpha} $$
Assuming a Cobb-Douglas functional form for $f$ also makes the model analytically tractable (and thus contributes to the typical economics student's belief that all such models "must" have an analytic solution). Sato 1963 showed that the solution to the model under the assumption of Cobb-Douglas production is
$$ k(t) = \Bigg[\bigg(\frac{s}{n+g+\delta}\bigg)\bigg(1 - e^{-(n+g+\delta)(1-\alpha)t}\bigg)+ k_0^{1-\alpha}e^{-(n+g+\delta)(1-\alpha)t}\Bigg]^{\frac{1}{1-\alpha}}. \tag{0.1.3} $$
A notable property of the Solow model with Cobb-Douglas production is that the model predicts that the shares of real income going to capital and labor should be constant. Denoting capital's share of income as $\alpha_K(k)$, the model predicts that
$$ \alpha_K(k(t)) \equiv \frac{\partial \ln f(k(t))}{\partial \ln k(t)} = \alpha \tag{0.1.4} $$
Note that the prediction is that factor shares are constant along both the balanced growth path and during the disequilibrium transient (i.e., the period in which $k(t)$ is varying). We can test this implication of the model using data from the newest version of the Penn World Tables (PWT).
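Sato's closed-form solution (0.1.3) is straightforward to code up directly; with illustrative parameter values (chosen here only for demonstration) the trajectory converges to the steady state $k^* = (s/(n+g+\delta))^{1/(1-\alpha)}$:

```python
import numpy as np

def solow_analytic(t, k0, s, n, g, delta, alpha):
    """Sato's closed-form solution (0.1.3) under Cobb-Douglas production."""
    lam = (n + g + delta) * (1 - alpha)
    return ((s / (n + g + delta)) * (1 - np.exp(-lam * t))
            + k0**(1 - alpha) * np.exp(-lam * t))**(1 / (1 - alpha))

# Illustrative (made-up) parameter values
params = dict(s=0.15, n=0.01, g=0.02, delta=0.05, alpha=1/3)
k_star = (params['s'] / (params['n'] + params['g'] + params['delta']))**(1 / (1 - params['alpha']))

# For large t the solution approaches the steady state k*
print(solow_analytic(1000.0, 0.5, **params), k_star)
```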
|
import pypwt
pwt = pypwt.load_pwt_data()
fig, ax = plt.subplots(1, 1, figsize=(8,6))
for ctry in pwt.major_axis:
tmp_data = pwt.major_xs(ctry)
tmp_data.labsh.plot(color='gray', alpha=0.5)
# plot some specific countries
pwt.major_xs('USA').labsh.plot(color='blue', ax=ax, label='USA')
pwt.major_xs('IND').labsh.plot(color='green', ax=ax, label='IND')
pwt.major_xs('CHN').labsh.plot(color='orange', ax=ax, label='CHN')
# plot global average
avg_labor_share = pwt.labsh.mean(axis=0)
avg_labor_share.plot(color='r', ax=ax)
ax.set_title("Labor's share has been far from constant!",
fontsize=20, family='serif')
ax.set_xlabel('Year', family='serif', fontsize=15)
ax.set_ylabel('Labor share of income', family='serif', fontsize=15)
ax.set_ylim(0, 1)
plt.show()
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
From the above figure it is clear that the prediction of constant factor shares is strongly at odds with the empirical data for most countries. Labor's share of real GDP has been declining, on average, for much of the post-war period. For many countries, such as India, China, and South Korea, the fall in labor's share has been dramatic. Note also that the observed trends in factor shares are inconsistent with an economy being on its long-run balanced growth path.
0.2 A more general Solow growth model
While the data clearly reject the Solow model with Cobb-Douglas production, they are not inconsistent with the Solow model in general. A simple generalization of the Cobb-Douglas production function, known as the constant elasticity of substitution (CES) function:
$$ f(k(t)) = \bigg[\alpha k(t)^{\rho} + (1-\alpha)\bigg]^{\frac{1}{\rho}} $$
where $-\infty < \rho < 1$ is the elasticity of substitution between capital and effective labor in production is capable of generating the variable factor shares observed in the data. Note that
$$ \lim_{\rho\rightarrow 0} f(k(t)) = k(t)^{\alpha} $$
and thus the CES production function nests the Cobb-Douglas functional form as a special case.
To see that the CES production function also generates variable factor shares note that
$$ \alpha_K(k(t)) \equiv \frac{\partial \ln f(k(t))}{\partial \ln k(t)} = \frac{\alpha k(t)^{\rho}}{\alpha k(t)^{\rho} + (1 - \alpha)} $$
which varies with $k(t)$.
This seemingly simple generalization of the Cobb-Douglas production function, which is necessary in order for the Solow model to generate variable factor shares (an economically important feature of the post-war growth experience in most countries), renders the Solow model analytically intractable. To make progress solving a Solow growth model with CES production one needs to resort to computational methods.
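A quick numerical check of the formula for $\alpha_K(k)$ above (with made-up parameter values) confirms both properties: near the Cobb-Douglas limit $\rho \rightarrow 0$ the share is constant at $\alpha$, while away from it the share varies with $k$:

```python
import numpy as np

def capital_share_ces(k, alpha, rho):
    # alpha_K(k) = alpha * k**rho / (alpha * k**rho + (1 - alpha))
    return alpha * k**rho / (alpha * k**rho + (1 - alpha))

alpha = 0.33
k = np.array([0.5, 1.0, 2.0, 4.0])

# rho close to 0 (Cobb-Douglas limit): the share is constant at alpha...
print(capital_share_ces(k, alpha, 1e-10))
# ...but for rho = 0.5 the share is increasing in k
print(capital_share_ces(k, alpha, 0.5))
```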
1. Creating an instance of the solow.Model class
We begin by creating an instance of the solow.Model class in the IPython notebook. As always, it is a good idea to read the docstrings...
|
solowpy.Model?
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
More details on on how to create instances of the solow.Model class can be found in the Getting started notebook in the solowPy repository.
2. Finding the steady state
Traditionally, most analysis of the Solow model focuses almost exclusively on the long run steady state of the model. Recall that the steady state of the Solow model is the value of capital stock (per unit effective labor) that solves
$$ 0 = sf(k^*) - (g + n + \delta)k^*. \tag{2.0.1} $$
In words: in the long-run, capital stock (per unit effective labor) converges to the value that balances actual investment,
$$sf(k),$$
with effective depreciation,
$$(g + n + \delta)k.$$
Given the assumption made about the aggregate production technology, $F$, and its intensive form, $f$, there is always a unique value $k^* >0$ satisfying equation 2.0.1.
Although it is trivial to derive an analytic expression for the long-run equilibrium of the Solow model for most intensive production functions, the Solow model serves as a good illustrative case for various numerical methods for solving non-linear equations.
The solowpy.Model.find_steady_state method provides a simple interface to the various 1D root finding routines available in scipy.optimize and uses them to solve the non-linear equation 2.0.1. To see the list of currently supported methods, check out the docstring for the Model.find_steady_state method...
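The idea behind the `'bisect'` method can be sketched in a few lines — this is not the solowPy implementation, just a minimal bisection solver applied to equation 2.0.1 with Cobb-Douglas production and made-up parameter values, where the root is also known in closed form:

```python
def find_steady_state_bisect(f, a, b, tol=1e-12):
    # Classic bisection: f(a) and f(b) must bracket the root
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# Equation (2.0.1) with Cobb-Douglas production f(k) = k**alpha
s, n, g, delta, alpha = 0.15, 0.01, 0.02, 0.05, 1/3
kdot = lambda k: s * k**alpha - (n + g + delta) * k
k_star = find_steady_state_bisect(kdot, 1e-6, 1e6)
print(k_star)  # matches (s/(n+g+delta))**(1/(1-alpha)), ~2.5674
```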
|
solowpy.Model.find_steady_state?
k_star, result = ces_model.find_steady_state(1e-6, 1e6, method='bisect', full_output=True)
print("The steady-state value is {}".format(k_star))
print("Did the bisection algorithm coverge? {}".format(result.converged))
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
More details on on how to the various methods of the solow.Model class for finding the model's steady state can be found in the accompanying Finding the steady state notebook in the solowPy repository.
3. Graphical analysis using Matplotlib and IPython widgets
Graphical analysis is an important pedagogic and research technique for understanding the behavior of the Solow (or really any!) model and as such the solow.Model class has a number of useful, built-in plotting methods.
Static example: the classic Solow diagram
|
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.plot_solow_diagram(ax)
fig.show()
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
There are a number of additional plotting methods available (all of which can be turned into interactive plots using IPython widgets). See the Graphical analysis notebook in the solowPy repository.
4. Solving the Solow model
Solving the Solow model requires efficiently and accurately approximating the solution to a non-linear ordinary differential equation (ODE) with a given initial condition (i.e., an non-linear initial value problem).
4.1 Solow model as an initial value problem
The Solow model with can be formulated as an initial value problem (IVP) as follows.
$$ \dot{k}(t) = sf(k(t)) - (g + n + \delta)k(t),\ t\ge t_0,\ k(t_0) = k_0 \tag{4.1.0} $$
The quantecon library has its own module quantecon.ivp for solving initial value problems of this form using finite difference methods. Upon creation of our instance of the solow.Model class, an instance of the quantecon.ivp.IVP class was created and stored as an attribute of our model...
|
ces_model.ivp?
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
Finite-difference methods only return a discrete approximation to the continuous function $k(t)$. To get a continuous approximation of the solution we can combine finite-difference methods with B-spline interpolation using the interpolate method of the ivp attribute.
|
ces_model.ivp.interpolate?
# interpolate!
ti = np.linspace(0, 100, 1000)
interpolated_soln = ces_model.ivp.interpolate(numeric_soln, ti, k=3)
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
Accuracy of our numerical methods
When doing numerical work it is important to understand the accuracy of the methods that you are using to approximate the solution to your model. Typically one assesses the accuracy of a solution method by computing and evaluating some residual function:
$$ R(k; \theta) = sf(\hat{k}(\theta)) - (g + n + \delta)\hat{k}(\theta) - \dot{\hat{k}}(\theta) $$
where $\hat{k}(\theta)$ is our computed solution to the original differential equation. We can assess the accuracy of our finite-difference methods by plotting the residual function for our approximate solution using the compute_residual method of the ivp attribute.
|
# compute the residual...
ti = np.linspace(0, 100, 1000)
residual = ces_model.ivp.compute_residual(numeric_soln, ti, k=3)
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
For more details behind the numerical methods used in this section see the the Solving the Solow model notebook in the solowPy repository.
5. Impulse response functions
Impulse response functions (IRFs) are a standard tool for analyzing the short run dynamics of dynamic macroeconomic models, such as the Solow growth model, in response to an exogenous shock. The solow.impulse_response.ImpulseResponse class has several attributes and methods for generating and analyzing impulse response functions.
Example: Impact of a change in the savings rate
One can analyze the impact of a doubling of the savings rate on model variables as follows.
|
# 50% increase in the current savings rate...
ces_model.irf.impulse = {'s': 1.5 * ces_model.params['s']}
# in efficiency units...
ces_model.irf.kind = 'efficiency_units'
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output')
plt.show()
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
For more details and examples see the accompanying Impulse response function notebook in the solowPy repository.
6. The Solow model, declining labor's share, and secular stagnation
Recently there has been much discussion about the reasons for the decline of labor's share of income; Elsby (2013) has a nice paper that looks at this decline in the U.S. There has also been much debate about whether or not developed economies are experiencing some sort of secular stagnation more generally.
|
def awesome_interactive_plot(model, iso3_code, **params):
"""Interactive widget for the my awesome plot."""
# extract the relevant data
tmp_data = pwt.major_xs(iso3_code)
actual_labor_share = tmp_data.labsh.values
actual_capital_share = 1 - tmp_data.labsh
output = tmp_data.rgdpna
capital = tmp_data.rkna
labor = tmp_data.emp
# need to update params
model.params.update(params)
# get new initial condition
implied_technology = model.evaluate_solow_residual(output, capital, labor)
k0 = tmp_data.rkna[0] / (implied_technology[0] * labor[0])
# finite difference approximation
T = actual_labor_share.size
soln = model.ivp.solve(t0, k0, T=T, integrator='dopri5')
# get predicted labor share
predicted_capital_share = model.evaluate_output_elasticity(soln[:,1])
predicted_labor_share = 1 - predicted_capital_share
# get predicted output per unit labor
predicted_intensive_output = model.evaluate_intensive_output(soln[:,1])
technology = implied_technology[0] * np.exp(ces_model.params['g'] * soln[:,0])
predicted_output_per_unit_labor = predicted_intensive_output * technology
# make the plots!
fig, axes = plt.subplots(1, 2, figsize=(12,6))
axes[0].plot(soln[:,0], predicted_labor_share, 'b')
axes[0].plot(soln[:,0], predicted_capital_share, 'g')
axes[0].plot(actual_labor_share)
axes[0].plot(actual_capital_share)
axes[0].set_xlabel('Time, $t$', fontsize=15, family='serif')
axes[0].set_ylim(0, 1)
axes[0].set_title('Labor share of income in {}'.format(iso3_code),
fontsize=20, family='serif')
axes[0].legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
axes[1].set_xlabel('Time, $t$', fontsize=15, family='serif')
axes[1].set_title('Growth rate of Y/L in {}'.format(iso3_code),
fontsize=20, family='serif')
axes[1].legend(loc=0, frameon=False, bbox_to_anchor=(1.0, 1.0))
axes[1].plot(soln[1:,0], np.diff(np.log(predicted_output_per_unit_labor)),
'b', markersize=3.0)
axes[1].plot(np.log(output / labor).diff().values)
# define some widgets for the various parameters
technology_progress_widget = FloatSliderWidget(min=-0.05, max=0.05, step=5e-3, value=0.01)
population_growth_widget = FloatSliderWidget(min=-0.05, max=0.05, step=5e-3, value=0.01)
savings_widget = FloatSliderWidget(min=eps, max=1-eps, step=5e-3, value=0.2)
output_elasticity_widget = FloatSliderWidget(min=eps, max=1.0, step=5e-3, value=0.15)
depreciation_widget = FloatSliderWidget(min=eps, max=1-eps, step=5e-3, value=0.02)
elasticity_substitution_widget = FloatSliderWidget(min=eps, max=10.0, step=1e-2, value=2.0+eps)
# create the widget!
interact(awesome_interactive_plot,
model=fixed(ces_model),
iso3_code='USA',
g=technology_progress_widget,
n=population_growth_widget,
s=savings_widget,
alpha=output_elasticity_widget,
delta=depreciation_widget,
sigma=elasticity_substitution_widget,
)
|
examples/0 Motivation.ipynb
|
solowPy/solowPy
|
mit
|
We run the simulation up to a fixed number of iterations, controlled by the variable niter, storing the values of the EM fields $E_x$ and $E_z$ at every timestep so we can analyze them later:
|
import numpy as np
niter = 4000
Ex_t = np.zeros((niter,sim.nx))
Ez_t = np.zeros((niter,sim.nx))
tmax = niter * sim.dt
print("\nRunning simulation up to t = {:g} ...".format(tmax))
while sim.t <= tmax:
print('n = {:d}, t = {:g}'.format(sim.n,sim.t), end = '\r')
Ex_t[sim.n,:] = sim.emf.Ex
Ez_t[sim.n,:] = sim.emf.Ez
sim.iter()
print("\nDone.")
|
python/Electron Plasma Waves.ipynb
|
zambzamb/zpic
|
agpl-3.0
|
Electrostatic / Electromagnetic Waves
As discussed above, the simulation was initialized with a broad spectrum of waves through the thermal noise of the plasma. We can see the noisy fields in the plot below:
|
import matplotlib.pyplot as plt
iter = sim.n//2
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ex_t[iter,:], label = "$E_x$")
plt.plot(np.linspace(0, sim.box, num = sim.nx),Ez_t[iter,:], label = "$E_z$")
plt.grid(True)
plt.xlabel("$x_1$ [$c/\omega_n$]")
plt.ylabel("$E$ field []")
plt.title("$E_x$, $E_z$, t = {:g}".format( iter * sim.dt))
plt.legend()
plt.show()
|
python/Electron Plasma Waves.ipynb
|
zambzamb/zpic
|
agpl-3.0
|
Electrostatic Plasma Waves
To analyze the dispersion relation of the electrostatic plasma waves we use a 2D (Fast) Fourier transform of $E_x(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
|
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ex_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ex_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1.0),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
k = np.linspace(-k_max, k_max, num = 512)
w=np.sqrt(1 + 3 * v_the**2 * k**2)
plt.plot( k, w, label = "Electron Plasma Wave", color = 'r',ls = '-.' )
plt.ylim(0,2)
plt.xlim(0,k_max)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("Wave dispersion relation")
plt.legend()
plt.show()
|
python/Electron Plasma Waves.ipynb
|
zambzamb/zpic
|
agpl-3.0
|
Electromagnetic Plasma Waves
To analyze the dispersion relation of the electromagnetic plasma waves we use a 2D (Fast) Fourier transform of the $E_z(x,t)$ field values that we stored during the simulation. The plot below shows the obtained power spectrum alongside the theoretical prediction.
Since the dataset is not periodic along $t$ we apply a windowing technique (Hanning) to the dataset to lower the background spectrum, and make the dispersion relation more visible.
|
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# (omega,k) power spectrum
win = np.hanning(niter)
for i in range(sim.nx):
Ez_t[:,i] *= win
sp = np.abs(np.fft.fft2(Ez_t))**2
sp = np.fft.fftshift( sp )
k_max = np.pi / sim.dx
omega_max = np.pi / sim.dt
plt.imshow( sp, origin = 'lower', norm=colors.LogNorm(vmin = 1e-5, vmax = 0.01),
extent = ( -k_max, k_max, -omega_max, omega_max ),
aspect = 'auto', cmap = 'gray')
k = np.linspace(-k_max, k_max, num = 512)
w=np.sqrt(1 + k**2)
plt.plot( k, w, label = "$\omega^2 = \omega_p^2 + k^2 c^2$", color = 'r', ls = '-.' )
plt.ylim(0,k_max)
plt.xlim(0,k_max)
plt.xlabel("$k$ [$\omega_n/c$]")
plt.ylabel("$\omega$ [$\omega_n$]")
plt.title("EM-wave dispersion relation")
plt.legend()
plt.show()
|
python/Electron Plasma Waves.ipynb
|
zambzamb/zpic
|
agpl-3.0
|
Theoretical background
In addition to the GHZ states, the generalized W states, as proposed by Dür, Vidal and Cirac in 2000, are another interesting class of multi-qubit entangled states.
A generalized $n$-qubit W state can be written as:
$$ |W_{n}\rangle \; = \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |01...0\rangle \: +...+ |00...1\rangle \:) $$
Here we present circuits that deterministically produce three-, four-, and five-qubit W states, respectively.
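As a quick numerical illustration of the definition above, here is a plain-NumPy sketch (independent of the Qiskit circuits below; the helper name `w_state` is ours) that builds the $|W_n\rangle$ state vector:

```python
import numpy as np

def w_state(n):
    """State vector of |W_n> in the computational basis (qubit 1 = most significant bit)."""
    psi = np.zeros(2**n)
    for k in range(n):
        psi[1 << k] = 1.0  # one amplitude per basis state with exactly one '1'
    return psi / np.sqrt(n)

w3 = w_state(3)  # amplitudes 1/sqrt(3) on |001>, |010> and |100>
```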
A 2016 paper by Firat Diker proposes an algorithm in the form of nested boxes allowing the deterministic construction of W states of any size $n$. The experimental setup proposed by the author is essentially an optical assembly including half-wave plates. The setup includes $n-1$ so-called two-qubit $F$ gates (not to be confused with the three-qubit Fredkin gate).
It is possible to construct the equivalent of such an $F$ gate on a superconducting quantum computing system using transmon qubits in ground and excited states. An $F_{k,\, k+1}$ gate with control qubit $q_{k}$ and target qubit $q_{k+1}$ is obtained here by:
1. First, a rotation about the Y-axis, $R_{y}(-\theta_{k})$, applied on $q_{k+1}$
2. Then, a controlled-Z gate $cZ$ (in either direction) between the two qubits $q_{k}$ and $q_{k+1}$
3. Finally, a rotation about the Y-axis, $R_{y}(\theta_{k})$, applied on $q_{k+1}$
The matrix representations of a $R_{y}(\theta)$ rotation and of the $cZ$ gate can be found in the "Quantum gates and linear algebra" Jupyter notebook of the Qiskit tutorial.
The value of $\theta_{k}$ depends on $n$ and $k$ following the relationship:
$$\theta_{k} = \arccos \left(\sqrt{\frac{1}{n-k+1}}\right) $$
Note that this formula for $\theta$ is different from the one mentioned in the Diker's paper. This is due to the fact that we use here Y-axis rotation matrices instead of $W$ optical gates composed of half-wave plates.
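As a sanity check on this formula (the helper name `f_angle` is ours, not from the notebook), the angles for $n = 3$ come out as $\theta_1 = \arccos(1/\sqrt{3})$ and $\theta_2 = \pi/4$:

```python
import numpy as np

def f_angle(n, k):
    # theta_k = arccos(sqrt(1 / (n - k + 1))) for the F_{k,k+1} gate
    return np.arccos(np.sqrt(1.0 / (n - k + 1)))

angles = [f_angle(3, k) for k in (1, 2)]
```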
At the beginning, the qubits are placed in the state: $|\varphi_{0} \rangle \, = \, |10...0 \rangle$.
This is followed by the application of $n-1$ successive $F$ gates.
$$|\varphi_{1}\rangle = F_{n-1,\,n}\, ... \, F_{k,\, k+1}\, ... \, F_{2,\, 3} \,F_{1,\, 2}\,|\varphi_{0} \rangle \,= \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |11...0\rangle \: +...+ |11...1\rangle \:) $$
Then, $n-1$ $cNOT$ gates are applied. The final circuit is:
$$|W_{n}\rangle \,= cNOT_{n,\, n-1}\, cNOT_{n-1,\, n-2}...cNOT_{k,\, k-1}...cNOT_{2,\, 1}\,\,|\varphi_{1} \rangle$$
Let's now embark on the adventure of deterministically producing W states, on a simulator or in the real world!
Now you will have the opportunity to choose your backend.
(If you run the following cells in sequence, you will end with the local simulator, which is a good choice for a first trial).
|
# Choice of the backend
# using local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# using IBMQ qasm simulator
# backend = IBMQ.get_backend('ibmq_qasm_simulator')
# using real device
# backend = least_busy(IBMQ.backends(simulator=False))
flag_qx2 = True
if backend.name() == 'ibmqx4':
flag_qx2 = False
print("Your choice for the backend is: ", backend, "flag_qx2 is: ", flag_qx2)
# Here are two useful routines
# Define a F_gate
def F_gate(circ,q,i,j,n,k) :
theta = np.arccos(np.sqrt(1/(n-k+1)))
circ.ry(-theta,q[j])
circ.cz(q[i],q[j])
circ.ry(theta,q[j])
circ.barrier(q[i])
# Define the cxrv gate which uses reverse CNOT instead of CNOT
def cxrv(circ,q,i,j) :
circ.h(q[i])
circ.h(q[j])
circ.cx(q[j],q[i])
circ.h(q[i])
circ.h(q[j])
circ.barrier(q[i],q[j])
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Three-qubit W state, step 1
In this section, the production of a three qubit W state will be examined step by step.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |100\rangle$.
The entire circuit corresponds to:
$$ |W_{3}\rangle \,=\, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{2,3}\, \, F_{1,2}\, \, |\varphi_{0} \rangle $$
Run the following cell to see what happens when we first apply $F_{1,2}$.
|
# 3-qubit W state Step 1
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
for i in range(3) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Three-qubit W state: adding step 2
In the previous step you obtained a histogram compatible with the following state:
$$ |\varphi_{1} \rangle= F_{1,2}\, |\varphi_{0} \rangle\,=F_{1,2}\, \,|1 0 0 \rangle=\frac{1}{\sqrt{3}} \: |1 0 0 \rangle \: + \sqrt{\frac{2}{3}} \: |1 1 0 \rangle $$
NB: Depending on the backend, the ordering of the qubits may be modified, but without consequence for the state finally reached.
We seem far from the ultimate goal.
Run the following circuit to obtain $|\varphi_{2} \rangle =F_{2,3}\, \, |\varphi_{1} \rangle$
|
# 3-qubit W state, first and second steps
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Three-qubit W state, full circuit
In the previous step, we got a histogram compatible with the state:
$$ |\varphi_{2} \rangle = F_{2,3}\, |\varphi_{1} \rangle = F_{2,3}\, \left(\frac{1}{\sqrt{3}} \: |1 0 0 \rangle + \sqrt{\frac{2}{3}} \: |1 1 0 \rangle\right) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle + |1 1 0 \rangle + |1 1 1\rangle) $$
NB: Again, depending on the backend, the ordering of the qubits may be modified, but without consequence for the state finally reached.
It looks like we are nearing the goal.
Indeed, two $cNOT$ gates will make it possible to create a W state.
Run the following cell to see what happens. Did we succeed?
|
# 3-qubit W state
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 21
W_states.cx(q[0],q[1]) # cNOT 32
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Now you get a histogram compatible with the final state $|W_{3}\rangle$ through the following steps:
$$ |\varphi_{3} \rangle = cNOT_{2,1}\, |\varphi_{2} \rangle = cNOT_{2,1}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle + |1 1 0 \rangle + |1 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle + |0 1 0 \rangle + |0 1 1\rangle) $$
$$ |W_{3} \rangle = cNOT_{3,2}\, |\varphi_{3} \rangle = cNOT_{3,2}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle + |0 1 0 \rangle + |0 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle + |0 1 0 \rangle + |0 0 1\rangle) $$
Bingo!
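The whole three-qubit derivation can be double-checked with plain linear algebra. The sketch below (NumPy only; qubit 1 is taken as the most significant bit, a convention chosen for this sketch rather than the device's ordering) reproduces $|W_3\rangle$ exactly:

```python
import numpy as np

def ry(t):
    # single-qubit rotation about the Y-axis
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    # CNOT on an n-qubit register; qubit 1 is the most significant bit
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        j = i ^ (1 << (n - target)) if (i >> (n - control)) & 1 else i
        U[j, i] = 1.0
    return U

I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
theta = lambda n, k: np.arccos(np.sqrt(1.0 / (n - k + 1)))

t1, t2 = theta(3, 1), theta(3, 2)
# F_{k,k+1} = (Ry(theta_k) on q_{k+1}) . CZ . (Ry(-theta_k) on q_{k+1})
F12 = kron(I2, ry(t1), I2) @ kron(CZ, I2) @ kron(I2, ry(-t1), I2)
F23 = kron(I2, I2, ry(t2)) @ kron(I2, CZ) @ kron(I2, I2, ry(-t2))

psi0 = np.zeros(8)
psi0[0b100] = 1.0  # start state |100>
psi = cnot(3, 2) @ cnot(2, 1) @ F23 @ F12 @ psi0  # should equal |W_3>
```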
Four-qubit W state
In this section, the production of a four-qubit W state will be obtained by extending the previous circuit.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |1000\rangle$.
An $F$ gate was added at the beginning of the circuit and a $cNOT$ gate was added before the measurement phase.
The entire circuit corresponds to:
$$ |W_{4}\rangle \,=\, cNOT_{4,3}\, \, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{3,4} \, \, F_{2,3} \, \, F_{1,2} \, \,|\varphi_{0} \rangle \, $$
Run the following circuit and see what happens.
|
# 4-qubit W state
n = 4
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[3]) #start is |1000>
F_gate(W_states,q,3,2,4,1) # Applying F12
F_gate(W_states,q,2,1,4,2) # Applying F23
F_gate(W_states,q,1,0,4,3) # Applying F34
cxrv(W_states,q,2,3) # cNOT 21
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 32
W_states.cx(q[0],q[1]) # cNOT 43
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(4) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 4-qubit ', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 4-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Now, if you used a simulator, you get a histogram clearly compatible with the state:
$$ |W_{4}\rangle \;=\; \frac{1}{2} \: (\:|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \:) $$
If you used a real quantum computer, the histogram columns compatible with a $|W_{4}\rangle$ state are not all among the highest ones. Errors are spreading...
Five-qubit W state
In this section, a five-qubit W state will be obtained, again by extending the previous circuit.
In this circuit, the starting state is now: $ |\varphi_{0} \rangle = |10000\rangle$.
An $F$ gate was added at the beginning of the circuit and an additional $cNOT$ gate was added before the measurement phase.
$$ |W_{5}\rangle = cNOT_{5,4} cNOT_{4,3} cNOT_{3,2} cNOT_{2,1} F_{4,5} F_{3,4} F_{2,3} F_{1,2} |\varphi_{0} \rangle $$
Run the following cell and see what happens.
|
# 5-qubit W state
n = 5
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[4]) #start is |10000>
F_gate(W_states,q,4,3,5,1) # Applying F12
F_gate(W_states,q,3,2,5,2) # Applying F23
F_gate(W_states,q,2,1,5,3) # Applying F34
F_gate(W_states,q,1,0,5,4) # Applying F45
W_states.cx(q[3],q[4]) # cNOT 21
cxrv(W_states,q,2,3) # cNOT 32
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 43
W_states.cx(q[0],q[1]) # cNOT 54
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(5) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 5-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 5-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
|
community/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb
|
antoniomezzacapo/qiskit-tutorial
|
apache-2.0
|
Load open source data published by the Global Energy Observatory (GEO). As you might know, this is not the original format of the database but the standardized format of powerplantmatching.
|
geo = pm.data.GEO()
geo.head()
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
Load the data published by the ENTSOE which has the same format as the geo data.
|
entsoe = pm.data.ENTSOE()
entsoe.head()
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
Data Inspection
Whereas various inspection options are provided by the pandas package, some more powerplant-specific methods are available via the accessor 'powerplant'. It gives you a convenient way to inspect and manipulate the data:
|
geo.powerplant.plot_map();
geo.powerplant.lookup().head(20).to_frame()
geo.powerplant.fill_missing_commyears().head()
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
Of course the pandas functions are also very convenient:
|
print('Total capacity of GEO is: \n {} MW \n'.format(geo.Capacity.sum()));
print('The technology types are: \n {} '.format(geo.Technology.unique()))
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
Incomplete data
None of the open databases is complete so far; each covers only a part of the overall European power plant fleet. We can see the capacity gaps by comparing with the ENTSOE SO&AF statistics.
|
stats = pm.data.Capacity_stats()
pm.plot.fueltype_totals_bar([geo, entsoe, stats], keys=["GEO", "ENTSOE", "Statistics"]);
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
The gaps in both datasets are unmistakable. Stacking both datasets on top of each other would not be a solution, since the intersection of the two sources is too high and the resulting dataset would include many duplicates. A better approach is to merge the incomplete datasets, respecting the intersections and differences of each dataset.
Merging datasets
Before comparing two lists of power plants, we need to make sure that the datasets are on the same level of aggregation. That is, we ensure that all power plant blocks are aggregated to power plant stations.
|
dfs = [geo.powerplant.aggregate_units(), entsoe.powerplant.aggregate_units()]
intersection = pm.matching.combine_multiple_datasets(dfs)
intersection.head()
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
The result of the matching process is a multi-indexed dataframe. To bring the matched dataframe into a convenient format, we combine the information of the two sources.
|
intersection = intersection.powerplant.reduce_matched_dataframe()
intersection.head()
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
As you can see in the very last column, we can track which original data entries flowed into each resulting entry.
We can have a look at the capacity statistics
|
pm.plot.fueltype_totals_bar([intersection, stats], keys=["Intersection", 'Statistics']);
combined = intersection.powerplant.extend_by_non_matched(entsoe).powerplant.extend_by_non_matched(geo)
pm.plot.fueltype_totals_bar([combined, stats], keys=["Combined", 'Statistics']);
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
The aggregated capacities roughly match the SO&AF statistics for all conventional power plants.
Processed Data
powerplantmatching comes along with already matched data; this includes data from GEO, ENTSOE, OPSD, CARMA, GPD and ESE (ESE only if you have followed the instructions)
|
m = pm.collection.matched_data()
m.powerplant.plot_map(figsize=(13,13));
pm.plot.fueltype_totals_bar([m, stats], keys=["Processed", 'Statistics']);
pm.plot.factor_comparison([m, stats], keys=['Processed', 'Statistics'])
m.head()
pd.concat([m[m.DateIn.notnull()].groupby('Fueltype').DateIn.count(),
m[m.DateIn.isna()].fillna(1).groupby('Fueltype').DateIn.count()],
keys=['DateIn existent', 'DateIn missing'], axis=1)
|
doc/example.ipynb
|
FRESNA/powerplantmatching
|
gpl-3.0
|
In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
|
from cartoframes.auth import set_default_credentials
set_default_credentials('creds.json')
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Note about credentials
For security reasons, we recommend storing your credentials in an external file to prevent publishing them by accident when sharing your notebooks. You can get more information in the section Setting your credentials of the Authentication guide.
<a id='section1'></a>
1. Download all pharmacies in Philadelphia from the Data Observatory
Below is the bounding box of the area of study.
|
dem_bbox = box(-75.229353,39.885501,-75.061124,39.997898)
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
We can get the pharmacies from Pitney Bowes' Consumer Points of Interest dataset. This is a premium dataset, so we first need to check that we are subscribed to it.
Take a look at <a href='#example-access-premium-data-from-the-data-observatory' target='_blank'>this template</a> for more details on how to access and download a premium dataset.
|
Catalog().subscriptions().datasets.to_dataframe()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Download and explore sample
Pitney Bowes POI's are hierarchically classified (levels: trade division, group, class, sub class).
Since we might not know which level can help us identify all pharmacies, we can start by downloading a sample for a smaller area to explore the dataset. For calculating the bounding box we use bboxfinder.
We start by selecting our dataset and taking a quick look at its first 10 rows.
|
dataset = Dataset.get('pb_consumer_po_62cddc04')
dataset.head()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Let's now download a small sample to help us identify which of the four hierarchy variables gives us the pharmacies.
|
sql_query = "SELECT * except(do_label) FROM $dataset$ WHERE ST_IntersectsBox(geom, -75.161723,39.962019,-75.149535,39.968071)"
sample = dataset.to_dataframe(sql_query=sql_query)
sample.head()
sample['TRADE_DIVISION'].unique()
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION G. - RETAIL TRADE', 'GROUP'].unique()
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION G. - RETAIL TRADE', 'CLASS'].unique()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
The class DRUG STORES AND PROPRIETARY STORES is the one we're looking for.
|
sample.loc[sample['CLASS'] == 'DRUG STORES AND PROPRIETARY STORES', 'SUB_CLASS'].unique()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Download all pharmacies in the area of study
|
sql_query = """SELECT * except(do_label)
FROM $dataset$
WHERE CLASS = 'DRUG STORES AND PROPRIETARY STORES'
AND ST_IntersectsBox(geom, -75.229353,39.885501,-75.061124,39.997898)"""
ph_pharmacies = dataset.to_dataframe(sql_query=sql_query)
ph_pharmacies.head()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
The dataset contains different versions of the POIs, tagged by the do_date column. We are only interested in the latest version of each POI.
|
ph_pharmacies = ph_pharmacies.sort_values(by='do_date', ascending=False).groupby('PB_ID').first().reset_index()
ph_pharmacies.shape
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Visualize the dataset
|
Layer(ph_pharmacies,
geom_col='geom',
style=basic_style(opacity=0.75),
popup_hover=popup_element('NAME'))
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
<a id='section2'></a>
2. Calculate catchment areas
In order to know the characteristics of the potential customers of every pharmacy, we assume the majority of their clients live close by. Therefore we will calculate 5-minute-by-car isochrones and take them as the catchment areas.
Note that catchment areas usually depend on whether a store is in the downtown area or in the suburbs, or whether it is reachable on foot or only by car. For this example, we will not make such a distinction between pharmacies, but we strongly encourage you to do so in your own analyses. As an example, here we describe how to calculate catchment areas using human mobility data.
|
iso_service = Isolines()
isochrones_gdf, _ = iso_service.isochrones(ph_pharmacies, [300], mode='car', geom_col='geom')
ph_pharmacies['iso_5car'] = isochrones_gdf.sort_values(by='source_id')['the_geom'].values
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Visualize isochrones
We'll only visualize the first ten isochrones to get a clean visualization.
|
Map([Layer(ph_pharmacies.iloc[:10],
geom_col='iso_5car',
style=basic_style(opacity=0.1),
legends=basic_legend('Catchment Areas')),
Layer(ph_pharmacies.iloc[:10],
geom_col='geom',
popup_hover=popup_element('NAME'),
legends=basic_legend('Pharmacies'))])
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
<a id='section3'></a>
3. Enrichment: Characterize catchment areas
We'll now enrich the pharmacies' catchment areas with demographics, POIs, and consumer spending data.
For the enrichment, we will use the CARTOframes Enrichment class. This class contains the functionality to enrich polygons and points.
Visit CARTOframes Guides for further detail.
|
enrichment = Enrichment()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Demographics
We will use AGS premium data. In particular, we will work with the dataset ags_sociodemogr_f510a947 which contains yearly demographics data from 2019.
Variable selection
Here we will enrich the pharmacies isochrones with:
- Population aged 60+
- Household income
- Household income for population ages 65+
|
Catalog().country('usa').category('demographics').provider('ags').datasets.to_dataframe().head()
dataset = Dataset.get('ags_sociodemogr_f510a947')
dataset.head()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
We explore the variables to identify the ones we're interested in.
Variables in a dataset are uniquely identified by their slug.
|
dataset.variables.to_dataframe().head()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
We'll select:
- Population and population by age variables to identify number of people aged 60+ as a percentage of total population
- Average household income
- Average household income for population aged 65+
|
vars_enrichment = ['POPCY_5e23b8f4', 'AGECY6064_d54c2315', 'AGECY6569_ad369d43', 'AGECY7074_74eb7531',
'AGECY7579_c91cb67', 'AGECY8084_ab1079a8', 'AGECYGT85_a0959a08', 'INCCYMEDHH_b80a7a7b',
'HINCYMED65_37a430a4', 'HINCYMED75_2ebf01e5']
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Isochrone enrichment
|
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=vars_enrichment,
geom_col='iso_5car'
)
ph_pharmacies_enriched.head()
ph_pharmacies = ph_pharmacies_enriched.copy()
ph_pharmacies['pop_60plus'] = ph_pharmacies[['AGECY8084', 'AGECYGT85', 'AGECY6569', 'AGECY7579', 'AGECY7074', 'AGECY6064']].sum(1)
ph_pharmacies.drop(columns=['AGECY8084', 'AGECYGT85', 'AGECY6569', 'AGECY7579', 'AGECY7074', 'AGECY6064'], inplace=True)
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Points of Interest
We will use Pitney Bowes' Consumer Points of Interest premium dataset.
Variable selection
We are interested in knowing how many of the following POIs can be found in each isochrone:
- Beauty shops and beauty salons
- Gyms and other sports centers
These POI's will be considered as an indicator of personal care awareness in a specific area.
The hierarchy classification variable SUB_CLASS allows us to identify beauty shops and salons (BEAUTY SHOPS/BEAUTY SALON) and gyms (MEMBERSHIP SPORTS AND RECREATION CLUBS/CLUB AND ASSOCIATION - UNSPECIFIED).
|
sample.loc[sample['TRADE_DIVISION'] == 'DIVISION I. - SERVICES', 'SUB_CLASS'].unique()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Isochrone enrichment
In order to count only Beauty Shops/Salons and Gyms, we will apply a filter to the enrichment. All filters are applied with an AND-like relationship. This means we need to run two independent enrichment calls, one for the beauty shops/salons and another one for the gyms.
|
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['SUB_CLASS_10243439'],
aggregation='COUNT',
geom_col='iso_5car',
filters={Variable.get('SUB_CLASS_10243439').id : "= 'BEAUTY SHOPS/BEAUTY SALON'"}
)
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'SUB_CLASS_y':'n_beauty_pois'})
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['SUB_CLASS_10243439'],
aggregation='COUNT',
geom_col='iso_5car',
filters={Variable.get('SUB_CLASS_10243439').id : "= 'MEMBERSHIP SPORTS AND RECREATION CLUBS/CLUB AND ASSOCIATION - UNSPECIFIED'"}
)
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'SUB_CLASS':'n_gym_pois'})
ph_pharmacies['n_pois_personal_care'] = ph_pharmacies['n_beauty_pois'] + ph_pharmacies['n_gym_pois']
ph_pharmacies.drop(columns=['n_beauty_pois', 'n_gym_pois'], inplace=True)
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Consumer spending
For consumer spending, we will use AGS premium data. In particular, we will work with the dataset ags_consumer_sp_dbabddfb which contains the latest version of yearly consumer data.
Variable selection
We are interested in spending in:
- Personal care services
- Personal care products
- Health care services
|
dataset = Dataset.get('ags_consumer_sp_dbabddfb')
dataset.variables.to_dataframe().head()
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
The variables we're interested in are:
- XCYHC2 Health care services expenditure
- XCYPC3 Personal care services expenditure
- XCYPC4 Personal care products expenditure
|
Variable.get('XCYHC2_18141567').to_dict()
ph_pharmacies_enriched = enrichment.enrich_polygons(
ph_pharmacies,
variables=['XCYPC3_7d26d739', 'XCYPC4_e342429a', 'XCYHC2_18141567'],
geom_col='iso_5car'
)
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
We rename the new columns to give them a more descriptive name.
|
ph_pharmacies = ph_pharmacies_enriched.rename(columns={'XCYHC2':'health_care_services_exp',
'XCYPC3':'personal_care_services_exp',
'XCYPC4':'personal_care_products_exp'})
ph_pharmacies.head(2)
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
<a id='section4'></a>
4. Dashboard
Finally, with all the data gathered, we will build the dashboard and publish it so we can share it with our client/manager/colleague for them to explore it.
This dashboard allows you to select a range of desired expenditure in care products, people aged 60+, household income, and so forth. Selecting the desired ranges will filter out pharmacies, so that in the end you can identify the target pharmacies for your marketing campaign.
|
cmap = Map(Layer(ph_pharmacies,
geom_col='geom',
style=color_category_style('SIC8_DESCRIPTION', size=4, opacity=0.85, palette='safe', stroke_width=0.15),
widgets=[formula_widget(
'PB_ID',
operation='COUNT',
title='Total number of pharmacies',
description='Keep track of the total amount of pharmacies that meet the ranges selected on the widgets below'),
histogram_widget(
'pop_60plus',
title='Population 60+',
description='Select a range of values to filter',
buckets=15
),
histogram_widget(
'HINCYMED65',
title='Household income 65-74',
buckets=15
),
histogram_widget(
'HINCYMED75',
title='Household income 75+',
buckets=15
),
histogram_widget(
'n_pois_personal_care',
title='Number of personal care POIs',
buckets=15
),
histogram_widget(
'personal_care_products_exp',
title='Expenditure in personal care products ($)',
buckets=15
)],
legends=color_category_legend(
title='Pharmacies',
description='Type of store'),
popup_hover=[popup_element('NAME', title='Name')]
),
viewport={'zoom': 11}
)
cmap
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Publish dashboard
|
cmap.publish('ph_pharmacies_dashboard', password='MY_PASS', if_exists='replace')
|
docs/examples/advanced_use_cases/building_a_dashboard.ipynb
|
CartoDB/cartoframes
|
bsd-3-clause
|
Train Test Split
|
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_in, np.ravel(df_target), test_size=0.30, random_state=101)
|
04-more-supervised.ipynb
|
msadegh97/machine-learning-course
|
gpl-3.0
|
Train the Support Vector Classifier
|
from sklearn.svm import SVC
model = SVC(kernel='rbf')
model.fit(X_train,y_train)
|
04-more-supervised.ipynb
|
msadegh97/machine-learning-course
|
gpl-3.0
|
Woah! Notice that we are classifying everything into a single class! This means our model needs to have its parameters adjusted (it may also help to normalize the data).
We can search for parameters using a GridSearch!
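As a minimal sketch of the normalization point (on a synthetic stand-in dataset, since `df_in`/`df_target` are defined elsewhere in the notebook), scaling inside a pipeline keeps the scaler fit on the training split only:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for df_in / df_target
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=101)

# StandardScaler is fit on the training split only, avoiding test-set leakage
scaled_svc = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
scaled_svc.fit(X_tr, y_tr)
acc = scaled_svc.score(X_te, y_te)
```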
Gridsearch
Finding the right parameters (like what C or gamma values to use) is a tricky task! But luckily, we can be a little lazy and just try a bunch of combinations and see what works best! This idea of creating a 'grid' of parameters and just trying out all the possible combinations is called a Gridsearch. This method is common enough that Scikit-learn has this functionality built in with GridSearchCV! The CV stands for cross-validation, which evaluates each parameter combination on several train/validation splits of the training data.
GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train. The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested.
|
param_grid = {'C': [0.1,1, 10, 100, 1000], 'gamma': [1,0.1,0.01,0.001,0.0001], 'kernel': ['rbf']}
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(SVC(),param_grid,refit=True,verbose=3)
grid.fit(X_train,y_train)
grid.best_params_
grid.best_estimator_
grid_predictions = grid.predict(X_test)
print(confusion_matrix(y_test,grid_predictions))
print(classification_report(y_test,grid_predictions))
|
04-more-supervised.ipynb
|
msadegh97/machine-learning-course
|
gpl-3.0
|
Decision Tree
|
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
treepredictop = dtree.predict(X_test)
print(classification_report(y_test,treepredictop))
features = X_train.columns
features
from IPython.display import Image as image
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn versions
from sklearn.tree import export_graphviz
import pydot
from PIL import Image
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
image(graph[0].create_png())
dtree2 = DecisionTreeClassifier(min_samples_split=50)
dtree2.fit(X_train, y_train)
tree2predictop = dtree2.predict(X_test)
print(classification_report(y_test,tree2predictop))
dot_data = StringIO()
export_graphviz(dtree2, out_file=dot_data,feature_names=features,filled=True,rounded=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
image(graph[0].create_png())
|
04-more-supervised.ipynb
|
msadegh97/machine-learning-course
|
gpl-3.0
|