Model evaluation
Calculate the F1 score given the target vector and the model output. | def calculate_f1_score(y_test, y_output):
print('Calculating F1 score')
tags = ['part-time-job', 'full-time-job', 'hourly-wage', 'salary', 'associate-needed', 'bs-degree-needed',
'ms-or-phd-needed', 'licence-needed', '1-year-experience-needed', '2-4-years-experience-needed',
'5-plus-years-experience-needed', 'supervising-job']
true_positive = np.array([0.0 for _ in tags])
true_negative = np.array([0.0 for _ in tags])
false_positive = np.array([0.0 for _ in tags])
false_negative = np.array([0.0 for _ in tags])
for target, output in zip(y_test, y_output):
for i, tag in enumerate(tags):
if tag in target and tag in output:
true_positive[i] += 1
elif tag not in target and tag not in output:
true_negative[i] += 1
elif tag in target and tag not in output:
false_negative[i] += 1
elif tag not in target and tag in output:
false_positive[i] += 1
else:
raise Exception('Unknown situation - tag: {} target: {} output: {}'.format(tag, target, output))
tags_precision = np.array([0.0 for _ in tags])
tags_recall = np.array([0.0 for _ in tags])
tags_f1_score = np.array([0.0 for _ in tags])
for i, tag in enumerate(tags):
tags_precision[i] = true_positive[i] / (true_positive[i] + false_positive[i])
tags_recall[i] = true_positive[i] / (true_positive[i] + false_negative[i])
tags_f1_score[i] = 2*tags_precision[i]*tags_recall[i] / (tags_precision[i] + tags_recall[i])
min_tags_precision = np.argmin(tags_precision)
min_tags_recall = np.argmin(tags_recall)
min_tags_f1_score = np.argmin(tags_f1_score)
print()
print('{:30s} | {:5s} | {:5s} | {:5s}'.format('Tag', 'Prec.', 'Rec. ', 'F1'))
for i in range(len(tags)):
print('{:30s} | {:.3f} | {:.3f} | {:.3f}'.format(
tags[i], tags_precision[i], tags_recall[i], tags_f1_score[i]))
print()
print('Worst precision:', tags[min_tags_precision])
print('Worst recall:', tags[min_tags_recall])
print('Worst F1 score:', tags[min_tags_f1_score])
print()
precision = np.sum(true_positive) / (np.sum(true_positive) + np.sum(false_positive))
recall = np.sum(true_positive) / (np.sum(true_positive) + np.sum(false_negative))
f1_score = 2*precision*recall / (precision + recall)
print('General:')
print('Precision: {:.3f}'.format(precision))
print('Recall: {:.3f}'.format(recall))
print('F1 score: {:.3f}'.format(f1_score))
return f1_score | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
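The per-tag loop above reduces to a standard micro-averaged F1 once the counts are summed. A minimal self-contained sketch with hypothetical tags (not the notebook's real data) shows the arithmetic:

```python
# Minimal sketch of the micro-averaged F1 computed above, on a tiny
# hypothetical two-tag example (tag names are illustrative only).
def micro_f1(y_true, y_pred, tags):
    tp = fp = fn = 0
    for target, output in zip(y_true, y_pred):
        for tag in tags:
            if tag in target and tag in output:
                tp += 1
            elif tag in output:  # predicted but not in target
                fp += 1
            elif tag in target:  # in target but not predicted
                fn += 1
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

tags = ['salary', 'supervising-job']
y_true = [['salary'], ['salary', 'supervising-job']]
y_pred = [['salary', 'supervising-job'], ['salary']]
# tp=2, fp=1, fn=1 -> precision = recall = F1 = 2/3
print(micro_f1(y_true, y_pred, tags))
```

This mirrors the "General" block of `calculate_f1_score`, which sums true/false positives and negatives over all tags before computing precision and recall.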
Evaluate model with 5-fold cross-validation using the F1 score metric: | from sklearn.model_selection import KFold
from sklearn.metrics import f1_score
scores = []
k_fold = KFold(n_splits=5)
for i, (train, validation) in enumerate(k_fold.split(X)):
X_train, X_validation, y_train, y_validation = X[train], X[validation], y[train], y[validation]
fit_models(models, X_preprocessor, y_preprocessors, X_train, y_train)
y_output = predict_models(models, X_preprocessor, y_preprocessors, X_validation)
score = calculate_f1_score(y_validation, y_output)
scores.append(score)
print('#{0} F1 score: {1:.3f}'.format(i, score))
print()
f1_score = np.mean(scores)
print('Total F1 score: {0:.3f}'.format(f1_score)) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Model usage
Load the data: | def load_test_data(filename):
with open(filename) as fd:
reader = csv.reader(fd, delimiter='\t')
next(reader, None) # ignore header row
X = [row[0] for row in reader]
return np.array(X)
X_train, y_train = load_train_data('data/train.tsv')
X_test = load_test_data('data/test.tsv') | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Train the model with all training data: | fit_models(models, X_preprocessor, y_preprocessors, X_train, y_train) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Predict output from test data: | y_output = predict_models(models, X_preprocessor, y_preprocessors, X_test) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Show some output data: | print(y_output[:10]) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Save output data: | def save_output(filename, output):
with open(filename, 'w') as fd:
fd.write('tags\n')
for i, tags in enumerate(output):
fd.write(' '.join(tags))
fd.write('\n')
save_output('data/tags.tsv', y_output) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Save preprocessors and model: | import pickle
def save(filename, obj):
pickle.dump(obj, open(filename, 'wb'))
save('models/X_preprocessor.pickle', X_preprocessor)
save('models/y_preprocessor.pickle', y_preprocessors)
save('models/clf_{0:.3f}_f1_score.pickle'.format(f1_score), models) | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Load saved model | def load(filename):
return pickle.load(open(filename, 'rb'))
models = load('models/clf_0.461_f1_score.pickle')
X_preprocessor = load('models/X_preprocessor.pickle')
y_preprocessors = load('models/y_preprocessor.pickle') | indeed.ipynb | matheusportela/indeed-ml-codesprint | mit |
Create an independence bivariate copula | pv.Bicop() | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Create a Gaussian copula
See help(pv.BicopFamily) for the available families | pv.Bicop(family=pv.BicopFamily.gaussian) | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Create a 90-degree rotated Clayton copula with parameter = 3 | pv.Bicop(family=pv.BicopFamily.clayton, rotation=90, parameters=[3]) | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Create a t copula with correlation of 0.5 and 4 degrees of freedom
and showcase some methods | cop = pv.Bicop(family=pv.BicopFamily.student, rotation=0, parameters=[0.5, 4])
u = cop.simulate(n=10, seeds=[1, 2, 3])
fcts = [cop.pdf, cop.cdf,
cop.hfunc1, cop.hfunc2,
cop.hinv1, cop.hinv2,
cop.loglik, cop.aic, cop.bic]
[f(u) for f in fcts] | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Different ways to fit a copula... | u = cop.simulate(n=1000, seeds=[1, 2, 3])
# Create a new object and set its parameters by fitting afterwards
cop2 = pv.Bicop(pv.BicopFamily.student)
cop2.fit(data=u)
print(cop2)
# Otherwise, first define an object to control the fit:
# - pv.FitControlsBicop objects store the controls
# - here, we only restrict the parametric family
# - see help(pv.FitControlsBicop) for more details
# Then, create a copula from the data
controls = pv.FitControlsBicop(family_set=[pv.BicopFamily.student])
print(controls)
cop2 = pv.Bicop(data=u, controls=controls)
print(cop2) | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Similarly, when the family is unknown,
there are also two ways to do model selection... | # Create a new object and select both its family and parameters afterwards
cop3 = pv.Bicop()
cop3.select(data=u)
print(cop3)
# Or create directly from data
cop3 = pv.Bicop(data=u)
print(cop3) | examples/bivariate_copulas.ipynb | vinecopulib/pyvinecopulib | mit |
Neutral atom device class
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/educators/neutral_atom"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This tutorial provides an introduction to making circuits that are compatible with neutral atom devices.
Neutral atom devices implement quantum gates in one of two ways. One method is by hitting the entire qubit array with microwaves to simultaneously act on every qubit. This method implements global $XY$ gates which take up to $100$ microseconds to perform. Alternatively, we can shine laser light on some fraction of the array. Gates of this type typically take around $1$ microsecond to perform. This method can act on one or more qubits at a time up to some limit dictated by the available laser power and the beam steering system used to address the qubits. Each category in the native gate set has its own limit, discussed more below. | try:
import cirq
except ImportError:
print("installing cirq...")
!pip install cirq --quiet
import cirq
print("installed cirq.")
from math import pi | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Defining a NeutralAtomDevice
To define a NeutralAtomDevice, we specify
The set of qubits in the device.
The maximum duration of gates and measurements.
max_parallel_z: The maximum number of single qubit $Z$ rotations that can be applied in parallel.
max_parallel_xy: The maximum number of single qubit $XY$ rotations that can be applied in parallel.
max_parallel_c: The maximum number of atoms that can be affected by controlled gates simultaneously.
Note that max_parallel_c must be less than or equal to the minimum of max_parallel_z and max_parallel_xy.
control_radius: The maximum allowed distance between atoms acted on by controlled gates.
We show an example of defining a NeutralAtomDevice below. | """Defining a NeutralAtomDevice."""
# Define milliseconds and microseconds for convenience.
ms = cirq.Duration(nanos=10**6)
us = cirq.Duration(nanos=10**3)
# Create a NeutralAtomDevice
neutral_atom_device = cirq.NeutralAtomDevice(
qubits=cirq.GridQubit.rect(2, 3),
measurement_duration=5 * ms,
gate_duration=100 * us,
max_parallel_z=3,
max_parallel_xy=3,
max_parallel_c=3,
control_radius=2
) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Note that all above arguments are required to instantiate a NeutralAtomDevice. The example device above has the following properties:
The device is defined on a $2 \times 3$ grid of qubits.
Measurements take $5$ milliseconds.
Gates may take as long as $100$ microseconds if we utilize global microwave gates. Otherwise, a more reasonable bound would be $1$ microsecond.
A maximum of $3$ qubits may be simultaneously acted on by any gate category (max_parallel_c = 3).
Controlled gates have next-nearest neighbor connectivity (control_radius = 2).
We can see some properties of the device as follows. | """View some properties of the device."""
# Display the neutral atom device.
print("Neutral atom device:", neutral_atom_device, sep="\n")
# Get the neighbors of a qubit.
qubit = cirq.GridQubit(0, 1)
print(f"\nNeighbors of qubit {qubit}:")
print(neutral_atom_device.neighbors_of(qubit)) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Native gate set
The gates supported by the NeutralAtomDevice class can be placed into three categories:
Single-qubit rotations about the $Z$ axis.
Single-qubit rotations about an arbitrary axis in the $X$-$Y$ plane. We refer to these as $XY$ gates in this tutorial.
Controlled gates: CZ, CNOT, CCZ, and CCNOT (TOFFOLI).
Any rotation angle is allowed for single-qubit rotations. Some examples of valid single-qubit rotations are shown below. | # Examine metadata gateset info.
for gate_family in neutral_atom_device.metadata.gateset.gates:
print(gate_family)
print('-' * 80)
"""Examples of valid single-qubit gates."""
# Single qubit Z rotations with any angle are valid.
neutral_atom_device.validate_gate(cirq.rz(pi / 5))
# Single qubit rotations about the X-Y axis with any angle are valid.
neutral_atom_device.validate_gate(
cirq.PhasedXPowGate(phase_exponent=pi / 3, exponent=pi / 7)
) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
A Hadamard gate is invalid because it is a rotation in the $X$-$Z$ plane instead of the $X$-$Y$ plane. | """Example of an invalid single-qubit gate."""
invalid_gate = cirq.H
try:
neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
print(f"As expected, {invalid_gate} is invalid!", e) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
For controlled gates, the rotation must be a multiple of $\pi$ due to the physical implementation of the gates. In Cirq, this means the exponent of a controlled gate must be an integer. The next cell shows two examples of valid controlled gates. | """Examples of valid multi-qubit gates."""
# Controlled gates with integer exponents are valid.
neutral_atom_device.validate_gate(cirq.CNOT)
# Controlled NOT gates with two controls are valid.
neutral_atom_device.validate_gate(cirq.TOFFOLI) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Any controlled gate with non-integer exponent is invalid. | """Example of an invalid controlled gate."""
invalid_gate = cirq.CNOT ** 1.5
try:
neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
print(f"As expected, {invalid_gate} is invalid!", e) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Multiple controls are allowed as long as every pair of atoms (qubits) acted on by the controlled gate are close enough to each other. We can see this by using the validate_operation (or validate_circuit) method, as follows. | """Examples of valid and invalid multi-controlled gates."""
# This TOFFOLI is valid because all qubits involved are close enough to each other.
valid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1), cirq.GridQubit(0, 2))
neutral_atom_device.validate_operation(valid_toffoli)
# This TOFFOLI is invalid because all qubits involved are not close enough to each other.
invalid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0), cirq.GridQubit(0, 2))
try:
neutral_atom_device.validate_operation(invalid_toffoli)
except ValueError as e:
print(f"As expected, {invalid_toffoli} is invalid!", e) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
NeutralAtomDevice does not currently support gates with more than two controls, although these are in principle allowed by the physical realizations. | """Any gate with more than two controls is invalid."""
invalid_gate = cirq.ControlledGate(cirq.TOFFOLI)
try:
neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
print(f"As expected, {invalid_gate} is invalid!", e) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Finally, we note that the duration of any operation can be determined via the duration_of method. | """Example of getting the duration of a valid operation."""
neutral_atom_device.duration_of(valid_toffoli) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Moment and circuit rules
In addition to consisting of valid operations as discussed above, valid moments on a NeutralAtomDevice must satisfy the following criteria:
Only max_parallel_c gates of the same category may be performed in the same moment.
All instances of gates in the same category in the same moment must be identical.
Controlled gates cannot be applied in parallel with other gate types.
Physically, this is because controlled gates make use of all types of light used to implement gates.
Qubits acted on by different controlled gates in parallel must be farther apart than the control_radius.
Physically, this is so that the entanglement mechanism doesn't cause the gates to interfere with one another.
All measurements must be terminal.
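Rule 4 above is, in essence, a pairwise distance check between qubits belonging to different controlled gates. A rough plain-Python sketch (using `(row, col)` tuples instead of real `cirq.GridQubit`s, and assuming Euclidean grid distance) illustrates the idea:

```python
import itertools

# Hypothetical sketch of the parallel controlled-gate spacing rule:
# every pair of qubits belonging to *different* controlled gates must be
# strictly farther apart than control_radius.
def gates_far_enough(gate_qubit_groups, control_radius):
    for group_a, group_b in itertools.combinations(gate_qubit_groups, 2):
        for (r1, c1) in group_a:
            for (r2, c2) in group_b:
                if ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5 <= control_radius:
                    return False
    return True

# Two CNOTs on opposite columns of a 2x3 grid, with control_radius = 1:
print(gates_far_enough([[(0, 0), (1, 0)], [(0, 2), (1, 2)]], 1))  # True
# Adjacent columns are only distance 1 apart, which is not > control_radius:
print(gates_far_enough([[(0, 0), (1, 0)], [(0, 1), (1, 1)]], 1))  # False
```

The real check is done for you by `validate_moment`; this sketch only shows why some CNOT pairs can share a moment and others cannot.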
Moments can be validated with the validate_moment method. Some examples are given below. | """Example of a valid moment with single qubit gates."""
qubits = sorted(neutral_atom_device.qubits)
# Get a valid moment.
valid_moment = cirq.Moment(cirq.Z.on_each(qubits[:3]) + cirq.X.on_each(qubits[3:6]))
# Display it.
print("Example of a valid moment with single-qubit gates:", cirq.Circuit(valid_moment), sep="\n\n")
# Verify it is valid.
neutral_atom_device.validate_moment(valid_moment) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Recall that we defined max_parallel_z = 3 in our device. Thus, if we tried to do 4 $Z$ gates in the same moment, this would be invalid. | """Example of an invalid moment with single qubit gates."""
# Get an invalid moment.
invalid_moment = cirq.Moment(cirq.Z.on_each(qubits[:4]))
# Display it.
print("Example of an invalid moment with single-qubit gates:", cirq.Circuit(invalid_moment), sep="\n\n")
# Uncommenting raises ValueError: Too many simultaneous Z gates.
# neutral_atom_device.validate_moment(invalid_moment) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
This is also true for 4 $XY$ gates since we set max_parallel_xy = 3. However, there is an exception for $XY$ gates acting on every qubit, as illustrated below. | """An XY gate can be performed on every qubit in the device simultaneously.
If the XY gate does not act on every qubit, it must act on <= max_parallel_xy qubits.
"""
valid_moment = cirq.Moment(cirq.X.on_each(qubits))
neutral_atom_device.validate_moment(valid_moment) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Although both $Z$ and $Z^{1.5}$ are valid gates, they cannot be performed simultaneously because all gates "of the same type" must be identical in the same moment. | """Example of an invalid moment with single qubit gates."""
# Get an invalid moment.
invalid_moment = cirq.Moment(cirq.Z(qubits[0]), cirq.Z(qubits[1]) ** 1.5)
# Display it.
print("Example of an invalid moment with single-qubit gates:", cirq.Circuit(invalid_moment), sep="\n\n")
# Uncommenting raises ValueError: Non-identical simultaneous Z gates.
# neutral_atom_device.validate_moment(invalid_moment) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Exercise: Multiple controlled gates in the same moment
Construct a NeutralAtomDevice which is capable of implementing two CNOTs in the same moment. Verify that these operations can indeed be performed in parallel by calling the validate_moment method or showing that Cirq inserts the operations into the same moment. | # Your code here! | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Solution | """Example solution for creating a device which allows two CNOTs in the same moment."""
# Create a NeutralAtomDevice.
device = cirq.NeutralAtomDevice(
qubits=cirq.GridQubit.rect(2, 3),
measurement_duration=5 * cirq.Duration(nanos=10**6),
gate_duration=100 * cirq.Duration(nanos=10**3),
max_parallel_z=4,
max_parallel_xy=4,
max_parallel_c=4,
control_radius=1
)
print("Device:")
print(device)
# Create a circuit for a NeutralAtomDevice.
circuit = cirq.Circuit()
# Append two CNOTs that can be in the same moment.
circuit.append(
[cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0)),
cirq.CNOT(cirq.GridQubit(0, 2), cirq.GridQubit(1, 2))]
)
# Append two CNOTs that cannot be in the same moment.
circuit.append(
cirq.Moment(cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0))),
cirq.Moment(cirq.CNOT(cirq.GridQubit(0, 1), cirq.GridQubit(1, 1)))
)
# Validate the circuit.
device.validate_circuit(circuit)
# Display the circuit.
print("\nCircuit:")
print(circuit) | docs/tutorials/educators/neutral_atom.ipynb | quantumlib/Cirq | apache-2.0 |
Built-in dataset modules
Some common dataset formats are already implemented in chainer.datasets
TupleDataset | from chainer.datasets import TupleDataset
x = np.arange(10)
t = x * x
data = TupleDataset(x, t)
print('data type: {}, len: {}'.format(type(data), len(data)))
# Unlike numpy, it does not have shape property.
data.shape | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
The i-th data can be accessed by data[i],
which is a tuple of the form ($x_i$, $t_i$, ...) | # get fourth data -> x=3, t=9
data[3] | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
Slice accessing
When TupleDataset is accessed by slice indexing, e.g. data[i:j], the returned value is a list of tuples
$[(x_i, t_i), ..., (x_{j-1}, t_{j-1})]$ | # Get 1st, 2nd, 3rd data at the same time.
examples = data[0:4]
print(examples)
print('examples type: {}, len: {}'
.format(type(examples), len(examples))) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
To convert examples into minibatch format, you can use the concat_examples function in chainer.dataset.
Its return value is in the format ([x_array], [t_array], ...) | from chainer.dataset import concat_examples
data_minibatch = concat_examples(examples)
#print(data_minibatch)
#print('data_minibatch type: {}, len: {}'
# .format(type(data_minibatch), len(data_minibatch)))
x_minibatch, t_minibatch = data_minibatch
# Now it is array format, which has shape
print('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))
print('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape)) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
DictDataset
TBD | from chainer.datasets import DictDataset
x = np.arange(10)
t = x * x
# To construct `DictDataset`, you can specify each key-value pair by passing "key=value" in kwargs.
data = DictDataset(x=x, t=t)
print('data type: {}, len: {}'.format(type(data), len(data)))
# Get 3rd data at the same time.
example = data[2]
print(example)
print('examples type: {}, len: {}'
.format(type(example), len(example)))
# You can access each value via key
print('x: {}, t: {}'.format(example['x'], example['t'])) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
ImageDataset
This is a utility class for image datasets.
When a dataset becomes very big (for example, the ImageNet dataset),
it is not practical to load all the images into memory, unlike CIFAR-10 or CIFAR-100.
In this case, the ImageDataset class can be used to open images from storage every time a minibatch is created.
[Note] ImageDataset handles only the images; if you also need label information
(for example, if you are working on an image classification task), use LabeledImageDataset instead.
You need to create a text file which contains the list of image paths in order to use ImageDataset.
See data/images.dat for how the paths text file look like. | import os
from chainer.datasets import ImageDataset
# print('Current directory: ', os.path.abspath(os.curdir))
filepath = './data/images.dat'
image_dataset = ImageDataset(filepath, root='./data/images')
print('image_dataset type: {}, len: {}'.format(type(image_dataset), len(image_dataset))) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
We have created image_dataset above; however, the images are not loaded into memory yet.
Image data is loaded from storage each time it is accessed via an index, for efficient memory use. | # Access i-th image by image_dataset[i].
# image data is loaded here. for only 0-th image.
img = image_dataset[0]
# img is numpy array, already aligned as (channels, height, width),
# which is the standard shape format to feed into convolutional layer.
print('img', type(img), img.shape) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
LabeledImageDataset
This is a utility class for labeled image datasets.
Like ImageDataset, it loads image files from storage into memory at training runtime.
The difference is that it also contains label information, which is typically used for image classification tasks.
You need to create a text file which contains the list of image paths and labels in order to use LabeledImageDataset.
See data/images_labels.dat for how the text file look like. | import os
from chainer.datasets import LabeledImageDataset
# print('Current directory: ', os.path.abspath(os.curdir))
filepath = './data/images_labels.dat'
labeled_image_dataset = LabeledImageDataset(filepath, root='./data/images')
print('labeled_image_dataset type: {}, len: {}'.format(type(labeled_image_dataset), len(labeled_image_dataset))) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
We have created labeled_image_dataset above; however, the images are not loaded into memory yet.
Image data is loaded from storage each time it is accessed via an index, for efficient memory use. | # Access i-th image and label by image_dataset[i].
# image data is loaded here. for only 0-th image.
img, label = labeled_image_dataset[0]
print('img', type(img), img.shape)
print('label', type(label), label) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
SubDataset
TBD
It can be used for cross validation. | datasets.split_dataset_n_random() | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
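chainer provides split_dataset_random and split_dataset_n_random for making these splits. As a rough illustration of the underlying idea only (not the chainer API), a SubDataset is essentially an index-remapped view over the base dataset:

```python
import random

# Plain-Python sketch (not the chainer API) of the idea behind SubDataset:
# a view over a base dataset through a list of indices, so train/validation
# splits share the underlying storage instead of copying it.
class SubView:
    def __init__(self, base, indices):
        self.base = base
        self.indices = indices

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        return self.base[self.indices[i]]

def split_random(base, first_size, seed=0):
    order = list(range(len(base)))
    random.Random(seed).shuffle(order)
    return SubView(base, order[:first_size]), SubView(base, order[first_size:])

data = list(range(10))
train, valid = split_random(data, first_size=7)
print(len(train), len(valid))  # 7 3
```

Because only indices are shuffled, the two views together still cover every example exactly once, which is what cross-validation folds need.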
Implement your own custom dataset
You can define your own dataset by implementing a subclass of DatasetMixin in chainer.dataset
DatasetMixin
If you want to define a custom dataset, DatasetMixin provides the base functionality to make it compatible with the other dataset formats.
Another important use of DatasetMixin is to preprocess the input data, including data augmentation.
To implement a subclass of DatasetMixin, you usually need to implement these 3 methods:
- Override the __init__(self, *args) method: not compulsory, but typically used to store the underlying data.
- Override the __len__(self) method: iterators need to know the length of the dataset to determine the end of an epoch.
- Override the get_example(self, i) method: returns the i-th example; preprocessing goes here. | from chainer.dataset import DatasetMixin
print_debug = True
class SimpleDataset(DatasetMixin):
def __init__(self, values):
self.values = values
def __len__(self):
return len(self.values)
def get_example(self, i):
if print_debug:
print('get_example, i = {}'.format(i))
return self.values[i] | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
The important method in DatasetMixin is get_example(self, i).
It is called whenever the data is accessed via data[i] | simple_data = SimpleDataset([0, 1, 4, 9, 16, 25])
# get_example(self, i) is called when data is accessed by data[i]
simple_data[3]
# data can be accessed using slice indexing as well
simple_data[1:3] | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
The important point is that get_example is called every time the data is accessed by [] indexing.
Thus you may put random value generation for data augmentation in get_example. | import numpy as np
from chainer.dataset import DatasetMixin
print_debug = False
def calc(x):
return x * x
class SquareNoiseDataset(DatasetMixin):
def __init__(self, values):
self.values = values
def __len__(self):
return len(self.values)
def get_example(self, i):
if print_debug:
print('get_example, i = {}'.format(i))
x = self.values[i]
t = calc(x)
t_noise = t + np.random.normal(0, 0.1)
return x, t_noise
square_noise_data = SquareNoiseDataset(np.arange(10)) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
SquareNoiseDataset above adds small Gaussian noise to the original value,
and every time the value is accessed, get_example is called and different noise is added, even if you access the data with the same index. | # Accessing the same index, but the value is different!
print('Accessing square_noise_data[3]', )
print('1st: ', square_noise_data[3])
print('2nd: ', square_noise_data[3])
print('3rd: ', square_noise_data[3])
# Same applies for slice index accessing.
print('Accessing square_noise_data[0:4]')
print('1st: ', square_noise_data[0:4])
print('2nd: ', square_noise_data[0:4])
print('3rd: ', square_noise_data[0:4]) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
To convert examples into minibatch format, you can use the concat_examples function in chainer.dataset in the same way as explained for TupleDataset. | from chainer.dataset import concat_examples
examples = square_noise_data[0:4]
print('examples = {}'.format(examples))
data_minibatch = concat_examples(examples)
x_minibatch, t_minibatch = data_minibatch
# Now it is array format, which has shape
print('x_minibatch = {}, type: {}, shape: {}'.format(x_minibatch, type(x_minibatch), x_minibatch.shape))
print('t_minibatch = {}, type: {}, shape: {}'.format(t_minibatch, type(t_minibatch), t_minibatch.shape)) | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
TransformDataset
TransformDataset can be used to create a modified dataset from an existing dataset.
A new (modified) dataset is created by TransformDataset(original_data, transform_function).
Let's see a concrete example that creates a new dataset from an original tuple dataset by adding small noise. | from chainer.datasets import TransformDataset
x = np.arange(10)
t = x * x - x
original_dataset = TupleDataset(x, t)
def transform_function(in_data):
x_i, t_i = in_data
new_t_i = t_i + np.random.normal(0, 0.1)
return x_i, new_t_i
transformed_dataset = TransformDataset(original_dataset, transform_function)
original_dataset[:3]
# Now Gaussian noise is added (in transform_function) to the original_dataset.
transformed_dataset[:3] | src/01_chainer_intro/dataset_introduction.ipynb | corochann/chainer-hands-on-tutorial | mit |
Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN, a fully-connected RNN where the output from previous
timestep is to be fed to next timestep.
keras.layers.GRU, first proposed in
Cho et al., 2014.
keras.layers.LSTM, first proposed in
Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM
and GRU.
Here is a simple example of a Sequential model that processes sequences of integers,
embeds each integer into a 64-dimensional vector, then processes the sequence of
vectors using a LSTM layer. | model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
# TODO -- your code goes here
# Add a Dense layer with 10 units.
# TODO -- your code goes here
model.summary() | courses/machine_learning/deepdive2/text_classification/labs/rnn.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a large speedup when processing short sequences on
CPU), via the unroll argument
...and more.
For more information, see the
RNN API documentation.
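As a minimal sketch (assuming TensorFlow's bundled Keras), the arguments listed above can be exercised on a small LSTM:

```python
import numpy as np
from tensorflow.keras import layers

# Sketch exercising the extra RNN-layer arguments mentioned above:
# dropout/recurrent_dropout regularize during training, go_backwards
# reverses the input sequence, and unroll trades memory for speed on
# short, fixed-length sequences.
lstm = layers.LSTM(
    8,
    dropout=0.2,            # dropout on the inputs
    recurrent_dropout=0.2,  # dropout on the recurrent state
    go_backwards=True,      # process the sequence in reverse
    unroll=True,            # unroll the recurrence (short sequences only)
)
x = np.random.rand(4, 5, 3).astype("float32")  # (batch, timesteps, features)
print(lstm(x).shape)  # (4, 8): one output vector per sample, as usual
```

The extra arguments change how the recurrence is executed, not the output shape, which stays (batch_size, units) as described in the next section.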
Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector
is the RNN cell output corresponding to the last timestep, containing information
about the entire input sequence. The shape of this output is (batch_size, units)
where units corresponds to the units argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector
per timestep per sample), if you set return_sequences=True. The shape of this output
is (batch_size, timesteps, units). | model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
In addition, an RNN layer can return its final internal state(s). The returned states
can be used to resume the RNN execution later, or
to initialize another RNN.
This setting is commonly used in the
encoder-decoder sequence-to-sequence model, where the encoder final state is used as
the initial state of the decoder.
To configure an RNN layer to return its internal state, set the return_state parameter
to True when creating the layer. Note that LSTM has 2 state tensors, but GRU
only has one.
To configure the initial state of the layer, just call the layer with the additional
keyword argument initial_state.
Note that the shape of the state needs to match the unit size of the layer, like in the
example below. | encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs.
Unlike RNN layers, which process whole batches of input sequences, the RNN cell only
processes a single timestep.
The cell is the inside of the for loop of an RNN layer. Wrapping a cell inside a
keras.layers.RNN layer gives you a layer capable of processing batches of
sequences, e.g. RNN(LSTMCell(10)).
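This cell/layer relationship can be sketched in plain Python. The hypothetical `rnn_layer` below just loops a cell function over timesteps; `toy_cell` is a simple accumulator, not a real LSTM cell:

```python
# Sketch: an RNN "layer" is a for loop over timesteps driving a "cell".
# The cell below is a toy accumulator, not a real LSTM cell.
def toy_cell(x_t, state):
    new_state = state + x_t          # update the state with the current input
    return new_state, new_state      # (output, new state)

def rnn_layer(sequence, cell, initial_state=0):
    state = initial_state
    outputs = []
    for x_t in sequence:             # this loop is what keras.layers.RNN provides
        output, state = cell(x_t, state)
        outputs.append(output)
    return outputs, state            # full sequence of outputs + final state

outputs, final_state = rnn_layer([1, 2, 3, 4], toy_cell)
print(outputs, final_state)  # [1, 3, 6, 10] 10
```

The layer owns the loop and the state bookkeeping; the cell only defines the per-timestep computation, which is why any cell can be wrapped in `keras.layers.RNN`.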
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact,
the implementation of this layer in TF v1.x was just creating the corresponding RNN
cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM
layers enables the use of CuDNN, and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN
layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, makes it
very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long sequences (possibly infinite), you may want to use the
pattern of cross-batch statefulness.
Normally, the internal state of an RNN layer is reset every time it sees a new batch
(i.e. every sample seen by the layer is assumed to be independent of the past). The
layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter
sequences, and to feed these shorter sequences sequentially into an RNN layer without
resetting the layer's state. That way, the layer can retain information about the
entirety of the sequence, even though it's only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
When you want to clear the state, you can use layer.reset_states().
Note: In this setup, sample i in a given batch is assumed to be the continuation of
sample i in the previous batch. This means that all batches should contain the same
number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100,
sequence_B_from_t0_to_t100], the next batch should contain
[sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].
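The splitting described above can be sketched in plain Python (the chunk boundaries here are illustrative; the listing above uses slightly different, inclusive boundaries):

```python
# Sketch: break one long sequence into fixed-size sub-sequences to feed a
# stateful RNN one chunk at a time.
def split_sequence(seq, chunk_size=100):
    return [seq[i:i + chunk_size] for i in range(0, len(seq), chunk_size)]

s = list(range(1548))          # stands in for [t0, t1, ..., t1547]
sub_sequences = split_sequence(s)
print(len(sub_sequences))      # 16 sub-sequences; the last one is shorter
```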
Here is a complete example: | paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
# TODO -- your code goes here
RNN State Reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in the layer.weights(). If you
would like to reuse the state from an RNN layer, you can retrieve the states value by
layer.states and use it as the
initial state for a new layer via the Keras functional API like new_layer(inputs,
initial_state=layer.states), or model subclassing.
Note also that a Sequential model cannot be used in this case, since it only
supports layers with a single input and output; the extra initial-state input makes
it impossible to use here. | paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that an RNN model
can perform better if it processes the sequence not only from start to end, but also
backwards. For example, to predict the next word in a sentence, it is often useful to
have the context around the word, not just the words that come before it.
Keras provides an easy API for you to build such bidirectional RNNs: the
keras.layers.Bidirectional wrapper. | model = keras.Sequential()
# Add Bidirectional layers
# TODO -- your code goes here
model.summary()
Under the hood, Bidirectional will copy the RNN layer passed in, and flip the
go_backwards field of the newly copied layer, so that it will process the inputs in
reverse order.
The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer
output and the backward layer output. If you need a different merging behavior, e.g.
summation, change the merge_mode parameter in the Bidirectional wrapper
constructor. For more details about Bidirectional, please check
the API docs.
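The forward/backward processing and the default concatenation merge can be sketched in plain Python with a toy, order-sensitive cell (an illustration of the idea, not the real Bidirectional implementation):

```python
def run_rnn(sequence):
    # Order-sensitive toy cell: state = 2*state + x, so direction matters.
    state = 0
    for x in sequence:
        state = 2 * state + x
    return [state]                                 # pretend this is a feature vector

def bidirectional(sequence):
    forward = run_rnn(sequence)
    backward = run_rnn(list(reversed(sequence)))   # the go_backwards analogue
    return forward + backward                      # default merge_mode="concat"

print(bidirectional([1, 2, 3]))  # [11, 17]
```

Because the toy cell is order-sensitive, the forward and backward passes produce different features, which is exactly why the bidirectional wrapper adds information for text-like sequences.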
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN
kernels by default when a GPU is available. With this change, the prior
keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your
model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, the layer will not be able
to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU
layers. E.g.:
Changing the activation function from tanh to something else.
Changing the recurrent_activation function from sigmoid to something else.
Using recurrent_dropout > 0.
Setting unroll to True, which forces LSTM/GRU to decompose the inner
tf.while_loop into an unrolled for loop.
Setting use_bias to False.
Using masking when the input data is not strictly right padded (if the mask
corresponds to strictly right padded data, CuDNN can still be used. This is the most
common case).
For the detailed list of constraints, please see the documentation for the
LSTM and
GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of
pixels as a timestep), and we'll predict the digit's label. | batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
Let's load the MNIST dataset: | mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The
output of the model has shape [batch_size, 10]. The target for the model is an
integer vector; each integer is in the range 0 to 9. | model = build_model(allow_cudnn_kernel=True)
# Compile the model
# TODO -- your code goes here
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
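For intuition, the sparse categorical cross-entropy chosen above can be sketched for a single sample in plain Python: a softmax over the logits, then the negative log-probability of the integer target (a conceptual sketch, not the TF implementation):

```python
import math

def sparse_categorical_crossentropy(logits, target):
    # Softmax over the logits, then -log(probability of the true class).
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    prob_target = exps[target] / total
    return -math.log(prob_target)

logits = [2.0, 1.0, 0.1]
# The loss is small when the target's logit dominates, large otherwise.
print(sparse_categorical_crossentropy(logits, 0))
print(sparse_categorical_crossentropy(logits, 2))
```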
Now, let's compare to a model that does not use the CuDNN kernel: | noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
When running on a machine with an NVIDIA GPU and CuDNN installed,
the model built with CuDNN is much faster to train compared to the
model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only
environment. The tf.device annotation below is just forcing the device placement.
The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that
pretty cool? | import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single
timestep. For example, a video frame could have audio and video input at the same
time. The data shape in this case could be:
[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]
In another example, handwriting data could have both coordinates x and y for the
current position of the pen, as well as pressure information. So the data
representation could be:
[batch, timestep, {"location": [x, y], "pressure": [force]}]
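Such nested structures are handled internally with tf.nest-style flattening. A minimal plain-Python sketch (handling only dicts, lists, and tuples; dict entries are flattened in sorted-key order, as tf.nest does):

```python
def flatten(structure):
    # Minimal tf.nest.flatten analogue: dicts in sorted-key order,
    # lists/tuples in order, leaves collected into one flat list.
    if isinstance(structure, dict):
        result = []
        for key in sorted(structure):
            result.extend(flatten(structure[key]))
        return result
    if isinstance(structure, (list, tuple)):
        result = []
        for item in structure:
            result.extend(flatten(item))
        return result
    return [structure]

timestep = {"location": ["x", "y"], "pressure": ["force"]}
print(flatten(timestep))  # ['x', 'y', 'force']
```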
The following code provides an example of how to build a custom RNN cell that accepts
such structured inputs.
Define a custom cell that supports nested input/output
See Making new Layers & Models via subclassing
for details on writing your own layers. | class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3}
Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell
we just defined. | unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for
demonstration. | input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
Load the WNA profile from Campbell (2003). | with open("../tests/data/qwl_tests.json") as fp:
data = json.load(fp)[1]
thickness = np.diff(data["site"]["depth"])
profile = pysra.site.Profile()
for i, (thick, vel_shear, density) in enumerate(
zip(thickness, data["site"]["velocity"], data["site"]["density"])
):
profile.append(
pysra.site.Layer(
pysra.site.SoilType(f"{i}", density * pysra.motion.GRAVITY),
thick * 1000,
vel_shear * 1000,
)
)
profile.update_layers(0) | examples/example-09.ipynb | arkottke/pysra | mit |
Create simple point source motion | motion = pysra.motion.SourceTheoryRvtMotion(magnitude=6.5, distance=20, region="cena")
motion.calc_fourier_amps(data["freqs"])
calc = pysra.propagation.QuarterWaveLenCalculator(site_atten=0.04)
input_loc = profile.location("outcrop", index=-1)
calc(motion, profile, input_loc)
fig, ax = plt.subplots()
ax.plot(motion.freqs, calc.crustal_amp, label="Crustal Amp.")
ax.plot(motion.freqs, calc.site_term, label="Site Term")
ax.set(
xlabel="Frequency (Hz)",
xscale="log",
ylabel="Amplitude",
yscale="linear",
)
ax.legend()
fig.tight_layout();
The quarter-wavelength calculation is tested against the WNA and CENA crustal amplification models provided by Campbell (2003). The test of the CENA model passes, but the WNA model fails. Below is a comparison of the two crustal amplifications. | fig, ax = plt.subplots()
ax.plot(motion.freqs, calc.crustal_amp, label="Calculated")
ax.plot(data["freqs"], data["crustal_amp"], label="Campbell (03)")
ax.set(
xlabel="Frequency (Hz)",
xscale="log",
ylabel="Amplitude",
yscale="linear",
)
ax.legend()
fig.tight_layout();
Adjust the profile to match the target crustal amplification -- no consideration of the site attenuation parameter, although this can also be done. First, the adjustment is performed only on the velocity. The second set of plots adjusts velocity and thickness. | for adjust_thickness in [False, True]:
calc.fit(
target_type="crustal_amp",
target=data["crustal_amp"],
adjust_thickness=adjust_thickness,
)
fig, ax = plt.subplots()
ax.plot(motion.freqs, calc.crustal_amp, label="Calculated")
ax.plot(data["freqs"], data["crustal_amp"], label="Campbell (03)")
ax.set(
xlabel="Frequency (Hz)",
xscale="log",
ylabel="Amplitude",
yscale="linear",
)
ax.legend()
fig.tight_layout()
for yscale in ["log", "linear"]:
fig, ax = plt.subplots()
ax.plot(
profile.initial_shear_vel,
profile.depth,
label="Initial",
drawstyle="steps-pre",
)
ax.plot(
calc.profile.initial_shear_vel,
calc.profile.depth,
label="Fit",
drawstyle="steps-pre",
)
ax.legend()
ax.set(
xlabel="$V_s$ (m/s)",
xlim=(0, 3500),
ylabel="Depth (m)",
ylim=(8000, 0.1),
yscale=yscale,
)
fig.tight_layout();
Load credentials | # %load getCredentialsFromFile.py
def getCredentials():
from oauth2client import file
import httplib2
import ipywidgets as widgets
print("Getting the credentials from file...")
storage = file.Storage("oauth2.dat")
credentials=storage.get()
if credentials is None or credentials.invalid:
print( '❗')
display(widgets.Valid(
value=False,
description='Credentials are ',
disabled=False))
display(widgets.HTML('go create a credential valid file here: <a target="_blank" href="cloud.google.auth.ipynb.ipynb">gcloud authorization notebook</a> and try again'))
else:
http_auth = credentials.authorize(httplib2.Http())
print('✅ Ok')
return credentials
credentials=getCredentials() | google cloud/google cloud with python/cloud.google.compute.instances.ipynb | daverick/alella | gpl-3.0 |
Create services | compute_service = build('compute', 'v1', credentials=credentials)
resource_service = build('cloudresourcemanager', 'v1', credentials=credentials)
Choose projectId and zone | # %load chooseProjectId.py
#projectId is the variable that will contain the projectId that will be used in the API calls
projectId=None
#list the existing projects
projects=resource_service.projects().list().execute()
#we create a dictionaray name:projectId foe a dropdown list widget
projectsList={project['name']:project['projectId'] for project in projects['projects']}
projectsList['None']='invalid'
#the dropdownlist widget
projectWidget=widgets.Dropdown(options=projectsList,description='Choose your Project',value='invalid')
#a valid widget that get valid when a project is selected
projectIdValid=widgets.Valid(value=False,description='')
display(widgets.Box([projectWidget,projectIdValid]))
def projectValueChange(sender):
if projectWidget.value!='invalid':
#when a valid project is selected ,the gloabl variable projectId is set
projectIdValid.value=True
projectIdValid.description=projectWidget.value
global projectId
projectId=projectWidget.value
else:
projectIdValid.value=False
projectIdValid.description=''
projectWidget.observe(projectValueChange, 'value')
# %load chooseZone.py
#zone is the variable that will contain the zone that will be used in the API calls
zone=None
#list the existing zones
zones=compute_service.zones().list(project=projectId).execute()
#list that will contains the zones for a dropdown list
zonesList=[item['name'] for item in zones['items']]
zonesList.append('none')
#the dropdownlist widget
zoneWidget=widgets.Dropdown(options=zonesList,value='none',description='Choose your Zone:')
zoneValid=widgets.Valid(value=False,description='')
display(widgets.Box([zoneWidget,zoneValid]))
def zoneValueChange(sender):
if zoneWidget.value!='none':
#when a valid zone is selected, the variable zone is set
zoneValid.value=True
zoneValid.description=zoneWidget.value
global zone
zone=zoneWidget.value
else:
zoneValid.value=False
zoneValid.description=''
zoneWidget.observe(zoneValueChange, 'value')
Create a new instance
- choosing the disk image | image_response = compute_service.images().getFromFamily(
project='debian-cloud', family='debian-8').execute()
source_disk_image = image_response['selfLink']
- choosing the machineType | machineType=None
machineTypes=compute_service.machineTypes().list(project=projectId,zone=zone).execute()
machineTypesList=[item['name'] for item in machineTypes['items']]
machineTypesList.append('none')
machineTypesWidget=widgets.Dropdown(options=machineTypesList,value='none',description='Choose your MachineType:')
machineTypesValid=widgets.Valid(value=False,description='')
display(widgets.Box([machineTypesWidget,machineTypesValid]))
def machineTypeValueChange(sender):
if machineTypesWidget.value!='none':
machineTypesValid.value=True
machineTypesValid.description=machineTypesWidget.value
global machineType
machineType=machineTypesWidget.value
else:
machineTypesValid.value=False
machineTypesValid.description=''
machineTypesWidget.observe(machineTypeValueChange, 'value')
- choose an instance name | instanceName=None
# instanceName has to match this regexp
instanceNameControl=re.compile(r'^(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)$')
#the widgets
instanceNameWidget=widgets.Text(description="Name for the new instance:")
valid=widgets.Valid(value=False,description='',disabled=False)
display(widgets.Box([instanceNameWidget,valid]))
def instanceNameValueChange(sender):
if instanceNameWidget.value!="":
if instanceNameControl.match(instanceNameWidget.value):
#when the entered text matches the regexp, we mark it valid and set instanceName
valid.value=True
valid.description='OK'
global instanceName
instanceName=instanceNameWidget.value
else:
valid.value=False
valid.description="The instance name has to match the regexp '(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)'"
else:
valid.value=False
valid.description=''
instanceNameWidget.observe(instanceNameValueChange, 'value')
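The instance-name pattern can be exercised on its own to see which names it accepts (the pattern is copied from the cell above):

```python
import re

# Same pattern as in the widget cell: a lowercase letter first, then up to 62
# more lowercase letters, digits, or hyphens, not ending in a hyphen.
instance_name_control = re.compile(r'^(?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)$')

for name in ["my-instance-1", "a", "1bad", "Bad-Case", "trailing-"]:
    print(name, bool(instance_name_control.match(name)))
```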
- creating the config for the new instance
The name, machineType, and disk are set according to the previous steps.
With scheduling.preemptible set to true we choose a preemptible instance (cheaper ;-) )
You can adjust labels to your needs. | config= {'name':instanceName,
'machineType': "zones/%(zone)s/machineTypes/%(machineType)s" %{'zone':zone,'machineType':machineType},
'disks':[
{
'boot':True,
'autoDelete':True,
'initializeParams':{
'sourceImage':source_disk_image
}
}],
'scheduling':
{
'preemptible': True
},
'networkInterfaces':[{
'network':'global/networks/default',
'accessConfigs': [
{'type':'ONE_TO_ONE_NAT','name':'ExternalNAT'}
]
}],
'serviceAccounts':[{
'email':'default',
'scopes':[
'https://www.googleapis.com/auth/devstorage.read_write',
'https://www.googleapis.com/auth/logging.write'
]
}],
"labels": {
"env": "test",
"created-by": "jupyter-notebooks-cloud-google-compute-instances"
},
}
#print(json.dumps(config, indent=2))
- executing the API call | #a progress widget will present the progress of the operation
progress=widgets.IntProgress(value=0,min=0,max=3,step=1,description=':',bar_style='warning')
display(progress)
#executing the insert operation
operation = compute_service.instances().insert(project=projectId,
zone=zone,
body=config
).execute()
def updateProgress(result,progress=progress):
#updating the progress widget with the result of the operation
if result['status']== 'PENDING':
progress.value=1
progress.bar_style='warning'
progress.description=result['status']
elif result['status']== 'RUNNING':
progress.value=2
progress.bar_style='info'
progress.description=result['status']
elif result['status']== 'DONE':
progress.value=3
if 'error' in result:
progress.description='Error'
progress.bar_style='danger'
else:
progress.description=result['status']
progress.bar_style='success'
import time
#repeat until the result is DONE
while True:
#obtain the status of the operation
result=compute_service.zoneOperations().get(project=projectId,
zone=zone,
operation=operation['name']).execute()
updateProgress(result)
if result['status']== 'DONE':
break
time.sleep(.25)
Listing the instance and their status | result = compute_service.instances().list(project=projectId, zone=zone).execute()
if 'items' in result.keys():
display(DataFrame.from_dict({instance['name']:(instance['status'],'✅'if instance['status']=='RUNNING' else '✖'if instance['status']=='TERMINATED' else '❓')for instance in result['items']},orient='index'))
else:
print("No instance found.")
Start/Stop/Delete instances | # getting the current instances list
instances=compute_service.instances().list(project=projectId,zone=zone).execute()
instancesList=[item['name'] for item in instances['items']]
# none is added for the dropdownlist
instancesList.append('none')
#building and displaying the widgets
instancesWidget=widgets.Dropdown(options=instancesList,value='none')
instancesValid=widgets.Valid(value=False,description='')
instanceAction=widgets.RadioButtons(
options=[ 'Status','Start','Stop', 'Delete'],value='Status')
instanceExecute=widgets.ToggleButton(value=False,description='Execute',disabled=True)
display(widgets.Box([instancesWidget,instancesValid,instanceAction,instanceExecute]))
## execute an operation.
def execute(operation):
#exctract the method and the instancename form the operation
instanceName=operation.uri.split('?')[0].split('/')[-1]
methodId=operation.methodId.split('.')[-1]
#some widgets (action + instance + progress)
progress=widgets.IntProgress(value=0,min=0,max=3,step=1,description=':',bar_style='info')
display(widgets.Box([widgets.Label(value=methodId+"ing"),widgets.Label(value=instanceName),progress]))
#the dropdown and buttons are disabled when an operation is executing
global instanceExecute
global instancesWidget
instancesWidget.disabled=True
instanceExecute.disabled=True
#execute the operation
operation=operation.execute()
#while the operation is not DONE, keep updating the progress bar
while True:
result=compute_service.zoneOperations().get(project=projectId,
zone=zone,
operation=operation['name']).execute()
updateProgress(result,progress)
if result['status']== 'DONE':
if methodId==u'delete':
#when the instance is deleted, it has to be removed from the dropdownlist
global instancesList
instancesList.remove(instanceName)
instancesWidget.options=instancesList
instancesValid.value=False
#the operation is completed, the dropwdown and buttons are enabled
instancesWidget.disabled=False
instanceExecute.disabled=False
break
time.sleep(0.1)
def executeInstance(sender):
#callback when the execute button is clicked
if instancesValid.value==True:
# the correct operation is created and passed to the execute method
if instanceAction.value=='Stop':
execute(compute_service.instances().stop(project=projectId,
zone=zone,
instance=instancesWidget.value
))
elif instanceAction.value=='Start':
execute(compute_service.instances().start(project=projectId,
zone=zone,
instance=instancesWidget.value
))
elif instanceAction.value=='Delete':
execute(compute_service.instances().delete(project=projectId,
zone=zone,
instance=instancesWidget.value
))
elif instanceAction.value=='Status':
instance=compute_service.instances().get(project=projectId,
zone=zone,
instance=instancesWidget.value).execute()
display(widgets.Box([widgets.Label(value=instance['name']),
widgets.Label(value=instance['status'])
]))
def instancesValueChange(sender):
#callback when an element is selected in the dropdown list
if instancesWidget.value!='none':
#when the selection is correct, the valid widget is set
instancesValid.value=True
instanceExecute.disabled=False
#set up the callback on the widgets
instancesWidget.observe(instancesValueChange, 'value')
instanceExecute.observe(executeInstance,'value')
Global Variables | # Global Variables
# Region-of-interest ofsets
top_x_offset = 0.45
top_y_offset = 0.60
bottom_x_offset = 0.07
# Stores the left and right lines from an image.
# Notice, need to clear before using it on new set of images (video).
right_lines = []
left_lines = []
| 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
Helper Functions | import math
# Global Variables
# Region-of-interest ofsets
top_x_offset = 0.45
top_y_offset = 0.60
bottom_x_offset = 0.07
# Stores the left and right lines from an image.
# Notice, need to clear before using it on new set of images (video).
right_lines = []
left_lines = []
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
# draw_lines2(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
def display_image(img, title):
plt.imshow(img, cmap='gray')
plt.suptitle(title)
plt.show()
def draw_lines2(img, lines, color=[255, 0, 0], thickness=2):
"""
This is my implementation of draw_lines() function
"""
threshold = 0.5
for line in lines:
for x1,y1,x2,y2 in line:
# Avoid a division-by-zero error
if x1 == x2:
continue
line_slope = ((y2-y1)/(x2-x1))
# only accept slopes >= threshold
if abs(line_slope) < threshold:
continue
if (line_slope >= threshold): # then it's the right side
right_lines.append(line[0])
elif (line_slope < -threshold): # then it's the left side
left_lines.append(line[0])
# Extrapolate full lane lines via a least-squares fit of all the collected points
# Good link:
# https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/06LeastSquares/extrapolation/complete.html
right_lines_x = [x1 for x1, y1, x2, y2 in right_lines] + [x2 for x1, y1, x2, y2 in right_lines]
right_lines_y = [y1 for x1, y1, x2, y2 in right_lines] + [y2 for x1, y1, x2, y2 in right_lines]
# Calculate the slope (m) and the intercept (b); the defaults below are kept if no points were collected
# y = m*x + b
right_m = 1
right_b = 1
if right_lines_x:
right_m, right_b = np.polyfit(right_lines_x, right_lines_y, 1) # y = m*x + b
# collect left-line x and y sets for the least-squares fit
left_lines_x = [x1 for x1, y1, x2, y2 in left_lines] + [x2 for x1, y1, x2, y2 in left_lines]
left_lines_y = [y1 for x1, y1, x2, y2 in left_lines] + [y2 for x1, y1, x2, y2 in left_lines]
# Calculate the slope (m) and the intercept (b); the defaults below are kept if no points were collected
# y = m*x + b
left_m = 1
left_b = 1
if left_lines_x:
left_m, left_b = np.polyfit(left_lines_x, left_lines_y, 1)
# Calculate the y values
y_size = img.shape[0]
y1 = y_size
y2 = int(y_size*top_y_offset)
# Calculate the 4 points x values
right_x1 = int((y1-right_b)/right_m)
right_x2 = int((y2-right_b)/right_m)
left_x1 = int((y1-left_b)/left_m)
left_x2 = int((y2-left_b)/left_m)
# Graph the lines
if right_lines_x:
cv2.line(img, (right_x1, y1), (right_x2, y2), [255,0,0], 5)
if left_lines_x:
cv2.line(img, (left_x1, y1), (left_x2, y2), [255,0,0], 5) | 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. | import os
os.listdir("test_images/")
def process_image(image, display_images=False, export_images=False):
# Step 1. Convert the image to grayscale.
gray_image = grayscale(image)
# Step 2. Blur using Gaussian smoothing / blurring
# kernel_size = 5
blurred_image = gaussian_blur(gray_image, 5)
# Step 3. Use Canny Edge Detection to get a image of edges
# low_threshold = 50
# high_threshold = 150)
edged_image = canny(blurred_image, 50, 150)
# Step 4. Mask the area of interest with a trapezoid
y_size = image.shape[0]
x_size = image.shape[1]
tx = int(x_size * top_x_offset)
bx = int(x_size * bottom_x_offset)
ty = int(y_size * top_y_offset)
vertices = np.array( [[
(bx, y_size),# Bottom Left
(tx, ty), # Top Left
(x_size - tx, ty), # Top Right
(x_size - bx, y_size) # Bottom Right
]], dtype=np.int32 )
roi_img = region_of_interest(edged_image, vertices)
# Step 5. Run Hough Transformation on masked edge detected image
houghed_image = hough_lines(roi_img, 1, np.pi/180, 40, 30, 200)
# Step 6. Draw the lines on the original image
final_image = weighted_img(houghed_image, image)
if display_images:
display_image(image, "Original Image")
display_image(gray_image, "Grayscale Image")
display_image(blurred_image, "Gaussian Blurred Image")
display_image(edged_image, "Canny Edge Detected Image")
display_image(roi_img, "Region of Interest Mapped Image")
display_image(houghed_image, "Hough Transformed Image")
display_image(final_image, "Final image")
if export_images:
mpimg.imsave("display_images_output" + "/" + "original_image.jpg", image, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "gray_image.jpg", gray_image, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "blurred_image.jpg", blurred_image, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "edged_image.jpg", edged_image, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "roi_img.jpg", roi_img, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "houghed_image.jpg", houghed_image, cmap='gray')
mpimg.imsave("display_images_output" + "/" + "final_image.jpg", final_image, cmap='gray')
return final_image
final_image = process_image(mpimg.imread('test_images/solidWhiteRight.jpg'), display_images=True, export_images=True)
in_directory = "test_images"
# Create a corresponding output directory
out_directory = "test_images_out"
if not os.path.exists(out_directory):
os.makedirs(out_directory)
# Get all images in input directory and store their names
imageNames = os.listdir(in_directory + "/")
for imageName in imageNames:
image = mpimg.imread(in_directory + "/" + imageName)
# Apply my Lane Finding Image Processing Algorithm on each image
resultImage = process_image(image)
# Save the result in the output directory
mpimg.imsave(out_directory + "/" + imageName, resultImage) | 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
Test on Videos
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4 | # Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import imageio
imageio.plugins.ffmpeg.download()
right_lines.clear()
left_lines.clear()
white_output = 'test_videos_output/solidWhiteRight.mp4'
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output)) | 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
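The averaging-and-extrapolation step described above can be sketched with plain NumPy, independent of OpenCV: gather all segment endpoints for one side, fit y = m*x + b with `np.polyfit`, then invert the fit to find x at the bottom of the image and at the top of the region of interest. The function and variable names here are illustrative, not taken from the notebook's code.

```python
import numpy as np

def extrapolate_lane(segments, y_bottom, y_top):
    """Fit one line through all segment endpoints and return its endpoints
    at y_bottom and y_top (assumes a non-horizontal, non-degenerate lane)."""
    xs = [x for x1, y1, x2, y2 in segments for x in (x1, x2)]
    ys = [y for x1, y1, x2, y2 in segments for y in (y1, y2)]
    m, b = np.polyfit(xs, ys, 1)              # least-squares fit: y = m*x + b
    x_at = lambda y: int(round((y - b) / m))  # invert the fit: x for a given y
    return (x_at(y_bottom), y_bottom), (x_at(y_top), y_top)

# toy segments lying exactly on y = 2*x + 10
segs = [(0, 10, 5, 20), (10, 30, 20, 50)]
print(extrapolate_lane(segs, y_bottom=110, y_top=10))  # ((50, 110), (0, 10))
```

In the notebook this corresponds to `draw_lines2()`: one such fit per side (positive slopes for the right lane, negative for the left), drawn from the image bottom up to `top_y_offset`.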
Now for the one with the solid yellow lane on the left. This one's more tricky! | right_lines.clear()
left_lines.clear()
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output)) | 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
Optional Challenge
I modified my pipeline so it works with this video and submitted it along with the rest of my project! | right_lines.clear()
left_lines.clear()
challenge_output = 'test_videos_output/challenge.mp4'
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output)) | 1. Computer Vision and Deep Learning/P1 Finding Lane Lines on the Road/P1.ipynb | egillanton/Udacity-SDCND | mit |
So there are 8 folders present inside the train folder, one for each species.
Now let us check the number of files present in each of these sub folders. | sub_folders = check_output(["ls", "../input/train/"]).decode("utf8").strip().split('\n')
count_dict = {}
for sub_folder in sub_folders:
num_of_files = len(check_output(["ls", "../input/train/"+sub_folder]).decode("utf8").strip().split('\n'))
print("Number of files for the species",sub_folder,":",num_of_files)
count_dict[sub_folder] = num_of_files
plt.figure(figsize=(12,4))
sns.barplot(list(count_dict.keys()), list(count_dict.values()), alpha=0.8)
plt.xlabel('Fish Species', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.show()
| fish/fish_explore.ipynb | sysid/kg | mit |
So the number of files for species ALB (Albacore tuna) is much higher than other species.
Let us look at the number of files present in the test folder. | num_test_files = len(check_output(["ls", "../input/test_stg1/"]).decode("utf8").strip().split('\n'))
print("Number of test files present :", num_test_files) | fish/fish_explore.ipynb | sysid/kg | mit |
Image Size:
Now let us look at the image size of each of the files and see what different sizes are available. | train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
for file_name in file_names:
im_array = imread(train_path+sub_folder+"/"+file_name)
size = "_".join(map(str,list(im_array.shape)))
different_file_sizes[size] = different_file_sizes.get(size,0) + 1
plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('Image size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.title("Image size present in train dataset")
plt.xticks(rotation='vertical')
plt.show() | fish/fish_explore.ipynb | sysid/kg | mit |
So 720_1280_3 is the most common image size in the train data, with 10 different sizes present overall.
720_1244_3 is the smallest of the available image sizes in the train set and 974_1732_3 is the largest.
Now let us look at the distribution in test dataset as well. | test_path = "../input/test_stg1/"
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for file_name in file_names:
size = "_".join(map(str,list(imread(test_path+file_name).shape)))
different_file_sizes[size] = different_file_sizes.get(size,0) + 1
plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('File size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Image size present in test dataset")
plt.show() | fish/fish_explore.ipynb | sysid/kg | mit |
Test set also has a very similar distribution.
Animation:
Let us try some animation on the available images. We are not able to embed the video in the notebook.
Please uncomment the following part of the code and run it locally to see the animation. | """
import random
import matplotlib.animation as animation
from matplotlib import animation, rc
from IPython.display import HTML
random.seed(12345)
train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
all_files = []
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
selected_files = random.sample(file_names, 10)
for file_name in selected_files:
all_files.append([sub_folder,file_name])
fig = plt.figure()
sns.set_style("whitegrid", {'axes.grid' : False})
img_file = "".join([train_path, sub_folder, "/", file_name])
im = plt.imshow(imread(img_file), vmin=0, vmax=255)
def updatefig(ind):
sub_folder = all_files[ind][0]
file_name = all_files[ind][1]
img_file = "".join([train_path, sub_folder, "/", file_name])
im.set_array(imread(img_file))
plt.title("Species : "+sub_folder, fontsize=15)
return im,
ani = animation.FuncAnimation(fig, updatefig, frames=len(all_files))
ani.save('lb.gif', fps=1, writer='imagemagick')
#rc('animation', html='html5')
#HTML(ani.to_html5_video())
plt.show()
""" | fish/fish_explore.ipynb | sysid/kg | mit |
Basic CNN Model using Keras:
Now let us try to build a CNN model on the dataset. Due to the memory constraints of the kernels, let us take only a (500,500,3) array from the top-left corner of each image and then try to classify based on that portion.
Kindly note that running it offline with the full image will give much better results. This is just a starter script I tried, and I am a newbie at image classification problems. | import random
from subprocess import check_output
from scipy.misc import imread
import numpy as np
np.random.seed(2016)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
batch_size = 1
nb_classes = 8
nb_epoch = 1
img_rows, img_cols, img_rgb = 500, 500, 3
nb_filters = 4
pool_size = (2, 2)
kernel_size = (3, 3)
input_shape = (img_rows, img_cols, 3)
species_map_dict = {
'ALB':0,
'BET':1,
'DOL':2,
'LAG':3,
'NoF':4,
'OTHER':5,
'SHARK':6,
'YFT':7
}
def batch_generator_train(sample_size):
train_path = "../input/train/"
all_files = []
y_values = []
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
for sub_folder in sub_folders:
file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
for file_name in file_names:
all_files.append([sub_folder, '/', file_name])
y_values.append(species_map_dict[sub_folder])
number_of_images = range(len(all_files))
counter = 0
while True:
image_index = random.choice(number_of_images)
file_name = "".join([train_path] + all_files[image_index])
print(file_name)
y = [0]*8
y[y_values[image_index]] = 1
y = np.array(y).reshape(1,8)
im_array = imread(file_name)
X = np.zeros([1, img_rows, img_cols, img_rgb])
#X[:im_array.shape[0], :im_array.shape[1], 3] = im_array.copy().astype('float32')
X[0, :, :, :] = im_array[:500,:500,:].astype('float32')
X /= 255.
print(X.shape)
yield X,y
counter += 1
#if counter == sample_size:
# break
def batch_generator_test(all_files):
for file_name in all_files:
file_name = test_path + file_name
im_array = imread(file_name)
X = np.zeros([1, img_rows, img_cols, img_rgb])
X[0,:, :, :] = im_array[:500,:500,:].astype('float32')
X /= 255.
yield X
def keras_cnn_model():
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
return model
model = keras_cnn_model()
fit= model.fit_generator(
generator = batch_generator_train(100),
nb_epoch = 1,
samples_per_epoch = 100
)
test_path = "../input/test_stg1/"
all_files = []
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
for file_name in file_names:
all_files.append(file_name)
#preds = model.predict_generator(generator=batch_generator_test(all_files), val_samples=len(all_files))
#out_df = pd.DataFrame(preds)
#out_df.columns = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
#out_df['image'] = all_files
#out_df.to_csv("sample_sub_keras.csv", index=False) | fish/fish_explore.ipynb | sysid/kg | mit |
NumPy & SciPy
Very good tutorials and docs:
Tentative NumPy Tutorial
Scientific Python stack official docs
There are a few compilations to help MATLAB, IDL, and R users transition to Python/NumPy. HTML and PDF versions are both available. (Big thanks to Alex Mulia for bringing this to our attention!)
Thesaurus of Mathematical Languages, or MATLAB synonymous commands in Python/NumPy | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.optimize import curve_fit
np.polyfit? | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
NumPy Arrays | a0 = np.arange(6)
a = a0.reshape((2,3))
print(a.dtype, a.itemsize, a.size, a.shape, '\n')
print(a, '\n')
print(repr(a), '\n')
print(a.tolist())
b = a.astype(float)
print(b, '\n')
print(repr(b))
# re-define a0 and a
a0 = np.arange(6)
a = a0.reshape((2,3))
# get a slice of a to make c
c = a[:2, 1:3]
# a and c are both based on a0, the very initial storage space
print(c, '\n')
print(a.base, a.base is c.base)
# changing c will change a and a0
c[0, 0] = 1111
print('\n', c, '\n')
print(a)
# WAT??? This is different from the slice copy of a list, e.g. mylist[:]
# if you want to make a real copy that re-allocates RAM, use a.copy() --
# note that a[:] is NOT a copy, unlike list slicing
d = a[:]
e = a.copy()
print(d is a, e is a)
# Neither d nor e is the same object as a, but d = a[:] is still a view:
# change d and you change a (and a0); only e = a.copy() owns new memory. | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
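A direct way to verify the view-vs-copy behavior above is `np.shares_memory`, which reports whether two arrays overlap in storage — a minimal sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
view = a[:]        # basic slicing -> a new array OBJECT over the SAME buffer
copy = a.copy()    # an owned copy -> new object AND new buffer

print(view is a, copy is a)           # False False (both are new objects)
print(np.shares_memory(a, view))      # True  -> mutating view mutates a
print(np.shares_memory(a, copy))      # False -> copy is independent

view[0, 0] = 99
copy[0, 1] = -1
print(a[0, 0], a[0, 1])               # 99 1 (the copy's write never reached a)
```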
Now, we'll primarily demonstrate SciPy's capability of fitting.
Fitting a single variable simple function
$f(x) = a e^{b x}$ | def f(x, a, b):
return a * np.exp(b * x)
x = np.linspace(0, 1, 1000)
y_ideal = f(x, 1, 2)
y = f(x, 1, 2) + np.random.randn(1000)
plt.plot(x, y)
plt.plot(x, y_ideal, lw=2)
popt, pcov = curve_fit(f, x, y)
# popt is the optimized parameters, and pcov is the covariance matrix.
# diagnal members np.diag(pcov) is the variances of each parameter.
# np.sqrt(np.diag(pcov)) is the standard deviation.
print(popt, '\n\n', pcov)
y_fit = f(x, popt[0], popt[1])
plt.plot(x, y_ideal, label='ideal')
plt.plot(x, y_fit, '--', label='fit')
plt.legend(loc=0, fontsize=14) | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
Fitting a single variable function containing an integral
$f(x) = c \int_0^x (a x' + b) dx' + d$ | from scipy.integrate import quad
def f(x, a, b, c, d):
# the integrand function should be within function f, because parameters a and b
# are available within.
def integrand(xx):
return a * xx + b
# if the upper/lower limit of the integral is our unknown variable x, x has to be
# iterated from an array to a single value, because the quad function only accepts
# a single value each time.
y = np.zeros(len(x))
for idx, value in enumerate(x):
y[idx] = c * quad(integrand, 0, value)[0] + d
return y
x = np.linspace(0, 1, 1000)
y_ideal = f(x, 1, 2, 3, 4)
y = f(x, 1, 2, 3, 4) + np.random.randn(1000)
plt.plot(x, y)
plt.plot(x, y_ideal, lw=2)
popt, pcov = curve_fit(f, x, y)
print(popt, '\n\n', pcov)
y_fit = f(x, popt[0], popt[1], popt[2], popt[3])
plt.plot(x, y_ideal, label='ideal')
plt.plot(x, y_fit, '--', label='fit')
plt.legend(loc=0, fontsize=14) | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
Fitting a 2 variable function
$f(x) = a e^{b x_1} + e^{c x_2}$ | def f(x, a, b, c):
return a * np.exp(b * x[0]) + np.exp(c * x[1])
x1 = np.linspace(0, 1, 1000)
x2 = np.linspace(0, 1, 1000)
x = [x1, x2]
y_ideal = f(x, 1, 2, 3)
y = f(x, 1, 2, 3) + np.random.randn(1000)
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = plt.figure(figsize=(10,8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[0], x[1], y, alpha=.1)
ax.plot(x[0], x[1], y_ideal, 'r', lw=2)
ax.view_init(30, 80)
popt, pcov = curve_fit(f, x, y)
print(popt, '\n\n', pcov)
fig = plt.figure(figsize=(10,8))
y_fit = f(x, popt[0], popt[1], popt[2])
ax = fig.add_subplot(111, projection='3d')
ax.plot(x[0], x[1], y_ideal, label='ideal')
ax.plot(x[0], x[1], y_fit, label='fit')
plt.legend(loc=0, fontsize=14)
ax.view_init(30, 80) | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
matplotlib
Some core concepts in http://matplotlib.org/faq/usage_faq.html regarding backends, (non-)interactive modes. | # pyplot (plt) interface vs object oriented interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
# Looks like it automatically chose the right axes to plot on.
# how can I plot on the first graph?
# Either keep (well... kind of) using the convenient pyplot interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
# change the state of the focus by switching to the zeroth axes
plt.sca(axes[0])
plt.plot([3,2,1])
# Or use the object oriented interface
fig, axes = plt.subplots(1, 2)
plt.plot([2,3,4])
print(axes)
ax = axes[0]
ax.plot([1,2,3])
# if you are not using notebook, and have switched on interactive mode by plt.ion(),
# you need to explicitly say
plt.draw()
# But it doesn't hurt if you say it anyway.
# So there I said it.
# Similarly, if you have two figures and want to switch back and forth
# create figs
fig1 = plt.figure('Ha')
plt.plot([1,2,32])
fig2 = plt.figure(2)
plt.plot([32,2,1])
# switch back to fig 'Ha'
plt.figure('Ha')
plt.scatter([0,1,2], [3,4,5])
# add text and then delete
plt.plot([2,3,4])
plt.text(1, 2.5, r'This is $\frac{x}{x - 1} = 1$!', fontsize=14)
# to delete the text, first get the axes reference, and pop the just added text object out of the list
plt.plot([2,3,4])
plt.text(1, 2.5, r'This is $\frac{x}{x - 1} = 1$!', fontsize=14)
ax = plt.gca()
# print(ax.texts) will give you a list, with one element
ax.texts.pop()
# you have to redraw the figure
plt.draw()
# same can be applied to lines by `ax.lines.pop()`
# tight_layout() to automatically adjust the elements in a figure
plt.plot([35,3,54])
plt.xlabel('X')
plt.ylabel('Y')
plt.plot([35,3,54])
plt.xlabel('X')
plt.ylabel('Y')
plt.tight_layout()
# locator_params() to have more or less ticks
plt.plot([35,3,54])
plt.locator_params(nbins=10) | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
pandas
A very good glimpse: 10 Minutes to pandas.
Read data from online files. | pd.read_csv('https://raw.githubusercontent.com/pydata/pandas/master/doc/data/baseball.csv', index_col='id')
df = pd.read_excel('https://github.com/pydata/pandas/raw/master/doc/data/test.xls')
print(df) | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
Ways of Indexing
Very confusing? See http://pandas.pydata.org/pandas-docs/stable/indexing.html#different-choices-for-indexing | # simple column selection by label
df['A']
# simple row slice by position, end not included
df[0:2]
# explicit row selection
df.loc['2000-01-03']
# explicit row slicing, end included
df.loc['2000-01-03':'2000-01-05']
# explicit column selection by label
df.loc[:, 'A']
# explicit element selection by label
df.loc['Jan 3, 2000', 'A']
# explicit row selection by position
df.iloc[0]
# explicit row slicing by position, end not included
df.iloc[0:2]
# explicit column selection by position
df.iloc[:, 0]
# explicit element selection by position
df.iloc[0, 0]
# mixed selection, row by position and column by label
df.ix[0, 'A'] | scientific-python-walkabout.ipynb | terencezl/scientific-python-walkabout | mit |
if you want to see logging events.
Transformation interface
In the previous tutorial on Corpora and Vector Spaces, we created a corpus of documents represented as a stream of vectors. To continue, let’s fire up gensim and use that corpus: | from gensim import corpora, models, similarities
if (os.path.exists("/tmp/deerwester.dict")):
dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
corpus = corpora.MmCorpus('/tmp/deerwester.mm')
print("Used files generated from first tutorial")
else:
print("Please run first tutorial to generate data set")
print (dictionary[0])
print (dictionary[1])
print (dictionary[2]) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
In this tutorial, I will show how to transform documents from one vector representation into another. This process serves two goals:
To bring out hidden structure in the corpus, discover relationships between words and use them to describe the documents in a new and (hopefully) more semantic way.
To make the document representation more compact. This both improves efficiency (new representation consumes less resources) and efficacy (marginal data trends are ignored, noise-reduction).
Creating a transformation
The transformations are standard Python objects, typically initialized by means of a training corpus: | tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different transformations may require different initialization parameters; in case of TfIdf, the “training” consists simply of going through the supplied corpus once and computing document frequencies of all its features. Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.
<B>Note</B>:
Transformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions. | doc_bow = [(0, 1), (1, 1)]
print(tfidf[doc_bow]) # step 2 -- use the model to transform vectors | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
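To build intuition for the numbers such a model produces, here is a hand computation under a common weighting scheme — roughly what gensim uses by default: weight = tf * log2(N / df), with each document vector then L2-normalized. (The exact defaults of `TfidfModel` may differ; this toy corpus and the helper names are illustrative only.)

```python
from collections import Counter
from math import log2, sqrt

# toy bag-of-words corpus: one list of (term_id, count) pairs per document
corpus = [[(0, 1), (1, 1)], [(0, 1), (2, 2)], [(2, 1)]]
n_docs = len(corpus)

# "training" = a single pass counting the document frequency of each term
df = Counter(term for doc in corpus for term, _ in doc)

def tfidf_vec(doc):
    weights = [(t, tf * log2(n_docs / df[t])) for t, tf in doc]
    norm = sqrt(sum(w * w for _, w in weights)) or 1.0   # L2-normalize
    return [(t, w / norm) for t, w in weights]

print(tfidf_vec([(0, 1), (1, 1)]))   # term 1 is rarer, so it weighs more
```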
Or to apply a transformation to a whole corpus: | corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
In this particular case, we are transforming the same corpus that we used for training, but this is only incidental. Once the transformation model has been initialized, it can be used on any vectors (provided they come from the same vector space, of course), even if they were not used in the training corpus at all. This is achieved by a process called folding-in for LSA, by topic inference for LDA etc.
<b>Note:</b>
Calling model[corpus] only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling corpus_transformed = model[corpus], because that would mean storing the result in main memory, and that contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed corpus_transformed multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.
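The wrapper behavior can be illustrated without gensim at all: a lazily transformed stream applies the transform anew on every pass, so repeated iteration repeats the cost — which is exactly why it pays to serialize the result (in gensim, e.g. via corpora.MmCorpus.serialize) when many passes are needed. A stdlib-only sketch, with a call counter standing in for the expensive model:

```python
class LazyTransformedCorpus:
    """Re-iterable wrapper that applies the transform at iteration time,
    analogous to gensim's model[corpus] (illustrative names only)."""
    def __init__(self, transform, corpus):
        self.transform, self.corpus = transform, corpus
    def __iter__(self):
        for doc in self.corpus:
            yield self.transform(doc)

calls = {"n": 0}
def expensive_transform(doc):
    calls["n"] += 1                 # stand-in for a costly model application
    return [x * 2 for x in doc]

corpus = [[1, 2], [3], [4, 5]]
lazy = LazyTransformedCorpus(expensive_transform, corpus)
print(calls["n"])                   # 0 -- wrapping computes nothing

for _ in lazy:
    pass
for _ in lazy:
    pass
print(calls["n"])                   # 6 -- every pass recomputes all 3 docs

cached = list(lazy)                 # materialize once if iterating many times
print(cached, calls["n"])           # [[2, 4], [6], [8, 10]] 9
```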
Transformations can also be serialized, one on top of another, in a sort of chain: | lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation
corpus_lsi = lsi[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |