As you can infer, we can find the path to the terminal state from any given state by following this policy. Any maze problem can be solved by formulating it as an MDP.

POMDP

Two-state POMDP

Let's consider a problem where we have two doors, one to our left and one to our right. One of these doors opens to a room with a tiger in it, and the other opens to an empty hall.

We will call our two states 0 and 1, for left and right respectively.

The possible actions we can take are:

1. Open-left: open the left door. Represented by 0.
2. Open-right: open the right door. Represented by 1.
3. Listen: listen carefully to one side and possibly hear the tiger breathing. Represented by 2.

The possible observations we can get are:

1. TL: the tiger seems to be at the left door.
2. TR: the tiger seems to be at the right door.

The reward function is as follows: we get +10 reward for opening the door to the empty hall and -100 reward for opening the other door and setting the tiger free. Listening costs -1 reward. We want to minimize our chances of setting the tiger free.
Our transition probabilities are:

Action 0 (open left door): $P(0) = \left[\begin{array}{cc} 0.5 & 0.5 \\ 0.5 & 0.5 \end{array}\right]$
Action 1 (open right door): $P(1) = \left[\begin{array}{cc} 0.5 & 0.5 \\ 0.5 & 0.5 \end{array}\right]$
Action 2 (listen): $P(2) = \left[\begin{array}{cc} 1.0 & 0.0 \\ 0.0 & 1.0 \end{array}\right]$

Our observation probabilities (rows: tiger left, tiger right; columns: TL, TR) are:

Open left: $O(0) = \left[\begin{array}{cc} 0.5 & 0.5 \\ 0.5 & 0.5 \end{array}\right]$
Open right: $O(1) = \left[\begin{array}{cc} 0.5 & 0.5 \\ 0.5 & 0.5 \end{array}\right]$
Listen: $O(2) = \left[\begin{array}{cc} 0.85 & 0.15 \\ 0.15 & 0.85 \end{array}\right]$

The rewards of this POMDP (rows: tiger left, tiger right) are:

Open left: $R(0) = \left[\begin{array}{c} -100 \\ +10 \end{array}\right]$
Open right: $R(1) = \left[\begin{array}{c} +10 \\ -100 \end{array}\right]$
Listen: $R(2) = \left[\begin{array}{c} -1 \\ -1 \end{array}\right]$

Based on these matrices, we will initialize our variables. Let's first define the transition model.
# t_prob[action][current_state][next_state]
t_prob = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]], [[1.0, 0.0], [0.0, 1.0]]]
mdp_apps.ipynb
Chipe1/aima-python
mit
Followed by the observation model.
# e_prob[action][state][observation]
e_prob = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, 0.5], [0.5, 0.5]], [[0.85, 0.15], [0.15, 0.85]]]
And the reward model.
# rewards[action][state]
rewards = [[-100, 10], [10, -100], [-1, -1]]
Let's now define our states and actions. We will use gamma = 0.95 for this example.
# 0: open-left, 1: open-right, 2: listen
actions = ('0', '1', '2')
# 0: left, 1: right
states = ('0', '1')
gamma = 0.95
We have all the required variables to instantiate an object of the POMDP class.
pomdp = POMDP(actions, t_prob, e_prob, rewards, states, gamma)
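Before running value iteration, it helps to see how the listen action sharpens our belief. The sketch below is plain Python for illustration (it is not part of the aima-python API); it applies Bayes' rule to the belief P(tiger = left) using the 0.85/0.15 likelihoods from the observation matrix above:

```python
# Bayesian belief update for the tiger POMDP after a "listen" action.
# b_left is the current P(tiger = left); heard_left says whether we observed TL.
def update_belief(b_left, heard_left):
    # Likelihoods taken from the listen observation matrix O(2) above.
    p_obs_left = 0.85 if heard_left else 0.15    # P(obs | tiger left)
    p_obs_right = 0.15 if heard_left else 0.85   # P(obs | tiger right)
    numerator = p_obs_left * b_left
    return numerator / (numerator + p_obs_right * (1 - b_left))

# Starting from the uniform belief, a single TL observation:
b = update_belief(0.5, heard_left=True)
print(b)  # 0.85

# A second consistent TL observation sharpens the belief further:
print(update_belief(b, heard_left=True))
```

This is why listening repeatedly before opening a door can be worth the -1 per-step cost: each consistent observation pushes the belief toward a state where opening a door is safe.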
We can now find the utility function by running pomdp_value_iteration on our pomdp object.
utility = pomdp_value_iteration(pomdp, epsilon=3)
utility

import matplotlib.pyplot as plt
%matplotlib inline

def plot_utility(utility):
    open_left = utility['0'][0]
    open_right = utility['1'][0]
    listen_left = utility['2'][0]
    listen_right = utility['2'][-1]
    left = (open_left[0] - listen_left[0]) / (open_left[0] - listen_left[0] + listen_left[1] - open_left[1])
    right = (open_right[0] - listen_right[0]) / (open_right[0] - listen_right[0] + listen_right[1] - open_right[1])
    colors = ['g', 'b', 'k']
    for action in utility:
        for value in utility[action]:
            plt.plot(value, color=colors[int(action)])
    plt.vlines([left, right], -10, 35, linestyles='dashed', colors='c')
    plt.ylim(-10, 35)
    plt.xlim(0, 1)
    plt.text(left/2 - 0.35, 30, 'open-left')
    plt.text((right + left)/2 - 0.04, 30, 'listen')
    plt.text((right + 1)/2 + 0.22, 30, 'open-right')
    plt.show()

plot_utility(utility)
TensorFlow Lite Metadata Writer API

TensorFlow Lite Model Metadata is a standard model description format. It contains rich semantics for general model information, inputs/outputs, and associated files, which makes the model self-descriptive and exchangeable. Model Metadata is currently used in two primary use cases:

1. Enabling easy model inference using the TensorFlow Lite Task Library and codegen tools. Model Metadata contains the mandatory information required during inference, such as label files in image classification, the sampling rate of the audio input in audio classification, and the tokenizer type for processing input strings in natural language models.

2. Enabling model creators to include documentation, such as descriptions of model inputs/outputs or how to use the model. Model users can view this documentation via visualization tools such as Netron.
TensorFlow Lite Metadata Writer API provides an easy-to-use API to create Model Metadata for popular ML tasks supported by the TFLite Task Library. This notebook shows examples of how the metadata should be populated for the following tasks:

* Image classifiers
* Object detectors
* Image segmenters
* Natural language classifiers
* Audio classifiers

Metadata writers for BERT natural language classifiers and BERT question answerers are coming soon. If you want to add metadata for use cases that are not supported, please use the Flatbuffers Python API. See the tutorials here.

Prerequisites

Install the TensorFlow Lite Support PyPI package.
!pip install tflite-support-nightly
site/en-snapshot/lite/models/convert/metadata_writer_tutorial.ipynb
tensorflow/docs-l10n
apache-2.0
Create Model Metadata for Task Library and Codegen <a name=image_classifiers></a> Image classifiers See the image classifier model compatibility requirements for more details about the supported model format. Step 1: Import the required packages.
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example image classifier, mobilenet_v2_1.0_224.tflite, and the label file.
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt
Step 3: Create metadata writer and populate.
ImageClassifierWriter = image_classifier.MetadataWriter
_MODEL_PATH = "mobilenet_v2_1.0_224.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "mobilenet_labels.txt"
_SAVE_TO_PATH = "mobilenet_v2_1.0_224_metadata.tflite"
# Normalization parameters are required when preprocessing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters for more details:
# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Create the metadata writer.
writer = ImageClassifierWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
<a name=object_detectors></a> Object detectors See the object detector model compatibility requirements for more details about the supported model format. Step 1: Import the required packages.
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example object detector, ssd_mobilenet_v1.tflite, and the label file.
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/ssd_mobilenet_v1.tflite -o ssd_mobilenet_v1.tflite
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/labelmap.txt -o ssd_mobilenet_labels.txt
Step 3: Create metadata writer and populate.
ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "ssd_mobilenet_v1.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "ssd_mobilenet_labels.txt"
_SAVE_TO_PATH = "ssd_mobilenet_v1_metadata.tflite"
# Normalization parameters are required when preprocessing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters for more details:
# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Create the metadata writer.
writer = ObjectDetectorWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
<a name=image_segmenters></a> Image segmenters See the image segmenter model compatibility requirements for more details about the supported model format. Step 1: Import the required packages.
from tflite_support.metadata_writers import image_segmenter
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example image segmenter, deeplabv3.tflite, and the label file.
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/deeplabv3.tflite -o deeplabv3.tflite
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/labelmap.txt -o deeplabv3_labels.txt
Step 3: Create metadata writer and populate.
ImageSegmenterWriter = image_segmenter.MetadataWriter
_MODEL_PATH = "deeplabv3.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "deeplabv3_labels.txt"
_SAVE_TO_PATH = "deeplabv3_metadata.tflite"
# Normalization parameters are required when preprocessing the image. They are
# optional if the image pixel values are in the range [0, 255] and the input
# tensor is quantized to uint8. See the introduction to normalization and
# quantization parameters for more details:
# https://www.tensorflow.org/lite/models/convert/metadata#normalization_and_quantization_parameters
_INPUT_NORM_MEAN = 127.5
_INPUT_NORM_STD = 127.5

# Create the metadata writer.
writer = ImageSegmenterWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
<a name=nl_classifiers></a> Natural language classifiers See the natural language classifier model compatibility requirements for more details about the supported model format. Step 1: Import the required packages.
from tflite_support.metadata_writers import nl_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example natural language classifier, movie_review.tflite, the label file, and the vocab file.
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/movie_review.tflite -o movie_review.tflite
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/labels.txt -o movie_review_labels.txt
!curl -L https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/nl_classifier/vocab.txt -o movie_review_vocab.txt
Step 3: Create metadata writer and populate.
NLClassifierWriter = nl_classifier.MetadataWriter
_MODEL_PATH = "movie_review.tflite"
# Task Library expects label files and vocab files that are in the same formats
# as the ones below.
_LABEL_FILE = "movie_review_labels.txt"
_VOCAB_FILE = "movie_review_vocab.txt"
# NLClassifier supports tokenizing the input string using the regex tokenizer.
# See more details about how to set up RegexTokenizer here:
# https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/metadata_writers/metadata_info.py#L130
_DELIM_REGEX_PATTERN = r"[^\w\']+"
_SAVE_TO_PATH = "movie_review_metadata.tflite"

# Create the metadata writer.
writer = nl_classifier.MetadataWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH),
    metadata_info.RegexTokenizerMd(_DELIM_REGEX_PATTERN, _VOCAB_FILE),
    [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
<a name=audio_classifiers></a> Audio classifiers See the audio classifier model compatibility requirements for more details about the supported model format. Step 1: Import the required packages.
from tflite_support.metadata_writers import audio_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
Step 2: Download the example audio classifier, yamnet.tflite, and the label file.
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_wavin_quantized_mel_relu6.tflite -o yamnet.tflite
!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_521_labels.txt -o yamnet_labels.txt
Step 3: Create metadata writer and populate.
AudioClassifierWriter = audio_classifier.MetadataWriter
_MODEL_PATH = "yamnet.tflite"
# Task Library expects label files that are in the same format as the one below.
_LABEL_FILE = "yamnet_labels.txt"
# Expected sampling rate of the input audio buffer.
_SAMPLE_RATE = 16000
# Expected number of channels of the input audio buffer. Note: the Task Library
# only supports single-channel audio so far.
_CHANNELS = 1
_SAVE_TO_PATH = "yamnet_metadata.tflite"

# Create the metadata writer.
writer = AudioClassifierWriter.create_for_inference(
    writer_utils.load_file(_MODEL_PATH), _SAMPLE_RATE, _CHANNELS, [_LABEL_FILE])

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
Create Model Metadata with semantic information

You can fill in more descriptive information about the model and each tensor through the Metadata Writer API to help improve model understanding. This is done through the `create_from_metadata_info` method of each metadata writer. In general, you fill in data through the parameters of `create_from_metadata_info`, i.e. general_md, input_md, and output_md. See the example below, which creates rich Model Metadata for image classifiers.

Step 1: Import the required packages.
from tflite_support.metadata_writers import image_classifier
from tflite_support.metadata_writers import metadata_info
from tflite_support.metadata_writers import writer_utils
from tflite_support import metadata_schema_py_generated as _metadata_fb
Step 3: Create model and tensor information.
model_buffer = writer_utils.load_file("mobilenet_v2_1.0_224.tflite")

# Create general model information.
general_md = metadata_info.GeneralMd(
    name="ImageClassifier",
    version="v1",
    description=("Identify the most prominent object in the image from a "
                 "known set of categories."),
    author="TensorFlow Lite",
    licenses="Apache License. Version 2.0")

# Create input tensor information.
input_md = metadata_info.InputImageTensorMd(
    name="input image",
    description=("Input image to be classified. The expected image is "
                 "128 x 128, with three channels (red, blue, and green) per "
                 "pixel. Each element in the tensor is a value between min and "
                 "max, where (per-channel) min is [0] and max is [255]."),
    norm_mean=[127.5],
    norm_std=[127.5],
    color_space_type=_metadata_fb.ColorSpaceType.RGB,
    tensor_type=writer_utils.get_input_tensor_types(model_buffer)[0])

# Create output tensor information.
output_md = metadata_info.ClassificationTensorMd(
    name="probability",
    description="Probabilities of the 1001 labels respectively.",
    label_files=[
        metadata_info.LabelFileMd(file_path="mobilenet_labels.txt",
                                  locale="en")
    ],
    tensor_type=writer_utils.get_output_tensor_types(model_buffer)[0])
Step 4: Create metadata writer and populate.
ImageClassifierWriter = image_classifier.MetadataWriter

# Create the metadata writer.
writer = ImageClassifierWriter.create_from_metadata_info(
    model_buffer, general_md, input_md, output_md)

# Verify the metadata generated by the metadata writer.
print(writer.get_metadata_json())

# Populate the metadata into the model.
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
Read the metadata populated to your model. You can display the metadata and associated files in a TFLite model through the following code:
from tflite_support import metadata

displayer = metadata.MetadataDisplayer.with_model_file("mobilenet_v2_1.0_224_metadata.tflite")
print("Metadata populated:")
print(displayer.get_metadata_json())

print("Associated file(s) populated:")
for file_name in displayer.get_packed_associated_file_list():
    print("file name: ", file_name)
    print("file content:")
    print(displayer.get_associated_file_buffer(file_name))
Run a query
%%bigquery --project $PROJECT
SELECT
  start_station_name
  , AVG(duration) as duration
  , COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
LIMIT 5
05_devel/magics.ipynb
GoogleCloudPlatform/bigquery-oreilly-book
apache-2.0
Run a parameterized query
PARAMS = {"num_stations": 3}

%%bigquery --project $PROJECT --params $PARAMS
SELECT
  start_station_name
  , AVG(duration) as duration
  , COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC
LIMIT @num_stations
Into a dataframe
%%bigquery df --project $PROJECT
SELECT
  start_station_name
  , AVG(duration) as duration
  , COUNT(duration) as num_trips
FROM `bigquery-public-data`.london_bicycles.cycle_hire
GROUP BY start_station_name
ORDER BY num_trips DESC

df.describe()

df.plot.scatter('duration', 'num_trips');
"This grouped variable is now a GroupBy object. It has not actually computed anything yet except for some intermediate data about the group key df['key1']. The idea is that this object has all of the information needed to then apply some operation to each of the groups." - Python for Data Analysis View a grouping Use list() to show what a grouping looks like
list(df['preTestScore'].groupby(df['regiment']))
python/pandas_apply_operations_to_groups.ipynb
tpin3694/tpin3694.github.io
mit
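A minimal self-contained illustration of that laziness (with hypothetical data, since the notebook's df is built in an earlier cell):

```python
import pandas as pd

df_demo = pd.DataFrame({'key1': ['a', 'a', 'b'], 'data': [1, 2, 3]})

# Creating the GroupBy object computes nothing yet; it only records the key.
grouped = df_demo['data'].groupby(df_demo['key1'])
print(type(grouped).__name__)  # SeriesGroupBy

# The actual computation happens only when an aggregation is applied:
print(grouped.mean().to_dict())  # {'a': 1.5, 'b': 3.0}
```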
Descriptive statistics by group
df['preTestScore'].groupby(df['regiment']).describe()
Mean of each regiment's preTestScore
# groupby_regiment is assumed to have been defined in an earlier cell, e.g.:
# groupby_regiment = df['preTestScore'].groupby(df['regiment'])
groupby_regiment.mean()
Mean preTestScores grouped by regiment and company
df['preTestScore'].groupby([df['regiment'], df['company']]).mean()
Mean preTestScores grouped by regiment and company without hierarchical indexing
df['preTestScore'].groupby([df['regiment'], df['company']]).mean().unstack()
Group the entire dataframe by regiment and company
df.groupby(['regiment', 'company']).mean()
Number of observations in each regiment and company
df.groupby(['regiment', 'company']).size()
Iterate an operation over groups
# Group the dataframe by regiment, and for each regiment,
for name, group in df.groupby('regiment'):
    # print the name of the regiment
    print(name)
    # print the data of that regiment
    print(group)
Group by columns Specifically in this case: group by the data types of the columns (i.e. axis=1) and then use list() to view what that grouping looks like
list(df.groupby(df.dtypes, axis=1))
In the dataframe "df", group by "regiment", take the mean values of the other variables for those groups, then display them with the prefix "mean_"
df.groupby('regiment').mean().add_prefix('mean_')
Create a function to get the stats of a group
def get_stats(group):
    return {'min': group.min(),
            'max': group.max(),
            'count': group.count(),
            'mean': group.mean()}
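For example, applied to a small hypothetical Series of scores (standing in for the notebook's preTestScore column):

```python
import pandas as pd

def get_stats(group):
    return {'min': group.min(), 'max': group.max(),
            'count': group.count(), 'mean': group.mean()}

scores = pd.Series([4, 24, 31, 2, 3])  # hypothetical scores
stats = get_stats(scores)
print(stats['min'], stats['max'], stats['count'])  # 2 31 5
```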
Create bins and bin up postTestScore by those bins
bins = [0, 25, 50, 75, 100]
group_names = ['Low', 'Okay', 'Good', 'Great']
df['categories'] = pd.cut(df['postTestScore'], bins, labels=group_names)
Apply the get_stats() function to each postTestScore bin
df['postTestScore'].groupby(df['categories']).apply(get_stats).unstack()
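Putting the binning and aggregation cells together on hypothetical data (the notebook's df comes from an earlier cell), the bin-then-aggregate pattern looks like this:

```python
import pandas as pd

scores = pd.Series([4, 24, 31, 2, 3, 25, 94, 57, 62, 70])  # hypothetical postTestScores

bins = [0, 25, 50, 75, 100]
group_names = ['Low', 'Okay', 'Good', 'Great']
categories = pd.cut(scores, bins, labels=group_names)

def get_stats(group):
    return {'min': group.min(), 'max': group.max(),
            'count': group.count(), 'mean': group.mean()}

# One row of statistics per bin; observed=False keeps empty bins as rows too.
result = scores.groupby(categories, observed=False).apply(get_stats).unstack()
print(result['count'].to_dict())
```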
In a similar way to what we did before, let's count all occurrences of each city's name across all the sessions.
%matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
import datetime

dateparse = lambda x: datetime.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv', index_col=0,
                          parse_dates=['data'], date_parser=dateparse)

# Remove false positives for 'guarda' and 'porto'
def conta_palavras(texto, palavra):
    for falso in ('aeroporto', 'lopes porto', 'guardar', 'guardas', 'guardado',
                  'aguarda', 'vanguarda', 'salvaguarda', 'guarda nacional',
                  'guarda civil'):
        texto = texto.replace(falso, '')
    return texto.count(palavra.lower())

def conta_ocorrencias(palavra):
    return numpy.sum(sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavra)))

print('Counting...')
contagens = []
for cidade in cities:
    contagem = conta_ocorrencias(cidade)
    print(cidade + ' ' + str(contagem))
    contagens.append(contagem)
notebooks/Deputado-Histogramado-4.ipynb
fsilva/deputado-histogramado
gpl-3.0
Now we draw the map, and then draw circles for each city, with color and size varying according to the number of mentions. To draw the Portuguese districts we need a dataset with their shapefiles: get it at http://www.gadm.org/country. Alternatively, the script also runs without the 'shp_info = ...' line.
from mpl_toolkits.basemap import Basemap

pylab.figure(figsize=(20, 10))
# map = Basemap(projection='merc', lat_0=40, lon_0=0, resolution='l',
#               llcrnrlon=-10.5, llcrnrlat=36, urcrnrlon=-5.5, urcrnrlat=43)  # mainland PT
map = Basemap(projection='merc', lat_0=40, lon_0=0, resolution='l',
              llcrnrlon=-32, llcrnrlat=31, urcrnrlon=-5.5, urcrnrlat=43)

# shp_info = map.readshapefile('PRT_adm0', '', drawbounds=True)  # country
shp_info = map.readshapefile('PRT_adm1', '', drawbounds=True)  # districts
# shp_info = map.readshapefile('PRT_adm2', '', drawbounds=True)  # municipalities
# shp_info = map.readshapefile('PRT_adm3', '', drawbounds=True)  # parishes

map.drawcoastlines(linewidth=0.25)
map.fillcontinents(color='#00ff00', lake_color='aqua')
map.drawmapboundary(fill_color='aqua')

contagens = numpy.array(contagens)
size = 10 + 1000 * contagens / max(contagens)
color = contagens / max(contagens)
x, y = map(long, lat)
map.scatter(x, y, s=size, c=-color, zorder=10, cmap=pylab.autumn())
pylab.show()
In the above example, the compiler did nothing because the default compiler (when MainEngine is called without a specific engine_list parameter) translates the individual gates to the gate set supported by the backend. In our case, the backend is a CommandPrinter which supports any type of gate. We can check what happens when the backend is a Simulator by inserting a CommandPrinter as a last compiler engine before the backend so that every command is printed before it gets sent to the Simulator:
from projectq.backends import Simulator
from projectq.setups.default import get_engine_list

# Use the default compiler engines with a CommandPrinter at the end:
engines2 = get_engine_list() + [CommandPrinter()]

eng2 = projectq.MainEngine(backend=Simulator(), engine_list=engines2)
my_quantum_program(eng2)
examples/compiler_tutorial.ipynb
ProjectQ-Framework/ProjectQ
apache-2.0
As one can see, in this case the compiler had to do a little work because the Simulator does not support a QFT gate. Therefore, it automatically replaces the QFT gate by a sequence of lower-level gates.

Using a provided setup and specifying a particular gate set

ProjectQ's compiler is fully modular, so one can easily build a special-purpose compiler. All one has to do is compose a list of compiler engines through which the individual operations will pass in serial order, and give this list to the MainEngine as the engine_list parameter. For common compiler needs we try to provide predefined "setups" which contain a function get_engine_list that returns a suitable list of compiler engines for the MainEngine. All of our current setups can be found in projectq.setups. For example, there is a setup called restrictedgateset which allows compiling to common restricted gate sets. This is useful, for example, to obtain resource estimates for running a given program on actual quantum hardware that does not support every quantum gate. Let's look at an example:
import projectq
from projectq.backends import CommandPrinter
from projectq.setups import restrictedgateset
from projectq.ops import All, CNOT, H, Measure, QFT, Rx, Ry, Rz, Toffoli

engine_list3 = restrictedgateset.get_engine_list(one_qubit_gates="any",
                                                 two_qubit_gates=(CNOT,),
                                                 other_gates=(Toffoli,))
eng3 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),
                           engine_list=engine_list3)

def my_second_program(eng):
    qubit = eng.allocate_qubit()
    qureg = eng.allocate_qureg(3)
    H | qubit
    Rx(0.3) | qubit
    Toffoli | (qureg[:-1], qureg[2])
    QFT | qureg
    All(Measure) | qureg
    Measure | qubit
    eng.flush()

my_second_program(eng3)
Please have a look at the documentation of the restrictedgateset setup for details. The above compiler compiles the circuit to gates consisting of any single-qubit gate, the CNOT gate, and the Toffoli gate. A gate specification can either be a gate class, e.g., Rz, or a specific instance, Rz(math.pi). A smaller but still universal gate set would be, for example, CNOT together with Rz and Ry:
engine_list4 = restrictedgateset.get_engine_list(one_qubit_gates=(Rz, Ry),
                                                 two_qubit_gates=(CNOT,),
                                                 other_gates=())
eng4 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),
                           engine_list=engine_list4)
my_second_program(eng4)
As mentioned in the documention of this setup, one cannot (yet) choose an arbitrary gate set but there is a limited choice. If it doesn't work for a specified gate set, the compiler will either raises a NoGateDecompositionError or a RuntimeError: maximum recursion depth exceeded... which means that for this particular choice of gate set, one would be required to write more decomposition rules to make it work. Also for some choice of gate set there might be compiler engines producing more optimal code. Error messages By default the MainEngine shortens error messages as most often this is enough information to find the error. To see the full error message one can to set verbose=True, i.e.: MainEngine(verbose=True) DIY: Build a compiler engine list for a specific gate set In this short example, we want to look at how to build an own compiler engine_list for compiling to a restricted gate set. Please have a look at the predefined setups for guidance. One of the important compiler engines to change the gate set is the AutoReplacer. It queries the following engines to check if a particular gate is supported and if not, it will use decomposition rules to change this gate to supported ones. Most engines just forward this query to the next engine until the backend is reached. The engine after an AutoReplacer is usually a TagRemover which removes previous tags in commands such as, e.g., ComputeTag which allows a following LocalOptimizer to perform more optimizations (otherwise it would only optimize within a "compute" section and not over the boundaries). To specify different intermediate gate sets, one can insert an InstructionFilter into the engine_list after the AutoReplacer in order to return True or False for the queries of the AutoReplacer asking if a specific gate is supported. Here is a minimal example of a compiler which compiles to CNOT and single qubit gates but doesn't perform optimizations (which could be achieved using the LocalOptimizer). 
For more optimal versions, have a look at the restrictedgateset setup:
import projectq
from projectq.backends import CommandPrinter
from projectq.cengines import AutoReplacer, DecompositionRuleSet, InstructionFilter
from projectq.ops import All, ClassicalInstructionGate, Measure, QFT, Toffoli, X
import projectq.setups.decompositions

# Write a function which, given a Command object, returns whether the command is supported:
def is_supported(eng, cmd):
    if isinstance(cmd.gate, ClassicalInstructionGate):
        # This is required to allow Measure, Allocate, Deallocate, Flush
        return True
    elif isinstance(cmd.gate, X.__class__) and len(cmd.control_qubits) == 1:
        # Allows a CNOT gate, which is an X gate with one control qubit
        return True
    elif (len(cmd.control_qubits) == 0 and
          len(cmd.qubits) == 1 and
          len(cmd.qubits[0]) == 1):
        # Gate with no control qubits, applied to 1 qureg consisting of 1 qubit
        return True
    else:
        return False

rule_set = DecompositionRuleSet(modules=[projectq.setups.decompositions])
engine_list5 = [AutoReplacer(rule_set), InstructionFilter(is_supported)]
eng5 = projectq.MainEngine(backend=CommandPrinter(accept_input=False),
                           engine_list=engine_list5)

def my_third_program(eng):
    qubit = eng.allocate_qubit()
    qureg = eng.allocate_qureg(3)
    Toffoli | (qureg[:2], qureg[2])
    QFT | qureg
    All(Measure) | qureg
    Measure | qubit
    eng.flush()

my_third_program(eng5)
examples/compiler_tutorial.ipynb
ProjectQ-Framework/ProjectQ
apache-2.0
We fail here because the description column is a string. Let's try again without the description.
# features
feature_names_integers = ['Barcode', 'UnitRRP']
# Extract the features from pandas (without description)
training_data_integers = df_training[feature_names_integers].values
training_data_integers[:3]
# train model again
model_dtc.fit(training_data_integers, target)
# Extract test data and test the model
test_data_integers = df_test[feature_names_integers].values
test_target = df_test["CategoryID"].values
expected = test_target
predicted_dtc = model_dtc.predict(test_data_integers)
print(metrics.classification_report(expected, predicted_dtc, target_names=target_categories))
print(metrics.confusion_matrix(expected, predicted_dtc))
metrics.accuracy_score(expected, predicted_dtc, normalize=True, sample_weight=None)
predicted_dtc[:5]
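Rather than dropping a string column, another option is to encode it as integers. A minimal, self-contained sketch of the idea (in practice sklearn's LabelEncoder, or one-hot encoding, is preferable; the example values below are made up for illustration):

```python
def label_encode(values):
    """Map each distinct string to a small integer, in first-seen order."""
    mapping = {}
    encoded = []
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping)
        encoded.append(mapping[v])
    return encoded, mapping

descriptions = ["Milk 1L", "Bread", "Milk 1L", "Butter", "Bread"]
codes, mapping = label_encode(descriptions)
print(codes)    # [0, 1, 0, 2, 1]
print(mapping)  # {'Milk 1L': 0, 'Bread': 1, 'Butter': 2}
```

Note that a plain integer encoding imposes an arbitrary ordering on the categories, which tree models tolerate better than linear ones.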
Session4/code/01 Loading EPOS Category Data for modelling.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
Let's try a different classifier: linear classifiers (SVM, logistic regression, a.o.) with SGD training.
from sklearn.linear_model import SGDClassifier # Create classifier class model_sgd = SGDClassifier() # train model again model_sgd.fit(training_data_integers, target) predicted_sgd = model_sgd.predict(test_data_integers) print(metrics.classification_report(expected, predicted_sgd, target_names=target_categories)) print(metrics.confusion_matrix(expected, predicted_sgd)) metrics.accuracy_score(expected, predicted_sgd, normalize=True, sample_weight=None)
Session4/code/01 Loading EPOS Category Data for modelling.ipynb
catalystcomputing/DSIoT-Python-sessions
apache-2.0
The search for good, evil, and the gender divide We took a look at how Marvel and DC are divided along the lines of gender and good vs. evil.
#Clean up and shape the Marvel dataframe #Set Alignment to Index marvel_men = marvel[['ALIGN','SEX']] marvel_men=marvel_men.set_index('ALIGN') #Create separate Male and Female columns gender = ['Male','Female'] oldmarvel = marvel_men.copy() vnames=[] for x in gender: newname = x vnames.append(newname) marvel_men[newname]=marvel_men['SEX'].str.contains(x)*1 marvel_goodbad=marvel_men[vnames] #summate indices into the common categories marvel_goodbad=marvel_goodbad.groupby(marvel_goodbad.index).sum() #do the same for DC #Set Alignment to Index dc_men = dc[['ALIGN','SEX']] dc_men=dc_men.set_index('ALIGN') #Create separate Male and Female columns gender = ['Male','Female'] olddc = dc_men.copy() vnames=[] for x in gender: newname = x vnames.append(newname) dc_men[newname]=dc_men['SEX'].str.contains(x)*1 dc_goodbad=dc_men[vnames] #summate indices into the common categories dc_goodbad=dc_goodbad.groupby(dc_goodbad.index).sum() #drop Reformed Characters Column, as there are only 3 from the universe dc_goodbad=dc_goodbad.drop('Reformed Criminals') #plot the findings marvel_goodbad.plot(kind='bar', color=['blue','salmon'],title='Marvel Universe') dc_goodbad.plot(kind='bar', color=['blue','salmon'], title='DC Universe')
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
We can see that the Marvel and DC universes are each dominated by men. What may be surprising, however, is that in both franchises, bad men dominate the universe. On the female side, there are more good females than bad females. Proportionally, DC has more equal representation of good vs. bad characters, while Marvel has almost twice as many bad male characters as good male characters. Good, Evil, and Sexual Orientation
#Clean up and shape the Marvel dataframe #Set Alignment to Index marvel_gsm = marvel[['ALIGN','GSM']] marvel_gsm=marvel_gsm.set_index('ALIGN') #Create separate GSM columns gsm = ['Hetero', 'Bisexual','Transvestites', 'Homosexual','Pansexual', 'Transgender','Genderfluid'] oldmarvel2 = marvel_gsm.copy() onames=[] for x in gsm: newname5 = x onames.append(newname5) marvel_gsm[newname5]=marvel_gsm['GSM'].str.contains(x)*1 marvel_orient=marvel_gsm[onames] marvel_orient.head() #summate indices into the common categories marvel_orient=marvel_orient.groupby(marvel_orient.index).sum() #Clean up and shape the DC dataframe #Set Alignment to Index dc_gsm = dc[['ALIGN','GSM']] dc_gsm=dc_gsm.set_index('ALIGN') #Create separate GSM columns gsm = ['Hetero', 'Bisexual','Transvestites', 'Homosexual','Pansexual', 'Transgender','Genderfluid'] olddc2 = dc_gsm.copy() onames2=[] for x in gsm: newname4 = x onames2.append(newname4) dc_gsm[newname4]=dc_gsm['GSM'].str.contains(x)*1 dc_orient=dc_gsm[onames2] #summate indices into the common categories dc_orient=dc_orient.groupby(dc_orient.index).sum() #drop Reformed Characters Column, as there are only 3 from the universe dc_orient=dc_orient.drop('Reformed Criminals') marvel_orient[[1,2,3,4,5,6]].plot(kind='bar',title='Marvel Universe') dc_orient[[1,3]].plot(kind='bar',color=('blue','red'),title='DC Universe')
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
Our findings here are pretty encouraging! Non-heterosexual characters have not been disproportionately vilified in either Marvel or DC. Sexual Orientation in Comics Over the Years We'll take a look at how many characters of each orientation were introduced in both Marvel and DC each year, and any trends that surface.
# We create a copy of the original dataframes to look at gender GenderM = marvel.copy() GenderDC = dc.copy() # Clean up and shape the Marvel dataframe # First, we'll drop those decimals from the years GenderM["Year"] = GenderM["Year"].astype(str) GenderM["Year"] = GenderM["Year"].str.replace(".0","") GenderM["Year"] = GenderM["Year"].str.replace("nan","0") # We also want to streamline the entries, removing "Characters" # We'll assume any NaN values for orientation are "Heterosexual" GenderM["SEX"] = GenderM["SEX"].str.replace(" Characters","") GenderM["ALIGN"] = GenderM["ALIGN"].str.replace(" Characters","") GenderM["GSM"] = GenderM["GSM"].str.replace(" Characters","") GenderM["GSM"] = GenderM["GSM"].fillna("Heterosexual") # Next, we pull out just the relevant columns GenderM = GenderM[["name","ALIGN","SEX","GSM","Year"]] # Here, we clean up the variable names newheadings = ["Name","Alignment","Gender","Orientation","Year"] GenderM.columns = newheadings # We eventually want to view trends over time, so "Year" needs to become the index GenderM = GenderM.set_index("Year") # Clean up and shape the DC dataframe # To make it look a little cleaner, we'll drop those decimals from the years GenderDC["YEAR"] = GenderDC["YEAR"].astype(str) GenderDC["YEAR"] = GenderDC["YEAR"].str.replace(".0","") GenderDC["YEAR"] = GenderDC["YEAR"].str.replace("nan","0") # We also want to streamline the entries, removing "Characters" from the columns we plan to use # We'll assume any NaN values for orientation are "Heterosexual" GenderDC["SEX"] = GenderDC["SEX"].str.replace(" Characters","") GenderDC["ALIGN"] = GenderDC["ALIGN"].str.replace(" Characters","") GenderDC["GSM"] = GenderDC["GSM"].str.replace(" Characters","") GenderDC["GSM"] = GenderDC["GSM"].fillna("Heterosexual") # Next, we pull out just the relevant columns GenderDC = GenderDC[["name","ALIGN","SEX","GSM","YEAR"]] # Let's clean up those column headings newdcheadings = ["Name","Alignment","Gender","Orientation","Year"] 
GenderDC.columns = newdcheadings # We eventually want to view trends over time, so "Year" needs to become the index GenderDC = GenderDC.set_index("Year") # A simple bar chart at this stage shows us the large outlier is skewing the scale; we'll get rid of it later GenderM["Orientation"].value_counts().plot.barh(alpha=0.5) # Now we need to change some of the text data to numbers so it is easier plot. First, Marvel. # Keeping the df GenderM available for other use, we create OrientationM to look specifically at sexual orientations in Marvel OrientationM = GenderM.copy() OrientationM = OrientationM.reset_index() # The goal is to separate each orientation into its own column and assign 1s and 0s. # First, we'll create new columns for each orientation and get rid of the existing one. OrientationM["Heterosexual"] = OrientationM["Orientation"].copy() OrientationM["Homosexual"] = OrientationM["Orientation"].copy() OrientationM["Bisexual"] = OrientationM["Orientation"].copy() OrientationM["Transgender"] = OrientationM["Orientation"].copy() OrientationM["Genderfluid"] = OrientationM["Orientation"].copy() OrientationM["Transvestite"] = OrientationM["Orientation"].copy() OrientationM["Pansexual"] = OrientationM["Orientation"].copy() OrientationM = OrientationM.drop("Orientation", 1) # Now we want to convert the values in each orientation column to a 1 if it matches, a 0 if it does not OrientationM["Heterosexual"] = OrientationM["Heterosexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[1,0,0,0,0,0,0]) OrientationM["Homosexual"] = OrientationM["Homosexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,1,0,0,0,0,0]) OrientationM["Bisexual"] = OrientationM["Bisexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,1,0,0,0,0]) OrientationM["Transgender"] = 
OrientationM["Transgender"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,1,0,0,0]) OrientationM["Genderfluid"] = OrientationM["Genderfluid"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,1,0,0]) OrientationM["Transvestite"] = OrientationM["Transvestite"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,0,1,0]) OrientationM["Pansexual"] = OrientationM["Pansexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,0,0,1]) # In preparation for plotting, we set "Year" as the index and sorted it OrientationM = OrientationM.set_index("Year") OrientationM = OrientationM.sort_index() # Next, we want to group all the values for each year into sums to measure volume of exposure OrientationM = OrientationM.groupby(OrientationM.index).sum() # Now we need to change some of the text data to numbers so it is easier to plot. Now, DC. # Keeping the df GenderM available for future use, we create OrientationDC to look specifically at sexual orientations in DC OrientationDC = GenderDC.copy() # Here again, we want to separate each orientation into its own column and assign 1s and 0s. # First, we'll create new columns for each orientation and get rid of the existing one. # This will look redundant at first, but we're getting there.
OrientationDC["Heterosexual"] = OrientationDC["Orientation"].copy() OrientationDC["Homosexual"] = OrientationDC["Orientation"].copy() OrientationDC["Bisexual"] = OrientationDC["Orientation"].copy() OrientationDC["Transgender"] = OrientationDC["Orientation"].copy() OrientationDC["Genderfluid"] = OrientationDC["Orientation"].copy() OrientationDC["Transvestite"] = OrientationDC["Orientation"].copy() OrientationDC["Pansexual"] = OrientationDC["Orientation"].copy() OrientationDC = OrientationDC.drop("Orientation", 1) # Now we want to convert the values in each orientation column to a 1 if it matches, a 0 if it does not OrientationDC["Heterosexual"] = OrientationDC["Heterosexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[1,0,0,0,0,0,0]) OrientationDC["Homosexual"] = OrientationDC["Homosexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,1,0,0,0,0,0]) OrientationDC["Bisexual"] = OrientationDC["Bisexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,1,0,0,0,0]) OrientationDC["Transgender"] = OrientationDC["Transgender"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,1,0,0,0]) OrientationDC["Genderfluid"] = OrientationDC["Genderfluid"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,1,0,0]) OrientationDC["Transvestite"] = OrientationDC["Transvestite"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,0,1,0]) OrientationDC["Pansexual"] = OrientationDC["Pansexual"].replace(["Heterosexual","Homosexual","Bisexual","Transgender","Genderfluid","Transvestites","Pansexual"],[0,0,0,0,0,0,1]) # "Year" is already our index; but it needs to be sorted OrientationDC = OrientationDC.sort_index() # Next, we 
want to group all the values for each year into sums to measure volume of exposure OrientationDC = OrientationDC.groupby(OrientationDC.index).sum() # In order to better see the granularity, we ignore the overwhelming "Heterosexual" values in the Marvel Universe # Now let's see what it looks like! # For clarity, we've also added a title and made the lines bolder so they are easier to see OrientationM = OrientationM.drop("Heterosexual", 1) OrientationM.plot(figsize=(12,6), title="Marvel Characters by Orientation", linewidth=2.0) # In order to better see the granularity, we ignore the overwhelming "Heterosexual" values in DC # Now let's see what it looks like! # For clarity, we've also added a title and made the lines bolder so they are easier to see OrientationDC = OrientationDC.drop("Heterosexual", 1) OrientationDC.plot(figsize=(12,6), title="DC Characters by Orientation", linewidth=2.0)
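The manual copy-and-replace pattern above (one column per orientation, then replace strings with 1s and 0s) can be condensed considerably. A sketch of the same one-hot-plus-groupby idea using pandas get_dummies, on toy stand-in data rather than the actual comics dataset:

```python
import pandas as pd

# Toy stand-in for the orientation data: one row per character
df = pd.DataFrame({
    "Year": [1963, 1963, 1974, 1974, 1974],
    "Orientation": ["Heterosexual", "Homosexual",
                    "Heterosexual", "Heterosexual", "Bisexual"],
})

# get_dummies builds one 0/1 indicator column per orientation in a single call;
# grouping by year then counts introductions per orientation per year
counts = pd.get_dummies(df["Orientation"]).groupby(df["Year"]).sum()
print(counts)
```

This replaces the seven hand-written `.replace(...)` calls per universe with two lines, and automatically picks up any orientation values present in the data.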
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
A brief comparison of the two plots reveals that Marvel has introduced more variation in sexual identity over the years than DC. To get a better idea, though, we'll want to look at them side by side. Comparing Representation in Marvel vs. DC
# Now we can compare the two franchises in combined plots ax = OrientationM["Homosexual"].plot() OrientationDC["Homosexual"].plot(ax=ax) # Pretty powerful data, but we can make it look better # By adding a title, axis labels, legend; changing the colors, and enlarging, we enhance the "Pow!" factor ax = OrientationM["Homosexual"].plot(label="Marvel", color="red", linewidth=2.0) OrientationDC["Homosexual"].plot(ax=ax, label="DC", figsize=(12,6), color="blue", linewidth=2.0) ax.set_title("Homosexual Comic Characters", fontsize=16, fontweight="bold") ax.set_xlabel("Year", fontsize=12) ax.set_ylabel("Number of Characters Introduced", fontsize=12) legend = ax.legend(loc='upper center', shadow=True)
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
This shows us that Marvel was introducing homosexual characters as early as the 1940s, but DC has been more representative in recent years.
# We can go on to do this for any orientation ax = OrientationM["Bisexual"].plot(label="Marvel", color="green", linewidth=2.0) OrientationDC["Bisexual"].plot(ax=ax, label="DC", figsize=(12,6), color="purple", linewidth=2.0) ax.set_title("Bisexual Comic Characters", fontsize=16, fontweight="bold") ax.set_xlabel("Year", fontsize=12) ax.set_ylabel("Number of Characters Introduced", fontsize=12) legend = ax.legend(loc='upper center', shadow=True)
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
When it comes to bisexual characters, it is harder to find a trend. The two franchises seem to have ebbed and flowed in representation for this group.
# We could also use a stacked bar chart to look at the array of introductions per year OrientationM.plot.bar(stacked=True, figsize=(16,8), title="Marvel Yearly Character Introductions", fontsize=12) # Or the same thing for DC... OrientationDC.plot.bar(stacked=True, figsize=(16,8), title="DC Yearly Character Introductions", fontsize=12)
MBA_S16/Berry-Domenico-Comics.ipynb
NYUDataBootcamp/Projects
mit
GlobalMaxPooling2D [pooling.GlobalMaxPooling2D.0] input 6x6x3, data_format='channels_last'
data_in_shape = (6, 6, 3) L = GlobalMaxPooling2D(data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(270) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling2D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/GlobalMaxPooling2D.ipynb
qinwf-nuan/keras-js
mit
[pooling.GlobalMaxPooling2D.1] input 3x6x6, data_format='channels_first'
data_in_shape = (3, 6, 6) L = GlobalMaxPooling2D(data_format='channels_first') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(271) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling2D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/GlobalMaxPooling2D.ipynb
qinwf-nuan/keras-js
mit
[pooling.GlobalMaxPooling2D.2] input 5x3x2, data_format='channels_last'
data_in_shape = (5, 3, 2) L = GlobalMaxPooling2D(data_format='channels_last') layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(272) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling2D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
notebooks/layers/pooling/GlobalMaxPooling2D.ipynb
qinwf-nuan/keras-js
mit
23. System A: when its load increases by 250 MW, frequency drops by 0.1 Hz. System B: when its load increases by 400 MW, frequency drops by 0.1 Hz. System A operates at 49.85 Hz and system B at 50 Hz. If the two systems are connected by a tie line, find the power on the tie line.
Ka = 2500   # MW/Hz (250 MW per 0.1 Hz)
Kb = 4000   # MW/Hz (400 MW per 0.1 Hz)
fa = 49.85
fb = 50
df2 = fb - fa               # frequency gap between the two systems
dPl = df2 * Ka              # equivalent load excess in system A
dfab = -1 * dPl / (Ka + Kb) # frequency drop of system B after interconnection
Pab = dfab * Kb
trans_power(Ka, Kb, dPl, 0)
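The numbers can be checked directly with a self-contained recomputation of the cell above (trans_power is a helper defined elsewhere in this notebook, so it is omitted here):

```python
Ka, Kb = 2500, 4000      # frequency-regulation coefficients, MW/Hz
fa, fb = 49.85, 50.0     # operating frequencies before interconnection, Hz

dPl = (fb - fa) * Ka     # 0.15 Hz * 2500 MW/Hz = 375 MW load excess in A
dfab = -dPl / (Ka + Kb)  # common frequency change after interconnection
Pab = dfab * Kb          # extra power picked up by B, sent over the tie line

print(round(dPl, 2))     # 375.0 MW
print(round(dfab, 4))    # -0.0577 Hz
print(round(Pab, 1))     # -230.8, i.e. about 231 MW flows from B to A
```

The combined system settles at 50 − 0.0577 ≈ 49.94 Hz, between the two original operating points.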
power_system/调频计算.ipynb
chengts95/homeworkOfPowerSystem
gpl-2.0
24. Systems A and B are connected by a tie line. Given: $K_{GA}=270MW/Hz$, $K_{LA}=21MW/Hz$, $K_{GB}=480MW/Hz$, $K_{LB}=21MW/Hz$, $P_{AB}=300MW$; the load in system B increases by 150 MW. 1) With all generators in both systems participating only in primary frequency regulation, find the system frequency change, the tie-line power change, and the generator and load power changes in systems A and B; 2) In addition to primary regulation, system A has a regulating plant performing secondary frequency regulation, and the maximum allowed tie-line transfer is 400 MW; find the system frequency change.
Ka = 291    # K_GA + K_LA = 270 + 21 MW/Hz
Kb = 501    # K_GB + K_LB = 480 + 21 MW/Hz
Plb = 150   # load increase in system B, MW
df = -1 * Plb / (Ka + Kb)   # frequency change with primary regulation only
trans_power(Ka, Kb, 0, Plb)
df
100 / Ka    # part 2: the tie line can carry at most 400 - 300 = 100 MW more
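A quick self-contained check of part 1 (trans_power is defined elsewhere in this notebook; the tie-line change follows from A's regulation response flowing to B):

```python
Kga, Kla = 270, 21   # system A generator and load coefficients, MW/Hz
Kgb, Klb = 480, 21   # system B generator and load coefficients, MW/Hz
Plb = 150            # load increase in system B, MW

Ka = Kga + Kla       # 291 MW/Hz
Kb = Kgb + Klb       # 501 MW/Hz

df = -Plb / (Ka + Kb)   # common frequency change under primary regulation
dPab = Ka * (-df)       # extra power A sends to B over the tie line

print(round(df, 4))     # -0.1894 Hz
print(round(dPab, 1))   # 55.1 MW increase in A->B tie-line flow
```

The tie-line flow thus rises from 300 MW to about 355 MW, still below the 400 MW limit considered in part 2.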
power_system/调频计算.ipynb
chengts95/homeworkOfPowerSystem
gpl-2.0
Create Cloud Storage bucket for storing Vertex Pipeline artifacts
BUCKET_NAME = f"gs://{PROJECT_ID}-bucket" print(BUCKET_NAME) !gsutil ls -al $BUCKET_NAME USER = "dougkelly" # <---CHANGE THIS PIPELINE_ROOT = "{}/pipeline_root/{}".format(BUCKET_NAME, USER) PIPELINE_ROOT
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create BigQuery dataset
!bq --location=US mk -d \ $PROJECT_ID:$BQ_DATASET_NAME
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Exploratory Data Analysis in BigQuery
%%bigquery data SELECT CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS string) AS trip_dayofweek, FORMAT_DATE('%A',cast(trip_start_timestamp as date)) AS trip_dayname, COUNT(*) as trip_count, FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE EXTRACT(YEAR FROM trip_start_timestamp) = 2015 GROUP BY trip_dayofweek, trip_dayname ORDER BY trip_dayofweek ; data.plot(kind='bar', x='trip_dayname', y='trip_count');
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create BigQuery dataset for ML classification task
SAMPLE_SIZE = 100000 YEAR = 2020 sql_script = ''' CREATE OR REPLACE TABLE `@PROJECT_ID.@DATASET.@TABLE` AS ( WITH taxitrips AS ( SELECT trip_start_timestamp, trip_seconds, trip_miles, payment_type, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, tips, fare FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips` WHERE 1=1 AND pickup_longitude IS NOT NULL AND pickup_latitude IS NOT NULL AND dropoff_longitude IS NOT NULL AND dropoff_latitude IS NOT NULL AND trip_miles > 0 AND trip_seconds > 0 AND fare > 0 AND EXTRACT(YEAR FROM trip_start_timestamp) = @YEAR ) SELECT trip_start_timestamp, EXTRACT(MONTH from trip_start_timestamp) as trip_month, EXTRACT(DAY from trip_start_timestamp) as trip_day, EXTRACT(DAYOFWEEK from trip_start_timestamp) as trip_day_of_week, EXTRACT(HOUR from trip_start_timestamp) as trip_hour, trip_seconds, trip_miles, payment_type, ST_AsText( ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1) ) AS pickup_grid, ST_AsText( ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1) ) AS dropoff_grid, ST_Distance( ST_GeogPoint(pickup_longitude, pickup_latitude), ST_GeogPoint(dropoff_longitude, dropoff_latitude) ) AS euclidean, CONCAT( ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1)), ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1)) ) AS loc_cross, IF((tips/fare >= 0.2), 1, 0) AS tip_bin, IF(ABS(MOD(FARM_FINGERPRINT(STRING(trip_start_timestamp)), 10)) < 9, 'UNASSIGNED', 'TEST') AS data_split FROM taxitrips LIMIT @LIMIT ) ''' sql_script = sql_script.replace( '@PROJECT_ID', PROJECT_ID).replace( '@DATASET', BQ_DATASET_NAME).replace( '@TABLE', BQ_TABLE_NAME).replace( '@YEAR', str(YEAR)).replace( '@LIMIT', str(SAMPLE_SIZE)) # print(sql_script) from google.cloud import bigquery bq_client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION) job = bq_client.query(sql_script) _ = job.result()
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Verify data split proportions
%%bigquery SELECT data_split, COUNT(*) FROM dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw GROUP BY data_split
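The roughly 90/10 split produced by the FARM_FINGERPRINT ... MOD 10 expression in the table-creation SQL can be sketched locally with Python's hashlib. This uses a different hash function, so the exact row assignments differ from BigQuery's, but the mechanism and proportions behave the same way:

```python
import hashlib

def split_for(key: str) -> str:
    # Stable hash of the key; buckets 0-8 go to UNASSIGNED (~90%), bucket 9 to TEST
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 10
    return "UNASSIGNED" if bucket < 9 else "TEST"

# Hypothetical timestamp keys standing in for trip_start_timestamp values
keys = [f"2020-01-01 00:{i:02d}:00" for i in range(60)]
counts = {"UNASSIGNED": 0, "TEST": 0}
for k in keys:
    counts[split_for(k)] += 1
print(counts)  # roughly a 9:1 ratio
```

Because the assignment depends only on the hashed key, the split is deterministic: rerunning the table-creation query reassigns each row to the same split every time.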
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Import libraries
import json import logging from typing import NamedTuple import kfp # from google.cloud import aiplatform from google_cloud_pipeline_components import aiplatform as gcc_aip from kfp.v2 import dsl from kfp.v2.dsl import (ClassificationMetrics, Input, Metrics, Model, Output, component) from kfp.v2.google.client import AIPlatformClient
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create and run an AutoML Tabular classification pipeline using Kubeflow Pipelines SDK Create a custom KFP evaluation component
@component( base_image="gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest", output_component_file="components/tables_eval_component.yaml", # Optional: you can use this to load the component later packages_to_install=["google-cloud-aiplatform==1.0.0"], ) def classif_model_eval_metrics( project: str, location: str, api_endpoint: str, thresholds_dict_str: str, model: Input[Model], metrics: Output[Metrics], metricsc: Output[ClassificationMetrics], ) -> NamedTuple("Outputs", [("dep_decision", str)]): # Return parameter. """This function renders evaluation metrics for an AutoML Tabular classification model. It retrieves the classification model evaluation generated by the AutoML Tabular training process, does some parsing, and uses that info to render the ROC curve and confusion matrix for the model. It also uses given metrics threshold information and compares that to the evaluation results to determine whether the model is sufficiently accurate to deploy. """ import json import logging from google.cloud import aiplatform # Fetch model eval info def get_eval_info(client, model_name): from google.protobuf.json_format import MessageToDict response = client.list_model_evaluations(parent=model_name) metrics_list = [] metrics_string_list = [] for evaluation in response: print("model_evaluation") print(" name:", evaluation.name) print(" metrics_schema_uri:", evaluation.metrics_schema_uri) metrics = MessageToDict(evaluation._pb.metrics) for metric in metrics.keys(): logging.info("metric: %s, value: %s", metric, metrics[metric]) metrics_str = json.dumps(metrics) metrics_list.append(metrics) metrics_string_list.append(metrics_str) return ( evaluation.name, metrics_list, metrics_string_list, ) # Use the given metrics threshold(s) to determine whether the model is # accurate enough to deploy. 
def classification_thresholds_check(metrics_dict, thresholds_dict): for k, v in thresholds_dict.items(): logging.info("k {}, v {}".format(k, v)) if k in ["auRoc", "auPrc"]: # higher is better if metrics_dict[k] < v: # if under threshold, don't deploy logging.info( "{} < {}; returning False".format(metrics_dict[k], v) ) return False logging.info("threshold checks passed.") return True def log_metrics(metrics_list, metricsc): test_confusion_matrix = metrics_list[0]["confusionMatrix"] logging.info("rows: %s", test_confusion_matrix["rows"]) # log the ROC curve fpr = [] tpr = [] thresholds = [] for item in metrics_list[0]["confidenceMetrics"]: fpr.append(item.get("falsePositiveRate", 0.0)) tpr.append(item.get("recall", 0.0)) thresholds.append(item.get("confidenceThreshold", 0.0)) print(f"fpr: {fpr}") print(f"tpr: {tpr}") print(f"thresholds: {thresholds}") metricsc.log_roc_curve(fpr, tpr, thresholds) # log the confusion matrix annotations = [] for item in test_confusion_matrix["annotationSpecs"]: annotations.append(item["displayName"]) logging.info("confusion matrix annotations: %s", annotations) metricsc.log_confusion_matrix( annotations, test_confusion_matrix["rows"], ) # log textual metrics info as well for metric in metrics_list[0].keys(): if metric != "confidenceMetrics": val_string = json.dumps(metrics_list[0][metric]) metrics.log_metric(metric, val_string) # metrics.metadata["model_type"] = "AutoML Tabular classification" logging.getLogger().setLevel(logging.INFO) aiplatform.init(project=project) # extract the model resource name from the input Model Artifact model_resource_path = model.uri.replace("aiplatform://v1/", "") logging.info("model path: %s", model_resource_path) client_options = {"api_endpoint": api_endpoint} # Initialize client that will be used to create and send requests. 
client = aiplatform.gapic.ModelServiceClient(client_options=client_options) eval_name, metrics_list, metrics_str_list = get_eval_info( client, model_resource_path ) logging.info("got evaluation name: %s", eval_name) logging.info("got metrics list: %s", metrics_list) log_metrics(metrics_list, metricsc) thresholds_dict = json.loads(thresholds_dict_str) deploy = classification_thresholds_check(metrics_list[0], thresholds_dict) if deploy: dep_decision = "true" else: dep_decision = "false" logging.info("deployment decision is %s", dep_decision) return (dep_decision,) import time DISPLAY_NAME = "automl-tab-chicago-taxi-tips-{}".format(str(int(time.time()))) print(DISPLAY_NAME)
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Define the pipeline
@kfp.dsl.pipeline(name="automl-tab-chicago-taxi-tips-train", pipeline_root=PIPELINE_ROOT) def pipeline( bq_source: str = "bq://dougkelly-vertex-demos:chicago_taxi.chicago_taxi_tips_raw", display_name: str = DISPLAY_NAME, project: str = PROJECT_ID, gcp_region: str = REGION, api_endpoint: str = "us-central1-aiplatform.googleapis.com", thresholds_dict_str: str = '{"auRoc": 0.90}', ): dataset_create_op = gcc_aip.TabularDatasetCreateOp( project=project, display_name=display_name, bq_source=bq_source ) training_op = gcc_aip.AutoMLTabularTrainingJobRunOp( project=project, display_name=display_name, optimization_prediction_type="classification", optimization_objective="maximize-au-roc", # binary classification budget_milli_node_hours=750, training_fraction_split=0.9, validation_fraction_split=0.1, column_transformations=[ {"numeric": {"column_name": "trip_seconds"}}, {"numeric": {"column_name": "trip_miles"}}, {"categorical": {"column_name": "trip_month"}}, {"categorical": {"column_name": "trip_day"}}, {"categorical": {"column_name": "trip_day_of_week"}}, {"categorical": {"column_name": "trip_hour"}}, {"categorical": {"column_name": "payment_type"}}, {"numeric": {"column_name": "euclidean"}}, {"categorical": {"column_name": "tip_bin"}}, ], dataset=dataset_create_op.outputs["dataset"], target_column="tip_bin", ) model_eval_task = classif_model_eval_metrics( project, gcp_region, api_endpoint, thresholds_dict_str, training_op.outputs["model"], ) with dsl.Condition( model_eval_task.outputs["dep_decision"] == "true", name="deploy_decision", ): deploy_op = gcc_aip.ModelDeployOp( # noqa: F841 model=training_op.outputs["model"], project=project, machine_type="n1-standard-4", )
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Compile and run the pipeline
from kfp.v2 import compiler # noqa: F811 compiler.Compiler().compile( pipeline_func=pipeline, package_path="automl-tab-chicago-taxi-tips-train_pipeline.json" )
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run the pipeline
from kfp.v2.google.client import AIPlatformClient # noqa: F811 api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION) response = api_client.create_run_from_job_spec( "automl-tab-chicago-taxi-tips-train_pipeline.json", pipeline_root=PIPELINE_ROOT, parameter_values={"project": PROJECT_ID, "display_name": DISPLAY_NAME}, )
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Query your deployed model to retrieve online predictions and explanations
from google.cloud import aiplatform import matplotlib.pyplot as plt import pandas as pd endpoint = aiplatform.Endpoint( endpoint_name="2677161280053182464", project=PROJECT_ID, location=REGION) %%bigquery test_df SELECT CAST(trip_month AS STRING) AS trip_month, CAST(trip_day AS STRING) AS trip_day, CAST(trip_day_of_week AS STRING) AS trip_day_of_week, CAST(trip_hour AS STRING) AS trip_hour, CAST(trip_seconds AS STRING) AS trip_seconds, trip_miles, payment_type, euclidean FROM dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw WHERE data_split = 'TEST' AND tip_bin = 1 test_instance = test_df.iloc[0] test_instance_dict = test_instance.to_dict() test_instance_dict explained_prediction = endpoint.explain([test_instance_dict]) pd.DataFrame.from_dict(explained_prediction.predictions[0]).plot(kind='bar'); pd.DataFrame.from_dict(explained_prediction.explanations[0].attributions[0].feature_attributions, orient='index').plot(kind='barh');
self-paced-labs/vertex-ai/vertex-pipelines/kfp/lab_exercise.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are: cv2.inRange() for color selection cv2.fillPoly() for regions selection cv2.line() to draw lines on an image given endpoints cv2.addWeighted() to coadd / overlay two images cv2.cvtColor() to grayscale or change color cv2.imwrite() to output images to file cv2.bitwise_and() to apply a mask to an image Check out the OpenCV documentation to learn about these and discover even more awesome functionality! Below are some helper functions to help get you started. They should look familiar from the lesson!
import math import cv2 import numpy as np def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. 
This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ for line in lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, λ)
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Test on Images Now you should build your pipeline to work on the images in the directory "test_images" You should make sure your pipeline works well on these images before you try the videos.
import os os.listdir("test_images/")
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Run your solution on all test_images and save copies of the results into the test_images directory.
# TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images directory.
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: solidWhiteRight.mp4 solidYellowLeft.mp4
# Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, # you should return the final output (image with lines drawn on the lanes) return result
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Let's try the one with the solid white lane on the right first ...
white_output = 'white.mp4' clip1 = VideoFileClip("solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False)
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output))
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
At this point, if you were successful, you probably have the Hough line segments drawn onto the road. But what about identifying the full extent of the lane and marking it clearly, as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline. Now for the one with the solid yellow lane on the left. This one's more tricky!
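The slope-based averaging described above can be sketched with a pure-NumPy helper (the function name, the slope-sign heuristic, and fitting x as a function of y are illustrative choices, not part of the original notebook):

```python
import numpy as np

def average_lane_lines(lines, y_bottom, y_top):
    """Separate Hough segments by slope sign, fit one line per side,
    and extrapolate each between y_bottom and y_top.
    `lines` has the cv2.HoughLinesP shape (N, 1, 4): [x1, y1, x2, y2]."""
    left_pts, right_pts = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments (undefined slope)
            slope = (y2 - y1) / (x2 - x1)
            # In image coordinates y grows downward, so the left lane has
            # negative slope and the right lane has positive slope.
            (left_pts if slope < 0 else right_pts).extend([(x1, y1), (x2, y2)])

    def fit(pts):
        xs, ys = zip(*pts)
        # Fit x as a function of y so extrapolating to chosen y values is easy
        m, c = np.polyfit(ys, xs, 1)
        return (int(round(m * y_bottom + c)), int(y_bottom),
                int(round(m * y_top + c)), int(y_top))

    return fit(left_pts), fit(right_pts)
```

A modified `draw_lines` could then draw just these two extrapolated segments instead of every raw Hough segment.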
yellow_output = 'yellow.mp4' clip2 = VideoFileClip('solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output))
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Reflections Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail? Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below! Submission If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review. Optional Challenge Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'extra.mp4' clip2 = VideoFileClip('challenge.mp4') challenge_clip = clip2.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output))
find_lane_lines/CarND_LaneLines_P1/P1.ipynb
gon1213/SDC
gpl-3.0
Fourier Transform Examples Fourier transforms are most often used to decompose a signal as a function of time into the frequency components that comprise it, i.e. transforming between the time and frequency domains. It's also possible to post-process a filtered signal using Fourier transforms. FFTs decompose a single signal into the form: $$ Y = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos (n x + \phi_n) $$ Amplitude Example Here, we solve for the individual $a_n$, so we know how strong each of the individual signal components is.
# Create signal frq1 = 50 # Frequency 1(hz) amp1 = 5 # Amplitude 1 frq2 = 250 # Frequency 2(hz) amp2 = 3 # Amplitude 2 sr = 2000 # Sample rate dur = 0.4 # Duration (s) (increasing/decreasing this changes S/N) # Create signal and timesteps X = np.linspace(0, dur-1/sr, int(dur*sr)) # Time Y_s = amp1*np.cos(X*2*np.pi*frq1 - np.pi/4)+amp2*np.cos(X*2*np.pi*frq2 - np.pi/2) # Signal # Add noise Y_sn = Y_s + 40*np.random.rand(len(X)) # Signal + noise plt.plot(X[1:100], Y_sn[1:100]) plt.title('Plot of Signal with Noise') plt.xlabel('Time (s)') plt.ylabel('Amplitude of Signal') plt.show() # Plot Single Sided FT Spectrum Y_sn_fft = np.fft.fft(Y_sn) # Update fft output FT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum SSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component SSFT_amp = 2/len(X) * SSFT_amp # Normalize # Determine frequencies freqs = sr/len(X)*np.arange(0,len(SSFT_amp)) # Plot plt.plot(freqs[1:], SSFT_amp[1:]) plt.title('Single-Sided Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Freq Amplitude') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
The amplitudes don't seem quite right - a longer duration increases the signal-to-noise ratio and gives a better result:
# Create signal sr = 2000 # Sample rate dur = 10 # Increased duration (s) (increasing/decreasing this changes S/N) X = np.linspace(0, dur-1/sr, int(dur*sr)) # Time Y_s = amp1*np.sin(X*2*np.pi*frq1 - np.pi/4) + amp2*np.sin(X*2*np.pi*frq2 + np.pi/2) Y_sn = Y_s + 40*np.random.rand(len(X)) # Determine Single Sided FT Spectrum Y_sn_fft = np.fft.fft(Y_sn) # Update ft output FT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum SSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component SSFT_amp = 2/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) / number points # Determine frequencies freqs = sr/len(X)*np.arange(0,len(SSFT_amp)) # Plot plt.plot(freqs[1:], SSFT_amp[1:]) plt.title('Single-Sided Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Freq Amplitude') plt.show() # Create signal sr = 2000 # Sample rate dur = 10 # Increased duration (s) (increasing/decreasing this changes S/N) X = np.linspace(0, dur-1/sr, int(dur*sr)) # Time Y_s = amp1*np.cos(X*2*np.pi*frq1 - np.pi/4) + amp2*np.cos(X*2*np.pi*frq2 + np.pi/2) Y_sn = Y_s + 40*np.random.rand(len(X)) # Determine Single Sided FT Spectrum Y_sn_fft = np.fft.fft(Y_sn) # Update ft output FT = np.roll(Y_sn_fft, len(X)//2) # Shift zero freq component to center of spectrum SSFT_amp = np.abs(FT)[len(X)//2:] # Use the absolute value for amplitude; spectrum is symmetric - start from zero freq component SSFT_amp = 2/len(X) * SSFT_amp # Scale by 2 (using half the spectrum) / number points # Determine frequencies freqs = sr/len(X)*np.arange(0,len(SSFT_amp)) # Plot plt.plot(freqs[1:], SSFT_amp[1:]) plt.title('Single-Sided Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Freq Amplitude') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Phase Example Phase is the shift of a periodic signal 'left' or 'right'. It is the $\phi_n$ in the following equation: $$ Y = \frac{1}{2} a_0 + \sum_{n=1}^{\infty} a_n \cos (n x + \phi_n) $$
# We can use the previous signal to get the phase: # Set a tolerance limit - phase is sensitive to floating point errors # (see Gotchas and Optimization for more info): FT_trun = FT.copy() tol = 1*10**-6 # Truncate signal below tolerance level FT_trun[np.abs(FT_trun)<tol] = 0 # Use the angle function (arc tangent of imaginary over real) phase = np.angle(FT_trun[len(X)//2:]) phase_rad = 1/np.pi * phase # Express the phase in units of pi radians # Plot plt.plot(freqs[1:], phase_rad[1:]) plt.title('Single-Sided Phase Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Phase (pi radians)') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
This shows the phase for every single frequency, but we really only care about the frequencies whose amplitude exceeds a minimum threshold:
nonzero_freqs = freqs[SSFT_amp > 1][1:] print('Notable frequencies are: {}'.format(nonzero_freqs)) inds = [list(freqs).index(x) for x in nonzero_freqs] # Return index of nonzero frequencies print('Phase shifts for notable frequencies are: {}'.format(phase_rad[inds]))
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
This is better visualized without noise:
# Determine Single Sided FT Spectrum Y_s_fft = np.fft.fft(Y_s) # Update ft output FT = np.roll(Y_s_fft, len(X)//2) # Set a tolerance limit - phase is sensitive to floating point errors # (see Gotchas and Optimization for more info): FT_trun = FT.copy() tol = 1*10**-6 # Truncate signal below tolerance level FT_trun[np.abs(FT_trun)<tol] = 0 # Use the angle function (arc tangent of imaginary over real) phase = np.angle(FT_trun[len(X)//2:]) phase_rad = 1/np.pi * phase # Express the phase in units of pi radians # Plot plt.plot(freqs[1:], phase_rad[1:]) plt.title('Single-Sided Phase Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Phase (pi radians)') plt.show() # To streamline, we can create a function in Pandas (see Pandas Crash Course for more info): import pandas as pd def fft_norm(signal, sr=1): '''Return a normalized fft single sided spectrum.''' signal = signal[: signal.shape[0]//2*2] # Use an even number of data points so can divide FFT in half evenly N = signal.shape[0] freqs = sr*np.arange(0, N//2)/N # FFT fft = np.fft.fft(signal) fft = np.roll(fft, N//2) # Shift zero freq component to center of spectrum # Normalized Amplitude amp_norm = 2/N*np.abs(fft[N//2:]) # Phase tol = 1*10**-6 # Truncate signal below tolerance so phase isn't weird fft[np.abs(fft)<tol] = 0 phase_rad = np.angle(fft[N//2:])/(np.pi) # To convert the phase, use (fft_norm(Phase (Radians)+np.pi)) * conversion factor/(2*np.pi) # I.e. add Pi to the output before converting from radians return pd.DataFrame({'Frequency':freqs, 'Amplitude':amp_norm, 'Phase (Radians)':phase_rad, 'Phase (Degrees)':phase_rad*180}).set_index('Frequency') Y_ms = Y_s-Y_s.mean() # Mean subtract to remove the offset (0 freq component) fft_norm(Y_ms, sr=2000).plot(subplots=True, layout=(3,1)) plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Notch Filter Plot our original, non-noisy two-component signal, perform the Fourier transform, and set the 250 Hz component to zero:
# Fourier transform Yfft = np.fft.fft(Y_s); freqs = sr*np.arange(0,len(Yfft)//2)/len(Yfft) # Frequencies of the FT ind250Hz = np.where(freqs==250)[0][0] # Index to get just the 250 Hz signal Y_filt = Yfft.copy() # Copy the original, non-absolute, full spectrum so Yfft isn't mutated full_w = 200 # Width of spectrum to set to zero # Set FT at frequency to zero Y_filt[ind250Hz-int(full_w/2):ind250Hz+int(full_w/2)] = 0 # Set the 250 Hz signal (+-) to zero on the lower side Y_filt[-ind250Hz-int(full_w/2):-ind250Hz+int(full_w/2)] = 0 # Set the 250 Hz signal (+-) to zero on the upper side # Determine single sided Fourier transform SSFT_filt = Y_filt[:int(len(Y_filt)/2)] # Index the first half SSFT_filt = np.abs(SSFT_filt) # Use the absolute value SSFT_filt = SSFT_filt/len(X) * 2 # Normalize and double the values (FFT is wrapped) # Plot plt.plot(freqs[1:], SSFT_filt[1:]) plt.title('Single-Sided Spectrum of Signal') plt.xlabel('freq (Hz)') plt.ylabel('Amplitude of X') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Inverse Fourier transform back, and plot the original filtered signal:
# Inverse FFT the original, non-absolute, full spectrum Y2 = np.fft.ifft(Y_filt) Y2 = np.real(Y2) # Use the real values to plot the filtered signal # Plot plt.plot(X[:100],Y_s[:100], label='Original') plt.plot(X[:100],Y2[:100], label='Filtered') plt.title('Two Signals') plt.xlabel('Time (s)') plt.ylabel('Signal Amplutude') plt.legend(loc='best') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
While the Fourier amplitudes properly represent the amplitude of frequency components, the power spectral density (proportional to the squared magnitude of the discrete Fourier transform) can be estimated using a periodogram:
# Determine approx power spectral density f, Pxx_den = signal.periodogram(Y_s, sr) # Plot plt.plot(f, Pxx_den) plt.xlabel('frequency [Hz]') plt.ylabel('PSD') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
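The periodogram above is, up to normalization, the squared magnitude of the DFT. A quick sanity check against a manually scaled spectrum (a sketch: `detrend=False` is passed so the comparison is exact, and the signal mirrors the earlier 250 Hz tone):

```python
import numpy as np
from scipy import signal

sr = 2000                                   # Sample rate (Hz)
t = np.arange(0, 1, 1/sr)
y = 3*np.sin(2*np.pi*250*t)                 # 250 Hz tone, as above

# scipy's one-sided density estimate (detrend disabled for an exact match)
f, Pxx = signal.periodogram(y, sr, detrend=False)

# Manual estimate: |DFT|^2 scaled to a per-Hz density
N = len(y)
P_man = np.abs(np.fft.rfft(y))**2 / (sr * N)
P_man[1:-1] *= 2        # double every bin except DC and Nyquist (N is even)

assert np.allclose(Pxx, P_man)
```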
Correlation Correlations are a measure of the product of two signals as a function of the x-axis shift between them. They are often used to determine similarity between the two signals, e.g. is there some structure or repeating feature that is present in both signals?
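The "product as a function of shift" definition can be made concrete on a tiny example before moving to the heartbeat data (a sketch; `np.correlate` stands in here for `scipy.signal.correlate`, which computes the same quantity):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 0.5])

# Correlation by hand: slide v across a and sum the products at each lag
manual = []
for lag in range(-(len(v) - 1), len(a)):
    total = 0.0
    for n in range(len(v)):
        if 0 <= n + lag < len(a):
            total += a[n + lag] * v[n]
    manual.append(total)

assert np.allclose(manual, np.correlate(a, v, mode='full'))
```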
# Create a signal npts = 200 heartbeat = np.array([0,1,0,0,4,8,2,-4,0,4,0,1,2,1,0,0,0,0])/8 xvals = np.linspace(0,len(heartbeat),npts) heartbeat = np.interp(xvals,np.arange(0,len(heartbeat)),heartbeat) # Use interpolation to spread the signal out # Repeat the signal ten times, add some noise: hrtbt = np.tile(heartbeat,10) hrtbt_noise = hrtbt + np.random.rand(len(hrtbt)) # Plot G = gridspec.GridSpec(2, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(heartbeat) axis1.set_title('Single Heartbeat Electrocardiogram') axis2 = plt.subplot(G[1, 0]) axis2.plot(hrtbt_noise) axis2.set_title('Noisy Electrocardiogram for repeating Heartbeat') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
The center of the repeating (heartbeat) signal is marked as a centroid:
# Find center of each repeating signal cent_x = np.arange(1,11)*200 - 100 cent_y = np.ones(10)*max(hrtbt) # Plot plt.plot(hrtbt[:], label='heartbeat') plt.plot(cent_x[:],cent_y[:],'r^', label='Centroid') plt.title('Heartbeat Electrocardiogram') plt.xlabel('Time') plt.ylabel('Volts') plt.legend(loc='best') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Correlate the single signal with the repeating, noisy one:
# Correlate corr = signal.correlate(hrtbt_noise, heartbeat, mode='same') # Plot plt.plot(corr/max(corr), label='Correlogram') plt.plot(cent_x,cent_y,'r^', label='Centroid') plt.title('Correlogram') plt.xlabel('Delay') plt.ylabel('Normalized Volts $^2$') plt.legend(loc='best') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
The correlogram recovered the repeating signal's central points. This is because at these points the noisy signal has the greatest similarity with the single-heartbeat template. In other words, we're recovering the locations that share the greatest amount of similarity with our template. Convolution Convolution is a process in which the shape of one function is expressed in another. It's useful for adjusting features, or for representing real-world measurements when the response of the filter or instrument is known. As an example, consider a one-dimensional image taken by an optical microscope (here, a sawtooth wave). The microscope's optics impose empirical limitations, approximated by a Gaussian point spread function (PSF). The final image is the convolution of the original image and the PSF.
# Signal and PSF orig_sig = signal.sawtooth(2*np.pi*np.linspace(0,3,300))/2+0.5 psf = signal.gaussian(101, std=15) # Convolve convolved = signal.convolve(orig_sig, psf) # Plot G = gridspec.GridSpec(3, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(orig_sig) axis1.set_xlim(0, len(convolved)) axis1.set_title('Original Pulse') axis2 = plt.subplot(G[1, 0]) axis2.plot(psf) axis2.set_xlim(0, len(convolved)) axis2.set_title('Point Spread Function') axis3 = plt.subplot(G[2, 0]) axis3.plot(convolved) axis3.set_xlim(0, len(convolved)) axis3.set_title('Convolved Signal') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Deconvolution Deconvolution can be thought of as removing the filter or instrument response. This is pretty common when reconstructing real signals if the response is known. In the microscope example, this would be deconvolving the image with the known response of the instrument to a point source. If it is known how much the entire image is spread out, the original image can be recovered.
# Deconvolve recovered, remainder = signal.deconvolve(convolved, psf) # Plot G = gridspec.GridSpec(3, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(convolved) axis1.set_xlim(0, len(convolved)) axis1.set_title('Convolved Signal') axis2 = plt.subplot(G[1, 0]) axis2.plot(psf) axis2.set_xlim(0, len(convolved)) axis2.set_title('Known Impulse Response') axis3 = plt.subplot(G[2, 0]) axis3.plot(recovered) axis3.set_xlim(0, len(convolved)) axis3.set_title('Recovered Pulse') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Filtering Filters receive a signal input and selectively reduce the amplitude of certain frequencies. Working with digital signals, they can broadly be divided into infinite impulse response (IIR) and finite impulse response (FIR) filters. An IIR filter that receives an impulse (a signal of value 1 followed by many zeros) yields a (theoretically) infinite number of non-zero output values. This is in contrast with a finite impulse response (FIR) filter, whose output becomes exactly zero beyond a finite duration after the impulse. IIR and FIR filters also differ in their filter coefficients (b, a), which represent the feed-forward and feedback coefficients, respectively. Feed-forward coefficients (b) are applied to input (x) values, and feedback coefficients (a) are applied to previous output (y) values - i.e. for a second-order filter: $y(n) = b_1 x(n) + b_2 x(n-1) - a_1 y(n-1) - a_2 y(n-2)$ where: the $b_j x(\cdot)$ terms are feed-forward, using input $x$ values; the $a_j y(\cdot)$ terms are feedback (notice they use the output $y$!) Generate Signal First, we generate a signal and approximate the power spectral density (PSD):
frq1 = 250 # Frequency 1(hz) amp1 = 3 # Amplitude 1 sr = 2000 # Sample rate dur = 1 # Duration (s) (increasing/decreasing this changes S/N) # Create timesteps, signal and noise X = np.linspace(0, dur-1/sr, int(dur*sr)) # Time Y = amp1*np.sin(X*2*np.pi*frq1) # Signal Y_noise = Y + 40*np.random.rand(len(X)) # Add noise # Approx PSD f, Pxx_den = signal.periodogram(Y_noise, sr) # Plot G = gridspec.GridSpec(2, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(X, Y_noise) axis1.set_title('Plot of Signal with Noise') axis2 = plt.subplot(G[1, 0]) axis2.plot(f, Pxx_den) axis2.set_title('Approx PSD') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
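The difference equation given above maps directly onto `scipy.signal.lfilter`; a minimal check of the recurrence (the coefficients here are arbitrary illustrative numbers, not a designed filter):

```python
import numpy as np
from scipy import signal

b = np.array([0.2, 0.3])    # feed-forward coefficients (applied to inputs x)
a = np.array([1.0, -0.5])   # feedback coefficients (a[0] normalized to 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(50)

# Direct implementation of: y[n] = b[0]*x[n] + b[1]*x[n-1] - a[1]*y[n-1]
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = b[0]*x[n]
    if n >= 1:
        y[n] += b[1]*x[n-1] - a[1]*y[n-1]

# lfilter applies the same recurrence
assert np.allclose(y, signal.lfilter(b, a, x))
```

(`filtfilt`, used below, runs this recurrence forward and then backward so the net phase shift is zero.)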
Infinite Impulse Response (IIR) filters Digital filters inherently account for digital signal limitations, i.e. the sampling frequency. The Nyquist theorem asserts that we can't measure frequencies higher than 1/2 the sampling frequency, and digital filters operate on this principle. Next, we create the digital filter and plot its response, using both feed-forward (b) and feedback (a) coefficients:
f_order = 10 # Filter order f_pass = 'low' # Filter is low pass f_freq = 210.0 # Frequency to pass f_cutoff = f_freq/(sr/2) # Convert to a fraction of the Nyquist frequency # Create the filter b, a = signal.iirfilter(f_order, f_cutoff, btype=f_pass, ftype='butter') # Test the filter w, h = signal.freqz(b, a, 1000) # Test response of filter across # frequencies (Use 'freqz' for digital) freqz_hz = w * sr / (2 * np.pi) # Convert frequency to Hz resp_db = 20 * np.log10(abs(h)) # Convert response to decibels # Plot filter response plt.semilogx(freqz_hz, resp_db) plt.title('Butterworth Lowpass Frequency Response') plt.xlabel('Frequency (Hz)') plt.ylabel('Amplitude (dB)') plt.axis((100, 500, -200, 10)) plt.grid(which='both', axis='both') plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
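The Nyquist limit quoted above is easy to demonstrate: a tone above sr/2 aliases down into the measurable band (a standalone sketch, separate from the filter design):

```python
import numpy as np

sr = 2000                      # Sample rate (Hz), as above
f_true = 1900                  # Above the Nyquist frequency (sr/2 = 1000 Hz)
t = np.arange(0, 1, 1/sr)
y = np.sin(2*np.pi*f_true*t)

# The tone shows up at |f_true - sr| = 100 Hz instead of 1900 Hz
spec = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1/sr)
assert freqs[np.argmax(spec)] == 100.0
```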
Applying the filter to our signal filters all higher frequencies:
# Apply filter to signal sig_filtered = signal.filtfilt(b, a, Y_noise) # Determine approx PSD f, Pxx_den_f = signal.periodogram(sig_filtered, sr) # Plot G = gridspec.GridSpec(2, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(f, Pxx_den) axis1.set_title('Approx PSD of Original Signal') axis2 = plt.subplot(G[1, 0]) axis2.plot(f, Pxx_den_f) axis2.set_title('Approx PSD of Filtered Signal') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Finite Impulse Response Filters A finite impulse response (FIR) filter can be designed with a linear phase response, where the desired gain is specified within frequency bands (up to the Nyquist frequency, 1/2 of the sampling frequency). Only feed-forward coefficients (b) are used.
# Create FIR filter taps = 150 # Analogous to IIR order - indication # of memory, calculation, and # 'filtering' freqs = [0, 150, 300, 500, sr/2.] # FIR frequencies ny_fract = np.array(freqs)/(sr/2) # Convert frequency to fractions of # the Nyquist freq gains = [10.0, 1.0, 10.0, 0.0, 0.0] # Gains at each frequency b = signal.firwin2(taps, ny_fract, gains) # Make the filter (there are no # 'a' coefficients) w, h = signal.freqz(b) # Check filter response # Test FIR filter freqz_hz = w * sr / (2 * np.pi) # Convert frequency to Hz resp_db = 20 * np.log10(abs(h)) # Convert response to decibels # Plot filter response plt.plot(freqz_hz, np.abs(h)) plt.title('Digital filter frequency response') plt.ylabel('Amplitude Response') plt.xlabel('Frequency (Hz)') plt.grid() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
And the effect of the FIR digital filter:
# Apply FIR filter sig_filtered = signal.filtfilt(b, 1, Y_noise) # Determine approx PSD f, Pxx_den_f = signal.periodogram(sig_filtered, sr) # Plot G = gridspec.GridSpec(2, 1) axis1 = plt.subplot(G[0, 0]) axis1.plot(f, Pxx_den) axis1.set_title('Approx PSD of Original Signal') axis2 = plt.subplot(G[1, 0]) axis2.plot(f, Pxx_den_f) axis2.set_title('Approx PSD of Filtered Signal') plt.tight_layout() plt.show()
08 - Signal Processing - Scipy.ipynb
blakeflei/IntroScientificPythonWithJupyter
bsd-3-clause
Now construct an instance of the class containing the initial conditions of the problem
LM0 = LakeModel(lamb, alpha, b, d) x0 = LM0.find_steady_state() # initial conditions print("Initial Steady State: ", x0)
solutions/lakemodel_solutions.ipynb
gxxjjj/QuantEcon.py
bsd-3-clause
New legislation changes $\lambda$ to $0.2$
LM1 = LakeModel(0.2, alpha, b, d) xbar = LM1.find_steady_state() # new steady state X_path = np.vstack(LM1.simulate_stock_path(x0*N0, T)) # simulate stocks x_path = np.vstack(LM1.simulate_rate_path(x0, T)) # simulate rates print("New Steady State: ", xbar)
solutions/lakemodel_solutions.ipynb
gxxjjj/QuantEcon.py
bsd-3-clause