```
from collections import Counter

class Solution:
    def maxScoreWords(self, words, letters, score) -> int:
        def get_words(num):
            # Decode the bitmask: bit k of num selects words[k].
            w_list = []
            cnt = 0
            while num:
                if num & 1:
                    w_list.append(words[cnt])
                num >>= 1
                cnt += 1
            return w_list

        def check_and_calc(words):  # Check validity; if the subset is valid, compute its score
            char_freq = Counter()
            for w in words:
                char_freq += w_letters[w]
            for k, v in char_freq.items():
                if v > counter[k]:
                    return 0, False
            cnt = 0
            for k, v in char_freq.items():
                idx = char2idx(k)
                cnt += score[idx] * v
            return cnt, True

        # Count how often each letter appears in every word
        w_letters = {}
        for w in words:
            w_letters[w] = Counter(w)
        char2idx = lambda x: ord(x) - ord('a')
        counter = Counter(letters)
        res = 0  # Each word can either be taken or skipped: pow(2, len(words)) subsets
        w_n = len(words)
        # Outer loop: pow(2, 14) == 16384, get_words(): O(14)
        for i in range(1, 1 << w_n):  # Enumerate every subset of words; score the valid ones
            w_list = get_words(i)
            s, state = check_and_calc(w_list)
            if state:
                res = max(res, s)
        return res
from collections import Counter, defaultdict

class Solution:
    def maxScoreWords(self, words, letters, score) -> int:
        def get_words(num):
            w_list = []
            cnt = 0
            while num:
                if num & 1:
                    w_list.append(words[cnt])
                num >>= 1
                cnt += 1
            return w_list

        def check_and_calc(words):  # Check validity; if the subset is valid, compute its score
            char_freq = defaultdict(int)
            for w in words:
                for l, t in w_letters[w].items():
                    char_freq[l] += t
            # Validate against the available letters
            for k, v in char_freq.items():
                if v > counter[k]:
                    return 0, False
            # Valid subset: compute the score
            cnt = 0
            for k, v in char_freq.items():
                idx = char2idx(k)
                cnt += score[idx] * v
            return cnt, True

        # Count how often each letter appears in every word
        w_letters = {}
        for w in words:
            l_freq = defaultdict(int)
            for l in w:
                l_freq[l] += 1
            w_letters[w] = l_freq
        char2idx = lambda x: ord(x) - ord('a')
        counter = Counter(letters)
        res = 0  # Each word can either be taken or skipped: pow(2, len(words)) subsets
        w_n = len(words)
        # Outer loop: pow(2, 14) == 16384, get_words(): O(14)
        for i in range(1, 1 << w_n):  # Enumerate every subset of words; score the valid ones
            w_list = get_words(i)
            s, state = check_and_calc(w_list)
            if state:
                res = max(res, s)
        return res
from typing import List

class Solution:
    def maxScoreWords(self, words: List[str], letters: List[str], score: List[int]) -> int:
        self.wordsScore = [self.getWordScore(i, score) for i in words]
        self.total = 0
        self.maxTotal = 0
        dic = {}
        for i in letters:
            dic[i] = dic.setdefault(i, 0) + 1
        self.DFS(words, 0, dic)
        return self.maxTotal

    def getWordScore(self, word: str, score: List[int]) -> int:
        s = 0
        for i in word:
            s += score[ord(i) - ord('a')]
        return s

    def DFS(self, words, index, dic):
        if index >= len(words):
            return
        newDic = dic.copy()
        # Branch 1: take words[index] if the remaining letters allow it.
        if self.checkAndDeal(words[index], newDic):
            self.total += self.wordsScore[index]
            self.maxTotal = max(self.maxTotal, self.total)
            self.DFS(words, index + 1, newDic)
            self.total -= self.wordsScore[index]
        # Branch 2: skip words[index].
        self.DFS(words, index + 1, dic)

    def checkAndDeal(self, word, dic):
        # Consume word's letters from dic; fail if any letter runs out.
        for i in word:
            if i not in dic or dic[i] - 1 < 0:
                return False
            else:
                dic[i] = dic[i] - 1
        return True

solution = Solution()
solution.maxScoreWords(words=["dog", "cat", "dad", "good"],
                       letters=["a", "a", "c", "d", "d", "d", "g", "o", "o"],
                       score=[1, 0, 9, 5, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
pow(2, 14)
words = ["dog", "cat", "dad", "good"]
Counter(words)
```
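The first two solutions enumerate word subsets with a bitmask: the loop index `i` runs over `range(1, 1 << n)`, and bit `k` of `i` decides whether `words[k]` is chosen. The same idea in isolation, as a standalone sketch:

```python
def subsets(items):
    """Yield every non-empty subset of items, bitmask-style."""
    n = len(items)
    for mask in range(1, 1 << n):
        # Bit k of mask decides whether items[k] is included.
        yield [items[k] for k in range(n) if mask & (1 << k)]

print(list(subsets(["a", "b"])))  # [['a'], ['b'], ['a', 'b']]
```

With `n` words this visits `2**n - 1` subsets, which is why the comments above note `pow(2, 14) == 16384` for the problem's maximum of 14 words.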
# Parameterize SageMaker Pipelines
Customers can use SageMaker Pipelines to build scalable machine learning pipelines that preprocess data and train machine learning models. With SageMaker Pipelines, customers have a toolkit for every part of the machine learning lifecycle that provides deep customizations and tuning options to fit every organization. Customers have the freedom to customize SageMaker Pipelines to specific use cases, but also to create generic machine learning pipelines that can be reused across different use cases.
From a bird's-eye view, a machine learning pipeline usually consists of 3 general steps: a preprocess step where the data is transformed, a training step where a machine learning model is trained, and an evaluation step which tests the performance of the trained model. If the model performs well on the objective metric you're optimizing for, it becomes a candidate model for deployment to one or more environments. These candidate models should be registered with SageMaker Model Registry to catalog and store key metadata for that model version.

These steps have a lot in common, even across different machine learning use cases. Customers that want to create training pipelines that can be re-used across an organization can use SageMaker Pipelines to create parameterized, generic training pipelines. Parameters allow customers to specify values that are passed into the pipeline at execution time without changing the pipeline code itself.
**This notebook** demonstrates how SageMaker Pipelines can be used to create a generic binary classification machine learning pipeline using XGBoost that's reusable across teams, machine learning use cases and even customers in a SaaS system.
### SageMaker Pipelines
Amazon SageMaker Pipelines is a purpose-built, easy-to-use CI/CD service for machine learning. With SageMaker Pipelines, customers can create machine learning workflows with an easy-to-use Python SDK, and then visualize and manage workflows using Amazon SageMaker Studio.
#### SageMaker Pipeline steps and parameters
SageMaker Pipelines is built around the concept of steps. The order in which steps are executed is inferred from the dependencies each step has. If a step depends on the output of a previous step, it's not executed until after that step has completed successfully.
SageMaker Pipeline Parameters are input parameters specified when triggering a pipeline execution. They need to be explicitly defined when creating the pipeline and contain default values.
To know more about the type of steps and parameters supported, check out the [SageMaker Pipelines Overview](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-sdk.html).
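The dependency-driven ordering can be illustrated with a plain-Python sketch; this is not the Pipelines API, and the step names are just labels chosen to mirror the DAG built later in this notebook:

```python
def execution_order(deps):
    """Return a valid run order given {step: set of steps it depends on}."""
    order, done = [], set()
    while len(done) < len(deps):
        # A step is runnable once all of its dependencies have finished.
        ready = [s for s in deps if s not in done and deps[s] <= done]
        if not ready:
            raise ValueError("cyclic dependency")
        for s in sorted(ready):  # sort only to make the order deterministic
            order.append(s)
            done.add(s)
    return order

print(execution_order({
    "Preprocess-Data": set(),
    "Train-And-Tune-Model": {"Preprocess-Data"},
    "Evaluate-Model": {"Preprocess-Data", "Train-And-Tune-Model"},
    "Accuracy-Condition": {"Evaluate-Model"},
}))
```

A step with no dependencies (the preprocessing step) runs first; the condition step runs last because it consumes the evaluation report.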
#### SageMaker Pipeline DAG
When creating a SageMaker Pipeline, SageMaker creates a Directed Acyclic Graph (DAG) that customers can visualize in Amazon SageMaker Studio. The DAG can be used to track pipeline executions, outputs and metrics. In this notebook, a SageMaker Pipeline with the following DAG is created:

## Predict customer churn and credit risk with XGBoost
### Data
This notebook uses 2 datasets to demonstrate pipeline portability:
1. A synthetic customer churn dataset.
2. The [Statlog German credit data](https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data)) from UCI's ML Repository.
### Overview
**Disclaimer** This notebook was created using [Amazon SageMaker Studio](https://aws.amazon.com/sagemaker/studio/) and the `Python 3 (Data Science)` kernel. SageMaker Studio is required for the visualizations of the DAG and model metrics to work.
The purpose of this notebook is to demonstrate how SageMaker Pipelines can be used to create a generic XGBoost training pipeline that preprocesses, trains, tunes, evaluates, and registers new machine learning models with the SageMaker Model Registry, and that is reusable across teams, customers, and use cases. All scripts to preprocess the data and evaluate the trained model have been prepared in advance and are available here:
- [credit/preprocess.py](credit/preprocess.py)
- [customer_churn/preprocess.py](customer_churn/preprocess.py)
- [evaluate.py](evaluate.py)
```
!pip install -U sagemaker --quiet # Ensure latest version of SageMaker is installed
import sagemaker
import sagemaker.session
session = sagemaker.session.Session()
region = session.boto_region_name
role = sagemaker.get_execution_role()
bucket = session.default_bucket()
prefix = "parameterized"  # Prefix for S3 artifacts
pipeline_name = "DEMO-parameterized-pipeline" # SageMaker Pipeline name
credit_model_group = "DEMO-credit-registry"
churn_model_group = "DEMO-churn-registry"
```
### Download data
Start by downloading both datasets.
```
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/uci_statlog_german_credit_data/german_credit_data.csv credit_risk/german_credit_data.csv
!aws s3 cp s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.csv customer_churn/churn-dataset.csv
```
### Upload data
Upload all data sets and scripts to S3.
```
# Upload the raw datasets and scripts to S3
customer_churn_data_uri = session.upload_data(
path="customer_churn/churn-dataset.csv", key_prefix=prefix + "/data"
)
credit_data_uri = session.upload_data(
path="credit_risk/german_credit_data.csv", key_prefix=prefix + "/data"
)
churn_preprocess_uri = session.upload_data(
path="customer_churn/preprocess.py", key_prefix=prefix + "/preprocess/churn"
)
credit_preprocess_uri = session.upload_data(
path="credit_risk/preprocess.py", key_prefix=prefix + "/preprocess/credit"
)
evaluate_script_uri = session.upload_data(path="evaluate.py", key_prefix=prefix + "/evaluate")
print("Customer churn data set uploaded to ", customer_churn_data_uri)
print("Credit data set uploaded to ", credit_data_uri)
print("Customer churn preprocessing script uploaded to ", churn_preprocess_uri)
print("Credit preprocessing script uploaded to ", credit_preprocess_uri)
print("Evaluation script uploaded to ", evaluate_script_uri)
```
<a id='parameters'></a>
### Pipeline input parameters
Pipeline parameters are input parameters specified when triggering a pipeline execution. They must be explicitly defined when creating the pipeline and have default values.
Create parameters for the inputs to the pipeline. In this case, parameters will be used for:
- `ModelGroup` - Which registry to register the trained model with.
- `InputData` - S3 URI to pipeline input data.
- `PreprocessScript` - S3 URI to python script to preprocess the data.
- `EvaluateScript` - S3 URI to python script to evaluate the trained model.
- `MaxiumTrainingJobs` - The maximum number of training jobs to allow when hyperparameter tuning the model.
- `MaxiumParallelTrainingJobs` - The maximum number of training jobs to run in parallel when hyperparameter tuning the model.
- `AccuracyConditionThreshold` - Only register models with the model registry if they have at least this classification accuracy.
- `ProcessingInstanceType` - What EC2 instance type to use for processing.
- `TrainingInstanceType` - What EC2 instance type to use for training.
```
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
ParameterFloat,
)
# The model registry to register the model and its versions with.
model_registry_package = ParameterString(name="ModelGroup", default_value="default-registry")
# S3 URI to input data
input_data = ParameterString(name="InputData", default_value="s3://{}/uri/data.csv".format(bucket))
# S3 URI to preprocessing script
preprocess_script = ParameterString(
name="PreprocessScript", default_value="s3://{}/uri/preprocess.py".format(bucket)
)
# S3 URI to evaluation script
evaluate_script = ParameterString(
name="EvaluateScript", default_value="s3://{}/uri/evaluate.py".format(bucket)
)
# Maximum number of training jobs to allow during hyperparameter tuning
max_training_jobs = ParameterInteger(name="MaxiumTrainingJobs", default_value=1)
# Maximum number of training jobs to run in parallel during hyperparameter tuning
max_parallel_training_jobs = ParameterInteger(name="MaxiumParallelTrainingJobs", default_value=1)
# Accuracy threshold to decide whether or not to register the model with Model Registry
accuracy_condition_threshold = ParameterFloat(name="AccuracyConditionThreshold", default_value=0.7)
# What instance type to use for processing.
processing_instance_type = ParameterString(
name="ProcessingInstanceType", default_value="ml.m5.large"
)
# What instance type to use for training.
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.m5.xlarge")
```
<a id='preprocess'></a>
## Preprocess data step
In the first step, an SKLearn processor is created and then used in a ProcessingStep.
```
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep
from sagemaker.workflow.functions import Join
from sagemaker.workflow.execution_variables import ExecutionVariables
# Create SKlearn processor object,
# The object contains information about what instance type to use, the IAM role to use etc.
# A managed processor comes with a preconfigured container, so only specifying version is required.
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1", role=role, instance_type=processing_instance_type, instance_count=1
)
# Use the sklearn_processor in a SageMaker Pipelines ProcessingStep
step_preprocess_data = ProcessingStep(
name="Preprocess-Data",
processor=sklearn_processor,
inputs=[
ProcessingInput(source=input_data, destination="/opt/ml/processing/input"),
],
outputs=[
ProcessingOutput(
output_name="train",
source="/opt/ml/processing/train",
destination=Join(
on="/",
values=[
"s3://{}".format(bucket),
prefix,
ExecutionVariables.PIPELINE_EXECUTION_ID,
"train",
],
),
),
ProcessingOutput(
output_name="validation",
source="/opt/ml/processing/validation",
destination=Join(
on="/",
values=[
"s3://{}".format(bucket),
prefix,
ExecutionVariables.PIPELINE_EXECUTION_ID,
"validation",
],
),
),
ProcessingOutput(
output_name="test",
source="/opt/ml/processing/test",
destination=Join(
on="/",
values=[
"s3://{}".format(bucket),
prefix,
ExecutionVariables.PIPELINE_EXECUTION_ID,
"test",
],
),
),
],
code=preprocess_script,
)
```
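At execution time, each `Join` above resolves to a URI of the form `s3://<bucket>/<prefix>/<execution-id>/<output-name>`, so every pipeline execution writes to its own S3 location. A plain sketch of the same composition (the bucket name and execution ID here are hypothetical placeholders):

```python
def output_uri(bucket, prefix, execution_id, name):
    # Mirrors Join(on="/", values=[...]) from the ProcessingStep outputs.
    return "/".join(["s3://" + bucket, prefix, execution_id, name])

print(output_uri("my-bucket", "parameterized", "exec-123", "train"))
# s3://my-bucket/parameterized/exec-123/train
```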
<a id='train'></a>
## Train model step
In the second step, the train and validation output from the previous processing step are used to train a model. The XGBoost container is retrieved and then an XGBoost estimator is created, on which hyperparameters are specified before the training step is created.
```
from sagemaker.inputs import TrainingInput
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter
from sagemaker.workflow.steps import TuningStep
# Fetch container to use for training
image_uri = sagemaker.image_uris.retrieve(
framework="xgboost",
region=region,
version="1.2-2",
py_version="py3",
instance_type=training_instance_type,
)
# Create XGBoost estimator object
# The object contains information about what container to use, what instance type etc.
xgb_estimator = Estimator(
image_uri=image_uri,
instance_type=training_instance_type,
instance_count=1,
role=role,
disable_profiler=True,
)
# Create Hyperparameter tuner object. Ranges from https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html
xgb_tuner = HyperparameterTuner(
estimator=xgb_estimator,
objective_metric_name="validation:auc",
hyperparameter_ranges={
"eta": ContinuousParameter(0, 0.5),
"alpha": ContinuousParameter(0, 1000),
"min_child_weight": ContinuousParameter(1, 120),
"max_depth": IntegerParameter(1, 10),
"num_round": IntegerParameter(1, 2000),
"subsample": ContinuousParameter(0.5, 1),
},
max_jobs=max_training_jobs,
max_parallel_jobs=max_parallel_training_jobs,
)
# Use the tuner in a SageMaker Pipelines TuningStep.
step_tuning = TuningStep(
name="Train-And-Tune-Model",
tuner=xgb_tuner,
inputs={
"train": TrainingInput(
s3_data=step_preprocess_data.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri,
content_type="text/csv",
),
"validation": TrainingInput(
s3_data=step_preprocess_data.properties.ProcessingOutputConfig.Outputs[
"validation"
].S3Output.S3Uri,
content_type="text/csv",
),
},
)
```
<a id='evaluate'></a>
## Evaluate model step
When a model is trained, it's common to evaluate the model on unseen data before registering it with the model registry. This ensures the model registry isn't cluttered with poorly performing model versions.
```
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.properties import PropertyFile
# Create ScriptProcessor object.
# The object contains information about what container to use, what instance type etc.
evaluate_model_processor = ScriptProcessor(
image_uri=image_uri,
command=["python3"],
instance_type=processing_instance_type,
instance_count=1,
role=role,
)
# Create a PropertyFile
# A PropertyFile makes it possible to reference outputs from a processing step, for instance in a condition step.
# For more information, visit https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-propertyfile.html
evaluation_report = PropertyFile(
name="EvaluationReport", output_name="evaluation", path="evaluation.json"
)
# Use the evaluate_model_processor in a SageMaker Pipelines ProcessingStep.
# Extract the best model for evaluation.
step_evaluate_model = ProcessingStep(
name="Evaluate-Model",
processor=evaluate_model_processor,
inputs=[
ProcessingInput(
source=step_tuning.get_top_model_s3_uri(top_k=0, s3_bucket=bucket),
destination="/opt/ml/processing/model",
),
ProcessingInput(
source=step_preprocess_data.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri,
destination="/opt/ml/processing/test",
),
],
outputs=[
ProcessingOutput(
output_name="evaluation",
source="/opt/ml/processing/evaluation",
destination=Join(
on="/",
values=[
"s3://{}".format(bucket),
prefix,
ExecutionVariables.PIPELINE_EXECUTION_ID,
"evaluation-report",
],
),
),
],
code=evaluate_script,
property_files=[evaluation_report],
)
```
<a id='register'></a>
## Register model step
If the trained model meets the model performance requirements, a new model version is registered with the model registry for further analysis. To attach model metrics to the model version, create a [ModelMetrics](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-metrics.html) object using the evaluation report created in the evaluation step. Then, create the RegisterModel step.
```
from sagemaker.model_metrics import MetricsSource, ModelMetrics
from sagemaker.workflow.step_collections import RegisterModel
# Create ModelMetrics object using the evaluation report from the evaluation step
# A ModelMetrics object contains metrics captured from a model.
model_metrics = ModelMetrics(
model_statistics=MetricsSource(
s3_uri=Join(
on="/",
values=[
step_evaluate_model.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"][
"S3Uri"
],
"evaluation.json",
],
),
content_type="application/json",
)
)
# Create a RegisterModel step, which registers the model with SageMaker Model Registry.
step_register_model = RegisterModel(
name="Register-Model",
estimator=xgb_estimator,
model_data=step_tuning.get_top_model_s3_uri(top_k=0, s3_bucket=bucket),
content_types=["text/csv"],
response_types=["text/csv"],
inference_instances=["ml.t2.medium", "ml.m5.xlarge", "ml.m5.large"],
transform_instances=["ml.m5.xlarge"],
model_package_group_name=model_registry_package,
model_metrics=model_metrics,
)
```
<a id='condition'></a>
## Accuracy condition step
Adding conditions to the pipeline is done with a ConditionStep.
In this case, we only want to register the new model version with the model registry if the new model meets an accuracy condition.
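The condition reads the JSON path `binary_classification_metrics.accuracy.value` from the `evaluation.json` property file produced by the evaluation step. Below is a minimal sketch of a report that satisfies that path; the exact schema is whatever `evaluate.py` writes, and this shape simply follows the SageMaker model-quality convention:

```python
import json

# Hypothetical contents of the evaluation.json written by the evaluate step.
report = {
    "binary_classification_metrics": {
        "accuracy": {"value": 0.85, "standard_deviation": "NaN"},
    }
}

# JsonGet("binary_classification_metrics.accuracy.value") resolves like this:
doc = json.loads(json.dumps(report))
accuracy = doc["binary_classification_metrics"]["accuracy"]["value"]
print(accuracy >= 0.7)  # compared against AccuracyConditionThreshold
```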
```
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.functions import JsonGet
# Create accuracy condition to ensure the model meets performance requirements.
# Models with a test accuracy lower than the condition will not be registered with the model registry.
cond_gte = ConditionGreaterThanOrEqualTo(
left=JsonGet(
step_name=step_evaluate_model.name,
property_file=evaluation_report,
json_path="binary_classification_metrics.accuracy.value",
),
right=accuracy_condition_threshold,
)
# Create a SageMaker Pipelines ConditionStep, using the condition above.
# Enter the steps to perform if the condition returns True / False.
step_cond = ConditionStep(
name="Accuracy-Condition",
conditions=[cond_gte],
if_steps=[step_register_model],
else_steps=[],
)
```
<a id='orchestrate'></a>
## Pipeline Creation: Orchestrate all steps
Now that all pipeline steps are created, combine them into a pipeline.
```
from sagemaker.workflow.pipeline import Pipeline
# Create a SageMaker Pipeline.
# Every parameter used by the pipeline must be passed in explicitly when the pipeline is created.
# Also pass in each of the steps created above.
# Note that the order of execution is determined from each step's dependencies on other steps,
# not on the order they are passed in below.
pipeline = Pipeline(
name=pipeline_name,
parameters=[
processing_instance_type,
training_instance_type,
input_data,
preprocess_script,
evaluate_script,
accuracy_condition_threshold,
model_registry_package,
max_parallel_training_jobs,
max_training_jobs,
],
steps=[step_preprocess_data, step_tuning, step_evaluate_model, step_cond],
)
# Submit pipeline
pipeline.upsert(role_arn=role)
```
## Start the pipeline with different parameters
Now that the pipeline is created, it can be started with custom parameters, making the pipeline agnostic to who triggers it as well as to the scripts and data used. The pipeline can be started using the CLI, the SageMaker Studio UI, or the SDK; below is a screenshot of what this looks like in the SageMaker Studio UI.

#### Starting the pipeline with the SDK
In the examples below, the pipeline is triggered for two machine learning problems, each with different preprocessing scripts and model registry. Each machine learning problem is run with two different sets of parameters.
```
# Start pipeline with credit data and preprocessing script
pipeline.start(
execution_display_name="Credit",
parameters=dict(
InputData=credit_data_uri,
PreprocessScript=credit_preprocess_uri,
EvaluateScript=evaluate_script_uri,
AccuracyConditionThreshold=0.2,
MaxiumParallelTrainingJobs=2,
MaxiumTrainingJobs=5,
ModelGroup=credit_model_group,
),
)
# Start the pipeline again with the credit data, a stricter accuracy threshold, and more tuning jobs
pipeline.start(
execution_display_name="Credit",
parameters=dict(
InputData=credit_data_uri,
PreprocessScript=credit_preprocess_uri,
EvaluateScript=evaluate_script_uri,
AccuracyConditionThreshold=0.7,
MaxiumParallelTrainingJobs=3,
MaxiumTrainingJobs=42,
ModelGroup=credit_model_group,
),
)
# Start pipeline with customer churn data and preprocessing script
pipeline.start(
execution_display_name="Churn",
parameters=dict(
InputData=customer_churn_data_uri,
PreprocessScript=churn_preprocess_uri,
EvaluateScript=evaluate_script_uri,
AccuracyConditionThreshold=0.4,
MaxiumParallelTrainingJobs=1,
MaxiumTrainingJobs=2,
ModelGroup=churn_model_group,
),
)
# Start the pipeline again with the customer churn data, a stricter accuracy threshold, and more tuning jobs
pipeline.start(
execution_display_name="Churn",
parameters=dict(
InputData=customer_churn_data_uri,
PreprocessScript=churn_preprocess_uri,
EvaluateScript=evaluate_script_uri,
AccuracyConditionThreshold=0.8,
MaxiumParallelTrainingJobs=4,
MaxiumTrainingJobs=40,
ModelGroup=churn_model_group,
),
)
```
## Visualize model performance metrics
Once the pipelines have completed successfully, the metrics attached to the model versions can be visualized. In SageMaker Studio, choose `SageMaker Components and registries` in the left pane and, under `Model registry`, select one of the model packages that was created. Select both versions, right-click, and choose `Compare model versions`.
The screenshot below shows what comparing the customer churn model versions looks like. Note that the standard deviation shows as NaN since it is not relevant to this model's calculated metrics.

The screenshot below shows what comparing the credit risk model versions looks like.

## Clean up (optional)
Delete the model registries and the pipeline to keep the Studio environment tidy.
```
def delete_model_package_group(sm_client, package_group_name):
    try:
        model_versions = sm_client.list_model_packages(ModelPackageGroupName=package_group_name)
    except Exception as e:
        print("{} \n".format(e))
        return
    for model_version in model_versions["ModelPackageSummaryList"]:
        try:
            sm_client.delete_model_package(ModelPackageName=model_version["ModelPackageArn"])
        except Exception as e:
            print("{} \n".format(e))
        time.sleep(0.5)  # Ensure requests aren't throttled
    try:
        sm_client.delete_model_package_group(ModelPackageGroupName=package_group_name)
        print("{} model package group deleted".format(package_group_name))
    except Exception as e:
        print("{} \n".format(e))
        return

def delete_sagemaker_pipeline(sm_client, pipeline_name):
    try:
        sm_client.delete_pipeline(
            PipelineName=pipeline_name,
        )
        print("{} pipeline deleted".format(pipeline_name))
    except Exception as e:
        print("{} \n".format(e))
        return

import boto3
import time

client = boto3.client("sagemaker")
registries = [credit_model_group, churn_model_group]
for registry in registries:
    delete_model_package_group(client, registry)
delete_sagemaker_pipeline(client, pipeline_name)
```
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Neutral atom device class
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.example.org/cirq/tutorials/educators/neutral_atom"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on QuantumLib</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/educators/neutral_atom.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial provides an introduction to making circuits that are compatible with neutral atom devices.
Neutral atom devices implement quantum gates in one of two ways. One method is by hitting the entire qubit array with microwaves to simultaneously act on every qubit. This method implements global $XY$ gates which take up to $100$ microseconds to perform. Alternatively, we can shine laser light on some fraction of the array. Gates of this type typically take around $1$ microsecond to perform. This method can act on one or more qubits at a time up to some limit dictated by the available laser power and the beam steering system used to address the qubits. Each category in the native gate set has its own limit, discussed more below.
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install cirq --quiet
    print("installed cirq.")
from math import pi
import cirq
```
## Defining a `NeutralAtomDevice`
To define a `NeutralAtomDevice`, we specify
- The set of qubits in the device.
- The maximum duration of gates and measurements.
- `max_parallel_z`: The maximum number of single qubit $Z$ rotations that can be applied in parallel.
- `max_parallel_xy`: The maximum number of single qubit $XY$ rotations that can be applied in parallel.
- `max_parallel_c`: The maximum number of atoms that can be affected by controlled gates simultaneously.
- Note that `max_parallel_c` must be less than or equal to the minimum of `max_parallel_z` and `max_parallel_xy`.
- `control_radius`: The maximum allowed distance between atoms acted on by controlled gates.
We show an example of defining a `NeutralAtomDevice` below.
```
"""Defining a NeutralAtomDevice."""
# Define milliseconds and microseconds for convenience.
ms = cirq.Duration(nanos=10**6)
us = cirq.Duration(nanos=10**3)
# Create a NeutralAtomDevice
neutral_atom_device = cirq.NeutralAtomDevice(
qubits=cirq.GridQubit.rect(2, 3),
measurement_duration=5 * ms,
gate_duration=100 * us,
max_parallel_z=3,
max_parallel_xy=3,
max_parallel_c=3,
control_radius=2
)
```
Note that all above arguments are required to instantiate a `NeutralAtomDevice`. The example device above has the following properties:
- The device is defined on a $2 \times 3$ grid of qubits.
- Measurements take $5$ milliseconds.
- Gates may take as long as $100$ microseconds if we utilize global microwave gates. Otherwise, a more reasonable bound would be $1$ microsecond.
- A maximum of $3$ qubits may be simultaneously acted on by any gate category (`max_parallel_c = 3`).
- Controlled gates have next-nearest neighbor connectivity (`control_radius = 2`).
We can see some properties of the device as follows.
```
"""View some properties of the device."""
# Display the neutral atom device.
print("Neutral atom device:", neutral_atom_device, sep="\n")
# Get the neighbors of a qubit.
qubit = cirq.GridQubit(0, 1)
print(f"\nNeighbors of qubit {qubit}:")
print(neutral_atom_device.neighbors_of(qubit))
```
## Native gate set
The gates supported by the `NeutralAtomDevice` class can be placed into three categories:
1. Single-qubit rotations about the $Z$ axis.
2. Single-qubit rotations about an arbitrary axis in the $X$-$Y$ plane. We refer to these as $XY$ gates in this tutorial.
3. Controlled gates: CZ, CNOT, CCZ, and CCNOT (TOFFOLI).
Any rotation angle is allowed for single-qubit rotations. Some examples of valid single-qubit rotations are shown below.
```
"""Examples of valid single-qubit gates."""
# Single qubit Z rotations with any angle are valid.
neutral_atom_device.validate_gate(cirq.rz(pi / 5))
# Single qubit rotations about the X-Y axis with any angle are valid.
neutral_atom_device.validate_gate(
cirq.PhasedXPowGate(phase_exponent=pi / 3, exponent=pi / 7)
)
```
A Hadamard gate is invalid because it is a rotation in the $X$-$Z$ plane instead of the $X$-$Y$ plane.
```
"""Example of an invalid single-qubit gate."""
invalid_gate = cirq.H
try:
    neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
    print(f"As expected, {invalid_gate} is invalid!", e)
```
For controlled gates, the rotation must be a multiple of $\pi$ due to the physical implementation of the gates. In Cirq, this means the exponent of a controlled gate must be an integer. The next cell shows two examples of valid controlled gates.
```
"""Examples of valid multi-qubit gates."""
# Controlled gates with integer exponents are valid.
neutral_atom_device.validate_gate(cirq.CNOT)
# Controlled NOT gates with two controls are valid.
neutral_atom_device.validate_gate(cirq.TOFFOLI)
```
Any controlled gate with non-integer exponent is invalid.
```
"""Example of an invalid controlled gate."""
invalid_gate = cirq.CNOT ** 1.5
try:
    neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
    print(f"As expected, {invalid_gate} is invalid!", e)
```
Multiple controls are allowed as long as every pair of atoms (qubits) acted on by the controlled gate is close enough together. We can verify this using the `validate_operation` (or `validate_circuit`) method, as follows.
```
"""Examples of valid and invalid multi-controlled gates."""
# This TOFFOLI is valid because all qubits involved are close enough to each other.
valid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1), cirq.GridQubit(0, 2))
neutral_atom_device.validate_operation(valid_toffoli)
# This TOFFOLI is invalid because all qubits involved are not close enough to each other.
invalid_toffoli = cirq.TOFFOLI.on(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0), cirq.GridQubit(0, 2))
try:
    neutral_atom_device.validate_operation(invalid_toffoli)
except ValueError as e:
    print(f"As expected, {invalid_toffoli} is invalid!", e)
```
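The distance check behind these two examples can be sketched with plain coordinates. This is only an illustration, assuming Euclidean distance between grid positions (which is consistent with the valid and invalid TOFFOLIs above); the authoritative check is `validate_operation`:

```python
from itertools import combinations
from math import hypot

def within_control_radius(coords, control_radius=2.0):
    """Every pair of qubit coordinates must be within control_radius."""
    return all(
        hypot(r1 - r2, c1 - c2) <= control_radius
        for (r1, c1), (r2, c2) in combinations(coords, 2)
    )

print(within_control_radius([(0, 0), (0, 1), (0, 2)]))  # the valid TOFFOLI above
print(within_control_radius([(0, 0), (1, 0), (0, 2)]))  # invalid: (1, 0) to (0, 2) is sqrt(5) > 2
```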
`NeutralAtomDevice`s do not currently support gates with more than two controls, although these are in principle allowed by the physical realizations.
```
"""Any gate with more than two controls is invalid."""
invalid_gate = cirq.ControlledGate(cirq.TOFFOLI)
try:
neutral_atom_device.validate_gate(invalid_gate)
except ValueError as e:
print(f"As expected, {invalid_gate} is invalid!", e)
```
Finally, we note that the duration of any operation can be determined via the `duration_of` method.
```
"""Example of getting the duration of a valid operation."""
neutral_atom_device.duration_of(valid_toffoli)
```
### Moment and circuit rules
In addition to consisting of valid operations as discussed above, valid moments on a `NeutralAtomDevice` must satisfy the following criteria:
1. Only `max_parallel_c` gates of the same category may be performed in the same moment.
2. All instances of gates in the same category in the same moment must be identical.
3. Controlled gates cannot be applied in parallel with other gate types.
- Physically, this is because controlled gates make use of all types of light used to implement gates.
4. Qubits acted on by different controlled gates in parallel must be farther apart than the `control_radius`.
- Physically, this is so that the entanglement mechanism doesn't cause the gates to interfere with one another.
5. All measurements must be terminal.
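Rule 4 can be sketched in plain Python. This is a toy check, not Cirq's actual implementation; treating each qubit as a `(row, col)` pair and using Euclidean distance between grid coordinates is an assumption here:

```
import math

# Toy sketch of rule 4: qubits used by two parallel controlled gates must be
# farther apart than control_radius. Euclidean distance on (row, col) grid
# coordinates is an assumption; Cirq's internal check may differ.
def gates_conflict(qubits_a, qubits_b, control_radius):
    """Return True if any qubit of one gate is within control_radius of the other's."""
    return any(
        math.hypot(r1 - r2, c1 - c2) <= control_radius
        for (r1, c1) in qubits_a
        for (r2, c2) in qubits_b
    )

cnot_1 = [(0, 0), (1, 0)]  # control/target of one CNOT
cnot_2 = [(0, 2), (1, 2)]  # a second CNOT, two columns away
print(gates_conflict(cnot_1, cnot_2, control_radius=1))  # False: far enough apart
```

A pairwise distance check along these lines is presumably what `validate_moment` performs against `control_radius` for parallel controlled gates.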
Moments can be validated with the `validate_moment` method. Some examples are given below.
```
"""Example of a valid moment with single qubit gates."""
qubits = sorted(neutral_atom_device.qubits)
# Get a valid moment.
valid_moment = cirq.Moment(cirq.Z.on_each(qubits[:3]) + cirq.X.on_each(qubits[3:6]))
# Display it.
print("Example of a valid moment with single-qubit gates:", cirq.Circuit(valid_moment), sep="\n\n")
# Verify it is valid.
neutral_atom_device.validate_moment(valid_moment)
```
Recall that we defined `max_parallel_z = 3` in our device. Thus, if we tried to do 4 $Z$ gates in the same moment, this would be invalid.
```
"""Example of an invalid moment with single qubit gates."""
# Get an invalid moment.
invalid_moment = cirq.Moment(cirq.Z.on_each(qubits[:4]))
# Display it.
print("Example of an invalid moment with single-qubit gates:", cirq.Circuit(invalid_moment), sep="\n\n")
# Uncommenting raises ValueError: Too many simultaneous Z gates.
# neutral_atom_device.validate_moment(invalid_moment)
```
This is also true for 4 $XY$ gates since we set `max_parallel_xy = 3`. However, there is an exception for $XY$ gates acting on *every* qubit, as illustrated below.
```
"""An XY gate can be performed on every qubit in the device simultaneously.
If the XY gate does not act on every qubit, it must act on <= max_parallel_xy qubits.
"""
valid_moment = cirq.Moment(cirq.X.on_each(qubits))
neutral_atom_device.validate_moment(valid_moment)
```
Although both $Z$ and $Z^{1.5}$ are valid gates, they cannot be performed simultaneously because all gates "of the same type" must be identical in the same moment.
```
"""Example of an invalid moment with single qubit gates."""
# Get an invalid moment.
invalid_moment = cirq.Moment(cirq.Z(qubits[0]), cirq.Z(qubits[1]) ** 1.5)
# Display it.
print("Example of an invalid moment with single-qubit gates:", cirq.Circuit(invalid_moment), sep="\n\n")
# Uncommenting raises ValueError: Non-identical simultaneous Z gates.
# neutral_atom_device.validate_moment(invalid_moment)
```
### Appending operations
A common pattern for constructing circuits is to append a sequence of operations instead of explicitly creating moments. For a circuit defined on a `NeutralAtomDevice`, Cirq will respect the above rules for creating valid moments.
For example, if we append $Z$ and $Z^{1.5}$ from the previous example, Cirq will place them into two moments as shown below.
```
"""Cirq satisfies device restrictions automatically when appending operations."""
# Create a circuit for a NeutralAtomDevice.
circuit = cirq.Circuit(device=neutral_atom_device)
# Append two gates which cannot be in the same moment.
circuit.append([cirq.Z(qubits[0]), cirq.Z(qubits[1]) ** 1.5])
# Display the circuit.
print(circuit)
```
This is true for all device rules. As another example, we can see how Cirq separates controlled gates from other gate types (the third rule above).
```
"""Cirq satisfies device restrictions automatically when appending operations."""
# Create a circuit for a NeutralAtomDevice.
circuit = cirq.Circuit(device=neutral_atom_device)
# Append two gates which cannot be in the same moment.
circuit.append([cirq.Z(qubits[0]), cirq.CNOT(*qubits[1: 3])])
# Display the circuit.
print(circuit)
```
Without any device restrictions, the `Z` and `CNOT` operations could be in the same moment, but because the circuit is defined on a `NeutralAtomDevice`, the `CNOT` is placed into a new moment.
### Exercise: Multiple controlled gates in the same moment
Construct a `NeutralAtomDevice` which is capable of implementing two `CNOT`s in the same moment. Verify that these operations can indeed be performed in parallel by calling the `validate_moment` method or showing that Cirq inserts the operations into the same moment.
```
# Your code here!
```
#### Solution
```
"""Example solution for creating a device which allows two CNOTs in the same moment."""
# Create a NeutralAtomDevice.
device = cirq.NeutralAtomDevice(
qubits=cirq.GridQubit.rect(2, 3),
measurement_duration=5 * cirq.Duration(nanos=10**6),
gate_duration=100 * cirq.Duration(nanos=10**3),
max_parallel_z=4,
max_parallel_xy=4,
max_parallel_c=4,
control_radius=1
)
print("Device:")
print(device)
# Create a circuit for a NeutralAtomDevice.
circuit = cirq.Circuit(device=device)
# Append two CNOTs that can be in the same moment.
circuit.append(
[cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0)),
cirq.CNOT(cirq.GridQubit(0, 2), cirq.GridQubit(1, 2))]
)
# Append two CNOTs that cannot be in the same moment.
circuit.append(
[cirq.CNOT(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0)),
cirq.CNOT(cirq.GridQubit(0, 1), cirq.GridQubit(1, 1))]
)
# Display the circuit.
print("\nCircuit:")
print(circuit)
```
Note that the square brackets above/below the circuit indicate the first two `CNOT`s are in the same moment.
## Decomposing operations and circuits
Invalid operations can be decomposed into valid operations via the `decompose_operation` method. For example, we saw above that `cirq.H` was an invalid gate for a `NeutralAtomDevice`. This can be decomposed into valid operations as follows.
```
"""Example of decomposing an operation."""
# Decompose a Hadamard operation.
ops = neutral_atom_device.decompose_operation(cirq.H.on(qubits[0]))
# Display the circuit.
print("Circuit for H on a NeutralAtomDevice:\n")
cirq.Circuit(ops, device=neutral_atom_device)
```
Two-qubit and other operations can be decomposed in an analogous manner, for example the `FSimGate` below.
```
"""Another example of decomposing an operation."""
# Decompose an FSimGate operation.
ops = neutral_atom_device.decompose_operation(
cirq.FSimGate(theta=0.1, phi=0.3).on(cirq.GridQubit(0, 0), cirq.GridQubit(1, 0))
)
# Display the circuit.
print("Circuit for FSim on a NeutralAtomDevice:\n")
cirq.Circuit(ops, device=neutral_atom_device)
```
> *Note*: As with any proposed architecture for quantum computing, several research groups around the world are working towards a device based on neutral atom qubits. Each research group has a different approach, such as using different atomic species or working with a different number of dimensions of atomic qubit arrays. As such, the `NeutralAtomDevice` class will not accurately reflect all such devices. The class is based on the two dimensional Cesium array at the University of Wisconsin-Madison in the research group of Mark Saffman. Development of this device is being pursued as part of a strategic partnership between the University of Wisconsin-Madison and ColdQuanta.
## Data Extracting
### Get crime data from Analyze Boston website
reference like: https://data.boston.gov/dataset/crime-incident-reports-august-2015-to-date-source-new-system/resource/12cb3883-56f5-47de-afa5-3b1cf61b257b
- use a request to query the data from the API
- store the data in a DataFrame using the pandas framework <br/>
We also drop columns we will not use.
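The API query mentioned above can be sketched as follows (the cells below read a pre-downloaded CSV instead). Only the standard library is used; the `resource_id` is taken from the dataset URL above, `datastore_search` is the standard CKAN endpoint, and the actual network fetch is left commented out:

```
from urllib.parse import urlencode

# Build a CKAN datastore_search query for the crime dataset.
# resource_id comes from the dataset URL above; the limit is an arbitrary choice.
API = "https://data.boston.gov/api/3/action/datastore_search"
params = {"resource_id": "12cb3883-56f5-47de-afa5-3b1cf61b257b", "limit": 1000}
url = API + "?" + urlencode(params)
print(url)
# import json, urllib.request                                  # actual fetch, disabled here
# with urllib.request.urlopen(url, timeout=30) as resp:
#     records = json.load(resp)["result"]["records"]           # list of row dicts
#     crime = pd.DataFrame.from_records(records)
```

Paging through the API with `offset` should yield the same rows as the pre-downloaded CSV used below.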
```
import pandas as pd
crime = pd.read_csv('data/crime.csv')
crime['OCCURRED_ON_DATE'] = pd.to_datetime(crime['OCCURRED_ON_DATE'])
crime['Lat'] = pd.to_numeric(crime['Lat'])
crime['Long'] = pd.to_numeric(crime['Long'])
print("start date:", crime['OCCURRED_ON_DATE'].min())
print("end date:", crime['OCCURRED_ON_DATE'].max())
# drop redundant columns
crime = crime.drop(['HOUR', 'Location', 'MONTH', 'OFFENSE_CODE_GROUP', 'REPORTING_AREA','SHOOTING', 'STREET', 'YEAR', 'UCR_PART', 'INCIDENT_NUMBER'], axis=1)
crime.describe()
```
### Get weather data from csv files
reference like: https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USW00014739/detail
- load data from csv file and keep only needed column
```
weather = pd.read_csv('data/weather.csv')
weather['DATE'] = pd.to_datetime(weather['DATE']).dt.date
weather = weather[['DATE', 'SNOW', 'TAVG']]
weather.describe()
```
## Data Visualization and Cleaning
### Visualize weather data
We start with the weather data because there is only one field we need, which is the average temperature.
- Check null values in the dataset
```
print(weather.isnull().sum(axis=0))
```
- Find the null value <br/>
Only one value is missing; we will remove it when we finish cleaning and export the cleaned data set
```
weather.loc[weather['TAVG'].isnull() == True]
```
- Plot average temperature
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(20,10))
plt.hist(weather['TAVG'], 50)
plt.show()
```
### Visualize crime data
- Check null values in the data
```
crime.isnull().sum(axis=0)
```
- Remove districts that have the value "External" <br/>
These were created for testing.
```
crime = crime.drop(crime.loc[crime['DISTRICT'] == 'External'].index.tolist())
crime['DISTRICT'].unique()
```
- Remove null values that cannot be determined <br/>
We can use machine learning to fill in a null district or location, <br/>
but if both location and district are null we cannot do that. <br/>
Since there are only 48 rows out of 32,000 that have null values in both location and district, <br/>
we will just remove these rows.
```
crime.loc[(crime['DISTRICT'].isnull() == True) & ((crime['Lat'].isnull() == True) | (crime['Lat'] == -1))].count()
crime['DISTRICT'] = crime['DISTRICT'].fillna("none")
crime = crime.drop(crime.loc[(crime['DISTRICT'] == 'none') & ((crime['Lat'].isnull() == True) | (crime['Lat'] == -1))].index.tolist())
# we also drop data that have lat and long at -1
crime = crime.drop(crime.loc[crime['Lat'] == -1].index.tolist())
# verify that no rows remain with an unknown district and a missing or invalid location
crime.loc[(crime['DISTRICT'] == 'none') & ((crime['Lat'].isnull() == True) | (crime['Lat'] == -1))].count()
```
- Find the mean "Lat" and "Long" of each district for filling null values <br/>
```
crime_without_null = crime[['DISTRICT', 'Lat', 'Long']]
crime_without_null = crime_without_null.dropna(subset=['Lat', 'Long'])
crime_without_null['Lat'] = pd.to_numeric(crime_without_null['Lat'])
crime_without_null['Long'] = pd.to_numeric(crime_without_null['Long'])
mean_district = crime_without_null.groupby('DISTRICT').mean()
mean_district
```
- Fill null Lat and Long values of each district with its mean Lat and Long <br/>
```
crime['Lat'] = crime.apply(lambda x : mean_district[mean_district.index == x['DISTRICT']]['Lat'][0] if not x['Lat'] == x['Lat'] else x['Lat'], axis = 1)
crime['Long'] = crime.apply(lambda x : mean_district[mean_district.index == x['DISTRICT']]['Long'][0] if not x['Long'] == x['Long'] else x['Long'], axis = 1)
crime.isnull().sum()
```
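The row-wise `apply` above works but is slow on large frames; an equivalent, more idiomatic fill uses `groupby().transform('mean')`. A sketch on synthetic rows (not the real crime frame):

```
import numpy as np
import pandas as pd

# Synthetic stand-in for the crime frame: one row in district A1 has no location.
df = pd.DataFrame({
    "DISTRICT": ["A1", "A1", "B2", "B2"],
    "Lat": [42.35, np.nan, 42.30, 42.32],
    "Long": [-71.06, np.nan, -71.10, -71.12],
})
# Fill each NaN with the mean of its own district.
for col in ["Lat", "Long"]:
    df[col] = df[col].fillna(df.groupby("DISTRICT")[col].transform("mean"))
print(df["Lat"].tolist())  # [42.35, 42.35, 42.3, 42.32]
```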
- Plot location of each district<br/>
We can see that the "none" values are all over the place and some locations have a mislabelled district
```
# change all lat and long to number for plot
crime['Lat'] = pd.to_numeric(crime['Lat'])
crime['Long'] = pd.to_numeric(crime['Long'])
# plot each district location
districts = sorted(crime['DISTRICT'].unique())
plt.figure(figsize=(20,10))
for district in districts:
if district == "none":
plt.scatter(crime[crime['DISTRICT'] == district]['Long'],crime[crime['DISTRICT'] == district]['Lat'],label=district,alpha = 1, color='black')
else :
plt.scatter(crime[crime['DISTRICT'] == district]['Long'],crime[crime['DISTRICT'] == district]['Lat'],label=district,alpha = 0.5)
plt.legend()
plt.show()
from sklearn.neighbors import KNeighborsClassifier
# change none value back to null for train
crime['DISTRICT'] = crime.apply(lambda x : None if x['DISTRICT'] == 'none' else x['DISTRICT'], axis = 1)
# create dataframe for train
trainset = crime[crime['DISTRICT'].isnull() == False][0:21000] # the data is unclean, so train on only a subset of it
features=list(zip(trainset['Long'],trainset['Lat']))
print(features[0:5])
model = KNeighborsClassifier(n_neighbors=30)
# Train the model using the training sets
model.fit(features,trainset['DISTRICT'])
# Predict Output
predicted = model.predict([[0,2]])
print(predicted)
features = list(zip(crime['Long'],crime['Lat']))
predicted = model.predict(features)
crime['DISTRICT'] = predicted
districts = crime['DISTRICT'].unique().tolist()
plt.figure(figsize=(20,10))
for district in districts:
plt.scatter(crime[crime['DISTRICT'] == district]['Long'],crime[crime['DISTRICT'] == district]['Lat'],label=district,alpha = 0.5)
plt.legend()
plt.show()
```
### Export to csv file for analysis
```
crime['DATE'] = pd.to_datetime(crime['OCCURRED_ON_DATE']).dt.date
crimeandweather = pd.merge(crime, weather, on='DATE', how='left')
crimeandweather.head()
crimeandweather = crimeandweather.dropna(subset=['TAVG'])
crimeandweather.head()
crimeandweather.to_csv("data/crimeandweather.csv", index=False)
```
<h1>Testing the E2E simulations</h1>
## -- JWST aperture --
This script introduces the end-to-end (E2E) simulations that are used in **`calibration.py`**, for the influence calibration of each individual segment. The testing of the script itself is done in this next notebook.
```
import os
import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from astropy.io import fits
import astropy.units as u
import webbpsf
os.chdir('../../pastis/')
from config import CONFIG_PASTIS
import util as util
import image_pastis as impastis
# Path to all the outputs from "aperture_definition.py".
data_dir = '/Users/ilaginja/Documents/data_from_repos/pastis_data/active/calibration'
# Change into that directory
#os.chdir(data_dir)
os.environ['WEBBPSF_PATH'] = CONFIG_PASTIS.get('local', 'webbpsf_data_path')
print('Currently running on WebbPSF', webbpsf.version.version)
# Get some parameters
fpm = CONFIG_PASTIS.get('JWST', 'focal_plane_mask') # focal plane mask
lyot_stop = CONFIG_PASTIS.get('JWST', 'pupil_plane_stop') # Lyot stop
filter = CONFIG_PASTIS.get('JWST', 'filter_name') # filter
im_size_e2e = CONFIG_PASTIS.getint('numerical', 'im_size_px_webbpsf') # image size in pixels
wss_segs = webbpsf.constants.SEGNAMES_WSS_ORDER
nb_seg = CONFIG_PASTIS.getint('JWST', 'nb_subapertures')
zern_max = CONFIG_PASTIS.getint('zernikes', 'max_zern')
inner_wa = CONFIG_PASTIS.getint('JWST', 'IWA')
outer_wa = CONFIG_PASTIS.getint('JWST', 'OWA')
sampling = CONFIG_PASTIS.getfloat('JWST', 'sampling') # sampling
nm_aber = CONFIG_PASTIS.getfloat('JWST', 'calibration_aberration') * u.nm # [nm] amplitude of aberration
zern_number = CONFIG_PASTIS.getint('calibration', 'local_zernike') # Which (Noll) Zernike we are calibrating for
wss_zern_nb = util.noll_to_wss(zern_number) # Convert from Noll to WSS framework
```
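`util.noll_to_wss` is defined in `util.py` and not shown here. A minimal sketch of such a conversion, based on the `noll_as_wss` reorder array that appears later in this notebook (treat the mapping as illustrative, not as the real implementation):

```
import numpy as np

# Sketch of a Noll-to-WSS conversion for the first 8 Zernikes, using the
# reorder array from further below in this notebook. The real mapping lives
# in util.noll_to_wss.
noll_as_wss = np.array([1, 3, 2, 5, 4, 6, 7, 8])

def noll_to_wss_sketch(noll_index):
    """Return the WSS-ordered Zernike number for a 1-based Noll index."""
    return int(noll_as_wss[noll_index - 1])

print(noll_to_wss_sketch(2))  # Noll index 2 -> WSS index 3
```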
For starters, let's independently create some WebbPSF images: a direct image (no coronagraph) and a coronagraphic image.
```
# Create two NIRCam objects
nc = webbpsf.NIRCam()
nc_coro = webbpsf.NIRCam()
# Btw:
print('NIRCam pixelscale:', nc.pixelscale)
print('Telescope:', nc.telescope)
print('nc name:', nc.name)
print('NIRCam module used:', nc.module)
print('NIRCam list of image masks:', nc.image_mask_list)
print('NIRCam list of pupil masks:', nc.pupil_mask_list)
print('NIRCam currently used OPD:', nc.pupilopd)
print('NIRCam detector list:', nc.detector_list)
print('nc used detector:', nc.detector)
print('Pixel position in (X, Y) on the detector:', nc.detector_position)
print('NIRCam filter list:', nc.filter_list)
print('nc used filter:', nc.filter)
print('nc channel used:', nc.channel)
# Some displays
plt.figure(figsize=(19, 19))
nc.display()
plt.show()
#nc.calc_psf?
#nc.calcPSF?
# Show the pupil used
nc_pup = fits.getdata(nc.pupil)
plt.imshow(nc_pup)
plt.title('WebbPSF NIRCam pupil')
plt.show()
print('Pupil shape:', nc_pup.shape)
```
We can see here how big the pupil array used in the E2E simulations is, in terms of pixels. The pupil we generate in "aperture_generation.py" for PASTIS needs to have the same pupil array size! Eventually, this will be a number that we enter into the config file. Currently, the PASTIS image size *im_size_pastis* and the pupil size are the same.
```
# Null the OTE OPDs for the PSFs, and also the science instrument (SI) internal WFE.
nc, ote = webbpsf.enable_adjustable_ote(nc) # create OTE for default PSF
nc_coro, ote_coro = webbpsf.enable_adjustable_ote(nc_coro) # create OTE for coronagraph
ote.zero() # set OTE for direct PSF to zero
ote_coro.zero() # set OTE for coronagraph to zero
nc.include_si_wfe= False # set SI internal WFE to zero
nc_coro.include_si_wfe= False
# Display NIRCam instrument without OTE and SI WFE
plt.figure(figsize=(19, 19))
nc.display()
plt.show()
```
From the WebbPSF tutorial (https://github.com/mperrin/webbpsf/blob/master/notebooks/WebbPSF_tutorial.ipynb) we know that calc_psf() calculates images with different sampling (I think I also explain this in my notebook "DealingWithWebbPSF.ipynb") and we can access them in the different HDU extensions.
In that same notebook, I also explain why I use oversample=1 and nlambda=1 to make the calculations faster.
## NO CORONAGRAPH
### Generating a direct PSF without aberrations
```
# Let's see what the current direct PSF looks like (the coronagraphic PSF is the same, since both have been
# set up the same way and I haven't added the coronagraph yet)
psf_direct_hdu = nc.calc_psf(oversample=1, nlambda=1)
# Display by WebbPSF
plt.figure(figsize=(10,10))
webbpsf.display_psf(psf_direct_hdu)
plt.show()
# Display with matplotlib
psf_direct = psf_direct_hdu[1].data
print('PSF shape:', psf_direct.shape)
print('PSF max:', np.max(psf_direct))
# Keeping this since I don't tell WebbPSF how big I want my images to be.
# I will start telling it further below though, and then I'll start using
# the zoom() function.
xcen = int(psf_direct.shape[1]/2)
ycen = int(psf_direct.shape[0]/2)
boxhw = 27
plt.figure(figsize=(20,10))
plt.subplot(1, 2, 1)
plt.imshow(psf_direct, norm=LogNorm(), origin='lower') # WebbPSF uses origin='lower' too, which will
plt.title('Direct PSF') # be important later on with the coronagraphic images
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(psf_direct[ycen-boxhw:ycen+boxhw, xcen-boxhw:xcen+boxhw], norm=LogNorm(), origin='lower')
plt.title('Zoomed in')
plt.show()
```
We need to make a wavelength and filter choice:
```
# Add the filter we want to use
nc.filter = filter
nc_coro.filter = filter
# So far both nc objects are still the same, so I'll display only one
psf = nc.calc_psf(oversample=1, nlambda=1)
plt.figure(figsize=(20,10))
plt.subplot(1, 2, 1)
webbpsf.display_psf(psf)
psf = psf[1].data
# Still using the default image size from WebbPSF
xcen = int(psf.shape[1]/2)
ycen = int(psf.shape[0]/2)
boxhw = 27
plt.subplot(1, 2, 2)
plt.imshow(psf[ycen-boxhw:ycen+boxhw, xcen-boxhw:xcen+boxhw], norm=LogNorm(), origin='lower')
plt.title('Direct PSF')
plt.colorbar()
plt.show()
print('Max of direct PSF:', np.max(psf))
```
We want our images to be the same size as our simulations, so we use "fov_pixels".
```
# Both nc (non-coro and coro) objects are still the same, so I'll display only one.
# Now we're using our custom image size *im_size_e2e*.
psf = nc.calc_psf(fov_pixels=im_size_e2e, oversample=1, nlambda=1)
webbpsf.display_psf(psf)
plt.show()
```
We want to have normalized images, normalized to the non-coronagraphic, non-aberrated (meaning no segment is actively moved) PSF that is displayed above. **normp** will be our normalization factor.
```
normp = np.max(psf_direct)
```
Remember what the different HDU extensions in the WebbPSF images are:
- If oversample = 1: image calculation with detector sampling, and extensions 0 and 1 are the same
- If oversample > 1: the image calculation will be done with increased sampling and then binned down to detector sampling. This makes the calculation more accurate, since JWST observations will do things like dithering in order to make images better. It has to be done because some detectors at some wavelengths don't even have Nyquist sampling. Then ext=1 is the oversampled image and ext=0 is the binned image.
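The bin-down step can be illustrated with a reshape-and-sum on a toy array. This is just the standard block-binning trick, not WebbPSF's actual code:

```
import numpy as np

# Toy illustration of binning an oversampled image down to detector sampling.
oversample = 4
det_pix = 8
rng = np.random.default_rng(0)
fine = rng.random((det_pix * oversample, det_pix * oversample))  # "oversampled" image
# Group each oversample x oversample block of fine pixels into one detector pixel.
binned = fine.reshape(det_pix, oversample, det_pix, oversample).sum(axis=(1, 3))
print(binned.shape)                          # (8, 8) -> detector-sampled image
print(np.isclose(fine.sum(), binned.sum()))  # True: total flux is conserved
```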
```
# Look at the different extensions of the WebbPSF image
psf.info()
webbpsf.display_psf(psf, ext=3)
plt.show()
# Extract the numpy array
psf = psf[1].data
# Normalize the PSF
psf = psf/normp
print('Done')
# Display with matplotlib
# Now starting to use zoom_cen()
boxhw = 27
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf, norm=LogNorm(), origin='lower')
plt.title('Direct PSF')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(util.zoom_cen(psf, boxhw), norm=LogNorm(), origin='lower')
plt.title('Direct PSF - zoomed in')
plt.show()
print('Total image shape:', psf.shape)
print('PSF max:', np.max(psf))
```
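`util.zoom_cen` lives in `util.py` and is not shown here; a minimal center-crop sketch with the behavior assumed in this notebook (a square box of half-width `hw` around the image center):

```
import numpy as np

# Assumed behavior of util.zoom_cen: crop a (2*hw x 2*hw) box around the
# image center. Not the actual util.py implementation.
def zoom_cen_sketch(im, hw):
    yc, xc = im.shape[0] // 2, im.shape[1] // 2
    return im[yc - hw:yc + hw, xc - hw:xc + hw]

im = np.arange(100 * 100, dtype=float).reshape(100, 100)
print(zoom_cen_sketch(im, 27).shape)  # (54, 54)
```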
### A single aberrated segment
To compare to the analytical images step by step, I will first create images with only one segment aberrated.
```
segnum = 5 # Which segment are we aberrating - I number them starting with 1
segnum -= 1 # Which is why I have to subtract one, because WebbPSF starts numbering them at 0
#nm_aber = 100 # in input units
# Extract the correct segment name from WebbPSF
seg = wss_segs[segnum].split('-')[0]
print('Aberrated segment:', seg)
# Create arrays to hold Zernike aberration coefficients
Aber_WSS = np.zeros([nb_seg, zern_max]) # The Zernikes here will be filled in the WSS order!!!
# Because it goes into _apply_hexikes_to_seg().
# Feed the aberration nm_aber into the array position
# that corresponds to the correct Zernike, but only on segment i
Aber_WSS[segnum, wss_zern_nb-1] = nm_aber.to(u.m).value # Aberration on the segment we're currently working on;
# convert to meters; -1 on the Zernike because Python starts
# numbering at 0.
#-# Create OPD with aberrated segment, NO CORONAGRAPH
print('Applying aberration to OTE.')
print('nm_aber: {}'.format(nm_aber))
ote.reset() # Making sure there are no previous movements on the segments.
ote.zero() # For now, ignore internal WFE.
ote._apply_hexikes_to_seg(seg, Aber_WSS[segnum,:])
# Display the OTE
ote.display_opd()
plt.show()
# At this point, WebbPSF still numbers the segments wrong in the exit pupil,
# so it's the easiest to orient yourself by the spiders.
# Calculate the PSF
psf_minizern = nc.calc_psf(fov_pixels=im_size_e2e, oversample=1, nlambda=1)
webbpsf.display_psf(psf_minizern)
plt.show()
psf_minizern = psf_minizern[1].data/normp
# Display with matplotlib
boxhw = 27
print('psf_minizern.shape:', psf_minizern.shape)
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf_minizern, norm=LogNorm(), origin='lower')
plt.title('Equivalent of Envelope from mini Zernike')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(util.zoom_cen(psf_minizern, boxhw), norm=LogNorm(), origin='lower')
plt.title('Envelope - zoomed in')
plt.show()
```
Compare this image with one single aberrated segment vs. the non-aberrated PSF:
```
# Subtract the perfect direct PSF off the single-segment aberrated PSF
one_aber_residual = psf - psf_minizern
plt.figure(figsize=(20, 10))
plt.subplot(1, 3, 1)
plt.imshow(util.zoom_cen(psf, boxhw), norm=LogNorm(), origin='lower')
plt.title('Direct PSF, perfect')
#plt.colorbar()
plt.subplot(1, 3, 2)
plt.imshow(util.zoom_cen(psf_minizern, boxhw), norm=LogNorm(), origin='lower')
plt.title('Direct PSF one aberrated segment')
#plt.colorbar()
plt.subplot(1, 3, 3)
plt.imshow(util.zoom_cen(one_aber_residual, boxhw), norm=LogNorm(), origin='lower')
plt.title('Residual')
#plt.colorbar()
plt.show()
# Repeat directly on a smaller image instead of cropping it afterwards, for faster computation
# Calculate the PSF
psf_minizern = nc.calc_psf(fov_pixels=54, oversample=1, nlambda=1)
webbpsf.display_psf(psf_minizern)
plt.show()
psf_minizern = psf_minizern[1].data/normp
# Display with matplotlib
boxhw = 27
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf_minizern, norm=LogNorm(), origin='lower')
plt.title('PSF')
plt.colorbar()
plt.show()
"""
# Make a loop over the first eight Zernike envelopes, like in notebook 2
aber_wss_loop = np.zeros([nb_seg, 8])
psfs_env = []
plt.figure(figsize=(18, 60))
plt.suptitle('Different Zernikes envelopes from WebbPSF')
noll_as_wss = np.array([1, 3, 2, 5, 4, 6, 7, 8]) #, 11, 9, 10]) # reordering Noll Zernikes to WSS, for ease of use
print('nm_aber:', nm_aber, 'in input units')
for i, zern in enumerate(noll_as_wss):
# Put the Zernike coefficient in correct place in
aber_wss_loop[:,:] = 0 # set all entries to zero
aber_wss_loop[segnum, zern-1] = nm_aber / aber_u # fill only the index for current Zernike, in meters
#print(aber_wss_loop[segnum, :])
# Put Zernike on correct segment on OTE
ote.reset() # Making sure there are no previous movements on the segments.
ote.zero() # For now, ignore internal WFE.
ote._apply_hexikes_to_seg(seg, aber_wss_loop[segnum,:])
# Display the OTE
plt.subplot(8, 2, i*2+1)
ote.display_opd()
# Calculate the PSF
print('Calculating PSF', str(i+1) + '/' + '8')
psf_zernloop = nc.calc_psf(fov_pixels=54, oversample=1, nlambda=1)
psf_zernloop = psf_zernloop[1].data
psfs_env.append(psf_zernloop)
# Display the PSF
plt.subplot(8, 2, i*2+2)
plt.imshow(psf_zernloop, norm=LogNorm(), origin='lower')
plt.show()
psfs_env = np.array(psfs_env)
"""
# This was a thinking mistake from my side. I tried modeling the single Zernike envelope
# from the analytical model, which I can't do direcly in this simulation because I only
# have access to the full aperture.
# But I'll keep the code, because you never know what it could be useful for.
"""
# Display them
plt.figure(figsize=(16, 8))
for i in range(noll_as_wss.shape[0]):
plt.subplot(2, 4, i+1)
plt.imshow(psfs_env[i], norm=LogNorm(), origin='lower')
plt.title('Noll Zernike: ' + str(i+1))
plt.show()
"""
```
### Pair-wise aberrated segments
```
# Decide which two segments you want to aberrate
segnum1 = 8 # Which segments are we aberrating - I number them starting with 1
segnum2 = 16
segnum_array = np.array([segnum1, segnum2])
segnum_array -= 1 # Which is why I have to subtract one, because WebbPSF starts numbering them at 0
zern_pair = 1 # Which Noll Zernike are we putting on the segments.
# Extract the correct segment names from WebbPSF
seg_array = []
for i, senu in enumerate(segnum_array):
seg_array.append(wss_segs[senu].split('-')[0])
seg_array = np.array(seg_array)
print('Aberration: {}'.format(nm_aber))
print('Aberrated segments:', seg_array)
print('Noll Zernike used:', zern_pair)
aber_wss_loop = np.zeros([nb_seg, 8])
noll_as_wss = np.array([1, 3, 2, 5, 4, 6, 7, 8]) #, 11, 9, 10]) # reordering Noll Zernikes to WSS, for ease of use
print('nm_aber: {}'.format(nm_aber))
# Apply aberration to all segments
ote.reset() # Making sure there are no previous movements on the segments.
ote.zero() # For now, ignore internal WFE.
for i, nseg in enumerate(seg_array):
aber_wss_loop[segnum_array[i], noll_as_wss[zern_pair-1]-1] = nm_aber.to(u.m).value # fill only the index for current Zernike, in meters
# Put Zernike on correct segments on OTE
ote._apply_hexikes_to_seg(nseg, aber_wss_loop[segnum_array[i],:])
# Display the OTE
ote.display_opd()
plt.show()
# Calculate the PSF
psf_zernpair = nc.calc_psf(fov_pixels=154, oversample=1, nlambda=1) # larger FOV for better seeing the fringes
psf_zernpair = psf_zernpair[0].data/normp # extension 0 (identical to 1 when oversample=1)
# Display the PSF
plt.figure(figsize=(10, 10))
plt.subplot(1, 1, 1)
plt.imshow(psf_zernpair, norm=LogNorm(), origin='lower')
plt.title('Direct PSF of a pair-wise aberrated segmented OTE')
plt.colorbar()
plt.show()
print(psf_zernpair.shape)
```
I'm gonna stop here and go back to do the same thing with the analytical model in notebook 2. I am not sure the effect in the focal plane of me aberrating a pair of segments is really what it's supposed to be.
I created some images from specific pairs and then saved them:
```
#segs_3_11_noll_1_dir = np.copy(psf_zernpair)
#segs_11_17_noll_1_dir = np.copy(psf_zernpair)
#segs_6_11_noll_1_dir = np.copy(psf_zernpair)
#segs_9_2_noll_1_dir = np.copy(psf_zernpair)
#segs_9_5_noll_1_dir = np.copy(psf_zernpair)
#segs_9_15_noll_1_dir = np.copy(psf_zernpair)
#segs_8_1_noll_1_dir = np.copy(psf_zernpair)
#segs_8_6_noll_1_dir = np.copy(psf_zernpair)
#segs_8_16_noll_1_dir = np.copy(psf_zernpair)
save_dir1 = '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-25-16h-18min_piston_100nm'
# util.write_fits(segs_3_11_noll_1_dir, os.path.join(save_dir1, 'segs_3_11_noll_1_dir.fits'))
# util.write_fits(segs_11_17_noll_1_dir, os.path.join(save_dir1, 'segs_11_17_noll_1_dir.fits'))
# util.write_fits(segs_6_11_noll_1_dir, os.path.join(save_dir1, 'segs_6_11_noll_1_dir.fits'))
# util.write_fits(segs_9_2_noll_1_dir, os.path.join(save_dir1, 'segs_9_2_noll_1_dir.fits'))
# util.write_fits(segs_9_5_noll_1_dir, os.path.join(save_dir1, 'segs_9_5_noll_1_dir.fits'))
# util.write_fits(segs_9_15_noll_1_dir, os.path.join(save_dir1, 'segs_9_15_noll_1_dir.fits'))
# util.write_fits(segs_8_1_noll_1_dir, os.path.join(save_dir1, 'segs_8_1_noll_1_dir.fits'))
# util.write_fits(segs_8_6_noll_1_dir, os.path.join(save_dir1, 'segs_8_6_noll_1_dir.fits'))
# util.write_fits(segs_8_16_noll_1_dir, os.path.join(save_dir1, 'segs_8_16_noll_1_dir.fits'))
```
In general, I will have to load these images from central store:
- '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-18-17h-5min_piston_1000nm_pairs' will have images generated with aberrations of 1000 nm per segment which is too much compared to JWST's wavelength and the sort of aberrations that are expected in-flight
- '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-25-16h-18min_piston_100nm' has images generated with aberrations of 100 nm per segment, but this aberration is not high enough to make us see the fringes
```
read_dir1 = '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-18-17h-5min_piston_1000nm_pairs'
segs_3_11_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_3_11_noll_1_dir.fits'))
segs_11_17_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_11_17_noll_1_dir.fits'))
segs_6_11_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_6_11_noll_1_dir.fits'))
segs_9_2_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_9_2_noll_1_dir.fits'))
segs_9_5_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_9_5_noll_1_dir.fits'))
segs_9_15_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_9_15_noll_1_dir.fits'))
segs_8_1_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_8_1_noll_1_dir.fits'))
segs_8_6_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_8_6_noll_1_dir.fits'))
segs_8_16_noll_1_dir = fits.getdata(os.path.join(read_dir1, 'segs_8_16_noll_1_dir.fits'))
```
Let's have a look at some of the images (refer to the numbered pupil to identify the baselines these correspond to).
```
# Gotta check how big the loaded images are!
print('Loaded images shape:', segs_3_11_noll_1_dir.shape)
print('im_size_e2e:', im_size_e2e)
# If im_size_e2e is bigger than images we loaded, this won't work
# and you have to define a box half-size manually for imwidth.
boxw = int(im_size_e2e/2)
boxw2 = boxw // 2
if im_size_e2e < segs_3_11_noll_1_dir.shape[0]:
    imwidth = boxw2
else:
#raise Exception('! You have to set imwidth manually ! And then comment this line out.')
pass
# Choose what image size (in pixels) we want to display
imwidth = 40
plt.figure(figsize=(18, 12))
plt.suptitle('Pair-wise aberrations on direct (no coro) WebbPSF images')
plt.subplot(2, 3, 1)
plt.imshow(util.zoom_cen(segs_3_11_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 3 and 11')
plt.subplot(2, 3, 2)
plt.imshow(util.zoom_cen(segs_6_11_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 6 and 11')
plt.subplot(2, 3, 3)
plt.imshow(util.zoom_cen(segs_11_17_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 11 and 17')
plt.subplot(2, 3, 4)
plt.imshow(util.zoom_cen(segs_9_2_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 2')
plt.subplot(2, 3, 5)
plt.imshow(util.zoom_cen(segs_9_5_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 5')
plt.subplot(2, 3, 6)
plt.imshow(util.zoom_cen(segs_9_15_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 15')
plt.show()
```
I made sure to make images from the same aberrated pairs like in the analytical notebook (notebook 2), so we can compare them here now.
```
# Load the analytical images
read_dir_ana = '/astro/opticslab1/PASTIS/jwst_data/uncalibrated_analytical_images/2018-01-19-18h-31min_piston_1000nm_exitpupil'
segs_3_11_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_3_11_noll_1.fits'))
segs_11_17_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_11_17_noll_1.fits'))
segs_6_11_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_6_11_noll_1.fits'))
segs_9_2_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_9_2_noll_1.fits'))
segs_9_5_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_9_5_noll_1.fits'))
segs_9_15_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_9_15_noll_1.fits'))
segs_8_1_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_8_1_noll_1.fits'))
segs_8_6_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_8_6_noll_1.fits'))
segs_8_16_noll_1_ana = fits.getdata(os.path.join(read_dir_ana, 'segs_8_16_noll_1.fits'))
```
Compare pairs **3-11**, **6-11** and **11-17** between E2E and analytical:
```
# Choose what image size (in pixels) we want to display
imwidth = imwidth
plt.figure(figsize=(18, 12))
plt.suptitle('Comparison of E2E and analytical DIRECT images')
plt.subplot(2, 3, 1)
plt.imshow(util.zoom_cen(segs_3_11_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 3 and 11 - E2E')
plt.subplot(2, 3, 2)
plt.imshow(util.zoom_cen(segs_6_11_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 6 and 11 - E2E')
plt.subplot(2, 3, 3)
plt.imshow(util.zoom_cen(segs_11_17_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 11 and 17 - E2E')
plt.subplot(2, 3, 4)
plt.imshow(util.zoom_cen(segs_3_11_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 3 and 11 - analytical')
plt.subplot(2, 3, 5)
plt.imshow(util.zoom_cen(segs_6_11_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 6 and 11 - analytical')
plt.subplot(2, 3, 6)
plt.imshow(util.zoom_cen(segs_11_17_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 11 and 17 - analytical')
plt.show()
```
There is clearly a lot more going on in the WebbPSF images, since they incorporate many additional effects compared to the analytical images. But for the same aberrated segments, we can see fringes of the same structure and orientation, so I think this is fine!
## WITH CORONAGRAPH
### Generating a coronagraphic PSF without aberrations
```
# Now add the coronagraph to nc_coro
nc_coro.image_mask = fpm
nc_coro.pupil_mask = lyot_stop
# And show what that looks like
plt.figure(figsize=(18, 9))
psf_coro = nc_coro.calc_psf(fov_pixels=im_size_e2e, oversample=1, nlambda=1, display=True)
plt.show()
psf_coro_im = psf_coro[1].data/normp
print('PSF calculation done')
# I can't use webbpsf.display_psf(psf_coro) because I couldn't figure out how to change the color scaling
# and it turns out all black. So I'll just use matplotlib.
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf_coro_im, norm=LogNorm(), origin='lower')
plt.title('Coronagraphic PSF - zoomed in')
plt.subplot(1, 2, 2)
plt.imshow(util.zoom_cen(psf_coro_im, 70), norm=LogNorm(), origin='lower')
plt.title('Coronagraphic PSF')
plt.colorbar()
plt.show()
```
Let's confirm what image size we're using. The NIRCam field of view is 20'' and the plate scale in the long wavelength channel is 0.063''/pixel. This means if we divide 20 by 0.063, we can tell how big the total FoV is in NIRCam pixels.
```
print('NIRCam images will have', 20/0.063, 'pixels on each side of the detector.')
```
And we just rounded up to 320 pixels. For comparison here, we'll show the WebbPSF native display that gives you the image in terms of arcseconds - and you should see a 20'' x 20'' field of view (going from -10'' to 10'' on both axes).
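As a quick numeric sanity check, this is just the plate-scale arithmetic from above (the variable names are illustrative):

```python
# NIRCam: 20'' field of view, long-wavelength channel plate scale 0.063''/pixel
fov_arcsec = 20.0
plate_scale = 0.063                 # arcsec per pixel
npix = fov_arcsec / plate_scale     # ~317.5 pixels across the detector
im_size_e2e = 320                   # rounded up to a convenient even size
print(npix, im_size_e2e)
```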
```
# For comparison, the webbpsf display in physical units for the fov:
# Also, I have figured out here how to change the image scale.
plt.figure(figsize=(10, 10))
webbpsf.display_psf(psf_coro, vmin=1e-12, vmax=1e-6)
plt.show()
```
This is the place where we can see that in order to match our matplotlib displays of the PSF with that of WebbPSF, we need to use the keyword "origin='lower'" in imshow().
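A minimal sketch of what that keyword changes (pure NumPy, no plotting): with the default `origin='upper'`, `imshow()` draws row 0 of the array at the top (matrix convention), while `origin='lower'` draws row 0 at the bottom, i.e. the same picture you would get by vertically flipping the array first.

```python
import numpy as np

arr = np.arange(6).reshape(3, 2)
# imshow(arr, origin='lower') renders the same image as
# imshow(arr[::-1, :], origin='upper'):
flipped = arr[::-1, :]
print(flipped)   # arr's last row now comes first
```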
### A single aberrated segment
For the calibration of the analytical images, I need to create images that stem from a pupil with one single aberrated segment.
```
# Define what segment to aberrate
segnum = 5 # Which segment are we aberrating - I number them starting with 1
segnum -= 1 # Which is why I have to subtract one, because WebbPSF starts numbering them at 0
# Extract the correct segment name from WebbPSF
seg = wss_segs[segnum].split('-')[0]
print('Aberrated segment:', seg)
# Define what Noll Zernike we're using
zern_number = 1
wss_zern_nb = util.noll_to_wss(zern_number)
# Maybe play around with amount of aberration
#nm_aber = 1000. # in input units
# Create arrays to hold Zernike aberration coefficients
Aber_WSS = np.zeros([nb_seg, zern_max]) # The Zernikes here will be filled in the WSS order!!!
# Because it goes into _apply_hexikes_to_seg().
# Feed the aberration nm_aber into the array position
# that corresponds to the correct Zernike, but only on segment i
Aber_WSS[segnum, wss_zern_nb-1] = nm_aber.to(u.m).value # Aberration on the segment we're currently working on;
# convert to meters; -1 on the Zernike because Python starts
# numbering at 0.
#-# Create OPD with aberrated segment, NO CORONAGRAPH
print('Applying aberration to OTE.')
print('nm_aber: {}'.format(nm_aber))
ote_coro.reset() # Making sure there are no previous movements on the segments.
ote_coro.zero() # For now, ignore internal WFE.
ote_coro._apply_hexikes_to_seg(seg, Aber_WSS[segnum,:])
# Display the OTE
ote_coro.display_opd()
plt.show()
# At this point, WebbPSF still numbers the segments wrong in the exit pupil,
# so it's the easiest to orient yourself by the spiders.
# Calculate the PSF
psf_single_coro = nc_coro.calc_psf(fov_pixels=im_size_e2e, oversample=1, nlambda=1)
plt.figure(figsize=(10, 10))
webbpsf.display_psf(psf_single_coro, vmin=1e-12, vmax=1e-6)
plt.show()
psf_single_coro = psf_single_coro[1].data/normp
# Display with matplotlib
boxhw = im_size_e2e/2
box2 = boxhw/2
print('nm_aber: {}'.format(nm_aber))
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf_single_coro, norm=LogNorm(), origin='lower')
plt.title('One aberrated segment in coronagraphic setup')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(util.zoom_cen(psf_single_coro, box2), norm=LogNorm(), origin='lower')
plt.title('Zoomed in')
plt.show()
```
For piston, an aberration of 10 nm shows no effect, 100 nm already visibly degrades the PSF, and 1000 nm makes a very distinct change to the PSF, probably too much for PASTIS purposes.
### Pair-wise aberrated segments with coronagraph
```
# Decide which two segments you want to aberrate
segnum1 = 8 # Which segments are we aberrating - I number them starting with 1
segnum2 = 16
# Segment aberrations are additive, so if you use a segment number twice, the
# aberration will be applied twice!
segnum_array = np.array([segnum1, segnum2])
segnum_array -= 1 # Which is why I have to subtract one, because WebbPSF starts numbering them at 0
zern_pair = 1 # Which Noll Zernike are we putting on the segments.
# Extract the correct segment names from WebbPSF
seg_array = []
for i, senu in enumerate(segnum_array):
    seg_array.append(wss_segs[senu].split('-')[0])
seg_array = np.array(seg_array)
print('Aberration used: {}'.format(nm_aber))
print('Aberrated segments:', seg_array)
print('Noll Zernike used:', zern_pair)
aber_wss_loop = np.zeros([nb_seg, 8])
noll_as_wss = np.array([1, 3, 2, 5, 4, 6, 7, 8]) #, 11, 9, 10]) # reordering Noll Zernikes to WSS, for ease of use
print('nm_aber: {}'.format(nm_aber))
# Apply aberration to all segments
ote_coro.reset() # Making sure there are no previous movements on the segments.
ote_coro.zero() # For now, ignore internal WFE.
for i, nseg in enumerate(seg_array):
    aber_wss_loop[segnum_array[i], noll_as_wss[zern_pair-1]-1] = nm_aber.to(u.m).value   # fill only the index for current Zernike, in meters
    # Put Zernike on correct segments on OTE
    ote_coro._apply_hexikes_to_seg(nseg, aber_wss_loop[segnum_array[i],:])
# Display the OTE
ote_coro.display_opd()
plt.show()
# Calculate the PSF
psf_coro_pair = nc_coro.calc_psf(fov_pixels=im_size_e2e, oversample=1, nlambda=1)
psf_coro_pair = psf_coro_pair[0].data/normp # getting the oversampled extension
"""
# Display the PSF
plt.figure(figsize=(20, 10))
plt.subplot(1, 2, 1)
plt.imshow(psf_coro_pair, norm=LogNorm(), origin='lower')
plt.title('Pair-wise aberrated coronagraphic PSF')
plt.subplot(1, 2, 2)
plt.imshow(util.zoom_cen(psf_coro_pair, box2), norm=LogNorm(), origin='lower')
plt.title('Zoomed')
plt.show()
"""
print('nm_aber: {}'.format(nm_aber))
print('Aberrated segments:', seg_array)
print('Noll Zernike used:', zern_pair)
print(psf_coro_pair.shape)
# Create DH
dh_area = util.create_dark_hole(psf_coro_pair, inner_wa, outer_wa, sampling)
testim = psf_coro_pair * dh_area
#
contrast = np.mean(testim[np.where(testim != 0)])
print(contrast)
plt.imshow(testim)
plt.show()
#segs_3_11_noll_1_coro = np.copy(psf_coro_pair)
#segs_11_17_noll_1_coro = np.copy(psf_coro_pair)
#segs_6_11_noll_1_coro = np.copy(psf_coro_pair)
#segs_9_2_noll_1_coro = np.copy(psf_coro_pair)
#segs_9_5_noll_1_coro = np.copy(psf_coro_pair)
#segs_9_15_noll_1_coro = np.copy(psf_coro_pair)
#segs_8_1_noll_1_coro = np.copy(psf_coro_pair)
#segs_8_6_noll_1_coro = np.copy(psf_coro_pair)
#segs_8_16_noll_1_coro = np.copy(psf_coro_pair)
# Save to central store
save_dir1 = '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-22-9h-53min'
#util.write_fits(segs_3_11_noll_1_coro, os.path.join(save_dir1, 'segs_3_11_noll_1_coro.fits'))
#util.write_fits(segs_11_17_noll_1_coro, os.path.join(save_dir1, 'segs_11_17_noll_1_coro.fits'))
#util.write_fits(segs_6_11_noll_1_coro, os.path.join(save_dir1, 'segs_6_11_noll_1_coro.fits'))
#util.write_fits(segs_9_2_noll_1_coro, os.path.join(save_dir1, 'segs_9_2_noll_1_coro.fits'))
#util.write_fits(segs_9_5_noll_1_coro, os.path.join(save_dir1, 'segs_9_5_noll_1_coro.fits'))
#util.write_fits(segs_9_15_noll_1_coro, os.path.join(save_dir1, 'segs_9_15_noll_1_coro.fits'))
#util.write_fits(segs_8_1_noll_1_coro, os.path.join(save_dir1, 'segs_8_1_noll_1_coro.fits'))
#util.write_fits(segs_8_6_noll_1_coro, os.path.join(save_dir1, 'segs_8_6_noll_1_coro.fits'))
#util.write_fits(segs_8_16_noll_1_coro, os.path.join(save_dir1, 'segs_8_16_noll_1_coro.fits'))
# Read from central store
# 1000 nm aberrations:
read_dir1 = '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-22-9h-53min_coro_piston_1000nm_pairs'
# 100 nm aberrations
#read_dir1 = '/astro/opticslab1/PASTIS/jwst_data/E2E_pair_aberrations/2019-1-25-16h-18min_piston_100nm'
segs_3_11_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_3_11_noll_1.fits'))
segs_11_17_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_11_17_noll_1.fits'))
segs_6_11_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_6_11_noll_1.fits'))
segs_9_2_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_9_2_noll_1.fits'))
segs_9_5_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_9_5_noll_1.fits'))
segs_9_15_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_9_15_noll_1.fits'))
segs_8_1_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_8_1_noll_1.fits'))
segs_8_6_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_8_6_noll_1.fits'))
segs_8_16_noll_1_coro = fits.getdata(os.path.join(read_dir1, 'segs_8_16_noll_1.fits'))
# Have a look at the images
# Gotta check how big the loaded images are!
print('Loaded images shape:', segs_3_11_noll_1_coro.shape)
print('im_size_e2e:', im_size_e2e)
# If im_size_e2e is bigger than images we loaded, this won't work
# and you have to define a box half-size manually for imwidth.
boxw = int(im_size_e2e/2)
boxw2 = boxw/2
if im_size_e2e < segs_3_11_noll_1_coro.shape[0]:
    imwidth = boxw2
else:
    #raise Exception('! You have to set imwidth manually ! And then comment this line out.')
    pass
# Choose what image size (in pixels) we want to display
imwidth = 50
plt.figure(figsize=(18, 12))
plt.suptitle('Pair-wise aberration in coronagraphic WebbPSF images')
plt.subplot(2, 3, 1)
plt.imshow(util.zoom_cen(segs_3_11_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 3 and 11')
plt.subplot(2, 3, 2)
plt.imshow(util.zoom_cen(segs_6_11_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 6 and 11')
plt.subplot(2, 3, 3)
plt.imshow(util.zoom_cen(segs_11_17_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 11 and 17')
plt.subplot(2, 3, 4)
plt.imshow(util.zoom_cen(segs_9_2_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 2')
plt.subplot(2, 3, 5)
plt.imshow(util.zoom_cen(segs_9_5_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 5')
plt.subplot(2, 3, 6)
plt.imshow(util.zoom_cen(segs_9_15_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 9 and 15')
plt.show()
```
We're missing one intermediate baseline with these combinations, because we have to skip the center segment. I still want to know what that looks like, so here are some more images.
```
# Choose what image size (in pixels) we want to display
imwidth = imwidth
plt.figure(figsize=(18, 6))
plt.suptitle('Pair-wise aberrated coronagraphic WebbPSF images')
plt.subplot(1, 3, 1)
plt.imshow(util.zoom_cen(segs_8_1_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 8 and 1')
plt.subplot(1, 3, 2)
plt.imshow(util.zoom_cen(segs_8_6_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 8 and 6')
plt.subplot(1, 3, 3)
plt.imshow(util.zoom_cen(segs_8_16_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Piston on segments 8 and 16')
plt.show()
```
## COMPARING ANALYTICAL, E2E DIRECT AND E2E CORONAGRAPHIC
Display comparison for **piston** with the pairs **9-2**, **9-5** and **9-15**.
```
# Choose what image size (in pixels) we want to display
imwidth = imwidth
plt.figure(figsize=(18, 18))
plt.suptitle('Comparison of pair-wise aberration')
plt.subplot(3, 3, 1)
plt.imshow(util.zoom_cen(segs_9_2_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 9 and 2')
plt.subplot(3, 3, 2)
plt.imshow(util.zoom_cen(segs_9_5_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 9 and 5')
plt.subplot(3, 3, 3)
plt.imshow(util.zoom_cen(segs_9_15_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 9 and 15')
plt.subplot(3, 3, 4)
plt.imshow(util.zoom_cen(segs_9_2_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 9 and 2')
plt.subplot(3, 3, 5)
plt.imshow(util.zoom_cen(segs_9_5_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 9 and 5')
plt.subplot(3, 3, 6)
plt.imshow(util.zoom_cen(segs_9_15_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 9 and 15')
plt.subplot(3, 3, 7)
plt.imshow(util.zoom_cen(segs_9_2_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 9 and 2')
plt.subplot(3, 3, 8)
plt.imshow(util.zoom_cen(segs_9_5_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 9 and 5')
plt.subplot(3, 3, 9)
plt.imshow(util.zoom_cen(segs_9_15_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 9 and 15')
plt.show()
```
Display the comparison for **piston** with the pairs **8-1**, **8-6** and **8-16**.
```
# Choose what image size (in pixels) we want to display
imwidth = imwidth
plt.figure(figsize=(18, 18))
plt.suptitle('Comparison of pair-wise aberration')
plt.subplot(3, 3, 1)
plt.imshow(util.zoom_cen(segs_8_1_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 8 and 1')
plt.subplot(3, 3, 2)
plt.imshow(util.zoom_cen(segs_8_6_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 8 and 6')
plt.subplot(3, 3, 3)
plt.imshow(util.zoom_cen(segs_8_16_noll_1_ana, imwidth), norm=LogNorm(), origin='lower')
plt.title('Analytical - Piston on segments 8 and 16')
plt.subplot(3, 3, 4)
plt.imshow(util.zoom_cen(segs_8_1_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 8 and 1')
plt.subplot(3, 3, 5)
plt.imshow(util.zoom_cen(segs_8_6_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 8 and 6')
plt.subplot(3, 3, 6)
plt.imshow(util.zoom_cen(segs_8_16_noll_1_dir, imwidth), norm=LogNorm(), origin='lower')
plt.title('Direct WebbPSF - Piston on segments 8 and 16')
plt.subplot(3, 3, 7)
plt.imshow(util.zoom_cen(segs_8_1_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 8 and 1')
plt.subplot(3, 3, 8)
plt.imshow(util.zoom_cen(segs_8_6_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 8 and 6')
plt.subplot(3, 3, 9)
plt.imshow(util.zoom_cen(segs_8_16_noll_1_coro, imwidth), norm=LogNorm(), origin='lower')
plt.title('Coro WebbPSF - Piston on segments 8 and 16')
plt.show()
```
# Recurrent Neural Networks
Classical neural networks, including convolutional ones, suffer from two severe limitations:
+ They only accept a fixed-sized vector as input and produce a fixed-sized vector as output.
+ They do not consider the sequential nature of some data (language, video frames, time series, etc.)
Recurrent neural networks overcome these limitations by allowing the network to operate over sequences of vectors (in the input, in the output, or both).
An RNN can be interpreted as running a fixed program (consisting of a recurrent transformation that can be applied as many times as we like) with certain inputs and certain internal variables.
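To make the "fixed program" intuition concrete, here is a minimal vanilla-RNN step in NumPy (sizes and initialization are illustrative, not taken from the examples below): the same transformation, with the same weights, is applied at every time step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5

# Parameters of the fixed recurrent transformation (shared across all time steps)
W_xh = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # One application of the recurrence: new state from current input + old state
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Run the same transformation over a sequence of 7 input vectors
xs = rng.normal(size=(7, n_in))
h = np.zeros(n_hidden)
states = []
for x_t in xs:
    h = rnn_step(x_t, h)
    states.append(h)
```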
## Learning to add
Source: http://projects.rajivshah.com/blog/2016/04/05/rnn_addition/
The objective of this code developed by Rajiv Shah is to train a RNN for adding a sequence of integers.
```
# Import basic libraries
import numpy as np
import tensorflow as tf
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import rnn
from tensorflow.models.rnn import seq2seq
from numpy import sum
import matplotlib.pyplot as plt
from tqdm import *
%matplotlib inline
```
We will first define a set of hyperparameters, the most important being ``num_units``, the parameter that sets the size of the internal memory of the basic LSTM cell.
```
num_units = 50
input_size = 1
batch_size = 50
seq_len = 15
drop_out = 0.6
```
Then, we can write an auxiliary function to generate random sequences of integers (and the result of their addition):
```
# Creates our random sequences
def gen_data(min_length=5, max_length=15, n_batch=50):
    X = np.concatenate([np.random.randint(10, size=(n_batch, max_length, 1))],
                       axis=-1)
    y = np.zeros((n_batch,))
    # Compute masks and correct values
    for n in range(n_batch):
        # Randomly choose the sequence length
        length = np.random.randint(min_length, max_length)
        X[n, length:, 0] = 0
        # Sum the dimensions of X to get the target value
        y[n] = np.sum(X[n, :, 0]*1)
    return (X, y)
print gen_data(2,5,1)
```
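Before wiring this into TensorFlow, it helps to check what `gen_data` produces. Here is a standalone re-implementation with a fixed seed (NumPy only, names are illustrative) that mirrors the function above so its shapes and targets can be verified:

```python
import numpy as np

def gen_data_check(min_length=5, max_length=15, n_batch=50, seed=0):
    rng = np.random.RandomState(seed)
    X = rng.randint(10, size=(n_batch, max_length, 1))
    y = np.zeros((n_batch,))
    for n in range(n_batch):
        length = rng.randint(min_length, max_length)
        X[n, length:, 0] = 0          # zero out entries past the chosen length
        y[n] = X[n, :, 0].sum()       # target is the sum of the surviving digits
    return X, y

X, y = gen_data_check(2, 5, n_batch=3)
print(X.shape, y)   # (3, 5, 1) and the three per-sequence sums
```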
Now we are ready to start the model construction phase:
```
# Model architecture
num_layers = 2
cell = rnn_cell.BasicLSTMCell(num_units)
cell = rnn_cell.MultiRNNCell([cell] * num_layers)
cell = rnn_cell.DropoutWrapper(cell,output_keep_prob=drop_out)
# Create placeholders for X and y
inputs = [tf.placeholder(tf.float32,shape=[batch_size,1]) for _ in range(seq_len)]
result = tf.placeholder(tf.float32, shape=[batch_size])
# We initialize the initial cell state to 0
initial_state = cell.zero_state(batch_size, tf.float32)
outputs, states = seq2seq.rnn_decoder(inputs, initial_state, cell, scope ='rnnln')
# We are only interested in the final LSTM output value
outputs2 = outputs[-1]
# Tranformation of the final LSTM output value to a real value
W_o = tf.Variable(tf.random_normal([num_units,input_size], stddev=0.01))
b_o = tf.Variable(tf.random_normal([input_size], stddev=0.01))
outputs3 = tf.matmul(outputs2, W_o) + b_o
# Definition of the mean square loss function
cost = tf.pow(tf.sub(tf.reshape(outputs3, [-1]), result),2)
train_op = tf.train.RMSPropOptimizer(0.005, 0.2).minimize(cost)
### Generate Validation Data
tempX,y_val = gen_data(5,seq_len,batch_size)
X_val = []
for i in range(seq_len):
    X_val.append(tempX[:,i,:])
##Session
sess = tf.Session()
sess.run(tf.initialize_all_variables())
train_score =[]
val_score= []
x_axis=[]
num_epochs=1000
for k in tqdm(range(1, num_epochs)):
    # Generate data for each epoch
    tempX, y = gen_data(5, seq_len, batch_size)
    X = []
    for i in range(seq_len):
        X.append(tempX[:,i,:])
    # Create the dictionary of inputs to feed into sess.run
    temp_dict = {inputs[i]: X[i] for i in range(seq_len)}
    temp_dict.update({result: y})
    _, c_train = sess.run([train_op, cost], feed_dict=temp_dict)  # perform an update on the parameters
    val_dict = {inputs[i]: X_val[i] for i in range(seq_len)}  # create validation dictionary
    val_dict.update({result: y_val})
    c_val = sess.run([cost], feed_dict=val_dict)  # compute the cost on the validation set
    if k % 100 == 0:
        train_score.append(sum(c_train))
        val_score.append(sum(c_val))
        x_axis.append(k)
print "Final Train cost: {}, on Epoch {}".format(train_score[-1], k)
print "Final Validation cost: {}, on Epoch {}".format(val_score[-1], k)
plt.plot(train_score, 'r-', val_score, 'b-')
plt.show()
##This part generates a new validation set to test against
val_score_v =[]
num_epochs=1
for k in range(num_epochs):
    # Generate data for each epoch
    tempX, y = gen_data(5, seq_len, batch_size)
    X = []
    for i in range(seq_len):
        X.append(tempX[:,i,:])
    val_dict = {inputs[i]: X[i] for i in range(seq_len)}
    val_dict.update({result: y})
    outv, c_val = sess.run([outputs3, cost], feed_dict=val_dict)
    val_score_v.append([c_val])
##Target
tempX[3],y[3]
#Prediction
outv[3]
```
## Example
A Recurrent Neural Network (LSTM) implementation example using TensorFlow library.
This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/)
Long Short Term Memory paper: http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
```
import tensorflow as tf
from tensorflow.models.rnn import rnn, rnn_cell
import numpy as np
# Import MINST data
import sys
sys.path.insert(0, './helpers')
import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
```
To classify images using a recurrent neural network, we consider every image row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 steps for every sample.
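The rows-as-sequence idea is just a reshape; a toy sketch (the batch size of 4 is arbitrary):

```python
import numpy as np

batch = np.arange(4 * 784).reshape(4, 784).astype(np.float32)  # 4 flattened 28x28 images
seqs = batch.reshape(4, 28, 28)   # 28 time steps, each a 28-pixel row
# Time step t of sample 0 is simply row t of the original image:
print(np.array_equal(seqs[0, 3], batch[0, 3 * 28:4 * 28]))   # True
```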
```
# Parameters
learning_rate = 0.001
training_iters = 100000
batch_size = 128
display_step = 10
# Network Parameters
n_input = 28 # number of sequences for every sample
n_steps = 28 # number of timesteps for every sequence
n_hidden = 64 # hidden layer num of features
n_classes = 10 # total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
# Tensorflow LSTM cell requires 2x n_hidden length (state & cell)
istate = tf.placeholder("float", [None, 2*n_hidden])
y = tf.placeholder("float", [None, n_classes])
# Define weights
weights = {
'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),
# Hidden layer weights
'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))
}
biases = {
'hidden': tf.Variable(tf.random_normal([n_hidden])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
def RNN(_X, _istate, _weights, _biases):
    # Define a lstm cell with tensorflow
    lstm_cell = rnn_cell.BasicLSTMCell(n_hidden)
    # input shape: (batch_size, n_steps, n_input)
    _X = tf.transpose(_X, [1, 0, 2])  # permute n_steps and batch_size
    # Reshape to prepare input to hidden activation
    _X = tf.reshape(_X, [-1, n_input])  # (n_steps*batch_size, n_input)
    # Linear activation
    _X = tf.matmul(_X, _weights['hidden']) + _biases['hidden']
    # Split data because rnn cell needs a list of inputs
    # for the RNN inner loop
    _X = tf.split(0, n_steps, _X)  # n_steps * (batch_size, n_hidden)
    # Get lstm cell output
    outputs, states = rnn.rnn(lstm_cell, _X, initial_state=_istate)
    # Linear activation
    # Get inner loop last output
    return tf.matmul(outputs[-1], _weights['out']) + _biases['out']
pred = RNN(x, istate, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        # mnist.train.images is an ndarray
        # with a shape of [55000, 784]
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_xs = batch_xs.reshape((batch_size, n_steps, n_input))
        # Fit training using batch data
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys,
                                       istate: np.zeros((batch_size, 2*n_hidden))})
        if step % display_step == 0:
            # Calculate batch accuracy
            acc = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys,
                                                istate: np.zeros((batch_size, 2*n_hidden))})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_xs, y: batch_ys,
                                             istate: np.zeros((batch_size, 2*n_hidden))})
            print "Iter " + str(step*batch_size) + ", Minibatch Loss= " + "{:.6f}".format(loss) + \
                ", Training Accuracy= " + "{:.5f}".format(acc)
        step += 1
    print "Optimization Finished!"
    # Calculate accuracy for mnist test images
    test_len = 5000
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print "Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_data, y: test_label,
                                                             istate: np.zeros((test_len, 2*n_hidden))})
```
## Names
```
from __future__ import absolute_import, division, print_function
import tflearn
def textfile_to_seq(file, seq_maxlen=25, redun_step=3):
    """ string_to_semi_redundant_sequences.
    Vectorize a string and returns parsed sequences and targets, along with
    the associated dictionary.
    Arguments:
        file: `str`. Path to the input text file.
        seq_maxlen: `int`. Maximum length of a sequence. Default: 25.
        redun_step: `int`. Redundancy step. Default: 3.
    Returns:
        `tuple`: (inputs, targets, dictionary)
    """
    import numpy as np
    import re
    import codecs
    print("Vectorizing text...")
    f = codecs.open(file, "r", "utf-8")
    string = f.read()
    string.encode('utf-8')
    string = re.sub('([A-Z])', '^\\1', string).lower()
    chars = set()
    chars.update(string)
    char_idx = {c: i for i, c in enumerate(chars)}
    sequences = []
    next_chars = []
    for i in range(0, len(string) - seq_maxlen, redun_step):
        sequences.append(string[i: i + seq_maxlen])
        next_chars.append(string[i + seq_maxlen])
    X = np.zeros((len(sequences), seq_maxlen, len(chars)), dtype=np.bool)
    Y = np.zeros((len(sequences), len(chars)), dtype=np.bool)
    for i, seq in enumerate(sequences):
        for t, char in enumerate(seq):
            X[i, t, char_idx[char]] = 1
        Y[i, char_idx[next_chars[i]]] = 1
    print("Text total length: " + str(len(string)))
    print("Distinct chars: " + str(len(chars)))
    print("Total sequences: " + str(len(sequences)))
    return X, Y, char_idx
def random_sequence_from_string(string, seq_maxlen):
    import random
    rand_index = random.randint(0, len(string) - seq_maxlen - 1)
    return string[rand_index: rand_index + seq_maxlen]
def random_sequence_from_textfile(path, seq_maxlen):
    import codecs
    import re
    f = codecs.open(path, "r", "utf-8")
    text = f.read()
    text.encode('utf-8')
    text = re.sub('([A-Z])', '^\\1', text).lower()
    return random_sequence_from_string(text, seq_maxlen)
path = 'toponims.txt'
maxlen = 20
X, Y, char_idx = \
textfile_to_seq(path, seq_maxlen=maxlen, redun_step=2)
g = tflearn.input_data(shape=[None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 64, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 64)
g = tflearn.dropout(g, 0.5)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
learning_rate=0.01)
m = tflearn.SequenceGenerator(g, dictionary=char_idx,
seq_maxlen=maxlen,
clip_gradients=5.0)
for i in range(100):
    seed = random_sequence_from_textfile(path, maxlen)
    m.fit(X, Y, validation_set=0.1, batch_size=128,
          n_epoch=1, run_id='toponims')
    print("-- TESTING...")
    print("-- EPOCH = ", i)
    print("-- Test with temperature of 1.2 --")
    print(m.generate(30, temperature=1.2, seq_seed=seed))
    print("-- Test with temperature of 1.0 --")
    print(m.generate(30, temperature=1.0, seq_seed=seed))
    print("-- Test with temperature of 0.5 --")
    print(m.generate(30, temperature=0.5, seq_seed=seed))
```
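The temperature values used above (1.2, 1.0, 0.5) control how the next-character distribution is sampled. A minimal NumPy sketch of temperature scaling (the logits are made up for illustration):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Dividing logits by T < 1 sharpens the distribution (conservative
    # sampling); T > 1 flattens it (more diverse, riskier output).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()            # for numerical stability
    e = np.exp(scaled)
    return e / e.sum()

logits = [2.0, 1.0, 0.5]
p_sharp = softmax_with_temperature(logits, 0.5)
p_flat = softmax_with_temperature(logits, 1.2)
print(p_sharp.max() > p_flat.max())   # True: low temperature is peakier
```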
This examples shows how a classifier is optimized by cross-validation, which is done using the [sklearn.model_selection.GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) object on a development set that comprises only half of the available labeled data.
The performance of the selected hyper-parameters and trained model is then measured on a dedicated evaluation set that was not used during the model selection step.
More details on tools available for model selection can be found in the sections on [Cross-validation: evaluating estimator performance](http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation) and [Tuning the hyper-parameters of an estimator](http://scikit-learn.org/stable/modules/grid_search.html#grid-search).
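Conceptually, grid search with cross-validation is an exhaustive loop over a parameter grid, scoring each setting on k held-out folds and keeping the best mean score. A hand-rolled NumPy sketch on a made-up thresholding "classifier" (names and data are illustrative, not the scikit-learn API):

```python
import numpy as np

def grid_search_cv(X, y, param_grid, fit_score, k=5):
    # For every candidate parameter setting, average the score over
    # k held-out folds and keep the setting with the best mean score.
    folds = np.array_split(np.arange(len(X)), k)
    best_params, best_score = None, -np.inf
    for params in param_grid:
        scores = []
        for i in range(k):
            val = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(fit_score(X[train], y[train], X[val], y[val], params))
        mean = float(np.mean(scores))
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score

# Toy problem: the label is 1 whenever x > 0.5; "fitting" is trivial here.
X = np.linspace(0.0, 1.0, 100)
y = (X > 0.5).astype(int)
accuracy = lambda Xtr, ytr, Xv, yv, p: float(np.mean((Xv > p["thr"]).astype(int) == yv))
grid = [{"thr": t} for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
best, score = grid_search_cv(X, y, grid, accuracy)
print(best, score)   # {'thr': 0.5} 1.0
```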
### Version
```
import sklearn
sklearn.__version__
```
### Imports
This tutorial imports [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html#sklearn.model_selection.train_test_split), [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV), [classification_report](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html#sklearn.metrics.classification_report) and [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC).
```
from __future__ import print_function
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
print(__doc__)
```
### Calculations
```
# Loading the Digits dataset
digits = datasets.load_digits()
# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
scores = ['precision', 'recall']
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print()
clf = GridSearchCV(SVC(C=1), tuned_parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print()
# Note the problem is too easy: the hyperparameter plateau is too flat and the
# output model is the same for precision and recall with ties in quality.
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Parameter Estimation using Grid Search with Cross-Validation.ipynb', 'scikit-learn/grid-search-digits/', 'Parameter Estimation using Grid Search with Cross-Validation | plotly',
' ',
title = 'Parameter Estimation using Grid Search with Cross-Validation | plotly',
name = 'Parameter Estimation using Grid Search with Cross-Validation',
has_thumbnail='true', thumbnail='thumbnail/scikit-default.jpg',
language='scikit-learn', page_type='example_index',
display_as='model_selection', order=5,
ipynb= '~Diksha_Gabha/3420')
```
| github_jupyter |
# Documentation of Economic Analysis behind Simulation Engine - Part 3
In this notebook, we take a dynamical-system approach to analysing how an economic network responds to demand shocks. An economy initially in a steady state is perturbed by an impulse shock. The static view, in which the output of the economy is assumed to adjust to the shock instantaneously, is contrasted with the dynamical-system view, with particular attention to how output evolves over time.
**Table of contents**
* [Dynamics of Shock response](#shock_dynamics)
**Inputs**
- L matrix from _Documentation of Economic Analysis behind Simulation Engine - Part 1_
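For orientation, the static Leontief relationship used throughout is $x = Ld$, with $L = (I-A)^{-1}$. A minimal NumPy sketch with a made-up 2-sector technical-coefficient matrix (the numbers are purely illustrative, not from the notebook's data):

```python
import numpy as np

# hypothetical 2-sector technical-coefficient matrix A (made-up numbers)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
d = np.array([10.0, 5.0])          # final demand
x = L.dot(d)                       # total output required to satisfy d

# a unit change in demand for sector 0 propagates through the network via L
dx = L.dot(np.array([1.0, 0.0]))
print(x, dx)
```

Note that each column of `L` gives the economy-wide output response to a unit demand shock in one sector, which is exactly the quantity the shock experiments below probe.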
```
# Imports and path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx
import seaborn as sbs
import random
from scipy.integrate import odeint
%matplotlib inline
```
<a id='shock_dynamics'></a>
## Static vs Dynamic shock response
A simple class definition of an economic network built around the Leontief matrix, with functions to apply an impulse shock to a country or sector and to compute output for a given demand.
```
class LeonTradeModel:
def __init__(self,df_L,demand='unit'):
self.df_L = df_L
self.country_indices, self.sector_indices = df_L.index.get_level_values(0).values, df_L.index.get_level_values(1).values
self.countries, self.sectors = df_L.index.get_level_values(0).unique().values, df_L.index.get_level_values(1).unique().values
self.A = np.linalg.inv(self.df_L.values)
if (demand == 'unit'):
self.d_base = np.ones(len(df_L))
else:
self.d_base = demand
self.x_base = np.dot(self.df_L.values,self.d_base)
self.x_out = self.x_base
def shock_impulse(self,shock_val,tsteps=30,country_code=None,sector_code=None):
shock_vec = np.zeros(len(self.d_base))
if (sector_code is not None) & (country_code is not None):
#print(self.countries)
#print(np.where(self.countries==country_code)[0])
shocked_val_idx = (np.where(self.countries==country_code)[0][0] * len(set(self.sectors))) + np.where(self.sectors==sector_code)[0][0]
elif (sector_code is None) & (country_code is not None):
shocked_val_idx = np.where(self.country_indices==country_code)[0]
elif (country_code is None) & (sector_code is not None):
shocked_val_idx = np.where(self.sector_indices==sector_code)[0]
shock_vec[shocked_val_idx] = shock_val
return shock_vec
    def propagate_shock(self,shock_vec,inertia = 0,type='demand'):
        # Inertia models the time taken by the real economy to readjust.
        if type == 'demand':
            self.x_out = inertia*self.x_out + (1-inertia)*np.dot(self.df_L,shock_vec)
        else:
            self.x_out = inertia*self.x_out + (1-inertia)*np.dot(self.df_L.T,shock_vec)
        self.dem_out = shock_vec
        return self.x_out
```
Initialize an economic network object with the actual Leontief matrix
```
# Read the L matrix and construct a dataframe with 2 indices country and sector
df_lev = pd.read_csv('< path >/Leontief_inverse_world.csv',low_memory=False,index_col=[0,1],skiprows=[0,1])
# Remap the column names
df_lev.columns = df_lev.index.get_level_values(1).values
network_model = LeonTradeModel(df_lev)
```
Now supply an impulse shock at time=0 to, say, Sector 28 in Germany (DEU) and see the effect on the same Sector 28 in the USA
```
shock_vec = network_model.shock_impulse(shock_val=0.01,country_code='DEU',sector_code='28')
demand_vec = shock_vec#network_model.d_base-shock_vec
x_out = network_model.propagate_shock(demand_vec)
out_change = x_out#(network_model.x_base-x_out)/network_model.x_base
print('Shock to Sector 28 in the US %s'%(out_change[1873]))
print('Overall output contraction %s'%(sum(out_change)))
```
Visualize how the shock to Sector 28 in Germany affects other sectors in the same country
```
german_sector_indices = np.where(network_model.country_indices=='DEU')[0]
#german_sector_indices = german_sector_indices[german_sector_indices!=28]
response_shock_vals = (out_change[german_sector_indices])/max(shock_vec)
response_sector_mask = response_shock_vals<1
plt.figure(figsize=(4,12))
from matplotlib.colors import LinearSegmentedColormap
cmap=LinearSegmentedColormap.from_list('gymr',["g", "y","m", "r"], N=256)
ax = sbs.heatmap((response_shock_vals[response_sector_mask]).reshape(-1,1),annot=False,ax=None,cmap=cmap)
ax.set_yticklabels(network_model.sectors[response_sector_mask],rotation=0);
```
Essentially, the numbers above say that a 0.01-unit impulse demand shock to Sector 28 in Germany at time t=0 causes a shock of 2.2e-6 units in the same sector in the US at that instant, and an overall contraction of world economic output of 0.016 units.
Let us now make the shock response dynamic. Following the work of Klimek et al. (https://www.nature.com/articles/s41467-019-09357-w.pdf), we model the output dynamics with a simple first-order ODE,
$\dot{Y} = (A - I)\,Y + D$,
where $Y$, $\dot{Y}$, $A$, $I$ and $D$ denote the output vector, its time derivative, the I/O matrix, the identity matrix of the corresponding shape, and the demand vector, respectively.
A useful technique is to characterize the system response by perturbing the system with an impulse. Unlike a mathematical impulse, which has infinite height and zero width, in this analysis the impulse is modelled as a finite pulse of a given height and duration.
For ease of analysis, the shock pulse is assumed to start at time 0 and end at time step 1. We use standard numerical integration to solve for the system response
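Before running the full model, note that for a *constant* demand $D$ the ODE's steady state satisfies $0 = (A-I)Y + D$, i.e. $Y = (I-A)^{-1}D = LD$, recovering the static Leontief output. A minimal forward-Euler sanity check on a made-up 2-sector system (illustrative numbers, not the notebook's data):

```python
import numpy as np

# hypothetical 2-sector I/O matrix with spectral radius < 1 (illustrative numbers)
A = np.array([[0.3, 0.2],
              [0.1, 0.3]])
D = np.array([1.0, 0.5])               # demand held constant (not an impulse)
L = np.linalg.inv(np.eye(2) - A)       # Leontief inverse

y = np.zeros(2)
dt = 0.01
for _ in range(5000):                  # forward-Euler steps of y' = (A - I) y + D
    y = y + dt * ((A - np.eye(2)).dot(y) + D)

print(y, L.dot(D))                     # the trajectory settles at the static output L D
```

An impulse that is switched off, as in the cell below, instead produces a transient that decays back toward zero.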
```
def economic_dynamics_ode(y,t,A,D,step_width=1):
if (t < step_width):
d = D
else:
d = np.zeros(len(D))
return np.dot(A-np.eye(A.shape[0]),y) + d
sol = odeint(economic_dynamics_ode, np.zeros(len(network_model.df_L)), np.linspace(0,10,10), args=(-network_model.A,shock_vec))
```
Let us now look at the time evolution of the affected sectors in Germany.
```
fig,axs=plt.subplots(2,1,figsize=(12,18),sharex=True)
axs[0].step(range(10),np.array([0,0.01,0,0,0,0,0,0,0,0]),'r')
axs[0].set_ylabel('External Shock to Sector 28 (DEU)');
axs[1].plot(range(10),sol[:,german_sector_indices[response_sector_mask]],'-');
axs[1].set_xlabel('Time (units)');
axs[1].set_ylabel('Relative sectoral shocks (Germany)');
leg = axs[1].legend(network_model.sectors[response_sector_mask],prop={'size':8},loc='upper right')
plt.draw()
bb = leg.get_bbox_to_anchor().inverse_transformed(ax.transAxes)
xOffset = -0.25
bb.x0 += xOffset
bb.x1 += xOffset - 0.1
yOffset = -0.10
bb.y0 += yOffset
bb.y1 += yOffset - yOffset - 0.15
leg.set_bbox_to_anchor(bb, transform = ax.transAxes)
plt.show()
#yticklabels=np.exp(plt.gca().get_yticks())
#from matplotlib.ticker import FormatStrFormatter
#ax = plt.gca()
#fmt=lambda x: "{:.2E}".format(x)
#ax.set_yticklabels([fmt(_tickval) for _tickval in yticklabels]);
#ax.yaxis.set_yticklabels(FormatStrFormatter('%.2f'))
#plt.gca().set_yticklabels(np.array(sol[:,german_sector_indices]),rotation=0);
#plt.legend(range(len(german_sector_indices)),network_model.sectors[german_sector_indices])
```
and the total contraction of global output is given by
```
np.sum(sol,axis=1)[-1]
```
Note that the above estimates are a bit more conservative than the simple static computation and naturally exhibit a richer temporal structure
#### Authors
* **Álvaro Corrales Cano** is a Data Scientist within IBM's Cloud Pak Acceleration team. With a background in Economics, Álvaro specialises in a wide array of econometric techniques and causal inference, including regression, discrete choice models, time series and duration analysis.
* **Deepak Shankar Srinivasan** is a Developer in R2 Data Labs, Rolls Royce Deutschland, Germany, specializing in Data Science applications for Equipment Health Management and Deep Domain Specific Smart Assistants.
Copyright © IBM Corp. 2020. Licensed under the Apache License, Version 2.0. Released as licensed Sample Materials.
| github_jupyter |
```
import numpy as np
import random
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (10.0, 8.0)
from sklearn.datasets import make_biclusters
from sklearn.datasets import samples_generator as sg
# from sklearn.cluster.bicluster import SpectralCoclustering
from sklearn.metrics import consensus_score
from biclustering import DeltaBiclustering, MSR
%pylab inline
from IPython.core.display import Image
Image(filename='synteticBiclusters.png')
def generate_dataset(option, noise=1, noise_background=True, shuffle=False):
shape = (150,150)
n,m = shape
# cluster center values shouldn't be too far apart...
centers = [20, 40, 60, 80, 100]
y_row = np.zeros(150)
y_col = np.zeros(150)
if noise_background:
data = np.random.rand(n, m)*100
else:
data = np.zeros(n*m).reshape(shape)
if option == 'a':
data[60:110][:,70:140] = np.random.rand(50,70)*noise + centers[0]
y_row[60:110] += 1
y_col[70:140] += 1
elif option == 'd':
data[0:50][:,0:70] = np.random.rand(50,70)*noise + centers[0]
y_row[0:50] += 1
y_col[0:70] += 1
data[50:100][:,50:100] = np.random.rand(50,50)*noise + centers[2]
y_row[50:100] += 2
y_col[50:100] += 2
data[100:150][:,80:150] = np.random.rand(50,70)*noise + centers[1]
y_row[100:150] += 3
y_col[80:150] += 3
elif option == 'e':
data[0:70][:,0:50] = np.random.rand(70,50)*noise + centers[3]
y_row[0:70] += 1
y_col[0:50] += 1
data[50:100][:,50:100] = np.random.rand(50,50)*noise + centers[1]
y_row[50:100] += 2
y_col[50:100] += 2
data[80:150][:,100:150] = np.random.rand(70,50)*noise + centers[2]
y_row[80:150] += 3
y_col[100:150] += 3
elif option == 'f':
data[0:50][:,0:40] = np.random.rand(50,40)*noise + centers[4]
y_row[0:50] += 1
y_col[0:40] += 1
data[50:150][:,0:40] = np.random.rand(100,40)*noise + centers[0]
y_row[50:150] += 2
data[110:150][:,40:95] = np.random.rand(40,55)*noise + centers[2]
y_row[110:150] += 3
y_col[40:95] += 2
data[110:150][:,95:150] = np.random.rand(40,55)*noise + centers[1]
y_row[110:150] += 3
y_col[95:150] += 3
elif option == 'g':
data[0:110][:,0:40] = np.random.rand(110,40)*noise + centers[0]
data[110:150][:,0:110] = np.random.rand(40,110)*noise + centers[2]
data[40:150][:,110:150] = np.random.rand(110,40)*noise + centers[1]
data[0:40][:,40:150] = np.random.rand(40,110)*noise + centers[3]
elif option == 'h':
data[0:90][:,0:90] = np.random.rand(90,90)*noise + centers[0]
data[35:55][:,35:55] = (np.random.rand(20,20)*noise + centers[1]) + data[35:55][:,35:55]
data[110:140][:,35:90] = np.random.rand(30,55)*noise + centers[4]
data[0:140][:,110:150] = np.random.rand(140,40)*noise + centers[2]
data[0:55][:,130:150] = (np.random.rand(55,20)*noise + centers[3]) + data[0:55][:,130:150]
elif option == 'i':
data[20:70][:,20:70] = np.random.rand(50,50)*noise + centers[0]
data[20:70][:,100:150] = np.random.rand(50,50)*noise + centers[1]
data[50:110][:,50:120] = np.random.rand(60,70)*noise + centers[2]
data[120:150][:,20:100] = np.random.rand(30,80)*noise + centers[3]
if shuffle:
np.random.shuffle(data)
np.random.shuffle(data.T)
return data, y_row, y_col
```
# Nonnegative Block Value Decomposition
```
%%latex
This is a coclustering algorithm called Block Value Decomposition (BVD), based on the Nonnegative Matrix Factorization (NMF)
technique.
The goal is to find a factorization of the data matrix $X \in \mathbb{R}_{+}^{n \times m}$, where $n$ is the number of objects
and $m$ is the number of features of these objects. The factorization takes the form $X \approx USV^T$, where
$U \in \mathbb{R}_{+}^{n \times k}$ is a matrix of row factors representing row clusters,
$S \in \mathbb{R}_{+}^{k \times l}$ is a block matrix representing how the blocks are related, and
$V \in \mathbb{R}_{+}^{m \times l}$ is a matrix of column factors representing column clusters.
%%latex
This algorithm solves the following optimization problem:
$$\textit{min } ||X - USV^T||^2 \textit{ s.t. } U \geq 0, S \geq 0, V \geq 0$$
%%latex
The optimization problem can be solved using Lagrange multipliers ($\lambda$), optimizing the following Lagrange function:
$${\cal L} = ||X - USV^T||^2 - tr(\lambda_1U^T) - tr(\lambda_2S^T) - tr(\lambda_3V^T)$$
%%latex
Then ${\cal L}$ must satisfy the K.K.T. conditions:
$$\frac{\partial {\cal L}}{\partial U} = 0$$
$$\frac{\partial {\cal L}}{\partial S} = 0$$
$$\frac{\partial {\cal L}}{\partial V} = 0$$
$$\lambda_1 \odot U = 0$$
$$\lambda_2 \odot S = 0$$
$$\lambda_3 \odot V = 0$$
%%latex
Setting these derivatives to zero and using the complementary slackness conditions, it is possible to solve the
optimization problem with the following multiplicative update rules:
$$U \gets U \odot \frac{XVS^T}{USV^TVS^T}$$
$$V \gets V \odot \frac{X^TUS}{VS^TU^TUS}$$
$$S \gets S \odot \frac{U^TXV}{U^TUSV^TV}$$
def nmtf(X, k, l, num_iter=1):
"""
Performs nonnegative block value decomposition
Parameters
----------
    X : array matrix of shape (n_samples, n_features)
        The data matrix to factorize.
k : int
number of rows clusters
l : int
number of columns clusters
num_iter : int, optional
number of iterations to make the factorization
Returns
-------
    U : array matrix of shape (n_samples, n_rows_clusters)
        matrix of row factors representing row clusters
    S : array matrix of shape (n_rows_clusters, n_columns_clusters)
        block matrix representing how blocks are related
    V : array matrix of shape (n_columns_clusters, n_features)
        matrix of column factors representing column clusters
    """
def scale(A):
return (A - A.min()) / (A.max() - A.min())
n, m = X.shape
U = (np.random.rand(n,k))
S = (np.random.rand(k,l))
V = (np.random.rand(l,m))
X_norm = (X)
    for i in range(num_iter):
U_delta = (X_norm.dot(V.T).dot(S.T)) / (U.dot(S).dot(V).dot(V.T).dot(S.T))
U_new = np.multiply(U,U_delta)
V_delta = (S.T.dot(U.T).dot(X_norm)) / (S.T.dot(U.T).dot(U).dot(S).dot(V))
V_new = np.nan_to_num(np.multiply(V,V_delta))
S_delta = (U.T.dot(X_norm).dot(V.T)) / (U.T.dot(U).dot(S).dot(V).dot(V.T))
S_new = np.nan_to_num(np.multiply(S,S_delta))
        err = np.sum((X - U_new.dot(S_new).dot(V_new)) ** 2)  # track reconstruction error
U = U_new
V = V_new
S = S_new
# normalization
# U_diag = np.diag(np.diag(np.ones(n).dot(U)))
# V_diag = np.diag(np.diag(np.ones(m).dot(U.T).T))
# U = np.multiply(U, np.diag(S.dot(V_diag)))
# V = np.multiply(V, np.diag(U_diag.dot(S)))
return U, S, V
```
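As a quick sanity check of the multiplicative update rules above, on a small random nonnegative matrix each iteration should keep all three factors nonnegative and, in practice, drive the reconstruction error down. A minimal self-contained sketch (illustrative only; a small constant is added to the denominators to avoid division by zero, which the derivation does not include):

```python
import numpy as np

np.random.seed(0)
X = np.random.rand(20, 15)                 # small random nonnegative data matrix
k, l, eps = 3, 3, 1e-9
U = np.random.rand(20, k)                  # row factors (n x k)
S = np.random.rand(k, l)                   # block matrix (k x l)
V = np.random.rand(l, 15)                  # column factors (l x m), as in nmtf() above

def err(U, S, V):
    return np.sum((X - U.dot(S).dot(V)) ** 2)

e0 = err(U, S, V)
for _ in range(50):
    # same multiplicative updates as in the derivation, one factor at a time
    U *= X.dot(V.T).dot(S.T) / (U.dot(S).dot(V).dot(V.T).dot(S.T) + eps)
    V *= S.T.dot(U.T).dot(X) / (S.T.dot(U.T).dot(U).dot(S).dot(V) + eps)
    S *= U.T.dot(X).dot(V.T) / (U.T.dot(U).dot(S).dot(V).dot(V.T) + eps)
e1 = err(U, S, V)
print(e0, "->", e1)                        # error is non-increasing across iterations
```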
# Fast Nonnegative Matrix Tri Factorization
```
%%latex
In this case, the goal is to optimize the following problem:
$$\textit{min } ||X - USV^T||^2 \textit{ s.t. } U \in \Psi^{n \times k}, S \in \mathbb{R}_{+}^{k \times l}, V \in \Psi^{m \times l}$$
where $U$ and $V$ turn into cluster indicator matrices, with rows $\vec{u_i}$ and $\vec{v_j}$ that contain a $1$ in
exactly one position, indicating the cluster to which the vector belongs, and $0$s elsewhere.
%%latex
In contrast to the previous algorithm, $S$ is updated in closed form, and $U$ and $V$ are obtained by solving the following assignment subproblems:
$$S \gets (U^TU)^{-1}U^TXV(V^TV)^{-1}$$
$$v_{ij} = \left\{
\begin{array}{ll}
1 & j = \textit{argmin}_l ||\vec{x_i} - \vec{\tilde{u_l}}||^2 \\
0 & \textit{otherwise}
\end{array}
\right.$$
$$u_{ij} = \left\{
\begin{array}{ll}
1 & i = \textit{argmin}_k ||\vec{x_j} - \vec{\tilde{v_k}}||^2 \\
0 & \textit{otherwise}
\end{array}
\right.$$
where $\tilde{U} = US$ and $\tilde{V} = SV^T$
def fnmtf(X, k, l, num_iter=1):
    n, m = X.shape
    # initialize U and V as random cluster indicator matrices
    U = np.zeros((n, k))
    U[np.arange(n), np.random.randint(k, size=n)] = 1
    V = np.zeros((m, l))
    V[np.arange(m), np.random.randint(l, size=m)] = 1
    S = np.random.rand(k, l)
    X_norm = (X)
    for it in range(num_iter):
        # closed-form update of the block matrix S
        S = np.linalg.pinv(U.T.dot(U)).dot(U.T).dot(X_norm).dot(V).dot(np.linalg.pinv(V.T.dot(V)))
        # solve subproblem to update V: assign each column of X to the
        # closest column of U_tilde = US
        U_tilde = U.dot(S)
        V_new = np.zeros((m, l))
        for i in range(m):
            subproblem_result = np.zeros(l)
            for j in range(l):
                subproblem_result[j] = np.linalg.norm(X_norm[:, i] - U_tilde[:, j])
            V_new[i][np.argmin(subproblem_result)] = 1
        V = V_new
        # solve subproblem to update U: assign each row of X to the
        # closest row of V_tilde = SV^T
        V_tilde = S.dot(V.T)
        U_new = np.zeros((n, k))
        for j in range(n):
            subproblem_result = np.zeros(k)
            for i in range(k):
                subproblem_result[i] = np.linalg.norm(X_norm[j, :] - V_tilde[i, :])
            U_new[j][np.argmin(subproblem_result)] = 1
        U = U_new
    return U, S, V
X, _, _ = generate_dataset(option='a', noise=1.0, noise_background=False)
plt.matshow(X, cmap=plt.cm.Blues)
plt.title("Original Data")
plt.show()
U, S, V = nmtf(X,2,2,1)
plt.matshow(U, cmap=plt.cm.Blues)
plt.matshow(S, cmap=plt.cm.Blues)
plt.matshow(V, cmap=plt.cm.Blues)
plt.matshow(U.dot(S).dot(V), cmap=plt.cm.Blues)
X = np.zeros(10*10).reshape((10,10))
X[3:8][:,3:8] = X[3:8][:,3:8] + 20
plt.matshow(X, cmap=plt.cm.Blues)
plt.title("Original Data")
plt.show()
U, S, V = nmtf(X,2,2,1)
plt.matshow(U, cmap=plt.cm.Blues)
plt.matshow(S, cmap=plt.cm.Blues)
plt.matshow(V, cmap=plt.cm.Blues)
plt.matshow(U.dot(S).dot(V), cmap=plt.cm.Blues)
X = np.zeros(10*10).reshape((10,10))
X[0:4][:,0:4] = X[0:4][:,0:4] + 20
X[2:8][:,4:6] = X[2:8][:,4:6] + 40
X[6:10][:,6:10] = X[6:10][:,6:10] + 60
plt.matshow(X, cmap=plt.cm.Blues)
plt.title("Original Data")
plt.show()
U, S, V = nmtf(X,4,4,9)
plt.matshow(U, cmap=plt.cm.Blues)
plt.matshow(S, cmap=plt.cm.Blues)
plt.matshow(V, cmap=plt.cm.Blues)
plt.matshow(U.dot(S).dot(V), cmap=plt.cm.Blues)
plt.matshow(generate_dataset(option='g', noise=1.0, noise_background=False)[0], cmap=plt.cm.Blues)
plt.title("Original Data")
plt.show()
data, _, _ = generate_dataset(option='g', noise=1.0, noise_background=False, shuffle=False)
X = data
U, S, V = nmtf(X,5,5,num_iter=1)
plt.matshow(U, cmap=plt.cm.Blues)
plt.matshow(S, cmap=plt.cm.Blues)
plt.matshow(V, cmap=plt.cm.Blues)
plt.matshow(U.dot(S).dot(V), cmap=plt.cm.Blues)
```
| github_jupyter |
**Chapter 10 – Introduction to Artificial Neural Networks**
_This notebook contains all the sample code and solutions to the exercises in chapter 10._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Perceptrons
**Note**: we set `max_iter` and `tol` explicitly to avoid warnings about the fact that their default value will change in future versions of Scikit-Learn.
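The update rule that the `Perceptron` class implements can be sketched in plain NumPy; the toy dataset, margin filter and learning rate below are illustrative and are not what scikit-learn uses internally:

```python
import numpy as np

# toy linearly separable data with a clear margin (illustrative)
rng = np.random.RandomState(42)
X = rng.rand(200, 2)
X = X[np.abs(X[:, 0] + X[:, 1] - 1) > 0.1]   # drop points too close to the boundary
y = (X[:, 0] + X[:, 1] > 1).astype(int)

w, b, eta = np.zeros(2), 0.0, 0.1            # weights, bias, learning rate
for epoch in range(1000):
    mistakes = 0
    for xi, yi in zip(X, y):
        y_hat = int(w.dot(xi) + b >= 0)
        if y_hat != yi:                      # perceptron rule: update only on mistakes
            w += eta * (yi - y_hat) * xi
            b += eta * (yi - y_hat)
            mistakes += 1
    if mistakes == 0:                        # converged: a full clean pass
        break

acc = np.mean([int(w.dot(xi) + b >= 0) == yi for xi, yi in zip(X, y)])
print("epochs:", epoch + 1, "training accuracy:", acc)
```

On linearly separable data like this, the perceptron convergence theorem guarantees the loop terminates with every point classified correctly.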
```
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int)
per_clf = Perceptron(max_iter=100, tol=-np.infty, random_state=42)
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]
axes = [0, 5, 0, 2]
x0, x1 = np.meshgrid(
np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")
plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)
save_fig("perceptron_iris_plot")
plt.show()
```
# Activation functions
```
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=1, label="Step")
plt.plot(z, sigmoid(z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=1, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(sigmoid, z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
save_fig("activation_functions_plot")
plt.show()
def heaviside(z):
return (z >= 0).astype(z.dtype)
def mlp_xor(x1, x2, activation=heaviside):
return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)
z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)
plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)
```
# FNN for MNIST
## Using the Estimator API (formerly `tf.contrib.learn`)
```
import tensorflow as tf
```
**Warning**: `tf.examples.tutorials.mnist` is deprecated. We will use `tf.keras.datasets.mnist` instead. Moreover, the `tf.contrib.learn` API was promoted to `tf.estimators` and `tf.feature_columns`, and it has changed considerably. In particular, there is no `infer_real_valued_columns_from_input()` function or `SKCompat` class.
```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
y_test = y_test.astype(np.int32)
X_valid, X_train = X_train[:5000], X_train[5000:]
y_valid, y_train = y_train[:5000], y_train[5000:]
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300,100], n_classes=10,
feature_columns=feature_cols)
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train}, y=y_train, num_epochs=40, batch_size=50, shuffle=True)
dnn_clf.train(input_fn=input_fn)
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_test}, y=y_test, shuffle=False)
eval_results = dnn_clf.evaluate(input_fn=test_input_fn)
eval_results
y_pred_iter = dnn_clf.predict(input_fn=test_input_fn)
y_pred = list(y_pred_iter)
y_pred[0]
```
## Using plain TensorFlow
```
import tensorflow as tf
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")
def neuron_layer(X, n_neurons, name, activation=None):
with tf.name_scope(name):
n_inputs = int(X.get_shape()[1])
stddev = 2 / np.sqrt(n_inputs)
init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
W = tf.Variable(init, name="kernel")
b = tf.Variable(tf.zeros([n_neurons]), name="bias")
Z = tf.matmul(X, W) + b
if activation is not None:
return activation(Z)
else:
return Z
with tf.name_scope("dnn"):
hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 40
batch_size = 50
def shuffle_batch(X, y, batch_size):
rnd_idx = np.random.permutation(len(X))
n_batches = len(X) // batch_size
for batch_idx in np.array_split(rnd_idx, n_batches):
X_batch, y_batch = X[batch_idx], y[batch_idx]
yield X_batch, y_batch
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
print(epoch, "Batch accuracy:", acc_batch, "Val accuracy:", acc_val)
save_path = saver.save(sess, "./my_model_final.ckpt")
with tf.Session() as sess:
saver.restore(sess, "./my_model_final.ckpt") # or better, use save_path
X_new_scaled = X_test[:20]
Z = logits.eval(feed_dict={X: X_new_scaled})
y_pred = np.argmax(Z, axis=1)
print("Predicted classes:", y_pred)
print("Actual classes: ", y_test[:20])
from tensorflow_graph_in_jupyter import show_graph
show_graph(tf.get_default_graph())
```
## Using `dense()` instead of `neuron_layer()`
Note: previous releases of the book used `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or be deleted without notice. The `dense()` function is almost identical to the `fully_connected()` function, except for a few minor differences:
* several parameters are renamed: `scope` becomes `name`, `activation_fn` becomes `activation` (and similarly the `_fn` suffix is removed from other parameters such as `normalizer_fn`), `weights_initializer` becomes `kernel_initializer`, etc.
* the default `activation` is now `None` rather than `tf.nn.relu`.
* a few more differences are presented in chapter 11.
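Whatever the API names, a fully connected layer just computes `activation(X W + b)`. A minimal NumPy sketch of the forward pass (the initialization heuristic mirrors the `neuron_layer()` defined earlier; the batch and layer sizes are illustrative):

```python
import numpy as np

def dense_forward(X, n_neurons, rng, activation=None):
    """Forward pass of one fully connected layer: activation(X W + b)."""
    n_inputs = X.shape[1]
    stddev = 2 / np.sqrt(n_inputs)                 # same heuristic as neuron_layer() above
    W = rng.randn(n_inputs, n_neurons) * stddev    # the "kernel"
    b = np.zeros(n_neurons)                        # the "bias"
    Z = X.dot(W) + b
    return activation(Z) if activation is not None else Z

rng = np.random.RandomState(42)
X = rng.rand(4, 28 * 28)                           # a fake batch of 4 flattened images
relu = lambda z: np.maximum(0, z)
hidden = dense_forward(X, 300, rng, activation=relu)
logits = dense_forward(hidden, 10, rng)
print(hidden.shape, logits.shape)
```

The `dense()` call below builds exactly this computation as graph nodes, with trainable `W` and `b`.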
```
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
y_proba = tf.nn.softmax(logits)
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_epochs = 20
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_batch = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_valid = accuracy.eval(feed_dict={X: X_valid, y: y_valid})
print(epoch, "Batch accuracy:", acc_batch, "Validation accuracy:", acc_valid)
save_path = saver.save(sess, "./my_model_final.ckpt")
show_graph(tf.get_default_graph())
```
# Exercise solutions
## 1. to 8.
See appendix A.
## 9.
_Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot learning curves using TensorBoard, and so on)._
First let's create the deep net. It's exactly the same as earlier, with just one addition: we add a `tf.summary.scalar()` to track the loss and the accuracy during training, so we can view nice learning curves using TensorBoard.
```
n_inputs = 28*28 # MNIST
n_hidden1 = 300
n_hidden2 = 100
n_outputs = 10
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None), name="y")
with tf.name_scope("dnn"):
hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1",
activation=tf.nn.relu)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2",
activation=tf.nn.relu)
logits = tf.layers.dense(hidden2, n_outputs, name="outputs")
with tf.name_scope("loss"):
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
loss_summary = tf.summary.scalar('log_loss', loss)
learning_rate = 0.01
with tf.name_scope("train"):
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
with tf.name_scope("eval"):
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
accuracy_summary = tf.summary.scalar('accuracy', accuracy)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
```
Now we need to define the directory to write the TensorBoard logs to:
```
from datetime import datetime
def log_dir(prefix=""):
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
if prefix:
prefix += "-"
name = prefix + "run-" + now
return "{}/{}/".format(root_logdir, name)
logdir = log_dir("mnist_dnn")
```
Now we can create the `FileWriter` that we will use to write the TensorBoard logs:
```
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
```
Hey! Why don't we implement early stopping? For this, we are going to need to use the validation set.
```
m, n = X_train.shape
n_epochs = 10001
batch_size = 50
n_batches = int(np.ceil(m / batch_size))
checkpoint_path = "/tmp/my_deep_mnist_model.ckpt"
checkpoint_epoch_path = checkpoint_path + ".epoch"
final_model_path = "./my_deep_mnist_model"
best_loss = np.infty
epochs_without_progress = 0
max_epochs_without_progress = 50
with tf.Session() as sess:
if os.path.isfile(checkpoint_epoch_path):
# if the checkpoint file exists, restore the model and load the epoch number
with open(checkpoint_epoch_path, "rb") as f:
start_epoch = int(f.read())
print("Training was interrupted. Continuing at epoch", start_epoch)
saver.restore(sess, checkpoint_path)
else:
start_epoch = 0
sess.run(init)
for epoch in range(start_epoch, n_epochs):
for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
accuracy_val, loss_val, accuracy_summary_str, loss_summary_str = sess.run([accuracy, loss, accuracy_summary, loss_summary], feed_dict={X: X_valid, y: y_valid})
file_writer.add_summary(accuracy_summary_str, epoch)
file_writer.add_summary(loss_summary_str, epoch)
if epoch % 5 == 0:
print("Epoch:", epoch,
"\tValidation accuracy: {:.3f}%".format(accuracy_val * 100),
"\tLoss: {:.5f}".format(loss_val))
saver.save(sess, checkpoint_path)
with open(checkpoint_epoch_path, "wb") as f:
f.write(b"%d" % (epoch + 1))
if loss_val < best_loss:
saver.save(sess, final_model_path)
best_loss = loss_val
epochs_without_progress = 0  # reset the patience counter on improvement
else:
epochs_without_progress += 5
if epochs_without_progress > max_epochs_without_progress:
print("Early stopping")
break
os.remove(checkpoint_epoch_path)
with tf.Session() as sess:
saver.restore(sess, final_model_path)
accuracy_val = accuracy.eval(feed_dict={X: X_test, y: y_test})
accuracy_val
```
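The early-stopping bookkeeping in the loop above is independent of TensorFlow. Here is a minimal plain-Python sketch of the same logic (including an explicit reset of the patience counter when the validation loss improves), assuming the loss is evaluated every 5 epochs as above:

```python
def early_stopping(val_losses, patience=50, eval_every=5):
    """Replay the bookkeeping on a list of validation losses (one entry
    per evaluation). Returns (best_loss, epochs_consumed)."""
    best_loss = float("inf")
    epochs_without_progress = 0
    consumed = 0
    for loss_val in val_losses:
        consumed += eval_every
        if loss_val < best_loss:
            best_loss = loss_val          # the full loop also saves the model here
            epochs_without_progress = 0   # progress was just made: reset
        else:
            epochs_without_progress += eval_every
        if epochs_without_progress > patience:
            break                         # early stopping
    return best_loss, consumed

# A loss curve that improves twice, then plateaus: training stops early.
print(early_stopping([1.0, 0.9, 0.95, 0.96, 0.97], patience=10))  # (0.9, 25)
```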
# TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "v3b".
* You can find your original work saved in the notebook with the previous version name (it may be either "TensorFlow Tutorial version 3" or "TensorFlow Tutorial version 3a".)
* To view the file directory, click on the "Coursera" icon in the top left of this notebook.
#### List of updates
* forward_propagation instruction now says 'A1' instead of 'a1' in the formula for Z2,
and 'A2' instead of 'Z2' in the formula for Z3.
* create_placeholders instruction now refers to the data type "tf.float32" instead of float.
* in the model function, the x axis of the plot now says "iterations (per fives)" instead of "iterations (per tens)".
* In the linear_function, comments remind students to create the variables in the order suggested by the starter code. The comments are updated to reflect this order.
* The test of the cost function now creates the logits without passing them through a sigmoid function (since the cost function will include the sigmoid in the built-in tensorflow function).
* Updated print statements and 'expected output' cells that are used to check functions, for easier visual comparison.
## 1 - Exploring the Tensorflow Library
To start, you will import the library:
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
```
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
```
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
```
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to create `init = tf.global_variables_initializer()` and run it with `session.run(init)`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print it.
Now let us look at an easy example. Run the cell below:
```
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
```
As expected, you will not see 20! You got a tensor object: it tells you the result will be an "int32" tensor, but it carries no value yet. All you did was add the operation to the 'computation graph'; you have not actually run the computation. In order to multiply the two numbers, you will have to create a session and run it.
```
sess = tf.Session()
print(sess.run(c))
```
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
```
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
```
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
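As an analogy only (not TensorFlow code): building the graph is like defining a Python function of its placeholders, and running the session with a `feed_dict` is like calling that function with concrete arguments.

```python
# Build phase: describe the computation; 'x' is not bound to a value yet,
# playing the role of the placeholder above.
double = lambda x: 2 * x

# Run phase: supply a concrete value, as feed_dict={x: 3} does in sess.run.
result = double(3)
print(result)  # 6, the same value sess.run(2 * x, feed_dict={x: 3}) prints
```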
### 1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
```
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run( Y )
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = \n" + str(linear_function()))
```
**Expected Output**:
```
result =
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
```
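Since `tf.constant` here just wraps arrays produced by `np.random.randn`, the expected output can be reproduced with NumPy alone, which makes a handy sanity check (the arrays must be created in the same order so the seed yields the same draws):

```python
import numpy as np

np.random.seed(1)
X = np.random.randn(3, 1)   # same creation order as in linear_function
W = np.random.randn(4, 3)
b = np.random.randn(4, 1)
Y = W @ X + b               # numpy equivalent of tf.add(tf.matmul(W, X), b)
print(Y)
# [[-2.15657382]
#  [ 2.95891446]
#  [-1.08926781]
#  [-0.84538042]]
```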
### 1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.
**Exercise**: Implement the sigmoid function below. You should use the following:
- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name='x')
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as session:
# Run session and call the output "result"
result = session.run(sigmoid, feed_dict={ x: z })
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
```
**Expected Output**:
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
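Both expected values follow directly from the definition $\sigma(z) = 1/(1+e^{-z})$; a quick NumPy check:

```python
import numpy as np

def sigmoid_np(z):
    # Same math tf.sigmoid applies element-wise
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

print(sigmoid_np(0))                     # 0.5
print(round(float(sigmoid_np(12)), 6))   # 0.999994
```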
<font color='blue'>
**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
### 1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
**Exercise**: Implement the cross entropy loss. The function you will use is:
- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`
Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$ for each example. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes, element-wise,
$$- \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
```
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name='z')
y = tf.placeholder(tf.float32, name='y')
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z: logits, y: labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = np.array([0.2,0.4,0.7,0.9])
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
```
**Expected Output**:
```
cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]
```
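For each element, `tf.nn.sigmoid_cross_entropy_with_logits` computes $-(y \log \sigma(z) + (1-y)\log(1-\sigma(z)))$; the numerically stable form below reproduces the expected output with NumPy:

```python
import numpy as np

def sigmoid_cross_entropy(z, y):
    # Stable rewrite of -(y*log(sigmoid(z)) + (1-y)*log(1 - sigmoid(z)))
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

z = np.array([0.2, 0.4, 0.7, 0.9])
y = np.array([0., 0., 1., 1.])
print(sigmoid_cross_entropy(z, y))  # matches the expected output above
```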
### 1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
- tf.one_hot(labels, depth, axis)
**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
```
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j has label i, then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name='C')
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = \n" + str(one_hot))
```
**Expected Output**:
```
one_hot =
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
```
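With `axis=0`, classes run along rows and examples along columns; the same conversion in NumPy is an identity-matrix row lookup followed by a transpose:

```python
import numpy as np

def one_hot_matrix_np(labels, C):
    # Row i <-> class i, column j <-> example j, as with tf.one_hot(..., axis=0)
    return np.eye(C)[labels].T

labels = np.array([1, 2, 3, 0, 2, 1])
print(one_hot_matrix_np(labels, 4))  # same matrix as the expected output above
```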
### 1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
**Exercise:** Implement the function below to take in a shape and return an array of ones with that shape.
- tf.ones(shape)
```
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
```
**Expected Output:**
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
# 2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
- Create the computation graph
- Run the graph
Let's delve into the problem you'd like to solve!
### 2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
```
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
Change the index below and run the cell to visualize some examples in the dataset.
```
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
```
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
### 2.1 - Create placeholders
Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session.
**Exercise:** Implement the function below to create the placeholders in tensorflow.
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "tf.float32"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "tf.float32"
Tips:
- You will use None because it lets us be flexible on the number of examples used for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, shape=(n_x, None), name='X')
Y = tf.placeholder(tf.float32, shape=(n_y, None), name='Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```
**Expected Output**:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
### 2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
### 2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation
**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
```
**Expected Output**:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
### 2.4 - Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, `tf.reduce_mean` basically does the summation over the examples.
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
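To see what the one-liner computes, here is a NumPy sketch of the same quantity under the (number of examples, num_classes) layout used after the transposes: a numerically stable log-softmax over the class axis, then the mean cross-entropy over examples.

```python
import numpy as np

def softmax_cross_entropy_mean(logits, labels):
    # logits, labels: (num_examples, num_classes); labels are one-hot rows
    shifted = logits - logits.max(axis=1, keepdims=True)            # stability shift
    log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean((labels * log_softmax).sum(axis=1))             # tf.reduce_mean part

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy_mean(logits, labels))  # about 0.417
```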
### 2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.
**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable).
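Conceptually, each such `sess.run` call applies one update of the form $w \leftarrow w - \alpha \, \partial J / \partial w$ to every trainable variable. A toy NumPy-free illustration on $J(w) = (w-3)^2$ (plain gradient descent; Adam layers adaptive step sizes on top of this idea):

```python
def gradient_descent_step(w, grad, learning_rate):
    # The update optimizer.minimize(cost) applies to each variable per run
    return w - learning_rate * grad

w = 0.0
for _ in range(200):
    grad = 2 * (w - 3)          # dJ/dw for J(w) = (w - 3)^2
    w = gradient_descent_step(w, grad, learning_rate=0.1)
print(w)  # converges to the minimizer w = 3
```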
### 2.6 - Building the model
Now, you will bring it all together!
**Exercise:** Implement the model. You will be calling the functions you had previously implemented.
```
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
# print("num_minibatches", num_minibatches)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
```
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
```
parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected Output**:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
**Insights**:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
### 2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```
You indeed deserved a "thumbs-up", although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up" images, so the model doesn't know how to deal with them! We call this a "mismatched data distribution," and it is one of the topics covered in the next course, "Structuring Machine Learning Projects".
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
- Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session
- Initialize the session
- Run the session to execute the graph
- You can execute the graph multiple times as you've seen in model()
- The backpropagation and optimization are automatically done when running the session on the "optimizer" object.
```
# trees.r
# MWL, Lecture 2
# Author(s): [Phil Snyder]
#install.packages("mlbench", repos="http://cran.rstudio.com/") # we can download new libraries right from the R terminal!
library("mlbench")
#help(package="mlbench")
data(BreastCancer)
#help(topic="BreastCancer", package="mlbench")
BreastCancer$Id <- NULL # Just get rid of this column. We won't need it.
# Let's fit a tree to our data
library(rpart) # rpart stands for "recursive partitioning"
basicTree <- rpart(Class ~ ., BreastCancer, method='class')
basicTree
plot(basicTree)
text(basicTree, cex=0.7) # cex controls text size
```
The split labels are generated by R, so they don't appear to make any sense. See the text description of the tree above for the actual split criteria.
```
basicTreePredictions <- predict(basicTree, BreastCancer, type='class')
basicTreeResults <- table(basicTreePredictions == BreastCancer$Class) / length(basicTreePredictions)
basicTreeResults
# Why don't we try growing our tree so far down that every node contains only one class?
godTree <- rpart(Class ~ ., BreastCancer, method='class',
control=c(cp=-1, minsplit=2, minbucket=1))
plot(godTree)
text(godTree, cex=0.5)
godTreePredictions <- predict(godTree, BreastCancer, type='class')
godTreeResults <- table(godTreePredictions == BreastCancer$Class) / length(godTreePredictions)
godTreeResults
```
Amazing! We have a perfect predictor. All other machine learning algorithms cower in
sight of the glorious predictive power of godTree.
Of course, I'm kidding. What happens when a new data point comes along and, due to
uncertainty, randomness, etc., doesn't conform perfectly to the model we have constructed?
We are interested in how well our predictor performs on *unseen*, future data, so we must
'holdout' some data when we fit our model, then see how well the model performs on our
holdout data. This will give us a good estimate of how well our model *actually* performs
on unseen data.
In general, we partition our data into a 'train' set, which we fit our model to,
and a 'test' set, which we evaluate our model on. This is the most basic form of
cross-validation.
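In code, the same train/test partition idea looks like this — a pure-Python sketch with stand-in row indices, mirroring the `sample`-based split used in the R code:

```python
import random

random.seed(1)
n_rows = 100                                        # stand-in for nrow(BreastCancer)
train_idx = random.sample(range(n_rows), int(0.7 * n_rows))
test_idx = [i for i in range(n_rows) if i not in set(train_idx)]

print(len(train_idx), len(test_idx))  # 70 30
```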
```
partition <- sample(nrow(BreastCancer), floor(0.7 * nrow(BreastCancer)))
trainData <- BreastCancer[partition,]
testData <- BreastCancer[-partition,] # can you guess what the '-' operator is doing here?
godTree <- rpart(Class ~ ., trainData, method='class',
control=c(cp=-1, minsplit=2, minbucket=1))
godTreeTrainPredictions <- predict(godTree, trainData, type='class')
godTreeTestPredictions <- predict(godTree, testData, type='class')
godTreeTrainResults <- table(godTreeTrainPredictions
== trainData$Class) / length(godTreeTrainPredictions)
godTreeTrainResults # Accuracy on train set
godTreeTestResults <- table(godTreeTestPredictions
== testData$Class) / length(godTreeTestPredictions)
godTreeTestResults # Accuracy on test set
```
When we have significantly higher accuracy on the training data as opposed to the
test data, this is called 'overfitting'. We fit our model to the train data **too** well.
We now have a model, and a way to more accurately test the predictive power of our model by partitioning our data into a training set and a test set. The question remains, what parameters should our model have, and what values should they take on? In the case of linear regression, our parameters are the slope and intercept of the regression line. In the case of decision trees, our parameters should control how far down we grow our tree. There are a few ways to do this, but in the rpart function we may control the growth of our tree by varying the complexity parameter (cp), the minimum # of data points in a node needed to consider a split (minsplit), the minimum # of data points that are allowed to sit in a leaf (minbucket), or the maxdepth of the tree (maxdepth). You may look up what exactly a [complexity parameter](https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf) is (Page 25), but all you really need to know is that the lower your cp, the more your tree will grow (subject to the minsplit, minbucket, maxdepth constraints). Setting cp = -1 (like in the godTree example) will tell rpart to keep splitting until it cannot split anymore (again, subject to the minsplit, minbucket, maxdepth constraints).
In general, finding optimal parameters is an optimization problem. Usually a numerical optimization problem (i.e., there is no closed form optimal solution). More on this in later lectures.
We will use an algorithm called 'grid search' to find an optimal parameter set. Grid search is just a fancy name for trying-every-reasonable-combination-of-parameters. Since cp, minsplit, minbucket, and maxdepth are each different ways of measuring the same thing, we can effectively tell rpart to ignore the minsplit, minbucket, and maxdepth constraints and only consider the cp.
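Stripped of the modeling details, grid search is just a loop over candidate parameter values that keeps the best one. A minimal Python sketch, with a toy loss function standing in for "fit a tree at this cp and measure its test loss":

```python
def toy_loss(cp):
    # Hypothetical stand-in for: fit rpart with this cp, return test-set loss.
    return (cp - 0.001) ** 2

cp_values = [0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]
best_cp = min(cp_values, key=toy_loss)  # try every candidate, keep the best
print(best_cp)  # 0.001
```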
```
# R is a functional language, so it can be frustratingly difficult to do something as
# simple as add an element to the end of an array (vector). We use the foreach library
# to streamline the process. We will also be using a new dataset from a new library
# ElemStatLearn, 'Spam', which is more difficult to classify and illustrates
# the concept better.
#install.packages(c("foreach", "ElemStatLearn"), repos="http://cran.rstudio.com/")
library("ElemStatLearn")
library("foreach")
data(spam)
#help(topic="spam", package="ElemStatLearn")
partition <- sample(nrow(spam), floor(0.7 * nrow(spam)))
trainData <- spam[partition,]
testData <- spam[-partition,]
cpValues <- c(0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001)
treesLoss <- foreach(val = cpValues, .combine='c') %do% {
ctrl <- rpart.control(cp=val, minsplit=2, minbucket=1) # maxdepth defaults to 30
tree <- rpart(spam ~ ., trainData, method='class', control=ctrl)
treePredictions <- predict(tree, testData, type='class')
# proportion incorrect
loss <- table(treePredictions == testData$spam)["FALSE"][[1]] / length(treePredictions)
return(loss)
}
results <- data.frame(cp = cpValues, loss = treesLoss)
plot(results, log='x', xlim=c(max(results$cp), min(results$cp)), type='o') # x log scale and reversed
```
Great. Around 1e-3 seems optimal. BUT, we have made yet another naive mistake. Decision trees are *high variance* predictors. This means that the decision trees we generate are highly dependent on the specific data points in our training dataset. If we had sampled a different training set (and, as a consequence, a different test set), it's possible we would have found a different optimal value. To counterbalance this variability, we use k-fold cross-validation. Wikipedia has <a href="https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation">a nice paragraph on k-fold CV</a>.
In the link above, you can think of 'validation set' as a kind of test set. K-fold CV reduces variability (in the traditional statistical sense of the word) by averaging our results.
This issue alludes to something you will need to know about (eventually), but probably won't be covered this lecture: the [Bias-Variance Tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff).
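The index bookkeeping behind k-fold CV is simple; a pure-Python sketch of how 10 folds partition the rows, each fold serving once as the validation set:

```python
n, k = 100, 10                    # 100 toy rows, 10 folds
fold_size = n // k
folds = [list(range(i, i + fold_size)) for i in range(0, n, fold_size)]

for fold in folds:
    fold_set = set(fold)
    # everything outside the current fold is used for training
    train_idx = [i for i in range(n) if i not in fold_set]
    assert len(train_idx) == n - fold_size

print(len(folds))  # 10
```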
```
# do 10-fold CV
trainData <- trainData[sample(nrow(spam)),] # randomly permute the rows in our data frame
partitionSize <- floor(nrow(trainData) / 10)
treesLoss <- foreach(val = cpValues, .combine='c') %do% {
ctrl <- rpart.control(cp=val, minsplit=2, minbucket=1)
summedResults <- 0
for (i in seq(1, nrow(trainData) - partitionSize, partitionSize)) {
validationSetIndices <- i:(i + partitionSize - 1) # seq from i to (partitionSize-1)
validationData <- trainData[validationSetIndices,]
nonValidationData <- trainData[-validationSetIndices,]
tree <- rpart(spam ~ ., nonValidationData, method='class', control=ctrl)
treePredictions <- predict(tree, validationData, type='class')
loss <- table(treePredictions == validationData$spam)["FALSE"][[1]] / length(treePredictions)
summedResults <- summedResults + loss
}
averagedResults <- summedResults / 10
return(averagedResults)
}
cvResults <- data.frame(cp = cpValues, loss = treesLoss)
plot(cvResults, log='x', xlim=c(max(results$cp), min(results$cp)), type='o')
```
Now we have a nice, smooth(er) curve, and the optimal cp value will either be 1e-3 or 5e-4 depending on how easy to classify your test set happens to be (Sometimes we repeat the k-fold CV process itself multiple times to eliminate this 'lucky draw' element). If you're bored you can modify the code and plot the standard error bars on top of each data point. Another way of choosing the optimal parameter is to choose the value that gives the loosest fit to the training data yet is still within one standard error of the "best" value.
```
# So (approximately) how good is a decision tree with an optimal parameter value?
bestTree <- rpart(spam ~ ., trainData, method='class',
control=c(cp=1e-3, minsplit=2, minbucket=1))
bestTreePredictions <- predict(bestTree, testData, type='class')
bestTreeResults <- table(bestTreePredictions == testData$spam) / length(bestTreePredictions)
```
Now that we've made it this far, I can tell you a secret: decision trees are relatively crude predictors. Yet we are still able to correctly identify over 90% of emails as either spam or not spam using a single decision tree and the 'bag-of-words' model (this is how the variables in our spam data were generated, see [ESL 9.2.5]). To create even more powerful tree-based predictors, we must learn about ensembles... (see treeEnsembles.r)
```
'''
Step 1: from output of evaluate qualification.py: worker_id and int percentage value
Step 2: open worker csv file (downloaded from turk) and
stat_dict = {worker_id: eval_score}
for each cell in csv:
if cell_worker_id in stat_dict:
update the qual column with stat_dict[cell_worker_id]
else copy all rows
Step 3: now upload this csv
Step 4: for the task add this qualification task eval score > 80
'''
# also note that: data/qualified_workers.txt has list of all worker ids that have been qualified
x = [["fill the big hole with water", "fill the <span style='background-color: #FFFF00'>big hole</span> with water", "fill", "reference_object"],
["make a copy of that", "make a copy of <span style='background-color: #FFFF00'>that</span>", "copy", "reference_object"],
["make a copy of that behind me", "make a copy of that <span style='background-color: #FFFF00'>behind me</span>", "copy", "location"],
["spawn enderman over there", "spawn <span style='background-color: #FFFF00'>enderman</span> over there", "spawn", "reference_object"],
["spawn enderman over there", "spawn enderman <span style='background-color: #FFFF00'>over there</span>", "spawn", "location"],
["destroy the big house", "destroy the <span style='background-color: #FFFF00'>big house</span>", "destroy", "reference_object"],
["go behind the sheep", "go <span style='background-color: #FFFF00'>behind the sheep</span>", "move", "location"],
["dig a 10 x 10 hole there", "dig a <span style='background-color: #FFFF00'>10 x 10 hole</span> there", "dig", "schematic"],
["dig a 10 x 10 hole there", "dig a 10 x 10 hole <span style='background-color: #FFFF00'>there</span>", "dig", "location"],
["complete the village", "complete the <span style='background-color: #FFFF00'>village</span>", "freebuild", "reference_object"],
["do a dance over there", "do a dance <span style='background-color: #FFFF00'>over there</span>", "dance", "location"],
["build a green house there", "build a green house <span style='background-color: #FFFF00'>there</span>", "build", "location"],
["build a green house there", "build a <span style='background-color: #FFFF00'>green house</span> there", "build", "schematic"],
["go around the house three times", "go around <span style='background-color: #FFFF00'>the house</span> three times", "move", "location"],
["that looks nice", "that looks <span style='background-color: #FFFF00'>nice</span>", "tag", "tag_val"],
["that looks nice", "<span style='background-color: #FFFF00'>that</span> looks nice", "tag", "filters"]
]
with open('highlight_example.txt', 'w') as f:
    for line in x:
        words = "\t".join(line)
        f.write(words + "\n")
old_stats = {}
with open('data/qual_test_workers/all_qualified_workers.txt') as f:
    for line in f.readlines():
        worker_id, score = line.strip().split("\t")
        if worker_id in old_stats:
            print(worker_id)
        old_stats[worker_id] = score
print(len(old_stats.keys()))
worker_stats = {}
cnt = 0
with open('data/qual_test_workers/second_500_workers.txt') as f:
    for line in f.readlines():
        worker_id, score = line.strip().split("\t")
        if worker_id in old_stats:
            cnt += 1
            print("%r has already given us data before. Old score: %r new score: %r" % (worker_id, old_stats[worker_id], score))
        worker_stats[worker_id] = score
print("%r workers had given us data already" % (cnt))
# write out all unique workers
cnt = 0
with open('data/qual_test_workers/all_qualified_workers.txt', 'a') as f:
    for k, v in worker_stats.items():
        if k not in old_stats:
            cnt += 1
            f.write(k + "\t" + v + "\n")
print("Written :%r new lines" % (cnt))
perfect_score = 0
with open('data/qual_test_workers/all_qualified_workers.txt') as f:
    for line in f.readlines():
        score = line.strip().split("\t")[1]
        if score == '100':
            perfect_score += 1
print(perfect_score)
print(len(worker_stats.keys()))
print(len(old_stats.keys()))
from tempfile import NamedTemporaryFile
import shutil
import csv
filename = '/Users/kavyasrinet/Downloads/all_workers.csv'
tempfile = 'data/updated_workers.csv'
with open(filename, 'r') as csvfile, open(tempfile, 'w') as outfile:
    reader = csv.reader(csvfile)
    writer = csv.writer(outfile)
    cnt, cnt2 = 0, 0
    for i, row in enumerate(reader):
        # remove all old validated workers
        if (row[13] or row[14]) and i != 0:
            cnt += 1
            # row[13] = None
        w_id = row[0]
        if w_id in worker_stats:
            cnt2 += 1
            # print('updating row', row)
            row[14] = worker_stats[w_id]
        writer.writerow(row)
print(cnt)
print(cnt2)
cnt, cnt2 = 0, 0
with open('data/round2_updated_workers.csv', 'r') as f:
    reader = csv.reader(f)
    cnt = 0
    for row in reader:
        if row[13]:
            cnt += 1
        if row[14]:
            #print(row[14])
            cnt2 += 1
print(cnt)
print(cnt2)
data_stats = {}
with open('data/mturk_workers_folder/updated_qual_workers.csv', 'r') as csvfile:
    reader = csv.reader(csvfile)
    cnt, cnt2 = 0, 0
    for i, row in enumerate(reader):
        # remove all old validated workers
        if row[13] and i != 0:
            cnt += 1
        if row[14] and i != 0:
            data_stats[row[14]] = data_stats.get(row[14], 0) + 1
print(cnt, cnt2)
print(data_stats)
```
**Download and extract data**
```
! wget -O A.zip http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/Z.zip
! wget -O B.zip http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/O.zip
! wget -O C.zip http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/N.zip
! wget -O D.zip http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/F.zip
! wget -O E.zip http://epileptologie-bonn.de/cms/upload/workgroup/lehnertz/S.zip
! mkdir A B C D E
! unzip /content/A.zip -d A
! unzip /content/B.zip -d B
! unzip /content/C.zip -d C
! unzip /content/D.zip -d D
! unzip /content/E.zip -d E
! rm -rf /content/A.zip
! rm -rf /content/B.zip
! rm -rf /content/D.zip
! rm -rf /content/C.zip
! rm -rf /content/E.zip
!ls
```
**Load and preprocess data**
```
import os
root = "/content/"
files = os.listdir()
A_path = os.path.join(root, [elem for elem in files if elem=='A'][0])
B_path = os.path.join(root, [elem for elem in files if elem=='B'][0])
C_path = os.path.join(root, [elem for elem in files if elem=='C'][0])
D_path = os.path.join(root, [elem for elem in files if elem=='D'][0])
E_path = os.path.join(root, [elem for elem in files if elem=='E'][0])
print(A_path)
print(B_path)
print(C_path)
print(D_path)
print(E_path)
```
**SET A**
```
import numpy as np
from scipy.signal import butter, filtfilt
pass_band = [0.5*2/173, 40*2/173]
b, a = butter(1, pass_band, 'bandpass')
A_files = [os.path.join(A_path, path) for path in os.listdir(A_path)]
A_signals = []
for signal in A_files:
    signal = np.loadtxt(signal)
    signal = filtfilt(b, a, signal)
    A_signals.append(signal)
A_signals = np.array(A_signals)
```
**SET B**
```
B_files = [os.path.join(B_path, path) for path in os.listdir(B_path)]
B_signals = []
for signal in B_files:
    signal = np.loadtxt(signal)
    signal = filtfilt(b, a, signal)
    B_signals.append(signal)
B_signals = np.array(B_signals)
```
**SET C**
```
C_files = [os.path.join(C_path, path) for path in os.listdir(C_path)]
C_signals = []
for signal in C_files:
    signal = np.loadtxt(signal)
    signal = filtfilt(b, a, signal)
    C_signals.append(signal)
C_signals = np.array(C_signals)
```
**SET D**
```
D_files = [os.path.join(D_path, path) for path in os.listdir(D_path)]
D_signals = []
for signal in D_files:
    signal = np.loadtxt(signal)
    signal = filtfilt(b, a, signal)
    D_signals.append(signal)
D_signals = np.array(D_signals)
```
**SET E**
```
E_files = [os.path.join(E_path, path) for path in os.listdir(E_path)]
E_signals = []
for signal in E_files:
    signal = np.loadtxt(signal)
    signal = filtfilt(b, a, signal)
    E_signals.append(signal)
E_signals = np.array(E_signals)
```
**Prepare datasets for train**
```
print(A_signals.shape)
print(B_signals.shape)
print(C_signals.shape)
print(D_signals.shape)
print(E_signals.shape)
```
**CASE 1**
```
# Construct the sets we want to train on here
# Case 1
A_labels = np.zeros(len(A_signals))
E_labels = np.ones(len(E_signals))
X = np.concatenate((A_signals, E_signals), axis=0)
Y = np.concatenate((A_labels, E_labels), axis=0)
print(X.shape)
print(Y.shape)
```
**CASE 2**
```
# Case 2
B_labels = np.zeros(len(B_signals))
E_labels = np.ones(len(E_signals))
X = np.concatenate((B_signals, E_signals), axis=0)
Y = np.concatenate((B_labels, E_labels), axis=0)
print(X.shape)
print(Y.shape)
```
**CASE 3**
```
# Case 3
C_labels = np.zeros(len(C_signals))
E_labels = np.ones(len(E_signals))
X = np.concatenate((C_signals, E_signals), axis=0)
Y = np.concatenate((C_labels, E_labels), axis=0)
print(X.shape)
print(Y.shape)
```
**CASE 4**
```
# Case 4
D_labels = np.zeros(len(D_signals))
E_labels = np.ones(len(E_signals))
X = np.concatenate((D_signals, E_signals), axis=0)
Y = np.concatenate((D_labels, E_labels), axis=0)
print(X.shape)
print(Y.shape)
```
**CASE 5**
```
# Case 5
A_labels = np.zeros(len(A_signals))
B_labels = np.zeros(len(B_signals))
C_labels = np.zeros(len(C_signals))
D_labels = np.zeros(len(D_signals))
E_labels = np.ones(len(E_signals))
X = np.concatenate((A_signals, B_signals, C_signals, D_signals, E_signals), axis=0)
Y = np.concatenate((A_labels, B_labels, C_labels, D_labels, E_labels), axis=0)
print(X.shape)
print(Y.shape)
```
**Visualization**
```
import matplotlib.pyplot as plt
plt.figure(figsize=(25, 10))
plt.plot(X[0], label = 'EEG Signal')
plt.legend()
X = X[:, 0:868].reshape(-1, 868, 1)
print(X.shape)
print(Y.shape)
```
**Normalization**
```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(-1, 1))
X = scaler.fit_transform(X.reshape(-1, 868)).reshape(-1, 868, 1)
print(X.shape)
```
**Define Model Architecture**
```
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, MaxPooling1D, BatchNormalization
from tensorflow.keras.layers import Flatten, Dense, Reshape, Conv1D, Dropout
from tensorflow.keras.layers import AveragePooling1D, UpSampling1D, Activation
from tensorflow.keras.layers import ZeroPadding1D
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
inputs = Input(shape=(868, 1))
# ENCODER
# part 1
x = Conv1D(32, 7, padding='same', strides=1)(inputs)
x = Activation('relu')(x)
x = MaxPooling1D(pool_size=4)(x)
x = BatchNormalization(axis=-1, momentum=0.99)(x)
x = Dropout(0.5)(x)
x = Conv1D(32, 7, padding='same', strides=1)(x)
x = Activation('relu')(x)
x = MaxPooling1D(pool_size=4, name='encoder_out')(x)
# DECODER
# part 2
x = Conv1D(32, 7, padding='same', strides=1)(x)
x = Activation('relu')(x)
x = UpSampling1D(4)(x)
x = BatchNormalization(axis=-1, momentum=0.99)(x)
x = Conv1D(32, 7, padding='same', strides=1)(x)
x = Activation('relu')(x)
x = UpSampling1D(4)(x)
x = BatchNormalization(axis=-1, momentum=0.99)(x)
x = ZeroPadding1D(padding=2)(x)
x = Conv1D(1, 7, padding='same', strides=1)(x)
x = Activation('tanh')(x)
model= Model(inputs, x)
print(model.summary())
plot_model(model, show_shapes=True)
```
**Split dataset into train and test**
```
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, random_state=42,
shuffle=True, stratify=Y)
print(X_train.shape)
print(X_test.shape)
```
**train the model**
```
# adadelta for autoencoder
model.compile(loss='mse',
optimizer='adadelta')
history = model.fit(X_train, X_train, validation_data=(X_test, X_test),
epochs=2000, batch_size=16)
```
**Evaluation**
```
model.evaluate(X_test, X_test)
```
**Training and Testing Curves**
```
import matplotlib.pyplot as plt
# plot loss
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
```
**Signal Reconstruction**
```
# plot created model output vs signal
import numpy as np
preds = model.predict(X_test)
plt.figure(figsize=(20, 5))
plt.plot(X_test[0][0:400], color='red', label = 'Original Signal')
# plt.figure(figsize=(20, 5))
plt.plot(preds[0][0:400], color='blue', label = 'Reconstructed Signal')
plt.legend()
```
**classification**
```
# classification model creation
# encoder only model.
clf_in = model.input
clf_out = [layer.output for layer in model.layers if layer.name == 'encoder_out'][0]
clf_x = Flatten()(clf_out)
clf_x = Dense(1, activation='sigmoid')(clf_x)
clf = tf.keras.models.Model(inputs=clf_in,
outputs=clf_x)
print(clf.summary())
# sgd for classification
clf.compile(loss='binary_crossentropy',
optimizer='sgd', metrics=['accuracy'])
history = clf.fit(X_train, Y_train, validation_data=(X_test, Y_test),
epochs=50, batch_size=16)
import matplotlib.pyplot as plt
# plot loss
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
```
**Calculate Metrics**
```
from sklearn.metrics import classification_report
preds = np.around(clf.predict(X_test))
rep = classification_report(Y_test, preds)
print(rep)
```
# Estimating the Cost of Equity from Historical Price Data
We want to estimate the cost of equity for a company. We have historical data on its stock prices, as well as prices of a market portfolio. We will estimate the CAPM $\beta$, and then calculate the CAPM to determine the cost of equity.
As a reminder, the CAPM formula is given by $$r_i = r_f + \beta (r_m - r_f) + \epsilon$$
## Load in Price Data
First let's load in the historical price data. We can use `pandas` to load the Excel file into Python. Ensure that the Excel workbook is in the same folder as your Jupyter notebook.
```
import pandas as pd
df = pd.read_excel('price data.xlsx')
df.head() # print the first 5 rows
```
## Calculating Returns
The CAPM works with returns and not prices, so let's convert our prices to returns. Luckily the pandas method `pct_change` handles this for us.
```
returns = df.pct_change()
returns.head()
```
The first values are missing (`NaN`) because we can't calculate a return off of a single number.
## Calculating the Market Risk Premium
We are ultimately going to be running a regression to determine $\beta$. We can think of a standard regression line as following the equation: $$y = a + bx$$ We can put the CAPM in this format if we assume $\epsilon$ is zero, then treat $r_i$ as $y$, $r_f$ as $a$, and $(r_m - r_f)$ as $x$. Therefore we need to calculate the market risk premium (MRP), $(r_m - r_f)$, to use in the regression.
From the problem, the risk free rate is 3%. So just subtract that from the market returns to get the MRP.
```
risk_free = 0.03
returns['MRP'] = returns['Market Portfolio'] - risk_free
returns.dropna(inplace=True)
returns.head()
```
## Calculating $\beta$
Now we are ready to run the regression of stock returns on the MRP. We can use `statsmodels` to run the OLS regression. We will also add a constant to the X variables, to have an intercept in the regression.
```
import statsmodels.api as sm
X = sm.add_constant(returns['MRP'])
y = returns['Asset Price']
model = sm.OLS(y, X)
```
Note that if the `NaN`s from the first row were still present, running the regression would raise a `MissingDataError`. We can remove these easily.
```
returns = returns.dropna()
returns.head()
```
Simply by using `.dropna()` we can remove those `NaN`s so we can run the regression. Let's try that again.
```
X = sm.add_constant(returns['MRP'])
y = returns['Asset Price']
model = sm.OLS(y, X)
results = model.fit()
results.summary()
```
We can see there is a 0.8338 coefficient on the MRP. This means our $\beta$ is 0.8338. We can extract that exact number as follows:
```
beta = results.params['MRP']
beta
```
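With a single regressor, the OLS slope equals cov(x, y) / var(x), which gives a quick way to sanity-check the `statsmodels` estimate. A self-contained sketch with made-up return series (on the real data this should reproduce `results.params['MRP']`):

```python
x = [0.01, -0.02, 0.03, 0.00, 0.02]       # toy MRP values
y = [0.008, -0.015, 0.026, 0.001, 0.017]  # toy asset returns

mean_x = sum(x) / len(x)
mean_y = sum(y) / len(y)
cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (len(x) - 1)
var_x = sum((a - mean_x) ** 2 for a in x) / (len(x) - 1)

beta_check = cov_xy / var_x  # OLS slope for a single regressor
print(round(beta_check, 4))  # about 0.8135 for these toy numbers
```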
## Estimating the Market Return
Now we are only missing one component to plug into CAPM to get the cost of equity: the market return. A good way to estimate this is by taking an average of the historical returns. This can also be adjusted for expectations about the economy in the future (recession, etc.).
```
market_return = returns['Market Portfolio'].mean()
market_return
```
## Estimating the Cost of Equity
Now we can plug everything into the CAPM formula to get the $r_i$ cost of equity. CAPM again: $$r_i = r_f + \beta (r_m - r_f) + \epsilon$$
```
cost_of_equity = risk_free + beta * (market_return - risk_free)
print(f'The cost of equity is {cost_of_equity:.2%}.')
```
## The Exercise in Excel
All the steps of the exercise are the same in Excel. The only difference is the functions/process to run each step. For calculating returns, a simple formula of $(new - old)/old$ can be calculated for one cell and dragged to get all the returns. 3% can be subtracted from the market returns and dragged down to yield the MRPs. The regression can be run by enabling the Data Analysis Toolpak add-in and following the prompts.
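That per-cell return formula, (new - old) / old, is easy to verify in a few lines of Python — `pct_change` computes exactly this:

```python
prices = [100.0, 105.0, 102.9]
rets = [(new - old) / old for old, new in zip(prices, prices[1:])]
print([round(r, 4) for r in rets])  # [0.05, -0.02]
```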
### Python API
HyperTS is a full-pipeline automated toolkit for time series in the DataCanvas Automatic Toolkit (DAT) chain, built on top of [Hypernets](https://github.com/DataCanvasIO/Hypernets). It follows the `make_experiment` usage pattern (similar to the API of [HyperGBM](https://github.com/DataCanvasIO/HyperGBM), an AutoML tool for structured tabular data) and conforms to the model API conventions of `scikit-learn`. We can create an experiment with `make_experiment`, `run` it to obtain a pipeline_model, i.e. a fully optimized estimator, and then use its `predict`, `evaluate`, and `plot` methods to analyze unseen data.
Next, let's walk through two quick case-study guides.
## 1. Forecasting Example
### 1.1 Prepare the Data
PS: If you have any questions about the data formats HyperTS expects, please see the introduction in [01_datatypes_for_hyperts.ipynb](https://github.com/DataCanvasIO/HyperTS/blob/main/examples/01_datatypes_for_hyperts.ipynb).
```
from hyperts.datasets import load_network_traffic
from sklearn.model_selection import train_test_split
```
When splitting the data into training and test sets, the rows have a temporal order, so to prevent information leakage we take the test set from the tail of the full dataset; hence `shuffle=False`.
```
df = load_network_traffic()
train_data, test_data = train_test_split(df, test_size=168, shuffle=False)
```
For this example dataset, here is some basic information for reference:
- Timestamp column: 'TimeStamp';
- Target columns: ['Var_1', 'Var_2', 'Var_3', 'Var_4', 'Var_5', 'Var_6'];
- Covariate columns: ['HourSin', 'WeekCos', 'CBWD'];
- Time frequency: 'H'.
### 1.2 Create the Experiment
We search for a time-series model by creating an experiment with `make_experiment`, then call its `run()` method to execute it.
```
from hyperts import make_experiment
```
**Note:**
In a forecasting task, we must pass the `timestamp` column name to `make_experiment`. If covariates exist, the `covariables` column names must be passed as well.
Therefore, in this example we pass the following arguments to `make_experiment`:
1. Tell it this is a time-series forecasting task, i.e. `task='forecast'`;
2. Tell it the name of the dataset's timestamp column, i.e. `timestamp='TimeStamp'`;
3. Tell it the names of the dataset's covariate columns, i.e. `covariables=['HourSin', 'WeekCos', 'CBWD']`;
4. For stronger performance, other default parameters can also be changed; see the parameter reference for details.
```
experiment = make_experiment(train_data=train_data.copy(),
task='forecast',
timestamp='TimeStamp',
covariables=['HourSin', 'WeekCos', 'CBWD'])
model = experiment.run()
```
Let's look at the parameters of the final searched model.
```
model.get_params()
```
### 1.3 Forecast
Split the test data into X and y, then call `predict()` to produce the forecast.
```
X_test, y_test = model.split_X_y(test_data.copy())
forecast = model.predict(X_test)
forecast.head()
```
### 1.4 Evaluation
Call the `evaluate` method to score the results and see how the model does under each evaluation metric.
By default it returns scores for a set of standard metrics. To score specific metrics instead, set the `metrics` parameter, e.g. metrics=['mae', 'mse', mape_func].
Here, mape_func can be a custom evaluation function or an evaluation function from sklearn.
```
results = model.evaluate(y_true=y_test, y_pred=forecast)
results.head()
```
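A custom metric like `mape_func` only needs the sklearn-style `(y_true, y_pred)` signature. A minimal sketch (the commented `evaluate` call assumes the parameters described above):

```python
import numpy as np

def mape_func(y_true, y_pred):
    # mean absolute percentage error, sklearn-style (y_true, y_pred) signature
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

# results = model.evaluate(y_true=y_test, y_pred=forecast,
#                          metrics=['mae', 'mse', mape_func])
```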
### 1.5 Visualization
Call the `plot()` method to visualize the forecast curve and compare it against the actual curve.
```
model.plot(forecast=forecast, actual=test_data, var_id='Var_3', interactive=False)
```
**Note**
- The plot shows the forecast curve of a single variable, by default the first target variable;
- For multivariate forecasting, to inspect another variable's curve, change the `var_id` parameter, e.g. var_id=2 or var_id='Var_2';
- `plot` is interactive by default (requires plotly); set `interactive=False` for a static plot;
- To draw a longer span of history, set the parameter `history=sub_train_data`.
## 2. Classification Example
### 2.1 Prepare the Dataset
PS: If you have any questions about the data formats HyperTS expects, please see the introduction in <01_datatypes_for_hyperts.ipynb>.
```
from hyperts.datasets import load_basic_motions
from sklearn.model_selection import train_test_split
df = load_basic_motions()
train_df, test_df = train_test_split(df, test_size=0.2)
```
For this example dataset, to make it easier to follow, here is some basic information:
- Feature columns: ['Var_1', 'Var_2', 'Var_3', 'Var_4', 'Var_5', 'Var_6'];
- Target column: 'target'.
### 2.2 Create the Experiment
We search for a time-series model by creating an experiment with `make_experiment`, then call its `run()` method to execute it.
```
experiment = make_experiment(train_data=train_df.copy(), task='classification', target='target')
model = experiment.run()
```
Let's look at the parameters of the final searched model.
```
model.get_params()
```
### 2.3 Predict
Split the test data into X and y, then call `predict()` to produce predictions.
```
X_test, y_test = model.split_X_y(test_df.copy())
y_pred = model.predict(X_test)
y_proba = model.predict_proba(X_test)
```
### 2.4 Evaluation
Call the `evaluate` method to score the results and see how the model does under each evaluation metric.
By default it returns scores for a set of standard metrics. To score specific metrics instead, set the `metrics` parameter, e.g. metrics=['accuracy', 'auc', f1_func].
Here, f1_func can be a custom evaluation function or an evaluation function from sklearn.
```
results = model.evaluate(y_true=y_test, y_pred=y_pred, y_proba=y_proba)
results.head()
```
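As in the forecasting example, a custom metric such as `f1_func` only needs a `(y_true, y_pred)` signature; a sketch built on scikit-learn's macro F1 (the commented `evaluate` call assumes the parameters described above):

```python
from sklearn.metrics import f1_score

def f1_func(y_true, y_pred):
    # macro-averaged F1 with a sklearn-style (y_true, y_pred) signature
    return f1_score(y_true, y_pred, average='macro')

# results = model.evaluate(y_true=y_test, y_pred=y_pred,
#                          metrics=['accuracy', f1_func])
```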
```
%pylab inline
from ipyparallel import Client, error
cluster=Client(profile="mpi")
view=cluster[:]
view.block=True
try:
from openmdao.utils.notebook_utils import notebook_mode
except ImportError:
!python -m pip install openmdao[notebooks]
```
# Conversion Guide for the Auto-IVC (IndepVarComp) Feature
As of the OpenMDAO 3.2 release, it is no longer necessary to add an IndepVarComp to your model to handle the assignment of unconnected inputs as design variables.
## Declaring Design Variables
````{tabbed} Pre Auto IVC
This is what we used to do
```python
import openmdao.api as om
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.x', 'paraboloid.x')
prob.model.connect('indeps.y', 'paraboloid.y')
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('indeps.x', lower=-50, upper=50)
prob.model.add_design_var('indeps.y', lower=-50, upper=50)
prob.model.add_objective('paraboloid.f')
prob.setup()
prob.run_driver()
```
````
`````{tabbed} With Auto IVC
This is how we handle IVCs now
````python
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=['x', 'y'])
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('paraboloid.f')
prob.setup()
prob['x'] = 3.0
prob['y'] = -4.0
prob.run_driver()
````
`````
```
# Old way
import openmdao.api as om
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.x', 'paraboloid.x')
prob.model.connect('indeps.y', 'paraboloid.y')
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('indeps.x', lower=-50, upper=50)
prob.model.add_design_var('indeps.y', lower=-50, upper=50)
prob.model.add_objective('paraboloid.f')
prob.setup()
prob.run_driver();
# New way
import openmdao.api as om
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=['x', 'y'])
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('paraboloid.f')
prob.setup()
prob['x'] = 3.0
prob['y'] = -4.0
prob.run_driver();
```
## Declaring a Multi-Component Input as a Design Variable
````{tabbed} Pre Auto IVC
```python
import openmdao.api as om
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('parab', Paraboloid())
# define the component whose output will be constrained
prob.model.add_subsystem('const', om.ExecComp('g = x + y'))
prob.model.connect('indeps.x', ['parab.x', 'const.x'])
prob.model.connect('indeps.y', ['parab.y', 'const.y'])
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'COBYLA'
prob.model.add_design_var('indeps.x', lower=-50, upper=50)
prob.model.add_design_var('indeps.y', lower=-50, upper=50)
prob.model.add_objective('parab.f_xy')
# to add the constraint to the model
prob.model.add_constraint('const.g', lower=0, upper=10.)
# prob.model.add_constraint('const.g', equals=0.)
prob.setup()
prob.run_driver()
```
````
`````{tabbed} With Auto IVC
````python
prob = om.Problem()
prob.model.add_subsystem('parab', Paraboloid(),
promotes_inputs=['x', 'y'])
# define the component whose output will be constrained
prob.model.add_subsystem('const', om.ExecComp('g = x + y'),
promotes_inputs=['x', 'y'])
# Design variables 'x' and 'y' span components, so we need to provide a common initial
# value for them.
prob.model.set_input_defaults('x', 3.0)
prob.model.set_input_defaults('y', -4.0)
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'COBYLA'
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('parab.f_xy')
# to add the constraint to the model
prob.model.add_constraint('const.g', lower=0, upper=10.)
prob.setup()
prob.run_driver()
````
`````
```
import openmdao.api as om
from openmdao.test_suite.components.paraboloid import Paraboloid
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('parab', Paraboloid())
# define the component whose output will be constrained
prob.model.add_subsystem('const', om.ExecComp('g = x + y'))
prob.model.connect('indeps.x', ['parab.x', 'const.x'])
prob.model.connect('indeps.y', ['parab.y', 'const.y'])
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'COBYLA'
prob.model.add_design_var('indeps.x', lower=-50, upper=50)
prob.model.add_design_var('indeps.y', lower=-50, upper=50)
prob.model.add_objective('parab.f_xy')
# to add the constraint to the model
prob.model.add_constraint('const.g', lower=0, upper=10.)
# prob.model.add_constraint('const.g', equals=0.)
prob.setup()
prob.run_driver();
import openmdao.api as om
prob = om.Problem()
prob.model.add_subsystem('parab', Paraboloid(),
promotes_inputs=['x', 'y'])
# define the component whose output will be constrained
prob.model.add_subsystem('const', om.ExecComp('g = x + y'),
promotes_inputs=['x', 'y'])
# Design variables 'x' and 'y' span components, so we need to provide a common initial
# value for them.
prob.model.set_input_defaults('x', 3.0)
prob.model.set_input_defaults('y', -4.0)
# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'COBYLA'
prob.model.add_design_var('x', lower=-50, upper=50)
prob.model.add_design_var('y', lower=-50, upper=50)
prob.model.add_objective('parab.f_xy')
# to add the constraint to the model
prob.model.add_constraint('const.g', lower=0, upper=10.)
prob.setup()
prob.run_driver();
```
## Declaring a New Name for a Promoted Input
````{tabbed} Pre Auto IVC
```python
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('width', 3.0)
indeps.add_output('length', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.width', 'paraboloid.x')
prob.model.connect('indeps.length', 'paraboloid.y')
prob.setup()
```
````
`````{tabbed} With Auto IVC
````python
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=[('x', 'width'), ('y', 'length')])
# Could also set these after setup.
prob.model.set_input_defaults('width', 3.0)
prob.model.set_input_defaults('length', -4.0)
prob.setup()
````
`````
```
import openmdao.api as om
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('width', 3.0)
indeps.add_output('length', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.width', 'paraboloid.x')
prob.model.connect('indeps.length', 'paraboloid.y')
prob.setup();
import openmdao.api as om
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=[('x', 'width'), ('y', 'length')])
# Could also set these after setup.
prob.model.set_input_defaults('width', 3.0)
prob.model.set_input_defaults('length', -4.0)
prob.setup();
```
## Declare an Input Defined with Source Indices as a Design Variable
````{tabbed} Pre Auto IVC
```python
class MyComp1(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 0, 1, and 2 of its source
self.add_input('x', np.ones(3), src_indices=[0, 1, 2])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*2.0
class MyComp2(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 3 and 4 of its source
self.add_input('x', np.ones(2), src_indices=[3, 4])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*4.0
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('x', np.ones(5)),
promotes_outputs=['x'])
p.model.add_subsystem('C1', MyComp1(), promotes_inputs=['x'])
p.model.add_subsystem('C2', MyComp2(), promotes_inputs=['x'])
p.model.add_design_var('x')
p.setup()
p.run_model()
```
````
`````{tabbed} With Auto IVC
````python
class MyComp1(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 0, 1, and 2 of its source
self.add_input('x', np.ones(3), src_indices=[0, 1, 2])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*2.0
class MyComp2(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 3 and 4 of its source
self.add_input('x', np.ones(2), src_indices=[3, 4])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*4.0
p = om.Problem()
# IndepVarComp is required to define the full size of the source vector.
p.model.add_subsystem('indep', om.IndepVarComp('x', np.ones(5)),
promotes_outputs=['x'])
p.model.add_subsystem('C1', MyComp1(), promotes_inputs=['x'])
p.model.add_subsystem('C2', MyComp2(), promotes_inputs=['x'])
p.model.add_design_var('x')
p.setup()
p.run_model()
````
`````
```
import numpy as np
import openmdao.api as om
class MyComp1(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 0, 1, and 2 of its source
self.add_input('x', np.ones(3), src_indices=[0, 1, 2])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*2.0
class MyComp2(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 3 and 4 of its source
self.add_input('x', np.ones(2), src_indices=[3, 4])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*4.0
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('x', np.ones(5)),
promotes_outputs=['x'])
p.model.add_subsystem('C1', MyComp1(), promotes_inputs=['x'])
p.model.add_subsystem('C2', MyComp2(), promotes_inputs=['x'])
p.model.add_design_var('x')
p.setup()
p.run_model()
import openmdao.api as om
class MyComp1(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 0, 1, and 2 of its source
self.add_input('x', np.ones(3), src_indices=[0, 1, 2])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*2.0
class MyComp2(om.ExplicitComponent):
def setup(self):
# this input will connect to entries 3 and 4 of its source
self.add_input('x', np.ones(2), src_indices=[3, 4])
self.add_output('y', 1.0)
def compute(self, inputs, outputs):
outputs['y'] = np.sum(inputs['x'])*4.0
p = om.Problem()
# IndepVarComp is required to define the full size of the source vector.
p.model.add_subsystem('indep', om.IndepVarComp('x', np.ones(5)),
promotes_outputs=['x'])
p.model.add_subsystem('C1', MyComp1(), promotes_inputs=['x'])
p.model.add_subsystem('C2', MyComp2(), promotes_inputs=['x'])
p.model.add_design_var('x')
p.setup()
p.run_model()
```
## Setting Default Units for an Input
````{tabbed} Pre Auto IVC
```python
prob = om.Problem()
ivc = om.IndepVarComp()
ivc.add_output('x2', 100.0, units='degC')
prob.model.add_subsystem('T1', ivc,
promotes_outputs=['x2'])
# Input units in degF
prob.model.add_subsystem('tgtF', TgtCompF(),
promotes_inputs=['x2'])
# Input units in degC
prob.model.add_subsystem('tgtC', TgtCompC(),
promotes_inputs=['x2'])
# Input units in degK
prob.model.add_subsystem('tgtK', TgtCompK(),
promotes_inputs=['x2'])
prob.setup()
```
````
`````{tabbed} With Auto IVC
````python
prob = om.Problem()
# Input units in degF
prob.model.add_subsystem('tgtF', TgtCompF(),
promotes_inputs=['x2'])
# Input units in degC
prob.model.add_subsystem('tgtC', TgtCompC(),
promotes_inputs=['x2'])
# Input units in degK
prob.model.add_subsystem('tgtK', TgtCompK(),
promotes_inputs=['x2'])
prob.model.set_input_defaults('x2', 100.0, units='degC')
prob.setup()
````
`````
```
import openmdao.api as om
from openmdao.test_suite.components.unit_conv import TgtCompC, TgtCompF, TgtCompK
prob = om.Problem()
ivc = om.IndepVarComp()
ivc.add_output('x2', 100.0, units='degC')
prob.model.add_subsystem('T1', ivc,
promotes_outputs=['x2'])
# Input units in degF
prob.model.add_subsystem('tgtF', TgtCompF(),
promotes_inputs=['x2'])
# Input units in degC
prob.model.add_subsystem('tgtC', TgtCompC(),
promotes_inputs=['x2'])
# Input units in degK
prob.model.add_subsystem('tgtK', TgtCompK(),
promotes_inputs=['x2'])
prob.setup();
import openmdao.api as om
prob = om.Problem()
# Input units in degF
prob.model.add_subsystem('tgtF', TgtCompF(),
promotes_inputs=['x2'])
# Input units in degC
prob.model.add_subsystem('tgtC', TgtCompC(),
promotes_inputs=['x2'])
# Input units in degK
prob.model.add_subsystem('tgtK', TgtCompK(),
promotes_inputs=['x2'])
prob.model.set_input_defaults('x2', 100.0, units='degC')
prob.setup();
```
## Creating a Distributed Component with Unconnected Inputs
````{tabbed} Pre Auto IVC
```python
size = 4
prob = om.Problem()
prob.model.add_subsystem("C1", DistribNoncontiguousComp(arr_size=size),
promotes=['invec', 'outvec'])
prob.setup()
rank = prob.model.comm.rank
if rank == 0:
prob.set_val('invec', np.array([1.0, 3.0]))
else:
prob.set_val('invec', np.array([5.0, 7.0]))
prob.run_model()
```
````
`````{tabbed} With Auto IVC
````python
size = 4
prob = om.Problem()
# An IndepVarComp is required on all unconnected distributed inputs.
ivc = om.IndepVarComp()
ivc.add_output('invec', np.ones(size), distributed=True)
prob.model.add_subsystem('P', ivc,
promotes_outputs=['invec'])
prob.model.add_subsystem("C1", DistribNoncontiguousComp(arr_size=size),
promotes=['invec', 'outvec'])
prob.setup()
prob.set_val('P.invec', np.array([1.0, 3.0, 5.0, 7.0]))
prob.run_model()
````
`````
```
%%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import take_nth
class DistribNoncontiguousComp(om.ExplicitComponent):
"""Uses 4 procs and takes non-contiguous input var slices and has output
var slices as well
"""
def initialize(self):
self.options.declare('arr_size', types=int, default=11,
desc="Size of input and output vectors.")
def compute(self, inputs, outputs):
outputs['outvec'] = inputs['invec']*2.0
def setup(self):
comm = self.comm
rank = comm.rank
arr_size = self.options['arr_size']
idxs = list(take_nth(rank, comm.size, range(arr_size)))
self.add_input('invec', np.ones(len(idxs), float), distributed=True)
self.add_output('outvec', np.ones(len(idxs), float), distributed=True)
%%px
size = 4
import numpy as np
import openmdao.api as om
prob = om.Problem()
# An IndepVarComp is required on all unconnected distributed inputs.
ivc = om.IndepVarComp()
ivc.add_output('invec', np.ones(size), distributed=True)
prob.model.add_subsystem('P', ivc,
promotes_outputs=['invec'])
prob.model.add_subsystem("C1", DistribNoncontiguousComp(arr_size=size),
promotes=['invec', 'outvec'])
prob.setup()
prob.set_val('P.invec', np.array([1.0, 3.0, 5.0, 7.0]))
prob.run_model()
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('C1.outvec', get_remote=True), np.array([2., 6., 10., 14.]))
```
## Setting and Getting Inputs
````{tabbed} Pre Auto IVC
```python
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.x', 'paraboloid.x')
prob.model.connect('indeps.y', 'paraboloid.y')
prob.setup()
x = prob.get_val('indeps.x')
prob.set_val('indeps.y', 15.0)
```
````
`````{tabbed} With Auto IVC
````python
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=['x', 'y'])
prob.setup()
x = prob.get_val('x')
prob.set_val('y', 15.0)
````
`````
```
import openmdao.api as om
prob = om.Problem()
indeps = prob.model.add_subsystem('indeps', om.IndepVarComp())
indeps.add_output('x', 3.0)
indeps.add_output('y', -4.0)
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'))
prob.model.connect('indeps.x', 'paraboloid.x')
prob.model.connect('indeps.y', 'paraboloid.y')
prob.setup()
x = prob.get_val('indeps.x')
prob.set_val('indeps.y', 15.0)
import openmdao.api as om
prob = om.Problem()
prob.model.add_subsystem('paraboloid',
om.ExecComp('f = (x-3)**2 + x*y + (y+4)**2 - 3'),
promotes_inputs=['x', 'y'])
prob.setup()
x = prob.get_val('x')
prob.set_val('y', 15.0)
```
```
# reload packages
%load_ext autoreload
%autoreload 2
```
### Choose GPU
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=3
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
```
### dataset information
```
from datetime import datetime
dataset = "fmnist"
dims = (28, 28, 1)
num_classes = 10
labels_per_class = 256  # number of labels per class; set to "full" to use all labels
batch_size = 128
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_baseline'
)
print(datestring)
```
### Load packages
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
```
### Load dataset
```
from tfumap.load_datasets import load_FMNIST, mask_labels
X_train, X_test, X_valid, Y_train, Y_test, Y_valid = load_FMNIST(flatten=False)
X_train.shape
if labels_per_class == "full":
X_labeled = X_train
Y_masked = Y_labeled = Y_train
else:
X_labeled, Y_labeled, Y_masked = mask_labels(
X_train, Y_train, labels_per_class=labels_per_class
)
```
### Build network
```
from tensorflow.keras import datasets, layers, models
from tensorflow_addons.layers import WeightNormalization
def conv_block(filts, name, kernel_size = (3, 3), padding = "same", **kwargs):
return WeightNormalization(
layers.Conv2D(
filts, kernel_size, activation=None, padding=padding, **kwargs
),
name="conv"+name,
)
#CNN13
#See:
#https://github.com/vikasverma1077/ICT/blob/master/networks/lenet.py
#https://github.com/brain-research/realistic-ssl-evaluation
lr_alpha = 0.1
dropout_rate = 0.5
num_classes = 10
input_shape = dims
model = models.Sequential()
model.add(tf.keras.Input(shape=input_shape))
### conv1a
name = '1a'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1b
name = '1b'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv1c
name = '1c'
model.add(conv_block(name = name, filts = 128, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp1"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop1"))
### conv2a
name = '2a'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha))
### conv2b
name = '2b'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv2c
name = '2c'
model.add(conv_block(name = name, filts = 256, kernel_size = (3,3), padding="same"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# max pooling
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='valid', name="mp2"))
# dropout
model.add(layers.Dropout(dropout_rate, name="drop2"))
### conv3a
name = '3a'
model.add(conv_block(name = name, filts = 512, kernel_size = (3,3), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3b
name = '3b'
model.add(conv_block(name = name, filts = 256, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
### conv3c
name = '3c'
model.add(conv_block(name = name, filts = 128, kernel_size = (1,1), padding="valid"))
model.add(layers.BatchNormalization(name="bn"+name))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelu'+name))
# average pooling
model.add(layers.AveragePooling2D(pool_size=(3, 3), strides=2, padding='valid'))
model.add(layers.Flatten())
model.add(layers.Dense(256, activation=None, name='z'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc1'))
model.add(WeightNormalization(layers.Dense(256, activation=None)))
model.add(layers.LeakyReLU(alpha=lr_alpha, name = 'lrelufc2'))
model.add(WeightNormalization(layers.Dense(num_classes, activation=None)))
model.summary()
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_accuracy', min_delta=0, patience=100, verbose=1, mode='auto',
baseline=None, restore_best_weights=True
)
import tensorflow_addons as tfa
opt = tf.keras.optimizers.Adam(1e-4)
opt = tfa.optimizers.MovingAverage(opt)
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.2, from_logits=True)
model.compile(opt, loss = loss, metrics=['accuracy'])
Y_valid_one_hot = tf.keras.backend.one_hot(
Y_valid, num_classes
)
Y_labeled_one_hot = tf.keras.backend.one_hot(
Y_labeled, num_classes
)
from livelossplot import PlotLossesKerasTF
# plot losses callback
plotlosses = PlotLossesKerasTF()
train_ds = (
tf.data.Dataset.from_tensor_slices((X_labeled, Y_labeled_one_hot))
.repeat()
.shuffle(len(X_labeled))
.batch(batch_size)
.prefetch(tf.data.experimental.AUTOTUNE)
)
steps_per_epoch = int(len(X_train)/ batch_size)
history = model.fit(
train_ds,
epochs=500,
validation_data=(X_valid, Y_valid_one_hot),
callbacks = [early_stopping, plotlosses],
steps_per_epoch = steps_per_epoch,
)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
submodel = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
z = submodel.predict(X_train)
np.shape(z)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.prod(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
z_valid = submodel.predict(X_valid)
np.shape(z_valid)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z_valid.reshape(len(z_valid), np.prod(np.shape(z_valid)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=Y_valid.flatten(), s= 1, alpha = 1, cmap = plt.cm.tab10)
predictions = model.predict(X_valid)
fig, ax = plt.subplots(figsize=(10,10))
ax.scatter(embedding[:, 0], embedding[:, 1], c=np.argmax(predictions, axis=1), s= 1, alpha = 1, cmap = plt.cm.tab10)
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
```
### save results
```
# save score, valid embedding, weights, results
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder)
```
#### save weights
```
encoder = tf.keras.models.Model(
[model.inputs[0]], [model.get_layer('z').output]
)
encoder.save_weights((save_folder / "encoder").as_posix())
classifier = tf.keras.models.Model(
[tf.keras.Input(tensor=model.get_layer('weight_normalization').input)], [model.outputs[0]]
)
print([i.name for i in classifier.layers])
classifier.save_weights((save_folder / "classifier").as_posix())
```
#### save score
```
Y_test_one_hot = tf.keras.backend.one_hot(
Y_test, num_classes
)
result = model.evaluate(X_test, Y_test_one_hot)
np.save(save_folder / 'test_loss.npy', result)
```
#### save embedding
```
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.prod(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
```
#### save results
```
import pickle
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump(history.history, file_pi)
```
Audit Grouping
===
- Load (calibrated) ORES scores
- Load revert probability scores
- Group in some way (caliper width?)
- Investigate groupings
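The "caliper width" grouping amounts to fixed-width binning of the revert probability: with a 1% caliper, a revision's group is the integer percent of its score. A minimal sketch (the scores here are illustrative):

```python
import numpy as np

revert_prob = np.array([0.004, 0.013, 0.504, 0.999])  # illustrative revert scores

# 1% caliper: group k collects revisions with revert probability in [k/100, (k+1)/100)
groups = (revert_prob * 100).astype(int)
```

A narrower or wider caliper just changes the multiplier.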
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import os
from tqdm import tqdm
import bz2
import sqlite3
import difflib
import gzip
import json
import re
import hashlib
from datetime import datetime
from datetime import timezone
import nltk
import scipy.stats
import para
from itertools import groupby
from collections import Counter
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
git_root_dir
raw_data_dir = "/export/scratch2/wiki_data"
derived_data_dir = os.path.join(git_root_dir, "data", "derived")
raw_data_dir, derived_data_dir
revision_sample_dir = os.path.join(derived_data_dir, 'revision_sample')
working_dir = os.path.join(derived_data_dir, 'audit')
working_dir
```
### Load (calibrated) ORES scores
```
s = datetime.now()
audit_dir = os.path.join(derived_data_dir, 'audit')
calibrated_probs_filepath = os.path.join(audit_dir, 'sample3_ores_scores_calibrated.csv')
ores_df = pd.read_csv(calibrated_probs_filepath)
print(f"{datetime.now() - s}")
len(ores_df)
ores_df.head()
```
### Load revert scores
```
# read the revert scores
s = datetime.now()
audit_dir = os.path.join(derived_data_dir, 'audit')
revert_score_filepath = os.path.join(audit_dir, 'sample3_revert_scores.csv')
revert_score_df = pd.read_csv(revert_score_filepath)
print(f"{datetime.now() - s}")
len(revert_score_df)
revert_score_df.head()
```
### Load revert metadata
### Merge scores and group
```
df = pd.merge(ores_df, revert_score_df, on='rev_id')
len(df)
df.head()
fig, ax = plt.subplots(1, 1, figsize=(14, 14))
hb = plt.hexbin(df.revert_prob, df.damaging_prob_calibrated, bins='log', gridsize=(50,50), mincnt=50)
plt.xlabel('Revert Probability')
plt.ylabel('ORES Damaging Probability')
cb = fig.colorbar(hb, ax=ax)
cb.set_label('Bin Count')
plt.show()
df['revert_group'] = df.revert_prob.map(lambda rp: int(rp * 100))
Counter(df.revert_group).most_common(20)
xs = []
ys = []
for group_num, subset in df.groupby('revert_group'):
pct_reverted = np.sum(subset.is_reverted) / len(subset)
pct_predicted = (group_num / 100) + 0.005
xs.append(pct_predicted)
ys.append(pct_reverted)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
plt.plot(xs, ys, label='Revert predictions')
plt.scatter(xs, ys, color='black', marker='.')
plt.legend()
plt.xlabel("Predicted revert probability")
plt.ylabel("Observed revert proportion")
plt.show()
fig, axes = plt.subplots(5, 5, figsize=(18, 26))
bins = np.linspace(0, 1, num=50)
group = 0
for row in axes:
for ax in row:
#group = int(np.random.random() * 100)
subset = df[df.revert_group == group]
ax.set_title(f"R={group}%; n={len(subset)}; D={np.sum(subset.damaging_prob_calibrated >= 0.5) / len(subset)*100:.1f}%")
ax.hist(subset.damaging_prob_calibrated, bins=bins, log=True)
mean_damaging_prob = np.mean(subset.damaging_prob_calibrated)
mean_diff = (group / 100) - mean_damaging_prob
std_damaging_prob = np.std(subset.damaging_prob_calibrated)
ax.set_xlabel(f'M={mean_damaging_prob*100:.1f}%; diff={mean_diff*100:.1f}; S={std_damaging_prob*100:.1f}')
group += 4
plt.show()
import scipy.stats
d = []
for group_num, subset in df.groupby('revert_group'):
mean_damaging_prob = np.mean(subset.damaging_prob_calibrated)
mean_diff = (group_num / 100) - mean_damaging_prob
std_damaging_prob = np.std(subset.damaging_prob_calibrated)
#scipy.stats.pearsonr(subset., y)
d.append({
'group': group_num,
'n': len(subset),
'mean_damaging_prob': mean_damaging_prob,
'std_damaging_prob': std_damaging_prob,
'mean_diff': mean_diff,
'abs_mean_diff': np.abs(mean_diff),
'rev_sample': ' '.join([str(r) for r in subset.sample(n=5).rev_id]),
'damaging_rev_sample': ' '.join([str(r) for r in subset[subset.damaging_prob_calibrated >= 0.9].sample(n=5).rev_id]),
'nondamaging_rev_sample': ' '.join([str(r) for r in subset[subset.damaging_prob_calibrated <= 0.1].sample(n=5).rev_id])
})
len(d)
group_df = pd.DataFrame(d)
len(group_df)
group_df.head()
group_df.sort_values(by='abs_mean_diff', ascending=False)
group_df.sort_values(by='std_damaging_prob', ascending=False)
# https://en.wikipedia.org/w/index.php?diff=
group_num = 50
print("damaging")
for link in [f"https://en.wikipedia.org/w/index.php?diff={rev_id}" for rev_id in group_df[group_df.group == group_num].damaging_rev_sample.iloc[0].split(" ")]:
print(link)
print("nondamaging")
for link in [f"https://en.wikipedia.org/w/index.php?diff={rev_id}" for rev_id in group_df[group_df.group == group_num].nondamaging_rev_sample.iloc[0].split(" ")]:
print(link)
```
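The grouped comparison above is essentially a calibration curve. The same bookkeeping can be collapsed into a single summary number, the expected calibration error. This is a sketch of that extension, not part of the original analysis; the function is generic and only assumes arrays of predicted probabilities and binary outcomes:

```python
import numpy as np

def expected_calibration_error(probs, outcomes, n_bins=100):
    """Weighted average gap between predicted probability and observed rate."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    # same binning as revert_group above: int(prob * 100), clipped to the last bin
    groups = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for g in np.unique(groups):
        mask = groups == g
        predicted = probs[mask].mean()    # average predicted probability in the bin
        observed = outcomes[mask].mean()  # observed positive rate in the bin
        ece += mask.mean() * abs(predicted - observed)
    return ece

# A perfectly calibrated toy example: 20% predicted, 20% observed
probs = [0.2] * 10
outcomes = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(expected_calibration_error(probs, outcomes))  # → 0.0
```

Applied to the dataframe above, this would be something like `expected_calibration_error(df.revert_prob, df.is_reverted)`.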
| github_jupyter |
```
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random, numpy as np
import pandas as pd
import matplotlib.pyplot as plt
torch.manual_seed(1)
```
## Loading the datasets, i.e. loading frames for a few actions
```
#loading and prepping data
#initially only one action
dframe = pd.read_csv('./csv_data/action_1.csv')
dframe2 = pd.read_csv('./csv_data/action_2.csv')
dframe3 = pd.read_csv('./csv_data/action_3.csv')
dframe4 = pd.read_csv('./csv_data/action_4.csv')
dframe5 = pd.read_csv('./csv_data/action_5.csv')
dframe6 = pd.read_csv('./csv_data/action_6.csv')
dframe7 = pd.read_csv('./csv_data/action_7.csv')
#to look at data
dframe.iloc[0:5, :]
```
## Some utility functions to split the datasets and load them in batches
```
#making test and train split
#the recentering has been done so that the pelvic joint is always at the origin
#labels are to be zero indexed
def train_test_split(dframe_list):
train_split = np.empty(0, dtype=object)
test_split = np.empty(0, dtype=object)
for dframe in dframe_list:
label = dframe.iloc[0,75]-1
# print(label)
num_samples = len(dframe.iloc[:,:])
video_ids = np.unique(dframe.iloc[:,-1].values)
train_video_ids = video_ids[:-15]
test_video_ids = video_ids[-15:]
train_split1 = np.empty(len(train_video_ids), dtype=object)
test_split1 = np.empty(len(test_video_ids), dtype=object)
for idx,i in enumerate(train_video_ids):
train_split1[idx] = dframe.loc[dframe['video_id'] == i].values[:,0:75]
for fidx, f in enumerate(train_split1[idx]):
f = np.reshape(f, (25,3))
f = f-f[0,:]
f = np.reshape(f, (1,75))
train_split1[idx][fidx] = f
# mean_vec = np.mean(train_split1[idx], axis=0)
# std_vec = np.std(train_split1[idx], axis=0)
train_split1[idx] = (train_split1[idx], label)
for idx,i in enumerate(test_video_ids):
test_split1[idx] = dframe.loc[dframe['video_id'] == i].values[:,0:75]
for fidx, f in enumerate(test_split1[idx]):
f = np.reshape(f, (25,3))
f = f-f[0,:]
f = np.reshape(f, (1,75))
test_split1[idx][fidx] = f
# mean_vec = np.mean(test_split1[idx], axis=0)
# std_vec = np.std(test_split1[idx], axis=0)
test_split1[idx] = (test_split1[idx], label)
train_split = np.concatenate((train_split, train_split1))
test_split = np.concatenate((test_split, test_split1))
return train_split, test_split
train_split, test_split = train_test_split([dframe, dframe2, dframe3, dframe4, dframe5, dframe6, dframe7])
# #looking at split
train_split[0:3]
SEQ_LEN = None
def Data_gen( train_split, SEQ_LEN):
while(True):
X = train_split
databatch = random.sample(list(X), 1)[0]
# print(databatch)
databatch, label = databatch[0], databatch[1]
if SEQ_LEN is not None:
if len(databatch) > SEQ_LEN:
databatch = databatch[0:SEQ_LEN]
elif len(databatch) < SEQ_LEN:
databatch = np.concatenate((databatch, np.zeros((SEQ_LEN - len(databatch), 75))))
else:
pass
yield databatch,label
else:
yield databatch,label
ACTd = Data_gen(train_split, SEQ_LEN)
#to look at a batch created by ACTd
next(ACTd)
```
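The truncate-or-pad branch inside `Data_gen` can be factored into a small standalone helper, which makes its behaviour easier to test (an illustrative sketch, not part of the original notebook; the width of 75 matches the flattened joint vectors above):

```python
import numpy as np

def pad_or_truncate(seq, seq_len, width=75):
    """Clip a (frames, width) array to seq_len rows, or zero-pad up to seq_len."""
    if len(seq) > seq_len:
        return seq[:seq_len]
    if len(seq) < seq_len:
        return np.concatenate((seq, np.zeros((seq_len - len(seq), width))))
    return seq

print(pad_or_truncate(np.ones((3, 75)), 5).shape)  # → (5, 75)
print(pad_or_truncate(np.ones((9, 75)), 5).shape)  # → (5, 75)
```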
## LSTM Classifier model definition and initialisation
```
#action LSTM
class LSTMClassifier(nn.Module):
def __init__(self, joints_dim, hidden_dim, label_size, batch_size, num_layers, kernel_size):
super(LSTMClassifier, self).__init__()
self.hidden_dim = hidden_dim
self.batch_size = batch_size
self.num_layers = num_layers
joints_dim2d = joints_dim - 25
self.lstm3 = nn.LSTM(joints_dim, hidden_dim, num_layers=self.num_layers)
self.lstm2_1 = nn.LSTM(joints_dim2d, hidden_dim, num_layers=self.num_layers)
self.lstm2_2 = nn.LSTM(joints_dim2d, hidden_dim, num_layers=self.num_layers)
self.lstm2_3 = nn.LSTM(joints_dim2d, hidden_dim, num_layers=self.num_layers)
self.conv1_1 = nn.Conv1d(4, 2, kernel_size, stride=1, padding=1) #for kernel size=3
self.conv1_2 = nn.Conv1d(2, 1, kernel_size, stride=1, padding=1) #for kernel size=3
self.hidden3 = self.init_hidden3()
self.hidden2_1 = self.init_hidden2_1()
self.hidden2_2 = self.init_hidden2_2()
self.hidden2_3 = self.init_hidden2_3()
self.hidden2label = nn.Linear(hidden_dim, label_size)
def init_hidden3(self):
# the first is the hidden h
# the second is the cell c
return (autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()),
autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()))
def init_hidden2_1(self):
# the first is the hidden h
# the second is the cell c
return (autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()),
autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()))
def init_hidden2_2(self):
# the first is the hidden h
# the second is the cell c
return (autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()),
autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()))
def init_hidden2_3(self):
# the first is the hidden h
# the second is the cell c
return (autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()),
autograd.Variable(torch.zeros(self.num_layers, self.batch_size, self.hidden_dim).cuda()))
def forward(self, joints3d_vec):
x3 = joints3d_vec
x2 = x3.view(-1, 25, 3)
x2_1 = x2[:,:,1:3].contiguous().view(-1, 1, 50)
x2_2 = x2[:,:,0:2].contiguous().view(-1, 1, 50)
x2_3 = x2[:,:,[0,2]].contiguous().view(-1, 1, 50)
# print('x2_3 : ',x2_3.size())
lstm_out3, self.hidden3 = self.lstm3(x3, self.hidden3)
lstm_out2_1, self.hidden2_1 = self.lstm2_1(x2_1, self.hidden2_1)
lstm_out2_2, self.hidden2_2 = self.lstm2_2(x2_2, self.hidden2_2)
lstm_out2_3, self.hidden2_3 = self.lstm2_3(x2_3, self.hidden2_3)
# print('lstm_out[-1] : ', lstm_out[-1].size())
t3 = lstm_out3[-1]
# print('t3 : ', t3.size())
t2_1 = lstm_out2_1[-1]
t2_2 = lstm_out2_2[-1]
t2_3 = lstm_out2_3[-1]
# print('t2_3 : ', t2_3.size())
t = autograd.Variable(torch.zeros(self.batch_size, 4, self.hidden_dim).cuda())
t[:,0,:] = t3
t[:,1,:] = t2_1
t[:,2,:] = t2_2
t[:,3,:] = t2_3
# print('t : ', t.size())
y3 = self.conv1_1(t)
# print('y3 : ', y3.size())
y3 = self.conv1_2(y3)
# print('y3 : ', y3.size())
y3 = y3.contiguous().view(-1, self.hidden_dim)
# print('y3 : ', y3.size())
y = self.hidden2label(y3)
# return raw logits: F.cross_entropy applies log_softmax internally,
# so applying softmax here first would be a bug
return y
# instantiating a model
model0 = LSTMClassifier(75, 512, 7, 1, 2, 3)
#to do stuff in CUDA
model0 = model0.cuda()
Xt = autograd.Variable(torch.rand(23, 1, 75).cuda())
model0(Xt)
```
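The slicing in `forward()` that turns each 3D skeleton into three 2D projections (dropping x, z, or y in turn) is easier to see on a plain NumPy array. A small sketch of the same indexing, outside the model:

```python
import numpy as np

frames = np.arange(2 * 25 * 3).reshape(2, 25, 3)  # (time, 25 joints, xyz)
yz = frames[:, :, 1:3].reshape(-1, 1, 50)     # drop x -> 25 joints * 2 coords
xy = frames[:, :, 0:2].reshape(-1, 1, 50)     # drop z
xz = frames[:, :, [0, 2]].reshape(-1, 1, 50)  # drop y
print(yz.shape, xy.shape, xz.shape)  # → (2, 1, 50) (2, 1, 50) (2, 1, 50)
```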
## Training the model
```
def evaluate_accuracy(model, test_split):
pred_labels = np.empty(len(test_split))
orig_labels = np.array([t[1] for t in test_split])
for i in range(len(test_split)):
d_in = autograd.Variable(torch.from_numpy(test_split[i][0]).float().cuda())
d_in = d_in.view(d_in.size()[0], 1, -1)
y_pred = model(d_in)
pred_labels[i] = y_pred.data.cpu().max(1)[1].numpy()[0]
n_samples = len(pred_labels)
res=(orig_labels==pred_labels)
correct_count = (res==True).sum()
return (correct_count*100/n_samples)
#training function
def train(model, num_epoch, num_iter, rec_interval, disp_interval):
optimizer = optim.Adam(model.parameters(), lr = 1e-5)
loss_values = []
avg_loss_values = []
rec_step = 0
print('Starting the training ...')
for eph in range(num_epoch):
print('epoch {} starting ...'.format(eph))
avg_loss = 0
n_samples = 0
for i in range(num_iter):
model.hidden3 = (model.hidden3[0].detach(), model.hidden3[1].detach())
model.hidden2_1 = (model.hidden2_1[0].detach(), model.hidden2_1[1].detach())
model.hidden2_2 = (model.hidden2_2[0].detach(), model.hidden2_2[1].detach())
model.hidden2_3 = (model.hidden2_3[0].detach(), model.hidden2_3[1].detach())
model.zero_grad()
X,Y = next(ACTd)
n_samples += len(X)
X = autograd.Variable(torch.from_numpy(X).float().cuda())
X = X.view(len(X), 1, -1)
Y = autograd.Variable(torch.LongTensor(np.array([Y])).cuda())
y_hat = model(X)
# print(eph, i, y_hat, Y)
loss = F.cross_entropy(y_hat, Y)
# print(loss)
avg_loss += loss.item()
if i % disp_interval == 0:
print('epoch: %d iterations: %d loss: %g' % (eph, i, loss.item()))
if rec_step % rec_interval == 0:
loss_values.append(loss.item())
loss.backward()
optimizer.step()
rec_step += 1
avg_loss /= n_samples
avg_loss_values.append(avg_loss)
#evaluating model accuracy
acc = evaluate_accuracy(model, test_split)
print('epoch: {} <====train track===> avg_loss: {}, accuracy: {}% \n'.format(eph, avg_loss, acc))
return loss_values, avg_loss_values
loss_vals, avg_loss_vals = train(model0, 100, 1000, 2, 100) #ran 4 times with 3e-5,1e-5, 1e-5, 1e-6
plt.figure()
plt.plot(loss_vals)
plt.figure()
plt.plot(avg_loss_vals)
plt.xlabel('epoch')
plt.ylabel('avg loss')
def save_model(model_name, path, model):
p = path+'/'+model_name
print('saving at {}'.format(p))
torch.save(model.state_dict(), p)
print('saved at {}'.format(p))
save_model('LSTMClassifierX2_c7.pth', './checkpoints', model0)
```
| github_jupyter |
# [LEGALST-123] Lab 07: Intro to Folium
```
!pip install folium --upgrade
#from datascience import *
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import folium
from folium.plugins import HeatMap
import json
import os
```
## Data
This lab will serve as an introduction to the folium package. We will learn how to plot a folium map, overlay it with a json file, and add markers. Consult (https://python-visualization.github.io/folium/quickstart.html) for help! We will use folium's interface for building the base maps, and the "us-states" dataset for overlaying a GeoJSON file.
## Basic Mapping
```
us_states = os.path.join('data', 'us-states.json')
geo_json_data = json.load(open(us_states))
```
First, plot a map of the United States. To do this:
1. Look up the coordinates for the geographic center of the United States
2. Use folium.Map to plot the U.S. based on these coordinates
3. Experiment with different zoom_start numbers to visualize the continental U.S.
```
m = folium.Map([39.83, -98.59], zoom_start=4)
m
```
#### Question 1
Where is the geographic center of the United States? Why does it make sense to start from the center and then zoom out?
## Overlay with GeoJson data
In this next section, we will learn how to work with GeoJSON data. JavaScript Object Notation (JSON) is an open-standard file format that maps text into attributes. GeoJSON is a particular implementation of this that stores geographic information (shapes of boundaries, coordinates, etc.). Now use folium.GeoJson to overlay the "us_states" json file on top of the map of the U.S.
Hint: The GeoJSON file was already loaded in earlier with the "json.load" command!
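For reference, a GeoJSON file is ordinary JSON with a fixed vocabulary: a `FeatureCollection` holding `Feature` objects, each with `properties` and a `geometry`. A minimal, entirely hypothetical example:

```python
import json

# A made-up one-feature GeoJSON document, just to show the structure
minimal_geojson = """
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {"name": "Example State"},
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[-100, 40], [-99, 40], [-99, 41], [-100, 41], [-100, 40]]]
      }
    }
  ]
}
"""
data = json.loads(minimal_geojson)
print(data["features"][0]["properties"]["name"])  # → Example State
```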
```
geo_json_data
folium.GeoJson(geo_json_data
).add_to(m)
m
```
Add a "style function" to customize your GeoJSON overlay! Try following this format:
folium.GeoJson(
geo_json_data,
style_function = lambda feature: {
'fillColor':
'color':
'weight':
'dashArray':
}
).add_to()
Experiment with different options for these arguments, then call your map.
```
folium.GeoJson(
geo_json_data,
style_function=lambda feature: {
'fillColor': '#ffff00',
'color': 'black',
'weight': 2,
'dashArray': '2, 2'
}
).add_to(m)
m
```
Now write code that customizes the 'fillColor' argument to fill in states that you have visited!
Here's a hint:
'fillColor': 'green' if 'e' in feature['properties']['name'].lower() else '#ffff00'
would generate a map where states with 'e' in their names would be shaded green, and the rest would be shaded yellow ('#ffff00' is the hex color for yellow). Try adding more "else" statements for the states that you have visited.
```
visited = {
'california', 'new jersey', 'new york', 'pennsylvania', 'maryland',
'delaware', 'connecticut', 'massachusetts', 'ohio', 'indiana',
'illinois', 'wisconsin', 'michigan', 'virginia', 'north carolina',
'florida', 'louisiana', 'arizona', 'utah', 'nevada', 'washington',
'oregon', 'colorado', 'georgia'
}
folium.GeoJson(
geo_json_data,
style_function=lambda feature: {
# substring match, so e.g. 'virginia' also shades West Virginia
'fillColor': 'green' if any(s in feature['properties']['name'].lower() for s in visited) else '#ffff00',
'color': 'black',
'weight': 2,
'dashArray': '5, 5'
}
).add_to(m)
m
```
## Tiles and Markers
Now let's try plotting San Francisco! Do the following:
1. Save the coordinates into an object called "sf_coords"
2. Use "sf_coords" in the "folium.Map" argument instead of explicitly calling the coordinates
3. Create four maps using zoom levels 1, 4, 12, and 20. Which one provides the best look at San Francisco?
```
sf_coords = (37.76, -122.45)
sfmap_zoom1 = folium.Map(sf_coords, zoom_start = 1)
sfmap_zoom4 = folium.Map(sf_coords, zoom_start = 4)
sfmap_zoom12 = folium.Map(sf_coords, zoom_start = 12)
sfmap_zoom20 = folium.Map(sf_coords, zoom_start = 20)
sfmap_zoom1
sfmap_zoom4
sfmap_zoom12
sfmap_zoom20
```
Now add the "tiles" argument to folium.Map. Plot maps of San Francisco with the "Stamen Toner" tileset, and the "Open Street Map" tileset.
```
sfmap_stamen_toner = folium.Map(sf_coords, tiles = "Stamen Toner", zoom_start = 12)
sfmap_stamen_toner
sfmap_osm = folium.Map(sf_coords, tiles = "Open Street Map", zoom_start = 10)
sfmap_osm
```
If you would like to try out other tilesets, a list of custom styles can be found here: http://leaflet-extras.github.io/leaflet-providers/preview/.
```
Plain JavaScript:
var OpenStreetMap_DE = L.tileLayer('https://{s}.tile.openstreetmap.de/tiles/osmde/{z}/{x}/{y}.png', {
maxZoom: 18,
attribution: '© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
});
```
You will find a snippet of JavaScript code for each tileset. To import a custom tileset into Python, assign the `tiles` parameter to the URL generated (it starts with 'https' and ends with '.png'), and assign the `attr` parameter to `'© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'`.
The full code for generating a map with custom styles is:
```
folium.Map([x_coord, y_coord], tiles="https...png", attr='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors')
```
For example,
```
folium.Map(sf_coords, tiles="https://{s}.tile.openstreetmap.de/tiles/osmde/{z}/{x}/{y}.png", attr='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors')
```
Next, plot a map of the United States again. Place markers at the following famous landmarks:
1. The Statue of Liberty (New York City/New Jersey)
2. Yosemite National Park (California)
3. Gateway Arch (St. Louis, Missouri)
4. Independence Hall (Philadelphia, Pennsylvania)
5. Cloud Gate (Chicago, Illinois)
6. United States Capitol Building (Washington D.C.)
7. Jackson Square (New Orleans, Louisiana)
8. Space Needle (Seattle, Washington)
9. Walt Disney World (Orlando, Florida)
10. Las Vegas Strip (Las Vegas, Nevada)
Experiment with different icon types by using the "folium.Icon" argument! Try placing one "cloud" icon and one "green" icon on the map!
```
us_map = folium.Map([39.83, -98.59], zoom_start=4)
folium.Marker([40.6892, -74.0445], popup = 'Statue of Liberty', icon = folium.Icon(icon='cloud')).add_to(us_map)
folium.Marker([37.8651, -119.5393], popup = "Yosemite National Park", icon = folium.Icon(color = 'green')).add_to(us_map)
folium.Marker([38.6247, -90.1848], popup = "Gateway Arch").add_to(us_map)
folium.Marker([39.9489, -75.1500], popup = "Independence Hall").add_to(us_map)
folium.Marker([41.8827, -87.6233], popup = "Cloud Gate").add_to(us_map)
folium.Marker([38.8899, -77.0091], popup = "United States Capitol Building").add_to(us_map)
folium.Marker([29.9574, -90.0629], popup = "Jackson Square").add_to(us_map)
folium.Marker([47.6205, -122.3493], popup = "Space Needle").add_to(us_map)
folium.Marker([28.3852, -81.5639], popup = "Walt Disney World").add_to(us_map)
folium.Marker([36.1147, -115.1728], popup = "Las Vegas Strip").add_to(us_map)
us_map
```
Next, create a map of Berkeley. Use the "folium.ClickForMarker()" method to place markers at the following locations by clicking:
1. Barrows Hall
2. UC Berkeley Law
3. The Big C
4. The Campanile
```
berkeley_map = folium.Map([37.8716, -122.2727], zoom_start=15)
berkeley_map.add_child(folium.ClickForMarker())
berkeley_map
```
| github_jupyter |
## Send More Money cryptarithmetic puzzle
While not often spoken about as a classic data science technique,
constraint programming can be a very useful tool in numerous scenarios.
We'll look at solving a problem using brute force and then how
constraint programming provides a very declarative style
which saves us having to worry about the implementation details.
For our purposes, we'll use a classic example of a [cryptarithmetic puzzle](https://en.wikipedia.org/wiki/Verbal_arithmetic).
Such puzzles have several words arranged as a mathematical equation.
The goal is to guess each letter where each letter represents a different digit.
By convention, the leading digit of a multi-digit number should not be zero.
For us, the puzzle is:<br>
<code> S E N D</code><br>
<code> + M O R E</code><br>
<code> = M O N E Y</code>
### Brute force approaches
First a brute force solution in Python:
```
def solutions():
# letters = ('s', 'e', 'n', 'd', 'm', 'o', 'r', 'y')
all_solutions = list()
for s in range(1, 10):
for e in range(0, 10):
for n in range(0, 10):
for d in range(0, 10):
for m in range(1, 10):
for o in range(0, 10):
for r in range(0, 10):
for y in range(0, 10):
if len({s, e, n, d, m, o, r, y}) == 8:
send = 1000 * s + 100 * e + 10 * n + d
more = 1000 * m + 100 * o + 10 * r + e
money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
if send + more == money:
all_solutions.append((send, more, money))
return all_solutions
print(solutions())
```
Next a brute force solution in Groovy:
```
%%groovy
for (s in 1..9)
for (e in 0..9)
for (n in 0..9)
for (d in 0..9)
for (m in 1..9)
for (o in 0..9)
for (r in 0..9)
for (y in 0..9)
if ([s, e, n, d, m, o, r, y].toSet().size() == 8) {
def send = 1000 * s + 100 * e + 10 * n + d
def more = 1000 * m + 100 * o + 10 * r + e
def money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
if (send + more == money) {
println "s = $s, e = $e, n = $n, d = $d"
println "m = $m, o = $o, r = $r, y = $y"
}
}
OutputCell.HIDDEN
```
We can use permutations with Python:
```
from itertools import permutations
def solution2():
letters = ('s', 'e', 'n', 'd', 'm', 'o', 'r', 'y')
digits = range(10)
for perm in permutations(digits, len(letters)):
sol = dict(zip(letters, perm))
if sol['s'] == 0 or sol['m'] == 0:
continue
send = 1000 * sol['s'] + 100 * sol['e'] + 10 * sol['n'] + sol['d']
more = 1000 * sol['m'] + 100 * sol['o'] + 10 * sol['r'] + sol['e']
money = 10000 * sol['m'] + 1000 * sol['o'] + 100 * sol['n'] + 10 * sol['e'] + sol['y']
if send + more == money:
return send, more, money
print(solution2())
```
We can use permutations with Groovy:
```
%%groovy
digits = 0..9
for (p in digits.permutations()) {
if (p[-1] < p[-2]) continue
def (s, e, n, d, m, o, r, y) = p
if (s == 0 || m == 0) continue
def send = 1000 * s + 100 * e + 10 * n + d
def more = 1000 * m + 100 * o + 10 * r + e
def money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
if (send + more == money) {
println "s = $s, e = $e, n = $n, d = $d"
println "m = $m, o = $o, r = $r, y = $y"
}
}
OutputCell.HIDDEN
```
### Constraint programming
We can use the [Choco constraint programming library](http://www.choco-solver.org/) which allows us to write our solution in a very declarative style using only constraints.
The set of constraints must be satisfied in every solution.
The constraint programming engine solves by applying various constraint filtering algorithms in combination with a search mechanism.
If you have heard of [Prolog](https://en.wikipedia.org/wiki/Prolog) and back-tracking, you will have the idea.
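Before reaching for a solver library, the declarative idea can be sketched in plain Python: process the addition column by column (D+E=Y, N+R=E, E+O=N, S+M=O, and the final carry must equal M), assign digits only to the letters a column needs, and prune as soon as a column sum fails. This is only an illustration of constraint-style pruning, not the Choco engine used below:

```python
from itertools import permutations

def solve_send_more_money():
    """Column-wise backtracking over SEND + MORE = MONEY, right to left."""
    # each column: (addend letter, addend letter, result letter)
    columns = [('d', 'e', 'y'), ('n', 'r', 'e'), ('e', 'o', 'n'), ('s', 'm', 'o')]

    def backtrack(col, carry, sol):
        if col == len(columns):
            # the final carry is the leading M of MONEY; forbid leading zeros
            if carry == sol['m'] and sol['s'] != 0 and sol['m'] != 0:
                return sol
            return None
        a, b, c = columns[col]
        free = [l for l in dict.fromkeys((a, b, c)) if l not in sol]
        avail = [d for d in range(10) if d not in sol.values()]
        for digits in permutations(avail, len(free)):
            trial = {**sol, **dict(zip(free, digits))}
            total = trial[a] + trial[b] + carry
            if total % 10 == trial[c]:  # check this column immediately, then recurse
                found = backtrack(col + 1, total // 10, trial)
                if found:
                    return found
        return None

    return backtrack(0, 0, {})

sol = solve_send_more_money()
send = int(''.join(str(sol[l]) for l in 'send'))
more = int(''.join(str(sol[l]) for l in 'more'))
print(send, '+', more, '=', send + more)  # → 9567 + 1085 = 10652
```

Because every column sum is checked the moment its letters are assigned, this explores a tiny fraction of the 10P8 permutations the brute-force versions walk through.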
```
%%groovy
@Grab('org.choco-solver:choco-solver:4.10.2')
import org.chocosolver.solver.Model
import org.chocosolver.solver.variables.IntVar
def model = new Model("SEND+MORE=MONEY")
def S = model.intVar("S", 1, 9)
def E = model.intVar("E", 0, 9)
def N = model.intVar("N", 0, 9)
def D = model.intVar("D", 0, 9)
def M = model.intVar("M", 1, 9)
def O = model.intVar("O", 0, 9)
def R = model.intVar("R", 0, 9)
def Y = model.intVar("Y", 0, 9)
model.allDifferent(S, E, N, D, M, O, R, Y).post()
IntVar[] ALL = [
S, E, N, D,
M, O, R, E,
M, O, N, E, Y]
int[] COEFFS = [
1000, 100, 10, 1,
1000, 100, 10, 1,
-10000, -1000, -100, -10, -1]
model.scalar(ALL, COEFFS, "=", 0).post()
//model.solver.findSolution()
model.solver.with {
showStatistics()
// showDecisions()
// showSolutions()
findSolution()
}
```
| github_jupyter |
```
# Installs
!pip install --upgrade -q pip jax jaxlib
!pip install --upgrade -q git+https://github.com/google/flax.git
!pip install --upgrade -q git+https://github.com/rolandgvc/flaxvision.git
# General imports
import jax
import jax.numpy as jnp
import numpy as np
from flax import linen as nn
from flax import optim
from flaxvision import models
from torchvision import datasets
# Load dataset from torchvision into memory
train_ds = datasets.MNIST('./data', train=True, download=True)
test_ds = datasets.MNIST('./data', train=False, download=True)
train_ds = {'image': np.expand_dims(train_ds.data.numpy(), 3),
'label': train_ds.targets.numpy()}
test_ds = {'image': np.expand_dims(test_ds.data.numpy(), 3),
'label': test_ds.targets.numpy()}
train_ds['image'] = jnp.float32(train_ds['image']) / 255.
test_ds['image'] = jnp.float32(test_ds['image']) / 255.
# Instantiate pretrained model
RNG = jax.random.PRNGKey(0)
vgg, vgg_params = models.vgg16(RNG, pretrained=True)
#TODO: test with an image from dataset
batch = jnp.ones((1, 224, 224, 3))
out = vgg.apply(vgg_params, batch, mutable=False)
# Define backbone instantiation as a lambda function
vgg_backbone = lambda: models.VGG.make_backbone(vgg)
# Define new model
from typing import Any
from flax import linen as nn
class Classifier(nn.Module):
dtype: Any = jnp.float32
@nn.compact
def __call__(self, inputs, train: bool = False):
x = nn.Dense(2048, dtype=self.dtype)(inputs)
x = nn.relu(x)
x = nn.Dropout(rate=0.5)(x, deterministic=not train)
x = nn.Dense(2048, dtype=self.dtype)(x)
x = nn.relu(x)
x = nn.Dropout(rate=0.5)(x, deterministic=not train)
x = nn.Dense(10, dtype=self.dtype)(x)
return x
class MyModel(nn.Module):
def setup(self):
self.backbone = vgg_backbone()
self.classifier = Classifier()
def __call__(self, inputs, train: bool = False):
x = self.backbone(inputs, train=False)
x = x.transpose((0, 3, 1, 2))
x = x.reshape((x.shape[0], -1))
x = self.classifier(x, train)
return x
# Setup training loop
def get_initial_params(key):
init_shape = jnp.ones((1, 224, 224, 3), jnp.float32)
initial_params = MyModel().init(key, init_shape)['params']
return initial_params
def create_optimizer(params, learning_rate, beta):
optimizer_def = optim.Momentum(learning_rate=learning_rate, beta=beta)
optimizer = optimizer_def.create(params)
return optimizer
def onehot(labels, num_classes=10):
x = (labels[..., None] == jnp.arange(num_classes)[None])
return x.astype(jnp.float32)
def cross_entropy_loss(logits, labels):
return -jnp.mean(jnp.sum(onehot(labels) * jax.nn.log_softmax(logits), axis=-1))
def compute_metrics(logits, labels):
loss = cross_entropy_loss(logits, labels)
accuracy = jnp.mean(jnp.argmax(logits, -1) == labels)
metrics = {
'loss': loss,
'accuracy': accuracy,
}
return metrics
@jax.jit
def train_step(optimizer, batch):
"""Train for a single step."""
def loss_fn(params):
logits = MyModel().apply({'params': params}, batch['image'])
loss = cross_entropy_loss(logits, batch['label'])
return loss, logits
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
(_, logits), grad = grad_fn(optimizer.target)
optimizer = optimizer.apply_gradient(grad)
metrics = compute_metrics(logits, batch['label'])
return optimizer, metrics
@jax.jit
def eval_step(params, batch):
logits = MyModel().apply({'params': params}, batch['image'])
return compute_metrics(logits, batch['label'])
def train_epoch(optimizer, train_ds, batch_size, epoch, rng):
"""Train for a single epoch."""
train_ds_size = len(train_ds['image'])
steps_per_epoch = train_ds_size // batch_size
perms = jax.random.permutation(rng, len(train_ds['image']))
perms = perms[:steps_per_epoch * batch_size] # skip incomplete batch
perms = perms.reshape((steps_per_epoch, batch_size))
batch_metrics = []
for perm in perms:
batch = {k: v[perm] for k, v in train_ds.items()}
optimizer, metrics = train_step(optimizer, batch)
batch_metrics.append(metrics)
# compute mean of metrics across each batch in epoch.
batch_metrics_np = jax.device_get(batch_metrics)
epoch_metrics_np = {
k: np.mean([metrics[k] for metrics in batch_metrics_np])
for k in batch_metrics_np[0]}
print('train epoch: %d, loss: %.4f, accuracy: %.2f' % (epoch,
epoch_metrics_np['loss'], epoch_metrics_np['accuracy'] * 100))
return optimizer, epoch_metrics_np
def eval_model(model, test_ds):
metrics = eval_step(model, test_ds)
metrics = jax.device_get(metrics)
summary = jax.tree_map(lambda x: x.item(), metrics)
return summary['loss'], summary['accuracy']
# Run training loop
from flax.metrics import tensorboard
summary_writer = tensorboard.SummaryWriter('./logs')  # log dir is an arbitrary choice
learning_rate, momentum = 1e-3, 0.9  # hyperparameters (values not specified above)
num_epochs, batch_size = 10, 32
rng, init_rng = jax.random.split(RNG)
params = get_initial_params(init_rng)
optimizer = create_optimizer(params, learning_rate, momentum)
for epoch in range(1, num_epochs + 1):
rng, input_rng = jax.random.split(rng)
optimizer, train_metrics = train_epoch(optimizer, train_ds, batch_size,
epoch, input_rng)
loss, accuracy = eval_model(optimizer.target, test_ds)
summary_writer.scalar('train_loss', train_metrics['loss'], epoch)
summary_writer.scalar('train_accuracy', train_metrics['accuracy'], epoch)
summary_writer.scalar('eval_loss', loss, epoch)
summary_writer.scalar('eval_accuracy', accuracy, epoch)
summary_writer.flush()
# checkpointing sketch -- `state`, `steps_per_checkpoint`, `sync_batch_stats` and
# `checkpoints` are not defined in this notebook, so this part is left commented out:
# if (step + 1) % steps_per_checkpoint == 0 or step + 1 == num_steps:
#     state = sync_batch_stats(state)
#     checkpoints.save_checkpoint(workdir, state, int(state.step), keep=3)
# Load from checkpoint and inference
```
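The `onehot`/`cross_entropy_loss` pair above is easy to sanity-check in isolation with plain NumPy: the loss is the negative mean log-probability assigned to the true class, so log-probabilities concentrated on the correct labels should give a loss near zero (an illustrative check, independent of the model):

```python
import numpy as np

def onehot(labels, num_classes=10):
    return (np.asarray(labels)[..., None] == np.arange(num_classes)[None]).astype(np.float32)

def cross_entropy_loss(log_probs, labels):
    # negative mean log-probability of the true class
    return -np.mean(np.sum(onehot(labels) * log_probs, axis=-1))

labels = np.array([3, 7])
print(onehot(labels).shape)  # → (2, 10)

# log-probabilities that put (almost) all mass on the true class: loss ~ 0
log_probs = np.full((2, 10), -1e9)
log_probs[0, 3] = 0.0  # log(1) for the true class of sample 0
log_probs[1, 7] = 0.0
print(abs(cross_entropy_loss(log_probs, labels)) < 1e-6)  # → True
```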
| github_jupyter |
# Random walk baseline
```
import numpy as np
import pandas as pd
from scipy.fftpack import dct, idct
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.gridspec as gridspec
from music21 import converter
matplotlib.style.use('styles.mplstyle')
import scipy
import scipy.interpolate
import sys
sys.path.append('../')
from helpers import cm2inch, title
def show_length_distribution(lam=12, size=100000):
lengths = np.random.poisson(lam=lam, size=size)
lengths[np.where(lengths < 3)] = 3
sns.histplot(lengths, stat='probability', discrete=True, lw=0, shrink=.8)
# plt.plot(scipy.stats.poisson.pmf(range(30), mu=12), 'C3.-', lw=.5, label='Poisson(12)')
plt.xlabel('num. notes')
def show_step_distribution(n=10, p=0.5, size=100000):
steps = np.random.binomial(n=n, p=p, size=size) - n*p
sns.histplot(steps, stat='probability', discrete=True, lw=0, shrink=.8)
plt.figure(figsize=cm2inch(8.2, 3))
plt.subplot(121)
show_length_distribution()
title('Poisson length distr.')
plt.subplot(122)
show_step_distribution()
title('Binomial step distr.')
def random_contour(lam=12, num_samples=50, n=10, p=0.5):
length = max(3, np.random.poisson(lam=lam))
contour = [np.random.randint(60,85)]
for i in range(1, length):
step = np.random.binomial(n, p) - n*p
step = min(max(-12, step), 12)
if (contour[i-1] + step > 84) or (contour[i-1] + step < 60):
contour.append(contour[i-1] - step)
else:
contour.append(contour[i-1] + step)
contour.append(contour[-1])
times = np.linspace(0, 1, len(contour))
func = scipy.interpolate.interp1d(times, contour, kind='previous')
return func(np.linspace(0, 1, num_samples))
contours = np.array([random_contour() for _ in range(10000)])
def show_random_contour_examples(contours=contours, num_examples=3):
plt.plot(contours[:num_examples, :].T, '.-', lw=.5)
plt.xlabel('time step')
plt.ylabel('pitch')
plt.figure(figsize=cm2inch(8.2, 4))
show_random_contour_examples()
title('Examples of random contours')
def show_avg_contour(contours, f=1, num_examples=3, color='k'):
xs = np.arange(contours.shape[1])
contours = contours - contours.mean(axis=1)[:, np.newaxis]
mean = contours.mean(axis=0)
std = f/2 * contours.std(axis=0)
plt.plot(xs, mean, color, lw=1, label=f'avg contour')
plt.fill_between(xs, mean-std, mean+std, alpha=.1,
color=color, label=f'1 std dev', lw=0)
plt.plot(xs, contours[:num_examples, :].T, ':', lw=.5)
plt.plot(0, 0, 'k:', lw=.5, label='Examples')
plt.legend()
plt.ylabel('pitch (w.r.t. mean)')
plt.xlabel('time step')
plt.figure(figsize=cm2inch(8.2, 3))
show_avg_contour(contours)
```
## Combined plot
```
fig = plt.figure(figsize=cm2inch(8.2, 8))
gs = gridspec.GridSpec(3, 2)
ax = fig.add_subplot(gs[0, 0])
show_length_distribution()
title('A. Poisson length distr.')
ax = fig.add_subplot(gs[0, 1])
show_step_distribution()
title('B. Binomial step distr.')
ax = fig.add_subplot(gs[1, :])
show_random_contour_examples()
title('C. Examples of random contours')
ax = fig.add_subplot(gs[2, :])
show_avg_contour(contours)
title('D. Average contour')
plt.legend(ncol=3)
plt.tight_layout()
plt.savefig('../figures/suppl-S1/figS01a.pdf')
```
## Toeplitz difference
```
def toeplitz_difference(S):
diffs = np.zeros(S.shape)
for i in range(S.shape[0]):
for j in range(S.shape[1]):
k = j - i
avg = np.mean(np.diag(S, k))
diffs[i, j] = S[i, j] - avg
return diffs
contours4 = np.array([random_contour(lam=4, num_samples=100) for _ in range(10000)])
contours12 = np.array([random_contour(lam=12, num_samples=100) for _ in range(10000)])
contours50 = np.array([random_contour(lam=50, num_samples=100) for _ in range(10000)])
contours100 = np.array([random_contour(lam=100, num_samples=100) for _ in range(10000)])
S4 = np.cov(contours4.T)
S12 = np.cov(contours12.T)
S50 = np.cov(contours50.T)
S100 = np.cov(contours100.T)
def title(text, ax=None):
if ax is None: ax = plt.gca()
ax.set_title(text, ha='left', x=0)
def show_row(S, gs, row, cbar=False):
ax1 = fig.add_subplot(gs[row, 0])
plt.imshow(S)
plt.xticks([0, 50, 100])
plt.yticks([0, 50, 100])
if cbar: plt.colorbar()
ax2 = fig.add_subplot(gs[row, 1])
plt.imshow(toeplitz_difference(S),
cmap='RdBu_r', vmin=-10, vmax=10)
plt.xticks([0, 50, 100])
plt.yticks([0, 50, 100])
if cbar: plt.colorbar()
ax3 = fig.add_subplot(gs[row, 2])
lam, V = np.linalg.eig(S)
top_lambdas = lam.argsort()[::-1][1:4]
plt.plot(V[:, top_lambdas])
return ax1, ax2, ax3
fig = plt.figure(figsize=cm2inch(8.2, 8))
gs = gridspec.GridSpec(3, 3)
ax1, ax2, ax3 = show_row(S4, gs, 0, cbar=False)
title('A. Covariance', ax=ax1)
title('B. Toeplitzness', ax=ax2)
title('C. PCs', ax=ax3)
ax1.set_ylabel('avg. length 4', fontweight='bold')
axs = show_row(S12, gs, 1, cbar=False)
axs[0].set_ylabel('avg. length 12', fontweight='bold')
axs = show_row(S100, gs, 2, cbar=False)
axs[0].set_ylabel('avg. length 100', fontweight='bold')
plt.tight_layout()
plt.savefig('../figures/suppl-S1/figS01b.pdf')
```
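The `toeplitz_difference` above depends only on diagonal means, so it is easy to sanity-check without NumPy: an exactly Toeplitz matrix (constant diagonals, i.e. a stationary covariance) must give all-zero differences. A pure-Python sketch of the same logic:

```python
def toeplitz_difference_py(S):
    """Subtract each diagonal's mean from its entries (pure-Python sketch)."""
    n = len(S)
    # mean of diagonal k, where k = j - i, for k in [-(n-1), n-1]
    diag_mean = {}
    for k in range(-(n - 1), n):
        vals = [S[i][i + k] for i in range(n) if 0 <= i + k < n]
        diag_mean[k] = sum(vals) / len(vals)
    return [[S[i][j] - diag_mean[j - i] for j in range(n)] for i in range(n)]

# A Toeplitz matrix (constant diagonals) has zero difference everywhere
S = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
diffs = toeplitz_difference_py(S)
```

Any deviation from stationarity (e.g. boosting `S[0][0]`) shows up as a nonzero entry, which is what panel B visualizes.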
| github_jupyter |
```
import os
import sys
import time
import base64
from io import BytesIO
import numpy as np
from PIL import Image
sys.path.append("..")
from dash_reusable_components import *
# Display a smaller version of the image
def display(im, new_width=400):
ratio = new_width / im.size[0]
new_height = round(im.size[1] * ratio)
return im.resize((new_width, new_height))
```
## Testing PIL vs b64
```
image_path = "../images/IU.jpg"
im = Image.open(image_path)
print("Shape of Image:", im.size)
print("Size of Image:", os.stat(image_path).st_size, "bytes")
display(im)
```
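Whatever the image format, base64 adds a fixed transport overhead: every 3 input bytes become 4 ASCII characters (plus up to two `=` padding characters). A quick stdlib check, with dummy bytes standing in for the encoded image:

```python
import base64

payload = b"\x00" * 300_000  # stand-in for raw image bytes
encoded = base64.b64encode(payload)

print(len(payload), len(encoded), len(encoded) / len(payload))
# 300000 bytes -> 400000 base64 characters: a fixed 4/3 size overhead
```

This is why the "size of string" numbers below are always about a third larger than the compressed image itself.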
### Encoding
```
enc_png = pil_to_b64(im)
print("PNG results:")
print("Length of string:", len(enc_png))
print("Size of string:", sys.getsizeof(enc_png), "bytes")
print("Time taken to convert from PIL to b64:")
%timeit pil_to_b64(im)
enc_jpg = pil_to_b64(im, enc_format='jpeg')
print("\nJPEG results:")
print("Length of string:", len(enc_jpg))
print("Size of string:", sys.getsizeof(enc_jpg), "bytes")
print("Time taken to convert from PIL to b64:")
%timeit pil_to_b64(im, enc_format='jpeg')
```
### Decoding
```
dec_png = b64_to_pil(enc_png)
print("PNG results:")
print("Time taken to convert from b64 to PIL:")
%timeit b64_to_pil(enc_png)
dec_jpg = b64_to_pil(enc_jpg)
print("\nJPEG results:")
print("Time taken to convert from b64 to PIL:")
%timeit b64_to_pil(enc_jpg)
decoded = b64_to_pil(enc_png)
display(decoded)
```
## Testing Numpy and b64
### Encoding
```
# Get numpy array from previous image
np_array = np.asarray(im)
print("Numpy array shape:", np_array.shape)
print("Numpy array size:", np_array.nbytes, "bytes")
enc_png = numpy_to_b64(np_array, scalar=False, enc_format='png')
print("\nPNG results:")
print("Length of string:", len(enc_png))
print("Size of string:", sys.getsizeof(enc_png), "bytes")
print("Time taken to convert from Numpy to b64:")
%timeit numpy_to_b64(np_array, scalar=False)
enc_jpg = numpy_to_b64(np_array, scalar=False, enc_format='jpeg')
print("\nJPEG results:")
print("Length of string:", len(enc_jpg))
print("Size of string:", sys.getsizeof(enc_jpg), "bytes")
print("Time taken to convert from Numpy to b64:")
%timeit numpy_to_b64(np_array, scalar=False, enc_format='jpeg')
```
### Decoding
```
dec_png = b64_to_numpy(enc_png, to_scalar=False)
print("PNG results:")
print("Time taken to convert from b64 to Numpy:")
%timeit b64_to_numpy(enc_png)
print("Time taken to convert from b64 to Numpy (to_scalar false):")
%timeit b64_to_numpy(enc_png, to_scalar=False)
dec_jpg = b64_to_numpy(enc_jpg, to_scalar=False)
print("\nJPEG results:")
print("Time taken to convert from b64 to Numpy:")
%timeit b64_to_numpy(enc_jpg)
print("Time taken to convert from b64 to Numpy (to_scalar false):")
%timeit b64_to_numpy(enc_jpg, to_scalar=False)
```
## Testing PIL and Bytes Encoding/Decoding
```
print("Time taken to convert from PIL to bytes string:")
%timeit pil_to_bytes_string(im)
enc_b, im_size, mode = pil_to_bytes_string(im)
print("\nTime taken to convert from bytes string to PIL:")
%timeit bytes_string_to_pil(enc_b, im_size)
```
### Compare Matching for Jpeg and png encodings
```
print("dec_png and np_array are same:", np.all(dec_png == np_array))
print("dec_jpg and np_array are same:", np.all(dec_jpg == np_array))
matching_count = np.count_nonzero(dec_jpg == np_array)
non_matching_count = np.count_nonzero(dec_jpg != np_array)
total = matching_count + non_matching_count
print("\nNumber of matching values:", matching_count)
print("Number of non-matching values:", non_matching_count)
print(f"{100 * matching_count / total:.2f}% matching vs {100 * non_matching_count / total:.2f}% not matching")
display(Image.fromarray(dec_jpg))
```
## Conversion speed at different dimensions
### PIL to b64
```
heights = [360, 480, 720, 1080, 2160]
for height in heights:
width = round(height * 16 / 9)
resized_im = im.resize((width, height))
print(f"Size: {width}x{height}")
print("Time taken to convert from PIL to b64 (png):")
%timeit pil_to_b64(resized_im, enc_format='png')
print("Time taken to convert from PIL to b64 (jpeg):")
%timeit pil_to_b64(resized_im, enc_format='jpeg')
print()
```
### Numpy to b64
```
heights = [360, 480, 720, 1080, 2160]
for height in heights:
width = round(height * 16 / 9)
resized_im = im.resize((width, height))
print(f"Size: {width}x{height}")
print("Time taken to convert from numpy to b64 (png):")
%timeit numpy_to_b64(np.asarray(resized_im), scalar=False)
print("Time taken to convert from numpy to b64 (jpeg):")
%timeit numpy_to_b64(np.asarray(resized_im), enc_format='jpeg', scalar=False)
print()
buff = BytesIO()
%timeit im.save(buff, format='png', compress_level=1)
%timeit encoded = base64.b64encode(buff.getvalue())
```
## Exploring Jpeg Compression
```
from PIL import ImageFilter
dec_jpg.filter(ImageFilter.BLUR).size
im = Image.open('../images/cats.jpg')
np_array = np.asarray(im)
for x in range(1, 11):
enc_jpg = pil_to_b64(im, enc_format='jpeg', quality=100)
dec_jpg = b64_to_pil(enc_jpg)
random = np.random.randint(0, 1500)
# Apply some operation
box = (random, random, random + 50, random + 50)
cropped = dec_jpg.filter(ImageFilter.BLUR).crop(box)
dec_jpg.paste(cropped, box=box)
dec_arr = np.asarray(dec_jpg)
matching_count = np.count_nonzero(dec_arr == np_array)
non_matching_count = np.count_nonzero(dec_arr != np_array)
total = matching_count + non_matching_count
print(f"\nNumber of matching values after {x} compressions: {matching_count}")
print("Number of non-matching values:", non_matching_count)
print(f"{100 * matching_count / total:.2f}% matching vs {100 * non_matching_count / total:.2f}% not matching")
```
### Exploring Lossless jpeg compression (jpeg 2000)
```
def pil_to_b64(im, enc_format='png', verbose=False, **kwargs):
"""
Converts a PIL Image into base64 string for HTML displaying
:param im: PIL Image object
:param enc_format: The image format to encode with. If saved, the image will have that extension.
:return: base64 encoding
"""
t_start = time.time()
buff = BytesIO()
im.save(buff, format=enc_format, **kwargs)
encoded = base64.b64encode(buff.getvalue()).decode("utf-8")
t_end = time.time()
if verbose:
print(f"PIL converted to b64 in {t_end - t_start:.3f} sec")
return encoded
%timeit pil_to_b64(im, enc_format='png')
%timeit pil_to_b64(im, enc_format='jpeg2000')
%timeit pil_to_b64(im, enc_format='jpeg')
```
### Exploring Jpeg compression Sizes
```
%timeit pil_to_b64(im, enc_format='jpeg', quality=100)
%timeit pil_to_b64(im, enc_format='jpeg', quality=95)
im = Image.open('../images/cats.jpg')
print(len(pil_to_b64(im, enc_format='jpeg', quality=90)))
print(len(pil_to_b64(im, enc_format='jpeg', quality=95)))
print(len(pil_to_b64(im, enc_format='jpeg', quality=100)))
```
## Supplementary Exploration
```
import pandas as pd
im = Image.open('../images/IU2.jpg')
arr = np.asarray(im)
print(arr.size)
%timeit im.getdata()
%timeit pil_to_b64(im)
%timeit Image.fromarray(arr)
barr = arr.tobytes()
back = np.frombuffer(barr, dtype=np.uint8).reshape(arr.shape)
display(Image.fromarray(back))
%timeit barr = np.asarray(im).tobytes()
%timeit Image.fromarray(np.frombuffer(barr, dtype=np.uint8).reshape(arr.shape))
%timeit imgSize = im.size
%timeit rawData = im.tobytes()
%timeit Image.frombytes('RGB', imgSize, rawData)
im = Image.open('../images/IU2.jpg')
imgSize = im.size
imb = im.tobytes()
enc_str = base64.b64encode(imb).decode('ascii')
dec = base64.b64decode(enc_str.encode('ascii'))
display(Image.frombytes('RGB', imgSize, dec))
im = Image.open('../images/IU2.jpg')
arr = np.asarray(im)
arrb = arr.tobytes()
enc_str = base64.b64encode(arrb).decode('ascii')
imgSize = arr.shape
dec = base64.b64decode(enc_str.encode('ascii'))
retrieved_arr = np.frombuffer(dec, dtype=np.uint8).reshape(imgSize)
im_retrieved = Image.fromarray(retrieved_arr)
print(type(im_retrieved))
display(im_retrieved)
%timeit pil_to_b64(im, enc_format='bmp')
string = pil_to_b64(im, enc_format='bmp')
%timeit b64_to_pil(string)
# Image utility functions
def pil_to_b64_png(im, verbose=False, comp=6):
"""
Converts a PIL Image into a base64 string (PNG) for HTML displaying
:param im: PIL Image object
:param comp: PNG compress_level passed to the encoder (0-9)
:return: base64 encoding
"""
t_start = time.time()
buff = BytesIO()
im.save(buff, format='png', compress_level=comp)
encoded = base64.b64encode(buff.getvalue()).decode("utf-8")
t_end = time.time()
if verbose:
print(f"PIL converted to b64 in {t_end - t_start:.3f} sec")
return encoded
%timeit pil_to_b64_png(im, comp=1)
string = pil_to_b64_png(im, comp=1)
%timeit b64_to_pil(string)
def func(im):
buff = BytesIO()
im.save(buff, format='png', compress_level=1)
%timeit func(im)
```
| github_jupyter |
# Point Source Deconvolution
Deconvolution of a small, simulated point-source image, the simplest possible example. This is an idealized version of deconvolving subresolution bead images.
**NOTE**: This is a CPU-friendly example; it is not computationally intensive at all.
```
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from flowdec import psf as fd_psf
from flowdec import data as fd_data
from flowdec import restoration as fd_restoration
from scipy.signal import fftconvolve
```
First, create a small image volume with odd edge lengths so that the center is a single voxel:
```
img = np.zeros((11, 11, 11), dtype='float32') # z, y, x
img[5, 5, 5] = 1
img.shape, img.dtype
```
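With an odd edge length along every axis, integer division pins down a unique center voxel; the `(5, 5, 5)` index used above is just `shape // 2` per axis:

```python
shape = (11, 11, 11)  # z, y, x, all odd
center = tuple(s // 2 for s in shape)
# center == (5, 5, 5), the voxel set to 1 above
```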
Create a theoretical PSF to be used to simulate blurring of this image:
```
# This is meant to be representative of a 60x widefield image capture (all distance units are in microns)
psf = fd_psf.GibsonLanni(
na=1.4, # Numerical aperture
m=60, # Magnification
ni0=1.51, # Immersion RI
res_lateral=.125, # X/Y resolution
res_axial=.3, # Axial resolution
wavelength=.580, # Emission wavelength
size_x=img.shape[2],
size_y=img.shape[1],
size_z=img.shape[0]
).generate()
psf.shape, psf.dtype
```
Convolve the simulated point source with the PSF. For a single-voxel source the "blurred" result is just the PSF re-centered on that voxel, but the step is kept so it is easy to try other simulated images with more than one voxel:
```
# Use scipy.ndimage.convolve (which is a lot slower) if fftconvolve moves non-centered pixels into different quadrants
# (fftconvolve uses circular convolution, which can wrap single pixels at quadrant corners into other quadrants)
blur = fftconvolve(img, psf/psf.sum(), mode='same')
blur.shape, blur.dtype
```
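Because the source is a single unit voxel, the convolution simply re-centers the normalized PSF on that voxel. A 1-D pure-Python sketch of the same identity, with a made-up 3-tap kernel standing in for the PSF:

```python
def conv1d_same(x, k):
    """'same'-size 1-D convolution with zero padding (odd-length kernel assumed)."""
    n, m = len(x), len(k)
    half = m // 2
    out = []
    for j in range(n):
        s = 0.0
        for t in range(m):
            i = j + t - half
            if 0 <= i < n:
                s += x[i] * k[m - 1 - t]  # flipped kernel = convolution
        out.append(s)
    return out

delta = [0.0, 0.0, 1.0, 0.0, 0.0]
psf = [0.25, 0.5, 0.25]  # normalized, symmetric kernel (an assumption for illustration)
blur = conv1d_same(delta, psf)
# blur == [0.0, 0.25, 0.5, 0.25, 0.0]: the PSF re-centered on the point source
```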
Run deconvolution and record similarity metrics between the current result and the original image at each iteration:
```
from skimage.measure import compare_mse, compare_psnr, compare_ssim
scores = {}
def observer_fn(img_restore, i, *args):
scores[i] = {
'mse': compare_mse(img, img_restore),
'ssim': compare_ssim(img, img_restore),
'psnr': compare_psnr(img, img_restore)
}
algo = fd_restoration.RichardsonLucyDeconvolver(3, observer_fn=observer_fn).initialize()
res = algo.run(fd_data.Acquisition(blur, psf), niter=500).data
```
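The Richardson-Lucy update itself is short: re-blur the current estimate, divide the observation by the re-blur, correlate the ratio with the PSF, and multiply. A 1-D pure-Python sketch of the same iteration on a noiseless point source (flowdec's actual implementation runs this with FFTs in TensorFlow; the 3-tap kernel below is an assumption for illustration):

```python
def conv1d_same(x, k):
    # 'same'-size convolution with zero padding; k is symmetric here,
    # so convolution and correlation coincide
    n, half = len(x), len(k) // 2
    return [sum(x[j + t - half] * k[len(k) - 1 - t]
                for t in range(len(k)) if 0 <= j + t - half < n)
            for j in range(n)]

def richardson_lucy(blur, psf, niter=200, eps=1e-12):
    est = list(blur)  # common initialization: start from the observation
    for _ in range(niter):
        reblur = conv1d_same(est, psf)
        ratio = [b / r if r > eps else 0.0 for b, r in zip(blur, reblur)]
        corr = conv1d_same(ratio, psf)  # correlation step (psf symmetric)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
blur = [0.0, 0.25, 0.5, 0.25, 0.0]  # point source blurred by psf
est = richardson_lucy(blur, psf)
```

With noiseless data the estimate re-concentrates mass onto the center sample while total flux stays at 1, which is the behavior the convergence plots below show in 3-D.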
Plot the scores vs iteration number, noting that most converge at or before 100 iterations:
```
pd.DataFrame(scores).T.plot(subplots=True, figsize=(18, 8))
pd.DataFrame(scores).T.tail()
```
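Two of the metrics tracked by the observer have compact definitions. Pure-Python sketches of MSE and PSNR on flattened signals (`data_range` assumed 1.0 for float images, matching skimage's convention; SSIM is omitted because it needs windowed statistics):

```python
import math

def mse(a, b):
    # mean squared error between two equal-length flat signals
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, data_range=1.0):
    # peak signal-to-noise ratio in dB
    return 10 * math.log10(data_range ** 2 / mse(a, b))

x = [0.0, 0.5, 1.0]
y = [0.0, 0.4, 1.0]
# mse = 0.01/3; psnr = 10*log10(300), about 24.77 dB
```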
Show all of the images used as max-z projections with annotated pixel values:
```
fig, axs = plt.subplots(2, 2)
fig.set_size_inches(28, 12)
sns.heatmap(img.max(0), cmap='Spectral', annot=True, ax=axs[0, 0])
axs[0, 0].set_title('Original')
sns.heatmap(psf.max(0), cmap='Spectral', annot=True, ax=axs[0, 1])
axs[0, 1].set_title('PSF')
sns.heatmap(blur.max(0), cmap='Spectral', annot=True, ax=axs[1, 0])
axs[1, 0].set_title('Blurred')
sns.heatmap(res.max(0), cmap='Spectral', annot=True, ax=axs[1, 1])
axs[1, 1].set_title('Deconvolved')
None
```
| github_jupyter |
## Lesson 01 - Understanding Time Series
### Part 1 - Data Collection and First Analyses
- Data source: [Government of the State of São Paulo](https://www.seade.gov.br/coronavirus/)
```
src = "../../data/modulo_03/dados_covid_sp.zip"
import pandas as pd
dados = pd.read_csv(src, sep=";")
dados.head()
dados["datahora"] = pd.to_datetime(dados["datahora"], format="%Y-%m-%d")
import matplotlib as mpl
mpl.rcParams["font.size"] = 12
mpl.rcParams["figure.figsize"] = (15,8)
import seaborn as sns
sns.lineplot(x="datahora", y="casos", data=dados)
```
#### Formula
$$ e^x $$
```
import numpy as np
eixo_x = np.linspace(-2,2,100)
y_exp = np.exp(eixo_x)
sns.lineplot(x=eixo_x, y=y_exp)
```
#### Formula
$$ log_2 2 = 1 $$
$$ log_2 2^x = x $$
$$ log_e e^x = x $$
```
sns.lineplot(x=eixo_x, y=y_exp)
mpl.pyplot.yscale("log")
dados_sp = dados.query('nome_munic == "São Paulo"')
sns.lineplot(x="datahora",y="casos",data=dados_sp)
mpl.pyplot.yscale("log")
```
- <p style="color:red">What is a logarithmic transformation? How is it useful when studying time series?</p>
It reduces the possible effects of bias and outliers.
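The point about the log transformation can be verified numerically: taking logs turns exponential growth into a straight line, so consecutive differences of log(y) are constant. A small stdlib sketch with made-up case counts growing 10% per day:

```python
import math

# hypothetical daily case counts growing 10% per day
casos = [100 * 1.1 ** d for d in range(10)]

log_casos = [math.log(c) for c in casos]
diffs = [b - a for a, b in zip(log_casos, log_casos[1:])]
# every difference equals log(1.1): the log of an exponential is linear in time
```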
### CHALLENGE 03.01.01: Filter the dataset using some function other than query.
**Documentation consulted:**
- [Pandas Set Index](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html#pandas.DataFrame.set_index)
- [Pandas Slice data](https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html)
```
dados=dados.set_index("nome_munic")
dados.head()
dados.loc["São Paulo"]
sns.lineplot(x="datahora", y="casos", data=dados.loc["São Paulo"])
sns.lineplot(x="datahora",y="casos",data=dados.loc["São Paulo"])
mpl.pyplot.yscale("log")
```
### CHALLENGE 03.01.02: Run the same analysis for another municipality, preferably in your own state.
```
dados.loc["Botucatu"]
sns.lineplot(x="datahora", y="casos", data=dados.loc["Botucatu"])
sns.lineplot(x="datahora",y="casos",data=dados.loc["Botucatu"])
mpl.pyplot.yscale("log")
```
## Lesson 01 - Understanding Time Series
### Part 2 - Growth Rate and Moving Average
```
sns.lineplot(x='datahora', y='casos_novos', data=dados_sp)
dados_exemplo = pd.DataFrame(data=np.linspace(1,10,10))
dados_exemplo.head()
dados_exemplo['diferenciado'] = dados_exemplo[0].diff()
sns.lineplot(x=0, y=0, data=dados_exemplo)
sns.lineplot(x=0, y='diferenciado', data=dados_exemplo)
dados_sp['taxa_de_crescimento_casos'] = dados_sp['casos_novos'].diff()
dados_sp['taxa_de_crescimento_obitos'] = dados_sp['obitos_novos'].diff()
sns.lineplot(x='datahora', y='taxa_de_crescimento_casos', data=dados_sp)
sns.lineplot(x='datahora', y='taxa_de_crescimento_obitos', data=dados_sp)
sns.lineplot(x='datahora', y='taxa_de_crescimento_obitos', data=dados_sp)
import matplotlib.pyplot as plt
plt.bar(dados_sp['datahora'],dados_sp['taxa_de_crescimento_casos'])
dados_sp['ano'] = pd.DatetimeIndex(dados_sp['datahora']).year
dados_2021 = dados_sp.query('ano == 2021')
plt.bar(dados_2021['datahora'], dados_2021['taxa_de_crescimento_casos'])
dados_sp['media_movel_casos'] = dados_sp['casos_novos'].rolling(window=7,center=False).mean()
dados_sp['media_movel_obitos'] = dados_sp['obitos_novos'].rolling(window=7,center=False).mean()
sns.lineplot(x='datahora',y='casos_novos', data=dados_sp)
sns.lineplot(x='datahora', y='media_movel_casos', data=dados_sp)
```
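`rolling(window=7, center=False).mean()` is a sliding average: each value is the mean of the current day and the six preceding days, and the first six outputs are undefined (NaN in pandas). A stdlib sketch of the same computation:

```python
def moving_average(values, window=7):
    # None where fewer than `window` samples are available, mirroring pandas NaN
    return [None if i + 1 < window
            else sum(values[i + 1 - window:i + 1]) / window
            for i in range(len(values))]

casos_novos = list(range(1, 15))          # toy daily counts 1..14
media_movel = moving_average(casos_novos)
# first defined value: mean(1..7) = 4.0; last: mean(8..14) = 11.0
```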
### CHALLENGE 03.01.03: Display the notebook without warnings
```
import warnings
warnings.filterwarnings('ignore')
```
### CHALLENGE 03.01.04: Check whether the peak in the new-case rate coincides with the new-death rate.
**Documentation consulted:**
- [Seaborn Axes Style](https://seaborn.pydata.org/generated/seaborn.axes_style.html#seaborn.axes_style)
- [Seaborn Set Style](https://seaborn.pydata.org/generated/seaborn.set_style.html#seaborn.set_style)
```
sns.lineplot(x='datahora', y='taxa_de_crescimento_casos', data=dados_sp)
sns.lineplot(x='datahora', y='taxa_de_crescimento_obitos', data=dados_sp)
sns.set_style("darkgrid")
```
### CHALLENGE 03.01.05: Improve the presentation of all the charts.
**Documentation consulted:**
- [Matplotlib Documentation](https://matplotlib.org/stable/tutorials/introductory/usage.html)
```
sns.lineplot(x='datahora', y='casos_novos', data=dados_sp, label='new cases')
sns.lineplot(x='datahora', y='media_movel_casos', data=dados_sp, label='7-day moving average')
sns.set_style("darkgrid")
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("New Cases and Moving Average of Cases in the Municipality of São Paulo")
plt.legend()
plt.show()
```
### CHALLENGE 03.01.06: Plot the moving average of the number of deaths. Change the case charts from line to bar.
```
sns.lineplot(x='datahora', y='casos_novos', data=dados_sp)
sns.lineplot(x='datahora', y='media_movel_obitos', data=dados_sp)
sns.set_style("darkgrid")
plt.xlabel("Date")
plt.ylabel("Count")
plt.title("New Cases and Moving Average of Deaths in the Municipality of São Paulo")
#plt.legend()
plt.show()
dados_sp.head()
plt.bar(dados_sp['datahora'],dados_sp['casos_novos'])
```
## Lesson 01 - Understanding Time Series
### Part 3 - Correlation and the Autocorrelation Function
```
sns.lineplot(x='casos_novos',y='casos_novos',data=dados_sp)
sns.lineplot(x='casos_novos',y='obitos_novos',data=dados_sp)
dados_sp['mes'] = pd.DatetimeIndex(dados_sp['datahora']).month
dados_202101 = dados_sp.query('mes == 1 & ano == 2021')
dados_202102 = dados_sp.query('mes == 2 & ano == 2021')
plt.bar(np.linspace(1,31,31),dados_202101['casos_novos'])
plt.show()
plt.bar(np.linspace(1,28,28),dados_202102['casos_novos'])
```
**Autocorrelation function for identifying seasonality:**
- It identifies whether there is temporal dependence;
- Random values have no temporal dependence, as the second chart below shows.
```
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(dados_sp['casos_novos'])
dados_sp.shape
aleatorio = np.random.rand(443)
autocorrelation_plot(aleatorio)
```
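`autocorrelation_plot` evaluates, for every lag k, the normalized covariance between the series and itself shifted by k. A stdlib sketch of a single lag, checked on an alternating series whose lag-1 autocorrelation is strongly negative:

```python
def autocorr(x, k):
    # sample autocorrelation at lag k, normalized by the total variance
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

x = [1, -1] * 5          # perfectly alternating series, mean 0
r1 = autocorr(x, 1)      # equals -(n-1)/n = -0.9 for n = 10
```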
### CHALLENGE 03.01.07: Shift the number of deaths by a few days to check the relationship between new cases and deaths. Test different time windows (3 days, 7 days, 14 days)
```
dados_sp_90dias = dados_sp['obitos_novos'].shift(-90)  # deaths moved 90 days earlier, pairing cases at t with deaths at t+90
dados_sp_90dias
sns.lineplot(x='casos_novos', y='obitos_novos', data=dados_sp)
#sns.lineplot(x=dados_sp['casos_novos'], y=dados_sp['obitos_novos'].shift(-7))
sns.lineplot(x=dados_sp['casos_novos'], y=dados_sp_90dias)
```
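Challenge 03.01.07 boils down to finding the shift that best aligns the two series. A stdlib sketch that scans candidate lags and keeps the one with the highest Pearson correlation, using synthetic data in which deaths are exactly the cases delayed by 7 days (all numbers made up):

```python
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_lag(cases, deaths, max_lag=14):
    # correlate cases[t] with deaths[t + lag] and keep the best lag
    scores = {lag: pearson(cases[:len(cases) - lag], deaths[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

cases = [(d % 30) ** 2 for d in range(120)]          # arbitrary non-constant series
deaths = [0.0] * 7 + [c / 50 for c in cases[:113]]   # cases delayed 7 days, scaled down
lag = best_lag(cases, deaths)                        # expected: 7
```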
### CHALLENGE 03.01.08: Compute the cross-correlation function of deaths and cases (try the statsmodels library)
**Documentation consulted:**
- [Statsmodels Instalation](https://www.statsmodels.org/stable/install.html)
- [Statsmodels Calculate the autocorrelation function](https://www.statsmodels.org/stable/generated/statsmodels.tsa.stattools.acf.html)
- [Statsmodels Autocorrelation - Plot autocorrelation function](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_acf.html)
- [Statsmodels Autocorrelation - Plot partial autocorrelation function](https://www.statsmodels.org/stable/generated/statsmodels.graphics.tsaplots.plot_pacf.html)
- [Two variables Autocorrelation Statsmodels Article](https://machinelearningmastery.com/gentle-introduction-autocorrelation-partial-autocorrelation/)
```
import statsmodels.api as sm
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags = 40)
plt.show()
serie_casos_novos_sp = dados_sp['casos_novos']
serie_casos_novos_sp.plot()
plt.show()
```
### CHALLENGE 03.01.07: Shift the number of deaths by a few days to check the relationship between new cases and deaths. Test different time windows (3 days, 7 days, 14 days)
### CHALLENGE 03.01.08: Compute the cross-correlation function of deaths and cases (try the statsmodels library)
### CHALLENGE 03.01.09: Study all these patterns for another municipality and compare them with the municipality of São Paulo.
### What did we learn?
| github_jupyter |
```
import os
import time
import math
import bisect
from functools import reduce
import numpy as np
from numpy import array
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from scipy.stats import t, ttest_ind
from collections import Counter
import warnings
from datetime import date
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from collections import Counter, OrderedDict
warnings.simplefilter('ignore')
%matplotlib inline
def epoch(ts):
"""
convert a date string (dd-mm-YYYY) into epoch time
"""
pattern = '%d-%m-%Y'
epochtime = int(time.mktime(time.strptime(ts, pattern)))
return epochtime
#def time_log(line):
# start = line.rfind("(")+1
# middle = line.rfind(",")
# end = line.rfind(")")
# t1 = epoch(line[start:middle])
# t2 = epoch(line[middle+1:end])
# tr = [t1,t2]
# return tr
def time_diff(ts, te):
"""
compute time difference between two time nodes
input
-----------------
ts, te: date strings ('yyyy-mm-dd')
return
-----------------
time_diff: absolute difference in days
"""
t1_y = int(ts.split("-")[0])
t1_m = int(ts.split("-")[1])
t1_d = int(ts.split("-")[2])
#yyyy-mm-dd
t2_y = int(te.split("-")[0])
t2_m = int(te.split("-")[1])
t2_d = int(te.split("-")[2])
d1 = date(t1_y, t1_m, t1_d)
d2 = date(t2_y, t2_m, t2_d)
delta = d1 - d2
time_diff = abs(delta.days)
return time_diff
def real_sol():
"""
correct solution for the tasks in this experiment
"""
sol ={}
l1 = ['flood,Colorado,2013-09-09','blizzard,New York,2014-02-11','hurricane,North Carolina,2014-07-01']
l2 = ['tornado,Oklahoma,2013-05-20','blizzard,Massachusetts,2014-02-06','earthquake,California,2014-08-24']
l3 = ['hurricane,Florida,2013-06-09','earthquake,California,2014-03-17','blizzard,New York,2014-11-13',
'flood,Florida,2013-06-09']
sol['pos1,pos7,pos6'] = l1
sol['pos4,pos9,pos5'] = l2
sol['pos2,pos3,pos8'] = l3
return sol
def total_log_initial():
"""
initializing a dict to save all log results
output
-----------------
total_dict: empty dict
"""
fieldlist = ['query', 'slider', 'ZoomLevel', 'MouseDrag','checking_tweet',
'checking_filter', 'time_elapsed', 'actions']
ylist = ['nd_precision', 'nd_recall', 'location_precision', 'location_recall','nd_time_deviation']
algolist = ['baseline','kmeans','filters']
datalist = ['pos1,pos7,pos6','pos4,pos9,pos5','pos2,pos3,pos8']
keylist = []
for algo in algolist:
for data in datalist:
keylist.append((algo, data))
total_dict = {}
for field in fieldlist:
total_dict[field] = {}
total_dict[field]['x'] = {}
for algo in algolist:
for key in keylist:
total_dict[field]['x'][key] = []
for y in ylist:
total_dict[field][y] = {}
for key in keylist:
total_dict[field][y][key] = []
return total_dict
def total_log_writing(total_dict, log_dict):
"""
writing each log's dict into total dict
input
-----------------
total_dict: current total dictionary result
log_dict: current log dictionary result
output
-----------------
total_dict: new total dictionary result
"""
fieldlist = ['query', 'slider', 'ZoomLevel', 'MouseDrag', 'checking_tweet',
'checking_filter','time_elapsed', 'actions']
ylist = ['nd_precision', 'nd_recall', 'location_precision', 'location_recall','nd_time_deviation']
algo = log_dict['algo']
data = log_dict['data']
key = (algo, data)
for f in fieldlist:
unnorm_x = log_dict[f]['x']
if(len(unnorm_x)!=0):
total_dict[f]['x'][key].append(unnorm_x)
for y in ylist:
total_dict[f][y][key].append(log_dict[f][y])
return total_dict
def each_log(logDir, logFile, real_sol, action):
"""
log file parsing
input
-----------------
logDir: log directory; logFile: name of logfile; real_sol: correct solution
action: True for individual actions (this appends the ending metric value for each individual action)
False for the global fields time_elapsed and actions
output
-----------------
log_dict: new log_dict with current effort
"""
fieldlist = ['query', 'slider', 'ZoomLevel', 'MouseDrag', 'checking_tweet',
'checking_filter', 'time_elapsed', 'actions']
ylist = ['nd_precision', 'nd_recall', 'location_precision', 'location_recall','nd_time_deviation']
log_dict = {} # save log infor
nds = []
locations = []
ndtimes = []
for field in fieldlist:
log_dict[field] = {}
log_dict[field]['x'] = []
for y in ylist:
log_dict[field][y] = []
log = logDir + logFile
with open(log, "r") as f:
i = 0
ts = 0
tsq = 0
tq = 0
algo = 'good'
data = 'good'
for line in f:
i += 1
line_split = line.split('\t')
if('START' in line_split[1]):
algo = line_split[2]
data = line_split[-1][:-1]
log_dict['algo']= algo
log_dict['data']= data
sol = real_sol[data]
ts = float(line_split[0])/1000
elif('Query_Execution' in line_split[1]):
teq = float(line_split[0])/1000
tq += (teq - tsq)
elif('query' in line_split[1]):
xkey = 'query'
tsq = float(line_split[0])/1000
time_com = tsq - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif('slider' in line_split[1]):
xkey = 'slider'
time_com = float(line_split[0])/1000 - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif('ZoomLevel' in line_split[1]):
xkey = 'ZoomLevel'
time_com = float(line_split[0])/1000 - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif('MouseDrag' in line_split[1]):
xkey = 'MouseDrag'
time_com = float(line_split[0])/1000 - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif('clicking_tweet' in line_split[1]):
xkey = 'checking_tweet'
time_com = float(line_split[0])/1000 - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif(('clicking_bbox' in line_split[1]) or ('adding_filter' in line_split[1])
or ('removing_filter' in line_split[1])):
xkey = 'checking_filter'
time_com = float(line_split[0])/1000 - ts - tq
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
elif('final_answer' in line_split[1]):
nds.append(line_split[3])
ndtimes.append(line_split[5])
locations.append(line_split[7][:-1])
time_com = float(line_split[0])/1000 - ts - tq
if(action):
# add the ending metric value for each individual action
for xkey in fieldlist[:-2]:
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
else:
log_dict = each_effort(log_dict, algo, data, xkey, time_com, nds, locations, ndtimes, sol)
f.close()
return log_dict
def each_effort(log_dict, algo, data, xkey, time, nd_list, location_list, nd_time, sol):
"""
write each effort's result into log_dict
1st level of keys in log_dict: actions, time_elapsed, query and other actions
2nd level of keys in log_dict: 'x', nd_precision and other metric
The value of log_dict is a nested list for different users' result
input
-----------------
log_dict: previous log_dict; algo: string of algorithm ('kmeans'); data: string of data ('pos1,pos7,pos6');
xkey: string of 1st level key in log_dict; time: behavior epoch time in log file
nd_list, location_list, nd_time: current user's answer(list); sol: correct solution
output
-----------------
log_dict: new log_dict with current effort
"""
# log effort part: the list with 'x' is used for x-value in plots
if(len(log_dict['actions']['x'])== 0):
log_dict['actions']['x'].append(1)
else:
log_dict['actions']['x'].append(log_dict['actions']['x'][-1]+1)
log_dict['time_elapsed']['x'].append(time)
if(len(log_dict[xkey]['x'])!=0):
log_dict[xkey]['x'].append(log_dict[xkey]['x'][-1]+1)
else:
log_dict[xkey]['x'].append(1)
# log results part
ylist = ['nd_precision', 'location_precision', 'nd_recall', 'location_recall','nd_time_deviation']
nd_sol = [x.split(',')[0] for x in sol]
location_sol = [x.split(',')[1] for x in sol]
ndtime_sol = [x.split(',')[2] for x in sol]
# log result part
nd_p = 0.000
nd_r = 0.000
location_p = 0.000
location_r = 0.000
time_err = 2190.000
#time_err = 0.000
if(len(nd_list)!=0):
nd_corr = 0
location_corr = 0
time_err_current = 0.000
for i in range(len(nd_list)):
if(nd_list[i] in nd_sol):
nd_corr += 1
if(location_list[i] in location_sol):
location_corr += 1
if((nd_list[i] in nd_sol) and (location_list[i] in location_sol)):
k = nd_sol.index(nd_list[i])
time_err_current += time_diff(nd_time[i], ndtime_sol[k]) - 730
#if((nd_list[i] in nd_sol) and (location_list[i] in location_sol)):
# k = nd_sol.index(nd_list[i])
# time_err_current += time_diff(nd_time[i], ndtime_sol[k])
#else:
# time_err_current += 730
nd_p = float(nd_corr)/len(nd_list)
nd_r = float(nd_corr)/3.0
location_p = float(location_corr)/len(location_list)
location_r = float(location_corr)/3.0
time_err += time_err_current
log_dict[xkey]['nd_precision'].append(nd_p)
log_dict['actions']['nd_precision'].append(nd_p)
log_dict['time_elapsed']['nd_precision'].append(nd_p)
log_dict[xkey]['nd_recall'].append(nd_r)
log_dict['actions']['nd_recall'].append(nd_r)
log_dict['time_elapsed']['nd_recall'].append(nd_r)
log_dict[xkey]['location_precision'].append(location_p)
log_dict['actions']['location_precision'].append(location_p)
log_dict['time_elapsed']['location_precision'].append(location_p)
log_dict[xkey]['location_recall'].append(location_r)
log_dict['actions']['location_recall'].append(location_r)
log_dict['time_elapsed']['location_recall'].append(location_r)
log_dict[xkey]['nd_time_deviation'].append(time_err)
log_dict['actions']['nd_time_deviation'].append(time_err)
log_dict['time_elapsed']['nd_time_deviation'].append(time_err)
return log_dict
def interp(X, Y, long_X):
"""
interpolate or extrapolate for miss X in long_X
output
-----------------
y_eval: list of Y with the same size of long_X
"""
xs = set(X)
xp = list(long_X - xs) #Note long_X is Set
y_eval = [0.0]*len(long_X)
if(len(Y)!= 0):
yp = np.interp(xp, X, Y)
y_eval = [yv for _,yv in sorted(list(zip(X, Y)) + list(zip(xp, yp)))]
return y_eval
def polation(total_dict):
"""
interpolate or extrapolate for total_dict
output
-----------------
pol_dict: a brand_new dict with interpolation or extrapolation treatment
"""
pol_dict = total_log_initial()
fieldlist = ['time_elapsed', 'actions', 'query', 'slider', 'ZoomLevel', 'MouseDrag',
'checking_tweet','checking_filter']
ylist = ['nd_precision', 'nd_recall', 'location_precision', 'location_recall','nd_time_deviation']
algolist = ['baseline','kmeans','filters']
datalist = ['pos1,pos7,pos6','pos4,pos9,pos5','pos2,pos3,pos8']
for y in ylist:
for f in fieldlist:
for algo in algolist:
keylist = []
for data in datalist:
keylist.append((algo, data))
unflatten_x = []
unflatten_y = []
for key in keylist:
lx = total_dict[f]['x'][key]
ly = total_dict[f][y][key]
unflatten_x += lx
unflatten_y += ly
unflatten_x = [x_ for x_ in unflatten_x if x_]
unflatten_y = [y_ for y_ in unflatten_y if y_] # remove empty nested lists (Python 3's filter() returns an iterator)
if (len(unflatten_x)!=0):
x_plot_set = set(reduce(lambda x1,x2: x1+x2, unflatten_x))
x_plot = list(x_plot_set)
x_plot.sort()
pol_dict[f]['x'][key] = x_plot
y_total = []
for j in range(len(unflatten_x)):
y_total_current = interp(unflatten_x[j], unflatten_y[j], x_plot_set)
y_total.append(y_total_current)
# if(len(y_total[0])!= len(y_total_current)):
#     print(len(y_total_current))
pol_dict[f][y][key] = y_total
return pol_dict
def plot_algo_y(total_dict, filename, field_plots, user):
"""
plotting value in total_dict into pdf file
input
-----------------
filename: pdf file name; field_plots: "global" for time_elapsed and actions
user: True for individual user plotting
"""
if(field_plots== "global"):
fieldlist = ['time_elapsed']
else:
fieldlist = ['query', 'slider', 'ZoomLevel', 'MouseDrag', 'checking_tweet','checking_filter']
#ylist = ['nd_precision', 'nd_recall', 'location_precision', 'location_recall','nd_time_deviation']
ylist = ['nd_recall','location_recall','nd_time_deviation']
algolist = ['baseline','kmeans','filters']
datalist = ['pos1,pos7,pos6','pos4,pos9,pos5','pos2,pos3,pos8']
color_algo = ['blue','green','red']
k = 1
pp = PdfPages(filename)
for y in ylist:
for f in fieldlist:
fig = plt.figure(k)
ax = plt.axes()
ax.grid(True)
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
for i in range(len(algolist)):
algo = algolist[i]
if not ((algo=='baseline') and (f=='checking_filter')):
keylist = []
for data in datalist:
keylist.append((algo, data))
for key in keylist:
lx = total_dict[f]['x'][key]
ly = total_dict[f][y][key]
if(len(lx)!=0):
ym = np.mean(ly, axis = 0, dtype = float)
ym[0] = 0.00
if (y == "nd_time_deviation"):
ym[0] = 2190.00
if(user):
plt.plot(lx, ym, linestyle='-', color=color_algo[i])
for kk in range(len(ly)):
plt.plot(lx, ly[kk], linestyle=':', color=color_algo[i])
else:
plt.plot(lx, ym, linestyle='-', color=color_algo[i], label=algo)
ys = np.std(ly, axis= 0, dtype=float)
df = len(ly)-1
confidence = 0.95
ts = t.ppf(1-(1 - confidence)/2.0, df)
yi = ts * ys/math.sqrt(df)
yl = ym - yi
yu = ym + yi
# My code to output results
out=[]
out.append(array(lx))
out.append(ym)
out.append(yl)
out.append(yu)
df = pd.DataFrame(out)
df = df.transpose()
df.to_csv(algo+'_'+y+'.txt', header=None, index=None, sep='\t', mode='a')
plt.fill_between(lx, yl, yu, alpha=0.1, edgecolor='',
facecolor=color_algo[i], linewidth=0.0)
plt.title(f + ' VS '+ y)
if (f == "time_elapsed"):
plt.xlabel(f + " (Sec)")
else:
plt.xlabel(f)
if (y == "nd_time_deviation"):
plt.ylabel(y + " (Day)")
ax.set_ylim([0, 2500])
else:
plt.ylabel(y)
ax.set_ylim([0.0,1.2])
plt.legend(ncol = 3)
k+= 1
plt.savefig("plots/"+f+"_"+y+".pdf", format='pdf')
plt.savefig(pp, format='pdf')
plt.show()
plt.close("all")
pp.close()
def answer_level(r):
l = 0
if (r > 0.33) and (r < 0.66):
l = 1
elif (r > 0.66) and (r < 0.9):
l = 2
elif (r > 0.99):
l = 3
return l
def plot_algo_stage(total_dict, filename, tk):
"""
plotting for performance distribution for different algorithms at different stage
"""
xlist = ['time_elapsed']
ylist = ['nd_recall', 'location_recall','nd_time_deviation']
#ylist = ['nd_recall', 'location_recall']
algolist = ['filters','kmeans','baseline']
datalist = ['pos1,pos7,pos6','pos4,pos9,pos5','pos2,pos3,pos8']
highlight = ["mean","median"]
pp = PdfPages(filename)
s = 1.0/tk
up_time = 1200
up_actions = 480
k = 1
t = 0
for x in xlist:
for y in ylist:
data_bar = OrderedDict()
vplot_data = OrderedDict()
for i in np.linspace(s, 1.0, tk):
if(x == 'time_elapsed'):
t = i*up_time
else:
t = i*up_actions
for algo in algolist:
data_bar[algo] = [0, 0, 0, 0]
vplot_data[algo] = []
keylist = []
for data in datalist:
keylist.append((algo, data))
for key in keylist:
lx = total_dict[x]['x'][key]
ly = total_dict[x][y][key]
if(len(lx)!=0):
# directly get y value if target is in xlist
if (t in lx):
ind = lx.index(t)
for l in ly:
yv = l[ind]
vplot_data[algo].append(yv)
data_bar[algo][answer_level(yv)] += 1
# search two nearest elements for interpolation if target is not in xlist
else:
ub = bisect.bisect(lx, t)
# ub == len(lx) if (t >= lx[-1])
if(ub == len(lx)):
for l in ly:
yv = l[ub-1]
vplot_data[algo].append(yv)
data_bar[algo][answer_level(yv)] += 1
else:
xmax = lx[ub]
xmin = lx[ub-1]
for l in ly:
ymax = l[ub]
ymin = l[ub-1]
yv = np.interp(t, [xmin, xmax], [ymin, ymax])
vplot_data[algo].append(yv)
data_bar[algo][answer_level(yv)] += 1
#if(len(lx)!=0):
# xt = lx[int(i*len(lx))-1]
# for l in ly:
# vplot_data[algo].append(l[int(i*len(lx))-1])
fig = plt.figure(k)
k+= 1
ax = plt.axes()
#ax.grid(True)
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
if (y=='nd_time_deviation'):
labels, data = vplot_data.keys(), vplot_data.values()
plt.boxplot(data, meanline=False)
medians = np.percentile(data, 50, axis=1)
means = np.mean(data, dtype = float, axis=1)
inds = np.arange(1, len(medians) + 1)
ax.scatter(inds, medians, marker='o', color='red', label = "median")
ax.scatter(inds, means, marker='o', color='black', label = "mean")
plt.legend(ncol=2)
plt.xticks(range(1, len(labels) + 1), labels)
plt.ylabel('time_deviation (Day)')
plt.yticks(np.arange(0, 3000, 500))
if(x == 'time_elapsed'):
plt.title('time_deviation VS algorithms at '+str(int(t))+' seconds')
else:
plt.title('time_deviation VS algorithms at '+str(int(t))+' actions')
plt.show()
# My code to output results
df = pd.DataFrame(vplot_data.values())
df = df.transpose()
df.to_csv(y+"_"+x+"_"+str(int(t))+'.txt', header=None, index=None, sep='\t', mode='a')
else:
########
## box plot for recall
########
labels, data = vplot_data.keys(), vplot_data.values()
plt.boxplot(data, meanline=False)
medians = np.percentile(data, 50, axis=1)
means = np.mean(data, dtype = float, axis=1)
inds = np.arange(1, len(medians) + 1)
ax.scatter(inds, medians, marker='o', color='red', label = "median")
ax.scatter(inds, means, marker='o', color='black', label = "mean")
plt.legend(ncol=2)
plt.xticks(range(1, len(labels) + 1), labels)
plt.yticks(np.arange(0, 1.3, 0.1))
yt = y.split("_")[0]
if(x == 'time_elapsed'):
plt.title(yt+' recall VS algorithms at '+str(int(t))+' seconds')
else:
plt.title(yt+' recall VS algorithms at '+str(int(t))+' actions')
plt.show()
#print vplot_data.values()
# My code to output results
df = pd.DataFrame(vplot_data.values())
df = df.transpose()
df.to_csv(y+"_"+x+"_"+str(int(t))+'.txt', header=None, index=None, sep='\t', mode='a')
plt.close("all")
pp.close()
print k
log_dir = "experiment_logs/"
#log_dir = "one_log/"
rs = real_sol()
#pdfname = "plots_actions.pdf"
#total_dict = total_log_initial()
#for filename in os.listdir(log_dir):
# one_dict = each_log(log_dir, filename, rs, True)
# total_dict = total_log_writing(total_dict, one_dict)
#plot_algo(total_dict, pdfname, "actions", False)
total_dict = total_log_initial()
for filename in os.listdir(log_dir):
#print filename
one_dict = each_log(log_dir, filename, rs, False)
total_dict = total_log_writing(total_dict, one_dict)
pol_dict = polation(total_dict)
pdfname = "plots_global.pdf"
#plot_algo_y(pol_dict, pdfname, "global", False)
tk = 4
plot_algo_stage(pol_dict, pdfname, tk)
```
| github_jupyter |
```
import os
import sys
import geopandas as gpd
import pandas as pd
import numpy as np
import scipy
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cf
from IPython.display import Markdown as md
from sklearn.preprocessing import PolynomialFeatures
from shapely import geometry
import cloudpickle
from functools import partial
from copy import deepcopy
from tpc import tpc_general
import seaborn as sns
import statsmodels.api as sm
from sklearn.decomposition import PCA
import warnings
sns.set(style='ticks', font_scale=1.3)
FIG_OPTIONS = {
'figsize' : (3, 1),
'dpi': 200
}
%matplotlib inline
```
# Developing latitudinal models of $T_\mathrm{opt}, CT_\textrm{min}$ and $CT_\textrm{max}$
```
plankton_o = pd.read_csv("../../data/Phytoplankton_temperature_growth_rate_dataset_2016_01_29/traits_derived_2016_01_29.csv", engine='python')
```
**Filter** the data to marine habitat only (the fit-quality filter is left commented out for now):
```
# plankton = plankton_o[(plankton_o.minqual == "good") &
# (plankton_o.maxqual == "good") &
# (plankton_o.curvequal == "good")]
plankton = plankton_o[plankton_o.habitat == 'marine']
```
**Drop NAs**
```
plankton = plankton.dropna(
axis=0,
subset=[
'isolation.latitude',
'isolation.longitude',
'mu.g.opt.list',
'tmin',
'tmax'
]
)
```
Relevant parameter names/descriptions (from `../../data/Phytoplankton_temperature_growth_rate_dataset_2016_01_29/Dataset explanation.doc`)
```
23) mu.wlist = estimated thermal niche width (parameter ‘ɷ’ in the thermal reaction norm model)
24) mu.alist = estimate of parameter ‘a’ in the thermal reaction norm model
25) mu.blist = estimate of parameter ‘b’ in the thermal reaction norm model
26) mu.slist = variance parameter for the maximum likelihood model fit.
27) mu.c.opt.list = estimate of parameter ‘z’ in the thermal reaction norm model
28) mu.g.opt.val.list = estimated specific growth rate (per day) when temperature is at ‘z’ (i.e. mu.c.opt.list)
29) mu.g.opt.list = estimated optimum temperature for growth
30) mu.g.opt.val.list = estimated maximum specific growth rate (per day) based on the thermal reaction norm model fit
31) mu.n = number of points (i.e. number of growth rate measurements) in the curve
32) emp.max.growth = maximum specific growth rate (per day) measured during the growth assays. These were used for maximum growth rate analyses, but results did not differ significantly if estimated maximum specific growth rate based on the thermal reaction norm model fit (i.e. mu.g.opt.val.list) was used instead.
33) tmin = Tmin, or minimum persistence temperature, estimated from the thermal reaction norm model fit
34) tmax = Tmax, or maximum persistence temperature, estimated from the thermal reaction norm model fit
35) minqual = quality of Tmin estimate (quality control criteria found in supplementary info).
36) maxqual = quality of Tmax estimate (quality control criteria found in supplementary info).
37) curvequal = quality of niche width estimate (quality control criteria found in supplementary info).
38) abs.curveskew = Estimated absolute skewness of the thermal reaction norm
39) rel.curveskew = Estimated relative skewness of the thermal reaction norm
```
## $T_\mathrm{opt}$
```
topt_colname = 'mu.g.opt.list' ## 29) mu.g.opt.list = estimated optimum temperature for growth
sns.lmplot(
x = 'isolation.latitude',
y = topt_colname,
size = 6,
order = 2,
data = plankton
)
plt.title("$T_\mathrm{opt}$ by Latitude ($N = " + str(len(plankton)) + ")$")
plt.xlabel("Degrees Latitude")
plt.ylabel("T [deg C]")
```
### `statsmodels` fit
```
lat_column = 'isolation.latitude'
topt = plankton[topt_colname]
order2 = PolynomialFeatures(2).fit_transform(plankton[lat_column].to_numpy().reshape(-1,1))
topt_model = sm.OLS(topt, order2).fit()
topt_model.summary(title="Regular Latitude (Second order fit) Results")
```
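The design matrix built by `PolynomialFeatures(2)` above is just the columns $[1, x, x^2]$. A numpy-only sketch of the same construction, on made-up latitudes rather than the plankton data:

```python
import numpy as np

lats = np.array([0.0, 10.0, 20.0])
# same columns as sklearn's PolynomialFeatures(degree=2): intercept, lat, lat**2
X = np.column_stack([np.ones_like(lats), lats, lats ** 2])
print(X[2])  # intercept, lat, lat**2 -> 1, 20, 400
```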
## $T_\mathrm{min}$
```
sns.lmplot(
x = 'isolation.latitude',
y = 'tmin',
size = 6,
order = 2,
data = plankton
)
plt.title("$T_\mathrm{min}$ by Latitude ($N = " + str(len(plankton)) + ")$")
plt.xlabel("Degrees Latitude")
plt.ylabel("T [deg C]")
```
**Something looks off about that; a quality filter may be needed here**.
```
plankton_filtered = plankton[
plankton.curvequal == 'good'
]
sns.lmplot(
x = 'isolation.latitude',
y = 'tmin',
size = 6,
order = 2,
data = plankton_filtered
)
plt.title("$T_\mathrm{min}$ by Latitude ($N = " + str(len(plankton_filtered)) + ")$")
plt.xlabel("Degrees Latitude")
plt.ylabel("T [deg C]")
```
**Perhaps more reasonable**.
### compare `statsmodels` fits
**1) No Data Filter**
```
lat_column = 'isolation.latitude'
tmin = plankton['tmin']
order2 = PolynomialFeatures(2).fit_transform(plankton[lat_column].to_numpy().reshape(-1,1))
tmin_model_nofilter = sm.OLS(tmin, order2).fit()
tmin_model_nofilter.summary(title="Regular Latitude (Second order fit) Results")
```
**2) With filtered data**
```
lat_column = 'isolation.latitude'
tmin = plankton_filtered['tmin']
order2 = PolynomialFeatures(2).fit_transform(plankton_filtered[lat_column].to_numpy().reshape(-1,1))
tmin_model_filter = sm.OLS(tmin, order2).fit()
tmin_model_filter.summary(title="Regular Latitude (Second order fit) Results")
```
**Compare Fits:**
```
pd.merge(
tmin_model_filter.params.rename("Filtered"),
tmin_model_nofilter.params.rename("Not Filtered"),
left_index=True, right_index=True
)
```
**Compare $R^2$**
```
print(
f"Filtered R2: {tmin_model_filter.rsquared:.3f}",
f"Non-Filtered R2: {tmin_model_nofilter.rsquared:.3f}"
)
```
## $T_\mathrm{max}$
```
sns.lmplot(
x = 'isolation.latitude',
y = 'tmax',
size = 6,
order = 2,
data = plankton
)
plt.title("$T_\mathrm{max}$ by Latitude ($N = " + str(len(plankton)) + ")$")
plt.xlabel("Degrees Latitude")
plt.ylabel("T [deg C]")
sns.lmplot(
x = 'isolation.latitude',
y = 'tmax',
size = 6,
order = 2,
data = plankton_filtered
)
plt.title("$T_\mathrm{max}$ by Latitude ($N = " + str(len(plankton_filtered)) + ")$")
plt.xlabel("Degrees Latitude")
plt.ylabel("T [deg C]")
```
### Compare `statsmodels` fits
**1) No Data Filter**
```
lat_column = 'isolation.latitude'
tmax = plankton['tmax']
order2 = PolynomialFeatures(2).fit_transform(plankton[lat_column].to_numpy().reshape(-1,1))
tmax_model_nofilter = sm.OLS(tmax, order2).fit()
tmax_model_nofilter.summary(title="Regular Latitude (Second order fit) Results")
```
**2) With Data Filter**
```
lat_column = 'isolation.latitude'
tmax = plankton_filtered['tmax']
order2 = PolynomialFeatures(2).fit_transform(plankton_filtered[lat_column].to_numpy().reshape(-1,1))
tmax_model_filter = sm.OLS(tmax, order2).fit()
tmax_model_filter.summary(title="Regular Latitude (Second order fit) Results")
```
**Compare Fits:**
```
pd.merge(
tmax_model_filter.params.rename("Filtered"),
tmax_model_nofilter.params.rename("Not Filtered"),
left_index=True, right_index=True
)
```
Similar.
**Compare $R^2$**:
```
print(
f"Filtered R2: {tmax_model_filter.rsquared:.3f}",
f"Non-Filtered R2: {tmax_model_nofilter.rsquared:.3f}"
)
```
# Develop Generalized TPC Class
```
class GeneralizedTPC(object):
def __init__(self, toptModel, tminModel, tmaxModel):
self.toptModel = deepcopy(toptModel)
self.tminModel = deepcopy(tminModel)
self.tmaxModel = deepcopy(tmaxModel)
def getTPCParameters(self, latitude):
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
latitude = PolynomialFeatures(2).fit_transform(np.array(latitude).reshape(-1, 1))
_topt = self.toptModel.predict(latitude).item()
_tmin = self.tminModel.predict(latitude).item()
_tmax = self.tmaxModel.predict(latitude).item()
return (_topt, _tmin, _tmax)
def getLatitudinalTPC(self, latitude):
_topt, _tmin, _tmax = self.getTPCParameters(latitude)
return(partial(
tpc_general,
Topt = _topt,
CTmin = _tmin,
CTmax = _tmax
))
```
## Instantiate + Save to File
We "ossify" the general class by instantiating it with the models derived in this notebook, above.
```
## "ossify"
gtpc = GeneralizedTPC(topt_model, tmin_model_filter, tmax_model_filter)
```
To save this to a file we use `cloudpickle`, since the standard library's `pickle` serializes classes by reference and cannot round-trip classes defined interactively in a notebook.
```
with open("gtpc_modeled.pkl", 'wb') as f:
cloudpickle.dump(gtpc, f)
```
---
## Approach 2: Decomposition/Multivariate Ordination
Reference: https://github.com/HuckleyLab/ThermalStress/blob/master/ToptCTmax_analysis.R
```
axes = plankton[[topt_colname, 'tmin', 'tmax']]
pca = PCA()
X = pca.fit_transform(axes)
pca.components_.T
axes.columns
weights = pd.DataFrame(
pca.components_.T,
columns=[f'PC{i+1}' for i in range(pca.components_.T.shape[1])],
index=axes.columns)
weights
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
loading_matrix = pd.DataFrame(
loadings,
columns=[f'PC{i+1}' for i in range(pca.components_.T.shape[1])],
index=axes.columns)
loading_matrix
```
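The loading computation above rescales each unit-length principal axis by its component's standard deviation. A numpy-only sketch of that relationship on synthetic data (not the plankton table):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# principal axes and variances from the covariance matrix
eigvals, eigvecs = np.linalg.eigh(np.cov(X - X.mean(axis=0), rowvar=False))

# same form as pca.components_.T * np.sqrt(pca.explained_variance_)
loadings = eigvecs * np.sqrt(eigvals)

# each loading column's squared norm recovers its component's variance
print(np.allclose((loadings ** 2).sum(axis=0), eigvals))  # True
```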
| github_jupyter |
# Read in All Saildrone cruises downloaded from https://data.saildrone.com/data/sets
- 2017 onwards; note that earlier data will lack instruments and be of poorer quality in general
- For this code I want to develop a routine that reads in all the different datasets and creates a standardized set
- It may work best to first read each of the files individually into a dictionary
- then go through each dataset finding all variable names
- I decided to put all SST into TEMP_CTD_MEAN, and the same for salinity, so there is a single variable name
- this still preserves all the dataset information
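The standardization described above boils down to applying a rename map to each dataset's variables. The same idea on a plain dict, independent of xarray (toy values, and only a subset of the rename map used below):

```python
swapvar = {'TEMP_SBE37_MEAN': 'TEMP_CTD_MEAN', 'SAL_SBE37_MEAN': 'SAL_CTD_MEAN'}

def standardize(variables):
    # rename known aliases to the uniform names; keep everything else as-is
    return {swapvar.get(name, name): values for name, values in variables.items()}

print(standardize({'TEMP_SBE37_MEAN': [20.1, 20.3], 'lat': [35.0, 35.1]}))
# -> {'TEMP_CTD_MEAN': [20.1, 20.3], 'lat': [35.0, 35.1]}
```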
```
import xarray as xr
from glob import glob
import matplotlib.pyplot as plt
dir_data = 'f:/data/cruise_data/saildrone/saildrone_data/'
dir_data_pattern = 'f:/data/cruise_data/saildrone/saildrone_data/*.nc'
list_var = ['time','lat','lon','SOG_MEAN','COG_MEAN','HDB_MEAN','ROLL_FILTERED_MEAN','PITCH_FILTERED_MEAN',
'UWND_MEAN','VWND_MEAN','WWND_MEAN','GUST_WND_MEAN','TEMP_AIR_MEAN','RH_MEAN','BARO_PRES_MEAN',
'PAR_AIR_MEAN','TEMP_CTD_MEAN','SAL_CTD_MEAN','TEMP_RBR_MEAN','SAL_RBR_MEAN',
'TEMP_O2_RBR_MEAN']
swapvar = {'TEMP_SBE37_MEAN':'TEMP_CTD_MEAN','SAL_SBE37_MEAN':'SAL_CTD_MEAN','SAL_MEAN':'SAL_CTD_MEAN',
'TEMP_O2_RBR_MEAN':'TEMP_O2_MEAN','TEMP_CTD_RBR_MEAN':'TEMP_RBR_MEAN'}
files = [x for x in glob(dir_data_pattern)]
for ifile,file in enumerate(files):
# if ifile>0:
# continue
ds = xr.open_dataset(file).rename({'latitude':'lat','longitude':'lon'})
if any(v=='trajectory' for v in ds.dims.keys()):
ds = ds.isel(trajectory=0)
ds.close()
for v in ds.dims.keys():
ds = ds.swap_dims({v:'time'})
if ds.trajectory.size==1:
iusv = float(ds.trajectory.data)
else:
iusv = float(ds.trajectory[0].data)
#renames some common variables to uniform name, drop variables not on list above
dssv = ds
for var in ds:
var2 = var
if swapvar.get(var):
ds = ds.rename({var:swapvar.get(var)})
var2 = swapvar.get(var)
if not any(vv==var2 for vv in list_var):
#print('drop',var2)
ds = ds.drop(var2)
#check that there is a TEMP_CTD_MEAN; if not and TEMP_RBR_MEAN is there, rename it to TEMP_CTD_MEAN
if not any(var=='TEMP_CTD_MEAN' for var in ds):
if any(var=='TEMP_RBR_MEAN' for var in ds):
ds = ds.rename({'TEMP_RBR_MEAN':'TEMP_CTD_MEAN'})
if not any(var=='SAL_CTD_MEAN' for var in ds):
if any(var=='SAL_RBR_MEAN' for var in ds):
ds = ds.rename({'SAL_RBR_MEAN':'SAL_CTD_MEAN'})
if ds.attrs.get('project'):
pname = ds.attrs['project']
else:
pname = ds.attrs['id']
name = str(ds.time[0].dt.year.data)+'_'+str(int(iusv))+pname
name = name.replace(" ", "_")
name = name.replace("/", "_")
print(name)
if ifile==0:
data_dict = {name:ds}
else:
data_dict[name]=ds
import cartopy.crs as ccrs
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
for name in data_dict:
ds = data_dict[name]
#ax.plot(ds.lon,ds.lat,'k',transform=ccrs.PlateCarree())
ax.scatter(ds.lon,ds.lat,c=ds.TEMP_CTD_MEAN, s=.5,transform=ccrs.PlateCarree())
```
| github_jupyter |
# Remark<div class='tocSkip'/>
The code in this notebook differs slightly from the printed book. For example we frequently use pretty print (`pp.pprint`) instead of `print` and `tqdm`'s `progress_apply` instead of Pandas' `apply`.
Moreover, several layout and formatting commands, like `figsize` to control figure size or subplot commands are removed in the book.
You may also find some lines marked with three hashes ###. Those are not in the book either, as they don't contribute to the concept.
All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting.
# Setup<div class='tocSkip'/>
## Determine Environment<div class='tocSkip'/>
```
import sys
ON_COLAB = 'google.colab' in sys.modules
if ON_COLAB:
BASE_DIR = "/content"
print("You are working on Google Colab.")
print(f'Files will be downloaded to "{BASE_DIR}".')
# adjust release
GIT_ROOT = "https://github.com/blueprints-for-text-analytics-python/early-release/raw/master"
else:
BASE_DIR = "../"
print("You are working on a local system.")
print(f'Files will be searched relative to "{BASE_DIR}".')
```
## Download files on Google Colab<div class='tocSkip'/>
If you are on Colab, copy the following statements into the code cell below and execute them.
```bash
!wget -P $BASE_DIR $GIT_ROOT/settings.py
!mkdir -p $BASE_DIR/data/un-general-debates
!wget -P $BASE_DIR/data/un-general-debates $GIT_ROOT/data/un-general-debates/un-general-debates-blueprint.csv.gz
```
## Install required libraries<div class='tocSkip'/>
Still to do: set up a pip requirements.txt.
If you are on Colab, copy the following statements into the code cell below and execute them.
```bash
!pip install textacy
```
```
import nltk
# make sure stop words are available
nltk.download('stopwords')
```
## Load Python Settings<div class="tocSkip"/>
Common imports, defaults for formatting in Matplotlib, Pandas etc.
```
%matplotlib inline
%config InlineBackend.figure_format = 'png'
%run "$BASE_DIR/settings.py"
%reload_ext autoreload
%autoreload 2
```
# Gaining Early Insights from Textual Data
## What you will learn and what we will build
# Exploratory Data Analysis
# Introducing the Dataset
```
file = f"{BASE_DIR}/data/un-general-debates/un-general-debates-blueprint.csv.gz"
df = pd.read_csv(file)
df.sample(2, random_state=53)
```
# Blueprint: Getting an Overview of the Data with Pandas
## Calculating Summary Statistics for Columns
```
df['length'] = df['text'].str.len()
df.describe().T
df[['country', 'speaker']].describe(include='O').T
```
## Checking for Missing Data
```
df.isna().sum()
df['speaker'].fillna('unknown', inplace=True)
df[df['speaker'].str.contains('Bush')]['speaker'].value_counts()
```
## Plotting Value Distributions
```
df['length'].plot(kind='box', vert=False, figsize=(8, 1))
df['length'].plot(kind='hist', bins=30, figsize=(8,2))
# Not in book: seaborn plot with gaussian kernel density estimate
import seaborn as sns
plt.figure(figsize=(8, 2))
sns.distplot(df['length'], bins=30, kde=True);
```
## Comparing Value Distributions across Categories
```
where = df['country'].isin(['USA', 'FRA', 'GBR', 'CHN', 'RUS'])
sns.catplot(data=df[where], x="country", y="length", kind='box', ax=axes[0])
sns.catplot(data=df[where], x="country", y="length", kind='violin', ax=axes[1])
```
## Visualizing Developments over Time
```
df.groupby('year').size().plot(title="Number of Countries", figsize=(6,2))
df.groupby('year').agg({'length': 'mean'}) \
.plot(title="Avg. Speech Length", ylim=(0,30000), figsize=(6,2))
df.groupby('year').size().plot(title="Number of Countries", ax=axes[0])
df.groupby('year').agg({'length': 'mean'}).plot(title="Avg. Speech Length", ax=axes[1], ylim=(0,30000))
```
# Blueprint: Building a Simple Text Preprocessing Pipeline
## Tokenization with Regular Expressions
```
import regex as re
def tokenize(text):
return re.findall(r'[\w-]*\p{L}[\w-]*', text)
text = "Let's defeat SARS-CoV-2 together in 2020!"
tokens = tokenize(text)
print("|".join(tokens))
```
## Treating Stop Words
```
import nltk
stopwords = set(nltk.corpus.stopwords.words('english'))
def remove_stop(tokens):
return [t for t in tokens if t.lower() not in stopwords]
include_stopwords = {'dear', 'regards', 'must', 'would', 'also'}
exclude_stopwords = {'against'}
stopwords |= include_stopwords
stopwords -= exclude_stopwords
```
## Processing a Pipeline with one Line of Code
```
pipeline = [str.lower, tokenize, remove_stop]
def prepare(text, pipeline):
tokens = text
for transform in pipeline:
tokens = transform(tokens)
return tokens
df['tokens'] = df['text'].progress_apply(prepare, pipeline=pipeline)
df['no_tokens'] = df['tokens'].progress_map(len)
```
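The pipeline above is plain function composition. A self-contained miniature of it (stdlib `re` and a toy stop-word set stand in for the `regex`-based tokenizer and NLTK's list):

```python
import re

stopwords = {'the', 'in', 'a'}

def tokenize(text):
    # stdlib re lacks \p{L}, so \w serves as a rough stand-in here
    return re.findall(r'[\w-]+', text)

def remove_stop(tokens):
    return [t for t in tokens if t.lower() not in stopwords]

def prepare(text, pipeline):
    tokens = text
    for transform in pipeline:
        tokens = transform(tokens)
    return tokens

print(prepare("The debate in the Assembly", [str.lower, tokenize, remove_stop]))
# -> ['debate', 'assembly']
```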
# Analyzing Word Frequencies
## Blueprint: Counting Words with a Counter
```
from collections import Counter
tokens = tokenize("She likes my cats and my cats like my sofa.")
counter = Counter(tokens)
print(counter)
more_tokens = tokenize("She likes dogs and cats.")
counter.update(more_tokens)
print(counter)
counter = Counter()
_ = df['tokens'].map(counter.update)
pp.pprint(counter.most_common(5))
def count_words(df, column='tokens', preprocess=None, min_freq=2):
# process tokens and update counter
def update(doc):
tokens = doc if preprocess is None else preprocess(doc)
counter.update(tokens)
# create counter and run through all data
counter = Counter()
df[column].progress_map(update)
# transform counter into data frame
freq_df = pd.DataFrame.from_dict(counter, orient='index', columns=['freq'])
freq_df = freq_df.query('freq >= @min_freq')
freq_df.index.name = 'token'
return freq_df.sort_values('freq', ascending=False)
freq_df = count_words(df)
freq_df.head(5)
```
## Blueprint: Creating a Frequency Diagram
```
ax = freq_df.head(15).plot(kind='barh', width=0.95, figsize=(8,3))
ax.invert_yaxis()
ax.set(xlabel='Frequency', ylabel='Token', title='Top Words')
```
## Blueprint: Creating Word Clouds
```
from wordcloud import WordCloud
from matplotlib import pyplot as plt
text = df.query("year==2015 and country=='USA'")['text'].values[0]
wc = WordCloud(max_words=100, stopwords=stopwords)
wc.generate(text)
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
def wordcloud(word_freq, title=None, max_words=200, stopwords=None):
wc = WordCloud(width=800, height=400,
background_color= "black", colormap="Paired",
max_font_size=150, max_words=max_words)
# convert data frame into dict
if type(word_freq) == pd.Series:
counter = Counter(word_freq.fillna(0).to_dict())
else:
counter = word_freq
# filter stop words in frequency counter
if stopwords is not None:
counter = {token:freq for (token, freq) in counter.items()
if token not in stopwords}
wc.generate_from_frequencies(counter)
plt.title(title)
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
freq_2015_df = count_words(df[df['year']==2015])
plt.figure(figsize=(12,4))
wordcloud(freq_2015_df['freq'], max_words=100)
wordcloud(freq_2015_df['freq'], max_words=100, stopwords=freq_df.head(50).index)
```
## Blueprint: Ranking with TF-IDF
```
def idf(df, column='tokens', preprocess=None, min_df=2):
def update(doc):
tokens = doc if preprocess is None else preprocess(doc)
counter.update(set(tokens))
# count tokens
counter = Counter()
df[column].progress_map(update)
# create data frame and compute idf
idf_df = pd.DataFrame.from_dict(counter, orient='index', columns=['df'])
idf_df = idf_df[idf_df['df'] >= min_df]
idf_df['idf'] = np.log(len(df)/idf_df['df'])+0.1
idf_df.index.name = 'token'
return idf_df
idf_df = idf(df)
freq_df = freq_df.join(idf_df)
freq_df['tfidf'] = freq_df['freq'] * freq_df['idf']
freq_1970 = count_words(df[df['year'] == 1970])
freq_2015 = count_words(df[df['year'] == 2015])
freq_1970['tfidf'] = freq_1970['freq'] * idf_df['idf']
freq_2015['tfidf'] = freq_2015['freq'] * idf_df['idf']
#wordcloud(freq_df['freq'], title='All years', subplot=(1,3,1))
wordcloud(freq_1970['freq'], title='1970 - TF',
stopwords=['twenty-fifth', 'twenty-five'])
wordcloud(freq_2015['freq'], title='2015 - TF',
stopwords=['seventieth'])
wordcloud(freq_1970['tfidf'], title='1970 - TF-IDF',
stopwords=['twenty-fifth', 'twenty-five', 'twenty', 'fifth'])
wordcloud(freq_2015['tfidf'], title='2015 - TF-IDF',
stopwords=['seventieth'])
```
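The IDF weighting can be sanity-checked on a toy corpus. This sketch reproduces the logic of the `idf` function above on three tiny documents (illustrative values only):

```python
import numpy as np
import pandas as pd
from collections import Counter

docs = pd.DataFrame({'tokens': [['un', 'peace', 'peace'],
                                ['un', 'war'],
                                ['un', 'climate']]})

# document frequency: in how many documents does each token occur at least once?
counter = Counter()
docs['tokens'].map(lambda tokens: counter.update(set(tokens)))
idf_df = pd.DataFrame.from_dict(counter, orient='index', columns=['df'])
idf_df['idf'] = np.log(len(docs) / idf_df['df']) + 0.1

# 'un' occurs in all three documents, so it gets the minimum weight log(3/3)+0.1
print(round(idf_df.loc['un', 'idf'], 2))  # 0.1
```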
# Blueprint: Finding a Keyword in Context (KWIC)
```
from textacy.text_utils import KWIC
def kwic(doc_series, keyword, window=35, print_samples=None):
def add_kwic(text):
kwic_list.extend(KWIC(text, keyword, ignore_case=True,
window_width=window, print_only=False))
kwic_list = []
doc_series.progress_map(add_kwic)
if print_samples is None or print_samples==0:
return kwic_list
else:
k = min(print_samples, len(kwic_list))
print(f"{k} random samples out of {len(kwic_list)} " + \
f"contexts for '{keyword}':")
for sample in random.sample(list(kwic_list), k):
print(re.sub(r'[\n\t]', ' ', sample[0])+' '+ \
sample[1]+' '+\
re.sub(r'[\n\t]', ' ', sample[2]))
kwic(df[df['year'] == 2015]['text'], 'sdgs', window=35, print_samples=5)
```
# Blueprint: Analyzing N-Grams
```
text = "the visible manifestation of the global climate change"
tokens = tokenize(text)
def ngrams(tokens, n=2, sep=' '):
return [sep.join(ngram) for ngram in zip(*[tokens[i:] for i in range(n)])]
print("|".join(ngrams(tokens, 2)))
def ngrams(tokens, n=2, sep=' ', stopwords=set()):
return [sep.join(ngram) for ngram in zip(*[tokens[i:] for i in range(n)])
if len([t for t in ngram if t in stopwords])==0]
tokens = prepare(text, [str.lower, tokenize]) # keep full list of tokens
print("Bigrams:", "|".join(ngrams(tokens, 2, stopwords=stopwords)))
print("Trigrams:", "|".join(ngrams(tokens, 3, stopwords=stopwords)))
df['bigrams'] = df['text'].progress_apply(prepare, pipeline=[str.lower, tokenize]) \
.progress_apply(ngrams, n=2, stopwords=stopwords)
count_words(df, 'bigrams').head(5)
# concatenate existing IDF data frame with bigram IDFs
idf_df = pd.concat([idf_df, idf(df, 'bigrams', min_df=10)])
freq_df = count_words(df[df['year'] == 2015], 'bigrams')
freq_df['tfidf'] = freq_df['freq'] * idf_df['idf']
wordcloud(freq_df['tfidf'], title='all bigrams', max_words=50)
where = freq_df.index.str.contains('climate')
wordcloud(freq_df[where]['freq'], title='"climate" bigrams', max_words=50)
```
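The `zip(*[tokens[i:] for i in range(n)])` idiom above pairs each token with its successors; spelled out for n = 2:

```python
tokens = ['global', 'climate', 'change']
shifted = [tokens[0:], tokens[1:]]  # the original list and the list offset by one
print(list(zip(*shifted)))
# -> [('global', 'climate'), ('climate', 'change')]
```

`zip` stops at the shortest list, which is exactly what truncates the last incomplete n-gram.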
# Blueprint: Comparing Frequencies across Time-Intervals and Categories
## Creating Frequency Timelines
```
def count_keywords(tokens, keywords):
tokens = [t for t in tokens if t in keywords]
counter = Counter(tokens)
return [counter.get(k, 0) for k in keywords]
keywords = ['nuclear', 'terrorism', 'climate', 'freedom']
tokens = ['nuclear', 'climate', 'climate', 'freedom', 'climate', 'freedom']
print(count_keywords(tokens, keywords))
def count_keywords_by(df, by, column='tokens', keywords=keywords):
freq_matrix = df['tokens'].progress_apply(count_keywords, keywords=keywords)
freq_df = pd.DataFrame.from_records(freq_matrix, columns=keywords)
freq_df[by] = df[by] # copy the grouping column(s)
return freq_df.groupby(by=by).sum().sort_values(by)
freq_df = count_keywords_by(df, by='year', keywords=keywords)
pd.options.display.max_rows = 4
freq_df
pd.options.display.max_rows = 60
freq_df.plot(kind='line', figsize=(8, 3))
# analyzing mentions of 'climate' before 1980
kwic(df.query('year < 1980')['text'], 'climate', window=35, print_samples=5)
```
## Creating Frequency Heat Maps
```
keywords = ['terrorism', 'terrorist', 'nuclear', 'war', 'oil',
'syria', 'syrian', 'refugees', 'migration', 'peacekeeping',
'humanitarian', 'climate', 'change', 'sustainable', 'sdgs']
freq_df = count_keywords_by(df, by='year', keywords=keywords)
# compute relative frequencies based on total number of tokens per year
freq_df = freq_df.div(df.groupby('year')['no_tokens'].sum(), axis=0)
# apply square root as sublinear filter for better contrast
freq_df = freq_df.apply(np.sqrt)
sns.heatmap(data=freq_df.T,
xticklabels=True, yticklabels=True, cbar=False, cmap="Reds")
df.info(memory_usage='deep')
```
# Closing Remarks
| github_jupyter |
# Analysis of French museums' collections (Joconde database)
#### <br> *Download the open data CSV file [here](https://www.data.gouv.fr/fr/datasets/5b435ff2c751df675059dde9/) named joconde-MUSEES-valid.csv*
#### <br> Load the table from the CSV file
##### *Initial fields are named REF|INV|DOMN|DENO|TITR|AUTR|PERI|EPOQ|TECH|DIMS|DECV|STAT|LOCA|COPY, standing for: the id of the record, the number of the object, the domain, the type, the title, the author, the time period, the epoch, the material and technique, the dimensions, the discovery, the juridical status, the place of preservation, and the source and date of the record*
```
# REF|INV|DOMN|DENO|TITR|AUTR|PERI|EPOQ|TECH|DIMS|DECV|STAT|LOCA|COPY
# record ID / object number / domain / denomination / title / author / creation period / epoch / materials-techniques /
# dimensions / discovery / juridical status / place of preservation / record source and date
import pandas as pd
full_df = pd.read_csv('joconde-MUSEES-valid.csv', sep='|', header=0, encoding='utf-8', dtype=str)
full_df['PROPERTY'] = full_df['STAT'].str.split(';').str[0]
full_df['CITY'] = full_df['LOCA'].str.split(';').str[0]
full_df['CITY'].fillna("", inplace=True)
full_df['PLACE'] = full_df['LOCA'].str.split(';').str[-1]
full_df['RECORD_SOURCE'] = full_df['COPY'].apply(lambda x: ", ".join(str(x).split(',')[0:len(str(x).split(','))-1]).strip())
full_df['RECORD_DATE'] = full_df['COPY'].str.split(',').str[-1]
full_df['RECORD_DATE'] = pd.to_numeric(full_df['RECORD_DATE'], errors='coerce').fillna(0)
full_df['RECORD_DATE'] = full_df['RECORD_DATE'].astype('int64')
df = full_df[['REF','DOMN','DENO','TITR','AUTR','PERI','TECH','DIMS','PROPERTY','CITY','PLACE','RECORD_SOURCE','RECORD_DATE']]
df.sample(3)
```
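The `RECORD_SOURCE`/`RECORD_DATE` split above rebuilds the text before the last comma with a join over a slice; `str.rpartition` does the same in one step. A small sketch (the sample string is hypothetical):

```python
# Sketch: split a COPY value of the form "source text, year" at the LAST comma,
# using rpartition instead of join(split()[:-1]).
copy_value = "Musée du Louvre, département des peintures, 2004"  # hypothetical example
source, _, date = copy_value.rpartition(',')
print(source.strip())  # everything before the last comma
print(date.strip())    # the trailing year
```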
#### About the Joconde database
```
print('{} objects listed in the Joconde database'.format(df['REF'].count()))
date_serie = df.query('RECORD_DATE > 1900').groupby('RECORD_DATE')['RECORD_DATE'].count()
date_serie.plot(kind='bar', grid=True, title='Number of records in the Joconde database per year')
```
#### <br> French cities with most works (top 10)
```
df['CITY'].value_counts().nlargest(10).plot(kind='barh', grid=True)
```
##### Focus on Saint-Germain-en-Laye
```
df[df['CITY'].str.contains('Saint-Germain-en-Laye')]['PLACE'].value_counts()
```
#### <br> French museums with most works (top 10)
```
df['PLACE'].value_counts().nlargest(10).plot(kind='barh', grid=True)
```
#### <br> Main authors of the works (top 20)
```
df['AUTR'].value_counts().nlargest(20)
```
#### <br> Property of the works
```
df['PROPERTY'].value_counts().nlargest(4)
```
#### <br> Main types of works
```
df['DOMN'].value_counts().nlargest(8)
df['DENO'].value_counts().nlargest(8)
```
| github_jupyter |
```
import json
import pandas as pd
import numpy as np
with open("/home/ayush/Desktop/img_json/single_keypoints.json") as datafile:
data = json.load(datafile)
df = pd.DataFrame(data)
df.head(5)
df.info()
import json
import pandas as pd
from pandas.io.json import json_normalize
with open('/home/ayush/Desktop/img_json/single_keypoints.json') as f:
d = json.load(f)
# lets put the data into a pandas df
# clicking on raw_nyc_phil.json under "Input Files"
# tells us parent node is 'programs'
people = json_normalize(d['people'])
people.head(3)
people_arr = []
for i in range(len(people['pose_keypoints_2d'][0])):
people_arr.append(people['pose_keypoints_2d'][0][i])
type(people['pose_keypoints_2d'][0])
part_candidates = json_normalize(d['part_candidates'])
type(part_candidates['7'][0])
array = np.array(people['pose_keypoints_2d'][0])
array.shape
```
```
people_arr = np.array(df['part_candidates'][0])
d = df['part_candidates'][0]
array.shape
d.items()
d.keys()
d.values()
type(np.array(d.values()))
def read_json(n):
    json_arr = []
    for i in range(1, n):
        with open('/home/ayush/Desktop/img_json/' + str(i) + '.json') as f:
            d1 = json.load(f)
            json_arr.append(d1)
    return json_arr

# read the json files before indexing into the list
# (read_json(95) yields indices 0..93, matching the loop below)
json_arr = read_json(95)
json_arr[0]
# lets put the data into a pandas df
# clicking on raw_nyc_phil.json under "Input Files"
# tells us parent node is 'programs'
keypt = json_normalize(json_arr[0]['people'])
keypt.head(5)
#get all the keypoints from one json file into a alist
arr = []
for i in range(len(keypt['pose_keypoints_2d'])):
arr.append(keypt['pose_keypoints_2d'][i])
# now getting all the keypoints from all json files
keypt_arr = []
for i in range(1, 94):
    keypt = json_normalize(json_arr[i]['people'])
    for j in range(len(keypt['pose_keypoints_2d'])):
        keypt_arr.append(keypt['pose_keypoints_2d'][j])
len(keypt_arr)
type(keypt_arr)
for i in range(len(keypt_arr)):
    print(np.array(keypt_arr[i]).shape)
#applying the k-means
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
X = keypt_arr[:]  # use all keypoint vectors as features
sc=StandardScaler()
sc.fit(X)
X_std=sc.transform(X)
kmeans = KMeans(n_clusters=2,max_iter=2000)
kmeans.fit(X_std)
print('\nCluster centres:')
print(kmeans.cluster_centers_)
labels = kmeans.labels_
plt.scatter(X_std[:,0],X_std[:,1],
c=labels, cmap=plt.cm.rainbow)
plt.xlabel('Normalised feature 1')
plt.ylabel('Normalised feature 2')
plt.show()
labels = []
for i in range(500):
labels.append(i)
labels[100]
import pandas as pd
from sklearn.manifold import TSNE
import seaborn as sn
from sklearn.preprocessing import StandardScaler
# t-SNE is slow for large datasets; here all keypoint vectors are used
data_1000 = keypt_arr[:]
labels_1000 = labels[:279]
model = TSNE(n_components=2,perplexity = 30, random_state=0, n_iter=550)
# configuring the parameteres
# the number of components = 2
# default perplexity = 30
# default learning rate = 200
# default Maximum number of iterations for the optimization = 1000
sc=StandardScaler()
sc.fit(data_1000)
X_std=sc.transform(data_1000)
tsne_data = model.fit_transform(X_std)
# creating a new data frame which help us in ploting the result data
tsne_data = np.vstack((tsne_data.T, labels_1000)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2","label"))
# Ploting the result of tsne
sn.FacetGrid(tsne_df, hue="label", height=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.show()
```
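The cell above fixes `n_clusters=2` by hand. One common sanity check for the choice of k is the silhouette score; a minimal sketch on synthetic data standing in for the keypoint vectors (the blob parameters are made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two synthetic 4-D blobs standing in for the keypoint vectors
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
X_std = StandardScaler().fit_transform(X)

# Higher silhouette score means better-separated clusters;
# the true k (here 2) should score highest.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    print(k, round(silhouette_score(X_std, labels), 3))
```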
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from typing import Callable, Optional, Tuple
import numdifftools as nd
import sympy as sp
from iminuit import Minuit
from iminuit.cost import LeastSquares
import tabulate
def fodd(f, x, p):
return 0.5 * (f(x, p) - f(x, -p))
def central(f, x, p, h):
hinv = 1.0 / h
return fodd(f, x, p + h) * hinv
def f(x, p):
# return np.round(np.exp(x + p), 5)
return np.exp(x + p)
# return (x + p) ** -10
# return ((x + p) ** 3)
def fpt(x):
# return -10 * x ** -11
return np.exp(x)
# return 3 * x ** 2
def steps(p, h0=0.125, factor=0.5 / 1.618033988749895):
eps = np.finfo(float).resolution
h = p * h0
if h == 0:
h = h0
n = int(np.log(eps / h0 ** 2) / np.log(factor)) + 1
return h * factor ** np.arange(n)
x = np.linspace(-0.2, 0.2, 10)
p = 0.0
results = []
for hi in steps(p, 0.5):
fdi = central(f, x, p, hi)
results.append((hi, fdi))
for i in range(len(x)):
def reldev(fx):
return np.abs(fx / fpt(x[i]) - 1)
fddev = []
fd = []
h = []
for hi, fdi in results:
h.append(hi)
fd.append(fdi[i])
fddev.append(reldev(fdi[i]))
h = np.array(h)
fdd = np.diff(fd) ** 2
for n in range(1, len(fdd)):
if fdd[n] >= fdd[n-1]:
break
n = min(n, 5)
if n == 1:
p_est = fd[:1]
fd_err = np.inf
elif n == 2:
p_est = np.polyfit(h[:n] ** 2, fd[:n], n - 1,
rcond=None, cov=False)
fd_err = fdd[0]
else:
p_est, C = np.polyfit(h[:n] ** 2, fd[:n], n - 2,
rcond=None, cov=True)
fd_err = C[-1, -1] ** 0.5
plt.figure()
plt.plot(h, fddev, "o")
plt.loglog()
hm = np.geomspace(1e-9, h[0])
plt.plot(hm, reldev(np.polyval(p_est, hm ** 2)),
color="r")
plt.axhline(reldev(p_est[-1]), color="g")
plt.axhline(fd_err / p_est[-1], color="g", ls="--")
plt.title(f"x = {x[i]} "
f"rel-err[est] = {fd_err / p_est[-1]:.1e} "
f"rel-err = {reldev(p_est[-1]):.1e}")
plt.plot(h[:-1], np.abs(np.diff(fd)) / fpt(x[i]), "s", mfc="none", ms=10)
plt.plot(h[:n], reldev(fd[:n]), "+", zorder=5, ms=15, color="k")
plt.loglog()
def _central(f, x, h):
return (f(x + h) - f(x - h)) * (0.5 / h)
def _steps(x, h0, factor, maxiter):
h = x * h0
if h == 0:
h = h0
return h * factor ** np.arange(maxiter)
def derive(f, x, rtol=0, maxiter=10,
factor=0.3090169943749474,
initial_step=0.5,
diagnostic=None
):
squeeze = np.ndim(x) == 0
x = np.atleast_1d(x)
x_shape = np.shape(x)
h = _steps(x, initial_step, factor, maxiter)
r = _central(f, x, h[0])
r_shape = np.shape(r)
squeeze &= r_shape == x_shape
re = np.full(r_shape, np.inf)
todo = np.ones(r_shape, dtype=bool)
fd = []
fd.append(r)
for i in range(1, len(h)):
fdi = _central(f, x, h[i])
fd.append(fdi[todo])
# polynomial fit with one extra degree of freedom
grad = min(i - 1, 3)
start = i - (grad + 1)
stop = i + 1
q, c = np.polyfit(h[start:stop] ** 2, fd[start:], grad,
rcond=None, cov=True)
ri = q[-1]
# pulls have roughly unit variance, however,
# the pull distribution is not gaussian and looks
# more like student's t
rei = c[-1, -1] ** 0.5
# update estimates that have significantly smaller error now
sub_todo = rei < 2 * re[todo] * factor ** 2
todo1 = todo.copy()
todo[todo1] = sub_todo
r[todo] = ri[sub_todo]
re[todo] = rei[sub_todo]
# do not improve estimates further which meet the tolerance
if rtol:
sub_todo &= rei > rtol * np.abs(ri)
todo[todo1] = sub_todo
# print("dev", r / (3 * x ** 2) - 1)
# print("est", re / np.abs(r))
# print(todo)
if np.sum(todo) == 0:
break
# shrink previous vectors of estimates
fd = [fdi[sub_todo] for fdi in fd]
if squeeze:
return np.squeeze(r), np.squeeze(re)
return r, re
x = np.linspace(-10, 10, 5)
def f(p):
return np.exp(x + p)
def fpt(x):
return np.exp(x)
print("exp(x)")
fp, fpe = derive(f, 0)
print(tabulate.tabulate([
["est rel-err"] + list(fpe / np.abs(fp)),
["true rel-err"] + list(fp / fpt(x) - 1),
], tablefmt="presto", floatfmt=".1e"))
def f(p):
return (x + p) ** 3
def fpt(x):
return 3 * x ** 2
print("x^3")
x = np.linspace(-0.0001, 0.0001, 6)
fp, fpe = derive(f, 0)
print(tabulate.tabulate([
["value"] + list(fp),
["est rel-err"] + list(fpe / np.abs(fp)),
["true rel-err"] + list(fp / fpt(x) - 1),
], tablefmt="presto", floatfmt=".1e"))
derive(np.exp, -100)
derive(lambda p: (x + p) ** 3, 1e-10, 0)
nd.Derivative(lambda p: (x + p) ** 3)(1e-10)
%%timeit -n 1 x = np.linspace(0, 10, 10000)
derive(lambda p: np.exp(x + p), 0)
%%timeit -n 1 x = np.linspace(0, 10, 10000)
derive(lambda p: np.exp(x + p), 0, rtol=1e-2)
%%timeit -n 1 -r 1 x = np.linspace(0, 10, 10000)
nd.Derivative(lambda p: np.exp(x + p))(0)
x = np.linspace(-10, 10, 1000)
fp1 = nd.Derivative(lambda p: np.exp(x + p).astype(np.float32))(0)
fp2 = derive(lambda p: np.exp(x + p).astype(np.float32), 0, initial_step=0.5, factor=0.5)[0]
plt.plot(x, np.abs(fp1 / np.exp(x) - 1), label="numdifftools")
plt.plot(x, np.abs(fp2 / np.exp(x) - 1), label="derive")
plt.semilogy()
plt.legend();
x = np.geomspace(1e-10, 1, 100)
fp1 = nd.Derivative(lambda p: (x + p) ** 3)(0)
fp2, fpe2 = derive(lambda p: (x + p) ** 3, 0)
plt.plot(x, np.abs(fp1 / fpt(x) - 1), label="numdifftools")
plt.plot(x, np.abs(fp2 / fpt(x) - 1), label="derive")
plt.plot(x, np.abs(fpe2 / fpt(x)), label="derive-err")
plt.loglog()
plt.legend();
class F:
nf = 0
nx = 0
def __init__(self, x):
self.x = x
def __call__(self, p):
x = self.x
self.nf += 1
self.nx += len(x)
y = (x + p)
return np.sin(y)/(y**2 + 1)
x = np.linspace(-10, 10, 201)
f = F(x)
fpx, fpxe = derive(f, 0)
plt.plot(x, F(x)(0), label="f(x)")
plt.plot(x, fpx, label="f'(x)")
plt.legend();
f = F(x)
fp1 = nd.Derivative(f)(0)
print(f.nf, f.nx, round(f.nx / len(x), 1))
f = F(x)
fp2 = derive(f, 0)[0]
print(f.nf, f.nx, round(f.nx / len(x), 1))
f = F(x)
fp3 = derive(f, 0, rtol=1e-3)[0]
print(f.nf, f.nx, round(f.nx / len(x), 1))
# compute exact derivative of sin(x)/x
fp = sp.lambdify("x", sp.diff("sin(x)/(x**2 + 1)").simplify(), "numpy")
plt.plot(x, fp(x));
fpx = fp(x)
plt.plot(x, np.abs(fp1/fpx-1), label=f"numdifftools stdev={np.nanstd(fp1/fpx):.0e}")
plt.plot(x, np.abs(fp2/fpx-1), label=f"vjacobi stdev={np.nanstd(fp2/fpx):.0e}")
plt.plot(x, np.abs(fp3/fpx-1), label=f"vjacobi rtol=1e-3 stdev={np.nanstd(fp3-fpx):.0e}")
plt.semilogy()
plt.legend();
from scipy.stats import median_abs_deviation as mad
x = np.linspace(-10, 10, 20001)
fp1, fp1e = derive(F(x), 0)
pull = (fp1 - fp(x)) / fp1e
plt.hist(pull, bins=100, range=(-5,5));
plt.title(f"{np.std(pull):.2f} {mad(pull):.2f}");
```
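The `derive` function above extrapolates central differences to h → 0 by fitting polynomials in h². The same idea in its simplest two-step form is classic Richardson extrapolation, sketched here:

```python
import numpy as np

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# Central differences have error O(h^2); combining two step sizes
# cancels the leading term: D = (4*D(h/2) - D(h)) / 3.
f, x, h = np.exp, 1.0, 0.1
d1 = central(f, x, h)
d2 = central(f, x, h / 2)
richardson = (4 * d2 - d1) / 3
print(abs(d1 - np.exp(1)), abs(richardson - np.exp(1)))
```

The extrapolated value has error O(h⁴), so it is several orders of magnitude more accurate than either raw difference at the same step sizes.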
| github_jupyter |
<a href="https://colab.research.google.com/github/papagorgio23/Python101/blob/master/Python_101.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Beginning: libraries
import pandas as pd  # basic data-storage library
import numpy as np   # numpy: additional data types and numerical functions
import matplotlib.pyplot as plt
# Python is Case Sensitive
# Pay attention to indents, No mixing tabs and spaces
##################################### Declaring Variables ########################
x1 = 12
x2 = 3
print(x1 + x2) ## Addition
print(x1 * x2) ## Multiplication
print(x1 / x2) ### Division
# print(x1 ** x2) ## ^3
# print(x1)
####################################List ########################
a1 = ['Carrots','Peas','Celery','Apple']
a1
a2 = 'Apple'
a1[3]
# #################If Statements######################
if a1[0] == a2:
print('Yum')
else:
print('Yuck')
# Installing Library
!pip install pydata_google_auth
# Using GBQ shout Out to Hughes
import pandas_gbq
import pydata_google_auth
SCOPES = [
'https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/drive',
]
credentials = pydata_google_auth.get_user_credentials(
SCOPES,
auth_local_webserver=False)
df3 = pandas_gbq.read_gbq( "SELECT * FROM `fdr-data-platform.standardized_data.payment_history` limit 1000", dialect = 'standard',project_id='ffn-dw-bigquery-prd', credentials=credentials)
sql = """
SELECT DISTINCT
h.salesforce_application_id
,h.created_time AS core_created_Datetime
,a.created_datetime AS app_created_Datetime
,a.wcb_delivered_at_datetime
FROM
`freedom-dw.loscore.loan_application_history` h
INNER JOIN
`ffam-data-platform.standardized_data.fplus_application` a ON a.application_key = h.salesforce_application_id
WHERE
salesforce_application_id IN ('a010f00000V6ys3AAB', 'a010f00000VhmSRAAZ', 'a010f00000V8dNWAAZ')
"""
df2 = pandas_gbq.read_gbq( sql, dialect = 'standard', project_id='ffn-dw-bigquery-prd', credentials=credentials)
df3.head()
df2.head()
#df3.dtypes
#df3.to_clipboard()
# # Importing Data
from google.colab import files
uploaded = files.upload()
#titanic = pd.read_csv('titanic_data.csv')
url = 'https://raw.githubusercontent.com/cfb2/Machine-Learning/master/train.csv'
df = pd.read_csv(url)
############### Download Files ############################
# from google.colab import files
# df.to_csv('df.csv')
# files.download('df.csv')
##### Working with DataFrames ######
# df.head()
# df.dtypes
################################## return Column as Series ##################################
# df['Name']
# type(df['Name'])
################################## Return Column as Series ##################################
# return columns as dataframe
# df2 = df[['Name', 'Pclass', 'Sex']]
# type(df2)
################################################ EDA ######################################
# df[df['Sex']=='male']
# df[df['Sex']=='male'][df['Pclass']>1]
# df['Fare'].describe()
# df['Is_Male'] =np.where(df['Sex']=='male',1,0)
# df
# df['Is_Child'] = np.where(df['Age']< 12 , 1,0)
# df
########################################## Aggregations ####################################
# df[['Name','Sex']].groupby(['Sex']).count()
# df4 = df[['Pclass','Fare']].groupby('Pclass').sum().reset_index()
# df4
# df5 = df[['Pclass','Age']].groupby('Pclass').median().reset_index()
# df5
# df6 = df4.merge(df5, how = 'inner', on = 'Pclass' )
# df6
df7 = df[['Is_Child','Survived','Name']].groupby(['Is_Child','Survived']).count().reset_index()
############# Exporting Table ###################
# df6.to_clipboard()
# df6.to_csv('C:/df6.csv', index=True)
```
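The commented-out EDA cells above create indicator columns with `np.where`; a minimal standalone sketch of that pattern (column names mirror the Titanic data used above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'male'],
                   'Age': [30, 8, 45]})
# np.where(condition, value_if_true, value_if_false) vectorizes the if/else
df['Is_Male'] = np.where(df['Sex'] == 'male', 1, 0)
df['Is_Child'] = np.where(df['Age'] < 12, 1, 0)
print(df)
```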
| github_jupyter |
# Bouguer anomaly for Hawaii
## Importing libraries
```
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import verde as vd
import pyproj
import boule as bl
import harmonica as hm
notebook_name = '6. Hawaii_bouguer_anomaly.ipynb'
```
### Plot style
```
plt.style.use('ggplot')
```
### Loading the data
```
fname = 'data_set/gravity_Hawaii_03deg_EIGEN-6C4.gdf'
data = hm.load_icgem_gdf(fname)
fname = 'data_set/geoid_Hawaii_03deg_EIGEN-6C4.gdf'
geoide = hm.load_icgem_gdf(fname)
data['geoid'] = geoide.geoid
fname = 'data_set/topography_Hawaii_03deg_etopo1.gdf'
topografia = hm.load_icgem_gdf(fname)
data['topography'] = topografia.topography_shm
data
```
### Information about the region and the data
```
region = (-165,-150,15.01,25)
```
### Computing normal gravity
```
elipsoide = bl.WGS84
data['gamma'] = elipsoide.normal_gravity(data.latitude,data.h_over_geoid)
```
### Computing the gravity disturbance
```
data['disturbance'] = data.gravity_earth - data.gamma
```
### Computing the Bouguer anomaly
```
data['h_over_ell'] = data.geoid + data.topography
bouguer = hm.bouguer_correction(data.h_over_ell)
data['disturbance_bouguer'] = data.disturbance - bouguer
```
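`hm.bouguer_correction` applies the infinite-slab approximation 2πGρh, with a standard crustal density (assumed here to be 2670 kg/m³). The magnitude of that correction can be checked by hand:

```python
import numpy as np

G = 6.6743e-11   # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0     # assumed crustal density, kg/m^3
height_m = 1000.0  # topographic height above the ellipsoid

# Infinite Bouguer slab: g = 2*pi*G*rho*h, converted from m/s^2 to mGal
slab_mgal = 2 * np.pi * G * rho * height_m * 1e5
print(round(slab_mgal, 2))  # about 112 mGal per km of rock
```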
## Gravity field, normal gravity, and disturbance
```
fig,(ax1,ax2,ax3,ax4) = plt.subplots(nrows=1,ncols=4,figsize=(22, 18), subplot_kw=dict(projection=ccrs.Mercator()))
pg = data.gravity_earth.plot.pcolormesh(ax=ax1, cmap="viridis_r", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pg, ax=ax1, orientation="horizontal", pad=0.01, aspect=40, label="mGal")
ax1.set_title("Gravity field")
ax1.set_extent(region,crs=ccrs.PlateCarree())
pn = data.h_over_ell.plot.pcolormesh(ax=ax2, cmap="terrain", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pn, ax=ax2, orientation="horizontal", pad=0.01, aspect=40, label="meters")
ax2.set_title("Topography")
ax2.set_extent(region,crs=ccrs.PlateCarree())
pd = data.disturbance.plot.pcolormesh(ax=ax3, cmap="RdBu_r", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pd, ax=ax3, orientation="horizontal", pad=0.01, aspect=40, label="mGal")
ax3.set_title("Gravity disturbance")
ax3.set_extent(region,crs=ccrs.PlateCarree())
pb = data.disturbance_bouguer.plot.pcolormesh(ax=ax4, cmap="RdBu_r", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pb, ax=ax4, orientation="horizontal", pad=0.01, aspect=40, label="mGal")
ax4.set_title("Bouguer anomaly")
ax4.set_extent(region,crs=ccrs.PlateCarree())
file_name = 'images/gravity_Hawaii'
plt.savefig(file_name+'.png',dpi=300)
plt.show()
```
## Visualizing the data along a profile
### Defining regular-grid coordinates and the profile coordinates
```
step = 0.03
longitude = np.arange(region[0],region[1]+step,step=step)[:-1]
latitude = np.arange(region[2],region[3]+step,step=step)[:-1]
long,lat = np.meshgrid(longitude,latitude)
full_coordinates = (long,lat)
start = (-159.4,24)
end = (-160,17)
```
### Decimating the disturbance data
```
spacing = 1/60
reducer = vd.BlockReduce(reduction=np.median, spacing=spacing)
coordinates, disturbance = reducer.filter(
(long,lat), data.disturbance.values)
projection = pyproj.Proj(proj="merc", lat_ts=coordinates[1].mean())
proj_coords = projection(*coordinates)
```
### Interpolating the data
```
spline = vd.ScipyGridder(method='cubic')
```
#### Disturbance profile
```
interpolate_dist = spline.fit(proj_coords,disturbance)
profile_dist = interpolate_dist.profile(
point1=start,
point2=end,
size=400,
dims=("latitude","longitude"),
data_names=["disturbance"],
projection=projection,)
profile_dist
```
### Decimating the topography data
```
spacing = 1/60
reducer = vd.BlockReduce(reduction=np.median, spacing=spacing)
coordinates, topography = reducer.filter(
(long,lat), data.h_over_ell.values)
interpolate_topo = spline.fit(proj_coords,topography)
```
#### Topography profile
```
profile_topo = interpolate_topo.profile(
point1=start,
point2=end,
size=400,
dims=("latitude","longitude"),
data_names=["topography"],
projection=projection,)
profile_topo
```
### Decimating the Bouguer anomaly data
```
spacing = 1/60
reducer = vd.BlockReduce(reduction=np.median, spacing=spacing)
coordinates, bouguer_topo = reducer.filter(
(long,lat), data.disturbance_bouguer.values)
interpolate_bouguer = spline.fit(proj_coords,bouguer_topo)
profile_bouguer = interpolate_bouguer.profile(
point1=start,
point2=end,
size=400,
dims=("latitude","longitude"),
data_names=["bouguer"],
projection=projection,)
profile_bouguer
fig,(ax1,ax2,ax3) = plt.subplots(nrows=1,ncols=3,figsize=(14, 8), subplot_kw=dict(projection=ccrs.Mercator()))
pn = data.h_over_ell.plot.pcolormesh(ax=ax1, cmap="terrain", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pn, ax=ax1, orientation="horizontal", pad=0.01, aspect=40, label="meters")
ax1.plot(profile_topo.longitude, profile_topo.latitude, "-k", transform=ccrs.PlateCarree())
ax1.text(start[0], start[1], "A", transform=ccrs.PlateCarree())
ax1.text(end[0], end[1], "B", transform=ccrs.PlateCarree())
ax1.set_title("Topography")
ax1.set_extent(region,crs=ccrs.PlateCarree())
pd = data.disturbance.plot.pcolormesh(ax=ax2, cmap="RdBu_r", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pd, ax=ax2, orientation="horizontal", pad=0.01, aspect=40, label="mGal")
ax2.plot(profile_dist.longitude, profile_dist.latitude, "-k", transform=ccrs.PlateCarree())
ax2.text(start[0], start[1], "A", transform=ccrs.PlateCarree())
ax2.text(end[0], end[1], "B", transform=ccrs.PlateCarree())
ax2.set_title("Gravity disturbance")
ax2.set_extent(region,crs=ccrs.PlateCarree())
pb = data.disturbance_bouguer.plot.pcolormesh(ax=ax3, cmap="RdBu_r", add_colorbar=False, transform=ccrs.PlateCarree())
plt.colorbar(pb, ax=ax3, orientation="horizontal", pad=0.01, aspect=40, label="mGal")
ax3.plot(profile_bouguer.longitude, profile_bouguer.latitude, "-k", transform=ccrs.PlateCarree())
ax3.text(start[0], start[1], "A", transform=ccrs.PlateCarree())
ax3.text(end[0], end[1], "B", transform=ccrs.PlateCarree())
ax3.set_title("Bouguer anomaly")
ax3.set_extent(region,crs=ccrs.PlateCarree())
file_name = 'images/profile_Hawaii'
plt.savefig(file_name+'.png',dpi=300)
plt.show()
fig,(ax1,ax2) = plt.subplots(nrows=2, ncols=1, sharex=True,figsize=(15, 5))
### Disturbance and Bouguer anomaly (with topography)
ax1.set_title("Profile of gravity data (A-B)")
ax1.plot(profile_dist.distance, profile_dist.disturbance, "-g",label='disturbance')
ax1.plot(profile_bouguer.distance, profile_bouguer.bouguer, "-r",label='bouguer')
ax1.set_ylabel("gravity data (mGal)")
ax1.set_xlim(profile_dist.distance.min(), profile_dist.distance.max())
ax1.legend()
### Topography and sea level
ax2.fill_between(profile_topo.distance,0.,min(profile_topo.topography),color='blue')
ax2.fill_between(profile_topo.distance,profile_topo.topography,min(profile_topo.topography),color='black')
ax2.set_ylabel("topography (meters)")
ax2.set_xlim(profile_topo.distance.min(), profile_topo.distance.max())
ax2.set_xlabel("Distance (m)")
file_name = 'images/disturbance_bouguer_Hawaii'
plt.savefig(file_name+'.png',dpi=300)
plt.tight_layout()
plt.show()
```
| github_jupyter |
# Getting Started with Statistical Analysis in Pandas (2)
- When reposting, please credit: https://github.com/liupengyuan/
- ## Statistical analysis of two-dimensional data (DataFrame basics)
---
```
%matplotlib inline
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```
## 2. Statistical analysis of two-dimensional data (DataFrame basics)
- Describing, analyzing, visualizing, summarizing, and reading/writing data
df: grouped bar charts, multi-series line charts, layered plots
### 1. The DataFrame object and data display
- DataFrame is one of the most important and fundamental pandas data objects; it can represent a data table
- If a Series is viewed as one column of a table, a DataFrame is several columns of that table — multiple Series sharing the same index
- If a Series is viewed as a vector (with an index), a DataFrame is a matrix (with an index).
- This section again uses a word-frequency result as the running example.
- For each code snippet in the tutorial, create a new Python program, enter the code, run it in order, and observe the results.
```
words_freq = np.array([200,300,400,350,390,600,900,400,300,120])
freq_dict = {'boy':words_freq,'girl':words_freq*2,'children':words_freq+100, 'child': words_freq+300}
total_words_freq = [12345000,23456000,22333000,45632000,11144000,65433000,44444000,55555000,34522000,55566000]
years = pd.date_range('2006', periods=10, freq='A')
df = DataFrame(freq_dict)
df
```
- Build a DataFrame from a dict; its index defaults to integers starting at 0
```
df = DataFrame(freq_dict, index = years)
df
```
- As with Series, the index can be specified when constructing a DataFrame
- The data can be viewed as 4 Series sharing the same index
```
df.plot()
```
- As with Series, a DataFrame can be plotted as a line chart directly with plot()
```
df.plot(kind='bar')
```
- Plot a bar chart (in statistics, usually called a grouped bar chart)
```
df.plot(kind='barh')
```
- Plot a horizontal bar chart
```
df.plot(kind = 'area')
```
- Plot a stacked area chart
```
df.plot(kind='box')
```
- Plot a box plot
```
sns.boxplot(data=df)
```
- Plot a box plot with seaborn
```
df.plot(kind='scatter', x='boy',y='girl')
```
- Setting kind='scatter' in plot() draws a scatter plot
- A scatter plot shows the relationship between 2 data sets (2 variables)
- Here the relationship between `boy` and `girl` is clearly linear
```
df.plot(kind='scatter', x='boy',y='girl',s=df['child'].values)
```
- For a scatter plot, passing a list-like to the s parameter sets each point's size, producing a bubble chart
- A bubble chart shows the relationship among three data sets (three variables). The figure above reads: as the number of boys grows, the numbers of girls and children grow as well.
### 2. DataFrame summarization and descriptive statistics
```
df.sum()
```
- All aggregation and descriptive statistics functions of Series also work on DataFrame
- The default is axis=0: each column is taken as a Series and aggregated down the rows
```
df.sum(axis=1)
```
- With axis=1, each row is taken as a Series and aggregated across the columns
```
df.min()
df.mean()
df.var()
df.std()
df.std()/df.mean()
df.median()
df.cumsum()
df.kurt()
df.skew()
df.describe()
```
### 3. Inspecting DataFrame data
```
df.head()
df.tail(4)
```
- As with Series, head() and tail() show the first or last few rows.
```
df.index
```
- Inspect the index attribute; the index is also called the row labels
```
df.values
```
- Inspect the values attribute; a DataFrame's values is a numpy ndarray.
```
df.columns
```
- Inspect the columns attribute, also called the column labels. The column labels form a pandas Index object, similar to a list, here with four elements of dtype `object`; in pandas, non-numeric data is generally of type object.
```
dft = df.T
dft
```
- A DataFrame can be transposed.
- Note: transposing does not modify the original DataFrame; it returns a new one.
```
dft.index
dft.columns
```
- After transposing, the index and columns swap values
```
df.sort_index(ascending = False)
```
- Sort by index; the default is ascending order on the row index. The row direction is defined as axis 0: axis=0
- `ascending = False` sorts in descending order
```
df.sort_index(axis = 1, ascending = False)
```
- With axis=1, sorting is done on the column index
```
df.sort_values(by='boy')
```
- Sort by values; to sort on a particular column, pass its label to the by parameter
### 4. Selecting DataFrame data
**4.1 Column selection**
```
df['boy']
```
- Select one column by its label; this returns the column as a Series. `df.boy` is equivalent.
```
df[['boy']]
```
- Passing a single column label inside a list returns that column as a DataFrame.
```
df[['boy','children']]
```
- Passing a list of several column labels returns those columns as a DataFrame.
```
df.iloc[:,[0,2]]
```
- Selecting columns by integer position requires iloc: plain `df[[0, 2]]` would look 0 and 2 up as column labels and raise a KeyError here; `df.iloc[:, [0, 2]]` returns the corresponding columns as a DataFrame.
```
df.loc[:,'boy']
```
- Columns can also be selected with the DataFrame's loc attribute
- loc selects data mainly by label
- The first positional argument addresses the row direction (axis=0); `:` selects everything
- The second positional argument addresses the column direction (axis=1) and selects the given column label
- Returns a Series
```
df.loc[:,['boy']]
```
- As above; putting the column label in a list returns a DataFrame
```
df.loc[:,['boy', 'girl']]
```
- As above; multiple columns can be selected, returning a DataFrame
```
df.loc[:,'boy': 'girl']
```
- Columns can be selected with a label slice
- Note: label slices include both endpoints
- Note: a slice does not go inside a list
```
df.iloc[:,0]
```
- Columns can also be selected with the DataFrame's iloc attribute
- iloc, the position attribute, selects data by integer position
- The first positional argument still addresses the row direction (axis=0); `:` selects everything
- The second positional argument still addresses the column direction (axis=1) and selects the column at the given position
- Returns a Series
```
df.iloc[:,[1]]
```
- As above; putting the column position in a list returns a DataFrame
```
df.iloc[:,[1,2]]
```
- As above; several columns can be selected by position, returning a DataFrame
```
df.iloc[:,0:2]
```
- As above; a position slice selects several columns, returning a DataFrame
**4.2 Row selection**
```
df.loc[years[0]]
```
- Rows can also be selected with the DataFrame's loc attribute
- loc selects data mainly by label
- The first positional argument addresses the row direction (axis=0)
- The second positional argument addresses the column direction (axis=1); when omitted, everything is selected
- Returns a Series
```
df.loc[[years[0]]]
```
- As above; putting the row label in a list selects that row and returns a DataFrame
```
df.loc[[years[0], years[2]]]
```
- As above; a list of several row labels selects several rows, returning a DataFrame
```
df.loc[years[0]: years[2]]
```
- As above; a label slice selects several rows, returning a DataFrame
```
df.iloc[0]
```
- Rows can also be selected with the DataFrame's iloc attribute
- iloc, the position attribute, selects data by integer position
- The first positional argument still addresses the row direction (axis=0)
- The second positional argument still addresses the column direction (axis=1); when omitted, everything is selected
- Returns a Series
```
df.iloc[[0]]
```
- As above; putting the row position in a list returns a DataFrame
```
df.iloc[[0,1,2]]
```
- As above; several rows can be selected by position, returning a DataFrame
```
df.iloc[1:3]
```
- As above; a position slice selects several rows, returning a DataFrame
```
df[0:3]
```
- A plain row slice also selects several rows, returning a DataFrame
- Personally, I find this easy to confuse with column selection and do not recommend it
**4.3 Selecting a block**
**4.3.1 With loc**
```
df.loc[years[0],'boy']
```
- With loc, select the data at the given row label and column label
```
df.loc[years[0],['boy']]
```
- With loc, a row label plus a one-element list of column labels selects the data at that row and column
- Returns a Series
```
df.loc[[years[0]],['boy']]
```
- With loc, a one-element list of row labels plus a one-element list of column labels selects the data at that row and column
- Returns a DataFrame
```
df.loc[years[0],['boy', 'girl']]
```
- With loc, a row label plus a list of several column labels selects several columns of that row
- Returns a Series
```
df.loc[years[0],'boy':'girl']
```
- Similarly, a label slice can select several columns of a given row
- Returns a Series
```
df.loc[[years[0]],'boy':'girl']
```
- As above, but this returns a DataFrame
```
df.loc[years[0]:years[2],'boy':'girl']
```
- Slicing on both the row and column labels selects the corresponding block
- Rows and columns can be combined with loc in any of the ways shown above
**4.3.2 With iloc**
Selecting a region with iloc works just like loc, except that integer positions are used instead of labels; refer to the examples above
```
df.iloc[0,0]
df.iloc[0,[0,1,2]]
df.iloc[[0,1,2],0]
df.iloc[[0,1,2],[1, 2]]
df.iloc[0:2,[1, 2]]
df.iloc[0:5,0:3]
```
**4.3.3 Selecting a single element with iat**
```
df.iat[1,1]
```
- Of all the row/column selection methods above, loc and iloc are recommended: slightly more verbose, but logically rigorous and better-performing.
## 5. Conditional selection of DataFrame data (boolean indexing)
### The easiest pattern to remember for conditional selection is `object[boolean index]`, where the object can be the whole DataFrame or a column, row, or block selected as in the previous section. Note that the boolean index must align with the selected data.
### There are many other ways to achieve the same selections, but to avoid confusion in learning and use, this tutorial does not cover them.
### 5.1 Columns
```
df.boy > 400
```
- As with Series, this is the element-wise boolean result of the comparison
- Returns a Series
```
df[df.boy > 400]
```
- A boolean condition on one column of the DataFrame filters the rows by the result
- Returns a DataFrame
```
df[(df.boy >300) & (df.girl > 900)]
```
- Boolean conditions on several columns can be combined; wrap each part in parentheses and use `&` for and, `|` for or, `~` for not.
- Returns a DataFrame
```
df['girl'][df.boy >300]
```
- Select one column of df, then filter it with a boolean index
- Returns a Series
```
df[['girl']][df.boy >300]
```
- As above, but returns a DataFrame
```
df[['girl', 'child']][(df.boy >300) & (df.girl > 900)]
```
- Select several columns, then filter with a combined boolean index
```
df['girl'].isin([700,800])
```
- Calling `isin()` on a Series tests whether each value is in the given sequence
- Returns a Series of booleans
```
df[df['girl'].isin([700,800])]
```
- Use isin() on one column to filter the data
```
df[['girl', 'children']][df['girl'].isin([700,800])]
```
- Select columns, then filter with the boolean index produced by isin
### 5.2 Rows
- pandas support for selecting rows with boolean indexes is still comparatively weak
```
df.loc[years[1]]>500
df.loc[years[0]][df.loc[years[1]]>500]
```
- However, the `object[row condition]` pattern cannot be applied after selecting a row object as above, which I consider an oversight in pandas' design — although boolean row selection is rarely needed.
```
df.T[[years[0],years[1]]][df.loc[years[2]]>500].T
```
- If you really need it, transpose the DataFrame, select with a column boolean index, then transpose back
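An alternative to the double transpose above: `loc` accepts a boolean mask in its column slot, so columns can be filtered by a condition on one row directly (the values here are made up):

```python
import pandas as pd

df = pd.DataFrame({'boy': [100, 400], 'girl': [200, 1200]},
                  index=['2006', '2007'])
# Keep only the columns whose value in row '2007' exceeds 500,
# without transposing twice.
mask = df.loc['2007'] > 500
print(df.loc[:, mask])
```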
| github_jupyter |
# Prepare Test Data
```
import pandas as pd
import numpy as np
pd.options.display.max_colwidth = 100000000
test_data = pd.read_csv("C:\\Users\\ricardo\\Github\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\test_data.csv")
print(len(test_data))
print(test_data.columns)
# neg = "0", pos = "1"
for i in range(len(test_data)):
    # check for missing labels first (comparing NaN against strings would silently fail)
    if pd.isna(test_data.iloc[i, 1]):
        test_data.iloc[i, 1] = 0
    elif test_data.iloc[i, 1] == "neg":
        test_data.iloc[i, 1] = 0
    elif test_data.iloc[i, 1] == "pos":
        test_data.iloc[i, 1] = 1
test_df_bert = pd.DataFrame({
'id':range(len(test_data)),
'label':test_data.iloc[:, 1],
'alpha':['a']*test_data.shape[0],
'text': test_data.iloc[:, 0].replace(r'\n', ' ', regex=True)
})
print(len(test_data))
print(test_data.head(5))
test_df_bert.to_csv('./Bert_Test/test.tsv', sep='\t', index=False, header=False)
```
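The row-by-row loop above can be replaced by a vectorized mapping; a sketch of the equivalent transformation:

```python
import pandas as pd

labels = pd.Series(["neg", "pos", None, "pos"])
# Vectorized replacement for the row-by-row loop: map the two
# string labels to integers and fill missing values with 0.
numeric = labels.map({"neg": 0, "pos": 1}).fillna(0).astype(int)
print(numeric.tolist())  # [0, 1, 0, 1]
```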
# Evaluation
```
import torch
import numpy as np
import pickle
from sklearn.metrics import matthews_corrcoef, confusion_matrix
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from torch.utils.data.distributed import DistributedSampler
from torch.nn import CrossEntropyLoss, MSELoss
from tools import *
from multiprocessing import Pool, cpu_count
import convert_examples_to_features
from tqdm import tqdm_notebook, trange
import os
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM, BertForSequenceClassification
from pytorch_pretrained_bert.optimization import BertAdam, WarmupLinearSchedule
# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)
#device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")
# The input data dir. Should contain the .tsv files (or other data files) for the task.
DATA_DIR = "Bert_Test/"
# Bert pre-trained model selected in the list: bert-base-uncased,
# bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased,
# bert-base-multilingual-cased, bert-base-chinese.
BERT_MODEL = 'tmu.tar.gz'
# The name of the task to train. Here it is 'TMU'.
TASK_NAME = 'TMU'
# The output directory where the fine-tuned model and checkpoints will be written.
OUTPUT_DIR = f'outputs/{TASK_NAME}/'
# The directory where the evaluation reports will be written to.
REPORTS_DIR = f'reports/{TASK_NAME}_evaluation_reports/'
# This is where BERT will look for pre-trained models to load parameters from.
CACHE_DIR = 'cache/'
# The maximum total input sequence length after WordPiece tokenization.
# Sequences longer than this will be truncated, and sequences shorter than this will be padded.
MAX_SEQ_LENGTH = 128
TRAIN_BATCH_SIZE = 24
EVAL_BATCH_SIZE = 8
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 1
RANDOM_SEED = 42
GRADIENT_ACCUMULATION_STEPS = 1
WARMUP_PROPORTION = 0.1
OUTPUT_MODE = 'classification'
CONFIG_NAME = "config.json"
WEIGHTS_NAME = "pytorch_model.bin"
if os.path.exists(REPORTS_DIR) and os.listdir(REPORTS_DIR):
REPORTS_DIR += f'/report_{len(os.listdir(REPORTS_DIR))}'
os.makedirs(REPORTS_DIR)
if not os.path.exists(REPORTS_DIR):
os.makedirs(REPORTS_DIR)
REPORTS_DIR += f'/report_{len(os.listdir(REPORTS_DIR))}'
os.makedirs(REPORTS_DIR)
def get_eval_report(task_name, labels, preds):
mcc = matthews_corrcoef(labels, preds)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
return {
"task": task_name,
"mcc": mcc,
"tp": tp,
"tn": tn,
"fp": fp,
"fn": fn
}
def compute_metrics(task_name, labels, preds):
assert len(preds) == len(labels)
return get_eval_report(task_name, labels, preds)
# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained(OUTPUT_DIR + 'vocab.txt', do_lower_case=False)
processor = BinaryClassificationProcessor()
eval_examples = processor.get_test_examples(DATA_DIR)
label_list = processor.get_labels() # [0, 1] for binary classification
num_labels = len(label_list)
eval_examples_len = len(eval_examples)
label_map = {label: i for i, label in enumerate(label_list)}
eval_examples_for_processing = [(example, label_map, MAX_SEQ_LENGTH, tokenizer, OUTPUT_MODE) for example in eval_examples]
process_count = cpu_count() - 1
if __name__ == '__main__':
print(f'Preparing to convert {eval_examples_len} examples..')
print(f'Spawning {process_count} processes..')
with Pool(process_count) as p:
eval_features = list(tqdm_notebook(p.imap(convert_examples_to_features.convert_example_to_feature, eval_examples_for_processing), total=eval_examples_len))
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
if OUTPUT_MODE == "classification":
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)
elif OUTPUT_MODE == "regression":
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.float)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
# Run prediction for full data
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=EVAL_BATCH_SIZE)
# Load pre-trained model (weights)
model = BertForSequenceClassification.from_pretrained(CACHE_DIR + BERT_MODEL, cache_dir=CACHE_DIR, num_labels=len(label_list))
model.to(device)
model.eval()
eval_loss = 0
nb_eval_steps = 0
preds = []
for input_ids, input_mask, segment_ids, label_ids in tqdm_notebook(eval_dataloader, desc="Evaluating"):
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
label_ids = label_ids.to(device)
with torch.no_grad():
logits = model(input_ids, segment_ids, input_mask, labels=None)
# create eval loss and other metric required by the task
if OUTPUT_MODE == "classification":
loss_fct = CrossEntropyLoss()
tmp_eval_loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
elif OUTPUT_MODE == "regression":
loss_fct = MSELoss()
tmp_eval_loss = loss_fct(logits.view(-1), label_ids.view(-1))
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if len(preds) == 0:
preds.append(logits.detach().cpu().numpy())
else:
preds[0] = np.append(
preds[0], logits.detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
preds = preds[0]
if OUTPUT_MODE == "classification":
preds = np.argmax(preds, axis=1)
elif OUTPUT_MODE == "regression":
preds = np.squeeze(preds)
result = compute_metrics(TASK_NAME, all_label_ids.numpy(), preds)
result['eval_loss'] = eval_loss
output_eval_file = os.path.join(REPORTS_DIR, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results *****")
for key in (result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
print(preds)
print(len(preds))
```
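The evaluation above reports the Matthews correlation coefficient and confusion-matrix counts via `get_eval_report`. A minimal standalone sketch of what those two sklearn calls return, on toy labels rather than the TMU data:

```python
from sklearn.metrics import matthews_corrcoef, confusion_matrix

labels = [1, 1, 0, 0, 1, 0]
preds = [1, 0, 0, 0, 1, 1]

# ravel() flattens the 2x2 confusion matrix into (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
print(tn, fp, fn, tp)  # 2 1 1 2
# MCC = (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)) = 3/9
print(round(matthews_corrcoef(labels, preds), 3))  # 0.333
```

MCC stays meaningful under class imbalance, which is why it is used here instead of plain accuracy.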
# Output Data
```
pred_list = preds
new_pred_list = []
for p in pred_list:
    if p == 0:
        new_pred_list.append("neg")
    elif p == 1:
        new_pred_list.append("pos")
print(pred_list)
print(new_pred_list)
print(len(new_pred_list))
import pandas as pd
sub_format = pd.read_csv("C:\\Users\\ricardo\\Github\\Kaggle\\1910_TMU_EnglishReviewClassification\\Data\\submission.csv")
sub_format.columns
submission = pd.DataFrame({
'Id':sub_format.iloc[:, 0].tolist(),
'Label':new_pred_list
})
submission.to_csv("./submission_bert_binary.csv", index=False, header=True)
```
This material is copied (possibly with some modifications) from the [Python for Text-Analysis course](https://github.com/cltl/python-for-text-analysis/tree/master/Chapters).
# Chapter 7 - Lists
*This notebook uses code snippets and explanations from [this course](https://github.com/kadarakos/python-course/blob/master/Chapter%205%20-%20Lists.ipynb).*
As we have learned before, **strings** and **integers** have one value in them. When we put a new value in them, the old value is overwritten. In this chapter, we will have a first look at containers. A container allows us to put many values in a single **box**. A container is nice because we can carry lots of values around in one convenient package. It is very straightforward to store a list of items as a container. Not surprisingly, we call this a list.
**At the end of this chapter, you will be able to:**
* create a list
* add items to a list
* extract/inspect items in a list
* perform basic list operations
* use built-in functions on lists
**If you want to learn more about these topics, you might find the following links useful:**
* [Python documentation](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists)
* [Tutorial on lists](https://www.tutorialspoint.com/python/python_lists.htm)
* [Another tutorial on lists](https://www.programiz.com/python-programming/list)
If you have **questions** about this chapter, contact Cody in the Slack group.
## 1. How to create a list
It's very simple to create a list. Please look at the following examples:
```
friends = ['John', 'Bob', 'Mary']
stuff_to_pack = ['socks','shirt','toothbrush']
print(friends)
print(stuff_to_pack)
```
* Lists are surrounded by square brackets and the elements in the list are separated by commas
* A list element can be **any Python object** - even another list
* A list can be empty
```
#list of integers
print([1, 24, 76])
#list of strings
print(['red', 'yellow', 'blue'])
#mixed list
print(['red', 24, 98.6])
#list with a list included
print([ 1, [5, 6], 7])
#empty list
print([])
```
Please note that there are two ways of creating an empty list
```
one_way = []
print(one_way)
another_way = list()
print(another_way)
```
## 2. How to add items to a list
The most common way of adding an item to a list is by using the **append** method.
Let's first look at the help message of this method to understand it better:
```
help(list.append)
```
We learn that **append** takes one positional argument **object** and it returns None. It might be a bit confusing at first that a list method returns None. Please carefully look at the difference between the two following examples. Please predict what will be printed in each code snippet below:
```
a_list = [1, 3, 4]
a_list.append(5)
print(a_list)
a_list = [1, 3, 4]
a_list = a_list.append(5)
print(a_list)
```
The reason why the first example is the correct one is that **lists** are **mutable**, which means that you can change the contents of a list. You can hence change the items in a list without assigning it to a new variable. This also becomes clear when you look at the documentation:
```
help(list.append)
```
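We can make this in-place behaviour visible with the built-in `id()` function, which returns an object's identity: appending changes the list's contents but not which object it is, while "changing" a string always produces a new object.

```python
a_list = [1, 3, 4]
identity_before = id(a_list)
a_list.append(5)                       # modifies the list in place
print(id(a_list) == identity_before)  # True: still the same object

a_string = 'hello'
identity_before = id(a_string)
a_string = a_string + '!'              # creates a brand new string object
print(id(a_string) == identity_before)  # False
```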
In comparison, when we want to change an **immutable object** like a string, the method returns a **copy** of the input:
```
help(str.replace)
a_string = 'hello'
a_string.replace('l', 'b')
print(a_string)
a_string = 'hello'
a_new_string = a_string.replace('l', 'b')
print(a_new_string)
```
## 3. How to extract/inspect items in a list
Please note that **indexing** and **slicing** work the same way as with strings. Every item in the list hence has its own index number. We start counting at 0! The indices for our
list ['John', 'Bob', 'Marry'] are as follows:

| John | Bob | Marry |
|------|-----|-------|
| 0    | 1   | 2     |
| -3   | -2  | -1    |
We can hence use this index number to extract items from a list (just as with strings)
```
friend_list = ['John', 'Bob', 'Marry']
print(friend_list[0])
print(friend_list[1])
print(friend_list[2])
```
Obviously, we can also use **negative indices**:
```
friend_list = ['John', 'Bob', 'Marry']
print(friend_list[-2])
```
And we can extract one part of a list using **slicing**:
```
friend_list = ['John', 'Bob', 'Marry']
list_with_fewer_friends = friend_list[:2]
print(list_with_fewer_friends)
```
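As a supplementary note, slices can also take a third value, the *step*, which lets you take every n-th item or even reverse a list:

```python
friend_list = ['John', 'Bob', 'Marry', 'Sue', 'Ann']
print(friend_list[1:4])   # items at indices 1, 2 and 3
print(friend_list[::2])   # every second item
print(friend_list[::-1])  # a reversed copy of the list
```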
If you insert an index that is higher than what is present in the list, you will get an **IndexError**:
```
print(friend_list[5])
```
Two additional methods are useful for inspecting lists:
* **count**
* **index**
```
help(list.count)
```
The **count** method has one positional argument **value** and returns an integer. As the name already indicates, the method returns how often the value occurs in the list.
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
number_of_bobs = friend_list.count('Bob')
print(number_of_bobs)
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
number_of_franks = friend_list.count('Frank')
print(number_of_franks)
help(list.index)
```
The **index** method has one positional argument **value** and returns the first index of the value. It is hence similar to the **count** method, but now the **first index** is returned of the value instead of the count.
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
first_index_with_john = friend_list.index('John')
print(first_index_with_john)
```
We get a **ValueError** when the value is not in the list
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
friend_list.index('Frank')
```
## 4. Basic List Operations
Python allows at least the following very useful list operations:
* **concatenation**
* **repetition**
* **membership**
* **comparison**
The '+' sign concatenates two lists:
```
one_list = ['where', 'is']
another_list = ['the', 'rest', '?']
print(one_list + another_list)
```
The '*' sign makes it possible to repeat a list:
```
a_list = ['Hello', 'world']
print(a_list * 3)
```
Of course, you can use lists in membership boolean expressions
```
life = ['a', 'lot', 'of', 'stuff']
print('meaning' in life)
```
And you can use lists in comparison boolean expressions
```
print([3, 2] == [2, 3])
```
## 5. Use built-in functions on lists
Python has a range of functions that operate on lists. We can easily get some simple calculations done with these functions:
```
nums = [3, 41, 12, 9, 74, 15]
print(len(nums)) # number of items in a list
print(max(nums)) # highest value in a list
print(min(nums)) # lowest value in a list
print(sum(nums)) # sum of all values in a list
```
## 6. An overview of list methods
There are many more methods which we can perform on lists. Here is an overview of some of them.
In order to get used to them, please call the **help** function on each of them (e.g. help(list.insert)). This will give you the information about the positional arguments, keyword arguments, and what is returned by the method.
```
#define some lists and variables
a = [1,2,3]
b = 4
c = [5,6,7]
x = 1
i = 2
#do some operations
a.append(b) # Add item b to the end of a
a.extend(c) # Add the elements of list c at the end of a
a.insert(i,b) # Insert item b at position i
a.pop(i) # Remove from a the i'th element and return it. If i is not specified, remove the last element
a.index(x) # Return the index of the first element of a with value x. Error if it does not exist
a.count(x) # Return how often value x is found in a
a.remove(x) # Remove from a the first element with value x. Error if it does not exist
a.sort() # Sort the elements of list a
a.reverse() # Reverses list a (no return value!)
print(a)
```
In order to have a complete overview of all list methods, you can use the **dir** built-in function:
```
dir(list)
```
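The output of `dir()` also includes many "dunder" (double-underscore) names. A list comprehension can filter those out so that only the regular methods remain:

```python
# Keep only names that do not start with an underscore
public_methods = [name for name in dir(list) if not name.startswith('_')]
print(public_methods)
```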
## Exercises
**Exercise 1:**
Create an empty list and add three names (strings) to it using the *append* method
```
# your code here
```
**Exercise 2:**
Please count how many times *John* occurs in the list
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
```
**Exercise 3:**
Please use a built-in function to determine the number of strings in the list below
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
```
**Exercise 4:**
Please remove both *John* names from the list below using a list method
```
friend_list = ['John', 'Bob', 'John', 'Marry', 'Bob']
# your code here
```
**Exercise 5:**
Please add *world* to the string and the list below. Only create a new object if it is necessary.
```
a_string = 'hello'
a_list = ['hello']
# your code here
```
## CMA Diagram
Clemmow-Mullaly-Allis (CMA) Diagram
**Warning**: This notebook stores data (png images) under your Jupyter working directory; to be precise, under `/the-path-to-your-jupyter-working-directory/sinupy_data/dispersion/*.png`. You can of course change this location (`data_path`) in the following block.
```
from sympy import sqrt, pi, init_printing; init_printing()
from scipy.constants import e, m_p, m_e, c
import sinupy.mediums.plasma as pms
import matplotlib.pyplot as plt
from pathlib import Path
data_path = Path('./sinupy_data/dispersion'); data_path.mkdir(parents=True, exist_ok=True)
from sinupy.draw import draw_discontinuable_expr, add_line_with_slope
import sinupy.algebra.utility as fualguti
from sinupy import mediums, waves
from sinupy.waves import EM
plasma = mediums.ColdMagnetizedPlasma(species='e+i')
wave_eq = waves.EM.WaveEq(plasma)
wave = wave_eq.wave
m_i_N = m_p
m_e_N = m_e
omega_ce = pms.omega_ce(plasma=plasma)
omega_pe = pms.omega_pe(plasma=plasma)
# Even if your plasma.species is 'e', the ion-relevant symbols would not interrupt ...
# our calculation procedure, because `expr.subs(a_specific_symbol, a_numeric_value)` ...
# also would not interrupt our procedure (i.e. throw an exception) when it finds there ...
# does not exist such `a_specific_symbol` in the formula.
omega_ci = pms.omega_cj(plasma=plasma, varidx='i')
omega_pi = pms.omega_pj(plasma=plasma, varidx='i')
# Substitute symbol parameters with accurate numerical values.
# Note the function will capture the variables B, n_0, m_i from the working scope.
w2N = lambda expr: expr\
.subs(omega_ce, pms.omega_ce(B=B))\
.subs(omega_pe, pms.omega_pe(n_0=n_0))\
.subs(omega_ci, pms.omega_cj(q_e=1, m=m_i_N, B=B))\
.subs(omega_pi, pms.omega_pj(n_0=n_0, q_e=1, m=m_i_N))
```
### $N^2(\omega, \theta=0)$ and $\omega$ Singularities
Express $N^2$ with $\omega$, $\omega_{ce}$, $\omega_{pe}$, rather than $\kappa_\perp$, $\kappa_\times$, $\kappa_\parallel$.
There exist $\omega$ singularities: at these points, $\omega$ causes an infinite $N^2$, *i.e.* induces resonance.
The number of numerical results may be smaller than the number of analytic symbolic results, because sympy knows $\omega \geq 0$ and removes some obviously wrong answers.
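As a minimal sketch of what finding these singularities amounts to (using plain sympy instead of the `sinupy` helper, and a simplified cold-plasma-like expression rather than the full dispersion relation), the poles are the zeros of the denominator of the rational expression:

```python
import sympy as sp

w = sp.symbols('omega', positive=True)
w_pe, w_ce = sp.symbols('omega_pe omega_ce', positive=True)

# A rational N^2(omega): singular where the denominator vanishes
N2 = 1 - w_pe**2 / (w**2 - w_ce**2)
poles = sp.solve(sp.denom(sp.together(N2)), w)
print(poles)  # [omega_ce]; omega = -omega_ce is discarded since omega > 0
```

This is why the numeric results above can have fewer entries than the symbolic ones: the positivity assumption on $\omega$ prunes the negative roots.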
```
# Substitute kappa components with omega.
N2_in_omega = [
pms.kappa2omega(sol, wave, plasma) for sol in
EM.solve_N2(wave_eq, theta=0)] # <-- Set theta here
# Symbol results of omega singularities
[fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
# Substitute constant parameters with accurate numerical values.
B, n_0 = 5, 1e20
N2_in_omega = [w2N(sol) for sol in N2_in_omega]
# Numerical result of omega singularities
omega_singularities = \
[fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all" # display all expression in one cell instead of the last one
from sympy import symbols, solve, Eq, sqrt, pi
from sinupy.algebra.draw import draw_discontinuable_expr, add_line_with_slope
e, epsilon = symbols('e, epsilon', positive=True)
m_e, m_i = plasma.m['e'], plasma.m['i']
n_e, n_i = plasma.n['e'], plasma.n['i']
B = plasma.B_amp()
w2Bn = lambda expr: expr\
.subs(omega_ce, e * B / m_e)\
.subs(omega_ci, e * B / m_i)\
.subs(omega_pe**2, e**2 * n_e /(epsilon * m_e))\
.subs(omega_pi**2, e**2 * n_e /(epsilon * m_i))
```
### Wave Resonance
Wave resonance happens when the refractive index $N$ blows up to infinity. As we already know, $N^2$ is a function of the wave angular frequency $\omega$, of the angle $\angle(\vec{k}, \vec{B})$ between the wave vector $\vec{k}$ and the external magnetic field $\vec{B}$, and of other characteristic frequencies in the plasma, *i.e.* $\omega_{pe}$, $\omega_{ce}$ and so on. In the following blocks, we fix the angle and find the $\omega^2$ that would make $N$ blow up.
```
N2_in_omega = [
pms.kappa2omega(sol, wave, plasma) for sol in
EM.solve_N2(wave_eq, theta=pi/2)] # <-- Set theta here
N2_in_omega
resonance_omega_points = [fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
resonance_omega_square_points = [
list(set(map(lambda x:pow(x,2), branch_omega_points)))
for branch_omega_points in resonance_omega_points]
resonance_omega_square_points
# The above expressions contain $\omega_{pe}$, $\omega_{ce}$ and so on.
# We transform them into basic plasma parameters like $\vec{B}$, $n_e$ as follows.
resonance_omega_square_points = [[
w2Bn(omega_square) for omega_square in branch
] for branch in resonance_omega_square_points]
resonance_omega_square_points
cutoff_omega_square_points = [
w2Bn(omega_square) for omega_square in
[((omega_ci-omega_ce + sqrt((omega_ce+omega_ci)**2 + 4 * omega_pe**2))/2)**2,
((omega_ci-omega_ce - sqrt((omega_ce+omega_ci)**2 + 4 * omega_pe**2))/2)**2]
# [((pms.omega_ce + sqrt(pms.omega_ce**2 + 4 * pms.omega_pe**2))/2)**2,
# ((-pms.omega_ce + sqrt(pms.omega_ce**2 + 4 * pms.omega_pe**2))/2)**2, ]
]
cutoff_omega_square_points
X, Y = symbols('X, Y', real=True, negative=False)
Bn2XY = lambda expr: expr\
.subs(B**2, Y * (m_e * m_i * wave.w**2) /(e**2))\
.subs(B, sqrt(Y * (m_e * m_i)) * wave.w / e)\
.subs(n_e, X * (epsilon * m_e * wave.w**2) / e**2)
resonance_omega_square_points = [[
Bn2XY(omega_square) for omega_square in branch
] for branch in resonance_omega_square_points]
resonance_omega_square_points
cutoff_omega_square_points = [
Bn2XY(omega_square.expand()) for omega_square in cutoff_omega_square_points
]
cutoff_omega_square_points
resonance_points_as_Eq_1 = [
[omega_square.subs(wave.w, 1) for omega_square in branch]
for branch in resonance_omega_square_points]
resonance_points_as_Eq_1
cutoff_points_as_Eq_1 = [
omega_square.subs(wave.w, 1)
for omega_square in cutoff_omega_square_points]
cutoff_points_as_Eq_1
from sympy import solve, Eq
solve(Eq(resonance_points_as_Eq_1[1][0], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][1], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][2], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][3], 1), Y)
solve(Eq(cutoff_points_as_Eq_1[0], 1), X)
solve(Eq(cutoff_points_as_Eq_1[1], 1), X)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last" # display only the last expression in a cell again
import matplotlib.pyplot as plt
fig_CMA, ax_CMA = plt.subplots(figsize=(20, 30))
ax_CMA.set_xscale('log')
ax_CMA.set_yscale('log')
ax_CMA.set_xlabel('$\omega_p^2/\omega^2$', loc='right', fontdict={'size': 32})
ax_CMA.set_ylabel('$\\frac{\omega_{ce}\omega_{ci}}{\omega^2}$ ', loc='top', fontdict={'size': 32}, rotation=0)
# ax_CMA.set_xticks([1.0])
# ax_CMA.set_xticklabels(size=20)
ax_CMA.set_yticks([m_e_N/m_i_N, 1.0, m_i_N/m_e_N])
ax_CMA.set_yticklabels(['$m_e/m_i$', '$1.0$', '$m_i/m_e$'], size=20)
# change the fontsize
ax_CMA.tick_params(axis='x', labelsize=20)
ax_CMA.axhline(
y=solve(Eq(resonance_points_as_Eq_1[1][0], 1), Y)[0].subs(m_i, m_i_N).subs(m_e, m_e_N),
color='blue', linestyle=':', label='$u_L=0$, $\omega=\omega_{ce}$')
ax_CMA.axhline(
y=solve(Eq(resonance_points_as_Eq_1[1][2], 1), Y)[0].subs(m_i, m_i_N).subs(m_e, m_e_N),
color='purple', linestyle=':', label='$u_R=0$, $\omega=\omega_{ci}$')
ax_CMA.axvline(
x=1,
color='darkcyan', linestyle=':', label='$u_O=\infty$, $\omega=\omega_{pe}$')
draw_discontinuable_expr(
[sol.subs(m_i, m_i_N).subs(m_e, m_e_N)
for sol in solve(Eq(resonance_points_as_Eq_1[1][1], 1), Y)], X, # [1][3] is also okay
varlim = (1e-3, 1e7), exprlim=(1e-5, None), num=500,
var_sample_scale='log', fig=fig_CMA, ax=ax_CMA, labels=['$u_X=0$, $\omega=\omega_{UH}$', '$u_X=0$, $\omega=\omega_{LH}$']
)
draw_discontinuable_expr(
[sol.subs(m_i, m_i_N).subs(m_e, m_e_N)
for sol in solve(Eq(cutoff_points_as_Eq_1[0], 1), Y)], X,
varlim = (1e-3, 1e7), exprlim=(1e-5, None), num=500,
var_sample_scale='log', fig=fig_CMA, ax=ax_CMA, labels=['$u_R=\infty$, $\omega=\omega_{R}$', '$u_L=\infty$, $\omega=\omega_{L}$']
)
ax_CMA.legend(prop={'size': 20})
plt.close(fig_CMA)
from matplotlib.patches import Circle
from matplotlib.offsetbox import (TextArea, DrawingArea, OffsetImage,
AnnotationBbox)
from matplotlib.cbook import get_sample_data
for i, (B, n_0, omega) in enumerate(plasma_B_n_0_omega):
with get_sample_data((data_path / f"v_ph_{i}.png").absolute()) as file:
arr_img = plt.imread(file, format='png')
imagebox = OffsetImage(arr_img, zoom=0.28)
imagebox.image.axes = ax_CMA
imagebox
x_CMA = w2N((omega_pe**2 + omega_pi**2) / omega**2)
y_CMA = w2N(omega_ce * omega_ci / omega**2)
xy_CMA = (x_CMA, y_CMA)
print(xy_CMA)
ab = AnnotationBbox(imagebox, xy_CMA,
xybox=(150., -200.),
xycoords='data',
boxcoords="offset points",
pad=0.5,
arrowprops=dict(
arrowstyle="->",
connectionstyle="angle,angleA=0,angleB=90,rad=3")
)
ax_CMA.add_artist(ab)
print(f"The {i}-th phase speed polar plot.")
fig_CMA
```
### References:
- For a better color impression, the [matplotlib official color gallery](https://matplotlib.org/3.1.0/gallery/color/named_colors.html) can be referred to.
Implementation of Infinite Mixture Models using the Dirichlet Process, taken from http://blog.echen.me/2012/03/20/infinite-mixture-models-with-nonparametric-bayes-and-the-dirichlet-process/
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import seaborn as sns
sns.set(color_codes=True)
# Generate table assignments for `num_customers` customers, according to
# a Chinese Restaurant Process with dispersion parameter `alpha`.
# returns an array of integer table assignments
def chinese_restaurant_process(num_customers, alpha):
if num_customers <= 0:
return []
table_assignments = [1] # first customer sits at table 1
next_open_table = 2 # index of the next empty table
# Now generate table assignments for the rest of the customers.
for i in range(1,num_customers):
rand_p = np.random.rand()
if (rand_p < alpha*1.0/(alpha + i)):
# Customer sits at new table
table_assignments.append(next_open_table)
next_open_table += 1
else:
            # Customer sits at an existing table.
            # He chooses which table to sit at by giving equal weight to each
            # customer already sitting at a table
            rand_index = np.random.randint(0, i)  # random_integers(0, i-1) is deprecated
#print i, len(table_assignments), rand_index, rand_p, alpha*1.0/(alpha + i), table_assignments
which_table = table_assignments[rand_index]
table_assignments.append(which_table)
return table_assignments
def get_points(N=10, alpha_max=1000, runs_per_alpha=10):
points = []
for alpha in range(1,alpha_max):
#print "Alpha: ", alpha
max_groups = []
for i in range(runs_per_alpha):
distribution = chinese_restaurant_process(num_customers = N, alpha = alpha)
max_groups.append(max(distribution))
#print "Run[%s]:\t%s" %(i, distribution)
mean_groups = np.mean(max_groups)
#print "Alpha: %s, Mean: %s" % (alpha, mean_groups)
points.append([alpha, mean_groups])
return np.array(points)
plt.figure(figsize=[10,10])
for N in range(10, 500, 50):
points = get_points(N)
plt.plot(points[:,0], points[:, 1], '-', label="N={}".format(N))
plt.xlabel("Alpha")
plt.ylabel("Mean Number of groups")
plt.yscale('log')
plt.legend(loc='upper left')
plt.show()
# Poyla Urn Process
# Draw `num_balls` colored balls according to a Polya Urn Model
# with a specified base color distribution and dispersion parameter
# `alpha`.
#
# returns an array of ball colors
def polya_urn_model(base_color_distribution, num_balls, alpha):
if num_balls <= 0:
return []
balls_in_urn = []
for i in range(num_balls):
urn_size = len(balls_in_urn)
if np.random.rand() < alpha*1.0 / (alpha + urn_size):
# Draw a new color, put a ball of this color in the urn.
new_color = base_color_distribution()
balls_in_urn.append(new_color)
else:
# Draw a ball from the urn, add another ball of the same color.
            ball = balls_in_urn[np.random.randint(0, urn_size)]  # random_integers is deprecated
balls_in_urn.append(ball)
return np.array(balls_in_urn)
unit_uniform = lambda: int(np.random.rand()*100)/100.0
plt.figure(figsize=[10,10])
for i in range(3):
X = polya_urn_model(unit_uniform, num_balls = 1000, alpha = 1)
#print X
sns.kdeplot(X, shade=True);
sns.__version__
```
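A quick sanity check for the "mean number of groups" plot above: the expected number of occupied tables in a Chinese Restaurant Process has the closed form $E[K] = \sum_{i=0}^{N-1} \alpha/(\alpha+i)$, which grows roughly like $\alpha \log(N/\alpha)$ — consistent with the logarithmic y-axis used in the figure. A small sketch:

```python
def expected_tables(num_customers, alpha):
    # E[K] = sum_{i=0}^{N-1} alpha / (alpha + i)
    return sum(alpha / (alpha + i) for i in range(num_customers))

# For alpha = 1 this is the N-th harmonic number H_N
print(round(expected_tables(100, 1), 2))  # 5.19
```

Comparing this analytic value against the simulated means from `get_points` is a cheap way to validate the sampler.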
# Interpolation Activity
## Team Members
```
integrantes = {}
```
## Description
### Problem
The strength of the concrete at a construction site is given in the following
table of values.

| Time (days) | Strength (GPa) |
|:-----------:|:--------------:|
| 0 | 1 |
| 2 | 7 |
| 8 | 20 |
| 10 | 21 |

We want to estimate the minimum time that must elapse before loading the
structure.
### Description
You must:
1. Create a dictionary variable named integrantes, whose keys are the team
members' student IDs as strings and whose associated values are the
students' names, likewise as strings.
2. Create a Python function that evaluates and plots the evolution of the
strength for an arbitrary number of points (n).
3. Create a Python function that finds the time at which the concrete has
reached a given strength, using the bisection method.
4. Answer the questions given in the [Report](#Report) section.
<div class="alert alert-warning">
**Recommendations**
- See Scipy's [`lagrange()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.lagrange.html#scipy.interpolate.lagrange)
function, which returns the interpolating polynomial for a
series of points.
- See Scipy's [`bisect()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.bisect.html#scipy.optimize.bisect)
function, which finds a root of a function using
the bisection method.
</div>
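A minimal sketch combining both recommended Scipy functions on the data from the table above (the variable names here are illustrative, not part of the required solution):

```python
import numpy as np
from scipy.interpolate import lagrange
from scipy.optimize import bisect

t = np.array([0, 2, 8, 10])    # time (days)
R = np.array([1, 7, 20, 21])   # strength (GPa)
poly = lagrange(t, R)          # interpolating polynomial R(t)

# Time at which the strength reaches 12 GPa: root of R(t) - 12 on [0, 10]
# (the bracket is valid because R(0) - 12 < 0 and R(10) - 12 > 0)
t_12 = bisect(lambda x: poly(x) - 12, 0, 10)
print(round(float(poly(t_12)), 6))  # 12.0, up to the bisection tolerance
```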
## Deliverables
1. Jupyter Notebook with the following content:
    1. Code that solves the stated problem.
    2. A report describing the procedure used and references (if any were used).
## Grading
Grading is subdivided into documentation (30%) and code (70%).
The aspects to take into account are:
### Documentation
1. Content (20%): The document must describe the procedure carried out
and ...
2. Presentation (10%): The document must be organized.
### Code
1. Organization (20%): The code must be organized and have comments that
allow it to be understood.
2. Execution (20%): The Notebook must run without any kind of error.
3. Problem solution (30%): The results obtained for different test cases
are as expected.
<div class="alert alert-warning">
**Notes**
- The assignment must be completed by each group.
- The assignment must be defended orally when deemed necessary, in which
case its grade will depend entirely on that defense.
- You must attach a zip archive whose name is the name of the
assigned team.
</div>
Below are some sections that may help students
solve the activity.
## Importing libraries
```
import numpy as np
import matplotlib.pyplot as plt
```
## Interpolation function
```
# Creation of the interpolation function
```
## Creating functions
```
def graficar_resistencia(a, b, n, interp):
    """
    Plot the interpolation function (interp)
    on the interval [a, b] using n points.
    """
    return None
def calcular_tiempo(resistencia, interp):
    """
    Compute the time required for the
    strength to reach a given value
    (resistencia).
    """
    return None
```
## Report
**Questions**
- Show the plot for n=10.
- Show the plot for n=100.
- What is the time required for the strength to reach 12 GPa?
- What is the time required for the strength to reach 18 GPa?
## References
If you consulted any references, add them in this section using
APA format.
## 1. Google Play Store apps and reviews
<p>Mobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention.</p>
<p><img src="https://assets.datacamp.com/production/project_619/img/google_play_store.png" alt="Google Play logo"></p>
<p>Let's take a look at the data, which consists of two files:</p>
<ul>
<li><code>apps.csv</code>: contains all the details of the applications on Google Play. There are 13 features that describe a given app.</li>
<li><code>user_reviews.csv</code>: contains 100 reviews for each app, <a href="https://www.androidpolice.com/2019/01/21/google-play-stores-redesigned-ratings-and-reviews-section-lets-you-easily-filter-by-star-rating/">most helpful first</a>. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.</li>
</ul>
```
# Read in dataset
import pandas as pd
apps_with_duplicates = pd.read_csv("datasets/apps.csv")
# Drop duplicates from apps_with_duplicates
apps = apps_with_duplicates.drop_duplicates()
# Print the total number of apps (DataFrame.size counts cells, not rows, so use len())
print('Total number of apps (including duplicates) =', len(apps_with_duplicates))
#print('Total number of apps in the dataset = ', ...)
print('Total number of apps after dropping duplicates =', len(apps))
# Have a look at a random sample of 5 rows
print(apps.sample(5))
```
## 2. Data cleaning
<p>Data cleaning is one of the most essential subtasks of any data science project. Although it can be a very tedious process, its worth should never be underestimated.</p>
<p>By looking at a random sample of the dataset rows (from the above task), we observe that some entries in the columns like <code>Installs</code> and <code>Price</code> have a few special characters (<code>+</code> <code>,</code> <code>$</code>) due to the way the numbers have been represented. This prevents the columns from being purely numeric, making it difficult to use them in subsequent future mathematical calculations. Ideally, as their names suggest, we would want these columns to contain only digits from [0-9].</p>
<p>Hence, we now proceed to clean our data. Specifically, the special characters <code>,</code> and <code>+</code> present in <code>Installs</code> column and <code>$</code> present in <code>Price</code> column need to be removed.</p>
<p>It is also always a good practice to print a summary of your dataframe after completing data cleaning. We will use the <code>info()</code> method to achieve this.</p>
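The character-by-character loop below does the job; a vectorized alternative (shown here as a sketch on toy data, not the full dataset) is pandas' `str.replace` with a regular expression:

```python
import pandas as pd

toy = pd.DataFrame({'Installs': ['1,000+', '500+'], 'Price': ['$0.99', '0']})
for col in ['Installs', 'Price']:
    # [+,$] is a regex character class matching '+', ',' and '$'
    toy[col] = toy[col].str.replace(r'[+,$]', '', regex=True)
print(toy['Installs'].tolist())  # ['1000', '500']
print(toy['Price'].tolist())     # ['0.99', '0']
```

A single regex pass per column avoids the nested loop and is typically faster on large frames.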
```
# List of characters to remove
chars_to_remove = ["+",",","$"]
# List of column names to clean
cols_to_clean = ["Installs","Price"]
# Loop for each column in cols_to_clean
for col in cols_to_clean:
# Loop for each char in chars_to_remove
for char in chars_to_remove:
# Replace the character with an empty string
apps[col] = apps[col].apply(lambda x: x.replace(char,""))
# Print a summary of the apps dataframe
print(apps.info())
```
## 3. Correcting data types
<p>From the previous task we noticed that <code>Installs</code> and <code>Price</code> were categorized as <code>object</code> data type (and not <code>int</code> or <code>float</code>) as we would like. This is because these two columns originally had mixed input types: digits and special characters. To know more about Pandas data types, read <a href="https://datacarpentry.org/python-ecology-lesson/04-data-types-and-format/">this</a>.</p>
<p>The four features that we will be working with most frequently henceforth are <code>Installs</code>, <code>Size</code>, <code>Rating</code> and <code>Price</code>. While <code>Size</code> and <code>Rating</code> are both <code>float</code> (i.e. purely numerical data types), we still need to work on <code>Installs</code> and <code>Price</code> to make them numeric.</p>
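<p>As an aside (the cell below uses <code>astype</code>), <code>pd.to_numeric</code> with <code>errors='coerce'</code> is a more defensive alternative: entries that cannot be parsed become <code>NaN</code> instead of raising an error. A sketch on toy data:</p>

```python
import pandas as pd

s = pd.Series(["1000", "500", "Free"])  # "Free" cannot be parsed as a number

# astype("float") would raise here; coercion turns the bad value into NaN
converted = pd.to_numeric(s, errors="coerce")
print(converted.tolist())  # [1000.0, 500.0, nan]
```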
```
import numpy as np
# Convert Installs to float data type
apps.Installs = apps.Installs.astype("float")
# Convert Price to float data type
apps.Price = apps.Price.astype("float")
# Checking dtypes of the apps dataframe
print(apps.info())
```
## 4. Exploring app categories
<p>With more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories.</p>
<p>This brings us to the following questions:</p>
<ul>
<li>Which category has the highest share of (active) apps in the market? </li>
<li>Is any specific category dominating the market?</li>
<li>Which categories have the fewest number of apps?</li>
</ul>
<p>We will see that there are <code>33</code> unique app categories present in our dataset. <em>Family</em> and <em>Game</em> apps have the highest market prevalence. Interestingly, <em>Tools</em>, <em>Business</em> and <em>Medical</em> apps are also at the top.</p>
```
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
# Print the total number of unique categories
num_categories = len(set(apps.Category.unique()))
print('Number of categories = ', num_categories)
# Count the number of apps in each 'Category'
num_apps_in_category = apps.Category.value_counts()
# Sort num_apps_in_category in descending order based on the count of apps in each category
sorted_num_apps_in_category = num_apps_in_category.sort_values(ascending = False)
data = [go.Bar(
x = sorted_num_apps_in_category.index, # index = category name
y = sorted_num_apps_in_category.values, # value = count
)]
plotly.offline.iplot(data)
```
## 5. Distribution of app ratings
<p>After having witnessed the market share of each category of apps, let's see how all these apps perform on average. App ratings (on a scale of 1 to 5) impact the discoverability and conversion of apps, as well as the company's overall brand image. Ratings are a key performance indicator of an app.</p>
<p>From our research, we found that the average rating across all app categories is <code>4.17</code>. The histogram is skewed to the left, indicating that the majority of apps are highly rated, with only a few exceptions among the low-rated apps.</p>
```
# Average rating of apps
avg_app_rating = apps.Rating.mean()
print('Average app rating = ', avg_app_rating)
# Distribution of apps according to their ratings
data = [go.Histogram(
x = apps['Rating']
)]
# Vertical dashed line to indicate the average app rating
layout = {'shapes': [{
'type' :'line',
'x0': avg_app_rating,
'y0': 0,
'x1': avg_app_rating,
'y1': 1000,
'line': { 'dash': 'dashdot'}
}]
}
plotly.offline.iplot({'data': data, 'layout': layout})
```
## 6. Size and price of an app
<p>Let's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates.</p>
<p>How can we effectively come up with strategies to size and price our app?</p>
<ul>
<li>Does the size of an app affect its rating? </li>
<li>Do users really care about system-heavy apps or do they prefer light-weighted apps? </li>
<li>Does the price of an app affect its rating? </li>
<li>Do users always prefer free apps over paid apps?</li>
</ul>
<p>We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.</p>
```
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
import warnings
warnings.filterwarnings("ignore")
# Select rows where both 'Rating' and 'Size' values are present (i.e. neither value is null)
apps_with_size_and_rating_present = apps[apps['Rating'].notnull() & apps['Size'].notnull()]
# Subset for categories with at least 250 apps
large_categories = apps_with_size_and_rating_present.groupby(['Category']).filter(lambda x: len(x) >= 250)
# Plot size vs. rating
plt1 = sns.jointplot(x = large_categories["Size"], y = large_categories["Rating"])
# Select apps whose 'Type' is 'Paid'
paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present["Type"]=="Paid"]
# Plot price vs. rating
plt2 = sns.jointplot(x = paid_apps["Price"], y = paid_apps["Rating"])
```
## 7. Relation between app category and app price
<p>So now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.</p>
<p>There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.</p>
<p>Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that <em>Medical and Family</em> apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.</p>
```
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Select a few popular app categories
popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY',
'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
# Examine the price trend by plotting Price vs Category
ax = sns.stripplot(x = popular_app_cats["Price"], y = popular_app_cats["Category"], jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories')
# Apps whose Price is greater than 200
apps_above_200 = popular_app_cats[popular_app_cats.Price>200]
apps_above_200[['Category', 'App', 'Price']]
```
## 8. Filter out "junk" apps
<p>It looks like a bunch of the really expensive apps are "junk" apps. That is, apps that don't really have a purpose. Some app developers may create an app called <em>I Am Rich Premium</em> or <em>most expensive app (H)</em> just as a joke or to test their app development skills. Some developers even do this with malicious intent, hoping to make money from people accidentally clicking purchase on their app in the store.</p>
<p>Let's filter out these junk apps and re-do our visualization.</p>
```
# Select apps priced below $100
apps_under_100 = popular_app_cats[popular_app_cats.Price<100]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Examine price vs category with the authentic apps (apps_under_100)
ax = sns.stripplot(x = apps_under_100["Price"], y = apps_under_100["Category"], jitter = True, linewidth = 1)
ax.set_title('App pricing trend across categories after filtering for junk apps')
```
## 9. Popularity of paid apps vs free apps
<p>For apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. Some characteristics of free apps are:</p>
<ul>
<li>Free to download.</li>
<li>Main source of income often comes from advertisements.</li>
<li>Often created by companies that have other products and the app serves as an extension of those products.</li>
<li>Can serve as a tool for customer retention, communication, and customer service.</li>
</ul>
<p>Some characteristics of paid apps are:</p>
<ul>
<li>Users are asked to pay once for the app to download and use it.</li>
<li>The user can't really get a feel for the app before buying it.</li>
</ul>
<p>Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!</p>
```
trace0 = go.Box(
# Data for paid apps
y = apps[apps['Type'] == "Paid"]['Installs'],
name = 'Paid'
)
trace1 = go.Box(
# Data for free apps
y = apps[apps['Type'] == "Free"]['Installs'],
name = 'Free'
)
layout = go.Layout(
title = "Number of downloads of paid apps vs. free apps",
yaxis = dict(title = "Log number of downloads",
type = 'log',
autorange = True)
)
# Add trace0 and trace1 to a list for plotting
data = [trace0, trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
```
## 10. Sentiment analysis of user reviews
<p>Mining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.</p>
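<p>To make the idea concrete, here is a toy lexicon-based polarity scorer. The dataset's <code>Sentiment_Polarity</code> column was produced by a real NLP library, so this is only an illustrative sketch:</p>

```python
# Tiny hand-rolled lexicon using the example words from the text above
POSITIVE = {"amazing", "friendly", "good", "great", "love"}
NEGATIVE = {"malware", "hate", "problem", "refund", "incompetent"}

def polarity(review):
    """Return a score in [-1, 1]: +1 purely positive, -1 purely negative, 0 neutral."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(polarity("great app love it"))  # positive review
print(polarity("hate this malware"))  # negative review
```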
<p>By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than for free apps, consistent with our previous observation.</p>
<p>In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.</p>
```
# Load user_reviews.csv
reviews_df = pd.read_csv("datasets/user_reviews.csv")
# Join the two dataframes
merged_df = pd.merge(apps,reviews_df)
# Drop NA values from Sentiment and Review columns
merged_df = merged_df.dropna(subset = ['Sentiment', 'Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = merged_df.Type, y = merged_df.Sentiment_Polarity, data = merged_df)
ax.set_title('Sentiment Polarity Distribution')
```
<h1>Module Description</h1>
---
The current ipynb module implements data preparation; in particular, it provides functions that extract faces from photos. For this purpose I've used the DNN module (specifically the network based on the Single Shot MultiBox Detector, designed by [Aleksandr Rybnikov](https://github.com/arrybn)) from OpenCV. You can read the [paper](https://arxiv.org/pdf/1512.02325.pdf) which fully describes this algorithm; below, I'll try to summarize how it works.
<h2>Algorithm Summary</h2>
---
<h3>Goal</h3>
Let's consider the goal we want to attain. A photo contains a lot of redundant content: background objects (*buildings, trees, cars, etc.*) and bodies (*yes, I discard bodies, because many girls on [Tinder](https://tinder.com) post photos showing only a face, so I decided to feed the CNN with faces only*). Let's take a look at photos that are suitable for face retrieval (or at least that I hope are).
<h4>Photos Sample</h4>
Instagram: <a href="https://www.instagram.com/exx1dae">@exx1dae</a>
<div>
<img src="../img/exx1dae_1.jpg" style="width:300px; height: 300px; float: left; padding: 10px; margin-top: 14px;">
<img src="../img/exx1dae_2.jpg" style="width:300px; height: 300px; float: left; padding: 10px;">
<img src="../img/exx1dae_3.jpg" style="width:300px; height: 300px; float: left; padding: 10px;">
<img src="../img/exx1dae_4.jpg" style="width:300px; height: 300px; float: left; padding: 10px;">
<img src="../img/exx1dae_5.jpg" style="width:300px; height: 300px; float: left; padding: 10px;">
<img src="../img/exx1dae_6.jpg" style="width:300px; height: 300px; float: left; padding: 10px;">
</div>
<h3>Underlying Algorithm</h3>
As stated in the paper, the **Single Shot MultiBox Detector** uses default boxes similar to [*anchor boxes*](https://www.mathworks.com/help/vision/ug/anchor-boxes-for-object-detection.html) in MATLAB. This method differs from others in both speed and prediction accuracy. For comparison, detectors based on aggregate channel features (ACF) or histogram of oriented gradients (HOG) features slide a filter over the entire image and convolve it into a higher-level feature map. The process looks like this:

"*Because a convolutional neural network (CNN) can process an input image in a convolutional manner, a spatial location in the input can be related to a spatial location in the output. This convolutional correspondence means that a CNN can extract image features for an entire image at once. The extracted features can then be associated back to their location in that image. The use of anchor boxes replaces and drastically reduces the cost of the sliding window approach for extracting features from an image. Using anchor boxes, you can design efficient deep learning object detectors to encompass all three stages (detect, feature encode, and classify) of a sliding-window based object detector*."
In the anchor-box method, by contrast, the image is tiled with a set of predefined bounding boxes of certain heights and widths. These boxes are defined to capture the scale and aspect ratio of objects of the specific classes on which the network has been trained, so the default box sizes are based on typical object sizes in the training set. For every tiled anchor box, the network predicts the probability and other attributes, such as background, intersection over union (IoU), and offsets. The network does not directly predict bounding boxes; rather, it predicts the probabilities and refinements that correspond to the tiled anchor boxes, returning a unique set of predictions for every anchor box defined. The final feature map represents object detections for each class.
The tiling looks like this:

<h4>How Do Anchor Boxes Work?</h4>
"*The position of an anchor box is determined by mapping the location of the network output back to the input image. The process is replicated for every network output. The result produces a set of tiled anchor boxes across the entire image. Each anchor box represents a specific prediction of a class.*"
Below, two anchor boxes are used to make two predictions per location:

Each anchor box is tiled across the image. The number of network outputs equals the number of tiled anchor boxes, and the network produces predictions for all outputs. However, downsampling introduces localization error: the distance between the tiled anchor boxes is a function of the amount of downsampling present in the CNN, so when we downsample a picture we get the following problem:

To fix this, the DNN learns offsets to apply to each tiled anchor box, refining its position and size.

<h3>The Single Shot Detector</h3>
The SSD model uses the same basic principle: it adjusts boxes (by estimating their offsets) and predicts confidences for all object categories.
*At training time, we first match these default boxes to the ground truth boxes. For
example, we have matched two default boxes with the cat and one with the dog, which
are treated as positives and the rest as negatives. The model loss is a weighted sum
between localization loss (e.g. Smooth L1) and confidence loss (e.g. Softmax).*

To the base network (SSD uses the VGG-16 network as its base, but other networks should also produce good results), the following key features are added:
1. "<em><strong>Multi-scale feature maps for detection</strong> We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales</em>"
2. "<em><strong>Convolutional predictors for detection</strong> Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters.</em>"
3. "<em><strong>Default boxes and aspect ratios</strong> We associate a set of default bounding boxes with
each feature map cell, for multiple feature maps at the top of the network. The default
boxes tile the feature map in a convolutional manner, so that the position of each box
relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets
relative to the default box shapes in the cell, as well as the per-class scores that indicate
the presence of a class instance in each of those boxes. Specifically, for each box out of
k at a given location, we compute c class scores and the 4 offsets relative to the original
default box shape. This results in a total of \\((c + 4) \times k\\) filters that are applied around each
location in the feature map, yielding \\((c + 4)\times k \times m \times n\\) outputs for a \\(m \times n\\) feature map. Our default boxes are similar to the anchor boxes used in Faster R-CNN, however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several featuremaps let us efficiently discretize the space of possible output box shapes.</em>"
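The quoted formula \\((c + 4)\times k \times m \times n\\) is easy to sanity-check numerically. A small sketch (the layer sizes below are illustrative, not the exact SSD configuration):

```python
def ssd_outputs(c, k, m, n):
    """Number of outputs for an m x n feature map with k default boxes
    per cell and c class scores (+ 4 box offsets) per box."""
    filters_per_location = (c + 4) * k
    return filters_per_location * m * n

# e.g. 21 classes (VOC + background), 6 default boxes, a 38x38 feature map
print(ssd_outputs(c=21, k=6, m=38, n=38))  # 216600
```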
**YOLO and SSD comparison**

"*Our SSD model adds several feature layers to the end of a base network, which predict
the offsets to default boxes of different scales and aspect ratios and their associated
confidences. SSD with a 300 × 300 input size significantly outperforms its 448 × 448
YOLO counterpart in accuracy on VOC2007 test while also improving the speed.*"
<h3>Links</h3>
1. **This was a short summary of the SSD algorithm. You can read the full description by following the [link](https://arxiv.org/pdf/1512.02325.pdf)**
2. **You can also see the results of face detection (extraction) on the photos shown above by following this [link](http://localhost:8888/notebooks/tinder_bot/application/data_preparing.ipynb#Examples-from-explanations-above)**
<h2>Implementation</h2>
---
```
%matplotlib inline
import cv2
import sys
import numpy as np
import os
import inspect
from skimage import io
from scipy import misc
import matplotlib.pyplot as plt
from skimage.transform import resize
from IPython.display import clear_output
import pandas as pd
img_size = 256 # out image's size
faces_in_image_limit = 1 # number of people in image. We want images of single people.
def extract_faces(img):
"""This function extracts a face from a photo.
:param img: the image from which we want to extract a face.
:return: np.array of an extracted face and confidence that it is a human face.
"""
model_file = "utils/opencv_face_detector_uint8.pb"
config_file = "utils/opencv_face_detector.pbtxt"
# This network is available for both Caffe and TensorFlow; I use the TensorFlow version
net = cv2.dnn.readNetFromTensorflow(model_file, config_file)
# Returning results
image_data_fin = []
confidence_res = None
h, w = img.shape[:2]
# https://www.pyimagesearch.com/2017/11/06/deep-learning-opencvs-blobfromimage-works/ blob description
# First, we resize the image to 300x300 to match the pretrained weights
# Second, the scale factor (a multiplier applied after mean subtraction); I don't rescale, so it is 1.0
# Third, the per-channel mean values to subtract
# Fourth, whether to swap the first and last channels of a 3-channel image
# Fifth, whether the image will be cropped after resizing
blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)), 1.0, (300, 300), [104, 117, 123], False, False)
# pass the blob through the network and obtain the detections and predictions
net.setInput(blob)
detections = net.forward()
# loop over the detections
for i in range(detections.shape[2]):
# extract the confidence (i.e., probability) associated with the prediction
# https://docs.opencv.org/trunk/d3/d63/classcv_1_1Mat.html
confidence = detections[0, 0, i, 2]
# If the confidence is higher than 50%
if confidence > 0.5:
# compute the (x, y)-coordinates of the bounding box for the object
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
(x, y, x1, y1) = box.astype("int")
# crop the image so that everything except the face is cut off
roi_color = img[y:y1, x:x1]
im = resize(roi_color, (img_size, img_size))
image_data_fin.append(im)
confidence_res = confidence
# If exactly one face is on the photo, return it (as np.array) together with the confidence that it is a human face.
if len(image_data_fin) != faces_in_image_limit:
return [], None
else:
return image_data_fin, confidence_res
def print_progress(total, current, image, like_type, missing_imgs):
"""This function prints progress while files are being handled.
:param total: total number of files
:param current: current number of handled files
:param image: an image's name
:param like_type: the folder from which we are handling files
:param missing_imgs: number of files that were skipped; needed to compute the percentage properly.
"""
def progressBar(current, total, missing_imgs, barLength = 20):
"""Represents a progress bar, like [---> ] 50%
:param total: total number of files
:param current: current number of handled files
:param missing_imgs: number of files that were skipped; needed to compute the percentage properly.
:param barLength: keeps the bar at a constant length (default 20 characters)
"""
percent = float(current) * 100 / (total - missing_imgs)
arrow = '-' * int(percent/100 * barLength) + '>'
spaces = ' ' * (barLength - len(arrow))
sys.stdout.write('\rProgress: [%s%s] %d %%\n' % (arrow, spaces, percent + 1))
sys.stdout.write('\r%d of %d %s files have been handled\n' % (current, total, like_type))
sys.stdout.write('\rImage: %s\n' % image)
progressBar(current, total, missing_imgs)
sys.stdout.flush()
def count_files(path):
"""Count the number of files in a folder, skipping hidden files (like '.filename').
:param path: list of file names in the folder.
:return: the number of visible files.
"""
return len([name for name in path if not name.startswith(".")])
# For each image, we want to know if each picture is attractive or unattractive
# list of images translated into np-array
images = []
# labels to each image
labels = []
def handle_images(name=''):
"""The function processes all photos and prepares them for training.
:param name: the user-name prefix of the sample folder (e.g. name1_like)
"""
# The directory where this file is placed
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
# Path to the folder with all samples folder
data_path = os.path.dirname(currentdir) + '\\samples'
name = name + '_' if name != '' else ''
# List of files in like/dislike directory
dislikes_images_stack = os.listdir(os.path.join(data_path, name + 'dislike'))
likes_images_stack = os.listdir(os.path.join(data_path, name + 'like'))
def process_folder(images_stack, like_type, name=''):
"""The function processes a folder, handling images and labeling them.
:param images_stack: a list of images
:param like_type: the type of folder which is processing.
:param name: the name beside the like-type in folder name.
:return: confidence-list (confidence that each passed image is a human face) , number of missed images,
files processed, total number of images
"""
number_of_images = count_files(images_stack)
files_processed = 0
confidence_list = []
number_of_missing_images = 0
for img in images_stack:
if not img.startswith('.'):
# Print progress
clear_output(wait=True)
print_progress(number_of_images, files_processed, img, like_type, number_of_missing_images)
try:
# obtain a face
faces, confidence = extract_faces(cv2.imread(os.path.join(data_path, os.path.join(name + like_type, img))))
except Exception as e:
raise e
# Check that exactly one face has been retrieved
if len(faces) == 1:
confidence_list.append(confidence)
elif len(faces) == 0:
number_of_missing_images += 1
# Labeling
for face in faces:
images.append(face)
if like_type == 'like':
labels.append(1)
else:
labels.append(0)
files_processed += 1
return confidence_list, number_of_missing_images, files_processed, number_of_images
# Gather information about the processed files (along with processing)
conf_list, NoMI, proc_files, NoI = process_folder(dislikes_images_stack, 'dislike', name)
conf_list2, NoMI2, proc_files2, NoI2 = process_folder(likes_images_stack, 'like', name)
conf_list.extend(conf_list2)
conf_list = np.array(conf_list)
NoMI += NoMI2
NoI += NoI2
return {'face_convincing': pd.DataFrame([['{:.2f} %'.format(np.mean(conf_list) * 100)],
['{:.2f} %'.format(np.amax(conf_list) * 100)],
['{:.2f} %'.format(np.amin(conf_list) * 100)],
['{:.2f} %'.format(np.std(conf_list) * 100)]],
index=['mean', 'max', 'min', 'std'], columns=['percents']),
'images': pd.DataFrame([[NoI], [NoMI], ['{:.2f} %'.format((NoI - NoMI) / NoI * 100)],
[proc_files2], [proc_files]],
index=['total amount', 'missed amount', 'handled ratio', 'handled likes', 'handled dislikes'], columns=['data'])}
recap = handle_images('milka')
images = np.array(images)
labels = np.array(labels)
recap['images']
images.shape
labels.shape
def save_file(data, file_path_name):
"""Saves our data (images or labels) as a numpy file.
:param data: the data we want to save
:param file_path_name: path to the file where we want to store the data
"""
print("Saving {}.npy".format(file_path_name))
np.save(file_path_name, data)
save_file(images, "processed_val_images")
save_file(labels, "processed_val_labels")
```
<h2>Examples from explanations above</h2>
```
import matplotlib.pyplot as plt
img_list = []
for i in range(1, 7):
im, _ = extract_faces(cv2.imread('../img/exx1dae_{}.jpg'.format(i)))
img_list.append(np.array(im[0]))
fig, axes = plt.subplots(2, 3, figsize=(16, 12))
axes[0,0].imshow(img_list[0])
axes[0,1].imshow(img_list[1])
axes[0,2].imshow(img_list[2])
axes[1,0].imshow(img_list[3])
axes[1,1].imshow(img_list[4])
axes[1,2].imshow(img_list[5])
```
```
%autosave 0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt  # used later for the threshold plots
import seaborn as sns  # used later for plot styling
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import sys
sys.path.append('..')
##Custom Lib
import lib
from lib.data_clean import DataClean
from lib.classifier_trainer import ClassifierJob, MainAiJob
from lib.plot import roc
##
# Import ML model packages
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn import metrics
import xgboost as xgb
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import svm
```
# Introduction
Objective:
- Find the best model. In this notebook I will iterate over and test combinations, evaluate the results, and determine the best model, which in the next step will be frozen into a training pipeline and exported for use.
- Test with as few variables as possible, so that more samples can be collected and the dataset can grow.
```
init_df = pd.read_excel("../data/Données projet LAP Cas-témoin 02-10-20.xlsx")
df = init_df
## Clean the data
cleandata = DataClean(df)
df_clean = cleandata.clean_job()
pd.set_option('display.max_columns', df_clean.shape[1])
df_clean.head(5)
## I set the test-set size as a fraction: 0.3 for a large dataset, otherwise 0.2
test_size=0.2
## I set a seed so that my experiment is reproducible
## and my models are compared in the same context
random_state= 69
## I create a dictionary of the models I want to test
dict_models ={
"Random Forest ":RandomForestClassifier(),
"Gradient Boosting Classifier": GradientBoostingClassifier(),
"AdaBoost Classifier": AdaBoostClassifier(),
"XgBoost": xgb.XGBClassifier(),
"Decision Tree": DecisionTreeClassifier(),
"LR": LogisticRegression(max_iter=1000),
"SVM":svm.SVC(kernel='poly'),
}
drop_col = ['Sex_M_F_(0_1)', 'Hematies_(T_L)','Mono_(G_L)',
'Hemoglobine_(g_L)', 'Hematocrite_(%)', 'TCMH_(pg)','IDR-CV_(%)', 'PNN_(%)',
'PNE_(G_L)', 'PNB_(%)', 'PNB_(G_L)', 'Lymphos_(G_L)',
'Mono_(%)', 'Blastes_(%)', 'Plq_(G_L)', 'VPM_(fL)',
'Ret_(G_L)', 'Ratio_TCA_',
'PDF_(ug_mL)_', 'LDH_(U_L)', 'Calcium_(Ca2+)_(mmol_L)',
'Phosphore_(mmol_L)', 'Uree_(mmol_L)', 'Creatinine',
'Acide_urique_(umol_L)', 'Ferritine_(ug_L)', 'CRP_(mg_L)']
## I loop over my dictionary and call the class, then wait and analyse the results
for name, model in dict_models.items():
print("")
print("********* ",name," *********")
print("")
AI = MainAiJob(model, df_clean, target_name='target', catagorical_features=False,
test_size=test_size, random_state=random_state, learning_curve_mod=True, normalize=False,
list_col_name_drop=drop_col,show_explainers=True)
# model = AI.core_job()
```
# Model improvement
```
model_win = xgb.XGBClassifier()
## I redo the modelling, but with the first method (ClassifierJob)
cl3 = ClassifierJob(df_clean, model_win)
x_full_v3, y_full_v3 = cl3.split_features_target(list_col_name_drop=drop_col) ## add here the drop job
model_fited_v2 = cl3.fit_and_eval(test_size,random_state, learning_curve_mod=True, normalize=True) ## Now normalize==true
```
No gain from normalization, and since we lose the SHAP explainer functionality, I am not keeping the idea.
```
from sklearn.preprocessing import StandardScaler  # needed below when normalize=True
def Best_opti(x_full, y_full, model, test_size=0.2, random_state=69, normalize=False):
""" For binary classification cases.
Split the data, fit the model, and return the best parameters.
params :
x_full = features dataframe
y_full = target dataframe
model = the model observed (not yet fitted)
test_size = the size of the split, 0.2 by default, type float
random_state = 69 by default, makes the process reproducible, type int
normalize = apply StandardScaler on the dataset
return : the best hyperparameters for the model
"""
seed = random_state
np.random.seed(seed)
##Split data into train/test; stratify keeps the same class proportions in both splits.
X_train, X_test, y_train, y_test = train_test_split(x_full, y_full, test_size=test_size,
random_state=seed, stratify=y_full)
######## Insert normalize job #######
if normalize:
scale = StandardScaler()
X_train = scale.fit_transform(X_train)
X_test = scale.transform(X_test)
param_grid = [{'n_estimators': [150,200,300,400],'max_depth': [2,3,4,5]}]
grid_search = GridSearchCV(model,param_grid, cv=5, scoring ='f1', return_train_score=True)
grid_search.fit(X_train,y_train)
return grid_search.best_params_
Best_opti(x_full_v3, y_full_v3,model_win)
```
I rerun the training and evaluation, but with the parameters returned by Best_opti
```
model_win_opti = xgb.XGBClassifier(n_estimators=300, max_depth=2)
## Old method for training
cl4 = ClassifierJob(df_clean,model_win_opti)
x_full_v4, y_full_v4 = cl4.split_features_target(list_col_name_drop=drop_col)
model_fited_opti = cl4.fit_and_eval(test_size,random_state, learning_curve_mod=True, normalize=False)
```
No improvement with the grid search; after several tests I keep the model_win model.
```
model_win
```
# Searching for the optimal confidence threshold
- correct predictions:
    - TP (True Positive): the doctor tells you that you are sick, and you really are sick.
    - TN (True Negative): the doctor tells you that you are not sick, and indeed you are not sick.
- incorrect predictions:
    - FP (False Positive): the doctor tells you that you are sick, but you are not sick. ** the lesser evil
    - FN (False Negative): the doctor tells you that you are not sick, but you are sick. ** the worst case
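Since FN (a missed sick patient) is the costly error here, lowering the decision threshold trades false positives for false negatives. A stdlib sketch with made-up scores:

```python
# Made-up (predicted_probability, true_label) pairs for illustration
preds = [(0.9, 1), (0.8, 0), (0.6, 1), (0.4, 1), (0.3, 0), (0.1, 0)]

def rates(threshold):
    """Count TP, FP, FN, TN at a given decision threshold."""
    tp = sum(p >= threshold and y == 1 for p, y in preds)
    fp = sum(p >= threshold and y == 0 for p, y in preds)
    fn = sum(p < threshold and y == 1 for p, y in preds)
    tn = sum(p < threshold and y == 0 for p, y in preds)
    return tp, fp, fn, tn

print(rates(0.5))  # → (2, 1, 1, 2)
print(rates(0.2))  # → (3, 2, 0, 1): no missed patients, at the cost of one extra FP
```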
```
result_xgb, aucs = roc.roc_curve_cv(xgb.XGBClassifier(), x_full_v3, y_full_v3, n_splits=5, n_repeats=5)
print(f"AUC: {np.mean(aucs)} (std:{np.std(aucs)})")
result_rf, aucs = roc.roc_curve_cv(RandomForestClassifier(), x_full_v3, y_full_v3, n_splits=5, n_repeats=5)
print(f"AUC: {np.mean(aucs)} (std:{np.std(aucs)})")
roc.plot_specificity_cv({'XGB': result_xgb, 'RF':result_rf}, invert_x=True, invert_y=False)
plt.show()
# for i in x_full_v3:
# sns.displot(x_full_v3[f'{i}'])
cl3.optimize_model(minimize=True)
fig, ax = plt.subplots(figsize=(10,8))
sns.set(style="whitegrid")
roc_fig = roc.plot_roc_threshold_cv(result_xgb,
tpr=False,
fpr=True,
tnr=False,
fnr=True,)
roc_fig.set_yticks(np.arange(0,1.05,0.05));
roc_fig.set_xticks(np.arange(0,1.05,0.1));
roc_fig.set(xlim=(0,1),ylim=(0,1))
```
# Model calibration
```
model_win_calibrated = cl3.Calibrate_fited_model(n_bins=5)
```
Calibration is not effective; we do not have enough data.
# --End--
<a href="https://colab.research.google.com/github/YIKUAN8/Transformers-VQA/blob/master/openI_VQA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**In this notebook, we will classify 15 thoracic findings from chest X-ray images and associated reports. This can be considered a VQA task. We will fine-tune 3 pre-trained transformer-based V+L models. After running through this notebook, you will be able to fine-tune these models on your customized dataset.**
####**0.1 clone our repo and install dependencies!**
```
!git clone https://github.com/YIKUAN8/Transformers-VQA.git
%cd Transformers-VQA/
!pip install -r requirements.txt
```
**Change the 79th line of param.py from**
```
args = parser.parse_args()
```
to
```
args = parser.parse_args([])
```
This will enable us to use *argparse* in a Jupyter notebook!
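Why the empty list matters: with no argument, `parse_args()` reads `sys.argv`, which in a notebook contains the kernel's own flags and makes parsing fail. A minimal illustration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)

# parse_args([]) ignores sys.argv (the kernel's flags) and uses only the defaults
args = parser.parse_args([])
print(args.batch_size)  # → 32
```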
####**0.2 Download pre-trained models and place them in models/pretrained/. You could choose from [VisualBERT](https://github.com/uclanlp/visualbert), [LXMERT](https://github.com/airsplay/lxmert), [UNITER](https://github.com/ChenRocks/UNITER).**
```
#line 1: UNITER; line 2: LXMERT; line 3: VisualBERT. Comment out the lines for the models you don't want to use
#if the pre-trained VisualBERT cannot be downloaded successfully, rerun one more time or refer to this link: https://drive.google.com/file/d/1kuPr187zWxSJbtCbVW87XzInXltM-i9Y/view?usp=sharing
!wget https://convaisharables.blob.core.windows.net/uniter/pretrained/uniter-base.pt -P models/pretrained/
!wget --no-check-certificate https://nlp1.cs.unc.edu/data/model_LXRT.pth -P models/pretrained/
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1kuPr187zWxSJbtCbVW87XzInXltM-i9Y' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1kuPr187zWxSJbtCbVW87XzInXltM-i9Y" -O models/pretrained/visualbert.th && rm -rf /tmp/cookies.txt
```
####**0.3 Download OpenI dataset.**
A detailed description of this dataset can be found [here](https://openi.nlm.nih.gov/). In summary, there are 3684 CXR image-report pairs in this dataset. Each pair has an annotation of 15 thoracic findings from MeSH terms. We convert the raw data to a dataframe with better visibility. It can be accessed with the following command or this [link](https://drive.google.com/file/d/1i3wcfXJbH_4q3rS2rvLxtzbMiO-KuZCG/view?usp=sharing).
```
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1i3wcfXJbH_4q3rS2rvLxtzbMiO-KuZCG' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1i3wcfXJbH_4q3rS2rvLxtzbMiO-KuZCG" -O data/openIdf.csv && rm -rf /tmp/cookies.txt
```
***0.3.1 Have a glance at this dataframe: column 'TXT' is the radiology report; columns 'split' and 'id' are self-explanatory; all other columns are the 15 findings. Our task will be a 15-label binary classification with visual and semantic input.***
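In a 15-label setup each finding gets its own independent sigmoid score, and a finding is predicted present when its score crosses 0.5; unlike softmax, several findings can be positive at once. A stdlib sketch with made-up logits for three findings:

```python
import math

logits = [2.0, -1.5, 0.0]  # made-up scores for three of the 15 findings
probs = [1 / (1 + math.exp(-z)) for z in logits]  # independent sigmoids
preds = [p >= 0.5 for p in probs]                 # per-finding decision
print(preds)  # → [True, False, True]
```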
```
import pandas as pd
openI = pd.read_csv('data/openIdf.csv',index_col=0)
openI.head()
```
####**0.4 Download the visual features extracted by BUTD. 36 2048-dimensional visual features are extracted from each CXR image. We use this [implementation](https://github.com/airsplay/py-bottom-up-attention). This step will take a while (~1min). To save downloading time, you can also make a copy of this [shareable link](https://drive.google.com/file/d/1BFw0jc0j-ffT2PhI4CZeP3IJFZg3GxlZ/view?usp=sharing) to your own Google Drive and mount your Colab to it.**
*If you are interested in the original CXR images, which are unnecessary for our project, you can access them [here](https://drive.google.com/drive/folders/1s5A0CFB6-2N5ThbuorUK1t-bUEKmZnjz?usp=sharing).*
```
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1BFw0jc0j-ffT2PhI4CZeP3IJFZg3GxlZ' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1BFw0jc0j-ffT2PhI4CZeP3IJFZg3GxlZ" -O data/openI_v_features.pickle && rm -rf /tmp/cookies.txt
```
***0.4.1 Load visual features***
```
import pickle
openI_v_f = pickle.load( open( "/content/Transformers-VQA/data/openI_v_features.pickle", "rb" ) )
assert set(list(openI_v_f.keys())) == set(openI.id.values), "Visual Features are inconsistent with openI dataset"
feature_example, bbox_example, (img_w_example, img_h_example) = openI_v_f[openI.id.iloc[0]]
feature_example.shape, bbox_example.shape, (img_w_example, img_h_example)
```
####**Now we have downloaded all the data, models, and dependencies. We are good to go!!!**
**1. Change default arguments**
First, let's check it out!
```
from param import args
args.__dict__
```
***1.1 Let's overwrite some arguments***
```
args.batch_size = 18
args.epochs = 2
args.model = 'visualbert' # use visualbert
args.load_pretrained = '/content/Transformers-VQA/models/pretrained/visualbert.th' #load pretrained visualbert model
args.max_seq_length = 128 #truncate or pad report lengths to 128 subwords
```
####**2. Create customized dataloader**
```
findings = list(openI.columns[1:-2])
findings
from torch.utils.data import Dataset
from torch.utils.data.dataloader import DataLoader
import numpy as np
class OpenIDataset(Dataset):
def __init__(self, df, vf, split, model = 'lxmert'):
# train_test_split and prepare labels
self.dataset = df[df['split'] == split]
self.visual_features = vf
self.id_list = self.dataset.id.tolist()
self.report_list = self.dataset.TXT.tolist()
self.findings_list = self.dataset.columns[1:-2]
self.target_list = self.dataset[self.findings_list].to_numpy().astype(np.float32)
self.model = model
def __len__(self):
return len(self.id_list)
def __getitem__(self, item):
cxr_id = self.id_list[item]
target = self.target_list[item]
boxes, feats, (img_w, img_h) = self.visual_features[cxr_id]
report = self.report_list[item]
if self.model == 'uniter':
boxes = self._uniterBoxes(boxes)
if self.model == 'lxmert':
boxes[:, (0, 2)] /= img_w
boxes[:, (1, 3)] /= img_h
return cxr_id, feats, boxes, report, target
    def _uniterBoxes(self, boxes):  # UNITER expects a 7-dimension box instead of the regular 4-d bbox
        new_boxes = np.zeros((boxes.shape[0], 7), dtype='float32')
new_boxes[:,1] = boxes[:,0]
new_boxes[:,0] = boxes[:,1]
new_boxes[:,3] = boxes[:,2]
new_boxes[:,2] = boxes[:,3]
new_boxes[:,4] = new_boxes[:,3]-new_boxes[:,1] #w
new_boxes[:,5] = new_boxes[:,2]-new_boxes[:,0] #h
new_boxes[:,6]=new_boxes[:,4]*new_boxes[:,5] #area
return new_boxes
training = OpenIDataset(df = openI, vf = openI_v_f, split='train', model = args.model)
testing = OpenIDataset(df = openI, vf = openI_v_f, split='test', model = args.model)
train_loader = DataLoader(training, batch_size=args.batch_size,shuffle=True, num_workers=0,drop_last=True, pin_memory=True)
test_loader = DataLoader(testing, batch_size=128,shuffle=False, num_workers=0,drop_last=False, pin_memory=True)
```
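As a sanity check on the LXMERT branch of `__getitem__` above, dividing x-coordinates by the image width and y-coordinates by the height should map a full-image box to `[0, 0, 1, 1]`:

```python
import numpy as np

# Two hypothetical boxes in (x0, y0, x1, y1) pixel coordinates
boxes = np.array([[10.0, 20.0, 110.0, 220.0], [0.0, 0.0, 200.0, 400.0]])
img_w, img_h = 200, 400

norm = boxes.copy()
norm[:, (0, 2)] /= img_w  # normalize x-coordinates
norm[:, (1, 3)] /= img_h  # normalize y-coordinates
print(norm[1])  # → [0. 0. 1. 1.]
```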
####**3. Model, Optimizer, Loss Function, and Evaluation Function**
```
from vqa_model import VQAModel
#init model
model = VQAModel(num_answers = len(findings), model = args.model)
#load pretrained weights
model.encoder.load(args.load_pretrained)
#send to GPU
model = model.cuda()
import torch
loss = torch.nn.BCEWithLogitsLoss()
from src.optimization import BertAdam
optim = BertAdam(list(model.parameters()),lr=args.lr,warmup=0.1,t_total=len(train_loader)*args.epochs)
# t_total denotes total training steps
# batch_per_epoch = len(train_loader)
# t_total = int(batch_per_epoch * args.epochs)
# Evaluation function: we will report the accuracy of each finding
def eval(target, pred):
acc_list = []
for i, d in enumerate(findings[:-1]): #normal is excluded
acc = np.mean(target[:,i] == (pred[:,i]>=0.5))
print(i,d,acc)
acc_list.append(acc)
print('Averaged: '+str(np.average(acc_list)))
sgmd = torch.nn.Sigmoid()
```
####**4. HIT and RUN**
```
from tqdm.notebook import tqdm
iter_wrapper = (lambda x: tqdm(x, total=len(train_loader))) if args.tqdm else (lambda x: x)
best_valid = 0
for epoch in range(args.epochs):
epoch_loss = 0
for i, (cxr_id, feats, boxes, report, target) in iter_wrapper(enumerate(train_loader)):
model.train()
optim.zero_grad()
feats, boxes, target = feats.cuda(), boxes.cuda(), target.cuda()
logit = model(feats, boxes, report)
running_loss = loss(logit, target)
running_loss = running_loss * logit.size(1)
epoch_loss += running_loss
running_loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 5.)
optim.step()
print("Epoch "+str(epoch)+": Training Loss: "+str(epoch_loss/len(train_loader)))
print('Evaluation: ')
model.eval()
logit_list, target_list = [], []
iter_wrapper = (lambda x: tqdm(x, total=len(test_loader)))
for i, (cxr_id, feats, boxes, report, target) in iter_wrapper(enumerate(test_loader)):
target_list.append(target)
with torch.no_grad():
feats, boxes = feats.cuda(), boxes.cuda()
logit = model(feats, boxes, report)
logit_list.append(sgmd(logit).cpu().numpy())
eval(np.concatenate(target_list,axis = 0), np.concatenate(logit_list,axis = 0))
```
# Detecting COVID-19 with Chest X Ray using PyTorch
Image classification of Chest X Rays in one of three classes: Normal, Viral Pneumonia, COVID-19
Dataset from [COVID-19 Radiography Dataset](https://www.kaggle.com/tawsifurrahman/covid19-radiography-database) on Kaggle
# Importing Libraries
```
from google.colab import drive
drive.mount('/gdrive')
%matplotlib inline
import os
import shutil
import copy
import random
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import seaborn as sns
import time
from sklearn.metrics import confusion_matrix
from PIL import Image
import matplotlib.pyplot as plt
torch.manual_seed(0)
print('Using PyTorch version', torch.__version__)
```
# Preparing Training and Test Sets
```
class_names = ['Non-Covid', 'Covid']
root_dir = '/gdrive/My Drive/Research_Documents_completed/Data/Data/'
source_dirs = ['non', 'covid']
```
# Creating Custom Dataset
```
class ChestXRayDataset(torch.utils.data.Dataset):
def __init__(self, image_dirs, transform):
def get_images(class_name):
images = [x for x in os.listdir(image_dirs[class_name]) if x.lower().endswith('png') or x.lower().endswith('jpg')]
print(f'Found {len(images)} {class_name} examples')
return images
self.images = {}
self.class_names = ['Non-Covid', 'Covid']
for class_name in self.class_names:
self.images[class_name] = get_images(class_name)
self.image_dirs = image_dirs
self.transform = transform
def __len__(self):
return sum([len(self.images[class_name]) for class_name in self.class_names])
def __getitem__(self, index):
class_name = random.choice(self.class_names)
index = index % len(self.images[class_name])
image_name = self.images[class_name][index]
image_path = os.path.join(self.image_dirs[class_name], image_name)
image = Image.open(image_path).convert('RGB')
return self.transform(image), self.class_names.index(class_name)
```
# Image Transformations
```
train_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
test_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
```
# Prepare DataLoader
```
train_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/covid/'
}
#train_dirs = {
# 'Non-Covid': '/gdrive/My Drive/Data/Data/non/',
# 'Covid': '/gdrive/My Drive/Data/Data/covid/'
#}
train_dataset = ChestXRayDataset(train_dirs, train_transform)
test_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/covid/'
}
test_dataset = ChestXRayDataset(test_dirs, test_transform)
batch_size = 25
dl_train = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dl_test = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
print(dl_train)
print('Number of training batches', len(dl_train))
print('Number of test batches', len(dl_test))
```
# Data Visualization
```
class_names = train_dataset.class_names
def show_images(images, labels, preds):
plt.figure(figsize=(30, 20))
for i, image in enumerate(images):
plt.subplot(1, 25, i + 1, xticks=[], yticks=[])
image = image.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = image * std + mean
image = np.clip(image, 0., 1.)
plt.imshow(image)
col = 'green'
if preds[i] != labels[i]:
col = 'red'
plt.xlabel(f'{class_names[int(labels[i].numpy())]}')
plt.ylabel(f'{class_names[int(preds[i].numpy())]}', color=col)
plt.tight_layout()
plt.show()
images, labels = next(iter(dl_train))
show_images(images, labels, labels)
images, labels = next(iter(dl_test))
show_images(images, labels, labels)
```
# Creating the Model
```
model = torchvision.models.mnasnet1_0(pretrained=True)
print(model)
model.classifier[1] = torch.nn.Linear(in_features=1280, out_features=2, bias=True)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)
def show_preds():
model.eval()
images, labels = next(iter(dl_test))
outputs = model(images)
_, preds = torch.max(outputs, 1)
show_images(images, labels, preds)
show_preds()
```
# Training the Model
```
def train(epochs):
best_model_wts = copy.deepcopy(model.state_dict())
b_acc = 0.0
t_loss = []
t_acc = []
avg_t_loss=[]
avg_t_acc=[]
v_loss = []
v_acc=[]
avg_v_loss = []
avg_v_acc = []
ep = []
print('Starting training..')
for e in range(0, epochs):
ep.append(e+1)
print('='*20)
print(f'Starting epoch {e + 1}/{epochs}')
print('='*20)
train_loss = 0.
val_loss = 0.
train_accuracy = 0
total_train = 0
correct_train = 0
model.train() # set model to training phase
for train_step, (images, labels) in enumerate(dl_train):
optimizer.zero_grad()
outputs = model(images)
_, pred = torch.max(outputs, 1)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
            train_loss += loss.item()
            avg_train_loss = train_loss / (train_step + 1)  # running average; don't re-divide the sum in place
            _, predicted = torch.max(outputs, 1)
            total_train += labels.nelement()
            correct_train += sum((predicted == labels).numpy())
            train_accuracy = correct_train / total_train
            t_loss.append(avg_train_loss)
            t_acc.append(train_accuracy)
            if train_step % 20 == 0:
                print('Evaluating at step', train_step)
                print(f'Training Loss: {avg_train_loss:.4f}, Training Accuracy: {train_accuracy:.4f}')
accuracy = 0.
model.eval() # set model to eval phase
for val_step, (images, labels) in enumerate(dl_test):
outputs = model(images)
loss = loss_fn(outputs, labels)
val_loss += loss.item()
_, preds = torch.max(outputs, 1)
accuracy += sum((preds == labels).numpy())
val_loss /= (val_step + 1)
accuracy = accuracy/len(test_dataset)
print(f'Validation Loss: {val_loss:.4f}, Validation Accuracy: {accuracy:.4f}')
v_loss.append(val_loss)
v_acc.append(accuracy)
show_preds()
model.train()
if accuracy > b_acc:
b_acc = accuracy
avg_t_loss.append(sum(t_loss)/len(t_loss))
avg_v_loss.append(sum(v_loss)/len(v_loss))
avg_t_acc.append(sum(t_acc)/len(t_acc))
avg_v_acc.append(sum(v_acc)/len(v_acc))
best_model_wts = copy.deepcopy(model.state_dict())
    print('Best validation Accuracy: {:.4f}'.format(b_acc))
print('Training complete..')
plt.plot(ep, avg_t_loss, 'g', label='Training loss')
plt.plot(ep, avg_v_loss, 'b', label='validation loss')
plt.title('Training and Validation loss for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/MnasNet/mnasnet_loss.png')
plt.show()
plt.plot(ep, avg_t_acc, 'g', label='Training accuracy')
plt.plot(ep, avg_v_acc, 'b', label='validation accuracy')
plt.title('Training and Validation Accuracy for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/MnasNet/mnasnet_accuarcy.png')
plt.show()
torch.save(model.state_dict(),'/gdrive/My Drive/Research_Documents_completed/MnasNet/mnasnet.pt')
%%time
train(epochs=5)
```
# Final Results
- Validation loss and training loss vs. epoch
- Validation accuracy and training accuracy vs. epoch
- Best validation accuracy
```
show_preds()
```
# Building Rollup hierarchies in python with Treelib and atoti
This notebook is illustrating how to create a product catalog inside a BI application using Treelib and atoti. Full story is available on this link:
https://medium.com/atoti/building-rollup-hierarchies-in-python-with-treelib-and-atoti-ffc61fbac69c?source=friends_link&sk=0b8b36c30a588af4ac0fc7a6f38d2a6f
<div style="text-align:center"><a href="https://www.atoti.io/?utm_source=gallery&utm_content=rollup-hierarchies" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover.png" alt="atoti table" /></a></div>
# Credits for the sample data
The sample data for this project was sourced from this edu course [Data analytics with R](https://stepik.org/course/724/promo). The data has been transformed. I'm hosting my version of the data on s3:
```
# !conda install -c conda-forge python-wget -y
# !pip install treelib atoti[aws]
import zipfile
import pandas as pd
import wget
from IPython.display import clear_output, display
from treelib import Node, Tree
def bar_custom(current, total, width=80):
clear_output(wait=True)
print("Downloading: %d%% [%d / %d] bytes" % (current / total * 100, current, total))
url = "http://data.atoti.io/notebooks/rollup-hierarchy/rollup-hierarchies.zip"
filename = wget.download(url, bar=bar_custom)
# unzipping the file
with zipfile.ZipFile("rollup-hierarchies.zip", "r") as zip_ref:
zip_ref.extractall()
```
# Reading parent-child product catalog description
```
categories_df = pd.read_csv("categories.csv")
categories_df.head()
# creating a dict to lookup a name for an id
cat_dict = dict(zip(categories_df.category_id, categories_df.name))
```
# Populating a tree in Treelib
```
tree = Tree()
tree.create_node("Product Catalogue", 0)
# Creating nodes under root
for i, c in categories_df.iterrows():
tree.create_node(c["name"], c["category_id"], parent=0)
# Moving nodes to reflect the parent-child relationship
for i, c in categories_df.iterrows():
if c["parent_id"] == c["parent_id"]:
tree.move_node(c["category_id"], c["parent_id"])
```
# paths_to_leaves
Having created a Tree using Treelib, it's trivial to obtain all paths to leaves:
```
tree.paths_to_leaves()[:5]
```
The following will show that the tree is unbalanced:
```
print(
"Min depth is {}, max depth is {}".format(
min([len(i) for i in tree.paths_to_leaves()]),
max([len(i) for i in tree.paths_to_leaves()]),
)
)
```
Although parent-child pairs are a very natural way to express hierarchies, we can’t use them in its raw form for slicing and dicing. Think about a table in Excel — with different levels of a tree in separate columns, we can combine them in a pivot table to roll up and down through the levels of the catalogue.
Let’s extract the levels of the tree into separate columns, and save into the `categories_hierarchy.csv` file:
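Conceptually, the flattening walks from each leaf up to the root and writes the path across the level columns. With plain dicts and made-up category names:

```python
# Hypothetical parent-child pairs; None marks a top-level category
parent = {"Phones": None, "Smartphones": "Phones", "Android": "Smartphones"}

def path_from_root(node):
    """Collect a node and its ancestors, root first."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

print(path_from_root("Android"))  # → ['Phones', 'Smartphones', 'Android']
```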
```
with open("categories_hierarchy.csv", "w+") as outfile:
outfile.write(
";".join(["Category_Lvl" + str(i + 1) for i in range(tree.depth())]) + "\n"
)
for p in tree.paths_to_leaves():
outfile.write(
";".join(
[cat_dict[pi] for pi in p[1:]]
+ [cat_dict[p[-1]]]
* (
tree.depth() - len(p) + 1
) # I'm adding the last item for the shorter branches to balance the tree
)
+ "\n"
)
pd.read_csv("categories_hierarchy.csv", sep=";").head(3)
```
# Launching the BI app using atoti
```
import atoti as tt
session = tt.create_session()
events_tbl = session.read_csv(
"events.csv",
table_name="Events",
keys=["externalsessionid", "eventtype"],
)
{"columns": len(events_tbl.columns), "rows": len(events_tbl)}
cube = session.create_cube(events_tbl, "Sales Analytics")
session.link()
products_tbl = session.read_csv(
"products.csv", table_name="Product Attributes", separator=";"
)
products_tbl.head()
events_tbl.join(products_tbl, mapping={"product_id": "product_id"})
cube.schema
# loading product to category mapping into the cube
products_categories_tbl = session.read_csv(
"product-categories.csv", table_name="Categories"
)
events_tbl.join(products_categories_tbl, mapping={"product_id": "product_id"})
# loading categories hierarchy into the cube
categories_tree_tbl = session.read_csv(
"categories_hierarchy.csv",
table_name="Categories Hierarchy",
separator=";",
keys=["Category_Lvl3"],
)
products_categories_tbl.join(categories_tree_tbl, mapping={"Category": "Category_Lvl3"})
cube.schema
# Creating a multi-level hierarchy to automatically expand data:
cube.hierarchies["Catalog"] = [
cube.levels["Category_Lvl1"],
cube.levels["Category_Lvl2"],
cube.levels["Category"],
]
# A measure to count unique sessions
cube.measures["UniqueSessionsCount"] = tt.agg.count_distinct(
events_tbl["externalsessionid"]
)
```
Take note that the `UniqueSessionsCount` for the `eventtype` level is higher than at the `Category_Lvl1` level. This is because the same `externalsessionid` may exist for both purchase and view events.
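The effect is easy to reproduce with a toy event log (made-up session ids): summing distinct counts per event type double-counts sessions that appear under both types:

```python
rows = [("s1", "view"), ("s1", "purchase"), ("s2", "view")]

# Distinct sessions overall vs. distinct sessions within each event type
distinct_total = len({sid for sid, _ in rows})
per_event = {ev: len({sid for sid, e in rows if e == ev})
             for ev in {e for _, e in rows}}

print(distinct_total)           # → 2
print(sum(per_event.values()))  # → 3, since s1 is counted under both event types
```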
```
session.visualize()
cube.measures["Sales"] = cube.measures["totalcents.SUM"]
cube.measures["Sales as % of Grand Total"] = cube.measures["Sales"] / tt.total(
cube.measures["Sales"], cube.hierarchies["Catalog"]
)
cube.measures["Sales as % of Grand Total"].formatter = "DOUBLE[0.00%]"
session.visualize()
```
# Start building dashboards
```
session.link()
```
<div style="text-align:center"><a href="https://www.atoti.io/?utm_source=gallery&utm_content=rollup-hierarchies" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover-try.png" alt="atoti table" /></a></div>
<a href="https://colab.research.google.com/github/https-deeplearning-ai/tensorflow-1-public/blob/adding_C4/C4/W4/ungraded_labs/C4_W4_Lab_1_LSTM.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
**Note:** This notebook can run using TensorFlow 2.5.0
```
#!pip install tensorflow==2.5.0
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 30
train_set = windowed_dataset(x_train, window_size, batch_size=128, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
#batch_size = 16
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 200)
])
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(dataset,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
mae=history.history['mae']
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot MAE and Loss
#------------------------------------------------
plt.plot(epochs, mae, 'r')
plt.plot(epochs, loss, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
epochs_zoom = epochs[200:]
mae_zoom = mae[200:]
loss_zoom = loss[200:]
#------------------------------------------------
# Plot Zoomed MAE and Loss
#------------------------------------------------
plt.plot(epochs_zoom, mae_zoom, 'r')
plt.plot(epochs_zoom, loss_zoom, 'b')
plt.title('MAE and Loss')
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend(["MAE", "Loss"])
plt.figure()
```
# FEATURE EXTRACTION
```
import pandas as pd
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
d = pd.read_csv("Processed_tweets.csv")
d = d.drop(["Unnamed: 0"],axis=1)
d.head(10)
d.drop_duplicates(inplace=True)
d.isna().sum()
d.dropna(inplace=True)
d.info()
n = len(d)
# Label tweets by TextBlob polarity: 0 = non-negative, 1 = negative.
# Building the list first avoids chained-assignment writes (d["Sentiment"].iloc[i] = ...),
# which trigger SettingWithCopyWarning and may not stick.
sentiments = []
for i in range(n):
    polarity = TextBlob(d["Text"].iloc[i]).sentiment.polarity
    sentiments.append(0 if polarity >= 0 else 1)
d["Sentiment"] = sentiments
len(d[d["Sentiment"]==0]), len(d[d["Sentiment"]==1])
no = len(d[d["Sentiment"]==1])
t = d[d["Sentiment"]==0][:no]
s = d[d["Sentiment"]==1]
df = pd.concat([s, t], ignore_index=True)
len(df)
```
# MODEL DEVELOPMENT AND EVALUATION
# Splitting into Train and Test Data
```
from sklearn.model_selection import train_test_split
x= df["Text"]
y = df["Sentiment"].astype("int")
x_train,x_test, y_train, y_test = train_test_split(x,y,test_size=0.2,random_state=42)
cv = CountVectorizer(ngram_range=(1,3))
tf = TfidfVectorizer(ngram_range=(1,3))
x1 = cv.fit_transform(x_train)
x2 = tf.fit_transform(x_train)
from sklearn import metrics
accuracy = {}
model = {}
vectorizer = {"CountVectorizer":cv,"TfidfVectorizer":tf}
```
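Under the hood, `CountVectorizer` learns a vocabulary and turns each document into a vector of term counts; a stdlib sketch of the idea on toy documents:

```python
from collections import Counter

docs = ["good movie", "bad movie", "good good film"]
# Vocabulary: every distinct token, sorted for a stable column order
vocab = sorted({w for doc in docs for w in doc.split()})

def bow(doc):
    """Bag-of-words vector of term counts, ordered by vocabulary."""
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

print(vocab)         # → ['bad', 'film', 'good', 'movie']
print(bow(docs[2]))  # → [0, 1, 2, 0]
```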
# Support Vector Machine
### CountVectorizer
```
from sklearn import svm
classifier=svm.SVC()
classifier.fit(x1,y_train)
y_predict1=classifier.predict(cv.transform(x_test))
k = "Support Vector Machine with CountVectorizer"
model[k]=classifier
b1=metrics.accuracy_score(y_test, y_predict1)
recall = metrics.recall_score(y_test, y_predict1)
accuracy[k] = float("{0:.4f}".format(b1))
print(k)
print("Accuracy: ",accuracy[k])
print("Recall : {0:.4f}".format(recall))
```
### TfidfVectorizer
```
classifier=svm.SVC()
classifier.fit(x2,y_train)
y_predict2=classifier.predict(tf.transform(x_test))
k = "Support Vector Machine with TfidfVectorizer"
model[k]=classifier
b2=metrics.accuracy_score(y_test, y_predict2)
recall = metrics.recall_score(y_test, y_predict2)
accuracy[k] = float("{0:.4f}".format(b2))
print(k)
print("Accuracy: ",accuracy[k])
print("Recall : {0:.4f}".format(recall))
```
# Multinomial Naive Bayes
### Count Vectorizer
```
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(x1,y_train)
y_predict3=nb.predict(cv.transform(x_test))  # predict with the Naive Bayes model, not the earlier SVM
k = "Multinomial Naive Bayes with CountVectorizer"
model[k]=nb
b3=metrics.accuracy_score(y_test, y_predict3)
recall = metrics.recall_score(y_test, y_predict3)
accuracy[k] = float("{0:.4f}".format(b3))
print(k)
print("Accuracy: ",accuracy[k])
print("Recall : {0:.4f}".format(recall))
```
### TfidfVectorizer
```
nb = MultinomialNB()
nb.fit(x2,y_train)
y_predict4=nb.predict(tf.transform(x_test))  # predict with the Naive Bayes model, not the earlier SVM
k = "Multinomial Naive Bayes with TfidfVectorizer"
model[k]=nb
b4=metrics.accuracy_score(y_test,y_predict4)
recall = metrics.recall_score(y_test, y_predict4)
accuracy[k] = float("{0:.4f}".format(b4))
print(k)
print("Accuracy: ",accuracy[k])
print("Recall : {0:.4f}".format(recall))
sorted(accuracy.items(), key=lambda item: item[1], reverse=True)  # rank models from best to worst accuracy
ad = pd.DataFrame({"Accuracy":accuracy})
ad
```
| github_jupyter |
<img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the Python study project. A cartoon snake in yellow and blue, weaving between the letters of the course name: Learning Python. The slogan above the course name reads: a free project for learning programming in Hebrew.">
# <span style="text-align: right; direction: rtl; float: right;">Mutability</span>
## <span style="text-align: right; direction: rtl; float: right; clear: both;">Definition</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The word <dfn>Mutable</dfn> is derived from the word <em>mutate</em>, and it means "<em>something that can change</em>".<br>
We will use it to describe data types whose values can be modified, for example by adding or removing items.<br>
The word <dfn>Immutable</dfn> means "<em>something that cannot change</em>" – a value that is meant to stay fixed after its creation.<br>
Changing an immutable value changes its essence, and would cause it to be considered an entirely different value.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Imagine a wallet containing banknotes – we can add notes to it or take notes out of it, but the wallet remains the same wallet.<br>
Since the wallet's state can be changed without harming its essence, we can say that a wallet is Mutable.<br>
In contrast, if I take one of the banknotes inside the wallet, I cannot change anything about it without changing its essence.<br>
A change to one of the note's attributes, such as the number printed on it, entails a fundamental change that turns it into something else entirely.<br>
We can say that the banknote is Immutable – it cannot be changed.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The Mutable values we will meet in this course are a kind of "container" that holds other values.<br>
At the moment we know a single value type that is Mutable – the list.
</p>
## <span style="text-align: right; direction: rtl; float: right; clear: both;">Addresses of Values</span>
### <span style="text-align: right; direction: rtl; float: right;">Values</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
To understand the topic better, let's pause for a moment to see how values work behind the scenes in Python.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
When we create a value, Python allocates a place for it in the computer's memory and stores the value there.<br>
From that moment the value has an address – a number representing the location where that value is stored in the computer's memory.<br>
The value's address stays the same from the moment it is created until the end of its life.
</p>
```
print(9876543)
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the line above we defined the value 9,876,543.<br>
Even though we performed no sophisticated operation on it and did not store it in a variable, Python will keep this value in the computer's memory.<br>
The value 9,876,543 now has an address.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the next line we will "bind" the name <var>name</var> to the address of the numeric value 12,345.<br>
The variable <var>name</var> does not really "contain" the value 12,345; it merely points to the address where the value 12,345 is stored.
</p>
```
name = 12345
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
<mark>When we perform an assignment, we create a binding between the variable's name and the address of the value we assigned to it.</mark>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Recalling the laser-pointer metaphor from previous lessons, for every assignment in the program, Python will:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>Create a <em>new</em> laser pointer with the variable's name stuck onto it.</li>
<li>Make the laser point at the location in memory where the value in question is stored.</li>
</ol>
### <span style="text-align: right; direction: rtl; float: right;">Checking a Value's Address</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The function <code dir="ltr" style="direction: ltr;">id</code> receives a value as an argument, and returns a number representing its address – the value's location in memory.<br>
In terms of the laser metaphor, it takes the laser head and shows us where it is pointing.<br>
Let's see an example:
</p>
```
number = 100000
print("ID before: " + str(id(number)))
number = 123456
print("ID after: " + str(id(number)))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the last example we can see that the variable's name does not affect where values are stored.<br>
Different values are allocated different addresses.
</p>
```
number = 100000
print("ID before: " + str(id(number)))
number = number + 1
print("ID after: " + str(id(number)))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this example we increased the value of the variable <var>number</var> from 100,000 to 100,001.<br>
It is important to remember that <mark>the increase from 100,000 to 100,001 did not really change the value stored in the variable; it made the variable point at a different address, of a different value.</mark>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the first line we asked <var>number</var> to point at the value 100,000, so when we ran <code>id(number)</code> we received the address of the value <em>100,000</em>.<br>
In the second line we asked <var>number</var> to point at the value 100,001, so when we ran <code>id(number)</code> we received the address of the value <em>100,001</em>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For the first two lines, the address of the first value we created, 100,000, is printed.<br>
For the last two lines, the address of the second value we created, 100,001, is printed.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
It is important to know that referring to the same value a second time may create a new instance of it, stored at a different address:
</p>
```
print(f"ID of number ({number}): " + str(id(number)))
number2 = 100001
print(f"ID of number2 ({number2}): " + str(id(number2)))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
But assigning one variable to another will cause both variables to refer to the same address:
</p>
```
print("ID of number: " + str(id(number)))
number3 = number
print("ID of number2: " + str(id(number3)))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We can picture this last situation as two laser heads pointing at the same address in the computer's memory.<br>
</p>
#### <span style="text-align: right; direction: rtl; float: right;">Lists</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's check whether changing a value inside a list causes Python to build a new list.<br>
If Python builds a new list after the value is changed, we will easily see it by the change in the list's location in memory.
</p>
```
my_list = ['It\'s', 'never', 'enough']
print(f"id() of my_list ({my_list}) before:\n\t" + str(id(my_list)))
my_list[2] = 'lupus'
print(f"id() of my_list ({my_list}) after:\n\t" + str(id(my_list)))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the example we changed a value in the list named <var>my_list</var>, and saw that its location does not change.<br>
This is the behavior we expect from a value whose type is Mutable – it can be changed without affecting its location in the computer's memory.<br>
</p>
### <span style="text-align: right; direction: rtl; float: right;">Interim Summary</span>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>Every value we create is stored at an address, which will not change for as long as the value lives.</li>
<li>If we create a value twice, there may be two instances of it at different addresses.</li>
<li>A variable is nothing more than a "binding" between a name and the address of some value.</li>
<li>Assignment is the operation that binds the variable's name to the value's address.</li>
<li>More than one variable name may point at the same address.</li>
<li>Values whose type is <em>mutable</em> can change without their address changing.</li>
<li>Values whose type is <em>immutable</em> cannot change without their address changing.</li>
</ul>
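These points can be demonstrated directly with `id()` (a minimal sketch; the exact addresses differ between runs):

```python
# A list (mutable) keeps its address when changed in place.
items = [1, 2, 3]
address_before = id(items)
items.append(4)                      # in-place change
print(id(items) == address_before)   # the address stayed the same

# An int (immutable) cannot change in place: "increasing" it
# rebinds the name to a different value at a different address.
number = 100000
address_before = id(number)
number = number + 1                  # rebinding, not mutation
print(id(number) == address_before)  # a different address now

# More than one name may point at the same address.
other = items
print(other is items)                # same object
```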
## <span style="text-align: right; direction: rtl; float: right;">Implications</span>
### <span style="text-align: right; direction: rtl; float: right;">Lists</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's run the following experiment:</p>
```
str1 = "Puns are the highest form of literature."
str2 = str1
str2 = str2 + "\n\t - Alfred Hitchcock"
print(str1)
print('-' * len(str1))
print(str2)
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
With the new knowledge we have gained, we can say that <var>str1</var> and <var>str2</var> point at different places, because of the assignment on line 3.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
But in lists, values can be changed even without performing an assignment. What happens then?
</p>
```
list1 = [2, 8, 20, 28, 50, 82]
list2 = list1
list2.append(126)
print(list1)
print('-' * len(str(list1)))
print(list2)
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this case, we made <var>list2</var> point at the same place that <var>list1</var> points at.<br>
For this reason, a change to <var>list2</var> will also affect <var>list1</var>, and a change to <var>list1</var> will also affect <var>list2</var>.
</p>
```
print(id(list1))
print(id(list2))
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
To ask Python not to behave this way, we must tell it explicitly that we want a new list to be created.<br>
We can do this by calling the method <code dir="ltr" style="direction: ltr;">list.copy()</code>:
</p>
```
list1 = [2, 8, 20, 28, 50, 82]
list2 = list1.copy()
list2.append(126)
print(list1)
print('-' * len(str(list1)))
print(list2)
```
### <span style="text-align: right; direction: rtl; float: right;">Function Parameters</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's define a function that receives a string and appends the letter <i>Z</i> to its end:
</p>
```
def append_to_string(my_string):
    print('\t--- Inside the function now ---')
    print(f'\tFunction got value: {my_string}, with id: {id(my_string)}.')
    my_string = my_string + 'Z'
    print(f'\tChanged my_string to be {my_string}, with id: {id(my_string)}.')
    print('\t--- Finished running the function ---')


s = 'Hello'
print(f'Before calling the function: s = {s}, with id: {id(s)}.')
append_to_string(s)
print(f'After calling the function: s = {s}, with id: {id(s)}.')
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
What actually happened? Why didn't the string change outside the function as well?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The value passed to the function's parameter was the address of <var>s</var>, which <var>my_string</var> now points at as well.<br>
The moment we performed the assignment <code dir="ltr" style="direction: ltr;">my_string = my_string + 'Z'</code>, we created a new value on the right-hand side of the assignment, and asked a new laser named <var>my_string</var> to point at its address.<br>
The variable named <var>my_string</var> now points at the address of a different value, while the variable <var>s</var> still points at the original value.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this case, the function did not change the value of the string we passed to it as an argument.<br>
Even if we had badly wanted to do so – it is impossible, since strings are immutable.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's try to do the same thing with a list:
</p>
```
def append_to_list(my_list):
    print('\t--- Inside the function now ---')
    print(f'\tFunction got value: {my_list}, with id: {id(my_list)}.')
    my_list = my_list + [126]
    print(f'\tChanged my_list to be {my_list}, with id: {id(my_list)}.')
    print('\t--- Finished running the function ---')


l = [2, 8, 20, 28, 50, 82]
print(f'Before calling the function: l = {l}, with id: {id(l)}.')
append_to_list(l)
print(f'After calling the function: l = {l}, with id: {id(l)}.')
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Exactly the same thing happened as with the string!<br>
That is because here, too, we overwrote <var>my_list</var> so that it points at a new list we created.<br>
On the right-hand side of the assignment, we created a new list containing the items <span dir="ltr" style="direction: ltr">2, 8, 20, 28, 50, 82, 126</span>.<br>With that assignment we asked the <var>my_list</var> inside the function to refer to the address of the new list.<br>
Let's try using the method for appending a new item to a list, <code dir="ltr" style="direction: ltr;">list.append(item)</code>, which we learned about this week:
</p>
```
def append_to_list(my_list):
    print('\t--- Inside the function now ---')
    print(f'\tFunction got value: {my_list}, with id: {id(my_list)}.')
    my_list.append(126)
    print(f'\tChanged my_list to be {my_list}, with id: {id(my_list)}.')
    print('\t--- Finished running the function ---')


l = [2, 8, 20, 28, 50, 82]
print(f'Before calling the function: l = {l}, with id: {id(l)}.')
append_to_list(l)
print(f'After calling the function: l = {l}, with id: {id(l)}.')
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We did it!<br>
The list changed both inside the function and outside it.<br>
From this example we can learn that when we assign to a variable name, we change the address the name points at to a new address – we do not edit the variable's contents.<br>
</p>
### <span style="text-align: right; direction: rtl; float: right;">Writing Functions Properly</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
When we learned about functions, we emphasized the fact that a function is a <em>self-contained piece of code</em>.<br>
As such, <mark>a function will usually not change the values of variables that it did not define itself.</mark><br>
For example, the code above, which edits the variable <code>l</code> that was defined outside the function, is considered a bad habit.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here is the code from above without the cluttering prints:
</p>
```
def append_to_list(my_list):
    my_list.append(126)


l = [2, 8, 20, 28, 50, 82]
append_to_list(l)
print(l)
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's write identical code, except that this time the function will not edit the variable <code dir="ltr" style="direction: ltr;">l</code>:
</p>
```
def append_to_list(my_list):
    list_copy = my_list.copy()  # my_list = my_list.copy() would also work
    list_copy.append(126)
    return list_copy


l = [2, 8, 20, 28, 50, 82]
new_l = append_to_list(l)  # l = append_to_list(l) would also work, but the original value of l would be lost
print(l)
print(new_l)
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
This style of writing has several advantages:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>It is easier to understand what the function does, even for a reader with no prior knowledge of the rest of the code.</li>
<li>It is easier for an external user to use the function without fearing data loss.</li>
<li>It is easier to build additional functions that rely on this function's behavior.</li>
</ul>
## <span style="text-align: right; direction: rtl; float: right;">Tuple</span>
### <span style="text-align: right; direction: rtl; float: right;">Definition</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the spirit of this lesson about lists being Mutable, it is time to meet their "low-budget brother": the <code>tuple</code>.<br>
The tuple data type is not particularly exciting – it is, in effect, a kind of list that cannot be changed. A list that is immutable, if you will.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We define a tuple variable using parentheses:
</p>
```
animals = ('dog', 'fish', 'horse')
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
As with a list, we can get items found in a tuple by referring to their position:
</p>
```
first_animal = animals[0]
print(f"The first animal is {first_animal}")
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
An attempt to change the tuple will fail, of course. Immutable, remember?
</p>
```
animals[1] = 'pig'
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Creating an empty tuple is written like this:
</p>
```
my_tuple = tuple()
```
<p style="text-align: right; direction: rtl; float: right; clear: both;">
And to create a tuple with only one item, we write the item followed by a comma, so that Python does not interpret the expression as ordinary parentheses:
</p>
```
my_tuple = (4, )
```
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
<strong>Exercise</strong>:
Write a function that uses <code dir="ltr" style="direction: ltr;">dir()</code> and returns all the methods that a list has and a tuple does not.<br>
Also check which methods a tuple has and a list does not.<br>
In both comparisons, ignore methods whose names start with an underscore character.
</p>
</div>
</div>
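One possible sketch of such a comparison (the helper name `method_difference` is our own choice, not part of the exercise):

```python
def method_difference(type_a, type_b):
    """Return the public method names that type_a has and type_b lacks."""
    a_methods = {name for name in dir(type_a) if not name.startswith('_')}
    b_methods = {name for name in dir(type_b) if not name.startswith('_')}
    return sorted(a_methods - b_methods)


# Methods a list has and a tuple does not: the mutating ones,
# such as append, pop and sort.
print(method_difference(list, tuple))
# Methods a tuple has and a list does not: count and index exist in both.
print(method_difference(tuple, list))
```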
### <span style="text-align: right; direction: rtl; float: right;">Uses</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
If a tuple gives me less freedom of action, why use it in the first place?
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li><strong>Speed</strong> – working with values of type tuple is considerably faster than working with lists.</li>
<li><strong>Semantics</strong> – for a constant value that does not need to change, we will prefer a tuple, both to emphasize that fact and to prevent values from being added or removed by mistake.</li>
</ul>
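The speed claim is easy to check with the standard `timeit` module (a small sketch; the exact timings depend on the machine):

```python
import timeit

# Create a tuple literal vs. an equivalent list literal, a million times each.
tuple_seconds = timeit.timeit("(1, 2, 3, 4, 5)", number=1_000_000)
list_seconds = timeit.timeit("[1, 2, 3, 4, 5]", number=1_000_000)

# A tuple literal is a constant that Python can build once,
# while a list literal must be rebuilt on every run.
print(f"tuple: {tuple_seconds:.3f}s, list: {list_seconds:.3f}s")
```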
### <span style="text-align: right; direction: rtl; float: right;">Examples</span>
```
my_home = (35.027185, -111.022388) # x, y
triangle_sides_length = (4, 5, 6)
possible_directions = ('UP', 'DOWN', 'LEFT', 'RIGHT')
students_and_age = [('Itamar', 50), ('Yam', 27), ('David', 16)]  # a list of tuples
```
## <p style="align: right; direction: rtl; float: right; clear: both;">Glossary</p>
<dl style="text-align: right; direction: rtl; float: right; clear: both;">
<dt>Address</dt><dd>A place in the computer where some value is stored. A value never changes its address.</dd>
<dt>Immutable</dt><dd>A value that cannot be changed.</dd>
<dt>Mutable</dt><dd>A value that can be changed.</dd>
<dt>Tuple</dt><dd>A data type. Immutable. Similar to a list in its characteristics.</dd>
</dl>
| github_jupyter |
# Goals
### Learn how to use the full potential of Monk in its expert mode
# Table of Contents
## [0. Install](#0)
## [1. Load data, setup model, select params, and Train](#1)
## [2. Run validation on trained classifier](#2)
## [3. Run inferencing on trained classifier](#3)
<a id='0'></a>
# Install Monk
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version)
```
!git clone https://github.com/Tessellate-Imaging/monk_v1.git
# Select the requirements file as per OS and CUDA version
!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
```
## Dataset - Natural Images Classification
- https://www.kaggle.com/prasunroy/natural-images
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1sbQ_KaEDd7kRrTvna-4odLqxM2G0QT0Z" -O natural-images.zip && rm -rf /tmp/cookies.txt
! unzip -qq natural-images.zip
```
# Imports
```
# Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using keras backend
from keras_prototype import prototype
```
<a id='1'></a>
# Load data, setup model, select params, and Train
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode");
```
## Set Data params
```
gtf.Dataset_Params(dataset_path="natural-images/train",
split=0.9,
input_size=224,
batch_size=16,
shuffle_data=True,
num_processors=3);
```
## Apply Transforms
```
gtf.apply_random_horizontal_flip(train=True, val=True);
```
## Load Dataset
```
gtf.Dataset();
```
## Set Model Params
```
gtf.Model_Params(model_name="resnet50",
freeze_base_network=True,
use_gpu=True,
use_pretrained=True);
```
## Load Model
```
gtf.Model();
```
## Set Training params
```
gtf.Training_Params(num_epochs=5,
display_progress=True,
display_progress_realtime=True,
save_intermediate_models=True,
intermediate_model_prefix="intermediate_model_",
save_training_logs=True);
# Set optimizer, losses and learning rate schedulers
gtf.optimizer_sgd(0.0001);
gtf.lr_fixed();
gtf.loss_crossentropy()
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='2'></a>
# Validating the trained classifier
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode", eval_infer=True);
# Just for example purposes, validating on the training set itself
gtf.Dataset_Params(dataset_path="natural-images/train");
gtf.Dataset();
accuracy, class_based_accuracy = gtf.Evaluate();
```
<a id='3'></a>
# Running inference on test images
```
gtf = prototype(verbose=1);
gtf.Prototype("project", "expert_mode", eval_infer=True);
img_name = "natural-images/test/test3.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
img_name = "natural-images/test/test2.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
img_name = "natural-images/test/test3.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
```
| github_jupyter |
___
___
# Choropleth Maps
## Offline Plotly Usage
Get imports and set everything up to be working offline.
```
import plotly.plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
```
Now set up everything so that the figures show up in the notebook:
```
init_notebook_mode(connected=True)
```
More info on other options for Offline Plotly usage can be found [here](https://plot.ly/python/offline/).
## Choropleth US Maps
Plotly's mapping can be a bit hard to get used to at first, remember to reference the cheat sheet in the data visualization folder, or [find it online here](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf).
```
import pandas as pd
```
Now we need to build our data dictionary. The easiest way to do this is to use the **dict()** function of the general form:
* type = 'choropleth',
* locations = list of states
* locationmode = 'USA-states'
* colorscale=
Either a predefined string:
'pairs' | 'Greys' | 'Greens' | 'Bluered' | 'Hot' | 'Picnic' | 'Portland' | 'Jet' | 'RdBu' | 'Blackbody' | 'Earth' | 'Electric' | 'YIOrRd' | 'YIGnBu'
or create a [custom colorscale](https://plot.ly/python/heatmap-and-contour-colorscales/)
* text= list or array of text to display per point
* z= array of values on z axis (color of state)
* colorbar = {'title':'Colorbar Title'})
Here is a simple example:
```
data = dict(type = 'choropleth',
locations = ['AZ','CA','NY'],
locationmode = 'USA-states',
colorscale= 'Portland',
text= ['text1','text2','text3'],
z=[1.0,2.0,3.0],
colorbar = {'title':'Colorbar Title'})
```
Then we create the layout nested dictionary:
```
layout = dict(geo = {'scope':'usa'})
```
Then we use:
go.Figure(data = [data],layout = layout)
to set up the object that finally gets passed into iplot()
```
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
```
### Real Data US Map Choropleth
Now let's show an example with some real data as well as some other options we can add to the dictionaries in data and layout.
```
df = pd.read_csv('2011_US_AGRI_Exports')
df.head()
```
Now our data dictionary with some extra marker and colorbar arguments:
```
data = dict(type='choropleth',
colorscale = 'YIOrRd',
locations = df['code'],
z = df['total exports'],
locationmode = 'USA-states',
text = df['text'],
marker = dict(line = dict(color = 'rgb(255,255,255)',width = 2)),
colorbar = {'title':"Millions USD"}
)
```
And our layout dictionary with some more arguments:
```
layout = dict(title = '2011 US Agriculture Exports by State',
geo = dict(scope='usa',
showlakes = True,
lakecolor = 'rgb(85,173,240)')
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
```
# World Choropleth Map
Now let's see an example with a World Map:
```
df = pd.read_csv('2014_World_GDP')
df.head()
data = dict(
type = 'choropleth',
locations = df['CODE'],
z = df['GDP (BILLIONS)'],
text = df['COUNTRY'],
colorbar = {'title' : 'GDP Billions US'},
)
layout = dict(
title = '2014 Global GDP',
geo = dict(
showframe = False,
projection = {'type':'Mercator'}
)
)
choromap = go.Figure(data = [data],layout = layout)
iplot(choromap)
```
| github_jupyter |
# Managing ML workflows with AWS Step Functions and the Data Science SDK
<img align="left" width="130" src="https://raw.githubusercontent.com/PacktPublishing/Amazon-SageMaker-Cookbook/master/Extra/cover-small-padded.png"/>
This notebook contains the code to help readers work through one of the recipes of the book [Machine Learning with Amazon SageMaker Cookbook: 80 proven recipes for data scientists and developers to perform ML experiments and deployments](https://www.amazon.com/Machine-Learning-Amazon-SageMaker-Cookbook/dp/1800567030)
### How to do it...
```
!mkdir -p tmp
g = "raw.githubusercontent.com"
p = "PacktPublishing"
a = "Amazon-SageMaker-Cookbook"
mc = "master/Chapter01"
path = f"https://{g}/{p}/{a}/{mc}/files"
fname = "management_experience_and_salary.csv"
!wget -P tmp {path}/{fname}
import pandas as pd
filename = f"tmp/{fname}"
df_all_data = pd.read_csv(filename)
df_all_data
from sklearn.model_selection import train_test_split
dad = df_all_data
X = dad['management_experience_months'].values
y = dad['monthly_salary'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y,
test_size=0.3, random_state=0
)
import pandas as pd
df_training_data = pd.DataFrame({
'monthly_salary': y_train,
'management_experience_months': X_train
})
df_training_data
df_training_data.to_csv(
'tmp/training_data.csv',
header=False, index=False
)
s3_bucket = "<insert S3 bucket name here>"
prefix = 'chapter09'
tn = "training_data.csv"
source = f"tmp/{tn}"
dest = f"s3://{s3_bucket}/{prefix}/input/{tn}"
!aws s3 cp {source} {dest}
import sagemaker
import boto3
from sagemaker import get_execution_role
role = get_execution_role()
session = sagemaker.Session()
region_name = boto3.Session().region_name
training_s3_input_location = f"s3://{s3_bucket}/{prefix}/input/training_data.csv"
training_s3_output_location = f"s3://{s3_bucket}/{prefix}/output/"
from sagemaker.inputs import TrainingInput
train = TrainingInput(
training_s3_input_location,
content_type="text/csv"
)
from sagemaker.image_uris import retrieve
container = retrieve(
"linear-learner",
region_name, "1"
)
container
estimator = sagemaker.estimator.Estimator(
container,
role,
instance_count=1,
instance_type='ml.m5.xlarge',
output_path=training_s3_output_location,
sagemaker_session=session
)
estimator.set_hyperparameters(
predictor_type='regressor',
mini_batch_size=4
)
!pip -q install --upgrade stepfunctions
execution_role = 'arn:aws:iam::________________:role/test-002'
from stepfunctions.inputs import ExecutionInput
execution_input = ExecutionInput(
schema={
'ModelName': str,
'EndpointName': str,
'JobName': str
}
)
ei = execution_input
from stepfunctions.steps import TrainingStep
training_step = TrainingStep(
'Training Step',
estimator=estimator,
data={
'train': train
},
job_name=ei['JobName']
)
from stepfunctions.steps import ModelStep
model_step = ModelStep(
'Model Step',
model=training_step.get_expected_model(),
model_name=ei['ModelName']
)
from stepfunctions.steps import EndpointConfigStep
endpoint_config_step = EndpointConfigStep(
"Create Endpoint Configuration",
endpoint_config_name=ei['ModelName'],
model_name=ei['ModelName'],
initial_instance_count=1,
instance_type='ml.m5.xlarge'
)
from stepfunctions.steps import EndpointStep
endpoint_step = EndpointStep(
"Deploy Endpoint",
endpoint_name=ei['EndpointName'],
endpoint_config_name=ei['ModelName']
)
from stepfunctions.steps import Chain
workflow_definition = Chain([
training_step,
model_step,
endpoint_config_step,
endpoint_step
])
import uuid
uuid.uuid4().hex
def generate_random_string():
    return uuid.uuid4().hex

grs = generate_random_string
import uuid
from stepfunctions.workflow import Workflow
workflow = Workflow(
name='{}-{}'.format('Workflow', grs()),
definition=workflow_definition,
role=execution_role,
execution_input=execution_input
)
workflow.create()
execution = workflow.execute(
inputs={
'JobName': 'll-{}'.format(grs()),
'ModelName': 'll-{}'.format(grs()),
'EndpointName': 'll-{}'.format(grs())
}
)
execution.list_events()
import pandas as pd
events = execution.list_events()
pd.json_normalize(events)
workflow.__dict__
print(workflow.definition.to_json(pretty=True))
```
| github_jupyter |
##### Copyright 2020 The TensorFlow IO Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Audio Data Preparation and Augmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/audio"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/audio.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/io/tutorials/audio.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/io/tutorials/audio.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
## Overview
One of the biggest challenges in automatic speech recognition is the preparation and augmentation of audio data. Audio data analysis could be in the time or frequency domain, which adds additional complexity compared with other data sources such as images.
As part of the TensorFlow ecosystem, the `tensorflow-io` package provides quite a few useful audio-related APIs that make preparing and augmenting audio data easier.
## Setup
### Install the required packages, and restart the runtime
```
!pip install tensorflow-io
```
## Usage
### Read an Audio File
In TensorFlow IO, the class `tfio.audio.AudioIOTensor` allows you to read an audio file into a lazy-loaded `IOTensor`:
```
import tensorflow as tf
import tensorflow_io as tfio
audio = tfio.audio.AudioIOTensor('gs://cloud-samples-tests/speech/brooklyn.flac')
print(audio)
```
In the above example, the Flac file `brooklyn.flac` is from a publicly accessible audio clip in [Google Cloud](https://cloud.google.com/speech-to-text/docs/quickstart-gcloud).
The GCS address `gs://cloud-samples-tests/speech/brooklyn.flac` is used directly because GCS is a supported file system in TensorFlow. In addition to the `Flac` format, `WAV`, `Ogg`, `MP3`, and `MP4A` are also supported by `AudioIOTensor` with automatic file format detection.
`AudioIOTensor` is lazy-loaded, so only the shape, dtype, and sample rate are shown initially. The shape of an `AudioIOTensor` is represented as `[samples, channels]`, which means the audio clip we loaded is a mono channel with `28979` samples of `int16`.
The content of the audio clip is only read as needed: either by converting the `AudioIOTensor` to a `Tensor` through `to_tensor()`, or through slicing. Slicing is especially useful when only a small portion of a large audio clip is needed:
```
audio_slice = audio[100:]
# remove last dimension
audio_tensor = tf.squeeze(audio_slice, axis=[-1])
print(audio_tensor)
```
The audio can be played through:
```
from IPython.display import Audio
Audio(audio_tensor.numpy(), rate=audio.rate.numpy())
```
It is more convenient to convert the tensor into float numbers and display the audio clip as a graph:
```
import matplotlib.pyplot as plt
tensor = tf.cast(audio_tensor, tf.float32) / 32768.0
plt.figure()
plt.plot(tensor.numpy())
```
### Trim the Noise
Sometimes it makes sense to trim the noise from the audio, which can be done through the API `tfio.experimental.audio.trim`. The API returns a `[start, stop]` pair of positions for the segment:
```
position = tfio.experimental.audio.trim(tensor, axis=0, epsilon=0.1)
print(position)
start = position[0]
stop = position[1]
print(start, stop)
processed = tensor[start:stop]
plt.figure()
plt.plot(processed.numpy())
```
### Fade in and fade out
One useful audio engineering technique is fading, which gradually increases or decreases the audio signal. This can be done through `tfio.experimental.audio.fade`, which supports different shapes of fades such as `linear`, `logarithmic`, or `exponential`:
```
fade = tfio.experimental.audio.fade(
processed, fade_in=1000, fade_out=2000, mode="logarithmic")
plt.figure()
plt.plot(fade.numpy())
```
### Spectrogram
Advanced audio processing often works on frequency changes over time. In `tensorflow-io`, a waveform can be converted to a spectrogram through `tfio.experimental.audio.spectrogram`:
```
# Convert to spectrogram
spectrogram = tfio.experimental.audio.spectrogram(
fade, nfft=512, window=512, stride=256)
plt.figure()
plt.imshow(tf.math.log(spectrogram).numpy())
```
Additional transformations to different scales are also possible:
```
# Convert to mel-spectrogram
mel_spectrogram = tfio.experimental.audio.melscale(
spectrogram, rate=16000, mels=128, fmin=0, fmax=8000)
plt.figure()
plt.imshow(tf.math.log(mel_spectrogram).numpy())
# Convert to db scale mel-spectrogram
dbscale_mel_spectrogram = tfio.experimental.audio.dbscale(
mel_spectrogram, top_db=80)
plt.figure()
plt.imshow(dbscale_mel_spectrogram.numpy())
```
### SpecAugment
In addition to the data preparation and augmentation APIs mentioned above, the `tensorflow-io` package also provides advanced spectrogram augmentations, most notably the frequency and time masking discussed in [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition (Park et al., 2019)](https://arxiv.org/pdf/1904.08779.pdf).
#### Frequency masking
In frequency masking, frequency channels `[f0, f0 + f)` are masked, where `f` is chosen from a uniform distribution from `0` to the frequency mask parameter `F`, and `f0` is chosen from `(0, ν − f)`, where `ν` is the number of frequency channels.
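As a plain-NumPy sketch of this masking rule (illustrative only, not the `tfio` implementation; the array shape and `F` value here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
spec = rng.random((128, 100))  # synthetic (freq_channels, time_steps) spectrogram
F = 10                         # frequency mask parameter

f = rng.integers(0, F + 1)               # mask width drawn uniformly from [0, F]
f0 = rng.integers(0, spec.shape[0] - f)  # mask start position
masked = spec.copy()
masked[f0:f0 + f, :] = 0                 # zero out the masked frequency channels
```

The array shape is unchanged; only a horizontal band of frequency channels is zeroed out.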
```
# Freq masking
freq_mask = tfio.experimental.audio.freq_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(freq_mask.numpy())
```
#### Time masking
In time masking, `t` consecutive time steps `[t0, t0 + t)` are masked, where `t` is chosen from a uniform distribution from `0` to the time mask parameter `T`, and `t0` is chosen from `[0, τ − t)`, where `τ` is the number of time steps.
```
# Time masking
time_mask = tfio.experimental.audio.time_mask(dbscale_mel_spectrogram, param=10)
plt.figure()
plt.imshow(time_mask.numpy())
```
## Explore FrozenLakeEnv
```
import numpy as np
import copy
import check_test
from frozenlake import FrozenLakeEnv
from plot_utils import plot_values
env=FrozenLakeEnv()
print(env.observation_space)
print(env.action_space)
print(env.nS)
print(env.nA)
env.P[1][3]
```
## Iterative Policy
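The function below implements iterative policy evaluation: it repeatedly sweeps over the states and applies the Bellman expectation backup until the value estimates change by less than `theta`:

$$
V(s) \leftarrow \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma V(s') \right]
$$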
```
def policy_evaluation(env,policy,gamma=1,theta=1e-8):
V=np.zeros(env.nS)
while True:
delta=0
for s in range(env.nS):
Vs=0
for a, action_prob in enumerate(policy[s]):
for prob,next_state,reward,done in env.P[s][a]:
Vs+=action_prob*prob*(reward+gamma*V[next_state])
delta=max(delta,np.abs(V[s]-Vs))
V[s]=Vs
if delta<theta:
break
return V
random_policy=np.ones([env.nS,env.nA])/env.nA
V=policy_evaluation(env,random_policy)
plot_values(V)
print(check_test.run_check('policy_evaluation_check', policy_evaluation))
```
## Obtain q(pi) from v(pi)
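The conversion below follows directly from the one-step lookahead relation between the action-value and state-value functions:

$$
q_\pi(s, a) = \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma v_\pi(s') \right]
$$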
```
def q_from_v(env,V,s,gamma=1):
q=np.zeros(env.nA)
for a in range(env.nA):
for prob,next_state,reward,done in env.P[s][a]:
q[a]+=prob*(reward+gamma*V[next_state])
return q
Q=np.zeros([env.nS,env.nA])
for s in range(env.nS):
Q[s]=q_from_v(env,V,s)
print("Action-Value Function:")
print(Q)
print(check_test.run_check('q_from_v_check', q_from_v))
def policy_improvement(env, V, gamma=1):
policy = np.zeros([env.nS, env.nA]) / env.nA
for s in range(env.nS):
q = q_from_v(env, V, s, gamma)
# OPTION 1: construct a deterministic policy
# policy[s][np.argmax(q)] = 1
# OPTION 2: construct a stochastic policy that puts equal probability on maximizing actions
best_a = np.argwhere(q==np.max(q)).flatten()
policy[s] = np.sum([np.eye(env.nA)[i] for i in best_a], axis=0)/len(best_a)
return policy
check_test.run_check('policy_improvement_check', policy_improvement)
```
## Policy Iteration
```
def policy_iteration(env,gamma=1,theta=1e-8):
policy=np.ones([env.nS,env.nA])/env.nA
while True:
V=policy_evaluation(env,policy,gamma,theta)
new_policy=policy_improvement(env,V)
if (new_policy == policy).all():
break
policy=copy.copy(new_policy)
return policy,V
policy_pi,v_pi=policy_iteration(env)
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_pi,"\n")
plot_values(v_pi)
check_test.run_check('policy_iteration_check', policy_iteration)
```
## Truncated Policy Iteration
```
def truncated_policy_evaluation(env,policy,V,max_it=1,gamma=1):
num_it=0
while num_it<max_it:
for s in range(env.nS):
v=0
q=q_from_v(env,V,s,gamma)
for a,action_prob in enumerate(policy[s]):
v+=action_prob*q[a]
V[s]=v
num_it+=1
return V
def truncated_policy_iteration(env,max_it=1,gamma=1,theta=1e-8):
V=np.zeros(env.nS)
policy=np.zeros([env.nS,env.nA])/env.nA
while True:
policy=policy_improvement(env,V)
old_V=copy.copy(V)
V=truncated_policy_evaluation(env,policy,V,max_it,gamma)
if max(abs(V - old_V)) < theta:
break
return policy,V
policy_tpi, V_tpi = truncated_policy_iteration(env, max_it=2)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_tpi,"\n")
# plot the optimal state-value function
plot_values(V_tpi)
```
## Value Iteration
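Value iteration replaces the full policy evaluation step with a single Bellman optimality backup per sweep:

$$
V(s) \leftarrow \max_{a} \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma V(s') \right]
$$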
```
def value_iteration(env, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
while True:
delta = 0
for s in range(env.nS):
v = V[s]
V[s] = max(q_from_v(env, V, s, gamma))
delta = max(delta,abs(V[s]-v))
if delta < theta:
break
policy = policy_improvement(env, V, gamma)
return policy, V
policy_vi, V_vi = value_iteration(env)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_vi,"\n")
# plot the optimal state-value function
plot_values(V_vi)
```
##### Copyright 2020 Google
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Data analysis
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/guide/data_analysis"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/guide/data_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/guide/data_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/guide/data_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This is the follow-up to the [data collection](data_collection.ipynb) tutorial. We have measured bitstrings for the single-qubit circuit $R_y(\theta)$ for various `theta`s. In this analysis, we compute $\langle Z \rangle (\theta)$, compare to the analytically expected true value, and fit to a depolarizing noise model with T1 decay during readout.
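For reference, the noiseless expectation has a closed form: $R_y(\theta)$ maps $|0\rangle$ to $\cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$, so $\langle Z \rangle = \cos^2(\theta/2) - \sin^2(\theta/2) = \cos\theta$. A quick NumPy sanity check of the values the data should approach in the noiseless limit (independent of Cirq; the sample angles are arbitrary):

```python
import numpy as np

# Noiseless <Z>(theta) for Ry(theta)|0> is cos(theta)
thetas = np.linspace(0, 2 * np.pi, 5)  # 0, pi/2, pi, 3pi/2, 2pi
z_vals = np.cos(thetas)
```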
## Setup
Install the ReCirq package:
```
try:
import recirq
except ImportError:
!pip install --quiet git+https://github.com/quantumlib/ReCirq
```
Now import Cirq, ReCirq and the module dependencies:
```
import cirq
import recirq
from recirq.readout_scan.tasks import EXPERIMENT_NAME, DEFAULT_BASE_DIR
```
## Load data
We can use utilities in ReCirq to query the filesystem and load in a dataset. Please recall that all tasks have an associated `EXPERIMENT_NAME` and a `dataset_id` which define the top two hierarchies in the filesystem. We import these values from the data collection script to ensure consistency.
If you're running this notebook in Colab or you haven't yet gone through the Data Collection tutorial, we will download a pre-generated copy of the data for analysis.
```
recirq.fetch_guide_data_collection_data()
```
`recirq.iterload_records` uses these two bits of information to iterate over records saved using `recirq.save` (in the data collection script).
This also gives you a chance to do post-processing on the data. In general, you should do some massaging of the data and put the results into a pandas DataFrame. DataFrames are great for doing statistics and visualizations across tabular data.
```
import numpy as np
import pandas as pd
records = []
# Load all data, do some light processing
for record in recirq.iterload_records(dataset_id='2020-02-tutorial', base_dir=DEFAULT_BASE_DIR):
# Expand task dataclass into columns
recirq.flatten_dataclass_into_record(record, 'task')
# Unwrap BitArray into np.ndarray
all_bitstrings = [ba.bits for ba in record['all_bitstrings']]
# Compute <Z>
record['z_vals'] = [np.mean((-1)**bitstrings, axis=0).item() for bitstrings in all_bitstrings]
# Don't need to carry around the full array of bits anymore
del record['all_bitstrings']
records.append(record)
df = pd.DataFrame(records)
print(len(df))
df.head()
```
## Plot the data
A good first step.
```
%matplotlib inline
from matplotlib import pyplot as plt
entry = df.iloc[0] # Pick the first qubit
plt.plot([], []) # advance color cycle in anticipation of future analysis
plt.plot(entry['thetas'], entry['z_vals'], 'o-')
plt.xlabel('Theta', fontsize=14)
plt.ylabel(r'$\langle Z \rangle$', fontsize=14)
plt.title("Qubit {}".format(entry['qubit']), fontsize=14)
plt.tight_layout()
```
## How does it compare to analytical results?
You could imagine setting up a separate task for computing and saving analytic results. For this single qubit example, we'll just compute it on the fly.
```
qubit = cirq.LineQubit(0)
thetas = df.iloc[0]['thetas']
class _DummyMeasurementGate(cirq.IdentityGate):
"""A dummy measurement used to trick simulators into applying
readout error when using PauliString.expectation_from_xxx."""
def _measurement_key_(self):
return 'dummy!'
def __repr__(self):
if self.num_qubits() == 1:
return '_DummyMeasurementGate'
return '_DummyMeasurementGate({!r})'.format(self.num_qubits())
def __str__(self):
if (self.num_qubits() == 1):
return 'dummyM'
else:
return 'dummyM({})'.format(self.num_qubits())
def _circuit_diagram_info_(self, args):
from cirq import protocols
return protocols.CircuitDiagramInfo(
wire_symbols=('dM',) * self.num_qubits(), connected=True)
def dummy_measure(qubits):
return _DummyMeasurementGate(num_qubits=len(qubits)).on(*qubits)
def get_circuit(theta):
return cirq.Circuit([
cirq.ry(theta).on(qubit),
dummy_measure([qubit])
])
true_z_vals = []
for theta in thetas:
wf = cirq.final_state_vector(get_circuit(theta))
op = cirq.Z(qubit) * 1.
true_z_val = op.expectation_from_state_vector(wf, qubit_map={qubit:0}, check_preconditions=False)
true_z_vals.append(np.real_if_close(true_z_val).item())
true_z_vals = np.array(true_z_vals)
true_z_vals
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4))
ax1.plot(thetas, true_z_vals, '-', label='True')
ax1.plot(entry['thetas'], entry['z_vals'], 'o-', label='Data')
ax2.plot([], []) # advance color cycle
ax2.plot(entry['thetas'], np.abs(true_z_vals - entry['z_vals']), 'o-', label='|Data - True|')
ax1.legend(loc='best', frameon=False)
ax2.legend(loc='best', frameon=False)
ax1.set_xlabel('Theta', fontsize=14)
ax2.set_xlabel('Theta', fontsize=14)
fig.tight_layout()
```
## Learn a model
Our experimental data has some wiggles in it, but it also has a clear pattern of deviation from the true values. We can hypothesize a (parameterized) noise model and then use function minimization to fit the noise model parameters.
```
import scipy.optimize
import cirq.contrib.noise_models as ccn
def get_obj_func(data_expectations):
all_results = []
def obj_func(x):
depol_prob, decay_prob, readout_prob = x
if depol_prob < 0 or decay_prob < 0 or readout_prob < 0:
# emulate constraints by returning a high cost if we
# stray into invalid territory
return 1000
sim = cirq.DensityMatrixSimulator(
noise=ccn.DepolarizingWithDampedReadoutNoiseModel(
depol_prob=depol_prob, decay_prob=decay_prob, bitflip_prob=readout_prob))
results = []
for theta in thetas:
density_result = sim.simulate(get_circuit(theta))
op = cirq.Z(qubit) * 1.
true_z_val = op.expectation_from_density_matrix(
density_result.final_density_matrix,
qubit_map=density_result.qubit_map, check_preconditions=False)
results.append(np.real_if_close(true_z_val).item())
results = np.array(results)
all_results.append(results)
cost = np.sum(np.abs(results - data_expectations))
return cost
return obj_func, all_results
def print_result(x):
depol_prob, decay_prob, readout_prob = x
print(f'depol = {depol_prob:.2%}')
print(f'decay = {decay_prob:.2%}')
print(f'readout = {readout_prob:.2%}')
dfb = df
dfb = dfb.head(5) # Remove this to do all qubits
len(dfb)
# Initial values
depol_prob = 0.01
decay_prob = 0.01
readout_prob = 0.01
opt_results = []
for i, entry in dfb.iterrows():
ofunc, results = get_obj_func(entry['z_vals'])
opt_result = scipy.optimize.minimize(ofunc,
[depol_prob, decay_prob, readout_prob],
method='nelder-mead',
options={'disp': True})
label = f"{entry['qubit'].row}, {entry['qubit'].col}"
print("Qubit", label)
print_result(opt_result.x)
opt_results.append(opt_result)
data_expectations = entry['z_vals']
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(11, 4))
ax1.plot(thetas, true_z_vals, label='True')
ax1.plot(thetas, data_expectations, 'o-', label=f'{label} Data')
ax1.plot(thetas, results[-1], '.-', label='Fit')
ax2.plot([], []) # advance color cycle
ax2.plot(thetas, np.abs(true_z_vals - data_expectations), 'o-', label='|Data - True|')
ax2.plot(thetas, np.abs(true_z_vals - results[-1]), '-', label='|Fit - True|')
ax1.legend(loc='best')
ax2.legend(loc='best')
fig.tight_layout()
plt.show()
```
```
library(hash)
library(xts)
library(lubridate)
library(forecast)
library(fpp)
# Constants used throughout the code
INPUT_FILE <- "../../../cocUptoDec2016.csv"
DATA_FOLDER <- "../data/topNComplaints"
```
# Base Vignette
Purpose:
- To provide a quick-start code snippet that loads the data into a usable format for the forecasting modules
- Establish a baseline forecast
```
# load the data
df <- read.csv(INPUT_FILE, stringsAsFactors = F)
df$Complaint.Date <- as.Date(df$Complaint.Date, format = "%m/%d/%Y")
df$NumComplaints <- 1
minDate <- min(df$Complaint.Date)
maxDate <- max(df$Complaint.Date)
head(df)
# pick top complaint types, and model only that data
topComplaintTypes <- data.frame(table(df$Complaint.Type))
topComplaintTypes <- topComplaintTypes[order(-topComplaintTypes$Freq),]
topComplaintTypes <- topComplaintTypes[1:10, ]
topComplaintTypes <- as.character(topComplaintTypes$Var1)
print(topComplaintTypes)
data <- df[df$Complaint.Type %in% topComplaintTypes, ]
print(unique(data$Complaint.Type))
```
# Create data files
For ease of modeling, construct data in the following format: `"Month", "Year", "Complaints"`, with missing values filled in.
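The same monthly roll-up with zero-filled gaps can be sketched in Python with pandas, for comparison (the column names mirror the R code below; the sample records are made up):

```python
import pandas as pd

# Hypothetical daily complaint records, one row per complaint
df = pd.DataFrame({
    "Complaint.Date": pd.to_datetime(["2013-01-05", "2013-01-20", "2013-03-02"]),
    "NumComplaints": [1, 1, 1],
})

# Roll up to monthly counts; resampling at month-start ("MS") frequency
# inserts the missing months (here Feb 2013) with a sum of zero
monthly = (df.set_index("Complaint.Date")["NumComplaints"]
             .resample("MS").sum())
print(monthly)
```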
```
# create the 'ideal' data set
# TODO Sahil, isn't there a better way to do this?
minYear <- year(minDate)
maxYear <- year(maxDate)
ideal <- data.frame(Month=character(), Year=integer(), stringsAsFactors=F)
for(year in seq(from=minYear, to=maxYear)) {
for(month in month.abb) {
r <- nrow(ideal)
month <- as.character(month)
ideal[nrow(ideal)+1,] <- c(month, year)
}
}
head(ideal)
constructMonthlyData <- function(data, complaintType) {
# make this a function for re-use
d <- data[data$Complaint.Type == complaintType, ]
# create xts object for rolling up the data
series <- xts(d$NumComplaints, d$Complaint.Date)
series <- apply.monthly(series, FUN = sum)
# create a df for easy access
monthlyData <- data.frame(Date=index(series), Complaints=coredata(series))
# create columns for join
monthlyData$Month <- month.abb[month(monthlyData$Date)]
monthlyData$Year <- year(monthlyData$Date)
joined <- merge(x = ideal, y = monthlyData, by = c("Month", "Year"), sort=F, all= T)
# don't need date
joined$Date <- NULL
# sort it by year-month, since R doesn't do it otherwise /endrant
joined <- joined[order(as.yearmon(paste0(joined$Year, "-", joined$Month), "%Y-%b")), ]
joined[is.na(joined$Complaints), ]$Complaints <- 0
joined
}
# create the files
for(complaintType in topComplaintTypes) {
joined <- constructMonthlyData(data, complaintType)
# one complaint type has a '/' in it, which messes up the paths
path <- file.path(DATA_FOLDER, paste0(gsub("/", "", complaintType), ".csv"))
print(paste0("Saving file ", path))
write.csv(joined, file=path, row.names=F)
}
```
# Baseline method
The purpose of this exercise is to establish a [baseline](http://machinelearningmastery.com/how-to-get-baseline-results-and-why-they-matter/) to help us compare the 'naive' method with ML models.
The modelling methods used are described in detail [here](https://www.otexts.org/fpp/2/3).
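As a language-agnostic reference, the four baselines used below can be sketched in a few lines of NumPy (an illustrative translation, not the `forecast` package's implementation):

```python
import numpy as np

def baseline_forecasts(y, h, season=12):
    """Sketch of the four naive baselines: mean, naive, seasonal naive, drift.
    y: 1-D array of historical values, h: forecast horizon."""
    y = np.asarray(y, dtype=float)
    mean_f = np.full(h, y.mean())            # mean method: forecast the average
    naive_f = np.full(h, y[-1])              # naive: repeat the last observation
    snaive_f = np.array([y[-season + (i % season)] for i in range(h)])  # repeat last season
    slope = (y[-1] - y[0]) / (len(y) - 1)    # drift: extrapolate first-to-last slope
    drift_f = y[-1] + slope * np.arange(1, h + 1)
    return mean_f, naive_f, snaive_f, drift_f

# Two years of made-up monthly data
y = np.arange(24.0)
mean_f, naive_f, snaive_f, drift_f = baseline_forecasts(y, h=12)
```

With this toy series, the seasonal naive forecast repeats last year's twelve values, while the drift method continues the linear trend.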
```
# trying it with one complaint type
complaintType <- topComplaintTypes[1]
monthly <- constructMonthlyData(data, complaintType)
monthly$Complaints
# convert it to a ts object
monthly <- ts(monthly$Complaints, start=c(minYear, 1), frequency = 12)
print(monthly)
seasonplot(monthly,ylab="Number of complaints", xlab="Year",
main=paste0("Seasonal plot for ", complaintType),
year.labels=TRUE, year.labels.left=TRUE, col=1:20, pch=19)
naiveMethodsPlot <- function(monthly, complaintType) {
h <- 12
trainStart <- c(2013, 1)
trainEnd <- c(2015, 1)
testStart <- trainEnd
testEnd <- c(2015, 12)
monthly2 <- window(monthly,start=trainStart, end=trainEnd)
monthlyAfter <- window(monthly, start=testStart, end=testEnd)
monthlyfit1 <- meanf(monthly2, h=h)
monthlyfit2 <- naive(monthly2, h=h)
monthlyfit3 <- snaive(monthly2, h=h)
monthlyfit4 <- rwf(monthly2, h=h, drift=TRUE)
plot(monthlyfit1, plot.conf=FALSE,
main=paste0("Forecasts for ", complaintType))
lines(monthlyAfter, lty=2)
lines(monthlyfit2$mean,col=2)
lines(monthlyfit3$mean,col=3)
lines(monthlyfit4$mean, col=6)
legend("topleft",col=c(1,4,2,3,6), lty=c(2, 1, 1, 1,1),
legend=c("Actual Data", "Pred: Mean method",
"Pred: Naive method","Pred: Seasonal naive method",
"Pred: Drift Method"))
# TODO: Remove this line, since later on, we'll have all the data for 2015
monthlyAfter <- window(monthly, start=c(maxYear, 1), end=c(maxYear, 6))
print(paste0(complaintType, ": Mean Method"))
print(accuracy(monthlyfit1, monthlyAfter))
print(paste0(complaintType, ": Naive Method"))
print(accuracy(monthlyfit2, monthlyAfter))
print(paste0(complaintType, ": Seasonal Method"))
print(accuracy(monthlyfit3, monthlyAfter))
print(paste0(complaintType, ": Drift Method"))
print(accuracy(monthlyfit4, monthlyAfter))
}
naiveMethodsPlot(monthly, topComplaintTypes[1])
# do this for other complaint types as well
for(complaintType in topComplaintTypes[2:length(topComplaintTypes)]) {
monthly <- constructMonthlyData(data, complaintType)
monthly <- ts(monthly$Complaints, start=c(minYear, 1), frequency = 12)
naiveMethodsPlot(monthly, complaintType)
}
```
## Boilerplate Code
The code below contains some boilerplate code that loads the data into a usable format
```
loadData <- function(dataFolder) {
files <- list.files(dataFolder)
data <- list()
for(file in files) {
df <- read.csv(paste0(dataFolder, "/", file), stringsAsFactors=F)
minYear <- min(df$Year)
complaintType <- substr(file,1,(nchar(file))-4)
tsObject <- ts(df$Complaints, start=c(minYear, 1), frequency = 12)
data[[complaintType]] <- tsObject
}
data
}
print(loadData(DATA_FOLDER))
```
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
tf.logging.set_verbosity(tf.logging.INFO) #This way we can see the training information
import os
from google.colab import drive
drive.mount('/content/drive')
%matplotlib inline
filedir = './drive/My Drive/Final/CNN_data'
filelist = os.listdir(filedir)
'''
with open(filedir + '/' + filelist[0], 'rb') as f:
X = np.load(f)
print(X.shape)
plt.imshow(X.reshape(40,61,3)[:,:,0], cmap='hot')
plt.show()
'''
'''
X = np.zeros((300,7320))
Y = np.zeros(300)
for i in range(len(filelist)):
file = filelist[i]
if(file[0] == '0'):
Y[i] = 0
else:
Y[i] = 1
with open(filedir + '/' + file, 'rb') as f:
data = np.load(f)
assert (len(data) == 7320)
X[i,:] = data
'''
with open(filedir + '/' + 'X', 'rb') as f:
X = np.load(f)
with open(filedir + '/' + 'Y', 'rb') as f:
Y = np.load(f).astype(np.int32)
print('X.shape: {}\nY.shape: {}'.format(X.shape, Y.shape))
'''
with open(filedir + '/' + 'X', 'wb') as f:
np.save(f, X)
with open(filedir + '/' + 'Y', 'wb') as f:
np.save(f, Y)
'''
def cnn_model_fn(features, labels, mode):
"""
Custom model function for a CNN estimator object
"""
#Input preprocessing layer
# Reshape X to 4-D tensor: [batch_size, width, height, channels]
# spectrograms are 40x183 pixels with one color channel, reshaped here as 40x61 with 3 channels (183 = 61 * 3)
input_layer = tf.reshape(features["x"],[-1,40,61,3]) #-1 means adjust batch size so that feature data fills the dimension of (40,61,3) before the starting the next batch element
#Module 1: Extraction
# Computes 32 new features using a 7x11 filter with ReLU activation. Filter dimensions preserve ratios of convolution 7/40~5/28, 11/(183/3)~5/28
# Padding is added to preserve width and height during convolution.
# The (1,1) stride makes each new feature have same dimensions as input spectrograph
# Input Tensor Shape: [batch_size, 40, 61, 3]
# Output Tensor Shape: [batch_size, 40, 61, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=[7,11],
strides=(1, 1),
padding="same",
activation=tf.nn.relu)
# First max pooling layer with a 3x4 filter and stride of 2
# The stride of 2 halves the image dimensions (rounding up)
# Retains the newest 32 features
# Input Tensor Shape: [batch_size, 40, 61, 32]
# Output Tensor Shape: [batch_size, 20, 31, 32]
pool1 = tf.layers.max_pooling2d(
inputs=conv1,
pool_size=[3,4],
strides=(2,2),
padding='same',
)
#Module 2: Extraction
# Computes 32 new features, totaling 32+32=64 using a 7x11 filter.
# Padding is added to preserve width and height.
# Input Tensor Shape: [batch_size, 20, 31, 32]
# Output Tensor Shape: [batch_size, 20, 31, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[7,11],
strides=(1, 1),
padding="same",
activation=tf.nn.relu
)
# Second max pooling layer with a 3x4 filter and stride of 2
# The stride of 2 halves the image dimensions (rounding up)
# Retains the total 64 features
# Input Tensor Shape: [batch_size, 20, 31, 64]
# Output Tensor Shape: [batch_size, 10, 16, 64]
pool2 = tf.layers.max_pooling2d(
inputs=conv2,
pool_size=[3,4],
strides=(2,2),
padding='same'
)
#Module 3: Prediction
# Flatten tensor into a batch of vectors, just like our input
# Input Tensor Shape: [batch_size, 10, 16, 64]
# Output Tensor Shape: [batch_size, 10 * 16 * 64 = 10240]
pool2_flattened = tf.reshape(
pool2,
[-1, 10 * 16 * 64]
)
# Densely connected layer with 5120 neurons
# Input Tensor Shape: [batch_size, 10 * 16 * 64 = 10240]
# Output Tensor Shape: [batch_size, 5120]
dense = tf.layers.dense(
inputs=pool2_flattened,
units=5120,
activation=tf.nn.relu)
# Add dropout operation; 0.6 probability that element will be kept
dropout = tf.layers.dropout(
inputs=dense,
rate=0.4,
training= (mode==tf.estimator.ModeKeys.TRAIN) )
# Input Tensor Shape: [batch_size, 5120]
# Output Tensor Shape: [batch_size, 2]
logits = tf.layers.dense(inputs=dropout, units=2)
#Modes
#Based on the mode, return a different value
#Modes include: Train, Test, Predict, Eval
# Predict Mode
predictions = {
#Generate the logits predictions as a dictionary
# One key for the flattened convolution features of each input image
"features_flat": pool2_flattened,
# One key for the dense features of each input image
"features_dense": dense,
# One key for the predicted classes from logits
"classes": tf.argmax(input=logits,axis=1),
# One key for the softmax probabilities computed from the logits
"probabilities": tf.nn.softmax(logits,name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Train & Eval Mode
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
# Train
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step()
)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op
)
# Eval
eval_metric_ops = {
#Generate the evaluation metrics as a dictionary
"accuracy": tf.metrics.accuracy(
labels=labels,
predictions=predictions["classes"]
)
}
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
eval_metric_ops=eval_metric_ops
)
#log appropriately
#https://stackoverflow.com/questions/46013115/how-to-change-global-step-in-tensorflows-skcompat
config = tf.estimator.RunConfig(
save_summary_steps=100,
log_step_count_steps=100 #this is where we display log outputs. should be a factor of training "steps" below
)
#create correct estimator using the custom "cnn_model_fn" defined above
digit_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn,
model_dir='./drive/My Drive/Final/tf_cnn_checkpoint/tf_cnn_spectograph_model',
config=config
)
#log progress while training
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log,
every_n_iter=1000 #Only affect the long log output. Does not affect how frequently we see "steps" in output. If not divisible by number of steps, it will take precedence. IE log_iter=5, #steps=11 -> Training will last for 16steps
)
#prepare training input with shuffling, batching, etc
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": X},
y=Y,
batch_size=50,
num_epochs=None,
shuffle=True
)
##train the model by calling "estimator.train"
digit_classifier.train(
input_fn=train_input_fn,
steps=10000, #Batch size = 50, total data = 300, so one epoch is 6 steps; 10,000 steps works out to roughly 1,667 epochs over the data
hooks=None #[logging_hook]
)
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
# Restore variables from disk.
saver.restore(sess, "/tmp/model.ckpt")
print("Model restored.")
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x":X},
num_epochs=1,
shuffle=False)
predictions = digit_classifier.predict(input_fn=predict_input_fn)
features = np.zeros((len(X), 10*16*64))
for i in range(len(X)):
eachdata = next(predictions)
features[i,:] = eachdata['features_flat']
with open('./drive/My Drive/Final/CNN_data/feature_extracted', 'wb') as f:
np.save(f, features)
max(features[0])
```
```
%config IPCompleter.greedy=True
```
# Neuron
Let's start with a simple neuron. From the biological point of view, a simplified view of the neuron is the following.

The dendrites are the inputs of the neuron: outputs of other neurons are connected to it through dendrites. The inputs are passed into the body of the neuron (nucleus), which can be excited (activated) or not. When the neuron is excited, its output is passed through the axon to other neurons. This is a huge simplification, but it is enough for our purposes.
How can we model this with an artificial neuron? We need to create a function that accepts the neuron's inputs and computes the output. For simplicity, the inputs are weighted and summed, the result is passed to the activation function, and its result is the output of the neuron. We can see this in the following picture.

*(We don't take threshold into account yet)*
Mathematically, we can rewrite the picture into a formula.
$$
y = f(x_1 \cdot w_1 + x_2 \cdot w_2 + \cdots + x_n \cdot w_n) \\
y = f\left(\sum_{i=1}^{n} x_i \cdot w_i\right) \\
y = f(\pmb{x} \times \pmb{w}^T) \\
y = f(\pmb{x} \pmb{w}^T)
$$
Before we continue, let's unify the terminology. I will mark vectors using bold text ($\pmb{x}$) and consider all one-dimensional vectors to be row vectors. All vector multiplications will be dot products (unless specified otherwise), so $\pmb{x} \pmb{w}^T$ is a dot product and the result is a scalar. I will index vectors from zero (note that the image uses number 1 as the first index).
As the activation function is most of the time set up in advance, during the training of the neuron (or the whole neural network) we are looking for the weights $\pmb{w}$ that lead to the best solution.
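The formula maps directly to code. A minimal NumPy sketch of a single neuron's forward pass (the input and weight values here are made up, and the activation is the 0/1 sign function defined in the next section):

```python
import numpy as np

def neuron(x, w, f):
    """Single artificial neuron: weighted sum of inputs passed through activation f."""
    return f(x @ w)  # x w^T as a dot product

x = np.array([1.0, -2.0, 0.5])  # example inputs
w = np.array([0.4, 0.1, 1.0])   # example weights
y = neuron(x, w, lambda s: 1 if s >= 0 else 0)  # weighted sum is 0.7, so y = 1
```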
And that is everything we need to know so far. Now we can move to the first model: the perceptron.
# Perceptron
Perceptron is the simplest model - it uses *sign* as the activation function *f*. The *sign* function is defined as follows.
$$
sign(x) = \begin{cases}
0 &\quad\text{if } x < 0\\
1 &\quad\text{if } x \geq 0 \\
\end{cases}
$$
The formula defines a separating hyperplane (i.e. a hyperplane that splits the data's feature space into two halves). For example, if the input data have two dimensions, the separating hyperplane is a line: all the data from the first class lie above the line, whereas all the data from the second class end up below it.
As the previous paragraph suggested, we can use the perceptron for classification tasks, that is, to split data into two classes. It is guaranteed that for linearly separable data (data that can be separated by a hyperplane), the perceptron learning algorithm (see below) always finds a separating hyperplane (you can check the proof, for example, in [[1]](#Bibliography)).
## Perceptron learning algorithm
The learning algorithm is very simple. First, the weights are initialized randomly. The algorithm then looks for an instance that is misclassified. If the true class of the instance is positive (and it has thus been classified as negative), the instance is added to the weight vector. If, on the other hand, the instance is negative (and has been classified as positive), the instance is subtracted from the weight vector. The algorithm ends when all the instances are classified correctly.
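Written as an update rule, with $y$ denoting the true class of the misclassified instance $\pmb{x}$:

$$
\pmb{w} \leftarrow \begin{cases}
\pmb{w} + \pmb{x} &\quad\text{if } y = 1 \text{ and } sign(\pmb{x} \pmb{w}^T) = 0\\
\pmb{w} - \pmb{x} &\quad\text{if } y = 0 \text{ and } sign(\pmb{x} \pmb{w}^T) = 1\\
\end{cases}
$$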
For the classification, I am going to use random data (I call it the playing dataset) generated by the *sklearn* library.
```
# Load libraries
import sklearn.datasets
import sklearn.model_selection
import sklearn.metrics
import matplotlib.pyplot as plt
import numpy as np
# Define the sign function
def sign(x):
return 0 if x < 0 else 1
# Generate data
data, classes = sklearn.datasets.make_blobs(n_samples=100, n_features=2, centers=2, random_state=42)
# Plot data
plt.scatter(data[:,0], data[:,1], c=classes)
plt.show()
```
These are the data we are trying to classify. Now let's write the perceptron learning algorithm.
```
# Initialize the weights
weights = np.random.RandomState(42).uniform(-2, 2, 2)
# Iterate until convergence
weights_changed = True
while weights_changed:
weights_changed = False
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights)
if prediction == target:
# correct classification
continue
elif target == 1:
# positive classified as negative - add the instance to the weights
weights = weights + instance
elif target == 0:
# negative classified as positive - subtract the instance from the weights
weights = weights - instance
weights_changed = True
```
As I said, the perceptron defines a separating hyperplane. The hyperplane is given by the weights of the perceptron, and as we are in 2D, the separating hyperplane is a line. You may remember the normal form of a line, $\alpha x + \beta y + \gamma = 0$. In our case, $\pmb{w}$ is the normal of the line, so $\alpha = w_0$, $\beta = w_1$ and $\gamma = 0$ (the normal is perpendicular to the line). Let's draw the separating hyperplane.
```
# Compute slope of the line
slope = - weights[0] / weights[1]
# Plot data
plt.scatter(data[:,0], data[:,1], c=classes)
# Plot the separating line
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0], slope * data.max(axis=0)[0]],
c='r')
plt.show()
```
As we can see, the line is between the two classes, but it is shifted a little bit towards the yellow class. We would probably like to have the line exactly between the two classes. However, since all the points are already classified correctly, the algorithm can't adjust the line any further. That is the general drawback of the perceptron: it will find a separating hyperplane, but it need not (and probably never will) be the best one. Let's change the data (by changing the generating seed) and train the perceptron again.
```
# DATA GENERATION
data, classes = sklearn.datasets.make_blobs(n_samples=100, n_features=2, centers=2, random_state=48)
# PERCEPTRON TRAINING
# Initialize the weights
weights = np.random.RandomState(42).uniform(-2, 2, 2)
# Iterate until convergence
weights_changed = True
while weights_changed:
weights_changed = False
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights)
if prediction == target:
# correct classification
continue
elif target == 1:
# positive classified as negative - add the instance to the weights
weights = weights + instance
elif target == 0:
# negative classified as positive - subtract the instance from the weights
weights = weights - instance
weights_changed = True
# PLOTTING
# Compute slope of the line
slope = - weights[0] / weights[1]
# Plot data
plt.scatter(data[:,0], data[:,1], c=classes)
# Plot the separating line
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0], slope * data.max(axis=0)[0]],
c='r')
plt.show()
```
As we can see, all the points are correctly classified; however, the separating hyperplane lies right next to the yellow points. That is definitely not the line we wanted to obtain! We can run the algorithm multiple times with different weight initializations and pick the best line we obtain. We can also iterate over the data in random order instead of sequentially. I will discuss other approaches (that don't need the involvement of the programmer) later.
## Simplifying the update rule
We can simplify the update rule a little bit by using the formula $\pmb{w} = \pmb{w} + (\textrm{target} - \textrm{prediction}) * \pmb{x}$. When the prediction is correct, the difference is $0$ and no update is made. When $\textrm{target} = 0$ and $\textrm{prediction} = 1$ (a negative instance classified as positive), the instance is subtracted. When $\textrm{target} = 1$ and $\textrm{prediction} = 0$ (a positive instance classified as negative), the instance is added (just as in the conditions above). We can thus rewrite the whole update into a single line.
The last thing is to keep track of weight changes. We can compare the new weights to the weights from the previous iteration; if they are equal, no changes have been made and the algorithm can end.
```
# DATA GENERATION
data, classes = sklearn.datasets.make_blobs(n_samples=100, n_features=2, centers=2, random_state=42)
# PERCEPTRON TRAINING
# Initialize the weights
weights = np.random.RandomState(42).uniform(-2, 2, 2)
# Iterate until convergence
old_weights = None
while (weights != old_weights).any():
old_weights = weights
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights)
# update the weights
weights = weights + (target - prediction) * instance
# PLOTTING
slope = - weights[0] / weights[1]
plt.scatter(data[:,0], data[:,1], c=classes)
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0], slope * data.max(axis=0)[0]],
c='r')
plt.show()
```
# Stepping of the algorithm
Now that we have the algorithm, we can look into it more closely. Because I promised you a lot of charts, let's plot each update. Firstly, let's create the dataset - I reduced the number of data points to make the example clearer.
```
data, classes = sklearn.datasets.make_blobs(
n_samples=10,
n_features=2,
centers=[[-1,0.4],[1,0.6]],
cluster_std=0.8,
random_state=82
)
plt.scatter(data[:,0], data[:,1], c=classes)
plt.show()
```
Now I will plot all the changes. On the charts below, the red line is the separating hyperplane and the black line is its normal. The misclassified instance (on which we are updating the weights) is marked green. The green line represents the instance vector - the vector that we want to add to (or subtract from) the weights. Lastly, the light red and cyan lines are the new separating line (after the update) and the corresponding new normal.
```
# PERCEPTRON TRAINING
# Initialize the weights
weights = np.random.RandomState(42).uniform(size=2)
# Iterate until convergence
weights_changed = True
while weights_changed:
weights_changed = False
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights)
# update the weights
weights_new = weights + (target - prediction) * instance
# plot the change
if (weights != weights_new).any():
slope = - weights[0] / weights[1] # calculate the slope
plt.figure(figsize=(8, 5))
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0], slope * data.max(axis=0)[0]],
c='r') # plot the separating line
plt.plot([0, weights[0]],[0, weights[1]], c='black') # plot the norm
plt.scatter(data[:,0], data[:,1], c=classes) # plot the data
plt.scatter(instance[0], instance[1], c='g') # plot the missclassified instance
plt.plot([0, instance[0]], [0, instance[1]], c='g', alpha=0.4) # plot the instance vector
plt.plot(
[weights[0], weights[0] + (target - prediction) * instance[0]],
[weights[1], weights[1] + (target - prediction) * instance[1]],
c='g', alpha=0.4) # plot the instance vector from the normal
plt.plot(
[0, weights[0] + (target - prediction) * instance[0]],
[0, weights[1] + (target - prediction) * instance[1]],
c='c', alpha=0.4) # plot new normal
slope_new = - weights_new[0] / weights_new[1] # calculate the slope of new line
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope_new * data.min(axis=0)[0], slope_new * data.max(axis=0)[0]],
c='r', alpha=0.2) # plot the separating line
plt.axis('equal')
plt.ylim(-2,4)
plt.show()
weights_changed = weights_changed or (weights != weights_new).any()
weights = weights_new
# PLOTTING THE FINAL WEIGHTS
slope = - weights[0] / weights[1]
plt.figure(figsize=(8, 5))
plt.scatter(data[:,0], data[:,1], c=classes)
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0], slope * data.max(axis=0)[0]],
c='r')
plt.show()
```
As you can see, after each update, the normal moves closer to (for positive instances) or away from (for negative instances) the misclassified instance. This is the main idea of the perceptron learning algorithm.
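A quick standalone numeric check of this idea: adding a misclassified positive instance $\pmb{x}$ to the weights raises the activation on that instance by exactly $\|\pmb{x}\|^2$, pushing its prediction toward the positive side (the numbers here are made up for illustration):

```
import numpy as np

w = np.array([-1.0, 0.5])   # current weights
x = np.array([2.0, 1.0])    # a positive instance, currently misclassified

before = x @ w              # -1.5, so sign() predicts 0 while the target is 1
w_new = w + x               # perceptron update for a false negative
after = x @ w_new           # before + ||x||^2 = -1.5 + 5.0 = 3.5
```

One update need not flip the prediction in general, but each one moves the activation in the right direction by $\|\pmb{x}\|^2 \ge 0$.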
# Bias
So far, we haven't used a bias. If you remember the line normal form $\alpha x+ \beta y+ \gamma =0$, we set $\gamma$ to zero. Because of that, the line always goes through the coordinate $[0,0]$. But what if we have data that can't be separated that way? We would like to introduce a bias and allow the algorithm to shift the separating hyperplane away from the origin. First, let's generate a new dataset.
```
data, classes = sklearn.datasets.make_blobs(
n_features=2,
centers=2,
random_state=44
)
plt.scatter(data[:,0], data[:,1], c=classes)
plt.show()
```
This dataset can't be separated using our perceptron algorithm, because no line going through $[0,0]$ can separate it. We will introduce the bias: we add a term $b$ to the linear combination of inputs.
$$
y=f(\pmb{x}\pmb{w}^T + b)
$$
The bias is learned the same way as the weights. When the algorithm misclassifies a positive instance, it increases the bias by 1. When a negative instance is misclassified, it decreases the bias by 1.
```
# PERCEPTRON TRAINING
# Initialize the weights
weights = np.random.RandomState(42).uniform(-2, 2, 2)
bias = 0
# Iterate until convergence
old_weights = None
old_bias = None
while (weights != old_weights).any() or bias != old_bias:
old_weights = weights
old_bias = bias
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights + bias)
# update the weights
weights = weights + (target - prediction) * instance
bias = bias + (target - prediction)
# plot
slope = - weights[0] / weights[1]
plt.scatter(data[:,0], data[:,1], c=classes)
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0] - bias / weights[1], slope * data.max(axis=0)[0] - bias / weights[1]],
c='r')
plt.show()
```
As we can see, the line moved down and no longer goes through $[0,0]$. Sometimes it is pointless to handle the bias separately. We can include it in the weights and pad the examples with a constant 1. It is the same as moving the data into a new dimension. Let's show it on one-dimensional data. We have negative data $\left\{1,2,3\right\}$ and positive data $\left\{4,5,6\right\}$. We would like to separate them by a hyperplane (in this case a point) that goes through 0. That is not possible, because only the point 0 itself could serve as the separating hyperplane.
```
data = np.array([[1],[2],[3],[4],[5],[6]])
classes = np.array([0,0,0,1,1,1])
plt.scatter(data, np.zeros((6,)), c=classes)
plt.scatter(0,0,c='r')
plt.show()
```
However, if we move the data into a new dimension (2D in this case), the separating hyperplane is a line and the data are linearly separable.
```
data = np.array([[1],[2],[3],[4],[5],[6]])
classes = np.array([0,0,0,1,1,1])
plt.scatter(data, np.ones((6,)), c=classes)
plt.scatter(0,0,c='r')
plt.plot([0, 6],[0, 1.75], c='r')
plt.show()
```
We can deal with the bias the same way. We add a new dimension containing only ones and use 3 weights in the perceptron algorithm.
```
# DATA GENERATION
data, classes = sklearn.datasets.make_blobs(
n_features=2,
centers=2,
random_state=44
)
data = np.hstack([data, np.ones((data.shape[0],1))])
# PERCEPTRON TRAINING
# Initialize the weights
weights = np.random.RandomState(42).uniform(-2, 2, 3)
# Iterate until convergence
old_weights = None
while (weights != old_weights).any():
old_weights = weights
# for every instance in the data
for instance, target in zip(data, classes):
# predict the output of the perceptron
prediction = sign(instance @ weights)
# update the weights
weights = weights + (target - prediction) * instance
# plot
slope = - weights[0] / weights[1]
bias = weights[2] / weights[1]
plt.scatter(data[:,0], data[:,1], c=classes)
plt.plot(
[data.min(axis=0)[0], data.max(0)[0]],
[slope * data.min(axis=0)[0] - bias, slope * data.max(axis=0)[0] - bias],
c='r')
plt.show()
```
# Multidimensional case
So far I have shown only two-dimensional cases. There is nothing that stops us from using higher-dimensional data, except that we won't be able to plot it. Let's try 10 dimensions, this time without plotting, as visualizing 10D data is not easily done.
```
# DATA GENERATION
data, classes = sklearn.datasets.make_blobs(
n_samples=1000,
n_features=10,
centers=2,
random_state=42
)
data = np.hstack([data, np.ones((data.shape[0],1))])
train_data, test_data, train_classes, test_classes = sklearn.model_selection.train_test_split(data, classes, test_size=0.15)
# PERCEPTRON TRAINING
weights = np.random.RandomState(42).uniform(-2, 2, 11)
old_weights = None
while (weights != old_weights).any():
old_weights = weights
for instance, target in zip(train_data, train_classes):
prediction = sign(instance @ weights)
weights = weights + (target - prediction) * instance
# TEST
predictions = [sign(instance @ weights) for instance in test_data]
print(f'Accuracy: {sklearn.metrics.accuracy_score(test_classes, predictions)}')
```
As we can see, the algorithm achieved $100\%$ accuracy - it separated the data correctly because they are linearly separable.
# Bibliography
[1] The Perceptron Learning Algorithm and its Convergence, Shivaram Kalyanakrishnan, 21 January 2017, [online](https://www.cse.iitb.ac.in/~shivaram/teaching/old/cs344+386-s2017/resources/classnote-1.pdf)
# Spiral-shaped distribution
```
Copyright (C) 2021
Code by Leopoldo Sarra and Florian Marquardt
Max Planck Institute for the Science of Light, Erlangen, Germany
http://www.mpl.mpg.de
This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.
If you find this code useful in your work, please cite our article
"Renormalized Mutual Information for Artificial Scientific Discovery", Leopoldo Sarra, Andrea Aiello, Florian Marquardt, arXiv:2005.01912
available on
https://arxiv.org/abs/2005.01912
```
In this notebook we show how one can extract an optimal feature by maximizing renormalized mutual information. This is the code used to generate the example with the spiral-shaped distribution in the paper.
We consider as input samples $x = (x_1,x_2)$ the transformation of Gaussian-distributed $x'=(x_1',x_2')$ given by
$$\begin{cases}
x_1 &= x_1' \cos(\alpha r') - x_2' \sin(\alpha r') \\
x_2 &= x_1' \sin(\alpha r') + x_2' \cos(\alpha r') \\
\end{cases}$$
with $r'=\sqrt{x_1'^2 + x_2'^2}$ and $\alpha$ a parameter that controls the twist of the transformation.
The goal is to find a one-dimensional feature $y=f(x_1,x_2)$ that preserves the largest amount of information about the input variables $x$.
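Before using the library's `SpiralDistribution` helper, the transformation above can be sketched directly in NumPy (a standalone illustration; it samples a standard normal rather than the correlated Gaussian configured below):

```
import numpy as np

def spiralize(xp, alpha=0.5):
    """Twist Gaussian samples xp (shape [N, 2]) by the radius-dependent angle alpha * r'."""
    r = np.sqrt(xp[:, 0]**2 + xp[:, 1]**2)
    c, s = np.cos(alpha * r), np.sin(alpha * r)
    return np.stack([xp[:, 0] * c - xp[:, 1] * s,
                     xp[:, 0] * s + xp[:, 1] * c], axis=1)

rng = np.random.default_rng(0)
xp = rng.normal(size=(1000, 2))
x = spiralize(xp)
# the twist is a per-point rotation, so it preserves each sample's radius
```

Because each point is only rotated (by an angle that grows with its radius), the radial density is unchanged; only the angular structure becomes spiral-shaped.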
```
# Load libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy as sc
from scipy import interpolate
import rmi.neuralnets as nn
import rmi.estimation as inf
import rmi.features as f
from rmi.examples.spiral import SpiralDistribution
from rmi.pca import pca
from tqdm import tqdm, trange
%load_ext line_profiler
%load_ext autoreload
%autoreload 2
# Define the input distribution
sp = SpiralDistribution(sx=0.8,
sy=1,
r=-0.7,
alpha=0.5)
```
## Feature Extraction
We train a neural network to find a feature function that optimizes Renormalized Mutual Information with the input.
In other words, the unsupervised network is optimizing the cost function
$$
C = - \tilde{I}(x,f(x)) + \lambda_\text{gauss} KL(f,g) = - \left( H(f) - \frac{1}{2} \langle \log ||\nabla f(x)||^2 \rangle_x \right) + \lambda_\text{gauss} KL(f||g)
$$
where $g$ is a standard normal distribution and the last term is only a regularization that fixes the output distribution to a Gaussian. It is used only to make the convergence of the network more stable; by reparametrization invariance, we can arbitrarily choose any deformation of the feature.
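To make the first term of this cost concrete, here is a rough standalone estimator of $\tilde{I}(x,f(x)) = H(f) - \frac{1}{2} \langle \log ||\nabla f(x)||^2 \rangle_x$ built from a plain histogram - a hypothetical sketch for intuition, not the estimator implemented in the `rmi` library:

```
import numpy as np

def renormalized_mi(feat, grad_feat, n_bins=100):
    """Histogram sketch of H(f) minus the gradient correction term."""
    p, edges = np.histogram(feat, bins=n_bins, density=True)
    width = edges[1] - edges[0]
    p = p[p > 0]
    H = -np.sum(p * np.log(p)) * width          # differential entropy of f(x)
    grad_sq = np.sum(grad_feat**2, axis=1)      # ||grad f||^2 per sample
    return H - 0.5 * np.mean(np.log(grad_sq))

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 2))
feat = 2.0 * x[:, 0]                       # linear feature f(x) = 2 x_1
grad = np.tile([2.0, 0.0], (len(x), 1))    # its (constant) gradient
rmi = renormalized_mi(feat, grad)
# for a linear feature of a standard normal input, the scale factor cancels
# between the two terms: rmi ~ 0.5 * log(2 * pi * e)
```

The cancellation of the scale factor is exactly the reparametrization invariance mentioned above: rescaling the feature changes $H(f)$ and the gradient term by the same amount.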
```
n_in = 2
n_out = 1
# Number of neurons in each layer of the network
n_neurons = 30
# Strength of the constraint on the output distribution
# of the feature (exploiting reparametrization invariance)
coeff_gauss = 5.0
# Options to estimate the entropy H(y)
# Number of bins
H_nbins = 180
# Size of the kernel placed on each point
# to make the histogram differentiable
# (in units of the histogram-cell-width)
H_kernel_size = 1
N_train = 30000
batchsize = 100
eta = 0.005
# Define the layout of the neural network
# The cost function is implicit when choosing the model RMIOptimizer
rmi_optimizer = nn.RMIOptimizer(H_nbins,
H_kernel_size,
coeff_gauss,
layers=[
nn.K.layers.Dense(n_neurons, activation="tanh",
input_shape=(n_in,)),
nn.K.layers.Dense(n_neurons, activation="tanh"),
nn.K.layers.Dense(n_out)
])
# Compile the network, i.e. choose the optimizer to use during the training
rmi_optimizer.compile(optimizer=nn.tf.optimizers.Adam(eta))
# Print the table with the structure of the network
rmi_optimizer.summary()
# Define an object that handles the training and
# automatically saves the model and the training history
rmi_net = nn.Net(rmi_optimizer,
path="models/spiral",
mode="a")
# Perform the training of the neural network
rmi_net.fit_generator(lambda: sp.sample(batchsize), N_train)
# Plot the training history (and other useful quantities)
rmi_net.plot_rmi_cost()
```
## Plot the feature
Here we plot the extracted feature on top of the input distribution.
Intuitively, if the feature follows the "main direction" along which the input distribution lies, it provides the most useful one-dimensional "curvilinear" coordinate.
```
# Range of the plot
xmin = -4
xmax = 4
# Number of points in the grid
N = 100
x_linspace = np.linspace(xmin, xmax, N)
x1_grid, x2_grid = np.meshgrid(x_linspace, x_linspace, indexing='ij')
x_points = np.array([x1_grid.flatten(), x2_grid.flatten()])
# Get the theoretical distribution of x
# (instead of sampling and making a 2d histogram
# we simply transform the Gaussian distribution)
P_x = sp.P_x(x_points.T).reshape([N, N])
# Get the extracted feature (on an equally-spaced grid)
# (for plotting it)
feat = rmi_optimizer(x_points.T).numpy().reshape([N,N])
# ------------------------------------------------------------
# Plot
# Plot the theoretical distribution and the extracted feature on the top
plt.figure(figsize=[10,5],dpi=300)
plt.subplot(1,2,1)
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.gca().set_aspect('equal')
# Draw the input distribution on the background
plt.contourf(x1_grid,x2_grid, P_x, 1000, cmap=plt.cm.BrBG_r)
# Draw a line along the main direction of the spiral on the background
spiral_theta = sp.get_theta()[0]
spiral_line = np.array([np.cos(spiral_theta), np.sin(spiral_theta)])[None,:]*np.linspace(-6,6,50)[:,None]
spiral_axis = sp.spiralize_batch(spiral_line)
plt.plot(spiral_axis[:,0], spiral_axis[:,1], "--", alpha=0.3, c="black")
# Draw the contour lines of the extracted feature
plt.contour(x1_grid,x2_grid, feat,25)
plt.colorbar()
# ------------------------------------------------------------
# Exploit the reparametrization invariance:
# We plot the same feature but transforming the output distribution
# to uniform between 0 and 1.
def feature_scaleFixed(rmi_optimizer, x_points, rescale=1):
"""Fix the scale of the feature so that it outputs a uniform distribution.
We choose y=f(x) to have a uniform distribution.
The only remaining ambiguity is the sign of the feature's gradient (increasing or decreasing feature).
This can be fixed by flipping every decreasing feature (choose rescale = -1 if the feature is decreasing, otherwise rescale = 1).
Args:
rmi_optimizer (function): should return the feature calculated on x_points
x_points (array_like): [N_samples, 2] batch of points to rescale. It is important to provide a large batch in order to properly estimate the cumulative of the feature.
rescale (int, optional): either 1 or -1, factor to which rescale the final uniform feature. Defaults to 1.
Returns:
array_like: rmi_optimizer(x_points) rescaled so that its output distribution is uniform
"""
tf_x = nn.tf.convert_to_tensor(x_points, dtype=nn.K.backend.floatx())
tf_f = nn.tf.transpose(rescale*rmi_optimizer(tf_x))
f = tf_f.numpy()
tf_y_minimum = nn.tf.reduce_min(tf_f)
tf_y_maximum = nn.tf.reduce_max(tf_f)
tf_ydlt = (tf_y_maximum-tf_y_minimum)/nn.tf.cast(rmi_optimizer.H_nbins-1, nn.K.backend.floatx())
tf_y_min = tf_y_minimum - 3*tf_ydlt
tf_y_max = tf_y_maximum + 3*tf_ydlt
tf_y_linspace = nn.tf.reshape(nn.tf.linspace(tf_y_min, tf_y_max, rmi_optimizer.H_nbins),
[rmi_optimizer.H_nbins, 1])
tf_P_y, tf_ydelta = rmi_optimizer.tf_calcProbabilityDistribution(tf_f)
Py = tf_P_y.numpy()
ydelta = tf_ydelta.numpy()
# It is very easy to obtain the function to transform the feature to have uniform distribution
# Indeed, we know that the cumulative of the distribution gives the right result.
# It is crucial to multiply by ydelta because ydelta is not fixed, but different each time (and different from 1)
G = np.cumsum(Py)*ydelta
y_linspace = tf_y_linspace.numpy().flatten()
f_norm = sc.interpolate.CubicSpline(y_linspace, G)(f)
return f_norm
# Normalize the extracted feature to have uniform distribution
a = - np.sign(feat[0, 0]-feat[-1, -1])
fnorm = feature_scaleFixed(rmi_optimizer, x_points.T, a).reshape([N, N])
# Plot again for comparison
# We only changed the "distance" between two different contour lines
plt.subplot(1,2,2)
plt.title("Reparametrization: P(y)= uniform")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.gca().set_aspect('equal')
plt.contourf(x1_grid,x2_grid, P_x, 1000, cmap=plt.cm.BrBG_r)
plt.plot(spiral_axis[:,0], spiral_axis[:,1], "--", alpha=0.3, c="black")
plt.contour(x1_grid,x2_grid, fnorm,25)
plt.colorbar()
plt.show()
```
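The key trick in `feature_scaleFixed` above is the probability integral transform: pushing samples through (an estimate of) their own cumulative distribution yields values uniform on $[0,1]$. A minimal standalone illustration using ranks as the empirical CDF, instead of the histogram-based cumulative used in the function:

```
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=2.0, size=50_000)  # some non-uniform feature values

# empirical CDF evaluated at each sample: (rank + 0.5) / N
ranks = np.argsort(np.argsort(y))
u = (ranks + 0.5) / len(y)
# u is (approximately) uniform on (0, 1): mean 0.5, variance ~ 1/12
```

This rank-based version is exact but not differentiable; the function above uses a smooth histogram cumulative with a spline so the same idea can sit inside a trained network.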
## Calculate RMI of the extracted feature
```
# Sample a large batch for a more precise estimation
x_batch = sp.sample(100000)
# Calculate the feature and its gradient with respect to the input
feat, grad_feat = rmi_net.get_feature_and_grad(x_batch)
# Estimate Renormalized Mutual Information
inf.RenormalizedMutualInformation(feat, grad_feat)
```
### Principal Component Analysis feature
We calculate the feature provided by Principal Component Analysis performed on the input samples $x$.
PCA can only provide a linear feature, so it cannot "deform" to follow the shape of a non-Gaussian distribution.
```
# Perform PCA
mypca = pca(sp.sample(100000),1)
# Calculate the feature on an equally spaced grid
# (for plotting)
fpca = mypca.transform(x_points.T).reshape([100,100])
plt.figure(figsize=[10,5],dpi=300)
plt.subplot(1,2,1)
plt.title("Feature contours")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.gca().set_aspect('equal')
plt.contourf(x1_grid,x2_grid, P_x, 1000, cmap=plt.cm.BrBG_r)
plt.plot(spiral_axis[:,0], spiral_axis[:,1], "--", alpha=0.3, c="black")
plt.contour(x1_grid,x2_grid, fpca,25)
plt.colorbar()
plt.show()
```
## Comparison with Principal Component Analysis
```
N_estimate = 100000
inf_pca = inf.RenormalizedMutualInformation(*f.pca(sp.sample(N_estimate)))
inf_nn = inf.RenormalizedMutualInformation(*rmi_net.get_feature_and_grad(sp.sample(N_estimate)))
print("RMI of the NN feature: \t\t%2.2f" % inf_nn)
print("RMI of the PCA feature: \t%2.2f" % inf_pca)
print("Delta: \t\t\t\t%2.2f" % (inf_nn - inf_pca))
```
---
#### Code to save the plots for the paper
```
plt.figure(figsize=[3,3],dpi=500)
# plt.title("Feature contours")
# plt.xlabel(r"$x_1$")
# plt.ylabel(r"$x_2$")
plt.xticks([-4,4], ["-4","4"])
plt.yticks([-4,4], ["-4","4"])
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.gca().set_aspect('equal')
plt.contourf(x1_grid,x2_grid, P_x, 1000, cmap=plt.cm.BrBG_r)
plt.plot(spiral_axis[:,0], spiral_axis[:,1], "--", alpha=0.3, c="black")
plt.contour(x1_grid,x2_grid, fnorm,15)
plt.savefig("figures/spiral" + "_nn.pdf")
plt.show()
plt.figure(figsize=[3,3],dpi=500)
# plt.title("Feature contours")
# plt.xlabel(r"$x_1$")
# plt.ylabel(r"$x_2$")
plt.xticks([-4,4], ["-4","4"])
plt.yticks([-4,4], ["-4","4"])
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.gca().set_aspect('equal')
plt.contourf(x1_grid,x2_grid, P_x, 1000, cmap=plt.cm.BrBG_r)
plt.plot(spiral_axis[:,0], spiral_axis[:,1], "--", alpha=0.3, c="black")
plt.contour(x1_grid,x2_grid, fpca,15)
plt.savefig("figures/spiral" + "_pca.pdf")
plt.show()
```
# Captcha Solver
```
import cv2
import keras
import numpy as np
from matplotlib import pyplot as plt
%%capture
!unzip generated_captcha_images.zip
```
## Data Processing
### Extracting Single letters from Captcha.
```
import os
import os.path, glob, imutils
captcha_image_folder = "generated_captcha_images"
output_folder = "extracted_letter_images"
# Getting a list of all the captcha images we need to process
captcha_image_files = glob.glob(os.path.join(captcha_image_folder,'*'))
counts = {}
# loop over the image paths
for i,image_file in enumerate(captcha_image_files):
print("[INFO] processing image {}/{}".format(i + 1, len(captcha_image_files)))
# grab the base filename as the text
file_name = os.path.basename(image_file)
correct_text = os.path.splitext(file_name)[0]
# Load the image
image = cv2.imread(image_file,cv2.IMREAD_GRAYSCALE)
# Adding some extra padding around the image
image = cv2.copyMakeBorder(image,8,8,8,8,cv2.BORDER_REPLICATE)
# threshold the image (convert it to pure black and white)
thresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
#Finding the contours
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(contours)  # handles the different return formats of OpenCV 2/3/4
letter_image_regions = []
# Now we can loop through each of the four contours and extract the letter inside of each one
for contour in contours:
(x,y,w,h) = cv2.boundingRect(contour)
if h == 0:
print(x,y,w,h)
if w/h > 1.25:
# This contour is too wide to be a single letter!
# Split it in half into two letter regions!
half_width = int(w/2)
letter_image_regions.append((x, y, half_width, h))
letter_image_regions.append((x + half_width, y, half_width, h))
else:
letter_image_regions.append((x,y,w,h))
if len(letter_image_regions)>4:
continue
letter_image_regions = sorted(letter_image_regions, key=lambda x: x[0])
for box,text in zip(letter_image_regions, correct_text):
(x,y,w,h) = box
letter_image = image[y-2: y+h+2,x-2:x+w+2]
saving_path = os.path.join(output_folder,text)
# If the folder doesn't exist, make it
if not os.path.exists(saving_path):
os.makedirs(saving_path)
count = counts.get(text,1)
# Writing image to file
p = os.path.join(saving_path,f'{str(count).zfill(6)}.png')
cv2.imwrite(p,letter_image)
counts[text]= count + 1
for i,j in counts.items():
print(f'{i}: {j}',end= ' ')
mapping = {'A': 8, 'B': 9, 'C': 10, 'D': 11, 'E': 12, 'F': 13, 'G': 14,
'H': 15, 'J': 16, 'K': 17, 'L': 18, 'M': 19, 'N': 20, 'P': 21,
'Q': 22, 'R': 23, 'S': 24, 'T': 25, 'U': 26,'V': 27, 'W': 28,
'X': 29, 'Y': 30, 'Z': 31, '2':0,'3': 1, '4': 2,'5': 3, '6':4,
'7': 5,'8': 6,
'9': 7}
```
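Since `mapping` is one-to-one, the class-index-to-character lookup needed at prediction time can be precomputed by inverting the dictionary once (shown here on a small illustrative subset named `demo_mapping` so as not to shadow the real `mapping`):

```
# a small subset of the mapping above, just to illustrate the inversion
demo_mapping = {'2': 0, '3': 1, 'A': 8, 'B': 9}
inv_demo = {v: k for k, v in demo_mapping.items()}  # class index -> character
```

Looking up `inv_demo[prediction]` then replaces the linear scan over `mapping.items()` used in the prediction loop at the end of this notebook.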
## Building and Training model
### Making training and validation data
```
base_filename = 'extracted_letter_images'
y = np.array([])
len_letter = []
images = []
for folder in os.listdir(base_filename):
Subdir = os.path.join(base_filename,folder)
file_names = glob.glob( os.path.join( Subdir, '*' ))
n_files = len(file_names)
len_letter.append(n_files)
label = mapping[folder]
labels = np.full(n_files,label)
if y.shape == (0,):
y = labels
else:
y = np.append(y,labels)
for name in file_names:
image = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
images.append(image)
```
#### Resizing images
```
img_width, img_height = 20, 20
resized_images = [cv2.resize(img, (img_width, img_height), interpolation = cv2.INTER_CUBIC) for img in images]
np.array(resized_images[0]).shape
y.shape
```
### Splitting Data into training and validation
```
from keras.models import Sequential
from keras.layers import Conv2D,Dense,Flatten,MaxPooling2D,Dropout
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(np.array(resized_images),y, train_size=0.8, test_size=0.2)
X_train = X_train.reshape(len(X_train),20,20,1).astype('float32')
X_test = X_test.reshape(len(X_test),20,20,1).astype('float32')
y_train = keras.utils.to_categorical(y_train, 32)
y_test = keras.utils.to_categorical(y_test, 32)
len(X_train), len(y_train)
len(X_test), len(y_test)
```
### Building Model
```
model = Sequential()
model.add(Conv2D(32,(3,3),input_shape = (20,20,1),activation = 'relu', name = 'Conv2D_1'))
model.add(Conv2D(32,(3,3),activation = 'relu', name = 'Conv2D_2'))
model.add(MaxPooling2D(pool_size=(2, 2), name = 'MaxPool2D_1'))
model.add(Dropout(0.25, name = 'Dropout_1'))
model.add(Conv2D(64,(3,3),activation = 'relu', name = 'Conv2d_3'))
model.add(Conv2D(64,(3,3),activation = 'relu', name = 'Conv2d_4'))
model.add(MaxPooling2D(pool_size=(2, 2), name = 'MaxPool2D_2'))
model.add(Flatten(name = 'Flatten'))
model.add(Dense(256,activation='relu', name = 'Dense_1'))
model.add(Dropout(0.5, name = 'Dropout_2'))
model.add(Dense(64,activation='relu',name = 'Dense_2'))
model.add(Dense(32,activation='softmax', name = 'Dense_3'))
model.compile(loss = 'categorical_crossentropy',optimizer='adam',metrics = ['accuracy'])
model.summary()
```
### Training Model
```
model.fit(X_train,y_train,batch_size=32,validation_data=(X_test, y_test), epochs=10, verbose=1)
```
### Saving Model
```
model.save("Captcha_Solver.h5")
```
### Evaluating Model
```
def plot_model_history(model_name, history, epochs):
print(model_name)
plt.figure(figsize=(15, 5))
# summarize history for accuracy
plt.subplot(1, 2 ,1)
plt.plot(np.arange(0, len(history['acc'])), history['acc'], 'r')
plt.plot(np.arange(1, len(history['val_acc'])+1), history['val_acc'], 'g')
plt.xticks(np.arange(0, epochs+1, epochs/10))
plt.title('Training Accuracy vs. Validation Accuracy')
plt.xlabel('Num of Epochs')
plt.ylabel('Accuracy')
plt.legend(['train', 'validation'], loc='best')
plt.subplot(1, 2, 2)
plt.plot(np.arange(1, len(history['loss'])+1), history['loss'], 'r')
plt.plot(np.arange(1, len(history['val_loss'])+1), history['val_loss'], 'g')
plt.xticks(np.arange(0, epochs+1, epochs/10))
plt.title('Training Loss vs. Validation Loss')
plt.xlabel('Num of Epochs')
plt.ylabel('Loss')
plt.legend(['train', 'validation'], loc='best')
plt.show()
plot_model_history('model',model.history.history,32)
```
## Testing the model on Outside Data
### Captcha Preprocessing
```
from skimage.transform import resize
img_size = 20
predictions = []
img = cv2.imread('captcha.jpg', cv2.IMREAD_GRAYSCALE)
plt.imshow(img)
image = cv2.copyMakeBorder(img,8,8,8,8,cv2.BORDER_REPLICATE)
thresh = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = imutils.grab_contours(contours)  # handles the different return formats of OpenCV 2/3/4
letter_image_regions = []
for contour in contours:
(x,y,w,h) = cv2.boundingRect(contour)
if h == 0:
print(x,y,w,h)
if w/h > 1.25:
half_width = int(w/2)
letter_image_regions.append((x, y, half_width, h))
letter_image_regions.append((x + half_width, y, half_width, h))
else:
letter_image_regions.append((x,y,w,h))
if len(letter_image_regions)>4:
print('Sorry! but the captcha is more than 4 letters!!')
else:
letter_image_regions = sorted(letter_image_regions, key=lambda x: x[0])
for box in letter_image_regions:  # note: correct_text from the training cell is not needed here
(x,y,w,h) = box
letter_image = image[y-2: y+h+2,x-2:x+w+2]
X = cv2.resize(letter_image, (img_width, img_height), interpolation = cv2.INTER_CUBIC)
X = X.reshape(1,20,20,1).astype('float32')
prediction = model.predict(X).argmax(axis = 1)[0]
for key,value in mapping.items():
if value == prediction:
prediction = key
break
predictions.append(prediction)
print(f"The Captcha Text is {''.join(predictions)}")
```
```
%load_ext autoreload
%autoreload 2
import glob
import nibabel as nib
import os
import time
import pandas as pd
import numpy as np
from mricode.utils import log_textfile
from mricode.utils import return_csv
#from mricode.utils import return_iter
path_output = './'
path_tfrecords = '/data2/res64/down/'
path_csv = '/data2/csv/'
filename_res = {'train': 'intell_residual_train.csv', 'val': 'intell_residual_valid.csv', 'test': 'intell_residual_test.csv'}
filename_final = filename_res
sample_size = 'site16_allimages'
batch_size = 8
onlyt1 = False
t1_mean = 0.35196779465675354
t2_mean = 0.5694633522033692
t1_std = 0.8948413240464094
t2_std = 1.2991791534423829
train_df, val_df, test_df, norm_dict = return_csv(path_csv, filename_final, False)
cat_cols = {'female': 2, 'race.ethnicity': 5, 'high.educ_group': 4, 'income_group': 8, 'married': 6}
num_cols = [x for x in list(val_df.columns) if '_norm' in x]
```
## 01 Resolution
```
output = []
for col in num_cols:
    mean = test_df[col].mean()
    mse_norm = np.mean(np.square(test_df[col] - mean))
    output.append([col, mse_norm])
for col in list(cat_cols.keys()):
    # Baseline for categorical columns: accuracy of always predicting the majority class
    mean = test_df[col].value_counts().idxmax()
    mse_norm = np.mean(test_df[col] == mean)
    output.append([col, mse_norm])
df = pd.DataFrame(output)
df.columns = ['colname', 'mse']
df_base = df.copy()
df_add = {'new_simplecnnsmall__allimages_down64': 'down64',
          'new_simplecnnsmall__allimages_cropped64': 'cropped64',
          'new_simplecnnsmall__allimages_down128': 'down128',
          'new_simplecnnsmall__allimages_cropped128': 'cropped128',
          'simplecnnsmall__allimages_down256': 'org256'}
df_add_keys = list(df_add.keys())
for df_name in df_add_keys:
    df = pd.read_csv('./output/' + df_name + '/df_best.csv')
    df.columns = [df_name + str(x) for x in df.columns]
    df_base = pd.merge(df_base, df, left_on='colname', right_on=df_name + 'name', how='left')
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))]
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])]
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].to_csv('./output_csv/01_new_resolution_num_256.csv')
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])].to_csv('./output_csv/01_new_resolution_cat_256.csv')
```
## 02 Model Type
```
output = []
for col in num_cols:
    mean = test_df[col].mean()
    mse_norm = np.mean(np.square(test_df[col] - mean))
    output.append([col, mse_norm])
for col in list(cat_cols.keys()):
    mean = test_df[col].value_counts().idxmax()
    mse_norm = np.mean(test_df[col] == mean)
    output.append([col, mse_norm])
df = pd.DataFrame(output)
df.columns = ['colname', 'mse']
df_base = df.copy()
df_add = {'new_simplecnnsmall__allimages_down64': 'simplecnnsmall',
          'new_simplecnnnormal_allimages_down64': 'simplecnnnormal',
          'new_simplecnnnormal_autofocus_allimages_down64': 'simplecnnnormal_auto',
          'new2_densenet_allimages_down64': 'densenet',
          'new_densenet_allimages_autofocus_down64': 'densenet_auto'}
df_add_keys = list(df_add.keys())
for df_name in df_add_keys:
    df = pd.read_csv('./output/' + df_name + '/df_best.csv')
    df.columns = [df_name + str(x) for x in df.columns]
    df_base = pd.merge(df_base, df, left_on='colname', right_on=df_name + 'name', how='left')
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].mean()
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])].mean()
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].to_csv('./output_csv/02_new_model_type_num.csv')
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])].to_csv('./output_csv/03_new_model_type_cat.csv')
```
## 03 MRI Types
```
output = []
for col in num_cols:
    mean = test_df[col].mean()
    mse_norm = np.mean(np.square(test_df[col] - mean))
    output.append([col, mse_norm])
for col in list(cat_cols.keys()):
    mean = test_df[col].value_counts().idxmax()
    mse_norm = np.mean(test_df[col] == mean)
    output.append([col, mse_norm])
df = pd.DataFrame(output)
df.columns = ['colname', 'mse']
df_base = df.copy()
df_add = {'new2_densenet_allimages_down64': 't1t2',
          'new_densenet_allimages_t1t2def_down64': 't1t2def',
          'new_densenet_allimages_t1_down64': 't1',
          'new_densenet_allimages_t2_down64': 't2',
          'new_densenet_allimages_def_down64': 'def'}
df_add_keys = list(df_add.keys())
for df_name in df_add_keys:
    df = pd.read_csv('./output/' + df_name + '/df_best.csv')
    df.columns = [df_name + str(x) for x in df.columns]
    df_base = pd.merge(df_base, df, left_on='colname', right_on=df_name + 'name', how='left')
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].mean()
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])]
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].to_csv('./output_csv/03_new_types_num.csv')
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])].to_csv('./output_csv/03_new_types_cat.csv')
```
## 04 Training
```
output = []
for col in num_cols:
    mean = test_df[col].mean()
    mse_norm = np.mean(np.square(test_df[col] - mean))
    output.append([col, mse_norm])
for col in list(cat_cols.keys()):
    mean = test_df[col].value_counts().idxmax()
    mse_norm = np.mean(test_df[col] == mean)
    output.append([col, mse_norm])
df = pd.DataFrame(output)
df.columns = ['colname', 'mse']
df_base = df.copy()
df_add = {'new2_densenet_allimages_down64': 'desnet_t1t2',
          'new_simplecnnnormal_allimages_down64': 'simplenormal_t1t2',
          'new_cross_allimages_down64': 'cross',
          'new_hiearch_allimages_down64': 'simplenormal_hierch',
          'new_hiearch_densenet_allimages_down64': 'densenet_hierch',
          'new_densenet_cross_allimages_down64': 'cross_densenet1',
          'new2_densenet_cross_allimages_down64': 'cross_densenet2'}
df_add_keys = list(df_add.keys())
for df_name in df_add_keys:
    df = pd.read_csv('./output/' + df_name + '/df_best.csv')
    df.columns = [df_name + str(x) for x in df.columns]
    df_base = pd.merge(df_base, df, left_on='colname', right_on=df_name + 'name', how='left')
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))]
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])]
add_col = [x for x in df_base.columns if 'best_loss_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[~(aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity']))].to_csv('./output_csv/04_multitask_num.csv')
add_col = [x for x in df_base.columns if 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = ['colname', 'mse'] + [df_add[x] for x in df_add_keys]
aa.loc[aa['colname'].isin(['female', 'income_group', 'high.educ_group', 'married', 'race.ethnicity'])].to_csv('./output_csv/04_multitask_cat.csv')
train_df, val_df, test_df, norm_dict = return_csv(path_csv, filename_final, False)
train_df['abcd_site'].value_counts()
train_df = train_df[train_df['abcd_site']==16]
val_df = val_df[val_df['abcd_site']==16]
test_df = test_df[test_df['abcd_site']==16]
cat_cols = {'female': 2, 'race.ethnicity': 5, 'high.educ_group': 4, 'income_group': 8, 'married': 6}
num_cols = [x for x in list(val_df.columns) if '_norm' in x]
output = []
for col in num_cols:
    mean = test_df[col].mean()
    mse_norm = np.mean(np.square(test_df[col] - mean))
    output.append([col, mse_norm])
for col in list(cat_cols.keys()):
    mean = test_df[col].value_counts().idxmax()
    mse_norm = np.mean(test_df[col] == mean)
    output.append([col, mse_norm])
df = pd.DataFrame(output)
df.columns = ['colname', 'mse']
df_base = df.copy()
df_add = ['runAllImages64_DenseNet_T1T2_site16_norm_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_onlyIntel_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20emultitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_onlyIntel_20emultitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_onlyIntel_20e_onlyIntel_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_onlyIntel_20e_SimpleCNN_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_onlyIntel_20e_onlyIntel_SimpleCNN_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20e_SimpleCNN_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20e_norm_T1_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20e_norm_T2_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20e_norm_def_multitask_test.csv',
          'runAllImages64_DenseNet_T1T2_site16_norm_20e_norm_T1T2def_multitask_test.csv',
          ]
for df_name in df_add:
    df = pd.read_csv('./output/' + df_name)
    df.columns = [df_name + str(x) for x in df.columns]
    df_base = pd.merge(df_base, df, left_on='colname', right_on=df_name + 'name', how='left')
df_base.head()
add_col = [x for x in df_base.columns if 'best_loss_test' in x or 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = range(0,aa.shape[1])
aa[[0,1] + [x for x in range(2,aa.shape[1],2)]].to_csv('ana1.csv')
add_col = [x for x in df_base.columns if 'best_loss_test' in x or 'best_acc_test' in x]
aa = df_base[['colname', 'mse'] + add_col]
aa.columns = range(0,aa.shape[1])
aa[[0,1] + [x+1 for x in range(2,aa.shape[1],2)]].to_csv('ana2.csv')
df_base.to_csv('output_site16.csv')
test_df['female'].value_counts()
66/(66+56)
from tf_explain.core.smoothgrad import SmoothGrad
```
# "Enabling Easy Zipapp Installs on Windows"
> "How to prepare a Windows system for a good PYZ experience."
- author: jhermann
- toc: false
- branch: master
- badges: true
- comments: true
- published: true
- categories: [python, deployment]
- image: images/copied_from_nb/img/python/python+windows.png

## Zipapps in a Nutshell
Zipapps are a way to distribute Python applications
and all of their dependencies in a single binary file.
This is comparable to statically linked golang apps or Java's ‘executable JARs’.
Their main advantage is that distributing and installing them is quite simple.
Running Python code directly from ZIP archives is nothing new: [PEP 273](https://www.python.org/dev/peps/pep-0273/) dates back to 2001, and its implementation shipped with Python 2.3 in the form of the ``zipimport`` module.
[PEP 441](https://www.python.org/dev/peps/pep-0441/) builds on this and describes mechanisms to bundle full applications into a single ZIP file that can be made executable. It was approved in 2015 and a first implementation appeared in Python 3.5 via the ``zipapp`` module.
See the PEP for details on how making a ZIP into an executable file works; in essence, on POSIX systems a ‘bang path’ (shebang line) naming the Python interpreter is prepended to the ZIP archive. The interpreter recognizes that the ‘script’ is really a whole application archive and acts accordingly. On Windows, zipapps *MUST* carry the ``.pyz`` extension, which is bound to the ``py`` wrapper command; that wrapper in turn reads the bang path and calls a matching Python interpreter from the installed set.
To display the bang path of a zipapp, use this command:
python3 -m zipapp --info foo.pyz
If you want to change the requested Python version to one that is actually installed or that you prefer, change the bang path as part of the installation process:
python3 -m zipapp -p '/usr/bin/env python3.8' -o ~/bin/foo foo.pyz
This can also be done on an ad-hoc basis, by explicitly calling the desired interpreter:
python3.8 foo.pyz … # POSIX
py -3.8 foo.pyz … # Windows
Well-known tools to build new zipapps, outside of the Python core, are [pex](https://github.com/pantsbuild/pex) (Twitter) and [shiv](https://github.com/linkedin/shiv) (LinkedIn). See their documentation for details on bundling your own applications.
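For a quick first experiment you do not even need those tools: the stdlib ``zipapp`` module can bundle a directory containing a ``__main__.py`` all by itself. Here is a minimal sketch on POSIX; ``myapp`` and its contents are made-up names for illustration:

```shell
# Lay out a tiny application package (hypothetical example)
mkdir -p myapp
printf 'print("hello from a zipapp")\n' > myapp/__main__.py

# Bundle the directory into an executable archive with a bang path
python3 -m zipapp myapp -o myapp.pyz -p '/usr/bin/env python3'

# Run it explicitly, or rely on the bang path / .pyz association
python3 myapp.pyz
```

Unlike pex or shiv, the stdlib module does not resolve or bundle third-party dependencies for you, which is why those tools exist.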
## Setting Up Windows 10 for Zipapps
On Windows, because there is no ‘+x’ flag, things are a bit more complicated than on POSIX.
Zipapps **MUST** have a ``.pyz`` extension,
for which the ``py`` launcher is registered as the default application.
The net effect is that such files become executable and are handed over to the launcher
*if* you add a few environment settings to your machine.
In the user-specific environment settings, add a new ``PATHEXT`` variable
(or extend an existing one), with the value ``%PATHEXT%;.PYZ``.
Also edit the ``PATH`` one and add a new ``%LOCALAPPDATA%\bin`` entry.
Save everything (click “OK”), open a *new* command window, and verify
the changes with
echo %PATHEXT% & echo %PATH%
Create the new bin directory by calling ``md %LOCALAPPDATA%\bin``.
Now you can place a zipapp file like ``foo.pyz`` in that directory,
and it is immediately callable as ``foo``.
To get such a test subject, you can build
[shiv](https://github.com/linkedin/shiv) with itself:
git clone https://github.com/linkedin/shiv.git
cd shiv
py -3 -m venv --prompt shiv venv
venv\Scripts\activate.bat
python -m pip install -e .
shiv -e shiv.cli:main -o %LOCALAPPDATA%\bin\shiv.pyz .
deactivate
shiv --version
## Variations
If that makes more sense to you, you can change the system-wide
variables instead of the user-specific ones, and choose paths that are
global for all users (like ``C:\usr\bin`` or similar).
To make zipapps available network-wide, you can store them under ``%APPDATA%``, so that you only have to maintain them once if you regularly work on several machines in the same network. Just make sure the same Python version is installed on every machine.
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
> Note: This is an archived TF1 notebook. These are configured
to run in TF2's
[compatibility mode](https://www.tensorflow.org/guide/migrate)
but will run in TF1 as well. To use TF1 in Colab, use the
[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)
magic.
Keras is a high-level API to build and train deep learning models. It's used for
fast prototyping, advanced research, and production, with three key advantages:
- *User friendly*<br>
Keras has a simple, consistent interface optimized for common use cases. It
provides clear and actionable feedback for user errors.
- *Modular and composable*<br>
Keras models are made by connecting configurable building blocks together,
with few restrictions.
- *Easy to extend*<br> Write custom building blocks to express new ideas for
research. Create new layers, loss functions, and develop state-of-the-art
models.
## Import tf.keras
`tf.keras` is TensorFlow's implementation of the
[Keras API specification](https://keras.io). This is a high-level
API to build and train models that includes first-class support for
TensorFlow-specific functionality, such as [eager execution](#eager_execution),
`tf.data` pipelines, and [Estimators](./estimators.md).
`tf.keras` makes TensorFlow easier to use without sacrificing flexibility and
performance.
To get started, import `tf.keras` as part of your TensorFlow program setup:
```
!pip install pyyaml # Required to save models in YAML format
import tensorflow.compat.v1 as tf
from tensorflow.keras import layers
print(tf.version.VERSION)
print(tf.keras.__version__)
```
`tf.keras` can run any Keras-compatible code, but keep in mind:
* The `tf.keras` version in the latest TensorFlow release might not be the same
as the latest `keras` version from PyPI. Check `tf.keras.__version__`.
* When [saving a model's weights](#weights_only), `tf.keras` defaults to the
[checkpoint format](./checkpoints.md). Pass `save_format='h5'` to
use HDF5.
## Build a simple model
### Sequential model
In Keras, you assemble *layers* to build *models*. A model is (usually) a graph
of layers. The most common type of model is a stack of layers: the
`tf.keras.Sequential` model.
To build a simple, fully-connected network (i.e. multi-layer perceptron):
```
model = tf.keras.Sequential()
# Adds a densely-connected layer with 64 units to the model:
model.add(layers.Dense(64, activation='relu'))
# Add another:
model.add(layers.Dense(64, activation='relu'))
# Add a softmax layer with 10 output units:
model.add(layers.Dense(10, activation='softmax'))
```
### Configure the layers
There are many `tf.keras.layers` available with some common constructor
parameters:
* `activation`: Set the activation function for the layer. This parameter is
specified by the name of a built-in function or as a callable object. By
default, no activation is applied.
* `kernel_initializer` and `bias_initializer`: The initialization schemes
that create the layer's weights (kernel and bias). This parameter is a name or
a callable object. The kernel defaults to the `"Glorot uniform"` initializer,
and the bias defaults to zeros.
* `kernel_regularizer` and `bias_regularizer`: The regularization schemes
that apply to the layer's weights (kernel and bias), such as L1 or L2
regularization. By default, no regularization is applied.
The following instantiates `tf.keras.layers.Dense` layers using constructor
arguments:
```
# Create a sigmoid layer:
layers.Dense(64, activation='sigmoid')
# Or:
layers.Dense(64, activation=tf.sigmoid)
# A linear layer with L1 regularization of factor 0.01 applied to the kernel matrix:
layers.Dense(64, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A linear layer with L2 regularization of factor 0.01 applied to the bias vector:
layers.Dense(64, bias_regularizer=tf.keras.regularizers.l2(0.01))
# A linear layer with a kernel initialized to a random orthogonal matrix:
layers.Dense(64, kernel_initializer='orthogonal')
# A linear layer with a bias vector initialized to 2.0s:
layers.Dense(64, bias_initializer=tf.keras.initializers.constant(2.0))
```
## Train and evaluate
### Set up training
After the model is constructed, configure its learning process by calling the
`compile` method:
```
model = tf.keras.Sequential([
    # Adds a densely-connected layer with 64 units to the model:
    layers.Dense(64, activation='relu', input_shape=(32,)),
    # Add another:
    layers.Dense(64, activation='relu'),
    # Add a softmax layer with 10 output units:
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```
`tf.keras.Model.compile` takes three important arguments:
* `optimizer`: This object specifies the training procedure. Pass it optimizer
instances from the `tf.train` module, such as
`tf.train.AdamOptimizer`, `tf.train.RMSPropOptimizer`, or
`tf.train.GradientDescentOptimizer`.
* `loss`: The function to minimize during optimization. Common choices include
mean square error (`mse`), `categorical_crossentropy`, and
`binary_crossentropy`. Loss functions are specified by name or by
passing a callable object from the `tf.keras.losses` module.
* `metrics`: Used to monitor training. These are string names or callables from
the `tf.keras.metrics` module.
The following shows a few examples of configuring a model for training:
```
# Configure a model for mean-squared error regression.
model.compile(optimizer=tf.train.AdamOptimizer(0.01),
              loss='mse',       # mean squared error
              metrics=['mae'])  # mean absolute error

# Configure a model for categorical classification.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.01),
              loss=tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
```
### Input NumPy data
For small datasets, use in-memory [NumPy](https://www.numpy.org/)
arrays to train and evaluate a model. The model is "fit" to the training data
using the `fit` method:
```
import numpy as np
def random_one_hot_labels(shape):
    n, n_class = shape
    classes = np.random.randint(0, n_class, n)
    labels = np.zeros((n, n_class))
    labels[np.arange(n), classes] = 1
    return labels
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.fit(data, labels, epochs=10, batch_size=32)
```
`tf.keras.Model.fit` takes three important arguments:
* `epochs`: Training is structured into *epochs*. An epoch is one iteration over
the entire input data (this is done in smaller batches).
* `batch_size`: When passed NumPy data, the model slices the data into smaller
batches and iterates over these batches during training. This integer
specifies the size of each batch. Be aware that the last batch may be smaller
if the total number of samples is not divisible by the batch size.
* `validation_data`: When prototyping a model, you want to easily monitor its
performance on some validation data. Passing this argument—a tuple of inputs
and labels—allows the model to display the loss and metrics in inference mode
for the passed data, at the end of each epoch.
Here's an example using `validation_data`:
```
import numpy as np
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
val_data = np.random.random((100, 32))
val_labels = random_one_hot_labels((100, 10))
model.fit(data, labels, epochs=10, batch_size=32,
          validation_data=(val_data, val_labels))
```
### Input tf.data datasets
Use the [Datasets API](./datasets.md) to scale to large datasets
or multi-device training. Pass a `tf.data.Dataset` instance to the `fit`
method:
```
# Instantiates a toy dataset instance:
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32)
dataset = dataset.repeat()
# Don't forget to specify `steps_per_epoch` when calling `fit` on a dataset.
model.fit(dataset, epochs=10, steps_per_epoch=30)
```
Here, the `fit` method uses the `steps_per_epoch` argument—this is the number of
training steps the model runs before it moves to the next epoch. Since the
`Dataset` yields batches of data, this snippet does not require a `batch_size`.
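If you want each epoch to cover the whole dataset exactly once, the step count can be derived from the sample count and the batch size. A small sketch (the numbers are illustrative, not from the guide):

```python
import math

n_samples = 1000   # illustrative dataset size
batch_size = 32

# One epoch = ceil(n_samples / batch_size) steps; the last batch may be partial.
steps_per_epoch = math.ceil(n_samples / batch_size)
print(steps_per_epoch)  # 32
```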
Datasets can also be used for validation:
```
dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.batch(32).repeat()
val_dataset = tf.data.Dataset.from_tensor_slices((val_data, val_labels))
val_dataset = val_dataset.batch(32).repeat()
model.fit(dataset, epochs=10, steps_per_epoch=30,
          validation_data=val_dataset,
          validation_steps=3)
```
### Evaluate and predict
The `tf.keras.Model.evaluate` and `tf.keras.Model.predict` methods can use NumPy
data and a `tf.data.Dataset`.
To *evaluate* the inference-mode loss and metrics for the data provided:
```
data = np.random.random((1000, 32))
labels = random_one_hot_labels((1000, 10))
model.evaluate(data, labels, batch_size=32)
model.evaluate(dataset, steps=30)
```
And to *predict* the output of the last layer in inference for the data provided,
as a NumPy array:
```
result = model.predict(data, batch_size=32)
print(result.shape)
```
## Build advanced models
### Functional API
The `tf.keras.Sequential` model is a simple stack of layers that cannot
represent arbitrary models. Use the
[Keras functional API](https://keras.io/getting-started/functional-api-guide/)
to build complex model topologies such as:
* Multi-input models,
* Multi-output models,
* Models with shared layers (the same layer called several times),
* Models with non-sequential data flows (e.g. residual connections).
Building a model with the functional API works like this:
1. A layer instance is callable and returns a tensor.
2. Input tensors and output tensors are used to define a `tf.keras.Model`
instance.
3. This model is trained just like the `Sequential` model.
The following example uses the functional API to build a simple, fully-connected
network:
```
inputs = tf.keras.Input(shape=(32,)) # Returns a placeholder tensor
# A layer instance is callable on a tensor, and returns a tensor.
x = layers.Dense(64, activation='relu')(inputs)
x = layers.Dense(64, activation='relu')(x)
predictions = layers.Dense(10, activation='softmax')(x)
```
Instantiate the model given inputs and outputs.
```
model = tf.keras.Model(inputs=inputs, outputs=predictions)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Trains for 5 epochs
model.fit(data, labels, batch_size=32, epochs=5)
```
### Model subclassing
Build a fully-customizable model by subclassing `tf.keras.Model` and defining
your own forward pass. Create layers in the `__init__` method and set them as
attributes of the class instance. Define the forward pass in the `call` method.
Model subclassing is particularly useful when
[eager execution](./eager.ipynb) is enabled since the forward pass
can be written imperatively.
Key Point: Use the right API for the job. While model subclassing offers
flexibility, it comes at a cost of greater complexity and more opportunities for
user errors. If possible, prefer the functional API.
The following example shows a subclassed `tf.keras.Model` using a custom forward
pass:
```
class MyModel(tf.keras.Model):

    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.num_classes = num_classes
        # Define your layers here.
        self.dense_1 = layers.Dense(32, activation='relu')
        self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

    def call(self, inputs):
        # Define your forward pass here,
        # using layers you previously defined (in `__init__`).
        x = self.dense_1(inputs)
        return self.dense_2(x)

    def compute_output_shape(self, input_shape):
        # You need to override this function if you want to use the subclassed model
        # as part of a functional-style model.
        # Otherwise, this method is optional.
        shape = tf.TensorShape(input_shape).as_list()
        shape[-1] = self.num_classes
        return tf.TensorShape(shape)
```
Instantiate the new model class:
```
model = MyModel(num_classes=10)
# The compile step specifies the training configuration.
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
### Custom layers
Create a custom layer by subclassing `tf.keras.layers.Layer` and implementing
the following methods:
* `build`: Create the weights of the layer. Add weights with the `add_weight`
method.
* `call`: Define the forward pass.
* `compute_output_shape`: Specify how to compute the output shape of the layer
given the input shape.
* Optionally, a layer can be serialized by implementing the `get_config` method
and the `from_config` class method.
Here's an example of a custom layer that implements a `matmul` of an input with
a kernel matrix:
```
class MyLayer(layers.Layer):

    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        shape = tf.TensorShape((input_shape[1], self.output_dim))
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=shape,
                                      initializer='uniform',
                                      trainable=True)
        # Make sure to call the `build` method at the end
        super(MyLayer, self).build(input_shape)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

    def compute_output_shape(self, input_shape):
        shape = tf.TensorShape(input_shape).as_list()
        shape[-1] = self.output_dim
        return tf.TensorShape(shape)

    def get_config(self):
        base_config = super(MyLayer, self).get_config()
        base_config['output_dim'] = self.output_dim
        return base_config

    @classmethod
    def from_config(cls, config):
        return cls(**config)
```
Create a model using your custom layer:
```
model = tf.keras.Sequential([
    MyLayer(10),
    layers.Activation('softmax')])

# The compile step specifies the training configuration
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Trains for 5 epochs.
model.fit(data, labels, batch_size=32, epochs=5)
```
## Callbacks
A callback is an object passed to a model to customize and extend its behavior
during training. You can write your own custom callback, or use the built-in
`tf.keras.callbacks` that include:
* `tf.keras.callbacks.ModelCheckpoint`: Save checkpoints of your model at
regular intervals.
* `tf.keras.callbacks.LearningRateScheduler`: Dynamically change the learning
rate.
* `tf.keras.callbacks.EarlyStopping`: Interrupt training when validation
performance has stopped improving.
* `tf.keras.callbacks.TensorBoard`: Monitor the model's behavior using
[TensorBoard](https://tensorflow.org/tensorboard).
To use a `tf.keras.callbacks.Callback`, pass it to the model's `fit` method:
```
callbacks = [
    # Interrupt training if `val_loss` stops improving for over 2 epochs
    tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
    # Write TensorBoard logs to `./logs` directory
    tf.keras.callbacks.TensorBoard(log_dir='./logs')
]
model.fit(data, labels, batch_size=32, epochs=5, callbacks=callbacks,
          validation_data=(val_data, val_labels))
```
<a name='save_and_restore'></a>
## Save and restore
<a name='weights_only'></a>
### Weights only
Save and load the weights of a model using `tf.keras.Model.save_weights`:
```
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10, activation='softmax')])

model.compile(optimizer=tf.train.AdamOptimizer(0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Save weights to a TensorFlow Checkpoint file
model.save_weights('./weights/my_model')
# Restore the model's state,
# this requires a model with the same architecture.
model.load_weights('./weights/my_model')
```
By default, this saves the model's weights in the
[TensorFlow checkpoint](./checkpoints.md) file format. Weights can
also be saved to the Keras HDF5 format (the default for the multi-backend
implementation of Keras):
```
# Save weights to a HDF5 file
model.save_weights('my_model.h5', save_format='h5')
# Restore the model's state
model.load_weights('my_model.h5')
```
### Configuration only
A model's configuration can be saved—this serializes the model architecture
without any weights. A saved configuration can recreate and initialize the same
model, even without the code that defined the original model. Keras supports
JSON and YAML serialization formats:
```
# Serialize a model to JSON format
json_string = model.to_json()
json_string
import json
import pprint
pprint.pprint(json.loads(json_string))
```
Recreate the model (newly initialized) from the JSON:
```
fresh_model = tf.keras.models.model_from_json(json_string)
```
Serializing a model to YAML format requires that you install `pyyaml` *before you import TensorFlow*:
```
yaml_string = model.to_yaml()
print(yaml_string)
```
Recreate the model from the YAML:
```
fresh_model = tf.keras.models.model_from_yaml(yaml_string)
```
Caution: Subclassed models are not serializable because their architecture is
defined by the Python code in the body of the `call` method.
### Entire model
The entire model can be saved to a file that contains the weight values, the
model's configuration, and even the optimizer's configuration. This allows you
to checkpoint a model and resume training later—from the exact same
state—without access to the original code.
```
# Create a trivial model
model = tf.keras.Sequential([
layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(data, labels, batch_size=32, epochs=5)
# Save entire model to a HDF5 file
model.save('my_model.h5')
# Recreate the exact same model, including weights and optimizer.
model = tf.keras.models.load_model('my_model.h5')
```
<a name='eager_execution'></a>
## Eager execution
[Eager execution](./eager.ipynb) is an imperative programming
environment that evaluates operations immediately. This is not required for
Keras, but is supported by `tf.keras` and useful for inspecting your program and
debugging.
All of the `tf.keras` model-building APIs are compatible with eager execution.
And while the `Sequential` and functional APIs can be used, eager execution
especially benefits *model subclassing* and building *custom layers*—the APIs
that require you to write the forward pass as code (instead of the APIs that
create models by assembling existing layers).
See the [eager execution guide](./eager.ipynb#build_a_model) for
examples of using Keras models with custom training loops and `tf.GradientTape`.
## Distribution
### Estimators
The [Estimators](./estimators.md) API is used for training models
for distributed environments. This targets industry use cases such as
distributed training on large datasets that can export a model for production.
A `tf.keras.Model` can be trained with the `tf.estimator` API by converting the
model to a `tf.estimator.Estimator` object with
`tf.keras.estimator.model_to_estimator`. See
[Creating Estimators from Keras models](./estimators.md#creating-estimators-from-keras-models).
```
model = tf.keras.Sequential([layers.Dense(64, activation='relu', input_shape=(32,)),
layers.Dense(10,activation='softmax')])
model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
loss='categorical_crossentropy',
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(model)
```
Note: Enable [eager execution](./eager.ipynb) for debugging
[Estimator input functions](./premade_estimators.md#create-input-functions)
and inspecting data.
### Multiple GPUs
`tf.keras` models can run on multiple GPUs using
`tf.distribute.DistributionStrategy`. This API provides distributed
training on multiple GPUs with almost no changes to existing code.
Currently, `tf.distribute.MirroredStrategy` is the only supported
distribution strategy. `MirroredStrategy` does in-graph replication with
synchronous training using all-reduce on a single machine. To use
`DistributionStrategy` with Keras, convert the `tf.keras.Model` to a
`tf.estimator.Estimator` with `tf.keras.estimator.model_to_estimator`, then
train the estimator.
The following example distributes a `tf.keras.Model` across multiple GPUs on a
single machine.
First, define a simple model:
```
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10,)))
model.add(layers.Dense(1, activation='sigmoid'))
optimizer = tf.train.GradientDescentOptimizer(0.2)
model.compile(loss='binary_crossentropy', optimizer=optimizer)
model.summary()
```
Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object
used to distribute the data across multiple devices—with each device processing
a slice of the input batch.
```
def input_fn():
x = np.random.random((1024, 10))
y = np.random.randint(2, size=(1024, 1))
x = tf.cast(x, tf.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(10)
dataset = dataset.batch(32)
return dataset
```
Next, create a `tf.estimator.RunConfig` and set the `train_distribute` argument
to the `tf.distribute.MirroredStrategy` instance. When creating
`MirroredStrategy`, you can specify a list of devices or set the `num_gpus`
argument. The default uses all available GPUs, like the following:
```
strategy = tf.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)
```
Convert the Keras model to a `tf.estimator.Estimator` instance:
```
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model,
config=config,
model_dir='/tmp/model_dir')
```
Finally, train the `Estimator` instance by providing the `input_fn` and `steps`
arguments:
```
keras_estimator.train(input_fn=input_fn, steps=10)
```
# Wafer map pattern classification using MultiNN
- Directory
/data/WMPC_CNN_0_0_softmax.pickle
...
/data/WMPC_MFE_0_0_softmax.pickle
...
```
import pickle
import os
import sys
import numpy as np
from tensorflow.keras.layers import Input, Dense, MaxPooling2D, Concatenate
from tensorflow.keras.applications.vgg16 import VGG16
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.preprocessing import StandardScaler
DIM = 64
REPLICATION = 10
BATCH_SIZE = 32
MAX_EPOCH = 1000
TRAIN_SIZE_LIST = [500, 5000, 50000, 162946]
early_stopping = tf.keras.callbacks.EarlyStopping(patience=20, restore_best_weights=True, verbose=0)
with open('../data/y.pickle', 'rb') as f:
y = pickle.load(f)
y_onehot = tf.keras.utils.to_categorical(y)
def FNN(lr=1e-4):
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(9, activation='softmax')])
model.compile(optimizer= tf.keras.optimizers.Adam(lr=lr),
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
REP_ID = 0
RAN_NUM = 27407 + REP_ID
print('Replication:', REP_ID)
for TRAIN_SIZE_ID in range(4):
TRAIN_SIZE = TRAIN_SIZE_LIST[TRAIN_SIZE_ID]
y_trnval, y_tst = train_test_split(y_onehot, test_size=10000, random_state=RAN_NUM)
if TRAIN_SIZE == 162946:
pass
else:
y_trnval, _ = train_test_split(y_trnval, train_size=TRAIN_SIZE, random_state=RAN_NUM)
filename_MFE = '../data/WMPC_'+'MFE_'+str(TRAIN_SIZE)+'_'+str(REP_ID)+'_'
filename_CNN = '../data/WMPC_'+'CNN_'+str(TRAIN_SIZE)+'_'+str(REP_ID)+'_'
with open(filename_MFE + 'softmax.pickle', 'rb') as f:
y_trnval_hat_mfe, y_tst_hat_mfe = pickle.load(f)
with open(filename_CNN + 'softmax.pickle', 'rb') as f:
y_trnval_hat_cnn, y_tst_hat_cnn = pickle.load(f)
X_trnval_concat = np.concatenate([y_trnval_hat_mfe, y_trnval_hat_cnn], axis=1)
X_tst_concat = np.concatenate([y_tst_hat_mfe, y_tst_hat_cnn], axis=1)
labels = np.unique(np.argmax(y_trnval, 1))
model = FNN()
log = model.fit(X_trnval_concat, y_trnval, validation_split=0.2,
epochs=MAX_EPOCH, batch_size=BATCH_SIZE,
callbacks=[early_stopping], verbose=0)
y_trnval_hat = model.predict(X_trnval_concat)
y_tst_hat = model.predict(X_tst_concat)
macro = f1_score(np.argmax(y_tst, 1), np.argmax(y_tst_hat, 1), labels=labels, average='macro')
micro = f1_score(np.argmax(y_tst, 1), np.argmax(y_tst_hat, 1), labels=labels, average='micro')
cm = confusion_matrix(np.argmax(y_tst, 1), np.argmax(y_tst_hat, 1))
filename = '../result/WMPC_'+'Stacking_'+str(TRAIN_SIZE)+'_'+str(REP_ID)+'_'
with open(filename+'f1_score.pickle', 'wb') as f:
pickle.dump([macro, micro, cm], f)
with open(filename+'softmax.pickle', 'wb') as f:
pickle.dump([y_trnval_hat,y_trnval], f)
with open(filename+'coef_.pickle', 'wb') as f:
pickle.dump(model.get_weights(), f)  # Keras models have no coef_ attribute; save the layer weights instead
```
```
import numpy as np
def sigmoid(x):
return 1.0 / (1.0 + np.exp(-x))
# return x * (x > 0)
def sigmoid_derivative(output):
return output * (1 - output)
# return 1.0 * (output > 0)
# Integer-to-binary conversion
int2binary = {}
binary_dim = 9
largest_number = pow(2, binary_dim)
def int2bin(int_num):
b_temp = bin(int_num)[2:]
len_diff = binary_dim - len(b_temp)
b_temp = '0' * len_diff + b_temp
result = [int(i) for i in b_temp]
return result
binary = np.array([ int2bin(i) for i in range(largest_number)])
for i in range(largest_number):
int2binary[i] = binary[i]
# Network architecture
n_input = 2
hidden1 = 20
n_output = 1
learning_rate = 1
# Initialize model parameters
U = np.random.randn(n_input, hidden1)
V = np.random.randn(hidden1, hidden1)
W = np.random.randn(hidden1, n_output)
dU = np.zeros_like(U)
dV = np.zeros_like(V)
dW = np.zeros_like(W)
# Number of training iterations
n_epochs = 10000
for i in range(n_epochs + 1):
# Generate an addition problem; divide by 2 so the sum cannot overflow binary_dim bits
a_int = np.random.randint(largest_number / 2)
b_int = np.random.randint(largest_number / 2)
if i == 0 or i == n_epochs:
a_int = 144
b_int = 177
a = int2binary[a_int]
b = int2binary[b_int]
c_int = a_int + b_int
c = int2binary[c_int]
d = np.zeros_like(c)
loss = 0
# Training example
X = np.array([a, b]).T
y = np.array([c]).T
hs = [] # hidden state at each time step
hs.append(np.zeros((1, hidden1)))
os = [] # prediction at each time step
# forward
for t in range(binary_dim):
xt = X[binary_dim - t - 1] # process bits from least significant to most significant
ht = sigmoid(np.dot(xt, U) + np.dot(hs[-1], V)) # +prev_hidden
ot = sigmoid(np.dot(ht, W))
hs.append(ht)
os.append(ot)
loss += pow(ot - y[binary_dim - t - 1], 2)[0][0]
# predict
d[binary_dim - t - 1] = np.round(ot)[0][0]
# backpropagation through time
future_d_ht = np.zeros((1, hidden1))
for t in reversed(range(binary_dim)):
xt = X[binary_dim - t - 1].reshape(1, -1)
ht = hs[t + 1]
ht_prev = hs[t]
ot = os[t]
#d_loss/d_ot
d_ot = ot - y[binary_dim - t - 1]
d_ot_output = sigmoid_derivative(ot) * d_ot
dW += ht.T.dot(d_ot_output)
d_ht = d_ot_output.dot(W.T) + future_d_ht
d_ht_output = sigmoid_derivative(ht) * d_ht
dU += xt.T.dot(d_ht_output)
dV += ht_prev.T.dot(d_ht_output)
# propagate the hidden-state gradient to the previous time step
future_d_ht = d_ht_output.dot(V.T)
# Apply gradient updates
U -= learning_rate * dU
V -= learning_rate * dV
W -= learning_rate * dW
# Reset gradients
dU *= 0
dV *= 0
dW *= 0
if i % 1000 == 0:
print("loss:" + str(loss))
print("Pred:" + str(d))
print("True:" + str(c))
out = 0
for index, x in enumerate(reversed(d)):
out += x * pow(2, index)
print(str(a_int) + " + " + str(b_int) + " = " + str(out))
print("------------")
```
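As a quick sanity check on the backpropagation above, the identity used by `sigmoid_derivative` (the derivative written in terms of the sigmoid's output, `s * (1 - s)`) can be compared against a finite-difference estimate. This is a standalone sketch, independent of the training loop:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(output):
    # derivative expressed through the sigmoid's output s: s * (1 - s)
    return output * (1 - output)

x, eps = 0.3, 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)  # central difference
analytic = sigmoid_derivative(sigmoid(x))
print(abs(numeric - analytic) < 1e-9)  # True
```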
# Functions and Methods Homework
Complete the following questions:
____
**Write a function that computes the volume of a sphere given its radius.**
<p>The volume of a sphere is given as $$\frac{4}{3} πr^3$$</p>
```
import math

def vol(rad):
    # use math.pi rather than the rough 22/7 approximation
    return (4/3) * math.pi * rad**3
# Check
vol(2)
```
___
**Write a function that checks whether a number is in a given range (inclusive of high and low)**
```
def ran_check(num,low,high):
    if low <= num <= high:
        print(f'{num} is in the range between {low} and {high}')
    else:
        print(f'{num} is outside the range between {low} and {high}')

# Check
ran_check(5,2,7)
```
If you only wanted to return a boolean:
```
def ran_bool(num,low,high):
return num in range(low,high+1)
ran_bool(3,1,10)
```
____
**Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.**
Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?'
Expected Output :
No. of Upper case characters : 4
No. of Lower case Characters : 33
HINT: Two string methods that might prove useful: **.isupper()** and **.islower()**
If you feel ambitious, explore the Collections module to solve this problem!
```
def up_low(s):
x=0
y=0
for i in s:
if i.isupper():
x += 1
elif i.islower():
y += 1
else:
pass
return x,y
s = 'Hello Mr. Rogers, how are you this fine Tuesday?'
up_low(s)
```
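Following the hint about the Collections module, here is a sketch of an alternative using `collections.Counter`; the name `up_low_counter` is our own:

```python
from collections import Counter

def up_low_counter(s):
    # tally every character into one of three buckets in a single pass
    counts = Counter('upper' if c.isupper() else 'lower' if c.islower() else 'other'
                     for c in s)
    return counts['upper'], counts['lower']

s = 'Hello Mr. Rogers, how are you this fine Tuesday?'
print(up_low_counter(s))  # (4, 33)
```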
____
**Write a Python function that takes a list and returns a new list with unique elements of the first list.**
Sample List : [1,1,1,1,2,2,3,3,3,3,4,5]
Unique List : [1, 2, 3, 4, 5]
```
def unique_list(lst):
    # set() drops duplicates; convert back to a list as the problem asks
    return list(set(lst))
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
```
____
**Write a Python function to multiply all the numbers in a list.**
Sample List : [1, 2, 3, -4]
Expected Output : -24
```
def multiply(numbers):
x=1
for i in numbers:
x *= i
return x
multiply([1,2,3,-4])
```
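The same product can be computed as a fold with `functools.reduce`; a minimal sketch (the name `multiply_reduce` is ours):

```python
from functools import reduce
import operator

def multiply_reduce(numbers):
    # fold the list with multiplication; 1 is the identity for empty input
    return reduce(operator.mul, numbers, 1)

print(multiply_reduce([1, 2, 3, -4]))  # -24
```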
____
**Write a Python function that checks whether a passed in string is palindrome or not.**
Note: A palindrome is word, phrase, or sequence that reads the same backward as forward, e.g., madam or nurses run.
```
def palindrome(s):
return s[::-1] == s
palindrome('helleh')
```
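The one-liner above treats spaces and case as significant, so a phrase such as 'nurses run' would fail. A variant (with the hypothetical name `palindrome_phrase`) that normalizes the string first:

```python
def palindrome_phrase(s):
    # ignore spaces and letter case so multi-word phrases qualify
    cleaned = s.replace(' ', '').lower()
    return cleaned == cleaned[::-1]

print(palindrome_phrase('nurses run'))  # True
print(palindrome_phrase('helleh'))      # True
print(palindrome_phrase('hello'))       # False
```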
____
#### Hard:
**Write a Python function to check whether a string is pangram or not.**
Note : Pangrams are words or sentences containing every letter of the alphabet at least once.
For example : "The quick brown fox jumps over the lazy dog"
Hint: Look at the string module
```
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
alphaset = set(alphabet)
return alphaset <= set(str1)
ispangram("The quick brown fox jumps over the lazy dog")
string.ascii_lowercase
```
#### Great Job!
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/projects/NaturalLanguageProcessing/machine_translation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/projects/NaturalLanguageProcessing/machine_translation.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# Machine Translation
**By Neuromatch Academy**
__Content creators:__ Juan Manuel Rodriguez, Salomey Osei
__Production editors:__ Amita Kapoor, Spiros Chavlis
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Objective
The main goal of this project is to train a sequence-to-sequence NN that translates one language into another, e.g., French to English. This notebook is based on this [Pytorch tutorial](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html), but changes several things.
---
# Setup
```
# Imports
import io
import re
import math
import random
import unicodedata
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from tqdm.notebook import tqdm
from sklearn.utils import shuffle
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Download the data
import requests, zipfile
zip_file_url = 'https://download.pytorch.org/tutorial/data.zip'
r = requests.get(zip_file_url)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
N = 10  # print the first 10 lines
with open('data/eng-fra.txt') as f:
for i in range(N):
line = next(f).strip()
print(line)
```
---
# Representing the data
We create a language representation by defining an index for each word. In addition to the words, our languages have three special tokens:
* SOS: Start Of Sentence
* EOS: End Of Sentence
* PAD: Padding token used to fill input vectors where there are no other words.
```
SOS_token = 0
EOS_token = 1
class Lang:
def __init__(self, name):
self.name = name
self.word2index = {}
self.word2count = {}
self.index2word = {0: "SOS", 1: "EOS", 2: "PAD"}
self.n_words = 3 # Count SOS and EOS and PAD
def addSentence(self, sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self, word):
if word not in self.word2index:
self.word2index[word] = self.n_words
self.word2count[word] = 1
self.index2word[self.n_words] = word
self.n_words += 1
else:
self.word2count[word] += 1
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
def readLangs(lang1, lang2, reverse=False):
print("Reading lines...")
# Read the file and split into lines
lines = io.open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\
read().strip().split('\n')
# Split every line into pairs and normalize
pairs = [[normalizeString(s) for s in l.split('\t')] for l in lines]
# Reverse pairs, make Lang instances
if reverse:
pairs = [list(reversed(p)) for p in pairs]
input_lang = Lang(lang2)
output_lang = Lang(lang1)
else:
input_lang = Lang(lang1)
output_lang = Lang(lang2)
return input_lang, output_lang, pairs
MAX_LENGTH = 10
eng_prefixes = (
"i am ", "i m ",
"he is", "he s ",
"she is", "she s ",
"you are", "you re ",
"we are", "we re ",
"they are", "they re "
)
def filterPair(p):
return len(p[0].split(' ')) < MAX_LENGTH and \
len(p[1].split(' ')) < MAX_LENGTH and \
p[1].startswith(eng_prefixes)
def filterPairs(pairs):
return [pair for pair in pairs if filterPair(pair)]
def prepareData(lang1, lang2, reverse=False):
input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)
print("Read %s sentence pairs" % len(pairs))
pairs = filterPairs(pairs)
print("Trimmed to %s sentence pairs" % len(pairs))
print("Counting words...")
for pair in pairs:
input_lang.addSentence(pair[0])
output_lang.addSentence(pair[1])
print("Counted words:")
print(input_lang.name, input_lang.n_words)
print(output_lang.name, output_lang.n_words)
return input_lang, output_lang, pairs
input_lang, output_lang, pairs = prepareData('eng', 'fra', True)
print(random.choice(pairs))
```
## Language word distributions
We can inspect the word distribution in our dataset.
```
def plot_lang(lang, top_k=100):
words = list(lang.word2count.keys())
words.sort(key=lambda w: lang.word2count[w], reverse=True)
print(words[:top_k])
count_occurences = sum(lang.word2count.values())
accumulated = 0
counter = 0
while accumulated < count_occurences * 0.8:
accumulated += lang.word2count[words[counter]]
counter += 1
print(f"The {counter * 100 / len(words)}% most common words "
f"account for the {accumulated * 100 / count_occurences}% of the occurrences")
plt.bar(range(100), [lang.word2count[w] for w in words[:top_k]])
plt.show()
plot_lang(input_lang)
plot_lang(output_lang)
```
## The RNN
Our goal is to create a network that takes an input sentence in one language and produces its translation in an output language. The network is an RNN consisting of an encoder and a decoder. The encoder first transforms the input sentence into a condensed vector and passes it to the decoder, which then generates the translation in the target language. The process is illustrated in the diagram below:
<img src="https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/projects/static/seq2seq.png" width="600" height="300">
**Note:** This same approach can also be used for the next-sentence-prediction task.
```
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
def forward(self, input, hidden):
embedded = self.embedding(input)#.view(1, 1, -1)
output = embedded
output, hidden = self.gru(output, hidden)
return output, hidden
def initHidden(self, batch_size):
return torch.zeros(1, batch_size, self.hidden_size, device=device)
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size):
super(DecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.embedding = nn.Embedding(output_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=-1)
def forward(self, input, hidden):
output = self.embedding(input)
output = F.relu(output)
output, hidden = self.gru(output, hidden)
output = self.softmax(self.out(output))
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
```
---
# Representing the text
```
def to_train(input_lang, output_lang, pairs, max_len=MAX_LENGTH+2):
x_input = []
x_output = []
target = []
for i, o in pairs:
s_i = [2] * max_len + [0] + [input_lang.word2index[w] for w in i.split(" ")] + [1]
s_o = [0] + [output_lang.word2index[w] for w in o.split(" ")] + [1] + [2] * max_len
s_to = s_o[1:] + [2]
x_input.append(s_i[-max_len:])
x_output.append(s_o[:max_len])
target.append(s_to[:max_len])
return x_input, x_output, target
x_input, x_partial, y = to_train(input_lang, output_lang, pairs)
print('Representation of an input sentence:')
print(x_input[0])
print(' '.join([input_lang.index2word[w] for w in x_input[0]]))
print('\nRepresentation of a partial sentence:')
print(x_partial[0])
print(' '.join([output_lang.index2word[w] for w in x_partial[0]]))
print('\nRepresentation of a target sentence:')
print(y[0])
print(' '.join([output_lang.index2word[w] for w in y[0]]))
```
We represent the input sentence with left padding because the GRU processes the sentence left to right, and we want the output to stay as close to our sentence as possible. In contrast, we use right padding for the partial translation because we want our decoder to process the context immediately. Finally, our target is the partial translation shifted one position to the left.
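The two padding schemes can be sketched in isolation with toy ids (0 = SOS, 1 = EOS, 2 = PAD, as in the code above; `left_pad`, `right_pad`, and `max_len = 8` are our own simplifications):

```python
SOS, EOS, PAD = 0, 1, 2
max_len = 8  # toy value; the notebook uses MAX_LENGTH + 2

def left_pad(ids, max_len):
    # left padding keeps the sentence flush against the end of the window
    padded = [PAD] * max_len + [SOS] + ids + [EOS]
    return padded[-max_len:]

def right_pad(ids, max_len):
    # right padding lets the decoder consume the real tokens first
    padded = [SOS] + ids + [EOS] + [PAD] * max_len
    return padded[:max_len]

sentence = [5, 6, 7]  # hypothetical word ids
print(left_pad(sentence, max_len))               # [2, 2, 2, 0, 5, 6, 7, 1]
print(right_pad(sentence, max_len))              # [0, 5, 6, 7, 1, 2, 2, 2]
print(right_pad(sentence, max_len)[1:] + [PAD])  # target: [5, 6, 7, 1, 2, 2, 2, 2]
```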
## Training
Using this representation, we can train our model. Notice that we feed the full sentences as partial translations instead of feeding incrementally growing prefixes. This speeds up training, as later words in the sentence do not affect the network's output or the gradients up to that point.
```
def predict(encoder, decoder, input, output):
_, hidden = encoder(input, encoder.initHidden(input.shape[0]))
out, _ = decoder(output, hidden)
return out
def train(encoder, decoder, loss, input, output, target, learning_rate=0.001, epochs=10, batch_size=100):
plot_losses = []
plot_full_losses = []
encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
for _ in tqdm(range(epochs)):
c_input, c_output, c_target = shuffle(input, output, target)
c_input = torch.tensor(c_input, dtype=torch.long, device=device)
c_output = torch.tensor(c_output, dtype=torch.long, device=device)
c_target = torch.tensor(c_target, dtype=torch.long, device=device)
acc_loss = 0
for i in range(0, c_target.shape[0], batch_size):
c_batch_size = c_target[i:i+batch_size, ...].shape[0]
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
out = predict(encoder, decoder, c_input[i:i+batch_size, ...], c_output[i:i+batch_size, ...])
#Reshapes the output and target to use the expected loss format.
# N x Classes for the output
# N for the targets
# Where N is the batch size
out = out.reshape(c_batch_size * c_input.shape[1], -1)
r_target = c_target[i:i+batch_size, ...].reshape(c_batch_size * c_input.shape[1])
c_loss = loss(out, r_target)
# Mask the errors for padding positions as they are not useful!
valid = torch.where(r_target == 2, 0, 1)
c_loss = c_loss * valid
c_loss = torch.sum(c_loss) #/ torch.sum(valid)
c_loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
plot_full_losses.append(c_loss.detach().numpy())
acc_loss += c_loss.detach().numpy()
plot_losses.append(acc_loss /math.ceil(c_target.shape[0] / batch_size))
return plot_losses, plot_full_losses
hidden_size = 300
num_epochs = 10 # Change this to 50 (original value!)
encoder = EncoderRNN(input_lang.n_words, hidden_size).to(device)
decoder = DecoderRNN(hidden_size, output_lang.n_words)
epoch_error, batch_error = train(encoder, decoder,
nn.NLLLoss(reduction='none'),
x_input, x_partial, y,
epochs=num_epochs)
#print(epoch_error)
#print(batch_error)
plt.plot(batch_error)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('minibatch')
plt.show()
plt.plot(epoch_error)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
```
---
# Prediction and generation
In the following cells, we can see how our Seq2Seq model produces a prediction.
```
p = predict(encoder, decoder, torch.tensor([x_input[40]],
dtype=torch.long,
device=device),
torch.tensor([x_partial[40]], dtype=torch.long, device=device))
p = p.detach().numpy()
print(np.argmax(p, axis=-1))
print(x_partial[40])
```
---
# Generating a translation
The generation is a very simple iterative process:
1. Initialize the partial translation with only the start-of-sentence token 'SOS' (its id, which is 0).
1. Repeat:
  1. Predict the probability distribution for the next token given the partial translation.
  1. Pick the most probable token (another option is to sample from the distribution).
  1. Add that token to the translation.
  1. If the token is EOS, break the loop.
1. Return the partial translation, which is now a full translation.
If we want to generate several candidates, we can use another generation algorithm. [Beam Search](https://www.youtube.com/watch?v=RLWuzLLSIgw) is a great option for this.
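As a toy illustration of beam search, the sketch below keeps the `beam_width` highest-scoring partial sequences by cumulative log-probability. The per-step probability tables are fabricated; a real decoder would condition on the partial sequence instead:

```python
import math

def beam_search(step_probs, beam_width=2, eos=1):
    # beams hold (partial sequence starting with SOS=0, cumulative log-probability)
    beams = [([0], 0.0)]
    for probs in step_probs:
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:  # finished beams are carried over unchanged
                candidates.append((seq, score))
                continue
            for token, p in probs.items():
                candidates.append((seq + [token], score + math.log(p)))
        # keep only the beam_width highest-scoring partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

# fabricated per-step next-token distributions over a tiny vocabulary
steps = [{3: 0.6, 4: 0.4}, {3: 0.5, 1: 0.5}, {1: 0.9, 4: 0.1}]
for seq, score in beam_search(steps):
    print(seq, round(score, 3))
# [0, 3, 1] -1.204
# [0, 3, 3, 1] -1.309
```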
```
def gen_translation(encoder, decoder, text, input_lang, output_lang,
max_len=MAX_LENGTH+2):
text = [2] * max_len + [0] + [input_lang.word2index[w] for w in text.split(" ")] + [1]
text = torch.tensor([text[-max_len:]], dtype=torch.long, device=device)
out = [0] + [2] * max_len
out = [out[:max_len]]
for i in range(1, max_len):
pt_out =torch.tensor(out, dtype=torch.long, device=device)
p = predict(encoder, decoder, text, pt_out).detach().numpy()
out[0][i] = np.argmax(p, axis=-1)[0, i-1]
if np.argmax(p, axis=-1)[0, i-1] == 1:
break
return ' '.join([output_lang.index2word[idx] for idx in out[0]])
gen_translation(encoder, decoder, pairs[40][0], input_lang, output_lang)
for i in range(40):
print('> {}'.format(pairs[i][0]))
print('= {}'.format(pairs[i][1]))
print('< {}'.format(gen_translation(encoder, decoder,
pairs[i][0],
input_lang,
output_lang)))
print('*' * 40)
for i in range(40):
print('> {}'.format(pairs[-i][0]))
print('= {}'.format(pairs[-i][1]))
print('< {}'.format(gen_translation(encoder, decoder,
pairs[-i][0],
input_lang,
output_lang)))
print('*' * 40)
```
---
# To dos
1. We use the full dataset to train/test. This is not a great idea; you should split the dataset into training/test sets.
2. We did some empirical evaluation by looking at the translated sentences. Other evaluation can be done using metrics like the [BLEU](https://www.nltk.org/api/nltk.translate.html?highlight=bleu_score#module-nltk.translate.bleu_score) score.
3. We tried it with languages that are written left to right as both input and output. What happens if the languages are not written this way? [Datasets](https://www.manythings.org/anki/) [Even more](https://tatoeba.org/en/downloads)
4. It would be possible to do machine translation using other NN architectures, such as attention-based models.
5. We are not handling proper nouns, and that could be a problem.
6. This can be applied to next-sentence prediction.
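For the first to-do, a minimal random train/test split of `pairs` could look like the following sketch (`split_pairs` and the seed are our own choices; `sklearn.model_selection.train_test_split` would work just as well):

```python
import random

def split_pairs(pairs, test_fraction=0.1, seed=0):
    # shuffle a copy so the original list is untouched, then cut once
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train_pairs, test_pairs = split_pairs([['bonjour !', 'hello !']] * 100)
print(len(train_pairs), len(test_pairs))  # 90 10
```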
---
# Further reading
* [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215)
* [Neural machine translation by jointly learning to align and translate](https://arxiv.org/abs/1409.0473)
* [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025)
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from trackml.dataset import load_event
from trackml.randomize import shuffle_hits
from trackml.score import score_event
import os
import numpy as np
import pandas as pd
import glob
import math
import time
from utils import timeSince
from tqdm import tqdm
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
%matplotlib inline
from hit_gauss_predictor import HitGausPredictor
from hit_gauss_predictor import cal_res
from utils import tunable_parameters
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
batch_size = 64
n_epochs = 50
model = HitGausPredictor(batch_size=batch_size, device=device).to(device)
print('total tunable parameters:', tunable_parameters(model))
model.load_state_dict(torch.load('model_hitGaus', map_location=lambda storage, loc: storage))
import pickle
track_arrays = pickle.load(open('input/ten_hists_normed.npy', 'rb'))
print('total tracks:', track_arrays.shape)
mean_r, sigma_r = 913.681763, 692.430542
mean_phi, sigma_phi = 0.009939, 1.823752
mean_z, sigma_z = -2.315056, 1061.912476
def cal_res2(model, test_track):
"""
calculate predicted residual and variances
of the model, for this test_track.
test_track's size: [batch_size, n_hits, 3]"""
test_t = torch.from_numpy(test_track[:, :-1, :])
target_t = torch.from_numpy(test_track[:, 1:, 1:])
with torch.no_grad():
output = model(test_t)
print(output.size())
output = output.contiguous().view(-1, output.size(-1))
means = output[:, 0:2]
covs = output[:, 2:4]
target_t = target_t.contiguous().view(target_t.size(0)*target_t.size(1),
target_t.size(2))
print("target size:", target_t.size())
res = means - target_t
rho = output[:, 4]
return res, covs, rho
def cal_res3(model, test_track):
"""
calculate predicted residual and variances
of the model, for this test_track.
test_track's size: [batch_size, n_hits, 3]"""
print("test track size:", test_track.shape)
n_events = test_track.shape[0]
n_batches = int(n_events/batch_size)
print("number of batches:", n_batches)
with torch.no_grad():
output_list = []
for ibatch in range(n_batches):
start = ibatch*batch_size
end = start + batch_size
test_t = torch.from_numpy(test_track[start:end, :-1, :])
target_t = torch.from_numpy(test_track[start:end, 1:, 1:])
output_tmp = model(test_t)
output_tmp = output_tmp.contiguous().view(-1, output_tmp.size(-1))
output_tmp[:, 0:2] = output_tmp[:, 0:2] - target_t.contiguous().view(-1, target_t.size(-1))
output_list.append(output_tmp)
print("number of output items:", len(output_list))
output = torch.cat(output_list)
print(output.size())
return output
def gaus_llh_loss(outputs, targets):
"""Custom gaussian log-likelihood loss function"""
if torch.isnan(outputs).any():
raise Exception("Net's output is NAN")
batches = outputs.size(0)
hits = outputs.size(1)
# Flatten layer axis into batch axis to use batch matrix operations
outputs = outputs.contiguous().view(-1, outputs.size(-1))
targets = targets.contiguous().view(-1, targets.size(-1))
# Calculate the residual error
dx1 = targets[:, 0] - outputs[:, 0]
dx2 = targets[:, 1] - outputs[:, 1]
c1 = outputs[:, 2]
c2 = outputs[:, 3]
rho = outputs[:, 4]
det_sigma = (1 - rho*rho) * c1 * c2
log_det = torch.log(det_sigma)
chi2 = (dx1*dx1/c1 + dx2*dx2/c2 - 2*rho*dx1*dx2/torch.sqrt(c1*c2))/(1-rho*rho)
print(det_sigma[0], log_det[0], chi2[0], dx1[0], c1[0], dx2[0], c2[0], rho[0])
prob = log_det + chi2
return torch.sum(prob)/batches/hits
start = 894060
n_samples = 100000
residule, covs, rho = cal_res2(model, np.array(track_arrays[start:start+batch_size]))
res_phi = residule.numpy()[:, 0]*sigma_phi + mean_phi
res_z = residule.numpy()[:, 1]*sigma_z + mean_z
out = cal_res3(model, np.array(track_arrays[start:start+n_samples]))
out_cp = out.clone()
out_cp[:, 0] = out_cp[:, 0]*sigma_phi
out_cp[:, 1] = out_cp[:, 1]*sigma_z
out_cp[:, 2] = out_cp[:, 2]*sigma_phi**2
out_cp[:, 3] = out_cp[:, 3]*sigma_z**2
out_batches = out_cp.contiguous().view(-1, 9, 5)
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(221)
ax.set_xlim(-np.pi, np.pi)
nbins = 100
#jj = plt.hist(res_z, bins=100, histtype='step', lw=2)
# jj = plt.hist(residule.view(batch_size, -1, 2).numpy()[:, 0, 0] * sigma_phi + mean_phi)
res = plt.hist(out_batches.numpy()[:, 0, 0],
bins=nbins, histtype='step', label="First Layer", lw=2, log=True)
res = plt.hist(out_batches.numpy()[:, 1, 0],
bins=nbins, histtype='step', label="Second Layer", lw=2)
res = plt.hist(out_batches.numpy()[:, 8, 0],
bins=nbins, histtype='step', label="Last Layer", lw=2)
res = plt.hist(out_batches.numpy()[:, :, 0].flatten(),
bins=nbins, histtype='step', label="All predictions", lw=2)
ax.legend()
ax.set_xlabel(r"Error in $\phi$ [rad]")
ax.set_ylabel('predicted hits')
ax3 = fig.add_subplot(223)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 0, 2])*np.sign(out_batches.numpy()[:, 0, 0]),
bins=nbins, histtype='step', label="First Layer", lw=2, log=True)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 1, 2])*np.sign(out_batches.numpy()[:, 1, 0]),
bins=nbins, histtype='step', label="Second Layer", lw=2)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 8, 2])*np.sign(out_batches.numpy()[:, 8, 0]),
bins=nbins, histtype='step', label="Last Layer", lw=2)
res = plt.hist(np.sqrt(out_batches.numpy()[:, :, 2].flatten())*np.sign(out_batches.numpy()[:, :, 0].flatten()),
bins=nbins, histtype='step', label="All predictions", lw=2)
ax3.legend()
ax3.set_xlabel(r"$\sigma_\phi$ [rad]")
ax3.set_ylabel('predicted hits')
ax2 = fig.add_subplot(222)
ax2.set_xlim(-3000, 3000)
res = plt.hist(out_batches.numpy()[:, 0, 1],
bins=nbins, histtype='step', label="First Layer", lw=2, log=True)
res = plt.hist(out_batches.numpy()[:, 1, 1],
bins=nbins, histtype='step', label="Second Layer", lw=2)
res = plt.hist(out_batches.numpy()[:, 8, 1],
bins=nbins, histtype='step', label="Last Layer", lw=2)
res = plt.hist(out_batches.numpy()[:, :, 1].flatten(),
bins=nbins, histtype='step', label="All predictions", lw=2)
# res = plt.hist(out_batches.numpy()[:, 8, 0]*sigma_phi+mean_phi, bins=100, histtype='step')
ax2.legend()
ax2.set_xlabel("Error in $Z$ [mm]")
ax2.set_ylabel('predicted hits')
ax4 = fig.add_subplot(224)
ax4.set_xlim(-20000, 20000)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 0, 3])*np.sign(out_batches.numpy()[:, 0, 1]),
bins=nbins, histtype='step', label="First Layer", lw=2, log=True)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 1, 3])*np.sign(out_batches.numpy()[:, 1, 1]),
bins=nbins, histtype='step', label="Second Layer", lw=2)
res = plt.hist(np.sqrt(out_batches.numpy()[:, 8, 3])*np.sign(out_batches.numpy()[:, 8, 1]),
bins=nbins, histtype='step', label="Last Layer", lw=2)
res = plt.hist(np.sqrt(out_batches.numpy()[:, :, 3].flatten())*np.sign(out_batches.numpy()[:, :, 1].flatten()),
bins=nbins, histtype='step', label="All predictions", lw=2)
# res = plt.hist(out_batches.numpy()[:, 8, 0]*sigma_phi+mean_phi, bins=100, histtype='step')
ax4.legend()
ax4.set_xlabel(r"$\sigma_Z$ [mm]")
ax4.set_ylabel('predicted hits')
```
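The closed-form `chi2` term in `gaus_llh_loss` above can be sanity-checked against a direct linear-algebra evaluation of the bivariate Gaussian quadratic form. This is a sketch with made-up numbers; `c1`/`c2` stand for the two variances and `rho` for the correlation, as in the loss function:

```python
import numpy as np

# Made-up residuals, variances, and correlation for the check
dx1, dx2, c1, c2, rho = 0.3, -0.5, 1.2, 0.8, 0.4

# Closed form used in gaus_llh_loss
chi2 = (dx1*dx1/c1 + dx2*dx2/c2 - 2*rho*dx1*dx2/np.sqrt(c1*c2)) / (1 - rho*rho)

# Direct evaluation: d^T Sigma^{-1} d with the full 2x2 covariance matrix
cov = np.array([[c1, rho*np.sqrt(c1*c2)],
                [rho*np.sqrt(c1*c2), c2]])
d = np.array([dx1, dx2])
chi2_direct = d @ np.linalg.solve(cov, d)

print(np.isclose(chi2, chi2_direct))  # → True
```

The two agree because det(Σ) = c1·c2·(1-ρ²), which is also the `det_sigma` expression used in the loss.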
### Visualize each track prediction
```
def get_output(model, test_track):
"""
calculate predicted residual and variances
of the model, for this test_track.
test_track's size: [batch_size, n_hits, 3]"""
print("test track size:", test_track.shape)
n_events = test_track.shape[0]
n_batches = int(n_events/batch_size)
print("number of batches:", n_batches)
with torch.no_grad():
output_list = []
for ibatch in range(n_batches):
start = ibatch*batch_size
end = start + batch_size
test_t = torch.from_numpy(test_track[start:end, :-1, :])
output_tmp = model(test_t)
output_list.append(output_tmp)
print("number of output items:", len(output_list))
output = torch.cat(output_list)
print(output.size())
# output[:, 0] = output[:, 0]*sigma_phi + mean_phi
# output[:, 1] = output[:, 1]*sigma_z + mean_z
# output[:, 2] = output[:, 2]*sigma_phi**2
# output[:, 3] = output[:, 3]*sigma_z**2
return output
pred_out = get_output(model, np.array(track_arrays[start:start+n_samples]))
def plot(idx):
fig = plt.figure(figsize=(12, 6))
ax = fig.add_subplot(121)
target = track_arrays[start+idx, 1:, 1]
predict = pred_out.numpy()[idx, :, 0]
err = np.sqrt(pred_out.numpy()[idx, :, 2])
ax.errorbar(np.arange(9), target, fmt='-*', lw=2, ms=10, label='target')
ax.errorbar(np.arange(9), predict, yerr=err, fmt='.', lw=2, ms=10, label='prediction')
ax.set_ylim(-3, 3)
ax.set_ylabel('$\phi$')
ax.set_xlabel('layer')
ax.legend()
ax1 = fig.add_subplot(122)
target2 = track_arrays[start+idx, 1:, 2]
predict2 = pred_out.numpy()[idx, :, 1]
err2 = np.sqrt(pred_out.numpy()[idx, :, 3])
ax1.errorbar(np.arange(9), target2, fmt='-*', lw=2, ms=10, label='target')
ax1.errorbar(np.arange(9), predict2, yerr=err2, fmt='.', lw=2, ms=10, label='prediction')
ax1.set_ylim(-3, 3)
ax1.set_ylabel('$Z$')
ax1.set_xlabel('layer')
ax1.legend()
for n in range(10):
plot(n)
with torch.no_grad():
output = model( torch.from_numpy(track_arrays[start:start+64, :-1, :]) )
target = torch.from_numpy(track_arrays[start:start+64, 1:, 1:])
res = gaus_llh_loss(output, target)
sns.jointplot(res_phi, covs.numpy()[:, 0])
sns.jointplot(res_z, covs.numpy()[:, 1])
fig = plt.figure(figsize=(4, 4))
ax1 = fig.add_subplot(111)
ax1.set_xlim(-20000, 20000)
plt.hist(res_z/covs.numpy()[:, 1])
plt.hist(rho)
loss_train = pickle.load(open('loss_trainGauss.pkl', 'rb'))
plt.plot(np.arange(0, len(loss_train)), loss_train)
```
### How the time cost of each epoch changes as a function of batch size.
```
x_batch_size = [64, 128, 256, 512]
y_iter_time = [49.95, 25.51, 13.33, 5.5]
plt.plot(x_batch_size, y_iter_time, '-*')
```
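A quick sanity check on those numbers (a sketch): the measured times fall by roughly 2x each time the batch size doubles, so the product `batch_size * time` should be roughly constant if the time per epoch scales as 1/batch_size:

```python
x_batch_size = [64, 128, 256, 512]
y_iter_time = [49.95, 25.51, 13.33, 5.5]

# Under perfect 1/batch_size scaling this product would be constant
products = [round(b * t) for b, t in zip(x_batch_size, y_iter_time)]
print(products)  # → [3197, 3265, 3412, 2816]
```

The first three points track the trend closely; the 512 batch runs slightly faster than perfect inverse scaling would predict.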
# Defect Detection Model
Here, we build a model to detect the presence/absence of defects (of any kind) in a submersible pump impeller using Transfer Learning (with a VGG16 base model)
**Dataset**: [Submersible Pump Impeller Defect Dataset](https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product)
## Preliminaries
```
import os
os.chdir("/content/drive/My Drive/ME781 Project")
import matplotlib
matplotlib.use("Agg")
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from keras.preprocessing.image import ImageDataGenerator
from keras import Sequential
from keras.optimizers import Adam
from keras.layers import Input
from keras.models import Model
from keras.applications import VGG16
from keras.layers import Flatten, Dense, Dropout
from keras.preprocessing import image
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix
%matplotlib inline
```
## Model Architecture [Using VGG16 as Base Model]
```
CLASSES = 2
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
baseModel = VGG16(weights="imagenet", include_top=False,
input_tensor=Input(shape=(WIDTH, HEIGHT, CHANNELS)))
# Enable Transfer Learning by freezing weights of the base VGG16 Model
for layer in baseModel.layers:
layer.trainable = False
model = Sequential()
model.add(baseModel)
model.add(Flatten(name="flatten"))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(CLASSES, activation='softmax'))
model.summary()
```
## Dataset Preparation
```
# Image preprocessing for robustness
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=20,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip=True
)
test_datagen = ImageDataGenerator(rescale=1./255)
BATCH_SIZE = 50
print("[INFO] loading images...")
train_data_dir = "./casting_data/train" # directory of training data
test_data_dir = "./casting_data/test" # directory of test data
training_set = train_datagen.flow_from_directory(train_data_dir,
target_size=(WIDTH, HEIGHT),
batch_size=BATCH_SIZE,
class_mode='categorical')
test_set = test_datagen.flow_from_directory(test_data_dir,
target_size=(WIDTH, HEIGHT),
batch_size=BATCH_SIZE,
class_mode='categorical',
shuffle=False)
```
## Model training
```
print("[INFO] compiling model...")
model.compile(
loss="categorical_crossentropy",
    optimizer = Adam(learning_rate=0.001),
metrics=["accuracy"]
)
print("[INFO] training model...")
EPOCHS = 10
history = model.fit(
training_set,
epochs=EPOCHS,
steps_per_epoch=training_set.samples//BATCH_SIZE,
validation_data=test_set,
validation_steps=test_set.samples//BATCH_SIZE
)
# Save the model
print("[Info] serializing network...")
model.save("defect_detection_vgg16.hdf5")
# Function to plot the accuracy & losses over epochs of training
def plot_training(history):
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc)+1)
plt.figure(figsize=(16, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs, acc, 'b*--', label="Training Accuracy")
plt.plot(epochs, val_acc, 'rD:', label="Validation Accuracy")
plt.legend()
plt.title('Training and validation accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs, loss, 'b*--', label="Training Loss")
plt.plot(epochs, val_loss, 'rD:', label="Validation Loss")
plt.legend()
plt.title('Training and validation loss')
plt.savefig("defect_detection_model.png", bbox_inches="tight")
plt.show()
plot_training(history)
```
## Model Evaluation
```
score = model.evaluate(test_set)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
pred = model.predict(
test_set,
steps=test_set.samples//BATCH_SIZE + 1,
verbose=1
)
pred = np.argmax(pred, axis=1)
print('Confusion Matrix')
print(confusion_matrix(test_set.classes[test_set.index_array], pred))
print('\nClassification Report')
target_names = ['Defective', 'OK']
print(classification_report(test_set.classes[test_set.index_array], pred, target_names=target_names))
def predict(model, img):
"""Run model prediction on image
Args:
model: keras model
img: PIL format image
Returns:
list of predicted labels and their probabilities
"""
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
# x = preprocess_input(x)
preds = model.predict(x)
return preds[0]
def plot_preds(img, preds):
"""Displays image and the top-n predicted probabilities in a bar graph
Args:
preds: list of predicted labels and their probabilities
"""
labels = ("Defective", "OK")
gs = gridspec.GridSpec(2, 1, height_ratios=[4, 1])
plt.figure(figsize=(8,8))
plt.subplot(gs[0])
plt.imshow(np.asarray(img))
plt.subplot(gs[1])
plt.barh([0, 1], preds, alpha=0.5)
plt.yticks([0, 1], labels)
plt.xlabel('Probability')
plt.xlim(0, 1)
plt.tight_layout()
# Load two images, one from each class & predict the class using the trained model
img = image.load_img('/content/drive/My Drive/ME781 Project/casting_data/test/def_front/cast_def_0_85.jpeg', target_size=(WIDTH, HEIGHT))
preds = predict(model, img)
plot_preds(np.asarray(img), preds)
img = image.load_img('/content/drive/My Drive/ME781 Project/casting_data/test/ok_front/cast_ok_0_1020.jpeg', target_size=(WIDTH, HEIGHT))
preds = predict(model, img)
plot_preds(np.asarray(img), preds)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Automated ML on Azure Databricks
In this example we use the scikit-learn's <a href="http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset" target="_blank">digit dataset</a> to showcase how you can use AutoML for a simple classification problem.
In this notebook you will learn how to:
1. Create Azure Machine Learning Workspace object and initialize your notebook directory to easily reload this object from a configuration file.
2. Create an `Experiment` in an existing `Workspace`.
3. Configure Automated ML using `AutoMLConfig`.
4. Train the model using Azure Databricks.
5. Explore the results.
6. View the engineered names for featurized data and the featurization summary for all raw features.
7. Test the best fitted model.
Before running this notebook, please follow the <a href="https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/azure-databricks" target="_blank">readme for using Automated ML on Azure Databricks</a> for installing necessary libraries to your cluster.
We support installing AML SDK with Automated ML as library from GUI. When attaching a library follow <a href="https://docs.databricks.com/user-guide/libraries.html" target="_blank">this link</a> and add the below string as your PyPi package. You can select the option to attach the library to all clusters or just one cluster.
**azureml-sdk with automated ml**
* Source: Upload Python Egg or PyPi
* PyPi Name: `azureml-sdk[automl_databricks]`
* Select Install Library
### Check the Azure ML Core SDK Version to Validate Your Installation
```
import azureml.core
print("SDK Version:", azureml.core.VERSION)
```
## Initialize an Azure ML Workspace
### What is an Azure ML Workspace and Why Do I Need One?
An Azure ML workspace is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, operationalization, and the monitoring of operationalized models.
### What do I Need?
To create or access an Azure ML workspace, you will need to import the Azure ML library and specify following information:
* A name for your workspace. You can choose one.
* Your subscription id. Use the `id` value from the `az account show` command output above.
* The resource group name. The resource group organizes Azure resources and provides a default region for the resources in the group. The resource group will be created if it doesn't exist. Resource groups can be created and viewed in the [Azure portal](https://portal.azure.com)
* Supported regions include `eastus2`, `eastus`,`westcentralus`, `southeastasia`, `westeurope`, `australiaeast`, `westus2`, `southcentralus`.
```
subscription_id = "<Your SubscriptionId>" #you should be owner or contributor
resource_group = "<Resource group - new or existing>" #you should be owner or contributor
workspace_name = "<workspace to be created>" #your workspace name
workspace_region = "<azureregion>" #your region
```
## Creating a Workspace
If you already have access to an Azure ML workspace you want to use, you can skip this cell. Otherwise, this cell will create an Azure ML workspace for you in the specified subscription, provided you have the correct permissions for the given `subscription_id`.
This will fail when:
1. The workspace already exists.
2. You do not have permission to create a workspace in the resource group.
3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription.
If workspace creation fails for any reason other than already existing, please work with your IT administrator to provide you with the appropriate permissions or to provision the required resources.
**Note:** Creation of a new workspace can take several minutes.
```
# Import the Workspace class and check the Azure ML SDK version.
from azureml.core import Workspace
ws = Workspace.create(name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group,
location = workspace_region,
exist_ok=True)
ws.get_details()
```
## Configuring Your Local Environment
You can validate that you have access to the specified workspace and write a configuration file to the default configuration location, `./aml_config/config.json`.
```
from azureml.core import Workspace
ws = Workspace(workspace_name = workspace_name,
subscription_id = subscription_id,
resource_group = resource_group)
# Persist the subscription id, resource group name, and workspace name in aml_config/config.json.
ws.write_config()
```
## Create a Folder to Host Sample Projects
Finally, create a folder where all the sample projects will be hosted.
```
import os
sample_projects_folder = './sample_projects'
if not os.path.isdir(sample_projects_folder):
os.mkdir(sample_projects_folder)
print('Sample projects will be created in {}.'.format(sample_projects_folder))
```
## Create an Experiment
As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
```
import logging
import os
import random
import time
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-classification'
project_folder = './sample_projects/automl-local-classification'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in newer pandas
pd.DataFrame(data = output, index = ['']).T
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
```
## Registering Datastore
Datastore is the way to save connection information for a storage service (e.g. Azure Blob, Azure Data Lake, Azure SQL) to your workspace, so you can access the data without exposing credentials in your code. The first thing you will need to do is register a datastore; you can refer to our [python SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) on how to register datastores. __Note: for best security practices, please do not check code that registers datastores with secrets into your source control__
The code below registers a datastore pointing to a publicly readable blob container.
```
from azureml.core import Datastore
datastore_name = 'demo_training'
container_name = 'digits'
account_name = 'automlpublicdatasets'
Datastore.register_azure_blob_container(
workspace = ws,
datastore_name = datastore_name,
container_name = container_name,
account_name = account_name,
overwrite = True
)
```
Below is an example on how to register a private blob container
```python
datastore = Datastore.register_azure_blob_container(
workspace = ws,
datastore_name = 'example_datastore',
container_name = 'example-container',
account_name = 'storageaccount',
account_key = 'accountkey'
)
```
The example below shows how to register an Azure Data Lake store. Please make sure you have granted the necessary permissions for the service principal to access the data lake.
```python
datastore = Datastore.register_azure_data_lake(
workspace = ws,
datastore_name = 'example_datastore',
store_name = 'adlsstore',
tenant_id = 'tenant-id-of-service-principal',
client_id = 'client-id-of-service-principal',
client_secret = 'client-secret-of-service-principal'
)
```
## Load Training Data Using Dataset
Automated ML takes a `TabularDataset` as input.
You are free to use the data preparation libraries/tools of your choice to do the required preparation, and once you are done, you can write it to a datastore and create a TabularDataset from it.
You will get the datastore you registered previously and pass it to Dataset for reading. The data comes from the digits dataset: `sklearn.datasets.load_digits()`. `DataPath` points to a specific location within a datastore.
```
from azureml.core.dataset import Dataset
from azureml.data.datapath import DataPath
datastore = Datastore.get(workspace = ws, datastore_name = datastore_name)
X_train = Dataset.Tabular.from_delimited_files(datastore.path('X.csv'))
y_train = Dataset.Tabular.from_delimited_files(datastore.path('y.csv'))
```
## Review the TabularDataset
You can peek the result of a TabularDataset at any range using `skip(i)` and `take(j).to_pandas_dataframe()`. Doing so evaluates only j records for all the steps in the TabularDataset, which makes it fast even against large datasets.
```
X_train.take(5).to_pandas_dataframe()
y_train.take(5).to_pandas_dataframe()
```
## Configure AutoML
Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|classification or regression|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|
|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**spark_context**|Spark Context object. for Databricks, use spark_context=sc|
|**max_concurrent_iterations**|Maximum number of iterations to execute in parallel. This should be <= number of worker nodes in your Azure Databricks cluster.|
|**X**|(sparse) array-like, shape = [n_samples, n_features]|
|**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]<br>Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
|**preprocess**|Set this to True to enable pre-processing of data, e.g. converting strings to numeric using one-hot encoding|
|**exit_score**|Target score for the experiment, associated with the primary metric. E.g. exit_score=0.995 will end the experiment once that score is reached|
```
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 10,
iterations = 3,
preprocess = True,
n_cross_validations = 10,
max_concurrent_iterations = 2, #change it based on number of worker nodes
verbosity = logging.INFO,
spark_context=sc, #databricks/spark related
X = X_train,
y = y_train,
path = project_folder)
```
## Train the Models
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.
```
local_run = experiment.submit(automl_config, show_output = True)
```
## Continue experiment
```
local_run.continue_experiment(iterations=2,
X=X_train,
y=y_train,
spark_context=sc,
show_output=True)
```
## Explore the Results
#### Portal URL for Monitoring Runs
The following will provide a link to the web interface to explore individual run details and status. In the future we might support output displayed in the notebook.
```
displayHTML("<a href={} target='_blank'>Your experiment in Azure Portal: {}</a>".format(local_run.get_portal_url(), local_run.id))
```
The following will show the child runs and waits for the parent run to complete.
#### Retrieve All Child Runs after the experiment is completed (in portal)
You can also use SDK methods to fetch all the child runs and see individual metrics that we log.
```
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
### Retrieve the Best Model after the above run is complete
Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
```
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
```
#### Best Model Based on Any Other Metric after the above run is complete based on the child run
Show the run and the model that has the smallest `log_loss` value:
```
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
```
#### View the engineered names for featurized data
Below we display the engineered feature names generated for the featurized data using the preprocessing featurization.
```
fitted_model.named_steps['datatransformer'].get_engineered_feature_names()
```
#### View the featurization summary
Below we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:
- Raw feature name
- Number of engineered features formed out of this raw feature
- Type detected
- If feature was dropped
- List of feature transformations for the raw feature
```
# Get the featurization summary as a list of JSON
featurization_summary = fitted_model.named_steps['datatransformer'].get_featurization_summary()
# View the featurization summary as a pandas dataframe
pd.DataFrame.from_records(featurization_summary)
```
### Test the Best Fitted Model
#### Load Test Data - you can split the dataset beforehand, pass the train split to AutoML, and use the test split to evaluate the best model.
```
blob_location = "https://{}.blob.core.windows.net/{}".format(account_name, container_name)
X_test = pd.read_csv("{}/X_valid.csv".format(blob_location), header=0)
y_test = pd.read_csv("{}/y_valid.csv".format(blob_location), header=0)
images = pd.read_csv("{}/images.csv".format(blob_location), header=None)
images = np.reshape(images.values, (100,8,8))
```
#### Testing Our Best Fitted Model
We will try to predict digits and see how our model works. This is just an example to show you.
```
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test.values[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(3, figsize = (5,5))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
display(fig)
```
When deploying an automated ML trained model, please specify _pip_packages=['azureml-sdk[automl]']_ in your CondaDependencies.
Please refer to only the **Deploy** section in this notebook - <a href="https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-with-deployment" target="_blank">Deployment of Automated ML trained model</a>
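As a sketch of what that CondaDependencies setup might look like (assuming the `azureml-core` SDK is installed in your environment):

```python
from azureml.core.conda_dependencies import CondaDependencies

# Include the automated ML extras so the fitted pipeline can be
# deserialized at scoring time
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'])

# Inspect the resulting conda environment specification
print(cd.serialize_to_string())
```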

# Random Agent in Malmo
This guide shows how to set up a single-player Malmo mission. This example may serve as a basis to use Malmo in your RL experiments.
## Malmo launcher
In earlier versions of ```malmoenv``` each Minecraft instance had to be started manually from command line. The launcher handles these processes automatically.
Each launcher instance creates a copy of Malmo into the ```/tmp/malmo_<hash>/``` directory and starts it up using a launch script and a given port. The figure below shows this process with the first port set to 9000 and using the ```~/launch_headless.sh``` script. Note that the launcher searches for the launch script in the ```Minecraft/``` subdirectory.

```
# imports
from pathlib import Path
import os
# malmoenv imports
import malmoenv
from malmoenv.utils.launcher import launch_minecraft
```
The next step is to define some constants.
The ```MISSION_XML``` is the file defining the current mission. The ```malmoenv``` module communicates with the JAVA version of Minecraft through sockets, so it is important to make sure that the PORT numbers align. This example has been setup to work correctly with both 1 and multiple workers.
By default we provide 2 launch scripts:
- ```./launchClient_quiet.sh``` - runs Malmo as normal, redirecting the output and error streams to the ```out.txt``` file in the copied Malmo directory under ```/tmp```.
- ```./launchClient_headless.sh``` - runs Malmo without rendering a window; its output is the same as with ```launchClient_quiet.sh```. To run this, ```xvfb``` must be installed on your computer. This script is useful for running Malmo on headless servers.
```
ENV_NAME = "malmo"
MISSION_XML = os.path.realpath('../../MalmoEnv/missions/mobchase_single_agent.xml')
COMMAND_PORT = 8999
xml = Path(MISSION_XML).read_text()
CHECKPOINT_FREQ = 100 # in terms of number of algorithm iterations
LOG_DIR = "results/" # creates a new directory and puts results there
NUM_WORKERS = 1
NUM_GPUS = 0
EPISODES = 10
launch_script = "./launchClient_quiet.sh"
```
Next we create a dictionary called config to store the parameters required for creating Malmo environments, such as the mission XML and the COMMAND_PORT. This example assumes a single environment.
```env.init``` by default returns a flattened representation of the observed frame, setting ```reshape=True``` keeps it as an image with [width, height, channels] dimensions.
```
config = {
"xml": xml,
"port": COMMAND_PORT,
}
def create_env(config):
env = malmoenv.make()
env.init(config["xml"], config["port"], reshape=True)
env.reward_range = (-float('inf'), float('inf'))
return env
env = create_env(config)
```
The next step is to start up the Minecraft instances. Note that this step might take a few minutes.
In the background each Malmo instance gets copied to the ```/tmp/malmo_<hash>/malmo``` directory, where it gets executed (each Minecraft instance requires its own directory).
After copying, the instances are started using the provided ```launch_script```; this is where we can choose, for example, to run without rendering a window.
```
GAME_INSTANCE_PORTS = [COMMAND_PORT + i for i in range(NUM_WORKERS)]
instances = launch_minecraft(GAME_INSTANCE_PORTS, launch_script=launch_script)
```
The final step is to run the random agent in Malmo. Using the default launch script you should see Malmo in a new window on your screen. Resetting the env might take a few seconds depending on the complexity of the mission. In this example we accumulate the rewards and the game steps and print it into the console.
At the end we close the environments and kill the JAVA instances in the background.
```
for i in range(EPISODES):
obs = env.reset()
steps = 0
total_rewards = 0
done = False
while not done:
action = env.action_space.sample()
obs, reward, done, info = env.step(action)
steps += 1
total_rewards += reward
if done:
print(f"Episode finished in {steps} with reward: {total_rewards} ")
# close envs
env.close()
for instance in instances:
instance.communicate()
```
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
from sklearn.linear_model import LogisticRegression
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
import math
```
## Logistic Regression
### Images
For this problem we'll use some simple images - the MNIST handwritten digit dataset, a common ML toy dataset. Up until now we've only used text and numbers for data; how do we deal with images?
We can think of an image as a matrix of pixels. If you ever looked at your TV up extremely close as a kid, you've seen this. Each image here is a 28 by 28 pixel grid; each point on that grid is one pixel that falls somewhere on the black-to-white scale, which is represented by 0 to 255. So our overall dataset is ~70000 images, each one being a 28 x 28 x 1 matrix (784 pixels). The only thing making it an image instead of a big table of numbers is how we interpret it when reading the data - if we don't know it is an image, we'd look at it as a bunch of integers; if we know to interpret it as an image, we can use those integers to draw what we were looking for!
If you have a 1080p TV or computer monitor the same logic applies: The screen is a 1920 x 1080 pixel grid, but here each pixel can be multiple colors (there are different color encodings, but the idea is the same) - so instead of each pixel on the grid having a depth of 1 (like our BW digits), each pixel has a depth of 3 - one for each of red, green, and blue, all on a 0 to 255 scale. This allows each pixel to have a position, and a color made up of a combo of those 3 values, giving us a pretty picture. If we were encoding a video, we'd have a series of these images in sequence - with 24, 30, 60 or however many frames per second.
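The pixel-grid idea is easy to sketch with NumPy. The pixel values below are made up purely for illustration:

```python
import numpy as np

# A made-up 4x4 greyscale "image": one 0-255 value per pixel
bw = np.array([
    [  0,  50, 200,   0],
    [  0, 255,   0,   0],
    [  0, 255,   0,   0],
    [  0, 255, 200,  50],
], dtype=np.uint8)
print(bw.shape)  # (4, 4) - height x width, with an implied depth of 1

# A color image adds a depth axis: height x width x 3 (R, G, B)
color = np.zeros((4, 4, 3), dtype=np.uint8)
color[0, 0] = [255, 0, 0]  # top-left pixel is pure red
print(color.shape)  # (4, 4, 3)
```

Scaling the same layout up to 1920 x 1080 x 3 is all a monitor's frame really is.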
```
#Load Data
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
X, y = mnist["data"], mnist["target"]
print(X.shape)
print(y.shape)
```
#### Image Data
Our images are currently stored as pixels. Each image is 28 x 28 pixels, so that is 784 total pixels. Each individual pixel is a value on a 0-255 scale - greyscale in this case.
Our labels are just the digit names; if we look at a few rows, each image is just a bunch-o-pixels.
```
X.head()
```
### Images as Arrays
The table above is one of those lists of pixels, if we picture it. There are 28 x 28 pixels in a 2D grid, and each of those pixels is some degree of "colored". It may be easier to see with a more elaborate image:

All of the "outside" pixels are 0 - black (ours are flipped - black text). Each part of the number has a higher number based on lightness. Our overall image is represented by a 28 x 28 x 1 array - width, height, and "depth" or "color depth", we only have one color (black) so the depth is 1. This image is pretty low definition, so it is not super clear. The images on our monitors are the same, just with higher definition. We'll look at more elaborate images later, they are stored in the same way, except for color images we have 3 (usually) layers for depth.

A color image like this is a similar array as ours, but larger. If the image was 100 x 100 pixels, the array would be 100 x 100 x 3 (1 depth count per color) - this is also something called a tensor, which will be meaningful later. This is why we can do fun stuff with images like facial recognition - images are just big 'ol arrays. This is also why when we start dealing with high definition images or videos, things become MUCH slower; the amount of data in image data grows rapidly the better our images are.
We can look at an image, there's a couple of steps to make it "image-y":
<ul>
<li>Take a row of data from the dataframe.
<li>Make it into an array - 28 x 28.
<li>Use matplotlib to show the array of integers interpreted as an image.
</ul>
```
#Look at an image
def showDigit(digit, label, size=28):
    some_digit = digit
    #turn array into the correct shape
    some_digit_image = np.array(some_digit).reshape(size, size)
    #imshow displays an array like an image
    plt.imshow(some_digit_image, cmap=mpl.cm.binary)
    plt.title(label)
    plt.axis("off")
    plt.show()

#The weird index is because it is a 2D array. We are basically grabbing from the "start of 5" to the "start of 6" (non-inclusive)
showDigit(X[5:6], y[5])

#Display multiple digits
def showDigits(digits, labels, indexes, size=28):
    #Make a grid that is the right size
    pics = len(indexes)
    cols = 6
    rows = math.ceil(pics/cols)
    fig, axes = plt.subplots(rows, cols, figsize=(14,6))
    plt.axis("off")
    #loop through the list of indexes, grab images and labels, plot in the "next" spot.
    for i in range(0, pics):
        n = indexes[i]
        some_digit = digits[n:n+1]
        some_digit_image = np.array(some_digit).reshape(size, size)
        ax = axes[i//cols, i%cols]
        ax.axis("off")
        ax.imshow(some_digit_image, cmap=mpl.cm.binary)
        ax.set_title('Ind: {} - Lbl: {}'.format(indexes[i], labels[n]))
    plt.tight_layout()
    plt.show()

showDigits(X, y, [10,11,12,15,16,78,863,112,46,76,34,454,232])
```
### Exercise 1 - Load Digits Data.
This dataset is one we can use as an exercise as we go through. It is a smaller version of the images that we are using. Most things translate pretty directly from the example.
For now:
<ul>
<li>Load the data like we did with mnist.
<li>Picture a digit, then a grid of digits.
</ul>
A solved example is below.
```
#EXERCISE
#Load Data
from sklearn.datasets import load_digits
digits = load_digits()
```
### Softmax, One v All, One v One, and Multiple Classifications
Logistic regression separated two classes, predictions are either labeled as a 1 or a 0. In reality, we often want to predict more than just yes/no questions. For example, if we are doing facial recognition we likely don't want to settle for saying "yup, that is a person", we want to be able to determine who that person is.
When we looked at decision trees, they were capable of doing multiple classifications directly, no adjustments needed. Linear classifiers are different, though: they only separate two classes, so we need a different approach.
#### One vs Rest
One way to train a multiple classifier is to create a series of binary classifiers, one for each outcome class vs "the rest". This is the default in sklearn's logistic regression. The end result is one classifier for each class.
For our example: 1 vs not 1, 2 vs not 2, etc...
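sklearn also exposes OvR as a standalone wrapper around any binary classifier. A quick sketch on the small built-in digits dataset shows that it really does fit one binary model per class:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_digits(return_X_y=True)
# Wrap a plain binary logistic regression; OvR handles the multiclass part
ovr = OneVsRestClassifier(LogisticRegression(max_iter=5000)).fit(X, y)
print(len(ovr.estimators_))  # 10 classes -> 10 binary classifiers
```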
#### One vs One
Another method is to create a separate classifier for every combination of outcomes. This isn't implemented in sklearn's logistic regression, but there is a class `OneVsOneClassifier` that allows you to plug in any classifier, and the 1 vs 1 algorithm will be applied.
For our example: 1 vs 2, 1 vs 3, 1 vs 4... 4 vs 5, 4 vs 6....
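The `OneVsOneClassifier` wrapper works the same way; for 10 digit classes it fits a classifier for every pair:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier

X, y = load_digits(return_X_y=True)
ovo = OneVsOneClassifier(LogisticRegression(max_iter=5000)).fit(X, y)
print(len(ovo.estimators_))  # 10*9/2 = 45 pairwise classifiers
```

Each prediction is then a "vote" among the 45 pairwise models, so OvO trades more (but smaller) training problems for more models.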
#### Softmax
For logistic regression, we make these classifications using something called Softmax Regression, or Multinomial Logistic Regression. The idea behind this is pretty simple: we just calculate a score for each class, and the highest score is the prediction.
Softmax will get a bit of a deeper look when we get to neural networks; for now it is more or less a multi-way version of the sigmoid function that we are used to seeing in classifications. Rather than splitting an individual prediction into two possibilities like the sigmoid, the softmax breaks out an individual probability for each of the possible output classes.
So if we are predicting between 3 classes - A, B, and C, a model that is predicting B with pretty high confidence might produce an output like:
<ul>
<li> A - .228
<li> B - .619
<li> C - .153
</ul>
If the true answer is B, we'd have a real distribution that looks like:
<ul>
<li> A - 0
<li> B - 1
<li> C - 0
</ul>
I.e. the probability of it being B is 100%, since that's the true value; the probability of A or C is 0, because it isn't either of those.
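A minimal softmax is just exponentiate-and-normalize. The raw scores below are made up, chosen so the output roughly matches the A/B/C example above:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # shift by the max for numerical stability
    return e / e.sum()

scores = np.array([0.4, 1.4, 0.0])  # hypothetical raw scores for A, B, C
print(softmax(scores).round(3))     # [0.228 0.619 0.153]
```

Note that the outputs always sum to 1, which is what lets us read them as a probability distribution over the classes.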
### Cross Entropy
Cross entropy is a very common cost function used when doing multiple classifications.
The cross entropy loss function compares the real distribution to the expected one, and generates a metric for loss (like any other loss function). It will compare the predictions produced by the softmax to the true value and then calculate the loss. If we take the example from above, the cross entropy can be calculated with the formula:

Resulting in an actual calculation of:
H = - (0.0*ln(0.228) + 1.0*ln(0.619) + 0.0*ln(0.153)) ≈ 0.48
Gradient descent uses this amount of loss as we'd expect it to, and the training process keeps repeating until we converge on a minimum amount of loss or run out of iterations to try. There is an expanded explanation here: https://stackoverflow.com/questions/41990250/what-is-cross-entropy/41990932#41990932
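The hand calculation above is easy to verify directly in NumPy:

```python
import numpy as np

preds = np.array([0.228, 0.619, 0.153])  # softmax output from the example
true = np.array([0.0, 1.0, 0.0])         # one-hot true distribution (class B)
H = -np.sum(true * np.log(preds))
print(round(H, 3))  # 0.48 - matches the hand calculation, up to rounding
```

Because the true distribution is one-hot, the sum collapses to `-log` of the probability assigned to the correct class, which is why confident correct predictions have loss near 0.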
We will spend more time on the details of loss stuff in neural networks.
### Multiple Classifier
Now, we can try to evaluate the two methods against each other for our data while attempting to label the digits from 0 to 9.
#### Solver
One of the hyperparameters in the logistic regression call is the solver. This defines the method that the algorithm uses to do the gradient descent. The short answer is that it isn't something that we need to worry about too much unless we are looking to optimize speed on large datasets. The slightly less short answer is that lbfgs (the default) is probably OK for most cases, and either liblinear or saga (large datasets) if we want to feature select using L1 regularization. In any case, don't obsess over this. The documentation provides a little table for selecting an appropriate solver:

```
#Classify the digits
# This currently takes the first 10000 images, change commenting to take all
# It may take a long time with all data, especially if there is lots of grid searching and CV
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
X_train, X_test, y_train, y_test = train_test_split(X[:10000], y[:10000], test_size=0.3)
# Scale inputs in a pipe
scaler = MinMaxScaler()
estimator = LogisticRegression(n_jobs=-1, solver="saga", max_iter=10000)
pipe = Pipeline(steps=[("scaler", scaler), ("log", estimator)])
# Try different classifications for the multiple classes
params = {'log__multi_class':["ovr","multinomial"]}
clf = GridSearchCV(pipe, param_grid=params, cv=3, n_jobs=-1)
clf.fit(X_train, y_train.ravel())
best = clf.best_estimator_
train_preds = best.predict(X_train)
print(best)
print(best.score(X_test, y_test))
```
### Exercise 2 - Make Models
Take the digits data and make a model. Score it on test data.
```
#EXERCISE
#Model with the digit data.
```
### Classification Results
We can look at the results of our classification; the confusion matrix still works, it is just a little more complex with multiple classes.
```
#Create Confusion Matrix
from sklearn.metrics import confusion_matrix
plt.rcParams["figure.figsize"] = (10,10)
preds = best.predict(X_test)
corr = confusion_matrix(y_test,preds)
sns.heatmap(corr, annot=True, fmt="d")
```
We can take a look at the heatmap to see how often we are wrong with different combinations of digits. For example, 7 and 9 having frequent errors isn't very surprising.
The confusion matrix is a 2D array of counts. We can extract the values for each number to look at the differences in performance for each digit.
```
print(corr)
# Grab each row, which represents one digit, and add up the errors.
# Be sure to exclude the "spine" of counts.
ers = []
for i in range(len(corr)):
    num = corr[i]
    #print(num)
    before = num[:i]
    after = num[i+1:]
    #print(before, after)
    tmp_err = np.sum(before) + np.sum(after)
    ers.append(tmp_err)
print(ers)
#Errors Per Number
sns.barplot(y=ers, x=[0,1,2,3,4,5,6,7,8,9])
```
### Error Distribution
As we might expect, the errors are skewed towards numbers that look similar, like a 6 and an 8. There's no intuitive way to know what we can do with our modeling to improve this - maybe some different algorithm gives better results, or a different set of HPs that we can find with a grid search.
Most likely we'll need to do some processing of the data to understand them a little better as images rather than just tables of pixels. Image processing is something we'll look into a bit more later on in the course.
### Exercise 3 - Examine Results
Take the digits model results and examine.
Try:
<ul>
<li>Heatmap.
<li>Errors per digit.
</ul>
```
#EXERCISE
#Plot results of model
```
## Example Solution - Digits
Try with a slightly simpler example - an 8 x 8 version of the same thing.
```
#Load Data
from sklearn.datasets import load_digits
digits = load_digits()
Xd, yd = digits["data"], digits["target"]
print(Xd.shape)
print(yd.shape)
#Look at a digit
showDigit(Xd[12:13], yd[12], 8)
#Classify the digits
X_traind, X_testd, y_traind, y_testd = train_test_split(Xd, yd, test_size=0.3)
# Scale inputs in a pipe
scalerd = MinMaxScaler()
estimatord = LogisticRegression(n_jobs=-1, solver="lbfgs", max_iter=10000)
piped = Pipeline(steps=[("scaler", scalerd), ("log", estimatord)])
# Try different classifications for the multiple classes
paramsd = {'log__multi_class':["ovr","multinomial"]}
clfd = GridSearchCV(piped, param_grid=paramsd, cv=3, n_jobs=-1)
clfd.fit(X_traind, y_traind.ravel())
bestd = clfd.best_estimator_
train_predsd = bestd.predict(X_traind)
print(bestd)
print(bestd.score(X_testd, y_testd))
#Create Confusion Matrix
from sklearn.metrics import confusion_matrix
plt.rcParams["figure.figsize"] = (10,10)
predsd = bestd.predict(X_testd)
corrd = confusion_matrix(y_testd,predsd)
sns.heatmap(corrd, annot=True, fmt="d")
```
# Building a Model in Helipad
In this walkthrough, we’ll build a very simple model of two goods where decentralized trading results in agents converging on an equilibrium price. Each period, agents pair off randomly and see if they can become better off by trading. If so, they trade. If not, they do nothing. In just a few rounds, agents – without knowing anything besides the people they’re trading with – converge on a single price for one good in terms of the other, and become better off in the process. You can see the [final model code here](https://github.com/charwick/helipad/blob/master/sample-models/pricediscover.py).
In order to get started, we’ll need to import the main Helipad class and initialize it, along with a few other functions we’ll need along the way.
```
from helipad import Helipad
from helipad.utility import CobbDouglas
from math import sqrt, exp, floor
import random
heli = Helipad()
heli.name = 'Price Discovery'
```
The variable `heli` will be the main way we’ll interact with the model. Helipad operates using [*hooks*](https://helipad.dev/glossary/hooks/), meaning that Helipad runs a loop and gives you the opportunity to insert your own logic into it. In this model, we’ll insert code to run in three places: (1) at the beginning of each period, (2) when an agent is created, and – most importantly – (3) each time an agent is activated.
## Outlining the Model
First, we’ll want to lay out the logic of the model, as well as the kinds of data we’ll want to collect as it runs. A rough outline might look like this:
1. At the beginning of the model, each agent gets a random amount of two goods. We’ll call them Shmoo (H) and Soma (M).
2. Agents have a Cobb-Douglas utility function over the two: *U*=*M*<sup>0.5</sup>*H*<sup>0.5</sup>. This means that more of each makes agents better off, but at a decreasing rate.
3. Each period, each agent finds a random partner.
4. If the agents in a pair find they can both become better off by trading, they trade some amount of Soma for some amount of Shmoo. Otherwise they do nothing.
5. At the end of each period, we’ll be interested in recording (1) how well off everyone is, (2) how much shmoo and soma got traded, and (3) the terms of trade – specifically, how divergent they are.
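The utility function in step 2 is simple enough to compute directly. A standalone sketch of the formula (not Helipad's `CobbDouglas` class, just the math it implements):

```python
def cobb_douglas(m, h, alpha=0.5):
    """U = M^alpha * H^(1-alpha); alpha=0.5 as in the model above."""
    return m**alpha * h**(1 - alpha)

print(cobb_douglas(4, 9))  # sqrt(4*9) = 6.0
```

With equal exponents, utility is just the geometric mean of the two holdings, which is why more of either good always helps, but at a decreasing rate.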
Helipad will take care of (3) for us:
```
heli.order = 'match'
```
This line configures the model to match agents each period and run them through a [`match`](https://helipad.dev/hooks/match) hook that we'll define later, rather than stepping through them individually. `'match'` is equivalent to `'match-2'`. We could also set [`heli.order`](https://helipad.dev/functions/model/#order) to `'match-3'` if we wanted it to group in triplets, or any other number. But for a trading model, and for most others, pairs are what we want.
For the others, we’ll hook (1) and (2) into [agent initialization](https://helipad.dev/hooks/agentinit), (4) into [agent activation](https://helipad.dev/hooks/agentstep), and (5) we’ll gather afterward.
Starting with (1), we might want to control the aggregate ratio of shmoo to soma before each run. Helipad has a control panel on which we can place parameters that control the model. Before writing the initialization hook, then, we’ll tell the Helipad object to add a parameter that we can control, and whose value we can use in our agents’ logic.
```
heli.addParameter('ratio', 'Log Endowment Ratio', 'slider', dflt=0, opts={'low': -3, 'high': 3, 'step': 0.5}, runtime=False)
#Make sure we don't get stray agents
heli.params['num_agent'].opts['step'] = 2
heli.params['num_agent'].opts['low'] = 2
heli.addGood('shmoo','#11CC00', (1, 1000))
heli.addGood('soma', '#CC0000', lambda breed: (1, floor(exp(heli.param('ratio'))*1000)))
```
Line 1 tells Helipad to add a slider parameter named ‘ratio’, with a default value of 0, and that moves in increments of 0.5 between -3 and 3. We'll be able to adjust this value in the control panel and use the value in the model. The `runtime` argument tells Helipad not to allow the parameter to be changed while the model is running, since it only affects the endowment at the beginning.
Helipad automatically creates slider parameters for the number of agents (more specifically, it creates a parameter for each *primitive*, but we aren't creating new primitives in this model, so we use the default primitive titled `agent`). In the two `heli.params['num_agent']` lines, we edit the options of this automatically created parameter so we can only create an even number of agents. Since this is a matching model, we don't want leftover agents!
We can access this automatically created parameter in Helipad's `params` property. The [`opts` property](https://helipad.dev/functions/param/#opts) of the [`Param` object](https://helipad.dev/functions/param/) that we access this way corresponds to the `opts` argument in the [`addParameter()` method](https://helipad.dev/functions/model/addparameter/) earlier. Here, we change the increment and the low value to equal 2.
The two `addGood` lines tell Helipad that our economy has two goods, named `'shmoo'` and `'soma'`. The [`addGood`](https://helipad.dev/functions/model/addgood) function gives each agent object a [`stocks` property](https://helipad.dev/functions/agent/#stocks), a dict with two items – `'soma'` and `'shmoo'` – that keep track of how much of each good each agent has. The second argument is a [hex color](https://www.w3schools.com/colors/colors_picker.asp); this tells Helipad to draw shmoo with a green line, and soma with a red one.
The third argument gives each agent an initial endowment. There are several ways we can do this. First, we could pass a number to give each agent the same amount. Second, we can pass a tuple with two items, and Helipad will endow the agent with a random amount between those two numbers. This is what we do for shmoo. For soma, on the other hand, we use the third possibility: a function that uses logic as complicated as necessary to determine how much each agent is to start with. In this case we want to endow the agent with a random amount between 1 and $1000 \times e^{ratio}$, with `heli.param('ratio')` retrieving the value of the slider parameter from line 1 (the value is wrapped in `floor()` in order to make sure we pass a whole number to `randint()`). `'ratio'`, therefore, controls the aggregate quantity of soma as compared to shmoo.
Why not just pass a tuple for soma too? The lambda function is necessary here because we want Helipad to check the value of the `ratio` parameter each time agents are initialized, since we might change the value between runs. If we simply used the tuple that the lambda function returns as the third argument, it would evaluate the expression using the value of the parameter *when the good was first added*. Since that's before we had a chance to change it in the control panel, it would lock us into the parameter's default value.
Now it's time to start using [hooks](https://helipad.dev/glossary/hooks/). [`agentInit`](https://helipad.dev/hooks/agentinit/) allows you to hook a function to run each time an agent is initialized. The easiest way to write a hook is to add the `@heli.hook` decorator to a function with the name of the hook.
```
@heli.hook
def agentInit(agent, model):
    agent.utility = CobbDouglas(['shmoo', 'soma'])
```
(Alternatively, we could name the function whatever we want, and decorate it with `@heli.hook('agentInit')`, or add it manually afterward with `heli.addHook('agentInit', agentInit)`, but the basic decorator is the easiest way.)
The `agentInit` hook passes two arguments to its function: the `agent` object – the agent being instantiated – and the general `model` object. Since Helipad takes care of matching and stocks of goods for us, all we need to do here is to give each agent a Cobb-Douglas utility function over two goods, with the default exponents of 0.5.
## The `match` Function
Next, we’ll move to the meat of the model, steps (3) and (4) above. Since we instantiated a match model above, this will be the [`match`](https://helipad.dev/hooks/match) function, which pairs agents off (if we had set `heli.order` to `'random'` or `'linear'` we would be using the [`agentStep`](https://helipad.dev/hooks/agentstep) hook).
Before we get to the function, however, we'll need to break out some microeconomics to determine how the two partners will interact. The basic tool to figure out opportunities for gains from trade is called an *Edgeworth Box*.

In an Edgeworth box, we plot two agents’ space of two goods, but we invert the second and place it on top of the first. Agent 1’s possessions are counted from the bottom-left axis, and agent 2’s are counted from the top-right axis. The height and width of the box, therefore, represent the total of the two goods between them, and a point in the box – for example, point E – represents four pieces of information: agent 1’s stock of shmoo ($H_1^E$), agent 1’s stock of soma ($M_1^E$), agent 2’s stock of shmoo ($H_2^E$), and agent 2’s stock of soma ($M_2^E$).
In this example, point E is our *endowment point*. These four values represent the amount of shmoo and soma, respectively, that the agents bring into the trade. Suppose this is period 1, so agent 2 has very little soma ($M_2^E$ is very small), agent 1 has a great deal, and they both have middling amounts of shmoo.
The blue curves represent the Cobb-Douglas utility function we gave agent 1 earlier. Each curve, called an *indifference curve*, indicates all the points that would give agent 1 the same utility. It slopes downward because at any point, agent 1 would be willing to trade *some* amount of soma for additional shmoo. Blue curves further out from O1 indicate higher utility. The same pertains to the red lines for agent 2, except that lines further out from O2 indicate higher utility.
At the endowment point E, agents 1 and 2 have utility corresponding to $U_1^E$ and $U_2^E$, respectively. How do we know if they can become better off by trading?
One result from microeconomics is that any two agents can become more satisfied by trading if their *marginal rates of substitution* between the two goods are different – that is, if the slopes of their indifference curves at E are different. This means that there exists some range of prices where agent 1 would be willing to sell something and agent 2 would be willing to buy it (and vice versa in terms of the other good), and both would be happy with this arrangement (i.e. the indifference curve would be pushed further out).
Marginal rates of substitution are equal between the two when their indifference curves are tangent to each other. The goal, then, is to find some trade of soma for shmoo that moves the two agents to a point in the Edgeworth box where their indifference curves would be tangent to one another.
The *contract curve* above is the set of all points in the Edgeworth box where the two curves would be tangent (*note that the contract curve is only a straight line between the two origins when the exponents on the Cobb-Douglas utility function are equal, as we ensured when instantiating the CobbDouglas object above*). In addition, we’ll want to find a point on the contract curve inside the lens made by the indifference curves sprouting from point E, as any other point would make one of them worse off (though if they had started from that point, there would be no further gains from trade).
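For the equal-exponent case, the tangency condition can be worked out directly. With $U=M^{0.5}H^{0.5}$, each agent's marginal rate of substitution is $M/H$, and equating the two agents' MRS (writing $M_T$, $H_T$ for the box totals) gives:

```latex
\frac{M_1}{H_1} = \frac{M_2}{H_2} = \frac{M_T - M_1}{H_T - H_1}
\;\Longrightarrow\; M_1 H_T = H_1 M_T
\;\Longrightarrow\; M_1 = \frac{M_T}{H_T}\, H_1
```

So the contract curve is the straight diagonal from $O_1$ to $O_2$, and the ratio $M_T/H_T$ is exactly the factor the `match` code below uses to convert between soma and shmoo coordinates.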
Any point on the contract curve inside that lens will do. For our purposes though, we’ll just split the difference. We’ll find the points where $U_1^E$ and $U_2^E$ hit the contract curve, and trade enough soma and shmoo to move to the midpoint between the two – namely, point T, which gives both parties higher utility $U_1^T$ and $U_2^T$.
We write this logic into the `match` function as follows:
```
@heli.hook
def match(agents, primitive, model, stage):
    u1e = agents[0].utility.calculate(agents[0].stocks)
    u2e = agents[1].utility.calculate(agents[1].stocks)

    #Get the endpoints of the contract curve
    #Contract curve isn't linear unless the CD exponents are both 0.5. If not, *way* more complicated
    cc1Soma = u1e * (sum([a.stocks['soma'] for a in agents])/sum([a.stocks['shmoo'] for a in agents])) ** 0.5
    cc2Soma = sum([a.stocks['soma'] for a in agents]) - u2e * (sum([a.stocks['soma'] for a in agents])/sum([a.stocks['shmoo'] for a in agents])) ** 0.5
    cc1Shmoo = sum([a.stocks['shmoo'] for a in agents])/sum([a.stocks['soma'] for a in agents]) * cc1Soma
    cc2Shmoo = sum([a.stocks['shmoo'] for a in agents])/sum([a.stocks['soma'] for a in agents]) * cc2Soma

    #Calculate demand: choose a random point on the contract curve
    r = random.random()
    somaDemand = r*cc1Soma + (1-r)*cc2Soma - agents[0].stocks['soma']
    shmooDemand = r*cc1Shmoo + (1-r)*cc2Shmoo - agents[0].stocks['shmoo']

    #Do the trades
    if abs(somaDemand) > 0.1 and abs(shmooDemand) > 0.1:
        agents[0].trade(agents[1], 'soma', -somaDemand, 'shmoo', shmooDemand)
        agents[0].lastPrice = -somaDemand/shmooDemand
        agents[1].lastPrice = -somaDemand/shmooDemand
    else:
        agents[0].lastPrice = None
        agents[1].lastPrice = None

    #Record data
    agents[0].utils = agents[0].utility.calculate(agents[0].stocks)
    agents[1].utils = agents[1].utility.calculate(agents[1].stocks)
```
The `match` hook sends four arguments: a list of matched agents (in this case two of them), their [primitive](https://helipad.dev/glossary/#primitive), the model object, and the current model stage. Since we haven't created any new primitives, `primitive` will always equal `'agent'`, the default primitive. We could also have the model run in multiple stages by setting `heli.stages`, but since we haven't done that, `stage` will always equal 1.
The first block finds the endpoints of the contract curve. `u1e`, `cc1Soma`, and `cc1Shmoo` solve for the point where agent 1’s endowment utility sits on the contract curve. If $U=M^{0.5}H^{0.5}$, then $U_1^E = \sqrt{M_1^E H_1^E}$, which the `CobbDouglas` object we instantiated earlier as `agent.utility` can calculate automatically. Solving for H and setting it equal to the equation for the contract curve gives us the endpoint, denoted by the point (`cc1Soma`, `cc1Shmoo`). An identical process with `u2e`, `cc2Soma`, and `cc2Shmoo` finds the other endpoint, where $U_2^E$ intersects the contract curve.
Having found these two endpoints, the next block selects a random point on the contract curve and subtracts the existing endowment in order to find the quantities necessary to move from point E to a point T. Geometrically, we can demonstrate that this gives both parties higher utility starting from *any point not already on the contract curve*. Note that, in order to get to the contract curve, one of `somaDemand` and `shmooDemand` will be positive, and the other negative.
Provided the amounts to be traded aren’t minuscule, the `trade()` call actually executes the trade. The `trade()` method of the agent class transfers `-somaDemand` soma from the agent, and gives `shmooDemand` shmoo to its partner. Note that the third and fifth arguments of `trade()` expect a supply and a demand for the two goods from the perspective of the first agent, which means it expects both to be positive or both to be negative (*Note: if one argument were negative and the other positive, one agent would be paying its partner one good in order to also take some of the other good. One of these goods, therefore, would be a bad rather than a good*). Above, however, we calculated the agent’s demand for both goods, one of which will be negative (i.e. a supply of it). We therefore negate `somaDemand` in the third argument to indicate a supply of soma.
We also record `lastPrice` and `utils` as properties of each agent, updating every period, in order to collect and display them later. The `if` block tells the agents to record the terms of trade as a `lastPrice` property, *if* they traded. Otherwise, the `else` block indicates that they didn’t trade, and shouldn’t be counted as part of that period’s average. The final two lines record the agents’ utilities from their new endowments of shmoo and soma.
## Data Collection and Visualization
For our purposes, we’ll be interested in keeping track of (1) average utility, (2) the volume of trade each period, and (3) the terms of trade. If we did things correctly, we’ll expect to see (1) utility rising, since agents won’t trade if they don’t become better off; (2) the volume of trade falling, since any two agents that find each other will be closer to the contract curve in later periods than in earlier periods, and (3) convergence on a single price.
The first thing we need to do is tell Helipad what kind of visualization we want. In this case we'll import `TimeSeries`, which lets us plot data over time. We'll use methods of the `viz` variable later to add plots and series.
```
from helipad.visualize import TimeSeries
viz = heli.useVisual(TimeSeries)
```
Any data we plot visually will have to be recorded with a *reporter,* which returns one value per model period. Fortunately Helipad keeps track of the volume of trade automatically when we use the [`trade()` function](https://helipad.dev/functions/agent/trade/) and registers the reporter for us. We've also kept track of utility and terms of trade at the end of our previous function, so now we need reporters to tell Helipad how to aggregate and display those properties. In our case, in addition to the volume of trade, we want Helipad to record *average* utility and *average* price, along with maximum and minimum prices (to see dispersion).
Helipad already records average utility by default by looking at the `utils` property of agents, so once we’ve recorded `utils` at the end of the `match` function above, that plotting is already taken care of. That leaves just the prices. In order to set up a plot, we’ll have a four-step process:
1. Aggregate the data each period,
2. Register the reporter so Helipad keeps track of it,
3. Set up a plot on which we can display series together, and
4. Register the reporter as a series, so we can visualize it, and place it on a plot.
Each of these is associated with a function. For 1, we use [`heli.data.agentReporter()`](https://helipad.dev/functions/data/agentreporter), which cycles over all the agents and computes a statistic based on a property name. We’ll use this in combination with [`heli.data.addReporter()`](https://helipad.dev/functions/data/addreporter) for step 2, which makes sure to record the result of `heli.data.agentReporter` each period and put it in the data output.
```
heli.data.addReporter('ssprice', heli.data.agentReporter('lastPrice', 'agent', stat='gmean', percentiles=[0,100]))
```
The second argument of `addReporter` must be a function that takes the model object. `agentReporter` generates such a function, which calculates the geometric mean of the value of the `lastPrice` property of each agent (the geometric mean, rather than the arithmetic mean, is appropriate for relative prices, since 0.1 should be the same distance from 1 as 10), along with additional series for the 0<sup>th</sup> and 100<sup>th</sup> percentiles (the minimum and maximum values). This column of the data can then be plotted and referred back to in Helipad’s data functions with the name `'ssprice'`.
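The choice of geometric mean matters for ratios; a quick check with the standard library (the two prices here are hypothetical):

```python
from statistics import geometric_mean, mean

prices = [0.1, 10]  # "equally far" from 1 in ratio terms: one tenth vs ten times
print(mean(prices))            # 5.05 - the arithmetic mean is pulled toward the high price
print(geometric_mean(prices))  # 1.0 - the geometric mean treats 0.1 and 10 symmetrically
```

This is the same reason the price plot below uses a logarithmic scale.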
In order to visualize our data in real time, we’ll need somewhere to put it – step 3. This is called a plot, and is registered using the [`addPlot`](https://helipad.dev/functions/model/addplot) method of our visualization class that we stored in the `viz` variable. Finally, for step 4, we’ll create a series, which tells Helipad to display a reporter on the plot we specify.
```
pricePlot = viz.addPlot('price', 'Price', logscale=True, selected=True)
pricePlot.addSeries('ssprice', 'Soma/Shmoo Price', '#119900')
```
Line 1 gives us a place to put our price series, lets us refer to it in code as `'price'`, and labels it 'Price'. Since we’re looking at a price ratio, for the same reason we used a geometric mean, we’ll also want to display it on a logarithmic scale. Finally, line 2 tells Helipad to use the `ssprice` reporter we set up earlier, draw it on the `'price'` plot from the first line, label it 'Soma/Shmoo Price', and color it `#119900`, a medium-dark green.
All of the data registered as a reporter can be accessed algorithmically within the model, or after the model runs, to integrate with the statistical techniques from the previous chapter. The entire data output can be accessed as a Pandas dataframe using [`heli.data.dataframe`](https://helipad.dev/functions/data/#dataframe). Particular values of a series can be accessed using [`heli.data.getLast()`](https://helipad.dev/functions/data/getlast) (see below).
## Final Touches
The model as it stands is ready to run. Just a few more niceties before finishing. First, because this model describes a decentralized convergence to equilibrium, there is little point in keeping it running once it gets sufficiently close to equilibrium.
Helipad has a built-in configuration parameter `'stopafter'` that displays as a [checkentry](https://helipad.dev/functions/checkentry/) in the control panel. If `stopafter` is `False`, the model runs forever. If `stopafter` is an integer, the model stops after that many periods.
However, `stopafter` can also be set to a string that points to an [*event*](https://helipad.dev/functions/model/addevent/) name, in order for us to establish more complex stopping conditions. Events are items that trigger when a certain criterion is satisfied. Ordinarily they draw a vertical line on our plot function, but here we want to *stop* the model when the criterion is satisfied.
```
#Stop the model when we're basically equilibrated
@heli.event
def stopCondition(model):
return model.t > 1 and model.data.getLast('demand-shmoo') < 20 and model.data.getLast('demand-soma') < 20
heli.param('stopafter', 'stopCondition')
```
In this case, we stop the model when (1) the current time is greater than 1, (2) when the total shmoo traded is less than 20, and (3) when the total soma traded is also less than 20. Given that we gave each agent a random endowment of up to 1000 of each, and that the default number of agents is 50, a total volume of trade under 20 is quite low, comparatively. We write a function that returns `False` until all three of these things are true, then register it as an event with the [`@heli.event` decorator](https://helipad.dev/functions/model/event/). Then, we set the `stopafter` parameter to the name of the event we registered.
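Conceptually, a string-valued `stopafter` just redirects the stop check from a period count to a named predicate. The sketch below (a toy illustration, not Helipad's actual internals) shows how the three value types could be dispatched each period:

```python
# A sketch (not Helipad's actual internals) of how a 'stopafter' value of
# False, an integer, or an event name could be dispatched each period.
class ToyModel:
    def __init__(self):
        self.t = 0

    def step(self):
        self.t += 1

def run(model, events, stopafter):
    while True:
        model.step()
        if stopafter is False:
            continue              # run forever
        if isinstance(stopafter, str):
            if events[stopafter](model):
                return model.t    # named event fired
        elif model.t >= stopafter:
            return model.t        # fixed number of periods reached

events = {'stopCondition': lambda m: m.t > 5}
print(run(ToyModel(), events, 'stopCondition'))  # 6
```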
Finally, a line to make sure only the plots with actual series are selected by default in the control panel.
```
for p in ['demand', 'utility']: viz.plots[p].active(True)
```
We also set the refresh rate parameter to 1 in order to see it play out live. Ordinarily this would make the model quite slow, but because it equilibrates so quickly (less than 20 periods), the default refresh rate of 20 would just show the final result.
```
heli.param('refresh', 1)
```
And with that, our model is complete! Launch the control panel if you want to adjust the parameters visually before running the model.
```
heli.launchCpanel()
```
Once we've set our parameters to our liking, we can actually run the model. [`heli.start()`](https://helipad.dev/functions/model/start/) would run the model without the plots; `heli.launchVisual()` starts the model along with the plotting.
```
heli.launchVisual()
```
From this output, it looks like all three of our predictions were validated: (1) utility is rising as agents accomplish more trade, (2) price dispersion narrows as agents converge on an equilibrium price, and (3) the volume of trade declines as agents get closer and closer to equilibrium. It only takes about 15 periods to get a total volume of trade below 20!
From here you can explore the model in various ways:
1. Adjust the parameters in the control panel and re-run the model by running the last line again. See how the endowment ratio affects the equilibrium price, for example.
2. Click on the top plot with the `demand` series and press 'L' to toggle a logarithmic scale on the vertical axis. The resulting lines are approximately linear, meaning the demand function with respect to time takes the form *e*<sup>-*t*</sup>.
3. Add other settings to see how they affect the model. For example, add a slider parameter to control the probability that an agent trades in a given period (i.e. only execute the `match` function with a certain probability). If only half of agents trade each period, for example, the convergence to equilibrium would be slower. Or you might split the difference on the contract curve differently, to see how differences in bargaining power affect the process of equilibration.
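Item 2's observation can be checked numerically: if demand decays as *D*(*t*) = *D*₀*e*<sup>-*kt*</sup>, its logarithm is linear in *t*, which is exactly why the log-scaled series plots as a straight line (the *D*₀ and *k* below are illustrative, not the model's actual values).

```python
import numpy as np

# If demand decays as D(t) = D0*exp(-k*t), then log D(t) is linear in t,
# so the log-scaled series plots as a straight line. (D0, k illustrative.)
t = np.arange(20)
demand = 1000 * np.exp(-0.3 * t)
slope, intercept = np.polyfit(t, np.log(demand), 1)
print(round(slope, 3))  # -0.3, i.e. the decay rate k is the line's slope
```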
One final note: all this code will also work as a standalone Python app with a Tkinter frontend if you run it as a file. The only difference is that the last line `heli.launchVisual()` is unnecessary outside of a Jupyter notebook, as the Tkinter control panel provides a button to launch the plots.
[**See the model code put together ▸**](https://github.com/charwick/helipad/blob/master/sample-models/pricediscover.py)
| github_jupyter |

# Data Science Projects with SQL Server Machine Learning Services
## 06 Customer Acceptance and Model Retraining
<p style="border-bottom: 1px solid lightgrey;"></p>
<dl>
<dt>Course Outline</dt>
<dt>1 Overview and Course Setup</dt>
<dt>2 Business Understanding</dt>
<dt>3 Data Acquisition and Understanding</dt>
<dt>4 Modeling</dt>
<dt>5 Deployment</dt>
<dt>6 Customer Acceptance and Model Retraining <i>(This section)</i></dt>
<dd>6.1 Call the Prediction from a Stored Procedure</dd>
<dd>6.2 Close out the project</dd>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
The final phase involves testing the model predictions on real-world queries to ensure that the model meets all requirements. In this phase you will also document the project so that all parameters are well known. Finally, a mechanism for retraining the model is evaluated. You will not cover the retraining portion of the process in this course.
### Goals for Customer Acceptance
- Confirm that the pipeline, the model, and their deployment in a production environment satisfy the customer's objectives
- Create a project close out document
- Create a path for retraining your model
### How to do it
- System validation: Confirm that the deployed model and pipeline meet the customer's needs.
- Project hand-off: Hand the project off to the entity that's going to run the system in production
- Develop a "ground truth" mechanism and feed the new labels (if applicable) back into the retraining API
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"><b>6.1 Test the predictions with a Stored Procedure</b></p>
Using the binary Model you created, you can now allow users to make calls to the system for predictions. In the code that the application runs, you need to send along the Features the model expects and accept the returned value(s) as the prediction. Alternatively, you could store the results in a table or another SQL Server object.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/aml-logo.png"><b>Activity: Make a call with features sent to the Model's Stored Procedure</b></p>
- Connect to SQL Server with a SQL query tool, and run the following code that sends five Features to the Model:
<pre>
/* Execute the predict_rentals stored proc and pass the model name
and a query string with a set of features we want to use to predict
the rental count */
EXEC dbo.predict_rentalcount_new @model = 'rxDTree',
@q ='SELECT CONVERT(INT, 3) AS Month, CONVERT(INT, 24) AS Day, CONVERT(INT, 4) AS WeekDay, CONVERT(INT, 1) AS Snow, CONVERT(INT, 1) AS Holiday';
GO
</pre>
You'll get back a prediction showing how many rentals are expected.
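In an application, the `@q` feature string would typically be assembled from program values rather than typed by hand. A minimal Python sketch (the feature names match the query above; everything else is illustrative):

```python
# A minimal sketch (names illustrative) of assembling the @q feature
# string from Python values before invoking the stored procedure.
features = {'Month': 3, 'Day': 24, 'WeekDay': 4, 'Snow': 1, 'Holiday': 1}
q = 'SELECT ' + ', '.join(
    f'CONVERT(INT, {v}) AS {k}' for k, v in features.items())
print(q)
```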
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"><b>6.2 Close out the Project</b></p>
To complete the project, document the steps, findings, and results. In the activity that follows, you'll find a complete document reference for this process.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/aml-logo.png"><b>Activity: Create a Project Closeout Document</b></p>
- Open the `../assets/ProjectCloseout.md` file and fill in the fields from your project.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/thinking.jpg"><b>For Further Study</b></p>
- Learn more about closing out a Data Science project here: https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/lifecycle-acceptance
Congratulations! You have completed this introductory course. As you can see, there is a great deal more to learn. The best way to do that is use what you have learned here and apply it to a real-world scenario. Try out your new skills and use the references and the materials in the ./assets folder in your journey.
| github_jupyter |
<a href="https://colab.research.google.com/github/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit/blob/master/Week%2006%20Linear%20Classification/perceptron_at_work/perceptron_at_work.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# The Perceptron algorithm at work
In this notebook, we will look in detail at the Perceptron algorithm for learning a linear classifier in the case of binary labels.
# Clone remote
```
import os, sys
from pathlib import Path
URL = "https://github.com/Data-Science-and-Data-Analytics-Courses/UCSanDiegoX---Machine-Learning-Fundamentals-03-Jan-2019-audit"
NBDIR = "Week 06 Linear Classification/perceptron_at_work"
def clone(url, dest=".", branch="master", reloc=True):
"""
Clone remote branch from url into dest
branch not provided: clone all branches
reloc is True: relocate to repository
"""
url = url.strip(" /")
repo = Path(dest, os.path.basename(url)).resolve()
# dest must not be inside existing repository
is_out = !git -C "$dest" rev-parse
if not is_out: # inside repository
raise ValueError("Can't clone into existing repository")
# Clone
p = repo.as_posix()
if branch: # specific branch
!git clone --single-branch "$url" -b "$branch" "$p"
else: # all branches
!git clone "$url" "$p"
# Relocate
if reloc:
%cd "$repo"
return repo.as_posix()
REPO = clone(URL)
%run .Importable.ipynb
sys.path.append(REPO)
%cd "$NBDIR"
```
## 1. The algorithm
This first procedure, **evaluate_classifier**, takes as input the parameters of a linear classifier (`w,b`) as well as a data point (`x`) and returns the prediction of that classifier at `x`.
The prediction is:
* `1` if `w.x+b > 0`
* `-1` if `w.x+b <= 0`
```
def evaluate_classifier(w,b,x):
    if (np.dot(w,x) + b) > 0:
        return 1
    return -1
```
Here is the Perceptron training procedure. It is invoked as follows:
* `w,b,converged = train_perceptron(x,y,n_iters)`
where
* `x`: n-by-d numpy array with n data points, each d-dimensional
* `y`: n-dimensional numpy array with the labels (each 1 or -1)
* `n_iters`: the training procedure will run through the data at most this many times (default: 100)
* `w,b`: parameters for the final linear classifier
* `converged`: flag (True/False) indicating whether the algorithm converged within the prescribed number of iterations
If the data is not linearly separable, then the training procedure will not converge.
```
def train_perceptron(x,y,n_iters=100):
n,d = x.shape
w = np.zeros((d,))
b = 0
done = False
converged = True
iters = 0
np.random.seed(None)
while not(done):
done = True
I = np.random.permutation(n)
for i in range(n):
j = I[i]
if (evaluate_classifier(w,b,x[j,:]) != y[j]):
w = w + y[j] * x[j,:]
b = b + y[j]
done = False
iters = iters + 1
if iters > n_iters:
done = True
converged = False
if converged:
print("Perceptron algorithm: iterations until convergence: ", iters)
else:
print("Perceptron algorithm: did not converge within the specified number of iterations")
return w, b, converged
```
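The mistake-driven update at the heart of the procedure (`w ← w + y·x`, `b ← b + y` on each misclassified point) can be sanity-checked in isolation on two trivially separable points:

```python
import numpy as np

# Sanity check of the update rule (w <- w + y*x, b <- b + y) on two
# trivially separable points.
x = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1, -1])
w, b = np.zeros(2), 0.0
for _ in range(10):                        # at most 10 passes
    mistakes = 0
    for xi, yi in zip(x, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on boundary)
            w, b = w + yi * xi, b + yi
            mistakes += 1
    if mistakes == 0:                      # converged
        break
print(w, b)  # [1. 1.] 1.0
assert all(np.sign(np.dot(w, xi) + b) == yi for xi, yi in zip(x, y))
```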
## 2. Experiments with the Perceptron
We start with standard includes.
```
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rc('xtick', labelsize=14)
matplotlib.rc('ytick', labelsize=14)
```
The directory containing this notebook should also contain the two-dimensional data files, `data_1.txt` and `data_2.txt`. These files contain one data point per line, along with a label, like:
* `3 8 1` (meaning that point `x=(3,8)` has label `y=1`)
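This whitespace-delimited format parses directly with `np.loadtxt`; a small sketch using an in-memory string in place of a file:

```python
import numpy as np
from io import StringIO

# Parsing the `x1 x2 label` format: the first two columns form the point,
# the last column is the label (fed from a string here instead of a file).
sample = StringIO("3 8 1\n1 2 -1\n")
data = np.loadtxt(sample)
x, y = data[:, 0:2], data[:, 2]
print(x.tolist(), y.tolist())  # [[3.0, 8.0], [1.0, 2.0]] [1.0, -1.0]
```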
The next procedure, **run_perceptron**, loads one of these data sets, learns a linear classifier using the Perceptron algorithm, and then displays the data as well as the boundary.
```
def run_perceptron(datafile):
data = np.loadtxt(datafile)
n,d = data.shape
# Create training set x and labels y
x = data[:,0:2]
y = data[:,2]
# Run the Perceptron algorithm for at most 100 iterations
w,b,converged = train_perceptron(x,y,100)
# Determine the x1- and x2- limits of the plot
x1min = min(x[:,0]) - 1
x1max = max(x[:,0]) + 1
x2min = min(x[:,1]) - 1
x2max = max(x[:,1]) + 1
plt.xlim(x1min,x1max)
plt.ylim(x2min,x2max)
# Plot the data points
plt.plot(x[(y==1),0], x[(y==1),1], 'ro')
plt.plot(x[(y==-1),0], x[(y==-1),1], 'k^')
# Construct a grid of points at which to evaluate the classifier
if converged:
grid_spacing = 0.05
xx1, xx2 = np.meshgrid(np.arange(x1min, x1max, grid_spacing), np.arange(x2min, x2max, grid_spacing))
grid = np.c_[xx1.ravel(), xx2.ravel()]
Z = np.array([evaluate_classifier(w,b,pt) for pt in grid])
# Show the classifier's boundary using a color plot
Z = Z.reshape(xx1.shape)
plt.pcolormesh(xx1, xx2, Z, cmap=plt.cm.PRGn, vmin=-3, vmax=3)
plt.show()
```
Let's run this on `data_1.txt`. Try running it a few times; you should get slightly different outcomes, because of the randomization in the learning procedure.
```
run_perceptron('data_1.txt')
```
And now, let's try running it on `data_2.txt`. *What's going on here?*
```
run_perceptron('data_2.txt')
```
### 3. For you to do
<font color="magenta">Design a data set</font> with the following specifications:
* there are just two data points, with labels -1 and 1
* the two points are distinct, with coordinate values in the range [-1,1]
* the Perceptron algorithm requires more than 1000 iterations to converge
```
iters = 1000
x1 = [0.1, 0.2]
x2 = [0.1001, 0.2001]
x = np.array([x1, x2])
y = np.array([-1, 1])
w, b, converged = train_perceptron(x, y, iters)
print(converged)
```
| github_jupyter |
# Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] [Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012](https://arxiv.org/abs/1207.0580)
```
# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
# Dropout forward pass
In the file `cs231n/layers.py`, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
```
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.25, 0.4, 0.7]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
```
# Dropout backward pass
In the file `cs231n/layers.py`, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
```
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.2, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
# Error should be around e-10 or less
print('dx relative error: ', rel_error(dx, dx_num))
```
## Inline Question 1:
What happens if we do not divide the values being passed through inverse dropout by `p` in the dropout layer? Why does that happen?
## Answer:
The average output activations will be smaller than the inputs, which is not what the dropout layer intends: the input and output of the dropout layer should have similar average values.
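A quick numerical check of this scaling argument (a standalone sketch, independent of the assignment code): dividing the kept activations by the keep probability `p` preserves the expected activation.

```python
import numpy as np

# Inverted dropout: dividing kept activations by the keep probability p
# preserves the expected activation, so test time needs no rescaling.
np.random.seed(0)
x = np.ones((1000, 1000))
p = 0.5                                    # keep probability
mask = (np.random.rand(*x.shape) < p) / p  # entries are 0 or 1/p
out = x * mask
print(x.mean(), round(out.mean(), 2))      # both ≈ 1.0
```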
# Fully-connected nets with Dropout
In the file `cs231n/classifiers/fc_net.py`, modify your implementation to use dropout. Specifically, if the constructor of the net receives a value that is not 1 for the `dropout` parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
```
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [1, 0.75, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
# Relative errors should be around e-6 or less; Note that it's fine
# if for dropout=1 you have W2 error be on the order of e-5.
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
```
# Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a keep probability of 0.25. We will then visualize the training and validation accuracies of the two networks over time.
```
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [1, 0.25]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
```
## Inline Question 2:
Compare the validation and training accuracies with and without dropout -- what do your results suggest about dropout as a regularizer?
## Answer:
After adding dropout, the training accuracy becomes smaller since some of the neurons are turned off. However, the validation set performs better because the model is more generalized. This behavior is similar to a regularizer, which reduces overfitting (lowering training-set accuracy, increasing val/test-set accuracy).
## Inline Question 3:
Suppose we are training a deep fully-connected network for image classification, with dropout after hidden layers (parameterized by keep probability p). How should we modify p, if at all, if we decide to decrease the size of the hidden layers (that is, the number of nodes in each layer)?
## Answer:
If the number of neurons decreases, we can increase p so that fewer neurons are dropped.
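This intuition can be made concrete: the expected number of active neurons per forward pass is `p * n`, so a smaller layer can raise `p` to keep that count comparable (the numbers below are illustrative, not from the assignment).

```python
# Expected active neurons per pass is keep_prob * n, so a smaller layer
# can raise p to keep that count comparable. (Numbers illustrative.)
n, p = 1024, 0.5
n_small = 512
p_small = min(1.0, n * p / n_small)  # raise p for the smaller layer
print(n * p, n_small * p_small)      # 512.0 512.0
```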
| github_jupyter |
# Friendship Paradox
#### Author: [Erika Fille Legara](https://erikalegara.site)
[](https://github.com/eflegara/Network-Science-Lectures/blob/master/LICENSE.md)
---
<table align="left" border=0>
<!-- <table class="tfo-notebook-buttons" align="left"> -->
<td>
<a target="_blank" href="https://colab.research.google.com/github/eflegara/Network-Science-Lectures/blob/master/Friendship%20Paradox.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/eflegara/Network-Science-Lectures/blob/master/Friendship%20Paradox.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
---
Do you think you have more friends than your friends, on average? Or do they have more friends than you have, on average?
In 1991, sociologist [Scott L. Feld](https://scholar.google.com/citations?user=Qh24zNEAAAAJ&hl=en) observed and [reported](https://www.journals.uchicago.edu/doi/10.1086/229693) the [friendship paradox](https://en.wikipedia.org/wiki/Friendship_paradox), which says that most people have fewer friends than their friends have. Do you believe this?
Let's investigate.
```
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
G = nx.barabasi_albert_graph(1500,m=5)
yourfriends = []
foyf = []
for i in G.nodes():
neighbors = list(G.neighbors(i))
deg_size = []
for n in neighbors:
deg_size.append(G.degree[n])
yourfriends.append(G.degree[i])
foyf.append(np.mean(deg_size))
plt.figure(figsize=(10,5))
plt.plot(yourfriends, foyf,
'r.', alpha=0.3, markersize=8)
plt.xlabel('Number of Your Friends');
plt.ylabel('Average Number of Friends of Your Friends');
g = sns.JointGrid(x=yourfriends, y=foyf)
g.plot_joint(sns.kdeplot, fill=True)
g.plot_marginals(sns.boxplot);
g.ax_joint.set_xlabel('Number of Your Friends',
fontweight='bold');
g.ax_joint.set_ylabel('Average Number of Friends of Your Friends',
fontweight='bold');
print('In this phenomenological network, individuals \
have around {:.0f} friends while their friends have around {:.0f} \
friends, on average!'.format(int(np.mean(yourfriends)),
int(np.mean(foyf))))
```
Indeed, from the plot, we can see that the number of friends individuals have is, on average, fewer than that of their friends.
Why do you think this is the case?
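One way to see why: high-degree nodes are over-represented among "friends," because each one appears in many neighbor lists. A star graph (a toy example, not from this notebook's data) makes this transparent:

```python
from statistics import mean

# A star graph makes the paradox transparent: the hub's high degree is
# counted once in its own average but once per leaf among friends' degrees.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
deg = {v: len(nb) for v, nb in star.items()}
avg_deg = mean(deg.values())
avg_friends_of_friends = mean(mean(deg[n] for n in nb)
                              for v, nb in star.items())
print(avg_deg, avg_friends_of_friends)  # 1.6 3.4
```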
## How about in the real-world?
Yes, we used a model network to show the so-called friendship paradox. Do you think the same is observed for many real-world networks?
```
def check_paradox(G):
yourfriends = []
foyf = []
for i in G.nodes():
neighbors = list(G.neighbors(i))
deg_size = []
for n in neighbors:
deg_size.append(G.degree[n])
# just make sure to include individuals with neighbors
if deg_size:
yourfriends.append(G.degree[i])
foyf.append(np.mean(deg_size))
g = sns.JointGrid(x=yourfriends, y=foyf)
g.plot_joint(sns.kdeplot, fill=True)
g.plot_marginals(sns.boxplot);
g.ax_joint.set_xlabel('Your Degree of Connectivity',
fontweight='bold');
g.ax_joint.set_ylabel('Average Degree of Your Connections',
fontweight='bold');
print('In this phenomenological network, individuals have around {:.0f} connections while their connections have around {:.0f} connections, on average!'.format(int(np.mean(yourfriends)),
int(np.mean(foyf))))
return
G = nx.read_gml("./datasets/polblogs copy.gml")
G = G.to_undirected()
G = nx.Graph(G)
check_paradox(G)
G = nx.read_gml("./datasets/netscience.gml")
G = G.to_undirected()
G = nx.Graph(G)
check_paradox(G)
G = nx.read_gml("./datasets/celegansneural.gml")
G = G.to_undirected()
G = nx.Graph(G)
check_paradox(G)
```
| github_jupyter |
### The purpose of this notebook is to load the referential data of `chairs-in-context` and package them in a pandas dataframe along with other simple data structures, like dictionaries that map integers to tokens. Having access to these pre-processed data is the first step before you start training neural listeners and speakers.
```
import numpy as np
import os.path as osp
import pandas as pd
from shapeglot.simple_utils import unique_rows, unpickle_data, pickle_data, invert_dictionary, sort_dict_by_val
from shapeglot.in_out.game_data_preprocessing import preprocess_geometry, preprocess_language
from shapeglot.in_out.game_data_preprocessing import basic_game_statistics
from shapeglot import vis_utils
from shapeglot.vis_utils import visualize_game_example
%load_ext autoreload
%autoreload 2
%matplotlib inline
```
### Define the data files
```
# After running download_data.sh this is where the shapeglot_data should be.
top_data_dir = '../../data/main_data_for_chairs'
# Downloaded files that will be used to prepare the data:
game_interactions = osp.join(top_data_dir, 'language/shapenet_chairs.csv')
misspelling_corrector = osp.join(top_data_dir, 'language/word_spell_manual_corrector_chairs.pkl')
top_image_dir = osp.join(top_data_dir, 'images/shapenet')
vis_utils.top_image_dir = top_image_dir
```
### User defined settings
```
tokenizer = 'naive' # ['naive' or 'spacy'] (you can use spacy if it is installed)
# Replace rare words with <UNK>
replace_rare = 1 # A word occurring at most this many times is rare. (use 0 to keep all words)
# For comparative and superlative adjectives, break their ending:
# nicer -> ['nice', 'er'], nicest -> ['nice', 'est']
do_compar_superlative = False # (if True, assumes nltk is installed)
# Apply some spell-checking we manually created,
# or use 'spell_corrector=None' to ignore spelling mistakes.
spell_corrector = next(unpickle_data(misspelling_corrector, python2_to_3=True))
```
### Load and preprocess the referential data
```
# Load the data
game_data = pd.read_csv(game_interactions)
# Convert ShapeNet code-names to integers.
game_data, sn_model_to_int = preprocess_geometry(game_interactions)
# Tokenize/process utterances.
game_data, word_to_int = preprocess_language(game_data,
spell_corrector=spell_corrector,
replace_rare=replace_rare,
tokenizer=tokenizer,
do_compar_superlative=do_compar_superlative)
print('Vocabulary size:', len(word_to_int))
# Make some auxiliary data-structures that are helpful for accessing the data
int_to_sn_model = invert_dictionary(sn_model_to_int)
sorted_sn_models = [k[1] for k in sort_dict_by_val(int_to_sn_model)]
int_to_word = invert_dictionary(word_to_int)
## print some basic statistics of the resulting data.
basic_game_statistics(game_data, word_to_int)
```
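The dictionary helpers used above can be sketched as follows (assumed semantics for illustration, not the library's actual code):

```python
# Assumed semantics of the helpers: invert_dictionary flips keys and
# values; sort_dict_by_val returns (key, value) pairs ordered by value.
def invert_dictionary(d):
    return {v: k for k, v in d.items()}

def sort_dict_by_val(d):
    return sorted(d.items(), key=lambda kv: kv[1])

word_to_int = {'chair': 1, '<UNK>': 0, 'thin': 2}
int_to_word = invert_dictionary(word_to_int)
print(int_to_word[1], sort_dict_by_val(word_to_int)[0])  # chair ('<UNK>', 0)
```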
### Visualize the resulting triplets:
```
rid = np.random.randint(len(game_data))
visualize_game_example(game_data, rid, sorted_sn_models, int_to_word)
# save the data to the top data directory as a pkl
save_file = osp.join(top_data_dir, 'game_data.pkl')
pickle_data(save_file, game_data, word_to_int, int_to_word, int_to_sn_model, sn_model_to_int, sorted_sn_models)
```
| github_jupyter |
# VMEC Python Interface
This notebook introduces the user to the VMEC Python interface. This is accomplished by using Python's `ctypes` library to directly access a statically linked version of libstell, as compiled with the VMEC distribution.
First, we test whether we can load the library.
```
from libstell import *
```
Now we test reading a file.
```
from libstell import * #Import the LIBSTELL Library
from matplotlib.pyplot import * #Import matplotlib.pyplot for plotting
from math import * #For some constants
import numpy as np #For Arrays
# Now use LIBSTELL to read a wout file.
v=libstell.read_vmec('../BENCHMARKS/VMEC_TEST/wout_ATF.nc')
```
Here we show you how to use the built-in functions to declare the $\theta$ and $\zeta$ grids and then Fourier-transform quantities.
```
# We now Fourier transform the data into points in real space on an [ns,nu,nv] grid
nu=32
nv=15*v['nfp']
ns=v['ns']
theta = np.ndarray((nu,1))
zeta = np.ndarray((nv,1))
for i in range(nu): theta[i]=2*pi*i/(nu-1)
for i in range(nv): zeta[i]=2*pi*i/(nv-1)
r=libstell.cfunct(theta,zeta,v['rmnc'],v['xm'],v['xn'])
z=libstell.sfunct(theta,zeta,v['zmns'],v['xm'],v['xn'])
b=libstell.cfunct(theta,zeta,v['bmnc'],v['xm'],v['xn'])
g=libstell.cfunct(theta,zeta,v['gmnc'],v['xm'],v['xn'])
currv=libstell.cfunct(theta,zeta,v['currvmnc'],v['xm'],v['xn'])
```
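The transforms above reconstruct real-space fields from Fourier coefficients. A small sketch of what a cosine transform like `cfunct` is assumed to compute, namely $f(s,u,v) = \sum_{mn} F_{mn}(s)\cos(m u - n v)$ (the sign convention and function behavior here are assumptions for illustration; `sfunct` would use sine instead):

```python
import numpy as np

# Sketch of the assumed cosine reconstruction:
#   f(s, u, v) = sum_mn F_mn(s) * cos(m*u - n*v)
def cfunct_sketch(theta, zeta, fmnc, xm, xn):
    ns = fmnc.shape[0]
    f = np.zeros((ns, len(theta), len(zeta)))
    for s in range(ns):
        for mn in range(len(xm)):
            angle = xm[mn] * theta[:, None] - xn[mn] * zeta[None, :]
            f[s] += fmnc[s, mn] * np.cos(angle)
    return f

# A single (m=1, n=0) mode with amplitude 2 reproduces 2*cos(theta).
theta = np.linspace(0, 2 * np.pi, 5)
zeta = np.array([0.0])
f = cfunct_sketch(theta, zeta, np.array([[2.0]]), [1], [0])
print(np.round(f[0, :, 0], 3))  # ≈ [2, 0, -2, 0, 2]
```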
Now we plot the 1D values from the VMEC wout file.
```
# This section shows how to plot various 1D quantities. We did not need to Fourier transform to plot these.
fig = figure(figsize=(10,10))
ax1 = fig.add_subplot(2,2,1, adjustable='box', xlabel='Norm. Flux (s)',ylabel='Pressure [kpA]')
ax1.plot(v['presf'])
ax2 = fig.add_subplot(2,2,2, adjustable='box', xlabel='Norm. Flux (s)',ylabel=' $\iota$')
ax2.plot(v['iotaf'])
ax3 = fig.add_subplot(2,2,3, adjustable='box', xlabel='Norm. Flux (s)',ylabel=' <$j_u$>')
ax3.plot(v['jcuru'])
ax4 = fig.add_subplot(2,2,4, adjustable='box', xlabel='Norm. Flux (s)',ylabel='<$j_v$>')
ax4.plot(v['jcurv'])
show()
# Now let's make some plots of the flux surfaces of the equilibrium in three cross sections
fig = figure(figsize=(10,10))
ax1 = fig.add_subplot(2,2,1, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
ax1.plot(r[ns-1,:,0],z[ns-1,:,0],color='red')
ax1.plot(r[0,0,0],z[0,0,0],'+',color='red')
ax2 = fig.add_subplot(2,2,2, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
dex = round(nv/(4*v['nfp']))
ax2.plot(r[ns-1,:,dex],z[ns-1,:,dex],color='red')
ax2.plot(r[0,0,dex],z[0,0,dex],'+',color='red')
ax3 = fig.add_subplot(2,2,3, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
dex = round(nv/(2*v['nfp'])) # Note it helps if nv is odd
ax3.plot(r[ns-1,:,dex],z[ns-1,:,dex],color='red')
ax3.plot(r[0,0,dex],z[0,0,dex],'+',color='red')
ax2 = fig.add_subplot(2,2,4, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
dex = 0 # Note it helps if nv is odd
ax2.plot(r[ns-1,:,dex],z[ns-1,:,dex],color='red')
ax2.plot(r[0,0,dex],z[0,0,dex],'+',color='red')
dex = round(nv/(4*v['nfp'])) # Note it helps if nv is odd
ax2.plot(r[ns-1,:,dex],z[ns-1,:,dex],color='green')
ax2.plot(r[0,0,dex],z[0,0,dex],'+',color='green')
dex = round(nv/(2*v['nfp'])) # Note it helps if nv is odd
ax2.plot(r[ns-1,:,dex],z[ns-1,:,dex],color='blue')
ax2.plot(r[0,0,dex],z[0,0,dex],'+',color='blue')
show()
# Now we plot color cross sections of various quantities
fig = figure(figsize=(10,10))
ax1 = fig.add_subplot(2,2,1, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
ax1.pcolormesh(r[:,:,0],z[:,:,0],b[:,:,0],cmap='jet',shading='gouraud')
ax1 = fig.add_subplot(2,2,2, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
ax1.pcolormesh(r[:,:,int(nv/6)],z[:,:,int(nv/6)],b[:,:,int(nv/6)],cmap='jet',shading='gouraud')
ax1 = fig.add_subplot(2,2,3, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
ax1.pcolormesh(r[:,:,0],z[:,:,0],currv[:,:,0]/g[:,:,0],cmap='hot',shading='gouraud')
ax1 = fig.add_subplot(2,2,4, adjustable='box', xlabel='R [m]',ylabel='Z [m]',aspect='equal')
ax1.pcolormesh(r[:,:,int(nv/6)],z[:,:,int(nv/6)],currv[:,:,int(nv/6)]/g[:,:,int(nv/6)],cmap='hot',shading='gouraud')
show()
#help(cm)
# Of course there is a much easier way to do this.
libstell.torocont(r,z,b,0)
# This is also a much simpler way to plot cross sections
libstell.toroslice(r,0,z,range(0,49))
# There is also an easy way to plot isosurfaces.
from mayavi import *
h=libstell.isotoro(r,z,zeta,[48])
h=libstell.isotoro(r,z,zeta,[48],b)
```
That's it for the VMEC wout stuff. Now we'll use safe_open and read_indata_namelist to read a VMEC input file.
```
# Now we read the contents of a VMEC input file.
from libstell import *
iunit = 27
iunit2 = 28
istat = 0
recl = 1
temp=libstell.safe_open(iunit,istat,'../BENCHMARKS/input.QAS','old','formatted',recl,'sequential','none')
test=libstell.read_indata_namelist(iunit,istat)
print(test['pmass_type'])
print(test['am'])
temp = libstell.pmass(0.2)
print(temp)
```
Now we show how to read and write namelists using read_indata_namelist and write_indata_namelist.
```
from libstell import *
iunit = 27
iunit2 = 28
istat = 0
recl = 1
print('1 ',type(iunit),type(istat))
temp=libstell.safe_open(iunit,istat,'../BENCHMARKS/input.QAS','old','formatted',recl,'sequential','none')
print('2 ',type(iunit),type(istat))
test=libstell.read_indata_namelist(iunit,istat)
print('3 ',type(iunit),type(istat))
libstell.safe_close(iunit)
iunit = 27
iunit2 = 28
istat = 0
recl = 1000
print('4 ',type(iunit),type(istat))
temp=libstell.safe_open(iunit2,istat,'../BENCHMARKS/input.new_file','unknown','formatted',recl,'sequential','none')
print('5 ',type(iunit2),type(istat))
istat = 0
temp2=libstell.write_indata_namelist(iunit2,istat)
print('6 ',type(iunit2),type(istat))
libstell.safe_close(iunit2)
```
In this section we switch to STELLOPT and show you how to read in and plot values from the stellopt.ext file.
```
from libstell import * #Import the LIBSTELL Library
from matplotlib.pyplot import * #Import matplotlib.pyplot for plotting
stel_data = libstell.read_stellopt('../BENCHMARKS/stellopt.STELLOPT_BENCH')
print(stel_data['BALLOON'][1,3,:])
for item in stel_data:
print(item,type(stel_data[item]))
fig = figure(figsize=(10,10))
ax1 = fig.add_subplot(1,1,1, adjustable='box', xlabel='Radial Index',ylabel='Ballooning stability')
ax1.plot(stel_data['BALLOON_k'].transpose(),stel_data['BALLOON_grate'].transpose())
show()
# MayaVI Testing
from numpy import pi, sin, cos, mgrid
dphi,dtheta = pi/250.0, pi/250.0
[phi,theta] = mgrid[0:pi+dphi*1.5:dphi,0:2*pi+dtheta*1.5:dtheta]
m0=4; m1=3; m2=2; m3=3; m4=6; m5=2; m6=6; m7=4;
r = sin(m0*phi)**m1 + cos(m2*phi)**m3+sin(m4*theta)**m5+cos(m6*theta)**m7
x=r*sin(phi)*cos(theta)
y=r*cos(phi)
z=r*sin(phi)*sin(theta)
from mayavi import mlab
```
[](https://github.com/awslabs/aws-data-wrangler)
# 11 - CSV Datasets
Wrangler has 3 different write modes to store CSV Datasets on Amazon S3.
- **append** (Default)
Only adds new files without deleting existing ones.
- **overwrite**
Deletes everything in the target directory and then adds new files.
- **overwrite_partitions** (Partition Upsert)
Only deletes the paths of the partitions that should be updated and then writes the new partition files. It's like a "partition upsert".
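The three modes differ only in what they delete before writing. A minimal sketch of their semantics, using a plain dict of partition prefix to file list (`write_dataset`, the `store` dict, and the partition keys here are illustrative assumptions, not Wrangler's API):

```python
def write_dataset(store, new_files, mode):
    """Simulate the three write modes on a dict of partition -> list of files."""
    if mode == "overwrite":
        store.clear()                      # delete everything in the target first
    elif mode == "overwrite_partitions":
        for partition in new_files:        # delete only the partitions being rewritten
            store.pop(partition, None)
    # "append" deletes nothing; all three modes then add the new files
    for partition, files in new_files.items():
        store.setdefault(partition, []).extend(files)
    return store

store = {"date=2020-01-01": ["a.csv"], "date=2020-01-02": ["b.csv"]}
write_dataset(store, {"date=2020-01-02": ["c.csv"]}, "overwrite_partitions")
print(store)  # {'date=2020-01-01': ['a.csv'], 'date=2020-01-02': ['c.csv']}
```

Note how `overwrite_partitions` leaves `date=2020-01-01` untouched while replacing only the partition that received new data, which is exactly the upsert behavior demonstrated with real S3 paths below.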
```
from datetime import date
import awswrangler as wr
import pandas as pd
```
## Enter your bucket name:
```
import getpass
bucket = getpass.getpass()
path = f"s3://{bucket}/dataset/"
```
## Checking/Creating Glue Catalog Databases
```
if "awswrangler_test" not in wr.catalog.databases().values:
wr.catalog.create_database("awswrangler_test")
```
## Creating the Dataset
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
## Appending
```
df = pd.DataFrame({
"id": [3],
"value": ["bar"],
"date": [date(2020, 1, 3)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="append",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
## Overwriting
```
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset"
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
## Creating a **Partitioned** Dataset
```
df = pd.DataFrame({
"id": [1, 2],
"value": ["foo", "boo"],
"date": [date(2020, 1, 1), date(2020, 1, 2)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite",
database="awswrangler_test",
table="csv_dataset",
partition_cols=["date"]
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
## Upserting partitions (overwrite_partitions)
```
df = pd.DataFrame({
"id": [2, 3],
"value": ["xoo", "bar"],
"date": [date(2020, 1, 2), date(2020, 1, 3)]
})
wr.s3.to_csv(
df=df,
path=path,
index=False,
dataset=True,
mode="overwrite_partitions",
database="awswrangler_test",
table="csv_dataset",
partition_cols=["date"]
)
wr.athena.read_sql_table(database="awswrangler_test", table="csv_dataset")
```
# 3.1.3. Sensor-to-sample distance
This code generates all the results presented in the subsubsection 3.1.3 Sensor-to-sample distance.
### License
This code is licensed under the [BSD 3-clause](http://choosealicense.com/licenses/bsd-3-clause/) license. See the file `LICENSE.md`
### Import the required dependencies
```
%matplotlib inline
import numpy as np
from IPython.display import Image
from fatiando.vis import mpl, myv
from fatiando.gravmag import prism, sphere
from fatiando.utils import contaminate
```
**Fatiando a Terra version:**
commit hash: [09cd37da986114a68c57c6a611271fc6cd22bde4](https://github.com/fatiando/fatiando/tree/09cd37da986114a68c57c6a611271fc6cd22bde4)
```
mpl.rcParams['font.size'] = 16
import functions as f
```
### Load data from txt files
These data are the output of the application to real data.
```
files = ['..\\data\\data0.txt',
'..\\data\\data1.txt',
'..\\data\\data2.txt',
'..\\data\\data3.txt']
nfiles = len(files)
data = []
for i in range(nfiles):
data.append(np.loadtxt(files[i]))
mag_true = np.loadtxt('..\\data\\estimate_real.txt')
mag_true
```
Here, the estimated magnetization obtained with real data is modified in order to simulate an observed data set. Only the estimated inclination and declination are modified.
```
mean_inc0 = np.mean(mag_true[1:4,1])
mean_inc1 = np.mean(mag_true[5:8,1])
mean_inc2 = np.mean(mag_true[9:12,1])
mean_inc3 = np.mean(mag_true[13:16,1])
mean_dec0 = np.mean(mag_true[1:4,2])
mean_dec1 = np.mean(mag_true[5:8,2])
mean_dec2 = np.mean(mag_true[9:12,2])
mean_dec3 = np.mean(mag_true[13:16,2])
mag_true[0:4,1] = mean_inc0
mag_true[4:8,1] = mean_inc1
mag_true[8:12,1] = mean_inc2
mag_true[12:16,1] = mean_inc3
mag_true[0:4,2] = mean_dec0
mag_true[4:8,2] = mean_dec1
mag_true[8:12,2] = mean_dec2
mag_true[12:16,2] = mean_dec3
mag_true
```
### Parameters of the sample
```
P = 16 # number of prisms forming the sample
M = 3*P # number of parameters to be estimated
Lx = 0.001 # in m
Ly = 0.003 # in m
Lz = Ly # in m
shape = (42,102) # (Ny, Nx)
xmin_sample = -0.5*Lx*P #in m
centers = [100.*(xmin_sample + 0.5*Lx + i*Lx) for i in range(P)] #in cm
```
### Synthetic Sample
```
sample = f.sample(Lx,Ly,Lz,P, m = mag_true[:,0], inc = mag_true[:,1], dec = mag_true[:,2])
```
### Synthetic data
```
# Simulate a Gaussian noise with null mean
stdev_data = 30000.0 # nT
# Simulate sensor-to-sample errors
# standard deviation of h per line
dh = (10.**-6)*(100.) # m
# constant shift per plane
h0 = (10.**-6)*(80.) # m
h1 = (10.**-6)*(-170.) # m
h2 = (10.**-6)*(-100.) # m
h3 = (10.**-6)*(270.) # m
h_error0 = np.resize(np.random.normal(loc=h0, scale=dh, size=shape[0]), (shape[1], shape[0])).T.ravel()
h_error1 = np.resize(np.random.normal(loc=h1, scale=dh, size=shape[0]), (shape[1], shape[0])).T.ravel()
h_error2 = np.resize(np.random.normal(loc=h2, scale=dh, size=shape[0]), (shape[1], shape[0])).T.ravel()
h_error3 = np.resize(np.random.normal(loc=h3, scale=dh, size=shape[0]), (shape[1], shape[0])).T.ravel()
B_obs = [contaminate(f.magnetic_data(data[0][:,0], data[0][:,1], data[0][:,2] + h_error0,
sample, alpha=0, eff_area = (300., 300.)), stddev = stdev_data),
contaminate(f.magnetic_data(data[1][:,0], data[1][:,1] + h_error1, data[1][:,2],
sample, alpha=1, eff_area = (300., 300.)), stddev = stdev_data),
contaminate(f.magnetic_data(data[2][:,0], data[2][:,1], data[2][:,2] + h_error2,
sample, alpha=2, eff_area = (300., 300.)), stddev = stdev_data),
contaminate(f.magnetic_data(data[3][:,0], data[3][:,1] + h_error3, data[3][:,2],
sample, alpha=3, eff_area = (300., 300.)), stddev = stdev_data)]
```
### Interpretation model
```
model = f.sample(Lx,Ly,Lz,P)
xmin = np.min([data[0][:,0], data[1][:,0], data[2][:,0], data[3][:,0]])
xmax = np.max([data[0][:,0], data[1][:,0], data[2][:,0], data[3][:,0]])
ymin = np.min([data[0][:,1], data[1][:,1], data[2][:,1], data[3][:,1]])
ymax = np.max([data[0][:,1], data[1][:,1], data[2][:,1], data[3][:,1]])
zmin = np.min([data[0][:,2], data[1][:,2], data[2][:,2], data[3][:,2]])
zmax = np.max([data[0][:,2], data[1][:,2], data[2][:,2], data[3][:,2]])
volume = [xmin, xmax, ymin, ymax, zmin, zmax]
```
### Inversion
```
A = []
for i in range(4):
A.append(f.sensitivity(P, data[i][:,0], data[i][:,1], data[i][:,2], model, alpha = i, eff_area = (300., 300.)))
R = np.zeros(M+1)
R[0] = 1.
R[3] = -1.
R = np.resize(R, (M-3,M))
u0 = [1.0e-10, 1.0e-10, 1.0e-10, 1.0e-10]
f0 = []
p_est = []
mag_r = []
H = np.dot(A[0].T, A[0])
if u0[0] != 0.:
f0.append(np.trace(H)/M)
H = H + u0[0]*f0[0]*np.dot(R.T, R)
h = np.dot(A[0].T, B_obs[0])
p_est.append(np.linalg.solve(H, h))
mag_r.append(f.parameters_sph(P,p_est[0]))
for i in range(1,4):
H = np.dot(np.vstack(A[:i+1]).T, np.vstack(A[:i+1]))
if u0[i] != 0.:
f0.append(np.trace(H)/M)
H = H + u0[i]*f0[i]*np.dot(R.T, R)
h = np.dot(np.vstack(A[:i+1]).T, np.hstack(B_obs[:i+1]))
p_est.append(np.linalg.solve(H, h))
mag_r.append(f.parameters_sph(P,p_est[i]))
mag_r = np.array(mag_r)
B_pre = []
for i in range(4):
B_pre.append(np.dot(A[i],p_est[3]))
r_norm = []
r_mean = []
r_std = []
for i in range(4):
aux1,aux2,aux3 = f.residual(0.001*B_obs[i],0.001*B_pre[i])
r_norm.append(aux1) # in uT
r_mean.append(aux2) # in uT
r_std.append(aux3) # in uT
```
### Plotting of the observed and predicted data
```
title_font = 22
bottom_font = 16
labels = ['(a)', '(b)', '(c)',
'(d)', '(e)', '(f)',
'(g)', '(h)', '(i)',
'(j)', '(k)', '(l)']
lables_place = (0.05, 0.90)
lables_size = 24
mpl.close('all')
mpl.figure(figsize=(20,24), tight_layout=True)
for i in range(0,4,2):
# in uT
ranges = 0.001*np.abs([B_obs[i].max(), B_obs[i].min(),
B_pre[i].max(), B_pre[i].min()]).max()
mpl.subplot(4,3,1 + 3*i)
#mpl.title(labels[3*i], fontsize=title_font)
nlevels = mpl.contourf(100.*data[i][:,1], 100.*data[i][:,0], 0.001*B_obs[i],
shape, 20, cmap=mpl.cm.RdBu_r,
vmin=-ranges, vmax=ranges)
#mpl.colorbar(pad=0.01, aspect=40, shrink=1.0).set_label('uT')
mpl.colorbar(pad=0.01, aspect=40, shrink=1.0)
mpl.xlabel('y (cm)', fontsize = title_font)
mpl.ylabel('x (cm)', fontsize = title_font)
mpl.annotate(labels[3*i], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(4,3,1 + 3*i + 1)
#mpl.title(labels[3*i + 1], fontsize=title_font)
nlevels = mpl.contourf(100.*data[i][:,1], 100.*data[i][:,0], 0.001*B_pre[i],
shape, 20, cmap=mpl.cm.RdBu_r,
vmin=-ranges, vmax=ranges)
#mpl.colorbar(pad=0.01, aspect=40, shrink=1.0).set_label('uT')
mpl.colorbar(pad=0.01, aspect=40, shrink=1.0)
mpl.xlabel('y (cm)', fontsize = title_font)
mpl.ylabel('x (cm)', fontsize = title_font)
mpl.annotate(labels[3*i + 1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(4,3,1 + 3*i + 2)
#mpl.title(labels[3*i + 2], fontsize=title_font)
mpl.xlabel('$\mu$ = %.3f uT | $\sigma$ = %.3f uT' % (r_mean[i], r_std[i]), fontsize = title_font)
nbins = int((np.max(r_norm[i]) - np.min(r_norm[i])))
mpl.hist(r_norm[i],bins=nbins,normed=True)
mpl.ylim(0.,0.5)
mpl.xlim(-10., 10.)
mpl.annotate(labels[3*i + 2], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
for i in range(1,4,2):
# in uT
ranges = 0.001*np.abs([B_obs[i].max(), B_obs[i].min(),
B_pre[i].max(), B_pre[i].min()]).max()
mpl.subplot(4,3,1 + 3*i)
#mpl.title(labels[3*i], fontsize=title_font)
nlevels = mpl.contourf(100.*data[i][:,2], 100.*data[i][:,0], 0.001*B_obs[i],
shape, 20, cmap=mpl.cm.RdBu_r,
vmin=-ranges, vmax=ranges)
#mpl.colorbar(pad=0.01, aspect=40, shrink=1.0).set_label('uT')
mpl.colorbar(pad=0.01, aspect=40, shrink=1.0)
mpl.xlabel('z (cm)', fontsize = title_font)
mpl.ylabel('x (cm)', fontsize = title_font)
mpl.annotate(labels[3*i], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(4,3,1 + 3*i + 1)
#mpl.title(labels[3*i + 1], fontsize=title_font)
nlevels = mpl.contourf(100.*data[i][:,2], 100.*data[i][:,0], 0.001*B_pre[i],
shape, 20, cmap=mpl.cm.RdBu_r,
vmin=-ranges, vmax=ranges)
#mpl.colorbar(pad=0.01, aspect=40, shrink=1.0).set_label('uT')
mpl.colorbar(pad=0.01, aspect=40, shrink=1.0)
mpl.xlabel('z (cm)', fontsize = title_font)
mpl.ylabel('x (cm)', fontsize = title_font)
mpl.annotate(labels[3*i+1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(4,3,1 + 3*i + 2)
#mpl.title(labels[3*i + 2], fontsize=title_font)
mpl.xlabel('$\mu$ = %.3f uT | $\sigma$ = %.3f uT' % (r_mean[i], r_std[i]), fontsize = title_font)
nbins = int((np.max(r_norm[i]) - np.min(r_norm[i])))
mpl.hist(r_norm[i],bins=nbins,normed=True)
mpl.ylim(0.,0.5)
mpl.xlim(-10., 10.)
mpl.annotate(labels[3*i+2], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.savefig('..\\manuscript\\Figs\\Fig9_HQ.eps')
mpl.savefig('..\\manuscript\\Figs\\Fig9_LQ.png')
mpl.show()
# results obtained by using all the planes
line_sty = ['b--', 'b--', 'b--', 'k--']
mark_sty = ['b.', 'b.', 'b.', 'ko']
label_font = 20
legend_font = 16
labels = ['(a)', '(b)', '(c)']
lables_place = (0.03, 0.90)
lables_size = 24
mpl.show()
mpl.close('all')
mpl.figure(figsize=(8, 16), tight_layout=True)
mpl.subplot(3,1,1)
mpl.plot(centers, mag_true[:,0], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,0], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r):
mpl.plot(centers, mr[:,0], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,0], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Intensity ($A/m$)', fontsize=label_font)
mpl.ylim(0., np.max(mag_r[:,:,0]) + 50.)
mpl.annotate(labels[0], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,2)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,1], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,1], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r):
mpl.plot(centers, mr[:,1], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,1], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Inclination ($^{\circ}$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,1]) - 20., np.max(mag_r[:,:,1]) + 20.)
mpl.annotate(labels[1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,3)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,2], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,2], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r):
mpl.plot(centers, mr[:,2], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,2], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Declination ($^{\circ}$)', fontsize=label_font)
#mpl.xlabel('Sample length ($cm$)', fontsize=label_font)
mpl.xlabel('$x$ ($cm$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,2]) - 20., np.max(mag_r[:,:,2]) + 20.)
#mpl.savefig('estimate_int_inc_dec_simul_validation.eps')
#mpl.legend(loc='best', numpoints=1)
mpl.annotate(labels[2], xy = (0.90, 0.90),
xycoords = 'axes fraction', fontsize=lables_size)
mpl.savefig('..\\manuscript\\Figs\\Fig10_HQ.eps')
mpl.savefig('..\\manuscript\\Figs\\Fig10_LQ.png')
mpl.show()
# results obtained by using only the plane 0
n = 1
line_sty = ['b--', 'b--', 'b--', 'k--']
mark_sty = ['bo', 'bo', 'bo', 'ko']
label_font = 20
legend_font = 16
labels = ['(a)', '(b)', '(c)']
lables_place = (0.03, 0.90)
lables_size = 24
mpl.show()
mpl.close('all')
mpl.figure(figsize=(8, 16), tight_layout=True)
mpl.subplot(3,1,1)
mpl.plot(centers, mag_true[:,0], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,0], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,0], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,0], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Intensity ($A/m$)', fontsize=label_font)
mpl.ylim(0., np.max(mag_r[:,:,0]) + 50.)
mpl.annotate(labels[0], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,2)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,1], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,1], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,1], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,1], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Inclination ($^{\circ}$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,1]) - 20., np.max(mag_r[:,:,1]) + 20.)
mpl.annotate(labels[1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,3)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,2], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,2], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,2], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,2], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Declination ($^{\circ}$)', fontsize=label_font)
#mpl.xlabel('Sample length ($cm$)', fontsize=label_font)
mpl.xlabel('$x$ ($cm$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,2]) - 20., np.max(mag_r[:,:,2]) + 20.)
#mpl.savefig('estimate_int_inc_dec_simul_validation.eps')
#mpl.legend(loc='best', numpoints=1)
mpl.annotate(labels[2], xy = (0.90, 0.90),
xycoords = 'axes fraction', fontsize=lables_size)
mpl.show()
# results obtained by using the planes 0 and 1
n = 2
line_sty = ['b--', 'b--', 'b--', 'k--']
mark_sty = ['bo', 'bo', 'bo', 'ko']
label_font = 20
legend_font = 16
labels = ['(a)', '(b)', '(c)']
lables_place = (0.03, 0.90)
lables_size = 24
mpl.show()
mpl.close('all')
mpl.figure(figsize=(8, 16), tight_layout=True)
mpl.subplot(3,1,1)
mpl.plot(centers, mag_true[:,0], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,0], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,0], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,0], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Intensity ($A/m$)', fontsize=label_font)
mpl.ylim(0., np.max(mag_r[:,:,0]) + 50.)
mpl.annotate(labels[0], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,2)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,1], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,1], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,1], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,1], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Inclination ($^{\circ}$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,1]) - 20., np.max(mag_r[:,:,1]) + 20.)
mpl.annotate(labels[1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,3)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,2], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,2], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,2], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,2], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Declination ($^{\circ}$)', fontsize=label_font)
#mpl.xlabel('Sample length ($cm$)', fontsize=label_font)
mpl.xlabel('$x$ ($cm$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,2]) - 20., np.max(mag_r[:,:,2]) + 20.)
#mpl.savefig('estimate_int_inc_dec_simul_validation.eps')
#mpl.legend(loc='best', numpoints=1)
mpl.annotate(labels[2], xy = (0.90, 0.90),
xycoords = 'axes fraction', fontsize=lables_size)
mpl.show()
# results obtained by using the planes 0, 1 and 2
n = 3
line_sty = ['b--', 'b--', 'b--', 'k--']
mark_sty = ['bo', 'bo', 'bo', 'ko']
label_font = 20
legend_font = 16
labels = ['(a)', '(b)', '(c)']
lables_place = (0.03, 0.90)
lables_size = 24
mpl.show()
mpl.close('all')
mpl.figure(figsize=(8, 16), tight_layout=True)
mpl.subplot(3,1,1)
mpl.plot(centers, mag_true[:,0], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,0], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,0], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,0], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Intensity ($A/m$)', fontsize=label_font)
mpl.ylim(0., np.max(mag_r[:,:,0]) + 50.)
mpl.annotate(labels[0], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,2)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,1], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,1], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,1], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,1], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Inclination ($^{\circ}$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,1]) - 20., np.max(mag_r[:,:,1]) + 20.)
mpl.annotate(labels[1], xy = lables_place,
xycoords = 'axes fraction', fontsize=lables_size)
mpl.subplot(3,1,3)
mpl.plot([np.min(centers), np.max(centers)], [0.0, 0.0], 'k--')
mpl.plot(centers, mag_true[:,2], 'r-', linewidth=2)
mpl.plot(centers, mag_true[:,2], 'ro', markersize=10, label='True')
for i, mr in enumerate(mag_r[:n]):
mpl.plot(centers, mr[:,2], line_sty[i], alpha=0.4+i*0.2, linewidth=2)
mpl.plot(centers, mr[:,2], mark_sty[i], alpha=0.4+i*0.2, markersize=7,label='Estimated')
mpl.ylabel('Declination ($^{\circ}$)', fontsize=label_font)
#mpl.xlabel('Sample length ($cm$)', fontsize=label_font)
mpl.xlabel('$x$ ($cm$)', fontsize=label_font)
mpl.ylim(np.min(mag_r[:,:,2]) - 20., np.max(mag_r[:,:,2]) + 20.)
#mpl.savefig('estimate_int_inc_dec_simul_validation.eps')
#mpl.legend(loc='best', numpoints=1)
mpl.annotate(labels[2], xy = (0.90, 0.90),
xycoords = 'axes fraction', fontsize=lables_size)
mpl.show()
```
# Sankey Diagram
```
# Simple Sankey diagram
import plotly.graph_objects as go
import plotly.express as px  # used in the later examples for qualitative color sequences
fig = go.Figure(
go.Sankey(
node = {
"label": ["India", "USA", "China", "Pakistan", "Bangladesh", "Mexico"],
},
link = {
"source": [0, 1, 2, 3, 4, 0, 2, 5],
"target": [1, 2, 3, 4, 5, 3, 5, 3],
"value": [300, 400, 200, 450, 700, 200,150, 200]
}
)
)
fig.show()
# Sankey diagram with styled nodes
fig = go.Figure(
go.Sankey(
node = dict(
thickness = 40, # Changing thickness of nodes
color = "lightgreen", # Changing color of the node
line = dict(color = "red", width = 0.5), # Changing line color
label = ["India", "USA", "China", "Pakistan", "Bangladesh", "Mexico"],
),
link = {
"source": [0, 1, 2, 3, 4, 0, 2, 5],
"target": [1, 2, 3, 4, 5, 3, 5, 3],
"value": [300, 400, 200, 450, 550, 200,150, 200]
}
)
)
fig.show()
# Sankey diagram with custom node and link colors
fig = go.Figure(
go.Sankey(
node = {
"label": ["Married: NO", "Married: Yes",
"Pet: No", "Pet: Yes",
"Happy: Yes", "Happy: No"],
"color" : px.colors.qualitative.Set3 # Node color
},
link = dict(
source = [0, 0, 1, 1, 2, 2, 3, 5],
target = [2, 3, 2, 3, 5, 4, 4, 3],
value = [200, 300, 400, 600, 150, 350, 700, 250],  # one value per source/target pair (an eighth value is added so the lengths match)
color = px.colors.qualitative.Set2 # Color of links
)
)
)
fig.show()
```
# END
# Binary Classification
This is a basic example in which we learn to ground unary predicate $A$ that is defined in the space of $[0,1]^2$.
We define the predicate $A$ to apply to points that are close to the middle point $c=(.5,.5)$. In order to get training data, we randomly sample data from the domain. We split the sampled data into two separate sets based on their Euclidean distance to $c$. We then define two facts for the predicate $A$: all points the predicate should apply to are provided as positive examples, and all points the predicate does not apply to as negative examples.
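The split rule used in the code below amounts to a disk of radius $0.3$ around $c$, since it compares the squared distance against $0.09 = 0.3^2$. A minimal sketch of the labeling (the three sample points here are illustrative, not the notebook's random data):

```python
import numpy as np

c = np.array([0.5, 0.5])
pts = np.array([[0.5, 0.5], [0.5, 0.79], [0.5, 0.81]])
# squared Euclidean distance to c, compared against 0.09 = 0.3**2
labels = np.sum(np.square(pts - c), axis=1) < 0.09
print(labels.tolist())  # [True, True, False]
```

Only the first two points lie within radius 0.3 of the center, so only they become positive examples for $A$.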
```
import logging; logging.basicConfig(level=logging.INFO)
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import logictensornetworks as ltn
plt.rcParams['font.size'] = 12
plt.rcParams['axes.linewidth'] = 1
```
Sample random data from $[0,1]^2$. Our groundtruth positive training data for $A$ is close to the center (.5,.5). All other data is considered as negative examples.
```
batch_size=64
nr_samples = 100
nr_samples_train = 50
data = np.random.uniform([0,0],[1,1],(nr_samples,2))
labels = np.sum(np.square(data-[.5,.5]),axis=1)<.09
# 50 examples for training; 50 examples for testing
ds_train = tf.data.Dataset\
.from_tensor_slices((data[:nr_samples_train],labels[:nr_samples_train]))\
.batch(batch_size)
ds_test = tf.data.Dataset\
.from_tensor_slices((data[nr_samples_train:],labels[nr_samples_train:]))\
.batch(batch_size)
plt.figure(figsize=(4,4))
plt.scatter(data[labels][:,0],data[labels][:,1],label='A')
plt.scatter(data[np.logical_not(labels)][:,0],data[np.logical_not(labels)][:,1],label='~A')
plt.title("Groundtruth")
plt.legend()
plt.show()
```
Define the predicate $A$. $A$ has arity 1 (single argument). The dimension of the argument is 2 (since the domain is $[0,1]^2$).
```
A = ltn.Predicate.MLP([2],hidden_layer_sizes=(16,16))
```
Import some operators to write the axioms.
```
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=2),semantics="exists")
```
Now we add some facts to the knowledgebase. We express that for all points in $\mathrm{data\_A}$, $A$ should be true. For all points in $\mathrm{data\_not\_A}$, $A$ is not true.
```
formula_aggregator = ltn.Wrapper_Formula_Aggregator(ltn.fuzzy_ops.Aggreg_pMeanError(p=2))
@tf.function
def axioms(data, labels):
x_A = ltn.Variable("x_A",data[labels])
x_not_A = ltn.Variable("x_not_A",data[tf.logical_not(labels)])
axioms = [
Forall(x_A, A(x_A)),
Forall(x_not_A, Not(A(x_not_A)))
]
sat_level = formula_aggregator(axioms).tensor
return sat_level
```
Initialize all layers and the static graph.
```
for _data, _labels in ds_test:
print("Initial sat level %.5f"%axioms(_data, _labels))
break
```
Train on the knowledgebase.
```
mean_metrics = tf.keras.metrics.Mean()
trainable_variables = A.trainable_variables
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
for epoch in range(2000):
for _data, _labels in ds_train:
with tf.GradientTape() as tape:
loss = 1. - axioms(_data, _labels)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
if epoch%100 == 0:
mean_metrics.reset_states()
for _data, _labels in ds_test:
mean_metrics(axioms(_data, _labels))
print("Epoch %d: Sat Level %.3f"%(epoch, mean_metrics.result() ))
mean_metrics.reset_states()
for _data, _labels in ds_test:
mean_metrics(axioms(_data, _labels))
print("Training finished at Epoch %d with Sat Level %.3f"%(epoch, mean_metrics.result() ))
```
The following queries the knowledgebase on training data and test data. The visualizations show the extent of generalization.
```
fig = plt.figure(figsize=(9, 11))
plt.subplots_adjust(wspace=0.2,hspace=0.3)
ax = plt.subplot2grid((3,8),(0,2),colspan=4)
ax.set_title("groundtruth")
ax.scatter(data[labels][:,0],data[labels][:,1],label='A')
ax.scatter(data[np.logical_not(labels)][:,0],data[np.logical_not(labels)][:,1],label='~A')
ax.legend()
# Training data
x = ltn.Variable("x",data[:nr_samples_train])
fig.add_subplot(3, 2, 3)
result=A(x)
plt.title("A(x) - training data")
plt.scatter(data[:nr_samples_train,0],data[:nr_samples_train,1],c=result.tensor.numpy().squeeze())
plt.colorbar()
fig.add_subplot(3, 2, 4)
result=Not(A(x))
plt.title("~A(x) - training data")
plt.scatter(data[:nr_samples_train,0],data[:nr_samples_train,1],c=result.tensor.numpy().squeeze())
plt.colorbar()
# Test data
x = ltn.Variable("x",data[nr_samples_train:])
fig.add_subplot(3, 2, 5)
result=A(x)
plt.title("A(x) - test data")
plt.scatter(data[nr_samples_train:,0],data[nr_samples_train:,1],c=result.tensor.numpy().squeeze())
plt.colorbar()
fig.add_subplot(3, 2, 6)
result=Not(A(x))
plt.title("~A(x) - test data")
plt.scatter(data[nr_samples_train:,0],data[nr_samples_train:,1],c=result.tensor.numpy().squeeze())
plt.colorbar()
plt.savefig("ex_binary_testing.pdf")
plt.show()
```
## Datasets
```
# Visualization
%pylab inline
from IPython.display import display, Math, Latex
import matplotlib.pyplot as plt
# handling data
import csv
import json
import pandas as pd
# Math
from random import random
import scipy.stats as ss
import numpy as np
import itertools
from collections import Counter
```
We will use the following dataset to define the universe of possible values throughout the notebook. It contains the setting's universe values and the people the authors of the paper want to protect. To simplify things without loss of generality, the authors use just 4 people. The 4 people belong to a high school. The high school would like to release a dataset to be queried; however, this new dataset will not include people who have been on probation, in this case, Terry.
```
# We define the actual dataset (conforming the universe)
dict_school = {'name': ['Chris', 'Kelly', 'Pat', 'Terry'], 'school_year': [1, 2, 3, 4], 'absence_days': [1, 2, 3, 10]}
df_school = pd.DataFrame(dict_school)
df_school
```
The attacker's ultimate goal is to find out which student was placed on probation: Terry. However, the attacker will only be able to query this other dataset (without Terry, because the released dataset only contains students who were not on probation):
```
# We define the the dataset that we will release
df_school_release = df_school.drop([3], axis=0)
df_school_release
```
To accomplish this, the attacker would perform analytics queries, such as the mean, on df_school_release. With the results, the adversary will try to improve his/her guess about which dataset is the true one, in order to single out the student who was placed on probation (Terry).
The adversary model adopted in the paper is the worst-case scenario, and it is the one adopted in this notebook as well: an attacker with infinite computation power. Because DP is supposed to provide privacy against adversaries with arbitrary background knowledge, it is okay to assume that the adversary has full access to all the records (i.e. knows the whole universe, df_school). However, there is a dataset made from the universe without one individual (df_school_release), and the attacker does not know who is and who is not in this dataset; this is the only thing he does not know. He does know that this dataset contains people with a certain quality (the students who have not been on probation). Starting from the universe, the attacker will try to reconstruct the dataset he does not know by querying it, without having direct access to it.
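This worst-case reconstruction can be sketched concretely. Assuming an exact, noise-free mean query on absence_days (the dataset below mirrors df_school; the code is illustrative, not from the paper), the attacker simply enumerates every 3-person subset of the known universe and keeps the ones consistent with the released answer:

```python
from itertools import combinations

universe = {'Chris': 1, 'Kelly': 2, 'Pat': 3, 'Terry': 10}  # absence_days
released_mean = 2.0  # answer of an exact mean query on the release dataset

# keep every 3-student candidate dataset whose mean matches the released answer
matching = [set(c) for c in combinations(universe, 3)
            if sum(universe[n] for n in c) / 3 == released_mean]
excluded = set(universe) - matching[0]
# exactly one candidate remains, so the attacker learns Terry was excluded
```

Only {Chris, Kelly, Pat} has mean 2.0, so a single un-noised query already singles out the student on probation; this is the failure mode that adding differentially private noise is meant to prevent.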
## Functions
### Auxiliary function
```
# With this function, we can more easily call the mean, median... functions
# REF: https://stackoverflow.com/questions/34794634/how-to-use-a-variable-as-function-name-in-python
# It is not clean to have the percentile variable as an input to each function, but it is less verbose than having
# a separate function for each percentile. We could, however, limit the percentiles offered to 25 and 75.
class Query_class:
"""
    A class used to represent a query. You instantiate an object that will perform a particular query on an array
    Attributes
    ----------
    fDic - (dict) containing the possible queries the class can be transformed into
    fActive - (function) it contains the function we created the class to have
    Methods
    -------
    run_query - it will run the query for which we instantiated the class
    The other methods implement the different possible queries
"""
def __init__(self, fCase):
# mapping: string --> variable = function name
fDic = {'mean':self._mean,
'median':self._median,
'count': self._count,
'sum': self._sum,
'std': self._std,
'var': self._var,
'percentile': self._percentile}
self.fActive = fDic[fCase]
# Calculate the mean of an array
def _mean(self, array, percentile):
return np.mean(array)
# Calculate the median of an array
def _median(self, array, percentile):
return np.median(array)
# Calculate the number of elements in the array
def _count(self, array, percentile):
return len(array)
# Calculate the sum of an array
def _sum(self, array, percentile):
return np.sum(array)
# Calculate the std of an array
def _std(self, array, percentile):
return np.std(array)
# Calculate the variance of an array
def _var(self, array, percentile):
return np.var(array)
def _percentile(self, array, percentile):
return np.percentile(array, percentile)
# It will run the given query
def run_query(self, array, percentile=50):
return self.fActive(array, percentile)
# Set of checks on the input values
def verify_sensitivity_inputs(universe_cardinality, universe_subset_cardinality, hamming_distance):
"""
INPUT:
    universe_cardinality - (int) cardinality of the universe dataset
    universe_subset_cardinality - (int) cardinality of the universe subset (the release dataset)
hamming_distance - (int) hamming distance between neighboring datasets
OUTPUT:
ValueError - (str) error message due to the value of the inputs
Description:
    It performs multiple checks to verify the validity of the inputs for the calculation of sensitivity
"""
    # Check on universe cardinality (1).
# The cardinality of the subset of the universe cannot be larger than the universe
if universe_cardinality < universe_subset_cardinality:
raise ValueError("Your universe dataset cannot be smaller than your release dataset.")
# Checks on the validity of the chosen hamming_distance (3)
if hamming_distance >= (universe_subset_cardinality):
raise ValueError("Hamming distance chosen is larger than the cardinality of the release dataset.")
if (hamming_distance > np.abs(universe_cardinality - universe_subset_cardinality)):
raise ValueError("Hamming distance chosen is larger than the cardinality difference between the \
universe and the release dataset, i.e., \
there are not enough values in your universe to create such a large neighboring dataset (Re-sampling records).")
    # The hamming distance cannot be 0; otherwise the neighboring dataset is equal to the original dataset
if hamming_distance == 0:
raise ValueError("Hamming distance cannot be 0.")
# Used by unbounded_empirical_global_L1_sensitivity
def L1_norm_max(release_dataset_query_value, neighbor_datasets, query, percentile):
"""
INPUT:
release_dataset_query_value - (float) query value of a particular possible release dataset
neighbor_datasets - (list) contains the possible neighbors of the specific release dataset
query - (object) instance of class Query_class
percentile - (int) percentile value for the percentile query
OUTPUT:
    L1_norm_maximum - (float) maximum L1 norm calculated from the differences between the query results
    of the neighbor datasets and the specific release dataset
    Description:
    It calculates the maximum L1 norm between the query results of the neighbor datasets and the specific release dataset
"""
neighbor_dataset_query_values = []
for neighbor_dataset in neighbor_datasets:
neighbor_dataset_query_value = query.run_query(neighbor_dataset, percentile)
neighbor_dataset_query_values.append(neighbor_dataset_query_value)
# We select the maximum and minimum values of the queries, as the intermediate values will not
# yield a larger L1 norm (ultimately, we are interested in the maximum L1 norm)
neighbor_dataset_query_value_min, neighbor_dataset_query_value_max = \
min(neighbor_dataset_query_values), max(neighbor_dataset_query_values)
# We calculate the L1 norm for these two values and pick the maximum
L1_norm_i = np.abs(release_dataset_query_value - neighbor_dataset_query_value_min)
L1_norm_ii = np.abs(release_dataset_query_value - neighbor_dataset_query_value_max)
L1_norm_maximum = max(L1_norm_i, L1_norm_ii)
return L1_norm_maximum
def calculate_unbounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, unbounded_sensitivities):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
hamming_distance - (int) hamming distance between neighboring datasets
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
OUTPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
Description:
It calculates the sensitivities for a set of queries given a universe and a release dataset.
"""
# Calculate the sensitivity of different queries for the unbounded DP
query_type = 'mean'
mean_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'median'
median_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'count'
count_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'sum'
sum_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'std'
std_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'var'
var_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'percentile'
percentile = 25
percentile_25_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 50
percentile_50_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 75
percentile_75_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 90
percentile_90_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
print('Unbounded sensitivities for mean', mean_unbounded_global_sensitivities)
print('Unbounded sensitivities for median', median_unbounded_global_sensitivities)
print('Unbounded sensitivities for count', count_unbounded_global_sensitivities)
print('Unbounded sensitivities for sum', sum_unbounded_global_sensitivities)
print('Unbounded sensitivities for std', std_unbounded_global_sensitivities)
print('Unbounded sensitivities for var', var_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 25', percentile_25_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 50', percentile_50_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 75', percentile_75_unbounded_global_sensitivities)
print('Unbounded sensitivities for percentile 90', percentile_90_unbounded_global_sensitivities)
unbounded_sensitivities = build_sensitivity_dict(unbounded_sensitivities, hamming_distance,\
mean_unbounded_global_sensitivities, median_unbounded_global_sensitivities, count_unbounded_global_sensitivities, \
sum_unbounded_global_sensitivities, std_unbounded_global_sensitivities, var_unbounded_global_sensitivities, \
percentile_25_unbounded_global_sensitivities, percentile_50_unbounded_global_sensitivities, \
percentile_75_unbounded_global_sensitivities, percentile_90_unbounded_global_sensitivities)
return unbounded_sensitivities
def calculate_bounded_sensitivities(universe, universe_subset_cardinality, columns, hamming_distance, bounded_sensitivities):
"""
INPUT:
universe - (df or dict) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
hamming_distance - (int) hamming distance between neighboring datasets
    bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
OUTPUT
bounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
Description:
It calculates the sensitivities for a set of queries given a universe and a release dataset.
"""
    # Calculate the sensitivity of different queries for the bounded DP
query_type = 'mean'
mean_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'median'
median_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'count'
count_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'sum'
sum_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'std'
std_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'var'
var_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance)
query_type = 'percentile'
percentile = 25
percentile_25_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 50
percentile_50_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 75
percentile_75_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
percentile = 90
percentile_90_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile)
print('Bounded sensitivities for mean', mean_bounded_global_sensitivities)
print('Bounded sensitivities for median', median_bounded_global_sensitivities)
print('Bounded sensitivities for count', count_bounded_global_sensitivities)
print('Bounded sensitivities for sum', sum_bounded_global_sensitivities)
print('Bounded sensitivities for std', std_bounded_global_sensitivities)
print('Bounded sensitivities for var', var_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 25', percentile_25_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 50', percentile_50_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 75', percentile_75_bounded_global_sensitivities)
print('Bounded sensitivities for percentile 90', percentile_90_bounded_global_sensitivities)
bounded_sensitivities = build_sensitivity_dict(bounded_sensitivities, hamming_distance,\
mean_bounded_global_sensitivities, median_bounded_global_sensitivities, count_bounded_global_sensitivities, \
sum_bounded_global_sensitivities, std_bounded_global_sensitivities, var_bounded_global_sensitivities, \
percentile_25_bounded_global_sensitivities, percentile_50_bounded_global_sensitivities, \
percentile_75_bounded_global_sensitivities, percentile_90_bounded_global_sensitivities)
return bounded_sensitivities
# We save the values in a dictionary
def build_sensitivity_dict(unbounded_sensitivities, hamming_distance, mean_sensitivity, median_sensitivity, count_sensitivity, _sum_sensitivity, _std_sensitivity, _var_sensitivity, percentile_25_sensitivity, percentile_50_sensitivity, percentile_75_sensitivity, percentile_90_sensitivity):
"""
INPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
hamming_distance - (int) hamming distance of the neighboring datasets
mean_sensitivity - (float) sensitivity of the mean query
    median_sensitivity - (float) sensitivity of the median query
    count_sensitivity - (float) sensitivity of the count query
    _sum_sensitivity - (float) sensitivity of the sum query
    _std_sensitivity - (float) sensitivity of the std query
    _var_sensitivity - (float) sensitivity of the var query
    percentile_25_sensitivity - (float) sensitivity of the percentile 25 query
    percentile_50_sensitivity - (float) sensitivity of the percentile 50 query
    percentile_75_sensitivity - (float) sensitivity of the percentile 75 query
percentile_90_sensitivity - (float) sensitivity of the percentile 90 query
OUTPUT
unbounded_sensitivities - (dict) stores sensitivities per hamming distance and query type
"""
unbounded_sensitivities[hamming_distance] = {}
unbounded_sensitivities[hamming_distance]['mean'] = mean_sensitivity
unbounded_sensitivities[hamming_distance]['median'] = median_sensitivity
unbounded_sensitivities[hamming_distance]['count'] = count_sensitivity
unbounded_sensitivities[hamming_distance]['sum'] = _sum_sensitivity
unbounded_sensitivities[hamming_distance]['std'] = _std_sensitivity
unbounded_sensitivities[hamming_distance]['var'] = _var_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_25'] = percentile_25_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_50'] = percentile_50_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_75'] = percentile_75_sensitivity
unbounded_sensitivities[hamming_distance]['percentile_90'] = percentile_90_sensitivity
return unbounded_sensitivities
```
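The core of Query_class is a string-to-function dispatch table. Here is a minimal, self-contained sketch of the same pattern (an illustration, not the class itself):

```python
import numpy as np

# Map a query name to the function implementing it, as Query_class does with fDic
queries = {'mean': np.mean, 'median': np.median, 'sum': np.sum, 'count': len}

def run_query(name, array):
    # Look up the function once and apply it to the array
    return queries[name](array)

print(run_query('mean', [1, 2, 3]))   # 2.0
print(run_query('count', [1, 2, 3]))  # 3
```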
### Main Functions
##### Equation in 4.1 after its first paragraph - Definition of sensitivity
```
%%latex
\begin{align}
\ell_{1, \mbox{sensitivity}}: \Delta f=\max_{\substack{
{x, y \in \mathbb{N}^{(\mathcal{X})}} \\
\|x-y\|_{1} = h
}} \|f(x)-f(y)\|_{1}
\end{align}
```
```
def unbounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
unbounded_global_sensitivity - (float) the unbounded global sensitivity of the input universe
Description:
    It calculates the global sensitivity of an array based on the knowledge of the entire universe of
the dataset and query_type.
"""
    # We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
# We will store the sensitivity of each column of the dataset containing universe in a dictionary
unbounded_global_sensitivity_per_colum = {}
for column in columns:
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance)
# 1) RELEASE DATASET
# We calculate all the possible release datasets formed by the combination of values sampled from the universe
release_datasets = itertools.combinations(universe[column], universe_subset_cardinality)
release_datasets = list(release_datasets)
# 2) |NEIGHBORING DATASET| < |RELEASE DATASET| //// cardinalities
# The neighboring datasets are subsets of a smaller dimension of the possible release datasets (smaller by the hamming_distance)
# The neighboring release datasets are used to calculate the max sensitivity, stemming from the DP definition
neighbor_with_less_records_datasets = []
for release_dataset in release_datasets:
            # This yields the smaller possible neighboring datasets
neighbor_with_less_records_dataset = itertools.combinations(release_dataset, \
universe_subset_cardinality - hamming_distance)
neighbor_with_less_records_dataset = list(neighbor_with_less_records_dataset)
neighbor_with_less_records_datasets.append(neighbor_with_less_records_dataset)
# 3) |NEIGHBORING DATASET| > |RELEASE DATASET| //// cardinalities
# similar process but adding records
neighbor_with_more_records_datasets = []
for release_dataset in release_datasets:
            # We obtain combinations of values from the universe; these will be appended to the release datasets.
# The size of each combination is equal to the hamming distance, as the neighboring dataset will be that much larger
# However, in case your universe is a dataset and not just a range of values, then the neighboring
# dataset could contain the same record twice, which is NOT desirable (1 person appearing twice)
# Therefore, the values must be sampled from the symmetric difference between the release dataset and the universe dataset
# REF: https://www.geeksforgeeks.org/python-difference-of-two-lists-including-duplicates/
symmetric_difference = list((Counter(universe[column]) - Counter(release_dataset)).elements())
neighbor_possible_value_combinations = itertools.combinations(symmetric_difference, hamming_distance)
neighbor_possible_value_combinations = list(neighbor_possible_value_combinations)
temp_neighbor_with_more_records_datasets = []
for neighbor_possible_value_combination in neighbor_possible_value_combinations:
# We create neighboring datasets by concatenating the neighbor_possible_value_combination with the release dataset
neighbor_with_more_records_dataset = list(release_dataset + neighbor_possible_value_combination)
temp_neighbor_with_more_records_datasets.append(neighbor_with_more_records_dataset)
# We append in this manner to cluster the neighboring datasets with their respective release dataset
neighbor_with_more_records_datasets.append(temp_neighbor_with_more_records_datasets)
        # 4) For each possible release dataset, there is a set of neighboring datasets
# We will iterate through each possible release dataset and calculate the L1 norm with
        # each of its respective neighboring datasets
L1_norms = []
for i, release_dataset in enumerate(release_datasets):
release_dataset_query_value = query.run_query(release_dataset, percentile)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_less_records_datasets[i], query, percentile)
L1_norms.append(L1_norm)
L1_norm = L1_norm_max(release_dataset_query_value, neighbor_with_more_records_datasets[i], query, percentile)
L1_norms.append(L1_norm)
# We pick the maximum out of all the maximum L1_norms calculated from each possible release dataset
unbounded_global_sensitivity_per_colum[column] = max(L1_norms)
return unbounded_global_sensitivity_per_colum
```
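As a hand-check of unbounded_empirical_global_L1_sensitivity, the sketch below enumerates the same release and neighbor datasets for the mean query over the absence_days universe [1, 2, 3, 10] (values taken from df_school), with a release size of 3 and hamming distance 1. It is a simplified re-derivation, not a call to the function above:

```python
import itertools
import numpy as np

universe = [1, 2, 3, 10]  # absence_days column of df_school
n, h = 3, 1               # release-dataset size and hamming distance

L1_norms = []
for release in itertools.combinations(universe, n):
    base = np.mean(release)
    # Neighbors with h records removed
    for neighbor in itertools.combinations(release, n - h):
        L1_norms.append(abs(base - np.mean(neighbor)))
    # Neighbors with h records added, sampled from the rest of the universe
    # (values are unique here; the function above uses Counter to handle duplicates)
    rest = [v for v in universe if v not in release]
    for extra in itertools.combinations(rest, h):
        L1_norms.append(abs(base - np.mean(release + extra)))

print(max(L1_norms))  # 17/6 = 2.8333..., attained by |mean(1,2,10) - mean(1,2)|
```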
##### You can find this definition after equation (5) of 5.1
```
%%latex
\begin{align}
\Delta v=\max_{\substack{
{1 \leq i, j \leq n} \\ \\
i \neq j
}} \|f(w_i)-f(w_j)\|_{1}
\end{align}
```
```
def bounded_empirical_global_L1_sensitivity(universe, universe_subset_cardinality, columns, query_type, hamming_distance, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset_cardinality - (int) contains the length of the subset chosen for the release dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
hamming_distance - (int) hamming distance between neighboring datasets
percentile - (int) percentile value for the percentile query
OUTPUT:
bounded_global_sensitivity - (float) the bounded global sensitivity of the input universe
Description:
    It calculates the global sensitivity of an array based on the knowledge of the entire universe of
the dataset and query_type.
"""
    # We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
# We will store the sensitivity of each column of the dataset containing universe in a dictionary
bounded_global_sensitivity_per_column = {}
for column in columns:
# Check if the values for the hamming distance and universe sizes comply with the basic constraints
verify_sensitivity_inputs(len(universe[column]), universe_subset_cardinality, hamming_distance)
# We calculate all the possible release datasets
        # First we obtain the combinations within the release dataset. The size of these combinations is not the original size
# but the original size minus the hamming_distance
release_i_datasets = itertools.combinations(universe[column], universe_subset_cardinality - hamming_distance)
release_i_datasets = list(release_i_datasets)
# it will contain sets of neighboring datasets. The L1 norm will be calculated between these sets. The maximum will be chosen
        # The datasets from different groups are not necessarily neighbors, thus we separate them into groups
neighbor_datasets = []
for release_i_dataset in release_i_datasets:
# second we calculate the combinations of the items in the universe that are not in the release dataset
# the size of a combination is equal to the hamming distance
symmetric_difference = list((Counter(universe[column]) - Counter(release_i_dataset)).elements())
release_ii_datasets = itertools.combinations(symmetric_difference, hamming_distance)
release_ii_datasets = list(release_ii_datasets)
# We create neighboring datasets by concatenating i with ii
temp_neighbors = []
for release_ii_dataset in release_ii_datasets:
temp_neighbor = list(release_i_dataset + release_ii_dataset)
temp_neighbors.append(temp_neighbor)
neighbor_datasets.append(temp_neighbors)
# We calculate the L1_norm for the different combinations with the aim to find the max
# We can loop in this manner because we are obtaining the absolute values
L1_norms = []
for m in range(0, len(neighbor_datasets)):
for i in range(0, len(neighbor_datasets[m])-1):
for j in range(i+1, len(neighbor_datasets[m])):
L1_norm = np.abs(query.run_query(neighbor_datasets[m][i], percentile) - query.run_query(neighbor_datasets[m][j], percentile))
L1_norms.append(L1_norm)
bounded_global_sensitivity_per_column[column] = max(L1_norms)
return bounded_global_sensitivity_per_column
def prior_belief(universe, universe_subset, columns):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset - (df) contains the values of the dataset to be released
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
OUTPUT:
prior_per_column - (dict) maps each column of all the possible release datasets to the prior knowledge of an adversary
Description:
It calculates the prior knowledge of an adversary assuming uniform distribution
"""
# Initialize the dictionary to store all the posteriors with the column names as keys
prior_per_column = {}
for column in columns:
# We calculate all the possible release datasets
release_datasets = itertools.combinations(universe[column], len(universe_subset[column]))
release_datasets = list(release_datasets)
number_possible_release_datasets = len(release_datasets)
# We assume a uniform prior
prior_per_column[column] = [1 / number_possible_release_datasets] * number_possible_release_datasets
return prior_per_column
```
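Under prior_belief's uniform assumption, the prior for the running example reduces to one over the number of candidate datasets, C(4, 3) = 4. A sketch with the sizes assumed from the running example:

```python
from math import comb

n_universe, n_release = 4, 3
n_candidates = comb(n_universe, n_release)  # 4 possible release datasets
prior = [1 / n_candidates] * n_candidates
print(prior)  # [0.25, 0.25, 0.25, 0.25]
```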
#### Definition 3 (also in equation 2 of 5.1, in another form, right before the triangle inequality)
```
def posterior_belief(universe, universe_subset, columns, query_type, query_result, sensitivity, epsilon):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset - (df) contains the values of the dataset to be released
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
query_result - (float) the result of the query the attacker received
sensitivity - (dict) it maps the columns to their sensitivities, they can be based on bounded or unbounded DP
epsilon - (float) it is the parameter that tunes noise, based on DP
OUTPUT:
posterior_per_column - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary
Description:
It calculates the posterior knowledge of an adversary
"""
# Initialize the dictionary to store all the posteriors with the column names as keys
posterior_per_column = {}
    # We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
for column in columns:
# We calculate all the possible release datasets
release_datasets = itertools.combinations(universe[column], len(universe_subset[column]))
release_datasets = list(release_datasets)
        # According to the definition of DP, the scale parameter of the Laplacian distribution:
scale_parameter = sensitivity[column]/epsilon
posterior_probability = []
for release_dataset in release_datasets:
probability = (1 / (2 * scale_parameter)) * np.exp(- np.abs(query_result - np.mean(release_dataset)) / scale_parameter)
posterior_probability.append(probability)
posterior_per_column[column] = posterior_probability / np.sum(posterior_probability)
return posterior_per_column
```
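The update in posterior_belief is a plain Bayes step with a Laplace likelihood: score each candidate dataset by the Laplace density of the observed noisy result centered on that candidate's true query value, then normalize (the uniform prior cancels). A self-contained sketch with assumed numbers (epsilon = 1, the mean-query sensitivity 17/6 from earlier, and an assumed observation of 2.3):

```python
import numpy as np

candidates = [(1, 2, 3), (1, 2, 10), (1, 3, 10), (2, 3, 10)]  # 3-record subsets
query_result = 2.3          # noisy mean the attacker observed (assumed)
b = (17 / 6) / 1.0          # scale = sensitivity / epsilon

# Laplace density of the observation under each candidate's true mean
lik = np.array([np.exp(-abs(query_result - np.mean(c)) / b) / (2 * b)
                for c in candidates])
posterior = lik / lik.sum()
print(posterior)  # largest mass on (1, 2, 3), whose mean is closest to 2.3
```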
#### Result from 4.1: probability ratio of 3.2933
```
scale_parameter = (17/12)
```
Using the cumulative distribution function (REF: https://en.wikipedia.org/wiki/Laplace_distribution).
Note that we want P(x > value) and not P(x < value); the latter is what the reference gives.
```
# Probability that the output is greater than 1.1677 (> 0 = mu)
0.5*(np.exp(- 1.1677 / scale_parameter))
# Probability that the output is greater than -0.832 (< 0 = mu)
1 - 0.5*(np.exp(- 0.832 / scale_parameter))
0.7221/0.2192  # probability ratio, approximately 3.29
0.5 + 0.5 * np.sign(1.1677)*(1 - np.exp(- 1.1677 / scale_parameter))  # CDF check: P(x < 1.1677)
```
##### Definition 4
```
def confidence(priors, posteriors, columns):
"""
INPUT:
    priors - (dict) maps each column of all the possible release datasets to the prior knowledge of an adversary
posteriors - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
OUTPUT:
confidence_per_column - (dict) maps each column to the confidence of the adversary
Description:
    It calculates the confidence of the adversary: the maximum gain from prior to posterior after seeing the query result
"""
confidence_per_column = {}
for column in columns:
confidence_per_column[column] = np.max(posteriors[column] - priors[column])
return confidence_per_column
```
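Definition 4's confidence is simply the largest prior-to-posterior gain over the candidate datasets. A numeric sketch with assumed belief values:

```python
import numpy as np

prior = np.array([0.25, 0.25, 0.25, 0.25])
posterior = np.array([0.55, 0.20, 0.15, 0.10])  # assumed posterior after one query
confidence = np.max(posterior - prior)
print(confidence)  # the adversary gained 0.3 of belief on one candidate
```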
##### Definition 5
```
def risk_disclosure(posteriors, columns):
"""
INPUT:
posteriors - (dict) maps each column of all the possible release datasets to the posterior knowledge of an adversary
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
OUTPUT:
posterior_max_per_column - (dict) maps each column to the risk of disclosure
Description:
It calculates the risk of disclosure per column, which is the max posterior
"""
posterior_max_per_column = {}
for column in columns:
posterior_max_per_column[column] = np.max(posteriors[column])
return posterior_max_per_column
```
#### This definition is used for equations (4-5) of 5.1
```
def upper_bound_posterior(universe, columns, bounded_sensitivity, unbounded_sensitivity, epsilon):
"""
INPUT:
universe - (df) contains all possible values of the dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
bounded_sensitivity - (dict) it maps the columns to their bounded sensitivities
unbounded_sensitivity - (dict) it maps the columns to their unbounded sensitivities
epsilon - (float) it is the parameter that tunes noise, based on DP
OUTPUT:
    upper_bound_posterior_per_column - (dict) dictionary that maps the value that bounds the posterior (also the risk) per column
Description:
It calculates the upper bound of the posterior
"""
upper_bound_posterior_per_column = {}
for column in columns:
upper_bound_posterior = 1 / (1 + (universe.shape[0] - 1) * \
np.exp(-epsilon * bounded_sensitivity[column] / unbounded_sensitivity[column]))
upper_bound_posterior_per_column[column] = upper_bound_posterior
return upper_bound_posterior_per_column
```
##### inequality 3 of 5.1
```
def tighter_upper_bound_posterior(universe, universe_subset, columns, query_type, sensitivity, epsilon, percentile=50):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset - (df) contains the values of the dataset to be released
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
sensitivity - (dict) it maps the columns to their sensitivities, they can be based on bounded or unbounded DP
epsilon - (float) it is the parameter that tunes noise, based on DP
OUTPUT:
    tighter_upper_bound_posterior_per_column - (dict) maps each column of all the possible release datasets to
the tighter upper bound of the posterior knowledge of an adversary
Description:
It calculates a tighter bound of the posterior knowledge of an adversary
"""
# Initialize the dictionary to store all the posteriors with the column names as keys
tighter_upper_bound_posterior_per_column = {}
    # We initialize the type of query for which we would like to calculate the sensitivity
query = Query_class(query_type)
for column in columns:
# We calculate all the possible release datasets
release_datasets = itertools.combinations(universe[column], len(universe_subset[column]))
release_datasets = list(release_datasets)
        # We calculate the L1_norm for the different combinations of the query results from different data releases
        # We have to complete all loops because we need to calculate different values of the posterior
# Then we select the max after calculating all posteriors
posterior_probability = []
        for i in range(len(release_datasets)):  # every candidate must be considered when taking the max
L1_norms = []
for j in range(0, len(release_datasets)):
if release_datasets[i] == release_datasets[j]:
continue
else:
L1_norms.append(np.abs(query.run_query(release_datasets[i], percentile) - query.run_query(release_datasets[j], percentile)))
denominator_posterior = 1
for L1_norm in L1_norms:
denominator_posterior += np.exp(-epsilon * L1_norm / sensitivity[column])
beta = 1 / denominator_posterior
posterior_probability.append(beta)
tighter_upper_bound_posterior_per_column[column] = max(posterior_probability)
return tighter_upper_bound_posterior_per_column
```
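Inside the loop above, each candidate release's posterior reduces to beta = 1 / (1 + sum_j exp(-epsilon * L1_j / sensitivity)). A tiny sanity check of that computation, with made-up numbers (three alternative releases at L1 distances 1, 2 and 3 from the candidate, sensitivity 1, epsilon 1):

```python
import math

# Hypothetical inputs, not values from the paper.
epsilon, sensitivity = 1.0, 1.0
L1_norms = [1.0, 2.0, 3.0]

# Same accumulation as in tighter_upper_bound_posterior:
denominator = 1 + sum(math.exp(-epsilon * d / sensitivity) for d in L1_norms)
beta = 1 / denominator
print(round(beta, 4))  # ≈ 0.6439
```

As expected, beta lies strictly between 0 and 1, and larger L1 distances (more distinguishable releases) push it closer to 1.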
#### 5.2 - inequality 7
```
def upper_bound_epsilon(universe, columns, bounded_sensitivity, unbounded_sensitivity, risk):
"""
INPUT:
universe - (df) contains all possible values of the dataset
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
bounded_sensitivity - (dict) it maps the columns to their bounded sensitivities
unbounded_sensitivity - (dict) it maps the columns to their unbounded sensitivities
risk - (float) it is a parameter that sets the privacy requirement. It is the probability that the attacker
succeeds in his/her attack
OUTPUT:
epsilon_upper_bound_per_column - (dict) dictionary that maps the upper bound of epsilon per column
Description:
It calculates the upper bound of epsilon given the risk one is willing to take
"""
epsilon_upper_bound_per_column = {}
for column in columns:
epsilon_upper_bound = unbounded_sensitivity[column] / bounded_sensitivity[column] * \
np.log(((universe.shape[0] - 1) * risk) / (1 - risk))
epsilon_upper_bound_per_column[column] = epsilon_upper_bound
return epsilon_upper_bound_per_column
```
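As a quick numeric sanity check of inequality 7 (with hypothetical sensitivities, not the paper's values): for a universe of 6 records, an unbounded sensitivity of 2.8333, a bounded sensitivity of 1.0 and a risk of 1/3, the bound works out as follows.

```python
import math

# Hypothetical inputs for illustration only.
n, risk = 6, 1 / 3
unbounded_s, bounded_s = 2.8333, 1.0

# Same formula as upper_bound_epsilon:
# epsilon <= (unbounded_s / bounded_s) * ln((n - 1) * risk / (1 - risk))
epsilon_bound = (unbounded_s / bounded_s) * math.log((n - 1) * risk / (1 - risk))
print(round(epsilon_bound, 4))  # ln(2.5) * 2.8333 ≈ 2.5961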
#### Binary search - 5.2
```
def binary_search_epsilon(universe, universe_subset, columns, query_type, bounded_global_sensitivities, unbounded_global_sensitivities, privacy_requirement, posterior_bound_type='tight'):
"""
INPUT:
universe - (df) contains all possible values of the dataset
universe_subset - (df) contains the values of the dataset to be released
columns - (array) contains the names of the columns we would like to obtain the sensitivity from
query_type - (str) contain the category declaring the type of query to be later on executed
bounded_global_sensitivities - (dict) it maps the columns to their bounded sensitivities
unbounded_global_sensitivities - (dict) it maps the columns to their unbounded sensitivities
privacy_requirement - (float) the highest admissible probability of disclosure
posterior_bound_type - (str) the type of bound we use to calculate the new values to decide upon which epsilon to take next
OUTPUT:
optimal_epsilon - (dict) dictionary that maps the optimal epsilons per column
Description:
It performs binary search to find the optimal epsilon
"""
optimal_epsilon = {}
for column in columns:
max_risk = 0.9999
epsilon_upper_bound = upper_bound_epsilon(universe, columns, bounded_global_sensitivities, unbounded_global_sensitivities, max_risk)
epsilon_f = epsilon_upper_bound
epsilon_s = 0
for i in range(0,25):
epsilon = (epsilon_f[column] + epsilon_s)/2
# Check which type of bound
if posterior_bound_type != 'upper':
posterior_upper_bound_new = tighter_upper_bound_posterior(universe, universe_subset, columns, query_type, unbounded_global_sensitivities, epsilon)
else:
posterior_upper_bound_new = upper_bound_posterior(universe, columns, bounded_global_sensitivities, unbounded_global_sensitivities, epsilon)
if posterior_upper_bound_new[column] < privacy_requirement:
epsilon_s = epsilon
elif posterior_upper_bound_new[column] > privacy_requirement:
epsilon_f[column] = epsilon
optimal_epsilon[column] = epsilon
return optimal_epsilon
```
## MAIN
### True results for different query types - just a warm up
```
# Finding true values of different queries
mean_year = df_school['school_year'].mean()
mean_absence_days = df_school['absence_days'].mean()
median_year = df_school['school_year'].median()
median_absence_days = df_school['absence_days'].median()
count_year = df_school['school_year'].count()
count_absence_days = df_school['absence_days'].count()
sum_year = df_school['school_year'].sum()
sum_absence_days = df_school['absence_days'].sum()
std_year = df_school['school_year'].std()
std_absence_days = df_school['absence_days'].std()
var_year = df_school['school_year'].var()
var_absence_days = df_school['absence_days'].var()
percentile_25_year = np.percentile(df_school['school_year'], 25)
percentile_25_absence_days = np.percentile(df_school['absence_days'], 25)
percentile_50_year = np.percentile(df_school['school_year'], 50)
percentile_50_absence_days = np.percentile(df_school['absence_days'], 50)
percentile_75_year = np.percentile(df_school['school_year'], 75)
percentile_75_absence_days = np.percentile(df_school['absence_days'], 75)
print('School year: mean =', mean_year, 'Absence days: mean =', mean_absence_days)
print('School year: median =', median_year, 'Absence days: median =', median_absence_days)
print('School year: count =', count_year, 'Absence days: count =', count_absence_days)
print('School year: sum =', sum_year, 'Absence days: sum =', sum_absence_days)
print('School year: std =', std_year, 'Absence days: std =', std_absence_days)
print('School year: var =', var_year, 'Absence days: var =', var_absence_days)
print('School year: 25th percentile =', percentile_25_year, 'Absence days: 25th percentile =', percentile_25_absence_days)
print('School year: 50th percentile =', percentile_50_year, 'Absence days: 50th percentile =', percentile_50_absence_days)
print('School year: 75th percentile =', percentile_75_year, 'Absence days: 75th percentile =', percentile_75_absence_days)
```
## All the cross-checks of the paper are done with the mean, as the paper uses the mean as its use case
### Unbounded global sensitivity for different query types - 4.1 - 2.8333 of point 4.1
```
# Calculate the sensitivity of different queries for the unbounded DP
columns = ['school_year', 'absence_days']
hamming_distance = 1
unbounded_sensitivities = {}
unbounded_sensitivities = calculate_unbounded_sensitivities(df_school, df_school_release.shape[0], columns, hamming_distance, unbounded_sensitivities)
```
Notice the obvious: the sensitivity for the median is the same as the sensitivity for the 50th percentile.
#### 4.3 & 4.4
```
# Calculating prior knowledge. We assume a uniform prior
priors = prior_belief(df_school, df_school_release, columns)
priors
```
###### Posterior 0.618 from the paper replicated - there is a typo in the expression below Table 3; the numerator should be "0.3062"
```
# Let us calculate the posteriors
query_result = 2.20131
epsilon = 2
query_type = 'mean'
mean_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance)
posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, mean_unbounded_global_sensitivities, epsilon)
posteriors
# Let us calculate the risk of disclosure
risk_disclosure(posteriors, columns)
# Let us calculate the confidence of the attacker
confidence_adversary = confidence(posteriors, priors, columns)
confidence_adversary
```
### Bounded global sensitivity for different query types - prep for 5.1 (right above equation 8, delta_v = 3 for absence days and 1 for school year, for the mean query)
```
# Calculate the sensitivity of different queries for the bounded DP
columns = ['school_year', 'absence_days']
hamming_distance = 1
bounded_sensitivities = {}
calculate_bounded_sensitivities(df_school, df_school_release.shape[0], columns, hamming_distance, bounded_sensitivities);
```
##### These calculations are not in the paper, but it is interesting to see what values we would get if we assume a bounded DP definition from the beginning.
```
# Calculating prior knowledge. We assume a uniform prior
priors = prior_belief(df_school, df_school_release, columns)
priors
# Let us calculate the posteriors
query_result = 2.20131
epsilon = 2
query_type = 'mean'
mean_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance)
posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, mean_bounded_global_sensitivities, epsilon)
posteriors
# Let us calculate the risk of disclosure
risk_disclosure(posteriors, columns)
# Let us calculate the confidence of the attacker
confidence_adversary = confidence(posteriors, priors, columns)
confidence_adversary
```
### Calculate upper bounds for posterior - 5.1
```
# Experiment with an epsilon of 2
epsilon = 2
posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound))
# Epsilon of 0 should provide the highest privacy, but also zero utility; the adversary has not updated his/her prior
# and has learned nothing new. But if the individual querying is not malicious, then the utility of the query is also 0
epsilon = 0
posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound))
```
### Calculate upper bounds for epsilon given risk willing to take - 5.2 (9) - 0.3829
```
# Let us calculate the upper bound of epsilon with a risk of 0.33 (there is a chance of 1/3 of letting
# the adversary know the true value)
risk = 1/3
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk)
epsilon_upper_bound
```
### Calculate a tighter risk bound - with 5.1 equation 3 (result 12) - 0.3292
```
epsilon = 0.5
query_type = 'mean'
posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_tighter_upper_bound))
```
### Plotting Fig 2
If you are wondering why we are using these bounds and not plotting the real value of the posterior:
This is because, in order to plot the real value of the posterior, you would already need a query result. For someone to output a query result, he/she had to have already decided on a value of epsilon, and that is exactly what we are trying to choose; hence the name of the paper.
The purpose of finding these upper and tighter bounds of the posterior is to find an optimal value for epsilon: by trying out many values of epsilon, without knowing which query result you will get, you can plot the posterior upper and tighter bounds and decide which epsilon to pick based on the risk you are willing to take.
###### Side note: In the paper they refer to the acceptable risk with the Greek letters delta and rho. Delta is never placed in an equation, which might be somewhat confusing if you are reading the paper. They use 1/3 as this value.
First we get the values for the bounds for plotting:
```
precision = 0.01
limit_x = 5
epsilons = np.linspace(0, limit_x, num=int(limit_x/precision))
# Setting parameters
columns = ['school_year', 'absence_days']
query_type = 'mean'
# Initialize dicts with corresponding keys: https://stackoverflow.com/questions/11509721/how-do-i-initialize-a-dictionary-of-empty-lists-in-python
posterior_upper_bound = {k: [] for k in columns}
posterior_tighter_upper_bound = {k: [] for k in columns}
# Obtaining the values for the bounds
for epsilon in epsilons:
temp_posterior_upper_bound = upper_bound_posterior(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, epsilon)
temp_posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon)
for column in columns:
posterior_upper_bound[column].append(temp_posterior_upper_bound[column])
posterior_tighter_upper_bound[column].append(temp_posterior_tighter_upper_bound[column])
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound], loc='lower right')
ax = plt.gca().add_artist(legend)
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.ylim(0.2,1)
plt.xlim(0,5)
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
```
Let us zoom in and check some values for risk and epsilon:
```
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Plot the risk
privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement")
# Plot the epsilon upper bound
y_axis_points = np.linspace(0,1,2)
plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--')
plt.plot(np.full(shape=len(y_axis_points), fill_value=0.5), y_axis_points, 'r--')
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right')
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
# Zoom
# plt.ylim(0, 0.4)
# plt.xlim(0, 0.6)
# Additional info
print('Epsilon upper bound for risk {} in universe {} = {}'.format(round(risk, 2), column, epsilon_upper_bound[column]))
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
```
More specifically: (Looking at plot 1), left) The authors want us to notice that even though we have an epsilon bound of 0.3379 (left red dotted vertical line), we can find a value of epsilon (e.g., 0.5, right red dotted line) that still fulfills the privacy requirement (<1/3). (But note that an epsilon of 0.5 on the absence days would make the risk go above the set threshold of 1/3)
Epsilon cannot be isolated on one side of the inequality in the tighter-bound formula due to arithmetic constraints, so, unlike with the upper bound curve, you cannot solve for epsilon directly on the tighter bound curve. Therefore, the authors propose a binary search starting at the maximum of the domain; in this case that is an epsilon of 5, which would yield approximately a 100% probability of the adversary being successful. If you have already visualized the curve, you can choose the start and end points of the binary search with higher precision.
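A minimal sketch of that binary-search idea, using a hypothetical toy posterior function in place of the tighter bound (in the notebook, the real curve comes from tighter_upper_bound_posterior):

```python
import math

def binary_search_epsilon_sketch(posterior_fn, privacy_requirement,
                                 epsilon_hi, iterations=25):
    # posterior_fn must be monotonically increasing in epsilon.
    lo, hi = 0.0, epsilon_hi
    epsilon = lo
    for _ in range(iterations):
        epsilon = (lo + hi) / 2
        if posterior_fn(epsilon) < privacy_requirement:
            lo = epsilon  # still below the requirement: try a larger epsilon
        else:
            hi = epsilon  # requirement violated: shrink epsilon
    return epsilon

# Toy posterior with a closed-form crossing point: solving
# 1 / (1 + 3 * exp(-eps)) = 1/3 gives eps = ln(3/2) ≈ 0.405.
toy_posterior = lambda eps: 1 / (1 + 3 * math.exp(-eps))
print(round(binary_search_epsilon_sketch(toy_posterior, 1 / 3, 5.0), 3))  # → 0.405
```

After 25 halvings of the interval [0, 5], the returned epsilon is within about 1.5e-7 of the true crossing point, which is far more precision than the bound itself warrants.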
### Binary Search - 5.2
We are going to perform binary search with the upper bound (not with the tight one) to show that the binary search converges to the expected values:
```
privacy_requirement = 1/3
query_type = 'mean'
posterior_bound_type = 'upper'
optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type)
optimal_epsilons
```
These are the sub-optimal values (as they are calculated with the upper bound) for epsilon to comply with a privacy requirement (disclosure probability) of 1/3
We thus show that our binary search works. See below the exact calculations with the upper bound (they are equal).
```
risk = 1/3
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk)
epsilon_upper_bound
privacy_requirement = 1/3
query_type = 'mean'
posterior_bound_type = 'tight'
columns = ['school_year', 'absence_days']
optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type)
print('Optimal epsilons:')
optimal_epsilons
tight_upper = {}
for column in columns:
epsilon = optimal_epsilons[column]
tight_upper[column] = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, mean_unbounded_global_sensitivities, epsilon)
print('Posterior with optimal epsilons. Very close to the threshold of 1/3. So OK.')
print('You can also see, of course, how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other')
tight_upper
```
This is computationally taxing due to how the algorithm is written. We could cache part of the maximum-posterior computations so we do not need to run them again in every iteration. However, the purpose of this notebook is to get a deeper understanding of the intricacies of the paper. Nonetheless, for the dataset sizes used in these demonstrations, the algorithm runs smoothly.
Let us plot the output to verify that indeed these optimal epsilons correspond to the tighter upper bound curves.
```
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, mean_bounded_global_sensitivities, mean_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Plot the risk
privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement")
# Plot the epsilon upper bound
y_axis_points = np.linspace(0,1,2)
plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--')
plt.plot(np.full(shape=len(y_axis_points), fill_value=optimal_epsilons[column]), y_axis_points, 'b--')
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower left')
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
# Zoom
plt.ylim(0, 0.4)
plt.xlim(0, 0.6)
# Additional info
print('Plot:', index, column)
print('Epsilon upper bound for risk {} in universe {} = {} in dash red'.format(round(risk, 2), column, epsilon_upper_bound[column]))
print('Epsilon tight upper bound for risk {} in universe {} = {} in dash blue'.format(round(risk, 2), column, optimal_epsilons[column]))
print('\n')
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
```
##### The optimal epsilons (in dash blue) fit pretty well: they maximize utility while preserving privacy.
### MEDIAN - the last part of the paper (6), exemplifies this process with the median
```
query_type = 'median'
median_unbounded_global_sensitivities = unbounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance)
median_bounded_global_sensitivities = bounded_empirical_global_L1_sensitivity(df_school, df_school_release.shape[0], columns, query_type, hamming_distance)
print('unbounded sensitivity', median_unbounded_global_sensitivities)
print('bounded sensitivity', median_bounded_global_sensitivities)
# Calculating prior knowledge. We assume a uniform prior
priors = prior_belief(df_school, df_school_release, columns)
priors
# Let us calculate the posteriors
query_result = 2.20131
epsilon = 2
query_type = 'median'
posteriors = posterior_belief(df_school, df_school_release, columns, query_type, query_result, median_unbounded_global_sensitivities, epsilon)
posteriors
# Let us calculate the risk of disclosure
risk_disclosure(posteriors, columns)
# Let us calculate the confidence of the attacker
confidence_adversary = confidence(posteriors, priors, columns)
confidence_adversary
```
##### Calculate the upper bounds of the posterior
```
epsilon = 2
posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound))
epsilon = 0
posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_upper_bound))
```
##### Calculate upper bounds for epsilon given risk willing to take - 6 result 18 = 1.6219
```
# Let us calculate the upper bound of epsilon with a risk of 0.33 (there is a chance of 1/3 of letting
# the adversary know the true value)
risk = 1/3
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk)
epsilon_upper_bound
```
##### Calculate a tighter risk bound with a given epsilon
```
epsilon = 0.5
query_type = 'median'
posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon)
print('For epsilon {} the posterior is {}'.format(epsilon, posterior_tighter_upper_bound))
```
##### Plotting
```
precision = 0.01
limit_x = 5
epsilons = np.linspace(0, limit_x, num=int(limit_x/precision))
# Setting parameters
columns = ['school_year', 'absence_days']
query_type = 'median'
# Initialize dicts with corresponding keys: https://stackoverflow.com/questions/11509721/how-do-i-initialize-a-dictionary-of-empty-lists-in-python
posterior_upper_bound = {k: [] for k in columns}
posterior_tighter_upper_bound = {k: [] for k in columns}
# Obtaining the values for the bounds
for epsilon in epsilons:
temp_posterior_upper_bound = upper_bound_posterior(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, epsilon)
temp_posterior_tighter_upper_bound = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon)
for column in columns:
posterior_upper_bound[column].append(temp_posterior_upper_bound[column])
posterior_tighter_upper_bound[column].append(temp_posterior_tighter_upper_bound[column])
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound], loc='lower right')
ax = plt.gca().add_artist(legend)
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.ylim(0.2,1)
plt.xlim(0,5)
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Plot the risk
privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement")
# Plot the epsilon upper bound
y_axis_points = np.linspace(0,1,2)
plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--')
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right')
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
# Zoom
# plt.ylim(0, 0.4)
# plt.xlim(0, 0.6)
# Additional info
print('Epsilon upper bound for risk {} in universe {} = {}'.format(round(risk, 2), column, epsilon_upper_bound[column]))
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
```
### Binary Search - 5.2
We are going to perform binary search with the upper bound (not with the tight one) to show that the binary search converges to the expected values:
```
privacy_requirement = 1/3
query_type = 'median'
posterior_bound_type = 'upper'
optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type)
optimal_epsilons
```
These are the sub-optimal values (as they are calculated with the upper bound) for epsilon to comply with a privacy requirement (disclosure probability) of 1/3
We thus show that our binary search works. See below the exact calculations with the upper bound (they are equal).
```
risk = 1/3
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk)
epsilon_upper_bound
```
##### Result 6 after equation 21 - 2.773
```
privacy_requirement = 1/3
query_type = 'median'
posterior_bound_type = 'tight'
columns = ['school_year', 'absence_days']
optimal_epsilons = binary_search_epsilon(df_school, df_school_release, columns, query_type, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, privacy_requirement, posterior_bound_type)
print('Optimal epsilons:')
optimal_epsilons
query_type = 'median'
tight_upper = {}
for column in columns:
epsilon = optimal_epsilons[column]
tight_upper[column] = tighter_upper_bound_posterior(df_school, df_school_release, columns, query_type, median_unbounded_global_sensitivities, epsilon)
print('Posterior with optimal epsilons. Very close to the threshold of 1/3. So OK.')
print('You can also see how the optimal epsilon for one attribute does not yield a tight upper bound of 1/3 on the other')
tight_upper
```
This is computationally taxing due to how the algorithm is written. We could cache part of the maximum-posterior computations so we do not need to run them again in every iteration. However, the purpose of this notebook is to get a deeper understanding of the intricacies of the paper. Nonetheless, for the dataset sizes used in these demonstrations, the algorithm runs smoothly.
Let us plot the output to verify that indeed these optimal epsilons correspond to the tighter upper bound curves.
```
plt.figure(figsize=(15, 7))
risk = 1/3
# Calculate upper bound
epsilon_upper_bound = upper_bound_epsilon(df_school, columns, median_bounded_global_sensitivities, median_unbounded_global_sensitivities, risk)
for index, column in enumerate(columns):
# Start the plot
plot_index = int(str(1) + str(len(columns)) + str(index+1))
plt.subplot(plot_index)
# plot the upper bounds
upper_bound, = plt.plot(epsilons, posterior_upper_bound[column], 'r', label="Risk upper bound")
tighter_bound, = plt.plot(epsilons, posterior_tighter_upper_bound[column], 'b', label="Risk tighter bound")
# Plot the risk
privacy_requirement, = plt.plot(epsilons, np.full(shape=len(epsilons), fill_value=risk), color='black', label="Privacy requirement")
# Plot the epsilon upper bound
y_axis_points = np.linspace(0,1,2)
plt.plot(np.full(shape=len(y_axis_points), fill_value=epsilon_upper_bound[column]), y_axis_points, 'r--')
plt.plot(np.full(shape=len(y_axis_points), fill_value=optimal_epsilons[column]), y_axis_points, 'b--')
# Legends
legend = plt.legend(handles=[upper_bound, tighter_bound, privacy_requirement], loc='lower right')
# axis labels and titles
plt.xlabel('Epsilon')
plt.ylabel('Risk disclosure probability')
plt.title('{}) Domain {} = {}'.format(index+1, column, df_school[column].values))
# Additional info
print('Plot:', index, column)
print('Epsilon upper bound for risk {} in universe {} = {} in dash red'.format(round(risk, 2), column, epsilon_upper_bound[column]))
print('Epsilon tight upper bound for risk {} in universe {} = {} in dash blue'.format(round(risk, 2), column, optimal_epsilons[column]))
print('\n')
plt.suptitle('Upper bounds of the risk disclosure (posterior probability) by varying domains')
plt.show()
```
The optimal epsilons (in dash blue) fit pretty well: they maximize utility while preserving privacy.
# cuML Preprocessing
Users of cuML are certainly familiar with its ability to run machine learning models on GPUs and the significant training and inference speedup that can entail, but the models themselves are only part of the story. In this notebook, we will demonstrate how cuML allows you to develop an entire machine learning _pipeline_ in order to preprocess and prepare your data without _ever_ leaving the GPU.
We will use the [BNP Paribas Cardif Claims Management dataset](https://www.kaggle.com/c/bnp-paribas-cardif-claims-management) to showcase a few of the many methods that cuML offers for GPU-accelerated feature engineering. This dataset offers an interesting challenge because:
1. It is somewhat messy, including missing data of various kinds.
2. It includes both quantitative data (represented as floating point values) and categorical data (represented as both integers and strings).
3. It is anonymized, so we cannot use _a priori_ domain-specific knowledge to guide our approach.
Our goal here is not necessarily to achieve the best possible model performance but to showcase the cuML features that you could use to improve model performance on your own. For a deeper dive into how to maximize performance on this dataset, check out the solutions and associated discussion for [the top Kaggle entries](https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/leaderboard).
## 1. Data Ingest
Our first step is to acquire the data and read it into a data frame for subsequent processing. This process should be quite familiar for Pandas users, though we will be making use of cuDF, the equivalent GPU-accelerated module.
```
# To acquire the dataset, we will make use of the Kaggle CLI tool.
# If you do not have this tool set up, you can download the data directly
# from the Kaggle competition page: https://www.kaggle.com/c/bnp-paribas-cardif-claims-management/data
# Note that you may still need to visit this page even if you have the CLI
# tool in order to agree to the terms of data usage.
!kaggle competitions download -c bnp-paribas-cardif-claims-management
!unzip -o bnp-paribas-cardif-claims-management.zip
import cudf
data_cudf = cudf.read_csv("./train.csv.zip")
data_pd = data_cudf.to_pandas()
data_cudf.head()
```
Looking at the first few rows of these data, we can already understand some of the problems we might expect in working with the full dataset. We have a "target" column representing a binary classification target that we would like to predict with our model. As input to that model, we have over a hundred features, some represented as floats, some as ints, and some as strings. We can also see that quite a bit of the data is missing, as denoted by the numerous "\<NA\>" entries.
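Before committing to a cleaning strategy, it helps to quantify the missingness. The sketch below uses a tiny synthetic pandas frame as a stand-in for the real data; the same calls apply unchanged to `data_cudf`, since cuDF mirrors the pandas API.

```python
import numpy as np
import pandas as pd

# Tiny stand-in for the real dataframe; the same calls work on data_cudf.
df = pd.DataFrame({
    "v1": [1.0, np.nan, 3.0],   # quantitative (float)
    "v2": [7, 2, 7],            # categorical encoded as integers
    "v3": ["C", None, "AF"],    # categorical encoded as strings
})

null_counts = df.isnull().sum()  # missing entries per column
dtypes = df.dtypes.astype(str)   # which columns are floats/ints/objects
```

Sorting `null_counts` in descending order quickly surfaces the worst-affected features.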
## 2. Evaluation Procedure
As a general principle, it is helpful to clearly define an evaluation procedure before jumping into model building and training. In this case, we are interested in finding a robust preprocessing protocol to apply to unseen data, so we will perform [k-fold cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)#k-fold_cross-validation) and average performance across folds.
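The index bookkeeping behind k-fold splitting can be sketched in plain NumPy; this is a simplified stand-in for sklearn's `KFold(shuffle=False)`, not its actual implementation.

```python
import numpy as np

def kfold_indices(n_samples, n_splits):
    """Yield (train_idx, test_idx) pairs for contiguous, unshuffled folds."""
    indices = np.arange(n_samples)
    # The first n_samples % n_splits folds get one extra sample.
    fold_sizes = np.full(n_splits, n_samples // n_splits, dtype=int)
    fold_sizes[: n_samples % n_splits] += 1
    current = 0
    for size in fold_sizes:
        test_idx = indices[current:current + size]
        train_idx = np.concatenate([indices[:current], indices[current + size:]])
        yield train_idx, test_idx
        current += size

folds = list(kfold_indices(10, 5))
# 5 folds; each test split holds 2 samples, each train split the other 8.
```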
Because the RAPIDS packages have maintained such close compatibility with their non-GPU-accelerated counterparts, sklearn's k-fold cross-validation implementation can be directly applied to our data on the GPU. Moreover, this is one of several sklearn algorithms that can be applied without incurring any device-to-host copies, so we will use it directly in our evaluation protocol.
For demonstration purposes, we will use accuracy (the default scoring metric for random forest models in sklearn) as our metric, but remember that accuracy [should](https://en.wikipedia.org/wiki/Accuracy_paradox) [not](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0084217) [be](https://www.fharrell.com/post/class-damage/) [used](https://medium.com/@limavallantin/why-you-should-not-trust-only-in-accuracy-to-measure-machine-learning-performance-a72cf00b4516) as a model-selection metric for any serious application.
```
import warnings
import numpy
from sklearn.model_selection import KFold
def evaluate(pipeline, data, n_splits=5, target_col='target'):
""""""
x = data[data.columns.difference([target_col])]
y = data[[target_col]]
folds = KFold(n_splits=n_splits, shuffle=False)
scores = numpy.empty(folds.get_n_splits(x), dtype=numpy.float32)
for i, (train_indices, test_indices) in enumerate(folds.split(x)):
x_train, x_test = x.iloc[train_indices], x.iloc[test_indices]
y_train, y_test = y.iloc[train_indices], y.iloc[test_indices]
pipeline.fit(x_train, y_train)
scores[i] = pipeline.score(x_test, y_test)
return numpy.average(scores)
def cu_evaluate(pipeline):
"""Convenience wrapper for evaluating cuML-based pipelines"""
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return evaluate(pipeline, data_cudf)
def sk_evaluate(pipeline):
"""Convenience wrapper for evaluating sklearn-based pipelines"""
# Suppress sklearn data conversion warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
return evaluate(pipeline, data_pd)
```
With these two convenience functions, we can quickly assess performance of full processing-and-classification pipelines with a single call.
## 3. The Model
For the moment, we are focusing on the preprocessing portion of our pipeline, so we will stick with a random forest model with a fixed set of hyperparameters. We will set `n_jobs` to `-1` for the sklearn model in order to make use of all available CPU processors, but we will otherwise stick with defaults.
You will probably notice a small difference in the accuracy achieved by the cuML random forest implementation and that achieved by sklearn. RAPIDS is in the process of transitioning to a new random forest implementation that performs much more comparably to sklearn. If you'd like to try out this (currently experimental) implementation, uncomment the indicated lines below.
```
from cuml.ensemble import RandomForestClassifier as cuRandomForestClassifier
from sklearn.ensemble import RandomForestClassifier as skRandomForestClassifier
cu_classifier = cuRandomForestClassifier()
sk_classifier = skRandomForestClassifier(n_jobs=-1)
# Uncomment the following lines to try out the new experimental RF
# implementation in cuML
# cu_classifier = cuRandomForestClassifier(max_features=1.0,
# max_depth=13,
# use_experimental_backend=True)
```
## Intermezzo: Helper Code
One of the standout features of sklearn is its consistent API for algorithms that fill the same role. Introducing a new algorithm that can be slotted into an sklearn pipeline is as easy as defining a class that fits that API. In this section, we'll define a few helper classes that will help us easily apply whatever preprocessing transformations we desire as part of our pipeline.
Feel free to skip over the details of these implementations; the docstrings should give a sufficient sense of their purpose and usage.
```
import pandas
from sklearn.base import BaseEstimator, TransformerMixin
class LambdaTransformer(BaseEstimator, TransformerMixin):
"""An sklearn-compatible class for simple transformation functions
This helper class is useful for transforming data with a straightforward
function requiring no fitting
"""
def __init__(self, transform_function):
self.transform_function = transform_function
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
return self.transform_function(X)
# Workaround for https://github.com/rapidsai/cuml/issues/3041
class PerFeatureTransformer(BaseEstimator, TransformerMixin):
"""An sklearn-compatible class for fitting and transforming on
each feature independently
Some preprocessing algorithms need to be applied independently to
each feature. This wrapper facilitates that process.
"""
def __init__(self,
transformer_class,
transformer_args=(),
transformer_kwargs={},
copy=True):
self.transformer_class = transformer_class
self.transformer_args = transformer_args
self.transformer_kwargs = transformer_kwargs
self.transformers = {}
self.copy = copy
def fit(self, X, y=None):
for col in X.columns:
self.transformers[col] = self.transformer_class(
*self.transformer_args,
**self.transformer_kwargs
)
try:
self.transformers[col].fit(X[col], y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
self.transformers[col].fit(X[col])
return self
def transform(self, X, y=None):
if self.copy:
X = X.copy()
for col in X.columns:
try:
X[col] = self.transformers[col].transform(X[col], y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
X[col] = self.transformers[col].transform(X[col])
return X
def fit_transform(self, X, y=None):
for col in X.columns:
self.transformers[col] = self.transformer_class(
*self.transformer_args,
**self.transformer_kwargs
)
try:
X[col] = self.transformers[col].fit_transform(X[col], y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
X[col] = self.transformers[col].fit_transform(X[col])
return X
class FeatureGenerator(BaseEstimator, TransformerMixin):
"""An sklearn-compatible class for adding new features to existing
data
"""
def __init__(self,
generator,
include_dtypes=None,
exclude_dtypes=None,
columns=None,
copy=True):
self.include_dtypes = include_dtypes
self.exclude_dtypes = exclude_dtypes
self.columns = columns
self.copy = copy
self.generator = generator
def _get_subset(self, X):
subset = X
if self.columns is not None:
subset = X[self.columns]
if self.include_dtypes or self.exclude_dtypes:
subset = subset.select_dtypes(
include=self.include_dtypes,
exclude=self.exclude_dtypes
)
return subset
    def fit(self, X, y=None):
        subset = self._get_subset(X)
        try:
            self.generator.fit(subset, y=y)
        except TypeError:  # https://github.com/rapidsai/cuml/issues/3053
            self.generator.fit(subset)
        return self  # fit must return self for sklearn compatibility
def transform(self, X, y=None):
subset = self._get_subset(X)
try:
new_features = self.generator.transform(subset, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
new_features = self.generator.transform(subset)
if isinstance(X, cudf.DataFrame):
return cudf.concat((X.reset_index(), new_features), axis=1)
else:
new_features = pandas.DataFrame(
new_features,
columns=["new_{}".format(i) for i in range(new_features.shape[1])]
)
return pandas.concat((X.reset_index(), new_features), axis=1)
def fit_transform(self, X, y=None):
subset = self._get_subset(X)
try:
new_features = self.generator.fit_transform(subset, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
new_features = self.generator.fit_transform(subset)
if isinstance(X, cudf.DataFrame):
return cudf.concat((X.reset_index(), new_features), axis=1)
else:
new_features = pandas.DataFrame(
new_features,
columns=["new_{}".format(i) for i in range(new_features.shape[1])]
)
return pandas.concat((X.reset_index(), new_features), axis=1)
class SubsetTransformer(BaseEstimator, TransformerMixin):
"""An sklearn-compatible class for fitting and transforming on
a subset of features
This allows a transformation to be applied to only data in a
specific column of a dataframe or only data of a particular dtype.
"""
def __init__(self,
transformer,
include_dtypes=None,
exclude_dtypes=None,
columns=None,
copy=True):
self.transformer = transformer
self.include_dtypes = include_dtypes
self.exclude_dtypes = exclude_dtypes
self.columns = columns
self.copy = copy
def _get_subset(self, X):
subset = X
if self.columns is not None:
subset = X[self.columns]
if self.include_dtypes or self.exclude_dtypes:
subset = subset.select_dtypes(
include=self.include_dtypes,
exclude=self.exclude_dtypes
)
return subset
def fit(self, X, y=None):
subset = self._get_subset(X)
try:
self.transformer.fit(subset, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
self.transformer.fit(subset)
return self
def transform(self, X, y=None):
if self.copy:
X = X.copy()
subset = self._get_subset(X)
try:
X[subset.columns] = self.transformer.transform(subset, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
X[subset.columns] = self.transformer.transform(subset)
return X
def fit_transform(self, X, y=None):
if self.copy:
X = X.copy()
subset = self._get_subset(X)
try:
X[subset.columns] = self.transformer.fit_transform(subset, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
X[subset.columns] = self.transformer.fit_transform(subset)
return X
class DeviceSpecificTransformer(BaseEstimator, TransformerMixin):
"""An sklearn-compatible class for performing different
transformations based on whether it receives a cuDF or Pandas
dataframe"""
def __init__(self, pandas_transformer, cudf_transformer):
self.pandas_transformer = pandas_transformer
self.cudf_transformer = cudf_transformer
self.transformer = None
self.is_cuml_transformer = None
def fit(self, X, y=None):
if hasattr(X, 'to_pandas'):
self.transformer = self.cudf_transformer
self.is_cuml_transformer = True
else:
self.transformer = self.pandas_transformer
self.is_cuml_transformer = False
try:
self.transformer.fit(X, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
self.transformer.fit(X)
return self
def transform(self, X, y=None):
try:
return self.transformer.transform(X, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
return self.transformer.transform(X)
def fit_transform(self, X, y=None):
if hasattr(X, 'to_pandas'):
self.transformer = self.cudf_transformer
else:
self.transformer = self.pandas_transformer
try:
return self.transformer.fit_transform(X, y=y)
except TypeError: # https://github.com/rapidsai/cuml/issues/3053
return self.transformer.fit_transform(X)
```
Note that much of the logic here is necessary only because of the relative messiness of the dataset we intend to work with or because we will be using these transformers in both cuML and sklearn pipelines. Simpler, cleaner datasets may not require any of this helper logic if they are processed solely with cuML.
## 4. Feature Engineering
With an evaluation protocol in place, a fixed model defined, and helper classes written, we can now turn to the actual task of cleaning up our data and exploring the available tools for creating useful features.
### 4.1 A Naive Approach
We'll start by defining a few cleaning steps that will be needed simply to pass off the data to our classifiers. Specifically, we will:
1. Drop the `ID` column, since we do not want to take the arbitrarily-assigned ID into account in our training.
2. Replace null and NaN values with something our classifier can work with.
3. Drop any non-numeric features, since our classifier does not currently support such data.
4. Convert remaining (numeric) features to 32-bit floats, since cuML's random forest implementation requires this.
This approach is quite naive. Categorical integer data is treated in the same way as quantitative float data. Categorical strings are ignored entirely, and missing data is replaced with a constant value that may not be appropriate in the context of the full dataframe. We will address all of these concerns and more as we build up more complex preprocessing pipelines.
```
drop_id = LambdaTransformer(lambda x: x[x.columns.difference(['ID'])])
replace_numeric_na = SubsetTransformer(
LambdaTransformer(lambda x: x.fillna(0)),
include_dtypes=['integer', 'floating']
)
replace_string_na = SubsetTransformer(
LambdaTransformer(lambda x: x.fillna('UNKNOWN')),
include_dtypes=['object']
)
filter_numeric = LambdaTransformer(lambda x: x.select_dtypes('number'))
convert_to_float32 = LambdaTransformer(lambda x: x.astype('float32'))
preprocessing_steps = [
("Drop ID", drop_id),
("Replace numeric NA", replace_numeric_na),
("Replace string NA", replace_string_na),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
```
With these naive preprocessing steps defined, let's create an sklearn `Pipeline` for both the cuML classifier and the sklearn classifier. We can then apply our previously-defined evaluation protocol to each and assess both runtime and accuracy performance.
```
from sklearn.pipeline import Pipeline
cuml_pipeline = Pipeline(
preprocessing_steps + [("Classifier", cu_classifier)],
verbose=1 # Detailed timing information
)
sklearn_pipeline = Pipeline(
preprocessing_steps + [("Classifier", sk_classifier)],
verbose=1 # Detailed timing information
)
%time cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
# WARNING: Takes several minutes
%time sk_evaluate(sklearn_pipeline)
```
Given the known runtime improvement of cuML's GPU-accelerated random forest implementation, it is no surprise that the cuML pipeline executed faster than its CPU-only equivalent. Digging into the timings of individual pipeline steps, we do indeed see that the majority of our performance gain with cuML comes from the classifier itself, but we also see some improvement in runtimes for the preprocessing steps. We'll take a closer look at that once we have a slightly more interesting pipeline in place.
Since the sklearn pipeline takes several minutes to run and the observed accuracy is similar to what we see with cuML, most of the remaining sklearn cells in this notebook will be disabled with the `%%script false --no-raise-error` magic tag. You can simply delete this tag from the cell if you wish to run the sklearn version of a particular section of code.
### 4.2 Data Imputation
As a marginal improvement on our initial approach, let's use a slightly more sophisticated method for dealing with missing values. Specifically, let's fill in missing quantitative features with the mean value for that feature in our training data. For this, we will make use of the `SimpleImputer` class, newly available in RAPIDS v0.16 through the `cuml.experimental.preprocessing` module.
#### Aside: cuML's Experimental Preprocessing
It is no secret that cuML stands on the shoulders of the sklearn giant and benefits enormously from sklearn's brilliant design, thoughtful implementation, and enthusiastic community. In v0.16, cuML has benefitted even more directly through its new (and currently experimental) preprocessing features.
Because cuML has maintained such strong compatibility with sklearn, the RAPIDS team was able to incorporate sklearn code (still distributed under the terms of the sklearn license, of course) directly into cuML with only minor modifications. This became cuML's experimental preprocessing module. So if you appreciate having these features available in cuML, remember that it is thanks to the consistently stellar work of the sklearn developers and community, and be sure to [cite sklearn](https://scikit-learn.org/stable/about.html#citing-scikit-learn) in any scientific publications based on these features.
As an experimental feature, we are actively seeking feedback on these newly-introduced preprocessing algorithms. Please do report any problems you encounter via the [cuML issue tracker](https://github.com/rapidsai/cuml/issues).
```
from sklearn.impute import SimpleImputer as skSimpleImputer
from cuml.experimental.preprocessing import SimpleImputer as cuSimpleImputer
sk_mean_imputer = SubsetTransformer(
skSimpleImputer(missing_values=numpy.nan, strategy='mean'),
include_dtypes=['floating']
)
cu_mean_imputer = SubsetTransformer(
cuSimpleImputer(missing_values=numpy.nan, strategy='mean'),
include_dtypes=['floating']
)
mean_imputer = DeviceSpecificTransformer(sk_mean_imputer, cu_mean_imputer)
```
Because cupy does not currently support null values, we will need to add one other step to our pipeline: converting null data to NaNs or another known invalid value before performing imputation.
```
def _replace_nulls(data):
data = data.copy()
replacements = [
(numpy.floating, numpy.nan),
(numpy.integer, -1),
(object, 'UNKNOWN')
]
for col_type, value in replacements:
subset = data.select_dtypes(col_type)
data[subset.columns] = subset.fillna(value)
return data
null_filler = LambdaTransformer(_replace_nulls)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Imputation", mean_imputer),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
cuml_pipeline = Pipeline(preprocessing_steps + [("Classifier", cu_classifier)])
sklearn_pipeline = Pipeline(preprocessing_steps + [("Classifier", sk_classifier)])
cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
sk_evaluate(sklearn_pipeline)
```
We see an almost negligible increase in accuracy using mean imputation, but you can try experimenting with other imputation strategies, including "median" and "most_frequent", to see what impact they have on performance.
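The difference between strategies is easy to see on a skewed feature; this toy pandas sketch mirrors what `SimpleImputer` computes for `strategy='mean'` versus `strategy='median'`.

```python
import numpy as np
import pandas as pd

# A skewed feature with one outlier and one missing entry.
col = pd.Series([1.0, 2.0, 2.0, 100.0, np.nan])

mean_filled = col.fillna(col.mean())      # strategy='mean'
median_filled = col.fillna(col.median())  # strategy='median'

# The outlier drags the mean up to 26.25, while the median stays at 2.0,
# so the two strategies impute very different values here.
```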
### 4.3 Scaling
For some machine learning algorithms, it is helpful to adjust the average value of a feature and scale it so that its "spread" is comparable to other features. There are a few strategies for doing this, but one of the most common is to subtract off the mean and then divide by the standard deviation. We can do precisely this using the `StandardScaler` algorithm.
```
from sklearn.preprocessing import StandardScaler as skStandardScaler
from cuml.experimental.preprocessing import StandardScaler as cuStandardScaler
sk_scaler = SubsetTransformer(
skStandardScaler(),
include_dtypes=['floating']
)
cu_scaler = SubsetTransformer(
cuStandardScaler(),
include_dtypes=['floating']
)
scaler = DeviceSpecificTransformer(sk_scaler, cu_scaler)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Imputation", mean_imputer),
("Scaling", scaler),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
cuml_pipeline = Pipeline(preprocessing_steps + [("Classifier", cu_classifier)])
sklearn_pipeline = Pipeline(preprocessing_steps + [("Classifier", sk_classifier)])
cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
sk_evaluate(sklearn_pipeline)
```
In general, random forest models do not benefit from this kind of scaling, but other model types, especially logistic regression and neural networks, can see improved accuracy or better convergence with this sort of preprocessing.
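The underlying transformation is a one-liner; here is a NumPy sketch of what `StandardScaler` computes independently for each feature column.

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# Subtract the mean, then divide by the standard deviation.
scaled = (x - x.mean()) / x.std()

# The scaled feature now has zero mean and unit standard deviation.
```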
### 4.4 Encoding Categorical Data
Up to this point, we have not taken advantage of the categorical features in our data at all. In order to do so, we must encode them in some numeric representation. cuML offers a number of strategies for doing this, including one-hot encoding, label encoding, and target encoding. We will demonstrate just one of these algorithms (label encoding) here.
Using encoders on different training and testing data can be tricky because our training split may be missing some labels from our testing split. cuML's `LabelEncoder` includes the `handle_unknown` param which allows us to mark previously-unseen categories as null. Since all integer entries in our dataset are whole numbers, we can then replace these nulls with a value of -1 using two quick helper transformations.
In sklearn, we must use a slightly different workaround.
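To see why this matters, here is a toy dict-based encoder (purely illustrative, not the cuML or sklearn implementation) fitted on training labels and applied to test labels containing an unseen category.

```python
# Fit: assign an integer to each label seen during training.
train_labels = ["cat", "dog", "cat"]
mapping = {label: i for i, label in enumerate(sorted(set(train_labels)))}

# Transform: "bird" never appeared during fitting, so a naive lookup would
# raise KeyError; mapping it to the sentinel -1 mimics handle_unknown.
test_labels = ["dog", "bird"]
encoded = [mapping.get(label, -1) for label in test_labels]
# encoded == [1, -1]
```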
```
from cuml.preprocessing import LabelEncoder as cuLabelEncoder
from sklearn.preprocessing import LabelEncoder as skLabelEncoder
cu_encoder = SubsetTransformer(
PerFeatureTransformer(cuLabelEncoder, transformer_kwargs={'handle_unknown': 'ignore'}),
include_dtypes=['integer', 'object']
)
# cuML workarounds for unseen data
def standard_ints(data):
subset = data.select_dtypes('integer')
data[subset.columns] = subset.astype('int32')
return data
int_standardizer = LambdaTransformer(standard_ints)
replace_unknown_labels = LambdaTransformer(lambda x: x.fillna(-1))
# sklearn workarounds for unseen data
class SKUnknownEncoder(BaseEstimator, TransformerMixin):
UNKNOWN = 'UNKNOWN'
def __init__(self, base_encoder, copy=True):
self.base_encoder = base_encoder
self.copy = copy
    def fit(self, X, y=None):
        self.base_encoder.fit(list(X) + [self.UNKNOWN])
        return self  # fit must return self for sklearn compatibility
def transform(self, X):
if self.copy:
X = X.copy()
missing = set(X.unique()) - set(self.base_encoder.classes_)
X = X.replace(list(missing), self.UNKNOWN)
return self.base_encoder.transform(X)
    def fit_transform(self, X, y=None):
        # Fit with the UNKNOWN sentinel included, then transform, so that
        # later transform() calls on unseen labels work as expected.
        self.fit(X)
        return self.transform(X)
sk_encoder = SubsetTransformer(
PerFeatureTransformer(SKUnknownEncoder, transformer_args=(skLabelEncoder(),)),
include_dtypes=['integer', 'object']
)
label_encoder = DeviceSpecificTransformer(sk_encoder, cu_encoder)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Encoding", label_encoder),
("Imputation", mean_imputer),
("Standardize ints", int_standardizer),
("Handle unknown labels", replace_unknown_labels),
("Scaling", scaler),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
cuml_pipeline = Pipeline(preprocessing_steps + [("Classifier", cu_classifier)])
sklearn_pipeline = Pipeline(preprocessing_steps + [("Classifier", sk_classifier)])
cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
sk_evaluate(sklearn_pipeline)
```
### 4.5 Discretization
While encoding gives us a way of converting discrete labels into numeric values, it is sometimes useful to do the reverse. When quantitative data falls into obviously useful categories (like "zero" vs "non-zero") or when the noise in quantitative data does not yield meaningful information about our prediction target, it can help our model to preprocess that quantitative data by converting it into categorical "bins". We will give just one example of this (`KBinsDiscretizer`), which we will naively apply across all categorical data. For more serious feature engineering, we would perform a more careful analysis of the meaning and distribution of each quantitative feature.
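The core idea of binning can be sketched with `np.digitize`; note this sketch uses equal-width bins for simplicity, whereas `KBinsDiscretizer` defaults to quantile-based edges.

```python
import numpy as np

values = np.array([0.1, 0.4, 0.35, 0.8, 0.95])

# Three equal-width bins over [0, 1); digitize returns each value's bin index.
edges = np.array([1 / 3, 2 / 3])
bins = np.digitize(values, edges)
# bins == [0, 1, 1, 2, 2]
```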
```
from sklearn.preprocessing import KBinsDiscretizer as skKBinsDiscretizer
from cuml.experimental.preprocessing import KBinsDiscretizer as cuKBinsDiscretizer
sk_discretizer = SubsetTransformer(
skKBinsDiscretizer(encode='ordinal'),
include_dtypes=['floating']
)
cu_discretizer = SubsetTransformer(
cuKBinsDiscretizer(encode='ordinal'),
include_dtypes=['floating']
)
discretizer = DeviceSpecificTransformer(sk_discretizer, cu_discretizer)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Encoding", label_encoder),
("Imputation", mean_imputer),
("Standardize ints", int_standardizer),
("Handle unknown labels", replace_unknown_labels),
("Scaling", scaler),
("Discretization", discretizer),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
cuml_pipeline = Pipeline(preprocessing_steps + [("Classifier", cu_classifier)])
sklearn_pipeline = Pipeline(preprocessing_steps + [("Classifier", sk_classifier)])
cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
sk_evaluate(sklearn_pipeline)
```
### 4.6 Generating New Features
We have looked at several ways of processing existing features that may help a machine learning model converge faster or perform better, but we can also generate new features from the existing data to help create the best possible representation of those data.
One of the most straightforward examples of this technique is exemplified by the `PolynomialFeatures` algorithm. This algorithm works by looking at the products of existing features up to a certain order. Thus, if we have features `a`, `b`, and `c`, it might be useful to let the model see `ab`, `ac`, `bc` and potentially even `a**2`, `b**2`, and `c**2`.
In our case, we will again take a fairly naive approach, adding all of the interaction terms of order 2 (corresponding to `ab`, `ac`, and `bc` in the above example) as new features.
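For a single row, the added interaction terms amount to all pairwise products, which can be written out with `itertools.combinations` (note that `PolynomialFeatures` also emits a bias column and the original features; this sketch shows only the new pairwise terms).

```python
from itertools import combinations

# One row with features a, b, c.
row = {"a": 2.0, "b": 3.0, "c": 5.0}

# Order-2 interaction terms: every pairwise product of distinct features.
interactions = {
    f"{m}*{n}": row[m] * row[n] for m, n in combinations(sorted(row), 2)
}
# interactions == {"a*b": 6.0, "a*c": 10.0, "b*c": 15.0}
```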
```
from cuml.experimental.preprocessing import PolynomialFeatures as cuPolynomialFeatures
from sklearn.preprocessing import PolynomialFeatures as skPolynomialFeatures
sk_generator = FeatureGenerator(
skPolynomialFeatures(interaction_only=True, degree=2),
include_dtypes=['integer']
)
cu_generator = FeatureGenerator(
cuPolynomialFeatures(interaction_only=True, degree=2),
include_dtypes=['integer']
)
generator = DeviceSpecificTransformer(sk_generator, cu_generator)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Encoding", label_encoder),
("Imputation", mean_imputer),
("Standardize ints", int_standardizer),
("Handle unknown labels", replace_unknown_labels),
("Generate products", generator),
("Scaling", scaler),
("Discretization", discretizer),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
cuml_pipeline = Pipeline(preprocessing_steps + [("Classifier", cu_classifier)])
sklearn_pipeline = Pipeline(preprocessing_steps + [("Classifier", sk_classifier)])
cu_evaluate(cuml_pipeline)
%%script false --no-raise-error
sk_evaluate(sklearn_pipeline)
```
## 5. Final Assessment
Blindly applying the techniques presented thus far, we have seen a very modest increase in accuracy due solely to preprocessing. As evidenced by the ingenious solutions presented for the Kaggle competition associated with this dataset, a more careful and thorough exploration of preprocessing can yield much more impressive performance.
A key factor in finding an effective preprocessing protocol is how long it takes to iterate through possibilities and assess their impact. Indeed, this is one of the key benefits of cuML's new preprocessing tools. Using them, we can load data onto the GPU then tweak, transform, and use it for training and inference without ever incurring the cost of device-to-host transfers.
With this in mind, let's take one final look at execution time for our final pipeline, breaking it down and analyzing the specific benefits of GPU-accelerated preprocessing.
```
#Increase verbosity to provide timing details
cuml_pipeline = Pipeline(
preprocessing_steps + [("Classifier", cu_classifier)],
verbose=1
)
sklearn_pipeline = Pipeline(
preprocessing_steps + [("Classifier", sk_classifier)],
verbose=1
)
%time cu_evaluate(cuml_pipeline)
preprocessing_steps = [
("Drop ID", drop_id),
("Replace nulls", null_filler),
("Encoding", label_encoder),
("Imputation", mean_imputer),
("Standardize ints", int_standardizer),
("Handle unknown labels", replace_unknown_labels),
("Generate products", generator),
("Scaling", scaler),
("Discretization", discretizer),
("Numeric filter", filter_numeric),
("32-bit Conversion", convert_to_float32)
]
sklearn_pipeline = Pipeline(
preprocessing_steps + [("Classifier", sk_classifier)],
verbose=1
)
%time sk_evaluate(sklearn_pipeline)
preproc_only_pipeline = Pipeline(preprocessing_steps)
%%time
# Suppress warnings from naive application of discretizer to
# all features
with warnings.catch_warnings():
warnings.simplefilter("ignore")
preproc_only_pipeline.fit_transform(data_cudf[data_cudf.columns.difference(['target'])], data_cudf.target)
%%time
# Suppress warnings from naive application of discretizer to
# all features
with warnings.catch_warnings():
warnings.simplefilter("ignore")
preproc_only_pipeline.fit_transform(data_pd[data_pd.columns.difference(['target'])], data_pd.target)
```
Looking at these results, we can see the runtime benefit of GPU acceleration in both the entire preprocessing and classification pipeline and the preprocessing portion alone. For feature engineering, this means faster iteration, lower compute costs, and the possibility of conducting more systematic hyper-parameter optimization over even the preprocessing steps themselves. Those with an interest in HPO might check out our [detailed walkthroughs](https://rapids.ai/hpo) on performing HPO with RAPIDS in the cloud. The techniques explored there could easily be combined with those demonstrated in this notebook to rapidly search the space of available preprocessing and model hyperparameters.
## 6. Conclusions
Thanks to the newly-expanded cuML preprocessing features in RAPIDS v0.16, it is now possible to keep your entire machine learning pipeline on the GPU, without copying data back to the host to make use of CPU-only algorithms. This offers substantial benefits in terms of runtime, which can in turn lead to more thorough exploration of the feature engineering space and dramatically lower compute times and costs.
While this notebook primarily offers a high-level demonstration of available preprocessing features rather than an in-depth optimization of features on a particular dataset, you may be interested in using it to play more with the BNP dataset yourself to engineer the perfect combination of curated features. Or better yet, try it with your own data.
If you like what you see here, there is plenty more to explore in our [other demo notebooks](https://github.com/rapidsai/notebooks). Please feel free to report any problems you find or ask questions via [the cuML issue tracker](https://github.com/rapidsai/cuml/issues), and keep an eye out for the next release of cuML (v0.17), which we expect to have an even smoother preprocessing experience as we start to transition the new preprocessing features out of experimental.
```
#Introduction
#.....
```
Check whether Jupyter Lab uses the correct Python interpreter with `!which python`.
It should be something like `/opt/anaconda3/envs/[environment name]/bin/python` (on Mac).
If not, try this: https://github.com/jupyter/notebook/issues/3146#issuecomment-352718675
```
import sys
sys.executable
#!which python  # which does not seem to work on Windows
```
# Install dependencies:
```
install_packages = False
if install_packages:
!conda install tensorflow=2 -y
!conda install -c anaconda pandas -y
!conda install -c conda-forge html2text -y
!conda install -c conda-forge tensorflow-hub -y # !conda install -c akode html2text -y
!conda install -c conda-forge tqdm -y
!conda install -c anaconda scikit-learn -y
!conda install -c conda-forge matplotlib -y
!conda install -c anaconda seaborn -y
print("Done")
```
# Imports
```
#imports
import pandas as pd
import numpy as np
import os
import time
import tensorflow as tf
import tensorflow_hub as hub
import zipfile
from html2text import HTML2Text
from tqdm import tqdm
import re
from sklearn.metrics import pairwise_distances
from sklearn.preprocessing import normalize
import matplotlib.pyplot as plt
import seaborn as sns
```
# Set pandas print options
This will improve readability of printed pandas dataframe.
```
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
```
## Set global Parameters
Set your parameters here:
- `data_path`: Put the data you downloaded with YouTube Data Tools in this path.
- `output_path`: The files generated in this notebook will be saved here.
- `url_dict`: URLs to models on TensorFlow Hub. Other models are available there.
- `model_type`: Define which model you would like to use. Choose one from `url_dict`.
- `new_embeddings`: If true, new embeddings will be generated and saved at `output_path`. Otherwise, embeddings are loaded from disk.
```
data_path = './data/videoinfo_VMXcbWwzeY8_2020_11_24-15_26_42_comments.tab'
output_path = "./output/"
new_embeddings = True
url_dict = {
'Transformer' : "https://tfhub.dev/google/universal-sentence-encoder-large/5",
'DAN' : "https://tfhub.dev/google/universal-sentence-encoder/4",
'Transformer_Multilingual': "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3"
}
model_type = 'Transformer' #@param ['DAN','Transformer','Transformer_Multilingual']
```
## Create output directory
Try to create the directory defined by output_path
```
try:
    os.mkdir(output_path)
except OSError:
    print("Creation of the directory %s failed" % output_path)
else:
    print("Successfully created the directory %s " % output_path)
```
# Load Data
Load your data as a pandas DataFrame.
```
if new_embeddings:
    data = pd.read_csv(data_path, sep='\t', header=0)
    data.head()
```
# Preprocessing
Preprocess your data:
- Drop empty rows
- Drop unused columns
```
if new_embeddings:
    data = data.dropna(subset=['text', 'authorName'])  # drop rows with no content
    data = data.drop(['id', 'replyCount', 'likeCount', 'authorChannelUrl', 'authorChannelId', 'isReplyTo', 'isReplyToName'], axis=1)  # drop unused columns
    data.head()
```
- remove HTML-tags, links and usernames
```
if new_embeddings:
    # Remove HTML tags
    tqdm.pandas()
    h = HTML2Text()
    h.ignore_links = True
    data['cleaned'] = data['text'].progress_apply(lambda x: h.handle(x))
    print("Removed HTML tags.")
    # Remove links (note the escaped dot in the bit.ly pattern)
    http_link_pattern = r'http\S+'
    bitly_link_pattern = r'bit\.ly/\S+'
    data['cleaned'] = data['cleaned'].str.replace(http_link_pattern, '')
    data['cleaned'] = data['cleaned'].str.replace(bitly_link_pattern, '')
    print("Removed links.")
    # Remove user names
    keep_names = ["earth", "Tide", "Geologist", "A Person", "Titanic", "adventure", "Sun", "The United States Of America"]  # user names we want to keep
    user_names = [name for name in data['authorName'].unique() if (len(name) > 3 and name not in keep_names)]
    data['cleaned'] = data['cleaned'].str.replace('|'.join(map(re.escape, user_names)), '')
    print("Removed user names.")
```
# Save or Load preprocessed data
Save your data after preprocessing, or load preprocessed data from disk.
```
if new_embeddings:
    data.to_pickle(output_path + 'data_preprocessed' + '.pkl')
else:
    data = pd.read_pickle(output_path + 'data_preprocessed' + '.pkl')
data.head()
```
# Produce Text Embeddings with Universal Sentence Encoder
## Load Model
Load the model from TF-hub
```
hub_url = url_dict[model_type]
if new_embeddings:
    print("Loading model. This will take some time...")
    embed = hub.load(hub_url)
```
## Embed Documents
Produce embeddings of your documents.
```
if new_embeddings:
    # embed in batches of 200 comments to keep memory usage manageable
    for k, g in data.groupby(np.arange(len(data)) // 200):
        if k == 0:
            embeddings = embed(g['cleaned'])
        else:
            embeddings_new = embed(g['cleaned'])
            embeddings = tf.concat(values=[embeddings, embeddings_new], axis=0)
        print(k, end=" ")
    print("The embeddings vector is of fixed length {}".format(embeddings.shape[1]))
    np.save(output_path + 'embeddings' + model_type + '.npy', embeddings, allow_pickle=True, fix_imports=True)
else:
    embeddings = np.load(output_path + 'embeddings' + model_type + '.npy', mmap_mode=None, allow_pickle=False, fix_imports=True, encoding='ASCII')
embeddings.shape
```
## Calculate Similarity Matrix with angular distance
'Following Cer et al. (2018), we first compute
the sentence embeddings u, v for an STS sentence
pair, and then score the sentence pair similarity
based on the angular distance between the two
embedding vectors d = − arccos (uv/||u|| ||v||).'
```
from sklearn.metrics.pairwise import cosine_similarity

def cos_sim(input_vectors):
    similarity = cosine_similarity(input_vectors)
    return similarity

cosine_similarity_matrix = cos_sim(np.array(embeddings))
print(cosine_similarity_matrix)
```
# Plot Similarity
Plot a heat map showing the semantic contextual similarity between comments.
```
import seaborn as sns

def plot_similarity(labels, features, rotation):
    corr = np.inner(features, features)
    sns.set(font_scale=1.2)
    g = sns.heatmap(
        corr,
        xticklabels=labels,
        yticklabels=labels,
        vmin=0,
        vmax=1,
        cmap="YlOrRd")
    g.set_xticklabels(labels, rotation=rotation)
    g.set_title("Semantic Textual Similarity")

num_samples = 5
off_set = 170
plot_similarity(data.iloc[off_set:off_set+num_samples]['cleaned'], embeddings[off_set:off_set+num_samples], 90)
```
# Show neighbours of a comment
Define which comment to analyze
```
comment_index = 171
comment = data["cleaned"][comment_index]
comment_list = data["cleaned"].tolist()
print(comment)
```
Print similar comments.
```
def get_top_similar(sentence, sentence_list, similarity_matrix, topN):
    # find the index of the sentence in the list
    index = sentence_list.index(sentence)
    # get the corresponding row of the similarity matrix
    similarity_row = np.array(similarity_matrix[index, :])
    # get the indices of the most similar sentences
    indices = similarity_row.argsort()[-topN:][::-1]
    return [sentence_list[i] for i in indices]

for i, value in enumerate(get_top_similar(comment, comment_list, cosine_similarity_matrix, 20)):
    print("Top similar comment {}: {}".format(i + 1, value))
```
# iAR package Demo - BIAR Model
```
import iar
import numpy as np
import matplotlib.pyplot as plt
print("iAR version:")
print(iar.__version__)
```
# Simulates from a BIAR Model
```
from iar import BIAR_sample,gentime
np.random.seed(6713)
n=300
phi1=0.9
phi2=0.4
sT=gentime(n=n,lambda1=15,lambda2=2)
y,sT,Sigma =BIAR_sample(n=n,sT=sT,phi_R=phi1,phi_I=phi2,rho=0.9)
plt.subplot(211)
plt.plot(sT, y[0],"o-")
plt.subplot(212)
plt.plot(sT, y[1],"o-")
plt.show()
```
# Maximum Likelihood Estimation of the BIAR Model
```
from iar import BIAR_phi_kalman,BIAR_kalman
y1=np.zeros((2,len(y[0])))
y1[0]=y[0]/np.sqrt(np.var(y[0],ddof=1))
y1[1]=y[1]/np.sqrt(np.var(y[1],ddof=1))
out=BIAR_phi_kalman(x=[0.9,0.4],y1=y1[0],y2=y1[1],t=sT,yerr1=np.zeros(len(y1[0])),yerr2=np.zeros(len(y1[0])))
print(out)
phi_R,phi_I,out=BIAR_kalman(y1=y1[0],y2=y1[1],sT=sT,delta1=np.zeros(len(y1[0])),delta2=np.zeros(len(y1[0])))
print(phi_R)
print(phi_I)
```
# Estimating Contemporary Correlation
```
from iar import BIAR_fit
cor,cov,yhat,xhat=BIAR_fit(x=(phi_R,phi_I),y1=y1[0],y2=y1[1],t=sT,yerr1=np.zeros(len(y1[0])),yerr2=np.zeros(len(y1[0])))
print(cor)
print(cov)
```
# Simulates a Negatively Correlated BIAR Model
```
np.random.seed(6713)
n=300
phi1=-0.9
phi2=0.4
sT=gentime(n=n,lambda1=15,lambda2=2)
y,sT,Sigma =BIAR_sample(n=n,sT=sT,phi_R=phi1,phi_I=phi2,rho=-0.9)
plt.subplot(211)
plt.plot(sT, y[0],"o-")
plt.subplot(212)
plt.plot(sT, y[1],"o-")
plt.show()
y1=np.zeros((2,len(y[0])))
y1[0]=y[0]/np.sqrt(np.var(y[0],ddof=1))
y1[1]=y[1]/np.sqrt(np.var(y[1],ddof=1))
phi_R,phi_I,out=BIAR_kalman(y1=y1[0],y2=y1[1],sT=sT,delta1=np.zeros(len(y1[0])),delta2=np.zeros(len(y1[0])))
print(phi_R)
print(phi_I)
cor,cov,yhat,xhat=BIAR_fit(x=(phi_R,phi_I),y1=y1[0],y2=y1[1],t=sT,yerr1=np.zeros(len(y1[0])),yerr2=np.zeros(len(y1[0])))
print(cor)
print(cov)
```
# Forecast with BIAR Model
```
from iar import BIAR_smoothing
np.random.seed(6713)
n=100
phi1=0.9
phi2=0.4
sT=gentime(n=n,lambda1=15,lambda2=2)
y,sT,Sigma =BIAR_sample(n=n,sT=sT,phi_R=phi1,phi_I=phi2,rho=0.9)
n=len(sT)
p=int(0.9*n)
y0=np.copy(y)
y1=np.zeros((2,100))
y1[0]=(y0[0]-np.mean(y0[0]))/np.sqrt(np.var(y0[0],ddof=1))
y1[1]=(y0[1]-np.mean(y0[1]))/np.sqrt(np.var(y0[1],ddof=1))
#Estimation
phi_R,phi_I,out=BIAR_kalman(y1=y1[0],y2=y1[1],sT=sT,delta1=np.zeros(len(sT)),delta2=np.zeros(len(sT)))
xest=(phi_R,phi_I)
#Forecast
boolean1=np.isin(np.arange(n),np.arange(p,n))
y1[0][np.where(boolean1)[0]]=np.nan
p1=np.arange(p,n)
xestBIAR=np.zeros(len(p1))
difftime=np.zeros(len(p1))
for i in range(len(p1)):
    pos = p1[range(i + 1, len(p1))]
    boolean = np.isin(np.arange(len(sT)), pos)
    y3 = y1[0:2, ~boolean]
    st3 = sT[~boolean]
    difftime[i] = st3[p1[i]] - st3[p1[i] - 1]
    xest1, out = BIAR_smoothing(x=xest, y1=y3[0], y2=y3[1], t=st3, yerr1=np.zeros(len(st3)), yerr2=np.zeros(len(st3)), nsmooth=1)
    xestBIAR[i] = xest1
    y1[0, p1[i]] = xest1
print(xestBIAR)
MSE=(y0[0,boolean1]-xestBIAR)**2
print(np.mean(MSE))
import scipy.stats
ModBIAR=abs(complex(xest[0], xest[1]))
s=np.std(y0[0,:])
yerrBIAR=scipy.stats.norm.ppf(0.975)*s*np.sqrt(1-ModBIAR**(2*difftime))
fig, axs = plt.subplots(2, 1, sharex=True)
fig.subplots_adjust(hspace=0)
axs[0].set_xlim(310,410)
axs[0].set_ylim(np.min(y0)-0.2, np.max(y0)+0.2)
axs[0].plot(sT[np.where(boolean1)[0]], xestBIAR,color="black",linestyle='dashed')
axs[0].scatter(sT[np.where(boolean1)[0]], xestBIAR,color="black",linestyle='dashed')
axs[0].errorbar(sT[np.where(boolean1)[0]], xestBIAR, yerr=yerrBIAR,color="gray")
#plt.plot(sT[np.where(boolean1)[0]], xestCIAR,color="red",linestyle='dashed')
#plt.scatter(sT[np.where(boolean1)[0]], xestCIAR,color="red",linestyle='dashed')
axs[0].plot(sT, y0[0,:],color="green")
axs[0].scatter(sT, y0[0,:],color="green")
axs[1].set_xlim(310,410)
axs[1].set_ylim(np.min(y0)-0.2, np.max(y0)+0.2)
axs[1].plot(sT, y0[1,:],color="red")
axs[1].scatter(sT, y0[1,:],color="red")
axs[0].set_title('Forecasting BIAR')
plt.show()
```
# Using the NDBC Buoy Data Scraper
The Buoy class is used to get realtime and historical data from [NDBC Buoys](https://www.ndbc.noaa.gov/)
[Realtime Buoy Data](#Realtime-data-from-the-Neah-Bay-buoy)
[Historical Buoy Data](#Historical-data)
```
from buoyscraper import Buoy
```
## Realtime data from the Neah Bay buoy
```
neah_bay_id = 46087
neah = Buoy(neah_bay_id)
#print(neah) # Prints metadata
neah.get_realtime("not a valid dtype")
# Get stdmet data
neah_stdmet = neah.get_realtime("stdmet")
neah_stdmet.head(3)
# Check the units for realtime stdmet
help(neah.realtime.stdmet)
# Realtime data has 45 days of data
max(neah_stdmet.index) - min(neah_stdmet.index)
```
**Save and load realtime data**
Note: If a pickle for a ```dtype``` and ```buoy_id``` already exists in the default (or specified) data directory, *it will be updated with any new data!*
```
neah.save_realtime(["stdmet", "data_spec"])
#neah.save_realtime() # Save all available data types
# Loading data with the local timezone. Default is UTC.
neah_stdmet = neah.load_realtime("stdmet", 'US/Pacific')
```
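The note above says an existing pickle gets updated with new data rather than overwritten. A hypothetical sketch of that merge step using plain `pickle` (the real class stores pandas DataFrames; the dict keyed by timestamp and the function name `update_pickle` are my simplifications, not the library's code):

```python
import os
import pickle
import tempfile

def update_pickle(new_rows, path):
    # load any previously saved observations, merge in the new ones, and save back
    data = {}
    if os.path.exists(path):
        with open(path, "rb") as f:
            data = pickle.load(f)
    data.update(new_rows)  # on a duplicate timestamp the newer observation wins
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return data

path = os.path.join(tempfile.mkdtemp(), "demo_stdmet.pkl")
update_pickle({"2020-11-24 15:00": {"WVHT": 1.2}}, path)
merged = update_pickle({"2020-11-24 16:00": {"WVHT": 1.4}}, path)
print(len(merged))  # 2 -- the earlier observation survived the second save
```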
**Plotting the realtime wave height and dominant period for the Neah Bay buoy**
```
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax1 = plt.subplots(figsize=(12, 4))
ax2 = ax1.twinx()
ax1.plot(neah_stdmet['WVHT'], 'b-', linewidth=1, label='wave height')
ax1.set_ylabel('height (m)', color='b', size=16)
ax1.legend(loc="upper left")
ax2.plot(neah_stdmet['DPD'], 'g-', linewidth=1, label='Dominant Period')
ax2.set_ylabel('period (sec)', color='g', size=16)
ax2.legend()
plt.show()
```
## Historical data
Accessing historical data mostly works the same as realtime data. The main difference is the data that are available.
```
from buoyscraper import Buoy
neah_bay_id = 46087
neah = Buoy(neah_bay_id)
neah.get_historical("not a valid dtype")
# Get swell density data
neah_swden = neah.get_historical("swden")
print(len(neah_swden))
neah_swden.head(3)
# Check the units
help(neah.historical.swden)
# Another buoy
new_dungeness_id = 46088
dunge = Buoy(new_dungeness_id)
dunge_swden = dunge.get_historical("swden")
```
**Plotting the historical spectral density means at the New Dungeness and Neah Bay buoys**
* Check the Neah Bay axis. Spectral densities at Neah Bay are much higher than at New Dungeness. The Neah Bay buoy is on the west coast of Washington, while the New Dungeness buoy is in the Strait of Juan de Fuca.
* Frequencies are lower at Neah Bay, because it gets more ocean swell, while New Dungeness gets more wind swell.
* Dungeness has a noticeable bump right around the 0.1 Hz frequency, the same frequencies that are most prevalent at Neah Bay. Some ocean swell makes it into the Strait of Juan de Fuca to Dungeness.
```
import matplotlib.pyplot as plt
%matplotlib inline
frequencies = [float(hz) for hz in list(dunge_swden)]
neah_spectral_means = neah_swden.sum(axis=0)/len(neah_swden)
dunge_spectral_means = dunge_swden.sum(axis=0)/len(dunge_swden)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(frequencies, neah_spectral_means, 'g-')
ax2.plot(frequencies, dunge_spectral_means, 'r-')
ax1.set_ylabel('Neah Bay', color='g')
ax2.set_ylabel('New Dunge', color='r')
fig.suptitle("Spectral Density $(m^2/Hz)$ Means")
ax1.set_xlabel('Frequency (Hz)')
plt.show()
```
Before we begin, let's execute the cell below to display information about the CUDA driver and GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
```
!nvidia-smi
```
## Learning objectives
The **goal** of this lab is to:
- Learn how to find bottlenecks and performance limiters using Nsight tools
- Learn about three levels of parallelism in OpenACC
- Learn how to use clauses to extract more parallelism in loops
In this section, we will optimize the parallel [RDF](../serial/rdf_overview.ipynb) application using OpenACC. Before we begin, feel free to have a look at the parallel version of the code and inspect it once again.
[RDF Parallel Code](../../source_code/openacc/SOLUTION/rdf_data_directive.f90)
Now, let's compile and profile it with Nsight Systems first.
```
#compile the parallel code for Tesla GPU
!cd ../../source_code/openacc && nvfortran -acc -ta=tesla,lineinfo -Minfo=accel -o rdf nvtx.f90 SOLUTION/rdf_data_directive.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
#profile and see output
!cd ../../source_code/openacc && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_parallel ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openacc/rdf_parallel.qdrep) and open it via the GUI.
### Further Optimization
Look at the profiler report from the previous section again. From the timeline, have a close look at the kernel functions. Check out the theoretical [occupancy](../GPU_Architecture_Terminologies.ipynb). As shown in the example screenshot below, the `rdf_98_gpu` kernel has a theoretical occupancy of 62.5%. This clearly shows that occupancy is a limiting factor. *Occupancy* is a measure of how well the GPU compute resources are being utilized: how much parallelism is running versus how much parallelism the hardware could run.
<img src="../images/f_data_thread.png">
NVIDIA GPUs are comprised of multiple [streaming multiprocessors (SMs)](../GPU_Architecture_Terminologies.ipynb), each of which can manage up to 2048 concurrent threads (not all actively running at the same time). Low occupancy shows that there are not enough active threads to fully utilize the computing resources. Higher occupancy implies that the scheduler has more active threads to choose from and hence achieves higher performance. So, what does this mean in the OpenACC execution model?
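Expressed as a formula, theoretical occupancy is just active warps divided by the maximum warps an SM can hold. A quick sketch of that arithmetic (the 64-warp maximum, i.e. 2048 threads / 32 threads per warp, is an assumption for the architectures discussed here):

```python
def theoretical_occupancy(active_warps_per_sm, max_warps_per_sm=64):
    # 2048 concurrent threads / 32 threads per warp = 64 warps per SM
    return active_warps_per_sm / max_warps_per_sm

print(theoretical_occupancy(40))  # 0.625 -- the 62.5% reported for rdf_98_gpu
```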
**3 Levels of Parallelism: Gang, Worker, and Vector**
The CUDA and OpenACC programming models use different terminology for similar ideas. For example, in CUDA, parallel execution is organized into grids, blocks, and threads (check out the [GPU Architecture Terminologies](../GPU_Architecture_Terminologies.ipynb) notebook to learn more about grids, blocks, and threads). The OpenACC execution model, on the other hand, has three levels: *vector*, *worker*, and *gang*. *Vector* threads work in lockstep, performing a single operation on multiple data (SIMD); a *worker* computes one vector; and a *gang* has one or more workers, all of which share resources such as a cache or an SM. Gangs run independently of each other.
OpenACC assumes the device has multiple processing elements (streaming multiprocessors on NVIDIA GPUs) running in parallel, and the mapping of the OpenACC execution model onto CUDA is as follows:
- An OpenACC gang is a threadblock
- A worker is a warp
- An OpenACC vector is a CUDA thread
<img src="../images/diagram.png" width="50%" height="50%">
### Vector, Worker and Gang Clauses
So, in order to improve the occupancy, we have to increase the parallelism within the gang. In other words, we have to increase the number of threads that can be scheduled on the GPU to improve GPU thread occupancy.
As you can see from the profiler report's screenshot, the grid dimension is fairly small, `<53,1,1>`, which shows the small amount of parallelism within the gang. We can use specific clauses to control the level of parallelism the compiler uses to parallelize the next loop. Each of the *vector*, *worker*, and *gang* clauses can take a parameter to specify the size of that level of parallelism. For example, we can control the vector length by using the `vector_length(num)` clause, or we can add more workers by using `num_workers(num)`.
```fortran
!$acc parallel loop gang worker num_workers(32) vector_length(32)
do i=1,N
!$acc loop vector
do j=1,N
...
```
Now, add `gang`, `vector`, and/or `worker` clauses to the code and experiment with the number of vectors and workers. Make the necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again.
From the top menu, click on *File*, then *Open*, and open `rdf.f90` from the `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after making changes, before running the cells below.
```
#compile for Tesla GPU
!cd ../../source_code/openacc && make clean && make
```
Let us start by inspecting the compiler feedback to see if it applied the optimizations. Below is a screenshot of the expected compiler feedback after adding the `gang` and `vector` clauses to the code with a vector length of 128. You can also change the vector length to 32 or 256 and see how the profiler output changes. Line 101 would change to `101, !$acc loop vector(256) ! threadidx%x` or `101, !$acc loop vector(32) ! threadidx%x`.
<img src="../images/f_gang_vector.png">
Now, validate the output by running the executable, and then **Profile** your code with Nsight Systems command line `nsys`.
```
#Run on Nvidia GPU and check the output
!cd ../../source_code/openacc && ./rdf && cat Pair_entropy.dat
```
The output should be the following:
```
s2 : -2.452690945278331
s2bond : -24.37502820694527
```
```
#profile and see output
!cd ../../source_code/openacc && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_gang_vector ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openacc/rdf_gang_vector.qdrep) and open it via the GUI. Have a look at the example expected profiler report below:
<img src="../images/f_gang_128.png">
Check out the kernel functions on the timeline and the occupancy. As you can see from the above screenshot, the theoretical occupancy is now 50% (slightly less than before). If we change the vector length to 32, the occupancy will not change, as we do not have a lot of concurrent threads running.
<img src="../images/f_gang_32.png" width="30%" height="30%">
The loop iteration count inside the `rdf_98_gpu` function (line 98 according to the above compiler feedback) is `natoms = 6720`. In this example, this number gives the grid dimension of `<6720,1,1>`.
How much this optimization speeds up the code will vary according to the application and the target accelerator, but it is not uncommon to see large speed-ups by using collapse on loop nests.
Feel free to check out the [solution](../../source_code/openacc/SOLUTION/rdf_gang_vector_length.f90) to help you understand better.
### Collapse Clauses
In order to expose more parallelism and improve the occupancy, we can use an additional clause called `collapse` on the `!$acc loop` directive to optimize loops. The loop directive gives the compiler additional information about the next loop in the source code through several clauses. Apply the `collapse(N)` clause to a loop directive to collapse the next `N` tightly nested loops into a single, flattened loop. This is useful if you have many nested loops or when you have really short loops. Sample usage of the collapse clause is given as follows:
```fortran
!$acc parallel loop collapse (2)
do i=1,N
do j=1,N
< loop code >
```
When the loop count in tightly nested loops is relatively small compared to the available number of threads on the device, creating a single iteration space across all the nested loops increases the iteration count, thus allowing the compiler to extract more parallelism.
**Tips on where to use:**
- Collapse outer loops to enable creating more gangs.
- Collapse inner loops to enable longer vector lengths.
- Collapse all loops, when possible, to do both
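The index flattening that `collapse(2)` performs can be illustrated in plain Python (a conceptual sketch only; the compiler does this index mapping on the device):

```python
N = 4
# the two nested loops visit (i, j) pairs in row-major order...
nested = [(i, j) for i in range(N) for j in range(N)]
# ...and a single collapsed loop of N*N iterations recovers the same pairs
collapsed = [(k // N, k % N) for k in range(N * N)]
print(nested == collapsed)  # True: one flat iteration space, more parallelism to schedule
```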
Now, add the `collapse` clause to the code and make the necessary changes to the loop directives. Once done, save the file, re-compile via `make`, and profile it again.
From the top menu, click on *File*, then *Open*, and open `rdf.f90` from the `Fortran/source_code/openacc` directory. Remember to **SAVE** your code after making changes, before running the cells below.
```
#compile for Tesla GPU
!cd ../../source_code/openacc && make clean && make
```
Let us start by inspecting the compiler feedback to see if it applied the optimizations. Below is a screenshot of the expected compiler feedback after adding the `collapse` clause to the code. You can see that the nested loops on line 184 have been successfully collapsed.
<img src="../images/f_collapse_feedback.png">
Now, validate the output by running the executable, and then **Profile** your code with Nsight Systems command line `nsys`.
```
#Run on Nvidia GPU and check the output
!cd ../../source_code/openacc && ./rdf && cat Pair_entropy.dat
```
The output should be the following:
```
s2 : -2.452690945278331
s2bond : -24.37502820694527
```
```
#profile and see output
!cd ../../source_code/openacc && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o rdf_collapse ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openacc/rdf_collapse.qdrep) and open it via the GUI. Have a look at the example expected profiler report below:
<img src="../images/f_collapse_thread.png">
Check out the kernel functions on the timeline and the occupancy. As you can see from the above screenshot, the theoretical occupancy is now 62.5%.
The iteration count for the collapsed loop is `natoms * natoms` where `natoms = 6720` (in this example). So, the iteration count for this particular loop (collapse loop) inside the `rdf_98_gpu` function is 45158400 and this number divided by the vector length of *128* is 352800. The maximum grid size is 65535 blocks in each dimension and we can see that we have a grid dimension of `<65535,1,1>` in this example.
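The arithmetic above can be checked quickly (numbers taken from the text: `natoms = 6720`, vector length 128):

```python
natoms = 6720
vector_length = 128

iterations = natoms * natoms          # single iteration space after collapse
gangs = iterations // vector_length   # thread blocks requested for the kernel
print(iterations)  # 45158400
print(gangs)       # 352800 -- far beyond the 65535 blocks per grid dimension,
                   # so the grid shows up as <65535,1,1> in the screenshot
```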
By creating a single iteration space across the nested loops and increasing the iteration count, we improved the occupancy and extracted more parallelism.
**Notes:**
- 100% occupancy is not required for, nor does it guarantee, best performance.
- Less than 50% occupancy is often a red flag
How much this optimization will speed-up the code will vary according to the application and the target accelerator, but it is not uncommon to see large speed-ups by using collapse on loop nests.
Feel free to check out the [solution](../../source_code/openacc/SOLUTION/rdf_collapse.f90) to help you understand better.
There is another clause which may be useful in optimizing loops. The *tile* clause breaks the next loops into tiles before parallelizing them, and it promotes data locality, since the device can then reuse data from nearby tiles.
```fortran
!$acc parallel loop tile (4,4)
do i=1,N
do j=1,N
< loop code >
```
We do not cover this in the labs but this is something you can explore and compare with the explained methods of loop optimization.
## Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook File menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip file of the files you've been working on, and download it with the link below.
```
%%bash
cd ..
rm -f nways_files.zip
zip -r nways_files.zip *
```
**After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../nways_files.zip).
Let us now go back to parallelizing our code using other approaches.
**IMPORTANT**: If you wish to dig deeper and profile the kernel with the Nsight Compute profiler, go to the next notebook. Otherwise, please click on **HOME** to go back to the main notebook for the *N ways of GPU programming for MD* code.
-----
# <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../../nways_MD_start.ipynb>HOME</a> <a href=nways_openacc_opt_2.ipynb>NEXT</a></p>
-----
# Links and Resources
[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
[NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute)
[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)
**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
---
## Licensing
This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
```
from openpiv import tools, pyprocess, scaling, filters, \
validation, preprocess
import numpy as np
from skimage import io
import matplotlib.pyplot as plt
%matplotlib inline
file_a = '../test4/Camera1-0101.tif'
file_b = '../test4/Camera1-0102.tif'
im_a = tools.imread( file_a )
im_b = tools.imread( file_b )
plt.imshow(np.c_[im_a,im_b],cmap='gray')
# let's crop the region of interest
frame_a = im_a[380:1980,0:1390]
frame_b = im_b[380:1980,0:1390]
plt.imshow(np.c_[frame_a,frame_b],cmap='gray')
# Process the original cropped image and see the OpenPIV result:
# typical parameters:
window_size = 32 #pixels
overlap = 16 # pixels
search_area_size = 64 # pixels
frame_rate = 40 # fps
scaling_factor = 96.52 # micron/pixel
# process once with the original images
u, v, sig2noise = pyprocess.extended_search_area_piv(
frame_a.astype(np.int32) , frame_b.astype(np.int32),
window_size = window_size,
overlap = overlap,
dt=1./frame_rate,
search_area_size = search_area_size,
sig2noise_method = 'peak2peak')
x, y = pyprocess.get_coordinates(frame_a.shape,
search_area_size,
overlap )
u, v, mask = validation.global_val( u, v, (-300.,300.),(-300.,300.))
u, v, mask = validation.sig2noise_val( u, v, sig2noise, threshold = 1.1 )
u, v = filters.replace_outliers( u, v, method='localmean', max_iter = 3, kernel_size = 3)
x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor = 96.52 )
# save to a file
x, y, u, v = tools.transform_coordinates(x, y, u, v)
tools.save(x, y, u, v, mask, 'test.txt')
tools.display_vector_field('test.txt', scale=5, width=0.006)
# masking using not optimal choice of the methods or parameters:
masked_a, _ = preprocess.dynamic_masking(frame_a,method='edges',filter_size=7,threshold=0.005)
masked_b, _ = preprocess.dynamic_masking(frame_b,method='intensity',filter_size=3,threshold=0.0)
plt.imshow(np.c_[masked_a,masked_b],cmap='gray')
# masking using optimal (manually tuned) set of parameters and the right method:
masked_a, _ = preprocess.dynamic_masking(frame_a,method='edges',filter_size=7,threshold=0.01)
masked_b, _ = preprocess.dynamic_masking(frame_b,method='edges',filter_size=7,threshold=0.01)
plt.imshow(np.c_[masked_a,masked_b],cmap='gray')
# Process the masked cropped image and see the OpenPIV result:
# process again with the masked images, for comparison
u, v, sig2noise = pyprocess.extended_search_area_piv(
masked_a.astype(np.int32) , masked_b.astype(np.int32),
window_size = window_size,
overlap = overlap,
dt=1./frame_rate,
search_area_size = search_area_size,
sig2noise_method = 'peak2peak')
x, y = pyprocess.get_coordinates(masked_a.shape,
search_area_size,
overlap )
u, v, mask = validation.global_val( u, v, (-300.,300.),(-300.,300.))
u, v, mask = validation.sig2noise_val( u, v, sig2noise, threshold = 1.1)
u, v = filters.replace_outliers( u, v, method='localmean', max_iter = 3, kernel_size = 3)
x, y, u, v = scaling.uniform(x, y, u, v, scaling_factor = scaling_factor )
# save to a file
x, y, u, v = tools.transform_coordinates(x, y, u, v)
tools.save(x, y, u, v, mask, 'test_masked.txt', fmt='%9.6f', delimiter='\t')
tools.display_vector_field('test_masked.txt', scale=5, width=0.006)
```
# Text Preprocessing
For any NLP task in deep learning, the first step is preprocessing the text data into numbers!
In recent years almost all DL packages have started to provide their own text preprocessing APIs; however, each has its own subtle differences which, if not understood correctly, lead to improper data preparation and thus skew model training.
When I resumed my hobby in DL with Transformers + TensorFlow 2.0, I came across different APIs doing the same text tokenization as part of the TensorFlow ecosystem tutorials.
From the days of writing our own tokenizers and encoders/decoders, we now have APIs which can simplify our work a lot. However, care should be taken when using such APIs. For example:
- How do you want the text to be split?
- How should the tokenizer handle punctuation/special characters?
- How are out-of-vocabulary (OOV) words handled?
- Do you want to use [WordPiece tokenization](https://stackoverflow.com/questions/55382596/how-is-wordpiece-tokenization-helpful-to-effectively-deal-with-rare-words-proble/55416944#55416944)?
- Does the tokenizer/encoder support character-level encoding?
- How is the vocabulary length calculated? Does it include the PAD and OOV tokens?

Choosing the right API for our task among the many options out there is not easy, as each API is built for a specific purpose to fit with its counterparts. Some work natively with tensors, some with TensorFlow Datasets, some at the character level, etc.
This is a quick skim-through reference for word- and character-level encoding in TensorFlow.
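To make the questions above concrete, here is a hand-rolled word-level encoder in plain Python (the `<PAD>`/`<UNK>` index conventions and whitespace splitting are my simplifying assumptions; the framework APIs below make exactly these kinds of choices for you):

```python
def build_vocab(texts):
    # reserve index 0 for padding and index 1 for out-of-vocabulary words
    vocab = {"<PAD>": 0, "<UNK>": 1}
    for text in texts:
        for token in text.split():
            vocab.setdefault(token, len(vocab))
    return vocab

def encode(text, vocab):
    return [vocab.get(token, vocab["<UNK>"]) for token in text.split()]

vocab = build_vocab(["Israel approves Arafat 's flight to West Bank ."])
print(encode("Israel denies flight .", vocab))  # 'denies' is OOV -> index 1
```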
```
from string import punctuation
import tensorflow as tf
import tensorflow_text
import tensorflow_datasets as tfds
```
Data is a sample from [CoNLL 2003](https://www.clips.uantwerpen.be/conll2003/ner/).
```
text_data = ["4. Kurt Betschart - Bruno Risi ( Switzerland ) 22",
"Israel approves Arafat 's flight to West Bank .",
"Moreau takes bronze medal as faster losing semifinalist .",
"W D L G / F G / A P",
"-- Helsinki newsroom +358 - 0 - 680 50 248",
"M'bishi Gas sets terms on 7-year straight ."]
ner_data = ["O B-PER I-PER O B-PER I-PER O B-LOC O O",
"B-LOC O B-PER O O O B-LOC I-LOC O",
"B-PER O O O O O O O O",
"O O O O O O O O O O",
"O B-LOC O O O O O O O O",
"B-ORG I-ORG O O O O O O"]
start_word, end_word, unknown_word = "<START>", "<END>", "<UNK>"
```
Three sets of APIs are explored:
- Tensorflow Dataset APIs
- Tensorflow Keras Text Preprocessing
- Tensorflow Text
For my current task, the Keras APIs met my requirements, i.e. word- and character-level tokenizing, encoding, and decoding.
Note: Decoding will be updated if I get time. :)
## 1. Tensorflow Dataset API
Like many, I started with Transformers from this tutorial, which uses the Tensorflow Dataset APIs.
https://www.tensorflow.org/tutorials/text/transformer
- The API is clean and easy to use.
- https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/Tokenizer
- https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/TextEncoder
- Here we need a Tokenizer and an Encoder separately.
Cons:
- For the task of preparing text for NER, we have to keep all special characters, which are ignored by default.
- Even if we add the `punctuation` characters as reserved tokens, it still removes the special characters while tokenizing.
```
text_tokenizer = tfds.features.text.Tokenizer(reserved_tokens=[start_word, end_word] + list(punctuation))
tags_tokenizer = tfds.features.text.Tokenizer(reserved_tokens=['B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-LOC',
'I-MISC', 'I-ORG', 'I-PER', 'O',
start_word, end_word])
text_vocabulary_set = set()
tags_vocabulary_set = set()
for text, ner in zip(text_data, ner_data):
text_tokens = text_tokenizer.tokenize(text)
tag_tokens = tags_tokenizer.tokenize(ner)
text_vocabulary_set.update(text_tokens)
tags_vocabulary_set.update(tag_tokens)
text_vocabulary_set.update([start_word, end_word])
tags_vocabulary_set.update([start_word, end_word])
text_encoder = tfds.features.text.TokenTextEncoder(text_vocabulary_set, oov_token=unknown_word, tokenizer=text_tokenizer)
tags_encoder = tfds.features.text.TokenTextEncoder(tags_vocabulary_set, oov_token=unknown_word, tokenizer=tags_tokenizer)
for token, id in text_encoder._token_to_id.items():
print(token,"--->", id+1) # Be default 0 is used PAD index
for token, id in tags_encoder._token_to_id.items():
print(token, "--->", id+1)
tags_encoder.vocab_size # i.e above tags + PAD + UNK
text_data[0]
ner_data[0]
res = text_encoder.encode(text_data[0])
res
for text_token, tag_token, id in zip(text_tokenizer.tokenize(text_data[0]), ner_data[0].split(" "), res):
print(text_token, tag_token, id)
```
**As you can see, "4." is split into "4" and "."**
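If you need punctuation kept inside tokens (as the NER alignment here requires), a minimal workaround is to tokenize on whitespace only. This is my own sketch, not part of the tfds API:

```python
# Split on whitespace so punctuation attached to a word (e.g. "4.") survives
# as a single token, keeping the word <-> NER-tag alignment intact.
def whitespace_tokenize(text):
    return text.split(" ")

tokens = whitespace_tokenize("4. Kurt Betschart - Bruno Risi ( Switzerland ) 22")
print(tokens[0])  # '4.' stays one token, matching the first NER tag 'O'
```

The pre-tokenized CoNLL-style data above is already space-separated, so this keeps one token per tag.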
# Keras API
- If you were patient enough to read this tutorial https://www.tensorflow.org/tutorials/text/nmt_with_attention, or if you love Keras, then your text preprocessing requirements are met.
- https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer?version=stable
## Word Encoding
```
def keras_tokenize(text_corpus, char_level=False, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n'):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters=filters, oov_token="<UNK>", char_level=char_level, lower=False)
lang_tokenizer.fit_on_texts(text_corpus)
return lang_tokenizer
text_word_tokenizer = keras_tokenize(text_data, filters='')
```
Let's print the word index for our data:
```
text_word_tokenizer.index_word
```
So, if you want to convert your text data into integers...
```
res = text_word_tokenizer.texts_to_sequences(text_data)
# Easy to use padding API from Keras
res = tf.keras.preprocessing.sequence.pad_sequences(res, padding='post')
res
```
To test how the out-of-vocab index is used, we can feed the tags to the text tokenizer ;) and see `1`s.
```
text_word_tokenizer.texts_to_sequences(ner_data)
ner_word_tokenizer = keras_tokenize(ner_data)
res = ner_word_tokenizer.texts_to_sequences(ner_data)
res = tf.keras.preprocessing.sequence.pad_sequences(res, padding="post")
res
ner_word_tokenizer.index_word
vocab_size = len(ner_word_tokenizer.word_index) + 1
vocab_size
```
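The note above says decoding is still TBD; Keras does ship `Tokenizer.sequences_to_texts` for this. The core lookup can be sketched in plain Python against the `index_word` mapping (the toy vocabulary below is hypothetical, not the one fitted above):

```python
# Decode integer sequences back to text via an index -> word mapping,
# mirroring what tf.keras's Tokenizer.sequences_to_texts does internally.
# Toy mapping for illustration only; index 0 is reserved for PAD and skipped.
index_word = {1: "<UNK>", 2: "Israel", 3: "approves", 4: "Arafat"}

def sequences_to_texts(sequences, index_word):
    return [" ".join(index_word.get(i, "<UNK>") for i in seq if i != 0)
            for seq in sequences]

print(sequences_to_texts([[2, 3, 4, 0, 0]], index_word))  # ['Israel approves Arafat']
```

With a fitted tokenizer you would call `text_word_tokenizer.sequences_to_texts(res)` directly.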
## Character Encoding
Character-level encoding is useful when you want to capture semantics below the word level, which can help you deal with out-of-vocab words, get a deeper understanding of words with respect to their position, etc.
```
text_char_tonkenizer = keras_tokenize(text_data, char_level=True)
text_data[0]
text_char_tonkenizer.index_word
# Care needs to be taken while using character-level tokenizing with OOV,
# since all the characters will be part of our vocab. OOV can still occur when we want to
# tokenize a different language or a different string encoding.
# split the text by spaces i.e list of list of words
char_data = [text.split(" ") for text in text_data]
print(char_data)
char_data_encoded = []
for char_seq in char_data:
# tokenize each sentence
res = text_char_tonkenizer.texts_to_sequences(char_seq)
# pad it
res = tf.keras.preprocessing.sequence.pad_sequences(res, padding="post", maxlen=6)
# group it as a batch
char_data_encoded.append(res)
char_data_encoded
```
# TF Text APIs
- https://github.com/tensorflow/text
- https://www.tensorflow.org/tutorials/tensorflow_text/intro
- https://blog.tensorflow.org/2019/06/introducing-tftext.html
The last one is the Tensorflow Text API. At first glance, it seems to have good integration with the Tensorflow Dataset APIs and Keras.
Since my current requirements are met with the Keras preprocessing APIs, I am keeping this for later exploration.
```
tokenizer = tensorflow_text.WhitespaceTokenizer()
tokens = tokenizer.tokenize(['everything not saved will be lost.', u'Sad☹'.encode('UTF-8')])
print(tokens.to_list())
text_tokens = tokenizer.tokenize(text_data)
text_tokens.values
```
```
import pickle
import numpy as np
class SeqDataset(object):
def __init__(self, ids, features, labels, groups, wordRanges, truePos):
'''
ids are ids of candidate sequences
each row of features is 13 features corresponding to the following:
feature_0: pred_end - pred_start so length of span -1
feature_1: normalized start position (normalized by number of words)
feature_2: normalized end position (normalized by number of words)
feature_4-10: 7 evenly spaced quantiles of the distribution of relevant class probabilities for this sequence
feature_11: The probability that words on either edge of the current sub-sequence belong to the class of interest
feature_12: The probability that the first word corresponds to a 'B'-egin token
labels are binary labels corresponding to whether the candidate sequence is an exact match to a true span
wordRanges are the start and end (inclusive on both sides) indices of the candidate sequence
truePos are binary labels corresponding to whether the candidate sequence would be considered a true positive (>0.5 overlap)
'''
self.features = np.array(features, dtype=np.float32)
self.labels = np.array(labels)
self.groups = np.array(groups, dtype=np.int16)
self.wordRanges = np.array(wordRanges, dtype=np.int16)
self.truePos = np.array(truePos)
self.ids=ids
import pandas as pd
disc_types = ['Evidence','Claim','Lead','Position','Counterclaim','Rebuttal','Concluding Statement']
dfs = []
folder= 'cache' #put pickle files in this folder
for fold in range(8):
with open(f'{folder}/valid_seqds_fold{fold}.p','rb') as f:
seqdataset=pickle.load(f)
for disc_type in disc_types:
x = seqdataset[disc_type]
df = pd.DataFrame()
df[[f"f_{i}" for i in range(x.features.shape[1])]] = x.features
df["id"] = x.ids
df["class"] = disc_type
df[["begin", "end"]] = x.wordRanges
df["kfold"] = fold
dfs.append(df)
len_features = x.features.shape[1]
oof_df = pd.concat(dfs)
print(oof_df.shape)
oof_df.head()
oof_df.sample(3, random_state=0).T
gt_df = pd.read_csv("../train_folds.csv")
print(gt_df.shape)
gt_df.head()
from tqdm import tqdm
ps = []
for begin, end in tqdm(list(zip(oof_df["begin"].values, oof_df["end"].values))):
#ps.append(" ".join([str(int(x)) for x in np.arange(begin, end)]))
ps.append(f"{begin} {end-1}")
oof_df["predictionstring"] = ps
#ps = []
# for begin, end in tqdm(list(zip(gt_df["begin"].values, gt_df["end"].values))):
# #ps.append(" ".join([str(int(x)) for x in np.arange(begin, end)]))
# ps.append(f"{begin} {end}")
# gt_df["predictionstring"] = ps
oof_df.head()
# from Rob Mulla @robikscube
# https://www.kaggle.com/robikscube/student-writing-competition-twitch
def calc_overlap(row):
"""
Calculates the overlap between prediction and
ground truth and overlap percentages used for determining
true positives.
"""
set_pred = set(row.predictionstring_pred.split(' '))
set_gt = set(row.predictionstring_gt.split(' '))
# Length of each and intersection
len_gt = len(set_gt)
len_pred = len(set_pred)
inter = len(set_gt.intersection(set_pred))
overlap_1 = inter / len_gt
overlap_2 = inter/ len_pred
return [overlap_1, overlap_2]
def score_feedback_comp(pred_df, gt_df):
"""
A function that scores for the kaggle
Student Writing Competition
Uses the steps in the evaluation page here:
https://www.kaggle.com/c/feedback-prize-2021/overview/evaluation
"""
gt_df = gt_df[['id','discourse_type','predictionstring']] .reset_index(drop=True).copy()
pred_df = pred_df[['id','class','predictionstring']] .reset_index(drop=True).copy()
pred_df['pred_id'] = pred_df.index
gt_df['gt_id'] = gt_df.index
# Step 1. all ground truths and predictions for a given class are compared.
joined = pred_df.merge(gt_df,
left_on=['id','class'],
right_on=['id','discourse_type'],
how='outer',
suffixes=('_pred','_gt')
)
joined['predictionstring_gt'] = joined['predictionstring_gt'].fillna(' ')
joined['predictionstring_pred'] = joined['predictionstring_pred'].fillna(' ')
joined['overlaps'] = joined.apply(calc_overlap, axis=1)
# 2. If the overlap between the ground truth and prediction is >= 0.5,
# and the overlap between the prediction and the ground truth >= 0.5,
# the prediction is a match and considered a true positive.
# If multiple matches exist, the match with the highest pair of overlaps is taken.
joined['overlap1'] = joined['overlaps'].apply(lambda x: eval(str(x))[0])
joined['overlap2'] = joined['overlaps'].apply(lambda x: eval(str(x))[1])
joined['potential_TP'] = (joined['overlap1'] >= 0.5) & (joined['overlap2'] >= 0.5)
joined['max_overlap'] = joined[['overlap1','overlap2']].max(axis=1)
tp_pred_ids = joined.query('potential_TP') .sort_values('max_overlap', ascending=False) .groupby(['id','predictionstring_gt']).first()['pred_id'].values
# 3. Any unmatched ground truths are false negatives
# and any unmatched predictions are false positives.
fp_pred_ids = [p for p in joined['pred_id'].unique() if p not in tp_pred_ids]
matched_gt_ids = joined.query('potential_TP')['gt_id'].unique()
unmatched_gt_ids = [c for c in joined['gt_id'].unique() if c not in matched_gt_ids]
# Get numbers of each type
TP = len(tp_pred_ids)
FP = len(fp_pred_ids)
FN = len(unmatched_gt_ids)
#calc microf1
my_f1_score = TP / (TP + 0.5*(FP+FN))
return my_f1_score
def calc_overlap_shujun(pred, gt):
"""
Calculates if the overlap between prediction and
    ground truth is enough for a potential true positive
"""
try:
g1=pred[1]+1-gt[0]
g2=gt[1]+1-pred[0]
l1=pred[1]-pred[0]+1
l2=gt[1]-gt[0]+1
#print(g1,g2)
if g1*g2>=0:
#g1=abs(g1)+1
#g2=abs(g2)+1
inter=min((g1,g2,l1,l2))#/max((g1,g2,l1,l2))
overlap_1=inter/l1
overlap_2=inter/l2
return overlap_1 >= 0.5 and overlap_2 >= 0.5
else:
return False
except:
return False
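# Worked example of the overlap rule above (spans inclusive on both ends;
# the values here are illustrative, not taken from the competition data):
# pred=(3,7) vs gt=(5,9) share min(g1,g2,l1,l2)=3 words, both span lengths are 5,
# so both directional overlaps are 3/5 = 0.6 >= 0.5 -> a potential true positive.
demo_pred, demo_gt = (3, 7), (5, 9)
demo_g1, demo_g2 = demo_pred[1] + 1 - demo_gt[0], demo_gt[1] + 1 - demo_pred[0]
demo_l1, demo_l2 = demo_pred[1] - demo_pred[0] + 1, demo_gt[1] - demo_gt[0] + 1
demo_inter = min(demo_g1, demo_g2, demo_l1, demo_l2)
print(demo_inter / demo_l1, demo_inter / demo_l2)  # 0.6 0.6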
def score_feedback_comp_micro_shujun(pred_df, gt_df, discourse_type):
"""
A function that scores for the kaggle
Student Writing Competition
Uses the steps in the evaluation page here:
https://www.kaggle.com/c/feedback-prize-2021/overview/evaluation
"""
gt_df = gt_df.loc[gt_df['discourse_type'] == discourse_type,
['id', 'predictionstring']].reset_index(drop=True)
pred_df = pred_df.loc[pred_df['class'] == discourse_type,
['id', 'predictionstring']].reset_index(drop=True)
pred_df['pred_id'] = pred_df.index
gt_df['gt_id'] = gt_df.index
pred_df['predictionstring'] = [(int(pred.split(' ')[0]),int(pred.split(' ')[-1])) for pred in pred_df['predictionstring']]
gt_df['predictionstring'] = [(int(pred.split(' ')[0]),int(pred.split(' ')[-1])) for pred in gt_df['predictionstring']]
# print(pred_df[pred_df['predictionstring']!=pred_df['predictionstring']])
# exit()
#gt_strings=
# Step 1. all ground truths and predictions for a given class are compared.
joined = pred_df.merge(gt_df,
left_on='id',
right_on='id',
how='outer',
suffixes=('_pred','_gt')
)
overlaps = [calc_overlap_shujun(*args) for args in zip(list(joined.predictionstring_pred),
list(joined.predictionstring_gt))]
# 2. If the overlap between the ground truth and prediction is >= 0.5,
# and the overlap between the prediction and the ground truth >= 0.5,
# the prediction is a match and considered a true positive.
# If multiple matches exist, the match with the highest pair of overlaps is taken.
# we don't need to compute the match to compute the score
TP = joined.loc[overlaps]['gt_id'].nunique()
# 3. Any unmatched ground truths are false negatives
# and any unmatched predictions are false positives.
TPandFP = len(pred_df)
TPandFN = len(gt_df)
#calc microf1
my_f1_score = 2*TP / (TPandFP + TPandFN)
return my_f1_score
def score_feedback_comp_shujun(pred_df, gt_df, return_class_scores=False):
class_scores = {}
for discourse_type in gt_df.discourse_type.unique():
class_score = score_feedback_comp_micro_shujun(pred_df, gt_df, discourse_type)
class_scores[discourse_type] = class_score
f1 = np.mean([v for v in class_scores.values()])
if return_class_scores:
return f1, class_scores
return f1
sample_df = oof_df[oof_df["f_7"] > 0.9999].reset_index(drop=True)
print(sample_df.shape)
score_feedback_comp_shujun(sample_df, gt_df, return_class_scores=True)
oof_df["idx"] = np.arange(oof_df.shape[0])
eval_df = oof_df[["idx", "id", "class", "predictionstring"]].merge(gt_df[["id", "discourse_type", "predictionstring"]].rename(columns={"predictionstring": "gt_ps",
"discourse_type": 'class'}),
how="left", on=["id", "class"])
eval_df.shape
eval_df.columns
def calc_overlap_shujun_min(pred, gt):
"""
Calculates if the overlap between prediction and
    ground truth is enough for a potential true positive
"""
try:
pred=[int(pred.split()[0]),int(pred.split()[-1])]
gt=[int(gt.split()[0]),int(gt.split()[-1])]
g1=pred[1]+1-gt[0]
g2=gt[1]+1-pred[0]
l1=pred[1]-pred[0]+1
l2=gt[1]-gt[0]+1
#print(g1,g2)
if g1*g2>=0:
#g1=abs(g1)+1
#g2=abs(g2)+1
inter=min((g1,g2,l1,l2))#/max((g1,g2,l1,l2))
overlap_1=inter/l1
overlap_2=inter/l2
#return overlap_1 >= 0.5 and overlap_2 >= 0.5
return min(overlap_1,overlap_2)
else:
return 0
except:
return 0
def calc_overlap(predictionstring, gt_ps):
set_pred = set(str(predictionstring).split(" "))
set_gt = set(str(gt_ps).split(" "))
# Length of each and intersection
len_gt = len(set_gt)
len_pred = len(set_pred)
inter = len(set_gt.intersection(set_pred))
overlap_1 = inter / len_gt
overlap_2 = inter / len_pred
return min(overlap_1, overlap_2)
overlap = []
for predictionstring, gt_ps in tqdm(list(zip(eval_df["predictionstring"].values, eval_df["gt_ps"].values))):
#break
overlap.append(calc_overlap_shujun_min(predictionstring, gt_ps))
eval_df["overlap"] = overlap
eval_df = eval_df.groupby("idx")["overlap"].max().reset_index()
eval_df.shape
eval_df.head()
eval_df.tail()
oof_df.head()["idx"], oof_df.tail()["idx"]
oof_df["overlap"] = eval_df["overlap"].values
oof_df["overlap"].fillna(0.0, inplace=True)
oof_df["overlap"].hist(bins=50)
oof_df.to_parquet(f"{folder}/new_oof_shujun_overlap_calc.parquet", index=False)
```
```
# importing libraries
import argparse
import os
import pickle
import logging
import boto3
import faiss
import pandas as pd
from tqdm import tqdm
from random import sample
########################################
# Sync data from S3
########################################
def sync_s3(file_name_list, s3_folder, local_folder):
for f in file_name_list:
print("file preparation: download src key {} to dst key {}".format(os.path.join(
s3_folder, f), os.path.join(local_folder, f)))
s3client.download_file(bucket, os.path.join(
s3_folder, f), os.path.join(local_folder, f))
default_bucket = 'sagemaker-us-east-1-002224604296'
default_mk_region = '1'
level_1 = 'recommender-system-film-mk'
# parser = argparse.ArgumentParser()
# parser.add_argument('--bucket', type=str, default=default_bucket)
# parser.add_argument('--mk-region', type=str, default=default_mk_region)
# args, _ = parser.parse_known_args()
bucket = default_bucket
mk_region = default_mk_region
prefix = f"{level_1}/{mk_region}"
print("bucket={}".format(bucket))
print("prefix='{}'".format(prefix))
s3client = boto3.client('s3')
local_folder = 'info'
if not os.path.exists(local_folder):
os.makedirs(local_folder)
# Load recall & rank results
file_name_list = ['recall_batch_result.pickle','rank_batch_result.pickle']
s3_folder = '{}/feature/recommend-list/movie'.format(prefix)
sync_s3(file_name_list, s3_folder, local_folder)
# Load user portrait data
file_name_list = ['portrait.pickle']
s3_folder = '{}/feature/recommend-list/portrait'.format(prefix)
sync_s3(file_name_list, s3_folder, local_folder)
# Pickle files of the inverted lists
file_name_list = ['movie_id_movie_property_dict.pickle',
'movie_category_movie_ids_dict.pickle',
'movie_director_movie_ids_dict.pickle',
'movie_actor_movie_ids_dict.pickle',
'movie_language_movie_ids_dict.pickle',
'movie_level_movie_ids_dict.pickle',
'movie_year_movie_ids_dict.pickle']
s3_folder = '{}/feature/content/inverted-list/'.format(prefix)
sync_s3(file_name_list, s3_folder, local_folder)
# Filter configuration
file_name_list = ['filter_config.pickle']
s3_folder = '{}/model/filter/'.format(prefix)
sync_s3(file_name_list, s3_folder, local_folder)
# Load the pickle files
file_to_load = open("info/movie_id_movie_property_dict.pickle", "rb")
dict_id_content = pickle.load(file_to_load)
print("length of movie_id v.s. movie_property {}".format(len(dict_id_content)))
file_to_load = open("info/movie_category_movie_ids_dict.pickle", "rb")
dict_category_id = pickle.load(file_to_load)
print("length of movie_category v.s. movie_ids {}".format(len(dict_category_id)))
file_to_load = open("info/movie_director_movie_ids_dict.pickle", "rb")
dict_director_id = pickle.load(file_to_load)
print("length of movie_director v.s. movie_ids {}".format(len(dict_director_id)))
file_to_load = open("info/movie_actor_movie_ids_dict.pickle", "rb")
dict_actor_id = pickle.load(file_to_load)
print("length of movie_actor v.s. movie_ids {}".format(len(dict_actor_id)))
file_to_load = open("info/movie_language_movie_ids_dict.pickle", "rb")
dict_language_id = pickle.load(file_to_load)
print("length of movie_language v.s. movie_ids {}".format(len(dict_language_id)))
file_to_load = open("info/movie_level_movie_ids_dict.pickle", "rb")
dict_level_id = pickle.load(file_to_load)
print("length of movie_level v.s. movie_ids {}".format(len(dict_level_id)))
file_to_load = open("info/movie_year_movie_ids_dict.pickle", "rb")
dict_year_id = pickle.load(file_to_load)
print("length of movie_year v.s. movie_ids {}".format(len(dict_year_id)))
file_to_load = open("info/portrait.pickle", "rb")
user_portrait = pickle.load(file_to_load)
print("length of user_portrait {}".format(len(user_portrait)))
# Load the filter config
file_to_load = open("info/filter_config.pickle", "rb")
filter_config = pickle.load(file_to_load)
print("length of filter_config {}".format(len(filter_config)))
# Load the recall results
file_to_load = open("info/recall_batch_result.pickle", "rb")
dict_recall_result = pickle.load(file_to_load)
# Load the rank results
file_to_load = open("info/rank_batch_result.pickle", "rb")
dict_rank_result = pickle.load(file_to_load)
# Result format design:
# item_id | recall_type | recall_score | rank_type | rank_score | filter_type | filter_score
# recall_type: [timing]_[method]_[position]
# [timing]: batch/online
# [method]: category/director/actor/language/level/year/review/photo/ub/portrai_xxx
# [position]: integer [0-xxx]
# recall_score: recall score, float
# rank_type: [timing]_[method]_[position]
# [timing]: batch/online
# [data source]: action/portrait
# [method]: deepfm/xgboost
# [position]: integer [0-xxx]
# rank_score: ranking score, float
# filter_type: [timing]_[method]_[position]
# [timing]: batch/online
# [method]: recommend/coldstart/disparity
# [position]: integer [0-xxx]
# filter_score: filter score, float
def get_dict_pos(key, dict_var):
return list(dict_var.keys()).index(key)
def calc_filter_score(recall_score, rank_score, recall_mt=None, rank_mt=None, recall_pos=None, rank_pos=None):
filter_score = min(1.0, recall_score/40.0 + rank_score)
return round(filter_score,2)
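# Intuition for the blend above (illustrative values, my own sanity check):
# recall contributes up to recall_score/40 and the sum is capped at 1.0.
print(round(min(1.0, 20.0 / 40.0 + 0.3), 2))  # 0.8
print(round(min(1.0, 40.0 / 40.0 + 0.9), 2))  # 1.0 (capped)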
def mt_construct(timing, mt, pos):
type_list = []
type_list.append(str(timing))
type_list.append(str(mt))
type_list.append(str(pos))
type_name = '_'.join(type_list)
return type_name
def sort_and_fill_pos(filter_result):
sort_filter_result = dict(
sorted(filter_result.items(), key=lambda item: item[1][2], reverse=True))
filter_pos = 0
update_filter_result = dict()
for filter_id, filter_content in sort_filter_result.items():
current_trace = filter_content[3]
current_trace_split_list = current_trace.split('|')
current_filter_type = current_trace_split_list[4]
current_filter_type_split_list = current_filter_type.split('_')
update_filter_type_split_list = current_filter_type_split_list
update_filter_type_split_list[2] = str(filter_pos)
update_filter_type = '_'.join(update_filter_type_split_list)
update_trace_split_list = current_trace_split_list
update_trace_split_list[-2] = update_filter_type
update_trace = '|' .join(update_trace_split_list)
update_filter_content = filter_content
update_filter_content[3] = update_trace
# print("update id {} trace {} type {}".format(filter_id, update_trace,update_filter_type_split_list))
update_filter_result[str(filter_id)] = update_filter_content
# update filter pos
filter_pos = filter_pos + 1
def initial_diversity(stats_result, filter_config):
for cate in filter_config['category']:
stats_result[cate] = 0
def category_diversity_logic(filter_result, stats_result, dict_category_id, filter_config):
diversity_count = filter_config['category_diversity_count']
min_category = None
min_category_count = 999
candidate_category_list = []
for cate, count in stats_result.items():
if count < min_category_count and count != 0:
min_category_count = count
min_category = cate
elif count == 0:
candidate_category_list.append(cate)
if min_category != None:
candidate_category_list.append(min_category)
diversity_result_list = []
diversity_result_content_list= []
current_diversity_count = 0
filter_result_list = list(filter_result.keys())
filter_result_content_list = list(filter_result.values())
sample_try = 0
catch_count = 0
while catch_count < diversity_count:
for cate in candidate_category_list:
sample_try = sample_try + 1
            candidate_id = sample(dict_category_id[str(cate)], 1)[0]  # sample() returns a list; take the element
if candidate_id in filter_result_list:
continue
else:
filter_result_list.append(str(candidate_id))
filter_result_content_list.append([str(candidate_id), 'diversity', 0.0, 'batch_diversity_{}|{}'.format(len(filter_result_list),cate)])
catch_count = catch_count + 1
if catch_count >= diversity_count:
break
if sample_try > 5*diversity_count:
logging.error("fail to find enough diversity candidate, need to find {} but only find {}".format(diversity_count, catch_count+1))
break
update_filter_result = dict(zip(filter_result_list, filter_result_content_list))
return update_filter_result
# Dedup/statistics within the same batch
# Run timing
run_timing = 'batch'
dict_filter_result = {}
for user_id, recall_result in dict_recall_result.items():
# print("user id {}".format(user_id))
current_user_result = {}
current_diversity_result = {}
initial_diversity(current_diversity_result, filter_config)
for recall_id, recall_property in recall_result.items():
# print("item id {} recall_property {}".format(recall_id, recall_property))
        # Build recall_type
recall_type = mt_construct(run_timing, recall_property[1], recall_property[2])
        # Build recall_score
recall_score = round(recall_property[3],2)
        # Build rank_type
rank_pos = str(get_dict_pos(int(recall_id), dict_rank_result[str(user_id)]))
rank_type = mt_construct(run_timing, 'deepfm', rank_pos)
        # Build rank_score
rank_score = round(dict_rank_result[str(user_id)][int(recall_id)],2)
        # Build filter_type
filter_type = mt_construct(run_timing, 'recommend', 'TBD')
        # Build filter_score
filter_score = calc_filter_score(recall_score, rank_score)
# print("{}|{}|{}|{}|{}|{}".format(recall_type,recall_score,rank_type,rank_score))
# break
recommend_trace = "{}|{}|{}|{}|{}|{}".format(recall_type,recall_score,rank_type,rank_score,filter_type,filter_score)
current_user_result[str(recall_id)]=[]
current_user_result[str(recall_id)].append(str(recall_id))
current_user_result[str(recall_id)].append('recommend')
current_user_result[str(recall_id)].append(filter_score)
current_user_result[str(recall_id)].append(recommend_trace)
        # Update diversity statistics
current_category = dict_id_content[str(recall_id)]['category']
for cate in current_category:
if cate is not None:
current_diversity_result[cate] = current_diversity_result[cate] + 1
    # Re-sort by filter score
sort_and_fill_pos(current_user_result)
update_user_result = category_diversity_logic(current_user_result, current_diversity_result, dict_category_id, filter_config)
dict_filter_result[str(user_id)] = update_user_result
filter_config = {}
filter_config['category'] = list(dict_category_id.keys())
filter_config['category_diversity_count'] = 5
file_name = 'info/filter_config.pickle'
output_file = open(file_name, 'wb')
pickle.dump(filter_config, output_file)
output_file.close()
!aws s3 cp info/filter_config.pickle s3://sagemaker-us-east-1-002224604296/recommender-system-film-mk/1/model/filter/
n = 0
for k,v in dict_filter_result.items():
print("key {} and value {}".format(k,v))
if n > 2:
break
n = n + 1
n = 0
for k,v in dict_rank_result.items():
print("key {} and value {}".format(k,v))
if n > 2:
break
n = n + 1
!python filter-batch.py
```
# Utilizing existing FAQs for Question Answering
[](https://colab.research.google.com/github/deepset-ai/haystack/blob/master/tutorials/Tutorial4_FAQ_style_QA.ipynb)
While *extractive Question Answering* works on pure texts and is therefore more generalizable, there's also a common alternative that utilizes existing FAQ data.
**Pros**:
- Very fast at inference time
- Utilize existing FAQ data
- Quite good control over answers
**Cons**:
- Generalizability: We can only answer questions that are similar to existing ones in FAQ
In some use cases, a combination of extractive QA and FAQ-style can also be an interesting option.
### Prepare environment
#### Colab: Enable the GPU runtime
Make sure you enable the GPU runtime to experience decent speed in this tutorial.
**Runtime -> Change Runtime type -> Hardware accelerator -> GPU**
<img src="https://raw.githubusercontent.com/deepset-ai/haystack/master/docs/_src/img/colab_gpu_runtime.jpg">
```
# Make sure you have a GPU running
!nvidia-smi
# Install the latest release of Haystack in your own environment
#! pip install farm-haystack
# Install the latest master of Haystack
!pip install grpcio-tools==1.34.1
!pip install git+https://github.com/deepset-ai/haystack.git
# If you run this notebook on Google Colab, you might need to
# restart the runtime after installing haystack.
from haystack.document_stores import ElasticsearchDocumentStore
from haystack.nodes import EmbeddingRetriever
import pandas as pd
import requests
```
### Start an Elasticsearch server
You can start Elasticsearch on your local machine instance using Docker. If Docker is not readily available in your environment (e.g., in Colab notebooks), then you can manually download and execute Elasticsearch from source.
```
# Recommended: Start Elasticsearch using Docker via the Haystack utility function
from haystack.utils import launch_es
launch_es()
# In Colab / No Docker environments: Start Elasticsearch from source
! wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q
! tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz
! chown -R daemon:daemon elasticsearch-7.9.2
import os
from subprocess import Popen, PIPE, STDOUT
es_server = Popen(['elasticsearch-7.9.2/bin/elasticsearch'],
stdout=PIPE, stderr=STDOUT,
preexec_fn=lambda: os.setuid(1) # as daemon
)
# wait until ES has started
! sleep 30
```
### Init the DocumentStore
In contrast to Tutorial 1 (extractive QA), we:
* specify the name of our `text_field` in Elasticsearch that we want to return as an answer
* specify the name of our `embedding_field` in Elasticsearch where we'll store the embedding of our question and that is used later for calculating our similarity to the incoming user question
* set `excluded_meta_data=["question_emb"]` so that we don't return the huge embedding vectors in our search results
```
from haystack.document_stores import ElasticsearchDocumentStore
document_store = ElasticsearchDocumentStore(host="localhost", username="", password="",
index="document",
embedding_field="question_emb",
embedding_dim=384,
excluded_meta_data=["question_emb"])
```
### Create a Retriever using embeddings
Instead of retrieving via Elasticsearch's plain BM25, we want to use vector similarity of the questions (user question vs. FAQ ones).
We can use the `EmbeddingRetriever` for this purpose and specify a model that we use for the embeddings.
```
retriever = EmbeddingRetriever(document_store=document_store, embedding_model="sentence-transformers/all-MiniLM-L6-v2", use_gpu=True)
```
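Under the hood, FAQ-style retrieval ranks the stored questions by vector similarity to the incoming query. A toy sketch of that idea in plain Python (the vectors below are made up, not real sentence-transformer embeddings):

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical 3-d "embeddings" of stored FAQ questions
faq = {
    "How does the virus spread?": [0.9, 0.1, 0.0],
    "What are the symptoms?": [0.1, 0.9, 0.2],
}
query_emb = [0.8, 0.2, 0.1]  # embedding of the incoming user question

best = max(faq, key=lambda q: cosine(query_emb, faq[q]))
print(best)  # 'How does the virus spread?'
```

The `EmbeddingRetriever` does the same ranking at scale against the embeddings stored in Elasticsearch.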
### Prepare & Index FAQ data
We create a pandas dataframe containing some FAQ data (i.e. curated pairs of question + answer) and index them in Elasticsearch.
Here: We download some question-answer pairs related to COVID-19
```
# Download
temp = requests.get("https://raw.githubusercontent.com/deepset-ai/COVID-QA/master/data/faqs/faq_covidbert.csv")
open('small_faq_covid.csv', 'wb').write(temp.content)
# Get dataframe with columns "question", "answer" and some custom metadata
df = pd.read_csv("small_faq_covid.csv")
# Minimal cleaning
df.fillna(value="", inplace=True)
df["question"] = df["question"].apply(lambda x: x.strip())
print(df.head())
# Get embeddings for our questions from the FAQs
questions = list(df["question"].values)
df["question_emb"] = retriever.embed_queries(texts=questions)
df = df.rename(columns={"question": "content"})
# Convert Dataframe to list of dicts and index them in our DocumentStore
docs_to_index = df.to_dict(orient="records")
document_store.write_documents(docs_to_index)
```
### Ask questions
Initialize a Pipeline (this time without a reader) and ask questions
```
from haystack.pipelines import FAQPipeline
pipe = FAQPipeline(retriever=retriever)
from haystack.utils import print_answers
prediction = pipe.run(query="How is the virus spreading?", params={"Retriever": {"top_k": 10}})
print_answers(prediction, details="medium")
```
## About us
This [Haystack](https://github.com/deepset-ai/haystack/) notebook was made with love by [deepset](https://deepset.ai/) in Berlin, Germany
We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our other work:
- [German BERT](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](https://www.deepset.ai/jobs)
```
from google.colab import drive
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow import keras
from keras import layers
from keras import models
from keras import optimizers
from keras.layers import Input, Dense, Activation, Flatten, Conv2D
from keras.layers import MaxPooling2D, Dropout, ZeroPadding2D, BatchNormalization
from keras.models import Model, load_model
from keras.preprocessing.image import ImageDataGenerator, image
tf.test.gpu_device_name()
```
# Data Preparing
```
original_dataset_dir = '/content/drive/My Drive/Colab Notebooks/Chest_Xray/DATA'
base_dir = '/content/Chest_Xray'
os.mkdir(base_dir)
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
train_Normal_dir = os.path.join(train_dir, 'Normal')
os.mkdir(train_Normal_dir)
train_Sick_dir = os.path.join(train_dir, 'Sick')
os.mkdir(train_Sick_dir)
test_Normal_dir = os.path.join(test_dir, 'Normal')
os.mkdir(test_Normal_dir)
test_Sick_dir = os.path.join(test_dir, 'Sick')
os.mkdir(test_Sick_dir)
fnames = ['Normal.{}.jpeg'.format(i) for i in range(1341)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_Normal_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['Normal.{}.jpeg'.format(i) for i in range(1341, 1574)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_Normal_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['Sick.{}.jpeg'.format(i) for i in range(3874)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_Sick_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['Sick.{}.jpeg'.format(i) for i in range(3874, 4264)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_Sick_dir, fname)
    shutil.copyfile(src, dst)
print('total training Normal images:', len(os.listdir(train_Normal_dir)))
print('total training Sick images:', len(os.listdir(train_Sick_dir)))
print('total test Normal images:', len(os.listdir(test_Normal_dir)))
print('total test Sick images:', len(os.listdir(test_Sick_dir)))
train_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    class_mode = 'binary',
    color_mode = "grayscale",
    target_size = (200, 200),
    batch_size = 16)
test_generator = test_datagen.flow_from_directory(
    test_dir,
    class_mode = 'binary',
    color_mode = "grayscale",
    target_size = (200, 200),
    batch_size = 16)
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
```
# Model Creation and Assigning.
```
def xray(input_shape):
    # Placeholder for the X_input.
    X_input = Input(input_shape)
    X = X_input
    # (Conv2D + BN + ReLU + Dropout + MaxPooling) x 10
    X = Conv2D(32, (3, 3), input_shape=(200, 200, 1), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.6)(X)
    X = MaxPooling2D((2, 2))(X)
    X = Conv2D(32, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.6)(X)
    X = MaxPooling2D((2, 2))(X)
    X = Conv2D(64, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.6)(X)
    X = MaxPooling2D((2, 2))(X)
    X = Conv2D(64, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.6)(X)
    X = MaxPooling2D((2, 2))(X)
    X = Conv2D(128, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.6)(X)
    X = MaxPooling2D((2, 2))(X)
    X = Conv2D(128, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.5)(X)
    X = MaxPooling2D((2, 2), padding = "same")(X)
    X = Conv2D(256, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.5)(X)
    X = MaxPooling2D((2, 2), padding = "same")(X)
    X = Conv2D(256, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.5)(X)
    X = MaxPooling2D((2, 2), padding = "same")(X)
    X = Conv2D(512, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.5)(X)
    X = MaxPooling2D((2, 2), padding = "same")(X)
    X = Conv2D(512, (3, 3), padding = "same")(X)
    X = BatchNormalization(axis = -1)(X)
    X = Activation('relu')(X)
    X = Dropout(0.5)(X)
    X = MaxPooling2D((2, 2), padding = "same")(X)
    X = Flatten()(X)
    # (FC + Dropout) x 2
    X = Dense(16, activation = 'relu')(X)
    X = Dropout(0.4)(X)
    X = Dense(32, activation = 'relu')(X)
    X = Dropout(0.3)(X)
    # Sigmoid activation
    X = Dense(1, activation = 'sigmoid')(X)
    # Model creation
    model = Model(inputs = X_input, outputs = X, name='xray')
    return model
Xray = xray(input_shape = (200, 200, 1)) #Assigning the model
```
# Model Compiling, Training, and Testing
```
Xray.compile(loss = 'binary_crossentropy',
optimizer = 'Adam',
metrics = ['acc'])
STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size #Determining the step size == (number of samples)/(batch size)
Xray.fit_generator(generator=train_generator, # Model training
                   steps_per_epoch=STEP_SIZE_TRAIN,
                   epochs = 30)
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size #Determining the step size == (number of samples)/(batch size)
test_generator.reset()
scores = Xray.evaluate_generator(test_generator, # Model evaluation: evaluate_generator returns [loss, accuracy]; predict_generator would return raw per-sample predictions instead
                                 steps=STEP_SIZE_TEST,
                                 verbose=1)
print ("Loss = " + str(scores[0]))
print ("Test Accuracy = " + str(scores[1]))
```
# Model Saving, Loading, and Summarizing.
```
Xray.save('Xray.h5') #Saving the model (architecture + weights) as an h5 file.
Xray = load_model('Xray.h5') # Only if there is already a trained model !
Xray.summary()
```
# Test Your Own Images :)
```
from google.colab import files #Test your own images !
uploaded = files.upload() #Upload an image from your dir.
for name, data in uploaded.items():
    with open(name, 'wb') as f:
        f.write(data)
    print ('saved file', name)
from matplotlib.pyplot import imshow
from keras.applications.imagenet_utils import preprocess_input
img_path = '/content/' + name #Uses the image uploaded by the previous cell.
#img_path = '/content/' + '350' + '.jpg' #Uncomment if you want to choose the image manually.
img = image.load_img(img_path, color_mode='grayscale', target_size=(200, 200))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
if Xray.predict(x) < 0.5 : # sigmoid output below 0.5 -> class 0 (Normal)
    print("The patient is Pneumonia negative")
else :
    print("The patient is Pneumonia positive !")
```
| github_jupyter |
# Laboratory 03 - Introduction to Digital Data Acquisition, FFT, and Spectrum Analysis 2
## MAE 3120, Spring 2020
## Grading Rubric
Procedures, Results, Plots, Tables - 60%
Discussion Questions - 25%
Neatness - 15%
## Introduction and Background
Prior to the 1980s, the oscilloscope and strip-chart recorder represented the most common methods for measurement of time-varying signals. With time PC-based digital data acquisition became standard in most laboratories. By combining high-speed data acquisition cards with graphical software, it is now possible to design complex data acquisition systems with real-time data analysis and plotting features, with minimal programming. The data acquisition hardware converts analog inputs into the digital domain at the specified sampling rate, and the software manipulates and displays the desired output.
In this lab we use Python and the NI-DAQmx API for digital data acquisition. Using the ***DAQ*** Jupyter Notebook developed for this class, instructions are issued to the data acquisition hardware, either inside the PC or external to the PC (the hardware we use in our lab is connected through the USB port). The ***DAQ*** can be configured to record data to files, change sampling parameters, and display a live output of your sampled signal.
The goal of this tutorial is to provide you with your first experience using the ***DAQ*** notebook to perform data acquisition. You will use the ***DAQ*** to take samples and plot voltage data and to illustrate some limitations of digital data acquisition systems.
To help verify that you have configured the ***DAQ*** properly before performing trials, you will learn how to use ***NI MAX*** (a software provided by National Instruments).
Ultimately, you will experiment with digital data acquisition and some of its shortcomings. For your report you are expected to save all the data you will acquire in the lab to files and plot them in Python.
___
In spectral analysis the goal is to determine the frequency content of a signal. Aliasing can be a serious problem with digital data acquisition if the experimenter is not careful. Significant measurement errors called ***aliasing*** errors are possible if the waveform is not sampled at high enough frequency. To avoid aliasing, the ***sampling rate*** must be at least twice the maximum frequency of the measured signal. This restriction is called the ***Nyquist criterion***. Signal aliasing occurs when waveforms are sampled at frequencies below the Nyquist frequency. Aliased signals appear to have frequencies (and possibly even waveform *shapes*) that differ from those of the actual signal. For adequate resolution of the waveform shape, data should be sampled at a much higher frequency – typically at least five times the Nyquist frequency, if possible.
Digital PC-based data acquisition will not totally replace oscilloscopes, at least not in the near future. The reason is sampling frequency. The maximum sampling frequency of modern PC A/D systems is typically less than a MHz (megahertz). By comparison, a good digital oscilloscope may sample as high as several GHz (gigahertz)!
The fast Fourier transform (FFT) is a computationally efficient form of the more general discrete Fourier transform (DFT), which is itself a discretized version of the even more general Fourier transform (FT). Like Fourier series analysis, FFT analysis enables us to calculate the frequency content of a signal. Fourier series analysis is useful for continuous, periodic, analog signals of known fundamental frequency. FFT analysis, on the other hand, is useful for discretely sampled (digital) data, and can be applied even if the signal is not periodic. With FFT analysis, the fundamental frequency of a periodic signal does not have to be known a priori. NumPy has built-in FFT features, which are utilized in this lab.
For $N$ sampled data points at sampling frequency $f_s$, the most useful output of an FFT calculation is the frequency spectrum or amplitude spectrum, which is a plot of modified FFT amplitude versus frequency. The frequency spectrum shows the relative importance or contribution of discrete frequencies, which range from zero to $f_s\,/\,2$. (The factor of two is a direct result of the Nyquist criterion.) The number of discrete frequencies on the frequency spectrum plot is $N\,/\,2 + 1$. This is half of the number of discretely sampled data points in the original signal, plus one extra since we typically plot both extreme values – from zero Hz (DC component) to the folding frequency $f_\textit{folding}$.
Here are some useful definitions for FFTs:
- $N$ is the ***total number of discrete data points*** taken. $N$ is an input parameter, chosen by the user.<br><p></p>
- $f_s$ is the ***sampling frequency***, in Hz. $f_s$ is an input parameter, chosen by the user. *All other properties of the FFT, including sampling time, maximum frequency, frequency resolution, etc., are determined solely from these two inputs, $N$ and $f_s$.*<br><p></p>
- $T$ is the ***total sampling time***, and is calculated as $T = N\,/\,f_s$. To increase the sampling time, we must either *increase* the number of data points, or *decrease* the sampling frequency (or both).<br><p></p>
- $f_\textit{folding}$ is the ***folding frequency***, also called $f_\textit{max}$, the ***maximum frequency***. $f_\textit{folding} = f_s\,/\,2$. $f_\textit{folding}$ is the maximum frequency plotted on the frequency spectrum plot, since $f_\textit{folding}$ is the maximum frequency at which reliable information about the signal can be calculated, due to the Nyquist criterion. The only way to increase $f_\textit{folding}$ is to increase the sampling frequency.<br><p></p>
- $\Delta f$ is the ***frequency resolution*** or ***frequency increment*** of the frequency spectrum. $\Delta f = 1\,/\,T = f_s\,/\,N$. On the frequency spectrum plot, amplitudes of the FFT are plotted at $N\,/\,2 + 1$ discrete frequencies, each separated by $\Delta f$. In other words, the discrete values of $f$ are $0$, $\Delta f$, $2 \Delta f$, $3 \Delta f$, ... , $(N\,/\,2 - 1)\,\Delta f$. (The amplitude at exactly $f_\textit{folding}$, i.e., at $(N\,/\,2)\,\Delta f$, is also plotted; this results in a total of $(N\,/\,2) + 1$ discrete frequencies, counting both $f = 0$ and $f = f_\textit{folding}$.) The *only* way to increase the frequency resolution is to increase the sampling time.<br><p></p>
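Because $N$ and $f_s$ determine every other FFT property, these quantities can be checked with a few lines of Python (an illustrative sketch, separate from the ***DAQ*** notebook; the values shown are the "perfect FFT" settings used later in Part II):

```python
N = 512        # total number of discrete data points
fs = 25.6      # sampling frequency, Hz

T = N / fs             # total sampling time: ~20 s
f_folding = fs / 2     # folding (maximum) frequency: 12.8 Hz
delta_f = fs / N       # frequency resolution: 0.05 Hz

# With these settings a 10 Hz sine falls exactly on a bin,
# since 10 / delta_f = 200 is an integer
print(T, f_folding, delta_f)
```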
Here is a summary of some useful techniques and rules to remember when calculating FFTs:
- To get better frequency resolution for a fixed sampling frequency, increase the number of data points.<br><p></p>
- To get better frequency resolution for a fixed number of data points, decrease the sampling frequency. (But be careful here not to let $f_s$ fall below the Nyquist criterion limit).<br><p></p>
- To get frequency component information at higher frequencies, increase the sampling frequency.<br><p></p>
- To reduce ***leakage*** in the frequency spectrum, do one or more of the following:<br><p></p>
- Increase the number of sampled data points $N$ (at the cost of more computer time).<br><p></p>
- Decrease the sampling frequency $f_s$ (but do not sample at such a low frequency that the Nyquist criterion is violated).<br><p></p>
- Multiply the time signal by a ***windowing*** function prior to taking the FFT (at the cost of throwing away a significant portion of the signal, in particular data points near the start and finish of the time trace).
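To see why these rules help, the leakage effect itself can be reproduced with NumPy alone (an illustrative sketch, separate from the ***DAQ*** notebook; the two parameter sets are the ones used in Part II):

```python
import numpy as np

def amplitude_spectrum(x, fs):
    """One-sided amplitude spectrum: a unit-amplitude sine shows up as ~1."""
    N = len(x)
    X = np.abs(np.fft.rfft(x)) * 2 / N
    freqs = np.fft.rfftfreq(N, d=1/fs)
    return freqs, X

f0 = 10.0  # signal frequency, Hz

# "Perfect" case: fs = 25.6 Hz, N = 512 -> delta_f = 0.05 Hz and 10 / 0.05 = 200,
# so the signal falls exactly on bin 200 and all of its energy stays in one bin
t = np.arange(512) / 25.6
freqs, X = amplitude_spectrum(np.sin(2 * np.pi * f0 * t), 25.6)

# Leakage case: fs = 25 Hz, N = 256 -> delta_f ~ 0.098 Hz; 10 Hz falls between
# bins, so the spike is smeared over its neighbors and its peak drops
t2 = np.arange(256) / 25.0
freqs2, X2 = amplitude_spectrum(np.sin(2 * np.pi * f0 * t2), 25.0)
```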
## Objectives
- Practice data acquisition with digital data acquisition systems.<br><p></p>
- Learn a simple way to sum two voltage signals.<br><p></p>
- Examine the effect of aliasing.
## Equipment
- Computer<br><p></p>
- Software: NI MAX, Jupyter<br><p></p>
- Hardware: National Instrument CompactDAQ cDAQ-9174, NI-9201 C Series Voltage Input Module <br><p></p>
- Function/waveform generator, along with appropriate cables<br><p></p>
- Oscilloscope<br><p></p>
# Procedure
### Part I - Discrete Data Acquisition
Here you will demonstrate that the digital data acquisition system acquires data at discrete times, transforming the original continuous (analog) signal into a discrete (digital) signal. A significant amount of theory regarding digital data acquisition and signal processing will be introduced in the coming lectures and labs.
1. Using a BNC T-Adapter and a BNC cable, connect the waveform/function generator to the first channel of the oscilloscope. <br><p></p>
- Using a pair of output wires, connect the waveform/function generator to the first channel of the *NI 9201* module. The positive output should be connected to `AI0` and the negative to `COM`. <br><p></p>
- Power on the oscilloscope. Configure the function generator to produce a 20 Hz sine wave, 0V DC Offset, 1V peak-to-peak amplitude. Check that the signal on the oscilloscope matches the generated signal. Don't forget to select the appropriate impedance on the waveform generator; you can find it under "output load". <br><p></p>
- Using the `acquire` function in the third cell of the ***DAQ*** notebook, set the sampling rate to 100,000 Hz and the number of samples to 10,000. You are acquiring 100 ms of data, which corresponds to two waveforms and 5,000 points per waveform. <br><p></p>
- Verify the signal using the oscilloscope. Run the `acquire` function with no file output to test that the function is working properly. <br><p></p>
- To have a 'live' output in Jupyter, use this line of code: `acquire(120 * fs, fs, time_sep=1, zero_bound=False)`. When you are done observing the output, click the *Stop* button in Jupyter. <br><p></p>
- Once you are comfortable with the acquisition, set a file output and save one run of the data for your report. Remember that you can save your data using the `acquire` function (e.g., `acquire(100, 1000, file_out='C:\\Users\\Josh\\Downloads\\Lab2_0.csv', output='N')`). <br><p></p>
- Decrease the data acquisition rate until the signal displayed on the graph starts to look “pixelated”. To save the same duration of data, additionally adjust the number of samples so that you acquire 100 ms (two waveforms). Use the following frequencies and save each case to a file for use in your report:
- 10,000 Hz<br><p></p>
- 1,000 Hz<br><p></p>
- 100 Hz<br><p></p>
- In your report discuss the appearance of your recorded waveforms.
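As a sanity check on the acquisition parameters above, the number of samples needed for 100 ms at each rate can be computed directly (an illustrative sketch; only the arithmetic is shown, not the `acquire` call):

```python
duration = 0.1  # 100 ms, i.e. two periods of the 20 Hz sine

sample_counts = {fs: round(duration * fs) for fs in (100_000, 10_000, 1_000, 100)}
for fs, n in sample_counts.items():
    print(f"fs = {fs:>7} Hz -> {n} samples for 100 ms")
```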
### Part II - Leakage effect
1. Remember to be organized when acquiring data. Generate a table that includes all the runs and associated parameters you will acquire in this lab and report this matrix of experiments in your lab report.<br><p></p>
- Using the `acquire` function, set `plt_sp_fft` to `True`. This will display the frequency plot when a FFT is applied to your data. Additionally, set the `time_sep` to `120` to prevent the graph from updating.<br><p></p>
- Set a 10 Hz sine wave, 0V DC Offset, and 2V peak-to-peak amplitude on the waveform generator. Keep monitoring your signal on the oscilloscope. For each of the steps below, save the time history signal to file. <br><p></p>
- Use Python to recreate each spectrum. In your report you are expected to report both the time series and frequency spectra. <br><p></p>
- Set $f_s = 200\text{ Hz}$, $N = 256$.<br><p></p>
- To try to reduce the leakage, try first to increase the sampling rate. Set $f_s = 1000\text{ Hz}$, $N = 256$.<br><p></p>
- Try the following settings. Set $f_s = 25\text{ Hz}$, $N = 256$.<br><p></p>
- Try the following settings. Set $f_s = 25\text{ Hz}$, $N = 512$. What can you conclude about the spectral accuracy of our system?<br><p></p>
- Finally, try the following settings. Set $f_s = 25.6\text{ Hz}$, $N = 512$. This corresponds to a “perfect FFT”, can you think why?
### Part III - Windowing of FFT
Keep the same parameters as above for the signal generator. Keep monitoring your signal on the oscilloscope.<br><p></p>
1. Using the `acquire` function, set `han_window` to `True`. This will apply a Hanning windowing function. It has the following formula and appearance:<br><p></p>
$$u_\textit{Hanning}(t) = \frac{1}{2} \left(1 - \cos \frac{2 \pi t}{T}\right)$$
<img src="img/Hanning.png" width=480>
2. Redo the measurements from *Part II* and save data for each condition. Do you observe any improvement?<br><p></p>
- Add a 1V DC Offset and redo the measurements. What do you observe?<br><p></p>
- By now you should know how to optimize the spectral response of a system. <br><p></p>
- Now create triangular waves of similar frequency and select the proper sampling rate, period, and windowing. How many harmonics do you observe?
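NumPy's built-in `np.hanning` implements the windowing function above, so its effect can be previewed offline (an illustrative sketch; the leakage settings from Part II are assumed, and the windowed spectrum is rescaled by the window's mean value, its coherent gain, to keep amplitudes comparable):

```python
import numpy as np

fs, N, f0 = 25.0, 256, 10.0             # the leakage case from Part II
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)

w = np.hanning(N)                        # 0.5 * (1 - cos(2*pi*n / (N-1)))
xw = x * w

# One-sided amplitude spectra; the windowed spectrum is rescaled by the
# window's mean (coherent gain ~0.5) so amplitudes stay comparable
X_rect = np.abs(np.fft.rfft(x)) * 2 / N
X_hann = np.abs(np.fft.rfft(xw)) * 2 / N / w.mean()
freqs = np.fft.rfftfreq(N, d=1/fs)
# Result: a slightly wider main lobe around 10 Hz, but far lower side lobes
```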
### Part IV - Clipping
1. Set the function generator to produce a 100 Hz sine wave, 0V DC Offset, and 5V peak-to-peak amplitude. Check the signal using the oscilloscope. <br><p></p>
- Set the sampling rate to 10,000 Hz and the number of samples to 1000 in order to record 100 ms of data (10 full waveforms). Acquire one trial of data and save it to a file using the `acquire` function. <br><p></p>
- Using a 'live' output, adjust the DC offset and/or amplitude of the signal produced by the waveform generator to observe how the digital signal is clipped.<br><p></p>
- When you have a display that clearly illustrates clipping, stop the live output and acquire 100 ms of data. Ensure the data is saved to a file for use in your lab report. You should also report the DC offset, amplitude, and any other relevant waveform generator settings.
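What clipped data will look like can be previewed numerically before the lab (an illustrative sketch; the ±10 V input range comes from the NI 9201 specifications in Appendix B, and the 9 V offset is just an example value chosen to force clipping):

```python
import numpy as np

fs, f0 = 10_000, 100
t = np.arange(int(0.1 * fs)) / fs              # 100 ms -> 10 full waveforms
x = 2.5 * np.sin(2 * np.pi * f0 * t) + 9.0     # 5 Vpp sine pushed up by a 9 V DC offset

adc_range = 10.0                                # NI 9201 input range is +/-10 V
x_clipped = np.clip(x, -adc_range, adc_range)   # tops of the waveform are flattened at 10 V
```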
### Part V - Signal Reconstruction
A signal contaminated with high-frequency noise will be simulated. This requires using advanced functions in the waveform generator to generate the sum of two signals. The carrier wave is a 10 Hz, 5V sine wave. The noise is a 3.1 kHz sine wave with 1V amplitude. <br><p></p>
1. Program the sum of the two sines in the waveform generator. To add the high-frequency noise to the carrier signal, press the Modulate button on the waveform generator. Turn modulation on, and choose Sum under Type and Internal under Source. Choose Sine as the shape of the noise and set the sum amplitude and frequency as specified above.<br><p></p>
- Monitor that you have the proper signal on the oscilloscope. <br><p></p>
- Send the signal directly to the DAQ system (i.e. without going through the anti-aliasing filter that you have created).<br><p></p>
- Sample at 500 Hz with 1,024 data points per scan. You should observe the low frequency signal nicely, but the high frequency signal should yield some aliasing.<br><p></p>
- Save the time trace and recreate the frequency spectrum for your lab report. Estimate the frequency of the two signals from the frequency spectrum plot. Calculate the frequency resolution of your DAQ system for this sampling frequency and comment on the resolution of your signal.<br><p></p>
- Redo *Steps 4 & 5* with a sampling frequency of:<br><p></p>
- 1 kHz <br><p></p>
- 5 kHz<br><p></p>
- 10 kHz<br><p></p>
- Can you think of a way to acquire the signal without aliasing?
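The alias frequencies to expect in each case can be predicted by folding the 3.1 kHz noise frequency about the Nyquist frequency (an illustrative sketch of the folding rule, not a measurement):

```python
def alias_frequency(f_signal, fs):
    """Frequency (in [0, fs/2]) at which f_signal appears after sampling at fs."""
    f = f_signal % fs
    return fs - f if f > fs / 2 else f

# Predicted apparent frequency of the 3.1 kHz noise for each sampling rate:
for fs in (500, 1_000, 5_000, 10_000):
    print(f"fs = {fs:>5} Hz: 3100 Hz appears at {alias_frequency(3100, fs)} Hz")
# -> 100, 100, 1900 and 3100 Hz (no aliasing once fs exceeds twice 3.1 kHz)
```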
# Discussion Questions
1. Explain why you need to select the output of your waveform generator to infinite impedance. What would happen if you had it selected for 50 Ω impedance? <br><p></p>
2. *Part II*, for each of the test cases, calculate the frequency resolution and the energy contained at the signal frequency (10 Hz). Comment on:
A. The resolution of the sine wave.
B. The width of the spike on the frequency spectrum and the energy contained at 10 Hz vs. what you would expect. Explain how the width changes for each condition, what the source of the observed phenomenon is, and how it can be corrected. <br><p></p>
3. *Part II* conclusions:
A. What are the benefits and drawbacks of increasing the sampling frequency?
B. What are the benefits and drawbacks of increasing the sampling period?
C. What is a “perfect FFT”?<br><p></p>
4. What is the effect of the Hanning windowing on your signal? Does it totally eliminate leakage?<br><p></p>
5. What is the effect of windowing when there is a DC offset in addition to the sinusoidal signal? What can you conclude about the mean of a signal to which windowing can be applied? Propose a procedure to apply windowing when the signal has a non-zero mean.<br><p></p>
6. Which statistical tool/graph could you use to identify whether some clipping took place in your data? What would you expect to see? <br><p></p>
7. For *Part V*:
A. Which sampling frequency was optimal for recording your signal? Hint: think about the frequency resolution.
B. Can you think of a way to acquire the signal without aliasing?
# Appendices
## Appendix A - NI cDAQ-9174
<img src="img/cDAQ-9174.png" width=240 align="left"><br><br><br><br><br><br><br><br>
[Online Manual](https://www.ni.com/documentation/en/compactdaq-chassis/latest/cdaq-9174/overview/)
[User Manual](https://www.ni.com/pdf/manuals/372838e.pdf)
[Specification Sheet](https://www.ni.com/pdf/manuals/374045a.pdf)
## Appendix B - NI 9201
<img src="img/NI-9201.png" width=150 align="left"><br><br><br><br><br><br><br><br>
[HTML Manual](https://www.ni.com/documentation/en/c-series-voltage-input-module/latest/9201/overview/)
[Datasheet](https://www.ni.com/pdf/manuals/373783a_02.pdf)
**Signal Level**: ± 10V
**Channels**: 8 Single-Ended
**Max Sample Rate (Single Channel)**: 800 kS/s
**Max Sample Rate (Scanning)**: 500 kS/s
**Simultaneous Sampling**: No
**ADC Resolution**: 12-Bit
**Type of ADC**: Successive approximation register (SAR)
<img src="img/NI-9201%20Circuit.png" width=480 align="left"><br><br><br><br><br><br><br><br><br>
<img src="img/NI-9201%20Sample%20Rate.png" width=480 align="left"><br><br><br><br><br><br><br><br><br><br><br><br>
<img src="img/NI-9201%20Accuracy.png" width=480 align="left"><br><br><br><br><br><br>
<img src="img/NI-9201%20Stability.png" width=480 align="left">
| github_jupyter |
# Use Amazon Sagemaker Distributed Model Parallel to Launch a BERT Training Job with Model Parallelization
Sagemaker distributed model parallel (SMP) is a model parallelism library for training large deep learning models that were previously difficult to train due to GPU memory limitations. SMP automatically and efficiently splits a model across multiple GPUs and instances and coordinates model training, allowing you to increase prediction accuracy by creating larger models with more parameters.
Use this notebook to configure SMP to train a model using PyTorch (version 1.6.0) and the [Amazon SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/overview.html#train-a-model-with-the-sagemaker-python-sdk).
In this notebook, you will use a BERT example training script with SMP.
The example script is based on [Nvidia Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT) and requires you to download the datasets and upload them to Amazon Simple Storage Service (Amazon S3) as explained in the instructions below. This is a large dataset, and so depending on your connection speed, this process can take hours to complete.
This notebook depends on the following files. You can find all files in the [bert directory](https://github.com/aws/amazon-sagemaker-examples/tree/master/training/distributed_training/pytorch/model_parallel/bert) in the model parallel section of the Amazon SageMaker Examples notebooks repo.
* `bert_example/sagemaker_smp_pretrain.py`: This is an entrypoint script that is passed to the Pytorch estimator in the notebook instructions. This script is responsible for end to end training of the BERT model with SMP. The script has additional comments at places where the SMP API is used.
* `bert_example/modeling.py`: This contains the model definition for the BERT model.
* `bert_example/bert_config.json`: This allows for additional configuration of the model and is used by `modeling.py`. Additional configuration includes dropout probabilities, pooler and encoder sizes, number of hidden layers in the encoder, size of the intermediate layers in the encoder etc.
* `bert_example/schedulers.py`: contains definitions for learning rate schedulers used in end to end training of the BERT model (`bert_example/sagemaker_smp_pretrain.py`).
* `bert_example/utils.py`: This contains different helper utility functions used in end to end training of the BERT model (`bert_example/sagemaker_smp_pretrain.py`).
* `bert_example/file_utils.py`: Contains different file utility functions used in model definition (`bert_example/modeling.py`).
### Additional Resources
If you are a new user of Amazon SageMaker, you may find the following helpful to learn more about SMP and using SageMaker with Pytorch.
* To learn more about the SageMaker model parallelism library, see [Model Parallel Distributed Training with SageMaker Distributed](http://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel.html).
* To learn more about using the SageMaker Python SDK with Pytorch, see [Using PyTorch with the SageMaker Python SDK](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html).
* To learn more about launching a training job in Amazon SageMaker with your own training image, see [Use Your Own Training Algorithms](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html).
### Prerequisites
1. You must create an S3 bucket to store the input data to be used for training. This bucket must be located in the same AWS Region you use to launch your training job. This is the AWS Region you use to run this notebook. To learn how, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html) in the Amazon S3 documentation.
2. You must download the dataset that you use for training from [Nvidia Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT) and upload it to the S3 bucket you created. To learn more about the datasets and the scripts provided to preprocess and download them, see [Getting the data](https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data) in the Nvidia Deep Learning Examples repo README. You can also use the [Quick Start Guide](https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#quick-start-guide) to learn how to download the dataset. The repository consists of three datasets. Optionally, you can use the `wiki_only` parameter to download only the Wikipedia dataset.
## Amazon SageMaker Initialization
Upgrade Sagemaker SDK to the latest version.
NOTE: This step may require a kernel restart.
```
import sagemaker
original_version = sagemaker.__version__
%pip install --upgrade sagemaker
```
Initialize the notebook instance. Get the AWS Region and the SageMaker execution role Amazon Resource Name (ARN).
```
%%time
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
from sagemaker.pytorch import PyTorch
import boto3
import os
role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role
print(f'SageMaker Execution Role:{role}')
client = boto3.client('sts')
account = client.get_caller_identity()['Account']
print(f'AWS account:{account}')
session = boto3.session.Session()
region = session.region_name
print(f'AWS region:{region}')
sagemaker_session = sagemaker.session.Session(boto_session=session)
import sys
print(sys.path)
# get default bucket
default_bucket = sagemaker_session.default_bucket()
print()
print("Default bucket for this session: ", default_bucket)
```
## Prepare/Identify your Training Data in Amazon S3
If you don't already have the BERT dataset in an S3 bucket, please see the instructions in [Nvidia BERT Example](https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md) to download the dataset and upload it to a s3 bucket. See the prerequisites at the beginning of this notebook for more information.
Replace the instances of `None` below to set the S3 bucket and prefix of your preprocessed
data. For example, if your training data is in s3://your-bucket/training, enter `'your-bucket'` for `s3_bucket` and `'training'` for `prefix`. Note that your output data will be stored in the same bucket, under the `output/` prefix.
If you proceed with `None` values for both `s3_bucket` and `prefix`, then the program downloads some mock data from a public S3 bucket `sagemaker-sample-files` and uploads it
to your default bucket. This is intended for CI.
```
s3_bucket = None # Replace None by your bucket
prefix = None # Replace None by the prefix of your data
# For CI
if s3_bucket is None:
    # Download some mock data from a public bucket in us-east-1
    s3 = boto3.resource('s3')
    bucket_name = 'sagemaker-sample-files'
    # Phase 1 pretraining
    prefix = 'datasets/binary/bert/hdf5_lower_case_1_seq_len_128_max_pred_20_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/wikicorpus_en_abstract'
    local_dir = '/tmp/data'
    bucket = s3.Bucket(bucket_name)
    for obj in bucket.objects.filter(Prefix=prefix):
        target = os.path.join(local_dir, obj.key)
        if not os.path.exists(os.path.dirname(target)):
            os.makedirs(os.path.dirname(target))
        bucket.download_file(obj.key, target)
    # upload to default bucket
    mock_data = sagemaker_session.upload_data(path=os.path.join(local_dir, prefix),
                                              bucket=sagemaker_session.default_bucket(),
                                              key_prefix=prefix)
    data_channels = {'train': mock_data}
else:
    s3train = f's3://{s3_bucket}/{prefix}'
    train = sagemaker.session.TrainingInput(s3train, distribution='FullyReplicated',
                                            s3_data_type='S3Prefix')
    data_channels = {'train': train}
print(data_channels)
```
Set your output data path. This is where model artifacts are stored.
```
s3_output_location = f's3://{default_bucket}/output/bert'
print(f'your output data will be stored in: s3://{default_bucket}/output/bert')
```
## Define SageMaker Training Job
Next, you will use SageMaker Estimator API to define a SageMaker Training Job. You will use a [`PyTorchEstimator`](https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html) to define the number and type of EC2 instances Amazon SageMaker uses for training, as well as the size of the volume attached to those instances.
You must update the following:
* `instance_count`
* `instance_type`
* `volume_size`
See the following sub-sections for more details.
### Update the Type and Number of EC2 Instances Used
The instance type and number of instances you specify in `instance_type` and `instance_count` respectively will determine the number of GPUs Amazon SageMaker uses during training. Explicitly, `instance_type` will determine the number of GPUs on a single instance and that number will be multiplied by `instance_count`.
You must specify values for `instance_type` and `instance_count` so that the total number of GPUs available for training is equal to `partitions` in `config` of `smp.init` in your training script.
If you set `ddp` to `True`, you must ensure that the total number of GPUs available is divisible by `partitions`. The result of the division is inferred to be the number of model replicas used for Horovod (the data parallelism degree).
See [Amazon SageMaker Pricing](https://aws.amazon.com/sagemaker/pricing/) for SageMaker supported instances and cost information. To look up the GPUs available on each instance type, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/). Use the **Accelerated Computing** section to see the general purpose GPU instances. Note that an ml.p3.2xlarge has the same number of GPUs as a p3.2xlarge.
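As a concrete check, the divisibility rule above can be expressed in a few lines. This is only an illustration, not part of the SageMaker SDK, and the lookup table covers just the p3 instance types mentioned in this notebook:

```
# Sketch: verify that a cluster satisfies the SMP constraint
# total_gpus % partitions == 0 and compute the data parallelism degree.
GPUS_PER_INSTANCE = {
    "ml.p3.2xlarge": 1,
    "ml.p3.8xlarge": 4,
    "ml.p3.16xlarge": 8,
}

def data_parallel_degree(instance_type, instance_count, partitions):
    total_gpus = GPUS_PER_INSTANCE[instance_type] * instance_count
    if total_gpus % partitions != 0:
        raise ValueError(f"{total_gpus} GPUs not divisible by {partitions} partitions")
    return total_gpus // partitions

# One ml.p3.16xlarge (8 GPUs) with 2 partitions -> 4 model replicas
print(data_parallel_degree("ml.p3.16xlarge", 1, 2))  # 4
```

With the settings used later in this notebook (one ml.p3.16xlarge, `partitions: 2`), Horovod runs 4 model replicas.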
### Update your Volume Size
The volume size you specify in `volume_size` must be larger than your input data size.
### Set your parameters dictionary for SMP and set custom mpioptions
The parameters dictionary lets you configure the number of microbatches, the number of partitions, whether to use data parallelism with `ddp`, the pipelining strategy, the placement strategy, and other BERT-specific hyperparameters.
```
mpi_options = "-verbose --mca orte_base_help_aggregate 0 "
smp_parameters = {"optimize": "speed", "microbatches": 12, "partitions": 2, "ddp": True, "pipeline": "interleaved", "overlapping_allreduce": True, "placement_strategy": "cluster", "memory_weight": 0.3}
timeout = 60 * 60
metric_definitions = [{"Name": "base_metric", "Regex": "<><><><><><>"}]
hyperparameters = {"input_dir": "/opt/ml/input/data/train",
"output_dir": "./checkpoints",
"config_file": "bert_config.json",
"bert_model": "bert-large-uncased",
"train_batch_size": 48,
"max_seq_length": 128,
"max_predictions_per_seq": 20,
"max_steps": 7038,
"warmup_proportion": 0.2843,
"num_steps_per_checkpoint": 200,
"learning_rate": 6e-3,
"seed": 12439,
"steps_this_run": 500,
"allreduce_post_accumulation": 1,
"allreduce_post_accumulation_fp16": 1,
"do_train": 1,
"use_sequential": 1,
"skip_checkpoint": 1,
"smp": 1,
"apply_optimizer": 1}
```
### Instantiate Pytorch Estimator with SMP enabled
```
pytorch_estimator = PyTorch("sagemaker_smp_pretrain.py",
role=role,
instance_type="ml.p3.16xlarge",
volume_size=200,
instance_count=1,
sagemaker_session=sagemaker_session,
py_version="py36",
framework_version='1.6.0',
distribution={
"smdistributed": {
"modelparallel": {
"enabled": True,
"parameters": smp_parameters
}
},
"mpi": {
"enabled": True,
"processes_per_host": 8,
"custom_mpi_options": mpi_options,
}
},
source_dir='bert_example',
output_path=s3_output_location,
max_run=timeout,
hyperparameters=hyperparameters,
metric_definitions=metric_definitions)
```
Finally, you will use the estimator to launch the SageMaker training job.
```
pytorch_estimator.fit(data_channels, logs=True)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Model Development with Custom Weights
This example shows how to retrain a model with custom weights and fine-tune the model with quantization, then deploy the model running on FPGA. Only Windows is supported. We use TensorFlow and Keras to build our model. We are going to use transfer learning, with ResNet50 as a featurizer. We don't use the last layer of ResNet50 in this case and instead add our own classification layer using Keras.
The custom weights are trained with ImageNet on ResNet50. We are using a public Top tagging dataset as our training data.
Please set up your environment as described in the [quick start](project-brainwave-quickstart.ipynb).
This work was performed on the Caltech GPU cluster. The specific server is named imperium-sm.hep.caltech.edu. Paths have been set to work in that environment, but must be altered for your purposes.
```
import os,sys
os.environ['KERAS_BACKEND'] = 'tensorflow'
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import tensorflow as tf
import numpy as np
from keras import backend as K
from keras.backend import manual_variable_initialization
manual_variable_initialization(True)
import tables
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
%load_ext autoreload
%autoreload 2
```
## Setup Environment
After you train your model in float32, you'll write the weights to a place on disk. We also need a location to store the models that get downloaded.
```
# These directories were chosen because they write the data to local disk, which will have the fastest access time
# of our various storage options.
custom_weights_dir = os.path.expanduser("../weights")
custom_weights_dir_q = os.path.expanduser("../machinelearningnotebooks/weights-quantized")
saved_model_dir = os.path.expanduser("../models/")
results_dir = os.path.expanduser("../machinelearningnotebooks/results/")
```
## Prepare Data
Load the files we are going to use for training and testing. The public Top dataset consists of image-formatted data, but our data has been preprocessed into a raw form.
At the time of writing, the files in question are located at `/data/shared/dwerran/converted`. They are stored in HDF5 format and must be accessed via the `tables` module. The two sub-datasets we're interested in are `/img-pt` and `/labels`, corresponding to the images and labels respectively. Each dataset contains 50000 images, and there are about 30 datasets. As before, this storage location was chosen to maximize data bandwidth.
```
from utils import normalize_and_rgb, image_with_label, count_events, save_results, plot_results
import glob
# for 64x64:
datadir = "../machinelearningnotebooks/data/"
# for 224x224:
#datadir = "../../converted/rotation_224_v1/"
data_size = 64 #image width/height
n_train_file = 122
n_test_file = 41
n_val_file = 41
train_files = glob.glob(os.path.join(datadir, 'train_file_*'))
test_files = glob.glob(os.path.join(datadir,'test', 'test_file_*'))
val_files = glob.glob(os.path.join(datadir, 'val_file_*'))
#train_files = train_files[:n_train_file]
#test_files = test_files[:n_test_file]
#val_files = val_files[:n_val_file]
n_train_events = count_events(train_files)
n_test_events = count_events(test_files)
n_val_events = count_events(val_files)
print("n_train_events =", n_train_events)
print("n_test_events =", n_test_events)
print("n_val_events =", n_val_events)
```
## Construct Model
We use ResNet50 as the featurizer and build our own classifier using Keras layers. We train the featurizer and the classifier as one model. The weights trained on ImageNet are used as the starting point for retraining our featurizer; they are loaded from TensorFlow checkpoint files.
Before passing image dataset to the ResNet50 featurizer, we need to preprocess the input file to get it into the form expected by ResNet50. ResNet50 expects float tensors representing the images in BGR, channel last order. Given that our images are greyscale, this isn't relevant to us, as we will simply be copying the data in place.
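The actual helper is imported from `utils` below; as a rough sketch of what such preprocessing might look like for grayscale inputs (the function name and the exact normalization here are assumptions, not the notebook's real implementation):

```
import numpy as np

def grayscale_to_bgr(images):
    """Sketch: replicate a single grayscale channel into three identical
    channels-last channels. For grayscale data BGR and RGB are identical,
    so this is just a copy into the layout ResNet50 expects."""
    if images.ndim == 3:                       # (N, H, W) -> (N, H, W, 1)
        images = images[..., np.newaxis]
    return np.repeat(images.astype(np.float32), 3, axis=-1)

batch = np.ones((2, 64, 64))
print(grayscale_to_bgr(batch).shape)  # (2, 64, 64, 3)
```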
```
from utils import preprocess_images
```
We use the Keras layer APIs to construct the classifier. Because we're using the TensorFlow backend, we can train this classifier in one session with our ResNet50 model.
```
from utils import construct_classifier
```
Now that every component of the model is defined, we can construct the model. Constructing the model with the Project Brainwave models takes two steps: first we import the graph definition, then we restore the weights of the model into a TensorFlow session. Because the quantized graph definition and the float32 graph definition share the same node names, we can initially train the weights in float32 and then reload them with the quantized operations (which take longer) to fine-tune the model.
```
from utils import construct_model
```
## Train Model
First we train the model with custom weights but without quantization. Training is done with native float precision (32-bit floats). We load the training dataset and batch the training over 10 epochs. When performance reaches the desired level or starts to degrade, we stop the training iterations and save the weights as TensorFlow checkpoint files.
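The stop-when-degrading rule can be expressed as a small early-stopping check. This is a sketch with an assumed `patience` parameter; the notebook's actual loop lives inside `train_model` and is not shown here:

```
def should_stop(val_losses, patience=2):
    """Stop once validation loss has failed to improve for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best = min(val_losses[:-patience])
    return all(v >= best for v in val_losses[-patience:])

# No improvement over the best loss (0.6) for the last 2 epochs -> stop.
print(should_stop([0.9, 0.7, 0.6, 0.65, 0.7]))  # True
```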
```
from utils import chunks, train_model, test_model
```
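`chunks` itself is defined in `utils` and not shown here; a minimal batching generator along the same lines might look like this (a sketch, not the notebook's actual helper):

```
def chunks(items, chunk_size):
    """Yield successive chunk_size-sized slices from items; the final
    chunk may be shorter."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

print(list(chunks(list(range(10)), 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```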
This training currently leverages a hack to work around some apparent limits in the BW API. I have attempted to specify a custom weights directory when calling the `Resnet50` function in `construct_model()` above in the same way it is specified for `Quantized_Resnet50`. However, this throws an error, and since there is no API documentation yet, the way I'm working around it is rewriting our trained weights to the saved model directory. I will be reaching out to the team on this topic to see if they have a better suggestion.
```
# Launch the training
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
num_epoch_train = 7
with sess.as_default():
in_images, image_tensors, features, preds, featurizer, classifier = construct_model(quantized=False, starting_weights_directory=custom_weights_dir, saved_model_dir=saved_model_dir, is_training=True, size=data_size)
# It's necessary to specify global (all) variables when using the saver in this instance.
# Since we are using batch norm layers, whose variables aren't saved by default, we
# include them this way.
saver = tf.train.Saver(tf.global_variables(), max_to_keep = 100)
loss_over_epoch, accuracy_over_epoch, auc_over_epoch, val_loss_over_epoch, val_accuracy_over_epoch, val_auc_over_epoch = \
train_model(preds, in_images, train_files[:1], val_files[:1], is_retrain=True, train_epoch=num_epoch_train,
classifier=classifier,
saver=saver, checkpoint_path=custom_weights_dir,
chunk_size=64)
_, _, features, preds, featurizer, classifier = construct_model(quantized=False, saved_model_dir=saved_model_dir, starting_weights_directory=custom_weights_dir, is_training=False, size=64)
loss, accuracy, auc, preds_test, test_labels = test_model(preds, in_images, test_files[:1])
```
## Load and Test Model
Here, we re-load the weights saved on disk and test the model. If the featurizer weights do not end in \_best, then the model_checkpoint_path entry in the checkpoint file needs to be changed.
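The `checkpoint` file is plain text, so the `model_checkpoint_path` entry can be rewritten with a few lines of standard-library code. This is a sketch; the `_best` suffix convention is the one described above, and the demo file name is made up:

```
import os
import re
import tempfile

def point_checkpoint_at(checkpoint_file, new_name):
    """Rewrite the model_checkpoint_path entry of a TF checkpoint index file."""
    with open(checkpoint_file) as f:
        text = f.read()
    text = re.sub(r'model_checkpoint_path: ".*?"',
                  f'model_checkpoint_path: "{new_name}"', text)
    with open(checkpoint_file, "w") as f:
        f.write(text)

# Demo on a throwaway file.
path = os.path.join(tempfile.mkdtemp(), "checkpoint")
with open(path, "w") as f:
    f.write('model_checkpoint_path: "resnet50-step-500"\n')
point_checkpoint_at(path, "resnet50_best")
print(open(path).read().strip())  # model_checkpoint_path: "resnet50_best"
```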
```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
print("Loading a trained model")
in_images, image_tensors, features, preds, featurizer, classifier = construct_model(quantized=False,
saved_model_dir=saved_model_dir,
starting_weights_directory=custom_weights_dir,
is_training=False,
size=data_size)
loss, accuracy, auc, preds_test, test_labels = test_model(preds,
in_images,
test_files[:1],
chunk_size=64,
shuffle=False)
print("Accuracy:", accuracy, ", Area under ROC curve:", auc)
# Call the save results utility.
save_results(results_dir, 't', accuracy, test_labels, preds_test)
```
## Load and Test Quantized Model (with Floating Point Weights)
After training, we evaluate the trained model's accuracy on the test dataset with quantization, so that we know how the model will perform if it is deployed on the FPGA. The only significant difference between this cell and the cell two below is that this one loads weights from the floating point directory and the one two below loads from the quantized directory. In this way, you can compare pre- and post-quantization fine-tuning tests.
It's been found that an abysmal score here does not necessarily reflect a broken model. Quantization has very negative effects on the batch normalization layers, and these should quickly be corrected by only a few epochs of fine-tuning.
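A toy illustration of why quantization disturbs the batch-norm layers: coarsely quantizing activations shifts their distribution away from the statistics the batch-norm layers were trained with, which fine-tuning then re-learns. This NumPy sketch uses simple uniform quantization and is unrelated to the real fixed-point format used by the FPGA:

```
import numpy as np

def quantize(x, n_bits=4):
    """Uniform quantization of x onto 2**n_bits levels over its own range."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2 ** n_bits - 1)
    return np.round((x - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
acts = rng.normal(loc=0.0, scale=1.0, size=100_000)
q = quantize(acts, n_bits=3)

# The quantized activations no longer match the mean/variance stored in the
# batch-norm layers; the gap below is the kind of mismatch fine-tuning fixes.
print("std shift:", abs(acts.std() - q.std()))
print("levels used:", len(np.unique(q)))
```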
```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
print("Loading a trained model (quantized)")
in_images_q, image_tensors_q, features_q, preds_q, featurizer_q, classifier = construct_model(quantized=True,
saved_model_dir=saved_model_dir,
starting_weights_directory=custom_weights_dir,
is_training=False,
size=data_size)
loss_q, accuracy_q, auc_q, preds_test_q, test_labels_q = test_model(preds_q,
in_images_q,
test_files[:1],
chunk_size=64,
shuffle=False)
print("Accuracy:", accuracy_q, ", Area under ROC curve:", auc_q)
# Call the save results utility.
save_results(results_dir, 'q', accuracy_q, test_labels_q, preds_test_q)
```
## Fine-Tune Quantized Model
Sometimes the model's accuracy drops significantly after quantization. In those cases, we need to retrain the model with quantization enabled to recover accuracy.
```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
num_epoch_finetune = 10
with sess.as_default():
print("Fine-tuning model with quantization")
in_images, image_tensors, features, preds, quantized_featurizer, classifier = construct_model(quantized=True,
saved_model_dir=saved_model_dir,
starting_weights_directory=custom_weights_dir,
is_training=True,
size=data_size)
saver = tf.train.Saver(tf.global_variables(), max_to_keep = 100)
loss_over_epoch_ft, accuracy_over_epoch_ft, auc_over_epoch_ft, val_loss_over_epoch_ft, val_accuracy_over_epoch_ft, val_auc_over_epoch_ft = \
train_model(preds, in_images, train_files, val_files, is_retrain=True, train_epoch=num_epoch_finetune,
classifier=classifier,
saver=saver,
checkpoint_path=custom_weights_dir_q,
chunk_size=32)
```
## Load and Test Quantized Model
After training, we evaluate the trained model's accuracy on the test dataset with quantization, so that we know how the model will perform if it is deployed on the FPGA. The only significant difference between this cell and the cell two above is that this one loads weights from the quantized directory and the one two above loads from the floating point directory. In this way, you can compare pre- and post-quantization fine-tuning tests.
```
tf.reset_default_graph()
sess = tf.Session(graph=tf.get_default_graph())
with sess.as_default():
in_images, image_tensors, features, preds, quantized_featurizer, classifier = construct_model(quantized=True,
saved_model_dir=saved_model_dir,
starting_weights_directory=custom_weights_dir_q,
is_training=False,
size=data_size)
loss_ft, accuracy_ft, auc_ft, preds_test_ft, test_labels_ft = test_model(preds,
in_images,
test_files,
chunk_size=64,
shuffle=False)
# Call the save results utility.
save_results(results_dir, 'ft', accuracy_ft, test_labels_ft, preds_test_ft)
%matplotlib inline
#from utils import plot_results
def plot_results(results_dir,plot_label='ROC.pdf'):
import os
import numpy as np
from sklearn import metrics
# Load the labels and results into memory.
accuracy_t = np.load(results_dir + "/t_accuracy.npy")
test_labels_t = np.load(results_dir + "/t_labels.npy")
test_preds_t = np.load(results_dir + "/t_preds.npy")
accuracy_q = np.load(results_dir + "/q_accuracy.npy")
test_labels_q = np.load(results_dir + "/q_labels.npy")
test_preds_q = np.load(results_dir + "/q_preds.npy")
accuracy_ft = np.load(results_dir + "/ft_accuracy.npy")
test_labels_ft = np.load(results_dir + "/ft_labels.npy")
test_preds_ft = np.load(results_dir + "/ft_preds.npy")
#accuracy_b = np.load(results_dir + "/b_accuracy.npy")
#test_labels_b = np.load(results_dir + "/b_labels.npy")
#test_preds_b = np.load(results_dir + "/b_preds.npy")
new_test_preds_t = np.zeros(test_preds_t.shape)
new_test_preds_t[:,0] = test_preds_t[:,0]/np.sum(test_preds_t,axis=1)
new_test_preds_t[:,1] = test_preds_t[:,1]/np.sum(test_preds_t,axis=1)
test_preds_t = new_test_preds_t
new_test_preds_q = np.zeros(test_preds_q.shape)
new_test_preds_q[:,0] = test_preds_q[:,0]/np.sum(test_preds_q,axis=1)
new_test_preds_q[:,1] = test_preds_q[:,1]/np.sum(test_preds_q,axis=1)
test_preds_q = new_test_preds_q
new_test_preds_ft = np.zeros(test_preds_ft.shape)
new_test_preds_ft[:,0] = test_preds_ft[:,0]/np.sum(test_preds_ft,axis=1)
new_test_preds_ft[:,1] = test_preds_ft[:,1]/np.sum(test_preds_ft,axis=1)
test_preds_ft = new_test_preds_ft
#new_test_preds_b = np.zeros(test_preds_b.shape)
#new_test_preds_b[:,0] = test_preds_b[:,0]/np.sum(test_preds_b,axis=1)
#new_test_preds_b[:,1] = test_preds_b[:,1]/np.sum(test_preds_b,axis=1)
#test_preds_b = new_test_preds_b
# Determine the ROC curve for each of the tests.
# [:,0] will convert the labels from one-hot to binary.
fpr_test_t, tpr_test_t, thresholds = metrics.roc_curve(test_labels_t[:,0], test_preds_t[:,0])
fpr_test_q, tpr_test_q, thresholds_q = metrics.roc_curve(test_labels_q[:,0], test_preds_q[:,0])
fpr_test_ft, tpr_test_ft, thresholds_ft = metrics.roc_curve(test_labels_ft[:,0], test_preds_ft[:,0])
#fpr_test_b, tpr_test_b, thresholds_b = metrics.roc_curve(test_labels_b[:,0], test_preds_b[:,0])
# Use the data we just generated to determine the area under the ROC curve.
auc_test = metrics.auc(fpr_test_t, tpr_test_t)
auc_test_q = metrics.auc(fpr_test_q, tpr_test_q)
auc_test_ft = metrics.auc(fpr_test_ft, tpr_test_ft)
#auc_test_b = metrics.auc(fpr_test_b, tpr_test_b)
# Find the true positive rate of 30% and 1 over the false positive rate at tpr = 30%.
def find_nearest(array,value):
idx = (np.abs(array-value)).argmin()
return idx
idx_t = find_nearest(tpr_test_t,0.3)
idx_q = find_nearest(tpr_test_q,0.3)
idx_ft = find_nearest(tpr_test_ft,0.3)
#idx_b = find_nearest(tpr_test_b,0.3)
# Plot the ROCs, labeling with the AUCs.
import matplotlib.pyplot as plt
plt.figure(figsize=(7,5))
plt.plot(tpr_test_t, fpr_test_t, label=r'Floating point: AUC = %.1f%%, acc. = %.1f%%, $1/\epsilon_{B}$ = %.0f'%(auc_test*100., accuracy_t*100, 1./fpr_test_t[idx_t]))
plt.plot(tpr_test_q, fpr_test_q, linestyle='--', label=r'Quant.: AUC = %.1f%%, acc. = %.1f%%, $1/\epsilon_{B}$ = %.0f'%(auc_test_q*100., accuracy_q*100, 1./fpr_test_q[idx_q]))
plt.plot(tpr_test_ft, fpr_test_ft, linestyle='-.', label=r'Quant., fine-tune: AUC = %.1f%%, acc. = %.1f%%, $1/\epsilon_{B}$ = %.0f'%(auc_test_ft*100., accuracy_ft*100, 1./fpr_test_ft[idx_ft]))
#plt.plot(tpr_test_b, fpr_test_b, linestyle=':',label=r'Brainwave: AUC = %.1f%%, acc. = %.1f%%, $1/\epsilon_{B}$ = %.0f'%(auc_test_b*100., accuracy_b*100, 1./fpr_test_b[idx_b]))
plt.semilogy()
plt.xlabel("Signal efficiency",fontsize='x-large')
plt.ylabel("Background efficiency",fontsize='x-large')
plt.ylim(0.0001,1)
plt.xlim(0,1)
plt.grid(True)
plt.legend(loc='upper left',fontsize=11.8)
plt.tight_layout()
plt.savefig(results_dir+'/'+plot_label)
#plt.figure()
#plt.hist(test_preds_t[:,0], weights=test_labels_t[:,0], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_t[:,0], weights=test_labels_t[:,1], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_q[:,0], weights=test_labels_q[:,0], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_q[:,0], weights=test_labels_q[:,1], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_ft[:,0], weights=test_labels_ft[:,0], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_ft[:,0], weights=test_labels_ft[:,1], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_b[:,0], weights=test_labels_b[:,0], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
#plt.hist(test_preds_b[:,0], weights=test_labels_b[:,1], bins=np.linspace(0, 1, 40), density=True, alpha = 0.7)
print ("Floating Point", accuracy_t, auc_test, tpr_test_t[idx_t], 1./fpr_test_t[idx_t])
print ("Quantized ", accuracy_q, auc_test_q, tpr_test_q[idx_q], 1./fpr_test_q[idx_q])
print ("Quantized, fine-tuned", accuracy_ft, auc_test_ft, tpr_test_ft[idx_ft], 1./fpr_test_ft[idx_ft])
#print ("Brainwave", accuracy_b, auc_test_b, tpr_test_b[idx_b], 1./fpr_test_b[idx_b])
plot_results(results_dir)
```
## Appendix
License for plot_confusion_matrix:
New BSD License
Copyright (c) 2007-2018 The scikit-learn developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
a. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
c. Neither the name of the Scikit-learn Developers nor the names of
its contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.