# Personalize Ranking Example <a class="anchor" id="top"></a>
In this notebook, you will choose a dataset and prepare it for use with Amazon Personalize Batch Recommendations.
1. [Choose a dataset or data source](#source)
1. [Prepare your data](#prepare)
1. [Create dataset groups and the interactions dataset](#group_dataset)
1. [Configure an S3 bucket and an IAM role](#bucket_role)
1. [Import the interactions data](#import)
1. [Create solutions](#solutions)
1. [Create campaigns](#create)
1. [Interact with campaigns](#interact)
1. [Clean up](#cleanup)
## Introduction <a class="anchor" id="intro"></a>
The algorithms in Amazon Personalize (called recipes) each solve a different task, outlined here:
1. **HRNN & HRNN-Metadata** - Recommends items based on previous user interactions with items.
1. **HRNN-Coldstart** - Recommends new items for which interaction data is not yet available.
1. **Personalized-Ranking** - Takes a collection of items and then orders them in probable order of interest using an HRNN-like approach.
1. **SIMS (Similar Items)** - Given one item, recommends other items also interacted with by users.
1. **Popularity-Count** - Recommends the most popular items, if HRNN or HRNN-Metadata do not have an answer - this is returned by default.
No matter the use case, the algorithms all share a base of learning on user-item-interaction data which is defined by 3 core attributes:
1. **UserID** - The user who interacted
1. **ItemID** - The item the user interacted with
1. **Timestamp** - The time at which the interaction occurred
We also support event types and event values defined by:
1. **Event Type** - Categorical label of an event (browse, purchased, rated, etc).
1. **Event Value** - A value corresponding to the event type that occurred. Generally speaking, we look for normalized values between 0 and 1 over the event types. For example, if there are three phases to complete a transaction (clicked, added-to-cart, and purchased), then the event_value for each phase would be 0.33, 0.66, and 1.0 respectively.
The event type and event value fields are additional data which can be used to filter the data sent for training the personalization model. In this particular exercise we will not have an event type or event value.
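The normalization described above can be sketched in a few lines. This is purely illustrative; the funnel phase names are the hypothetical ones from the example, not fields required by the service.

```python
# Illustrative only: map each phase of a purchase funnel to a
# normalized EVENT_VALUE in (0, 1]. The phase names are hypothetical.
funnel = ["clicked", "added-to-cart", "purchased"]

# Evenly spaced values over the funnel: i/n for the i-th phase of n.
event_values = {
    phase: round((i + 1) / len(funnel), 2)
    for i, phase in enumerate(funnel)
}

print(event_values)  # {'clicked': 0.33, 'added-to-cart': 0.67, 'purchased': 1.0}
```

The final phase always maps to 1.0, so the model sees "purchased" as the strongest signal.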
## Choose a dataset or data source <a class="anchor" id="source"></a>
[Back to top](#top)
As mentioned above, user-item-interaction data is key to getting started with the service. This means we need use cases that generate that kind of data. A few common examples are:
1. Video-on-demand applications
1. E-commerce platforms
1. Social media aggregators / platforms
There are a few guidelines for scoping a problem suitable for Personalize. We recommend the values below as a starting point, although the [official limits](https://docs.aws.amazon.com/personalize/latest/dg/limits.html) are somewhat lower.
* Authenticated users
* At least 50 unique users
* At least 100 unique items
* At least 2 dozen interactions for each user
Most of the time this is easily attainable, and if you are low in one category, you can often make up for it by having a larger number in another category.
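These thresholds can be sanity-checked with a small helper before committing to an import. The function below is illustrative, not part of the Personalize SDK, and uses the starting-point numbers quoted above rather than the official service limits.

```python
from collections import Counter

def meets_guidelines(interactions, min_users=50, min_items=100, min_per_user=24):
    """Check a list of (user_id, item_id, timestamp) tuples against the
    rough starting-point guidelines: at least min_users unique users,
    min_items unique items, and min_per_user interactions per user."""
    users = Counter(u for u, _, _ in interactions)
    items = {i for _, i, _ in interactions}
    if len(users) < min_users or len(items) < min_items:
        return False
    return min(users.values()) >= min_per_user

# Tiny synthetic example: 2 users, far below the thresholds.
sample = [("u1", "i1", 0), ("u2", "i2", 1)]
print(meets_guidelines(sample))  # False
```

A failing check is not fatal (as noted above, strength in one category can offset weakness in another), but it is a useful early warning.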
Generally speaking, your data will not arrive in a form that is ready for Personalize and will need some modification to be structured correctly. This notebook guides you through that process.
To begin, we will use the MovieLens 100k dataset, which contains records of users' movie-rating behavior. The data fits our guidelines, with a large number of users, items, and interactions.
First, you will download the dataset and unzip it in a new folder using the code below.
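The download code itself is not included in this excerpt. A minimal sketch is below; the GroupLens URL is an assumption based on the standard public hosting of MovieLens 100k, so verify it before relying on it.

```python
import os
import urllib.request
import zipfile

# Assumed public location of the MovieLens 100k archive (GroupLens).
ML_100K_URL = "https://files.grouplens.org/datasets/movielens/ml-100k.zip"

def download_movielens_100k(dest_dir=".", url=ML_100K_URL):
    """Download and unzip MovieLens 100k into dest_dir, skipping the
    download if the ml-100k folder is already present."""
    target = os.path.join(dest_dir, "ml-100k")
    if os.path.isdir(target):
        return target
    zip_path = os.path.join(dest_dir, "ml-100k.zip")
    urllib.request.urlretrieve(url, zip_path)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    return target
```

Calling `download_movielens_100k()` once creates the `ml-100k` folder that the listing cell below expects.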
Take a look at the data files you have downloaded.
```
data_dir = "ml-100k"
!ls $data_dir
```
## Prepare your data <a class="anchor" id="prepare"></a>
[Back to top](#top)
Next, load the data, confirm it is in a good state, and save it to a CSV file ready for use with Amazon Personalize.
To get started, import a collection of Python libraries commonly used in data science.
```
import time
from time import sleep
import json
from datetime import datetime
import numpy as np
import boto3
import pandas as pd
%store -r rating_threshold
rating_threshold
```
Next, open the data file and take a look at the first several rows.
```
original_data = pd.read_csv(data_dir + '/train.csv', header=0)
# Rename the columns by position so they match the names used later in this notebook.
original_data.columns = ['uid', 'iid', 'rating', 'timestamp']
original_data.head(5)
```
That's better. Now that the data has been successfully loaded into memory, let's extract some additional information. First, calculate some basic statistics from the data.
```
original_data.describe()
```
This shows that we have a good range of values for the user and item IDs. Next, it is always a good idea to confirm the data format.
```
original_data.info()
interactions_df = original_data.copy()
interactions_df = interactions_df[['uid', 'iid', 'rating', 'timestamp']]
```
After manipulating the data, always confirm whether the data format has changed.
Amazon Personalize has default column names for users, items, and timestamp. These default column names are `USER_ID`, `ITEM_ID`, and `TIMESTAMP`. So the final modification to the dataset is to replace the existing column headers with the default headers.
```
interactions_df.rename(columns = {'uid':'USER_ID', 'iid':'ITEM_ID', 'rating':'EVENT_VALUE',
'timestamp':'TIMESTAMP'}, inplace = True)
interactions_df['EVENT_TYPE'] = "rating"
interactions_df.head()
interactions_df = interactions_df[interactions_df['EVENT_VALUE']>=rating_threshold]
```
That's it! The interaction data is ready to go. The cell below saves it as a CSV, and also prepares the users and items metadata files.
```
interactions_filename = "interactions.csv"
interactions_df.to_csv((data_dir+"/"+interactions_filename), index=False, float_format='%.0f')

# Build the users dataset: keep age, gender, occupation, and the first
# three digits of the zip code as a coarse location feature.
user_df = pd.read_csv(data_dir + '/u.user', delimiter='|', header=None, names=['USER_ID', 'AGE', 'GENDER', 'OCCUPATION', 'ZIPCODE'])
user_df['LOC'] = user_df['ZIPCODE'].apply(lambda x: x[:3])
user_df.head()
users_filename = "users.csv"
user_df[['USER_ID', 'AGE', 'GENDER', 'OCCUPATION', 'LOC']].to_csv((data_dir+"/"+users_filename), index=False, float_format='%.0f')

# Build the items dataset: u.item one-hot encodes each movie's genres.
genres = ['unknown', 'Action', 'Adventure', 'Animation', 'Childrens', 'Comedy', 'Crime',
          'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical',
          'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western']
item_df = pd.read_csv(data_dir + '/u.item', names=['ITEM_ID', 'title', 'release_date', 'video_release_date', 'imdb url'] + genres, sep='|', encoding="ISO-8859-1")

def to_timestamp(x):
    """Convert a release date like '01-Jan-1995' to a Unix timestamp."""
    try:
        return int(time.mktime(datetime.strptime(x, "%d-%b-%Y").timetuple()))
    except (TypeError, ValueError):
        return None

item_df['CREATION_TIMESTAMP'] = item_df['release_date'].apply(to_timestamp)
# fillna returns a new Series, so assign the result back.
item_df['CREATION_TIMESTAMP'] = item_df['CREATION_TIMESTAMP'].fillna(item_df['CREATION_TIMESTAMP'].median())
item_df[['ITEM_ID', 'CREATION_TIMESTAMP'] + genres]

# Collapse the one-hot genre columns into a single '|'-delimited GENRE string.
item_input_df = item_df.copy()
item_input_df['GENRE'] = ''
for col_name in genres:
    item_input_df.loc[item_input_df[col_name] == 1, 'GENRE'] = item_input_df['GENRE'] + '|' + col_name
item_input_df = item_input_df[['ITEM_ID', 'CREATION_TIMESTAMP', 'GENRE']]
item_input_df

items_filename = "items.csv"
item_input_df.to_csv((data_dir+"/"+items_filename), index=False, float_format='%.0f')
```
## Create dataset groups and the interactions dataset <a class="anchor" id="group_dataset"></a>
[Back to top](#top)
The highest level of isolation and abstraction with Amazon Personalize is a *dataset group*. Information stored within one of these dataset groups has no impact on any other dataset group or models created from one - they are completely isolated. This allows you to run many experiments and is part of how we keep your models private and fully trained only on your data.
Before importing the data prepared earlier, there needs to be a dataset group and a dataset added to it that handles the interactions.
Dataset groups can house the following types of information:
* User-item-interactions
* Event streams (real-time interactions)
* User metadata
* Item metadata
Before we create the dataset group and the dataset for our interaction data, let's validate that your environment can communicate successfully with Amazon Personalize.
```
# Configure the SDK to Personalize:
personalize = boto3.client('personalize')
personalize_runtime = boto3.client('personalize-runtime')
```
### Create the dataset group
The following cell will create a new dataset group with the specified name.
Before we can use the dataset group, it must be active. This can take a minute or two. Execute the cell below and wait for it to show the ACTIVE status. It polls the status of the dataset group every 60 seconds, up to a maximum of 3 hours.
```
def create_dataset_group(name):
    create_dataset_group_response = personalize.create_dataset_group(
        name = name
    )
    dataset_group_arn = create_dataset_group_response['datasetGroupArn']
    print(json.dumps(create_dataset_group_response, indent=2))
    max_time = time.time() + 3*60*60  # 3 hours
    while time.time() < max_time:
        describe_dataset_group_response = personalize.describe_dataset_group(
            datasetGroupArn = dataset_group_arn
        )
        status = describe_dataset_group_response["datasetGroup"]["status"]
        print("DatasetGroup: {}".format(status))
        if status == "ACTIVE" or status == "CREATE FAILED":
            break
        time.sleep(60)
    return dataset_group_arn

# Use a random suffix so repeated runs do not collide on resource names.
prj_suffix = str(np.random.uniform())[4:9]
dataset_group_name = "p13n-ml100k-{}".format(prj_suffix)
dataset_group_arn = create_dataset_group(dataset_group_name)
```
Now that you have a dataset group, you can create a dataset for the interaction data.
### Create the dataset
First, define a schema to tell Amazon Personalize what type of dataset you are uploading. There are several reserved and mandatory keywords required in the schema, based on the type of dataset. More detailed information can be found in the [documentation](https://docs.aws.amazon.com/personalize/latest/dg/how-it-works-dataset-schema.html).
Here, you will create a schema for interactions data, which needs the `USER_ID`, `ITEM_ID`, and `TIMESTAMP` fields. These must be defined in the same order in the schema as they appear in the dataset.
```
interactions_schema = {
"type": "record",
"name": "Interactions",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "USER_ID",
"type": "string"
},
{
"name": "ITEM_ID",
"type": "string"
},
{
"name": "EVENT_VALUE",
"type": [
"float",
"null"
]
},
{
"name": "TIMESTAMP",
"type": "long"
},
{
"name": "EVENT_TYPE",
"type": "string"
},
],
"version": "1.0"
}
create_schema_response = personalize.create_schema(
name = "p13n-ml100k-interaction-{}".format(prj_suffix),
schema = json.dumps(interactions_schema)
)
schema_arn = create_schema_response['schemaArn']
print(json.dumps(create_schema_response, indent=2))
```
With a schema created, you can create a dataset within the dataset group. Note, this does not load the data yet. This will happen a few steps later.
```
dataset_type = "INTERACTIONS"
create_dataset_response = personalize.create_dataset(
name = "p13n-ml100k-interaction",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = schema_arn
)
interactions_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
```
```
items_schema = {
"type": "record",
"name": "Items",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "ITEM_ID",
"type": "string"
},
{
"name": "CREATION_TIMESTAMP",
"type": "long"
},
{
"name": "GENRE",
"type": [
"null",
"string"
],
"categorical": True
}
],
"version": "1.0"
}
create_schema_response = personalize.create_schema(
name = "p13n-ml100k-items-{}".format(prj_suffix),
schema = json.dumps(items_schema)
)
item_schema_arn = create_schema_response['schemaArn']
dataset_type = "ITEMS"
create_dataset_response = personalize.create_dataset(
name = "p13n-ml100k-items",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = item_schema_arn
)
items_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
# USER_ID AGE GENDER OCCUPATION ZIPCODE
user_schema = {
"type": "record",
"name": "Users",
"namespace": "com.amazonaws.personalize.schema",
"fields": [
{
"name": "USER_ID",
"type": "string"
},
{
"name": "AGE",
"type": [
"null",
"int"
],
"categorical": False
},
{
"name": "GENDER",
"type": [
"null",
"string"
],
"categorical": True
},
{
"name": "OCCUPATION",
"type": [
"null",
"string"
],
"categorical": True
},
{
"name": "LOC",
"type": [
"null",
"string"
],
"categorical": True
},
],
"version": "1.0"
}
create_schema_response = personalize.create_schema(
name = "p13n-ml100k-users-{}".format(prj_suffix),
schema = json.dumps(user_schema)
)
user_schema_arn = create_schema_response['schemaArn']
dataset_type = "USERS"
create_dataset_response = personalize.create_dataset(
name = "p13n-ranking-ml100k-users",
datasetType = dataset_type,
datasetGroupArn = dataset_group_arn,
schemaArn = user_schema_arn
)
users_dataset_arn = create_dataset_response['datasetArn']
print(json.dumps(create_dataset_response, indent=2))
```
## Configure an S3 bucket and an IAM role <a class="anchor" id="bucket_role"></a>
[Back to top](#top)
So far, we have downloaded, manipulated, and saved the data onto the Amazon EBS volume attached to the instance running this Jupyter notebook. However, Amazon Personalize will need an S3 bucket to act as the source of your data, as well as an IAM role for accessing that bucket. Let's set all of that up.
Use the metadata stored on the instance underlying this Amazon SageMaker notebook to determine the region it is operating in. If you are using a Jupyter notebook outside of Amazon SageMaker, simply define the region as a string below. The Amazon S3 bucket needs to be in the same region as the Amazon Personalize resources we have been creating so far.
```
with open('/opt/ml/metadata/resource-metadata.json') as notebook_info:
    data = json.load(notebook_info)
    resource_arn = data['ResourceArn']
    region = resource_arn.split(':')[3]
print(region)
```
Amazon S3 bucket names are globally unique. To create a unique bucket name, the code below appends the random project suffix to a fixed prefix. Then it creates a bucket with this name in the region discovered in the previous cell.
```
s3 = boto3.client('s3')
bucket_name = "p13n-movielens-ranking-demo-" + prj_suffix
print(bucket_name)
# us-east-1 is the default region and must not be passed as a LocationConstraint.
if region != "us-east-1":
    s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={'LocationConstraint': region})
else:
    s3.create_bucket(Bucket=bucket_name)
```
### Upload data to S3
Now that your Amazon S3 bucket has been created, upload the CSV files of the interactions, users, and items data.
```
interactions_file_path = data_dir + "/" + interactions_filename
boto3.Session().resource('s3').Bucket(bucket_name).Object(interactions_filename).upload_file(interactions_file_path)
interactions_s3DataPath = "s3://"+bucket_name+"/"+interactions_filename
interactions_s3DataPath
users_file_path = data_dir + "/" + users_filename
boto3.Session().resource('s3').Bucket(bucket_name).Object(users_filename).upload_file(users_file_path)
users_s3DataPath = "s3://"+bucket_name+"/"+users_filename
users_s3DataPath
items_file_path = data_dir + "/" + items_filename
boto3.Session().resource('s3').Bucket(bucket_name).Object(items_filename).upload_file(items_file_path)
items_s3DataPath = "s3://"+bucket_name+"/"+items_filename
items_s3DataPath
```
### Set the S3 bucket policy
Amazon Personalize needs to be able to read the contents of your S3 bucket. So add a bucket policy which allows that.
```
policy = {
"Version": "2012-10-17",
"Id": "PersonalizeS3BucketAccessPolicy",
"Statement": [
{
"Sid": "PersonalizeS3BucketAccessPolicy",
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": [
"s3:*Object",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::{}".format(bucket_name),
"arn:aws:s3:::{}/*".format(bucket_name)
]
}
]
}
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))
```
### Create an IAM role
Amazon Personalize needs the ability to assume roles in AWS in order to have the permissions to execute certain tasks. Let's create an IAM role and attach the required policies to it. The code below attaches very permissive policies; please use more restrictive policies for any production application.
```
iam = boto3.client("iam")
role_name = "MovieLens100kP13NRole-{}".format(prj_suffix)
assume_role_policy_document = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "personalize.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
create_role_response = iam.create_role(
RoleName = role_name,
AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)
)
# AmazonPersonalizeFullAccess provides access to any S3 bucket with a name that includes "personalize" or "Personalize"
# if you would like to use a bucket with a different name, please consider creating and attaching a new policy
# that provides read access to your bucket or attaching the AmazonS3ReadOnlyAccess policy to the role
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonPersonalizeFullAccess"
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = policy_arn
)
# Now add S3 support
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
RoleName=role_name
)
time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate
role_arn = create_role_response["Role"]["Arn"]
print(role_arn)
```
## Import the interactions data <a class="anchor" id="import"></a>
[Back to top](#top)
Earlier you created the dataset group and dataset to house your information, so now you will execute an import job that will load the data from the S3 bucket into the Amazon Personalize dataset.
Before we can use the dataset, the import job must be active. Execute the cell below and wait for it to show the ACTIVE status. It polls the status of the import job every 60 seconds, up to a maximum of 3 hours.
Importing the data can take some time, depending on the size of the dataset. In this workshop, the data import job should take around 15 minutes.
```
def create_import_job(job_name, dataset_arn, s3DataPath, role_arn):
    create_dataset_import_job_response = personalize.create_dataset_import_job(
        jobName = job_name,
        datasetArn = dataset_arn,
        dataSource = {
            "dataLocation": s3DataPath
        },
        roleArn = role_arn
    )
    dataset_import_job_arn = create_dataset_import_job_response['datasetImportJobArn']
    print(json.dumps(create_dataset_import_job_response, indent=2))
    max_time = time.time() + 3*60*60  # 3 hours
    while time.time() < max_time:
        describe_dataset_import_job_response = personalize.describe_dataset_import_job(
            datasetImportJobArn = dataset_import_job_arn
        )
        status = describe_dataset_import_job_response["datasetImportJob"]['status']
        print("DatasetImportJob: {}".format(status))
        if status == "ACTIVE" or status == "CREATE FAILED":
            break
        time.sleep(60)

create_import_job("p13n-ml100k-ranking-ui-job-{}".format(prj_suffix), interactions_dataset_arn, interactions_s3DataPath, role_arn)
create_import_job("p13n-ml100k-ranking-items-job-{}".format(prj_suffix), items_dataset_arn, items_s3DataPath, role_arn)
create_import_job("p13n-ml100k-ranking-users-job-{}".format(prj_suffix), users_dataset_arn, users_s3DataPath, role_arn)
```
When the dataset imports are active, you are ready to start building models with SIMS, Personalized-Ranking, Popularity-Count, and HRNN. This process continues in the following sections.
## Create solutions <a class="anchor" id="solutions"></a>
[Back to top](#top)
In this notebook, you will create solutions with the following recipes:
`aws-user-personalization`, `aws-similar-items`, and `aws-personalized-ranking`
In Amazon Personalize, a specific variation of an algorithm is called a recipe. Different recipes are suitable for different situations. A trained model is called a solution, and each solution can have many versions that relate to a given volume of data when the model was trained.
To start, we will list all the recipes that are supported. This will allow you to select one and use that to build your model.
```
personalize.list_recipes()
```
### Use cases
* **User personalized recommendation** - usually on a home page or in EDM (email direct marketing)
* **Similar items** - usually on an item page, to increase user engagement
* **Personalized ranking** - rerank a list of items based on the user's preferences
```
p13n_recipe_arn="arn:aws:personalize:::recipe/aws-user-personalization"
rerank_recipe_arn = "arn:aws:personalize:::recipe/aws-personalized-ranking"
sims_recipe_arn = "arn:aws:personalize:::recipe/aws-similar-items"
```
#### Create the solution
Start by creating the solution. Although you provide the dataset group ARN in this step, the model is not yet trained. Think of this as an identifier rather than a trained model.
#### Create the solution version
Once you have a solution, you need to create a version in order to complete the model training. Training can take a while, often 25 minutes or more, with an average of around 35 minutes for this recipe and dataset. Normally, we would use a while loop to poll until the task is completed. However, the loop would block other cells from executing, and the goal here is to create many models and deploy them quickly. So we will set up the while loop for all of the solutions further down in the notebook. There, you will also find instructions for viewing the progress in the AWS console.
#### View solution creation status
As promised, how to view the status updates in the console:
* In another browser tab you should already have the AWS Console up from opening this notebook instance.
* Switch to that tab and search at the top for the service `Personalize`, then go to that service page.
* Click `View dataset groups`.
* Click the name of your dataset group, most likely something with p13n in the name.
* Click `Solutions and recipes`.
* You will now see a list of all of the solutions you created above, including a column with the status of the solution versions. Once it is `Active`, your solution is ready to be reviewed. It is also capable of being deployed.
Or simply run the cell below to keep track of the solution version creation status.
```
def create_solution(solution_name, dataset_group_arn, recipe, hpo=False, solution_config=None, wait=True):
    if hpo and solution_config:
        create_solution_response = personalize.create_solution(
            name = solution_name,
            datasetGroupArn = dataset_group_arn,
            recipeArn = recipe,
            performHPO = hpo,
            solutionConfig = solution_config
        )
    else:
        create_solution_response = personalize.create_solution(
            name = solution_name,
            datasetGroupArn = dataset_group_arn,
            recipeArn = recipe
        )
    solution_arn = create_solution_response['solutionArn']
    print(json.dumps(create_solution_response, indent=2))
    create_solution_version_response = personalize.create_solution_version(
        solutionArn = solution_arn
    )
    solution_version_arn = create_solution_version_response['solutionVersionArn']
    in_progress_solution_versions = [
        solution_version_arn
    ]
    if not wait:
        return solution_version_arn
    max_time = time.time() + 3*60*60  # 3 hours
    while time.time() < max_time:
        for solution_version_arn in in_progress_solution_versions:
            version_response = personalize.describe_solution_version(
                solutionVersionArn = solution_version_arn
            )
            status = version_response["solutionVersion"]["status"]
            if status == "ACTIVE":
                print("Build succeeded for {}".format(solution_version_arn))
                in_progress_solution_versions.remove(solution_version_arn)
            elif status == "CREATE FAILED":
                print("Build failed for {}".format(solution_version_arn))
                in_progress_solution_versions.remove(solution_version_arn)
        if len(in_progress_solution_versions) <= 0:
            break
        else:
            print("At least one solution build is still in progress")
            time.sleep(60)
    return solution_version_arn
```
```
p13n_solution_version_arn = create_solution("p13n-ml100k-{}".format(prj_suffix), dataset_group_arn, p13n_recipe_arn)
```
## Create campaigns <a class="anchor" id="create"></a>
[Back to top](#top)
A campaign is a hosted solution version; an endpoint which you can query for recommendations. Pricing is set by estimating throughput capacity (requests from users for personalization per second). When deploying a campaign, you set a minimum throughput per second (TPS) value. This service, like many within AWS, will automatically scale based on demand, but if latency is critical, you may want to provision ahead for larger demand. For this POC and demo, all minimum throughput thresholds are set to 1. For more information, see the [pricing page](https://aws.amazon.com/personalize/pricing/).
Let's start deploying the campaigns.
### User Personalized Recommendation Campaign
Deploy a campaign for your user personalization solution version. It can take around 10 minutes to deploy a campaign. Normally, we would use a while loop to poll until the task is completed. However, the loop would block other cells from executing, and the goal here is to create multiple campaigns. So we will set up the while loop for all of the campaigns further down in the notebook. There, you will also find instructions for viewing the progress in the AWS console.
### View campaign creation status
As promised, how to view the status updates in the console:
* In another browser tab you should already have the AWS Console up from opening this notebook instance.
* Switch to that tab and search at the top for the service `Personalize`, then go to that service page.
* Click `View dataset groups`.
* Click the name of your dataset group, most likely something with p13n in the name.
* Click `Campaigns`.
* You will now see a list of all of the campaigns you created above, including a column with the status of the campaign. Once it is `Active`, your campaign is ready to be queried.
Or simply run the cell below to keep track of the campaign creation status.
```
def create_campaign(campaign_name, solution_version_arn):
    create_campaign_response = personalize.create_campaign(
        name = campaign_name,
        solutionVersionArn = solution_version_arn,
        minProvisionedTPS = 1
    )
    campaign_arn = create_campaign_response['campaignArn']
    in_progress_campaigns = [
        campaign_arn
    ]
    max_time = time.time() + 3*60*60  # 3 hours
    while time.time() < max_time:
        for campaign_arn in in_progress_campaigns:
            version_response = personalize.describe_campaign(
                campaignArn = campaign_arn
            )
            status = version_response["campaign"]["status"]
            if status == "ACTIVE":
                print("Build succeeded for {}".format(campaign_arn))
                in_progress_campaigns.remove(campaign_arn)
            elif status == "CREATE FAILED":
                print("Build failed for {}".format(campaign_arn))
                in_progress_campaigns.remove(campaign_arn)
        if len(in_progress_campaigns) <= 0:
            break
        else:
            print("At least one campaign build is still in progress")
            time.sleep(60)
    return campaign_arn

p13n_campaign = create_campaign("p13n-ml100k-{}".format(prj_suffix), p13n_solution_version_arn)
```
## Interact with campaigns <a class="anchor" id="interact"></a>
[Back to top](#top)
Now that all campaigns are deployed and active, we can start to get recommendations via an API call. Each campaign is based on a different recipe, and they behave in slightly different ways because they serve different use cases. We will cover the campaigns in order of ascending complexity (i.e. simplest first), which differs from the order used in previous notebooks.
First, let's create a supporting function to help make sense of the results returned by a Personalize campaign. Personalize returns only an `item_id`. This is great for keeping data compact, but it means you need to query a database or lookup table to get a human-readable result for the notebooks. We will create a helper function to return a human-readable result from the MovieLens dataset.
Start by loading in the dataset which we can use for our lookup table.
```
items_df = pd.read_pickle('item_df.p')
items_df

def get_movie_by_id(movie_id, movie_df=items_df):
    """
    Looks up a movie_id in a default or specified dataframe and returns
    the title plus its genres. Personalize returns ids as strings, so
    callers convert them to int first.
    A really broad try/except clause was added in case anything goes wrong.
    Feel free to add more debugging or filtering here to improve results if
    you hit an error.
    """
    try:
        c_row = movie_df[movie_df['iid'] == movie_id].iloc[0]
        title = c_row['title']
        m_genres = [g for g in genres if c_row[g] == 1]
        return title + " genres:" + ",".join(m_genres)
    except Exception:
        return "Error obtaining movie info"
```
Now let's test a simple value to check our error catching.
```
# A known good id
print(get_movie_by_id(movie_id=1))
```
Great! Now we have a way of rendering results.
### Personalized Ranking
The core use case for personalized ranking is to take a collection of items and to render them in priority or probable order of interest for a user. To demonstrate this, we will need a random user and a random collection of 25 items.
Then make the personalized ranking API call.
```
user_df = pd.read_pickle('user_df.p')
user_df

import matplotlib.pyplot as plt
import seaborn as sns

user_id = str(1)
# Genre distribution of this user's interactions.
plt.pie(user_df[user_df['uid'] == int(user_id)].values[0][2:21], labels=genres)

user_item_df = pd.read_pickle("user_item_df.p")
item_df = pd.read_pickle("item_df.p")
genres = ['unknown', 'Action', 'Adventure', 'Animation', 'Childrens', 'Comedy', 'Crime',
          'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical',
          'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western']

def plot_heat_map(df, figsize=(10, 7)):
    # Normalize each row so the heat map shows proportions, not raw counts.
    df = df.div(df.sum(axis=1), axis=0)
    plt.subplots(figsize=figsize)
    sns.heatmap(df)

u_id = user_id
tester_df = user_item_df[user_item_df['uid'] == int(u_id)].copy()
tester_df['positive'] = tester_df['rating'] > 3
review = tester_df[['positive'] + genres].groupby(['positive']).sum()
plot_heat_map(review, figsize=(10, 5))

# Get recommendations for this user.
get_recommendations_response = personalize_runtime.get_recommendations(
    campaignArn = p13n_campaign,
    userId = user_id,
)
get_recommendations_response
```
Now convert the recommended items into a dataframe of human-readable titles.
```
ranked_list = []
item_list = get_recommendations_response['itemList']
for item in item_list:
    movie = get_movie_by_id(int(item['itemId']))
    ranked_list.append(movie)
ranked_df = pd.DataFrame(ranked_list, columns = ['Re-Ranked'])
pd.set_option('display.max_colwidth', None)
ranked_df
```
You can see above how each entry was re-ordered based on the model's understanding of the user. This is a popular task when you have a collection of items to surface a user, a list of promotions for example, or if you are filtering on a category and want to show the most likely good items.
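The category-filter half of that workflow can be sketched with plain Python. The catalog rows below are made up for illustration; in the notebook, the same information lives in `item_df`'s one-hot genre columns.

```python
# Hypothetical catalog rows: (item_id, set of genres).
catalog = [
    ("1", {"Animation", "Childrens", "Comedy"}),
    ("2", {"Action", "Thriller"}),
    ("3", {"Thriller", "Crime"}),
]

def ids_in_category(rows, genre):
    """Collect the item ids in one category, ready to be passed as the
    inputList of a get_personalized_ranking call."""
    return [item_id for item_id, item_genres in rows if genre in item_genres]

thriller_ids = ids_in_category(catalog, "Thriller")
print(thriller_ids)  # ['2', '3']
```

The resulting id list would then be reranked per user with `personalize_runtime.get_personalized_ranking`, exactly as in the comparison cell further down.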
### Similar Items by Amazon Personalize Recipe
```
sims_solution_version_arn = create_solution("p13n-ml100k-sims-{}".format(prj_suffix), dataset_group_arn, sims_recipe_arn)
sims_campaign = create_campaign("p13n-ml100k-sims-{}".format(prj_suffix), sims_solution_version_arn)
```
### Personalized ranking by Amazon Personalize Recipe
```
ranking_solution_version_arn = create_solution("p13n-ml100k-ranking-{}".format(prj_suffix), dataset_group_arn, rerank_recipe_arn)
ranking_campaign = create_campaign("p13n-ml100k-ranking-{}".format(prj_suffix), ranking_solution_version_arn)
item_id = 987
print(get_movie_by_id(movie_id=item_id))
sims_campaign
get_sims_items_response = personalize_runtime.get_recommendations(
campaignArn = sims_campaign,
itemId = str(item_id),
)
get_sims_items_response
sim_list = []
sim_ids = []
for item in get_sims_items_response['itemList']:
item_id = item['itemId']
sim_ids.append(item_id)
movie = get_movie_by_id(int(item_id))
sim_list.append(movie)
sim_df = pd.DataFrame(sim_list, columns = ['original'])
sim_df
get_recommendations_response_rerank = personalize_runtime.get_personalized_ranking(
campaignArn = ranking_campaign,
# userId = user_id,
userId=str(2),
inputList = sim_ids
)
get_recommendations_response_rerank
ranked_list = []
item_list = get_recommendations_response_rerank['personalizedRanking']
for item in item_list:
movie = get_movie_by_id(int(item['itemId']))
ranked_list.append(movie)
ranked_df = pd.DataFrame(ranked_list, columns = ['Re-Ranked'])
compare_df = pd.concat([sim_df, ranked_df], axis=1)
pd.set_option('display.max_colwidth', None)
compare_df
```
```
# if you are in the root folder, don't run this line
import os
os.chdir("..")
os.getcwd()
from typing import List
import numpy as np
from matplotlib import pyplot as plt
import matplotlib
from matter_multi_fidelity_emu.gpemulator_singlebin import (
SingleBinGP,
SingleBinLinearGP,
SingleBinNonLinearGP,
)
from matter_multi_fidelity_emu.data_loader import PowerSpecs
# set a random number seed for reproducibility
np.random.seed(0)
from itertools import combinations
from trainset_optimize.optmize import TrainSetOptimize
def generate_data(folder: str = "data/50_LR_3_HR/"):
data = PowerSpecs(folder=folder)
return data
```
## Find the HR choices
The following outlines the procedure for selecting 3 cosmologies for the high-fidelity training set
out of a low-fidelity Latin hypercube (which has 50 cosmologies).
This simple procedure first finds the optimal 2 cosmologies by optimizing the low-fidelity-only
emulator. This is done by searching all combinations of 2 cosmologies in the LF Latin hypercube.
Conditioning on the selected 2 cosmologies, we perform the optimization again to find the 3rd
high-fidelity selection.
```
# acquire data object, the text files
data = generate_data()
i_fidelity = 0
X = data.X_train_norm[i_fidelity]
Y = data.Y_train[i_fidelity]
train_opt = TrainSetOptimize(X=X, Y=Y)
# find the optimal two indices first;
# looking for all possible combinations
num_samples, _ = data.X_train[0].shape
num_selected = 2
all_combinations = list(combinations(range(num_samples), num_selected))
plt.loglog(10**data.kf, 10**data.Y_train[0][9])
plt.loglog(10**data.kf, 10**data.Y_train[1][0])
%%capture
# loop over to get the least loss 2 indices
all_loss = []
for i,selected_index in enumerate(all_combinations):
# need to convert to boolean array
    ind = np.zeros(num_samples, dtype=bool)  # np.bool is removed in NumPy >= 1.24
ind[np.array(selected_index)] = True
loss = train_opt.loss(ind)
print("iteration:", i)
all_loss.append(loss)
# find the set of indices best minimize the loss
selected_index = np.array(all_combinations[np.argmin(all_loss)])
selected_index
```
## Procedure to find the next optimal index
```
# find the 3rd HighRes selection
prev_ind = np.zeros(num_samples, dtype=bool)  # np.bool is removed in NumPy >= 1.24
prev_ind[np.array(selected_index)] = True
assert np.sum(prev_ind) == len(selected_index)
%%capture
next_index, all_next_loss = train_opt.optimize(prev_ind,)
# optimal next selection indices
optimal_index = np.append(selected_index, next_index)
optimal_index
# the high-fidelity selection is a subset of low-fidelity latin hypercube
# the above cell output means 19th, 37th and 45th cosmologies are the
# choice for the high-fidelity simulation training set.
# cosmologies:
# "omega0", "omegab", "hubble", "scalar_amp", "ns"
data.X_train[0][optimal_index]
```
<a href="https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/wtq_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
Running a Tapas fine-tuned checkpoint
---
This notebook shows how to load and make predictions with a TAPAS model, which was introduced in the paper: [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349)
# Clone and install the repository
First, let's install the code.
```
! pip install tapas-table-parsing
```
# Fetch models from Google Storage
Next we can get a pretrained checkpoint from Google Storage. For the sake of speed, this is a medium-sized model trained on [WTQ](https://nlp.stanford.edu/blog/wikitablequestions-a-complex-real-world-question-understanding-dataset/). Note that the best results in the paper were obtained with a large model.
```
! gsutil cp "gs://tapas_models/2020_08_05/tapas_wtq_wikisql_sqa_masklm_medium_reset.zip" "tapas_model.zip" && unzip tapas_model.zip
! mv tapas_wtq_wikisql_sqa_masklm_medium_reset tapas_model
```
# Imports
```
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')
from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
from tapas.scripts import prediction_utils
```
# Load checkpoint for prediction
Here's the prediction code, which will create an `interaction_pb2.Interaction` protobuf object, the data structure we use to store examples, and then call the prediction script.
```
os.makedirs('results/wtq/tf_examples', exist_ok=True)
os.makedirs('results/wtq/model', exist_ok=True)
with open('results/wtq/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/wtq/model/model.ckpt-0{suffix}')
max_seq_length = 512
vocab_file = "tapas_model/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
"""Calls Tapas converter to convert interaction to example."""
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def aggregation_to_string(index):
if index == 0:
return "NONE"
if index == 1:
return "SUM"
if index == 2:
return "AVERAGE"
if index == 3:
return "COUNT"
raise ValueError(f"Unknown index: {index}")
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/wtq/tf_examples/test.tfrecord", examples)
write_tf_example("results/wtq/tf_examples/random-split-1-dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="WTQ" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--reset_position_index_per_cell \
--init_checkpoint="tapas_model/model.ckpt" \
--bert_config_file="tapas_model/bert_config.json" \
--mode="predict" 2> error
results_path = "results/wtq/model/test.tsv"
all_coordinates = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
coordinates = sorted(prediction_utils.parse_coordinates(row["answer_coordinates"]))
all_coordinates.append(coordinates)
answers = ', '.join([table[row + 1][col] for row, col in coordinates])
position = int(row['position'])
aggregation = aggregation_to_string(int(row["pred_aggr"]))
print(">", queries[position])
answer_text = str(answers)
if aggregation != "NONE":
answer_text = f"{aggregation} of {answer_text}"
print(answer_text)
return all_coordinates
```
# Predict
```
# Based on SQA example nu-1000-0
result = predict("""
Pos | No | Driver | Team | Laps | Time/Retired | Grid | Points
1 | 32 | Patrick Carpentier | Team Player's | 87 | 1:48:11.023 | 1 | 22
2 | 1 | Bruno Junqueira | Newman/Haas Racing | 87 | +0.8 secs | 2 | 17
3 | 3 | Paul Tracy | Team Player's | 87 | +28.6 secs | 3 | 14
4 | 9 | Michel Jourdain, Jr. | Team Rahal | 87 | +40.8 secs | 13 | 12
5 | 34 | Mario Haberfeld | Mi-Jack Conquest Racing | 87 | +42.1 secs | 6 | 10
6 | 20 | Oriol Servia | Patrick Racing | 87 | +1:00.2 | 10 | 8
7 | 51 | Adrian Fernandez | Fernandez Racing | 87 | +1:01.4 | 5 | 6
8 | 12 | Jimmy Vasser | American Spirit Team Johansson | 87 | +1:01.8 | 8 | 5
9 | 7 | Tiago Monteiro | Fittipaldi-Dingman Racing | 86 | + 1 Lap | 15 | 4
10 | 55 | Mario Dominguez | Herdez Competition | 86 | + 1 Lap | 11 | 3
11 | 27 | Bryan Herta | PK Racing | 86 | + 1 Lap | 12 | 2
12 | 31 | Ryan Hunter-Reay | American Spirit Team Johansson | 86 | + 1 Lap | 17 | 1
13 | 19 | Joel Camathias | Dale Coyne Racing | 85 | + 2 Laps | 18 | 0
14 | 33 | Alex Tagliani | Rocketsports Racing | 85 | + 2 Laps | 14 | 0
15 | 4 | Roberto Moreno | Herdez Competition | 85 | + 2 Laps | 9 | 0
16 | 11 | Geoff Boss | Dale Coyne Racing | 83 | Mechanical | 19 | 0
17 | 2 | Sebastien Bourdais | Newman/Haas Racing | 77 | Mechanical | 4 | 0
18 | 15 | Darren Manning | Walker Racing | 12 | Mechanical | 7 | 0
19 | 5 | Rodolfo Lavin | Walker Racing | 10 | Mechanical | 16 | 0
""", ["Who are the drivers with 87 laps?", "Sum of laps for team Walker Racing?", "Average grid for the drivers with less than 80 laps?",])
```
```
import networkx as nx
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
```
# Cliques, Triangles and Squares
Let's pose a problem: if A knows B and B knows C, is it probable that A knows C as well? In a graph involving just these three individuals, it might look like this:
```
G = nx.Graph()
G.add_nodes_from(['a', 'b', 'c'])
G.add_edges_from([('a','b'), ('b', 'c')])
nx.draw(G, with_labels=True)
```
Let's think of another problem: if A knows B, B knows C, C knows D, and D knows A, is it likely that A knows C and B knows D? What would this look like?
```
G.add_node('d')
G.add_edge('c', 'd')
G.add_edge('d', 'a')
nx.draw(G, with_labels=True)
```
The set of relationships involving A, B and C, if closed, forms a triangle in the graph. The set of relationships that also includes D forms a square.
You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. A knows B and B knows C, then A probably knows C as well.
If all of the triangles in the two small-scale networks were closed, then the graph would have represented **cliques**, in which everybody within that subgraph knows one another.
In this section, we will attempt to answer the following questions:
1. Can we identify cliques?
2. Can we identify *potential* cliques that aren't captured by the network?
3. Can we model the probability that two unconnected individuals know one another?
As usual, let's start by loading the synthetic network.
```
# Load the network.
G = nx.read_gpickle('Synthetic Social Network.pkl')
nx.draw(G, with_labels=True)
```
## Cliques
In a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.
The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.
```
# Example code that shouldn't be too hard to follow.
def in_triangle(G, node):
    # Walk two hops out, excluding hops straight back to the start;
    # if a third hop can return to the starting node, it lies in a triangle.
    neighbors2 = []
    for n in G.neighbors(node):
        neighbors2.extend(nbr for nbr in G.neighbors(n) if nbr != node)
    neighbors3 = []
    for n in neighbors2:
        neighbors3.extend(G.neighbors(n))
    return node in neighbors3
in_triangle(G, 3)
```
In reality, NetworkX already has a function that *counts* the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
```
nx.triangles(G, 3)
```
### Exercise
Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?
Hint: The neighbor of my neighbor should also be my neighbor, then the three of us are in a triangle relationship.
Hint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set
Verify your answer by drawing out the subgraph composed of those nodes.
```
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
"""
Fill in the rest of the code below.
"""
for n in neighbors:
neighbors2 = set(G.neighbors(n))
neighbors.remove(n)
neighbors2.remove(node)
triangle_nodes.update(neighbors2.intersection(neighbors))
neighbors.add(n)
triangle_nodes.add(node)
return triangle_nodes
# Verify your answer with the following function call. Should return:
# {1, 2, 3, 6, 23}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that node 8 isn't in a triangle relationship with the other nodes.
neighbors3 = list(G.neighbors(3))  # list() needed: G.neighbors returns an iterator in NetworkX 2.x
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
```
# Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.
Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph.
### Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one?
Hint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.
```
# Possible Answer, credit Justin Zabilansky (MIT) for help on this.
def get_open_triangles(G, node):
"""
    There are many ways to represent this. One may choose to represent only the nodes involved
    in an open triangle; this is not the approach taken here.
    Rather, the code below explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = set(G.neighbors(node))
for n in neighbors:
neighbors2 = set(G.neighbors(n))
neighbors2.remove(node)
overlaps = set()
for n2 in neighbors2:
if n2 in neighbors:
overlaps.add(n2)
difference = neighbors.difference(overlaps)
difference.remove(n)
for n2 in difference:
if set([node, n, n2]) not in open_triangle_nodes:
open_triangle_nodes.append(set([node, n, n2]))
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 19))
len(get_open_triangles(G, 19))
```
If you remember the previous section on hubs and paths, you will note that node 19 was involved in a lot of open triangles.
Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here.
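As a minimal sketch of that idea, the snippet below ranks a user's non-neighbors by how many neighbors they share (i.e., how many open triangles a new edge would close). The `recommend_friends` helper and the Karate Club demo graph are illustrative choices, not part of this notebook's synthetic network.

```python
import networkx as nx

def recommend_friends(G, node, top_k=3):
    """Rank non-neighbors of `node` by the number of open triangles they would close."""
    neighbors = set(G.neighbors(node))
    candidates = set(G.nodes()) - neighbors - {node}
    # Score each candidate by how many neighbors it shares with `node`.
    scored = sorted(
        ((len(neighbors & set(G.neighbors(c))), c) for c in candidates),
        reverse=True,
    )
    return [c for score, c in scored[:top_k] if score > 0]

G = nx.karate_club_graph()
recs = recommend_friends(G, 0)
```

Real recommender systems combine this triangle-closing signal with many other features, but common-neighbor counting is the structural core.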
# Please Network With One Another!
I would like to become a hub node in our PyData network :).
I also wish to create many triangle cliques with each of you.
Please exchange your cards with a neighbor.
## Programming Exercise 3 - Multi-class Classification and Neural Networks
```
# %load ../../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# load MATLAB files
from scipy.io import loadmat
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
#%config InlineBackend.figure_formats = {'pdf',}
%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
```
#### Load MATLAB datafiles
```
data = loadmat('data/ex3data1.mat')
data.keys()
weights = loadmat('data/ex3weights.mat')
weights.keys()
y = data['y']
# Add constant for intercept
X = np.c_[np.ones((data['X'].shape[0],1)), data['X']]
print('X: {} (with intercept)'.format(X.shape))
print('y: {}'.format(y.shape))
theta1, theta2 = weights['Theta1'], weights['Theta2']
print('theta1: {}'.format(theta1.shape))
print('theta2: {}'.format(theta2.shape))
sample = np.random.choice(X.shape[0], 20)
plt.imshow(X[sample,1:].reshape(-1,20).T)
plt.axis('off');
```
### Multiclass Classification
#### Logistic regression hypothesis
#### $$ h_{\theta}(x) = g(\theta^{T}x)$$
#### $$ g(z)=\frac{1}{1+e^{−z}} $$
```
def sigmoid(z):
return(1 / (1 + np.exp(-z)))
```
#### Regularized Cost Function
#### $$ J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big[-y^{(i)}\, log\,( h_\theta\,(x^{(i)}))-(1-y^{(i)})\,log\,(1-h_\theta(x^{(i)}))\big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
#### Vectorized Cost Function
#### $$ J(\theta) = \frac{1}{m}\big((\,log\,(g(X\theta))^Ty+(\,log\,(1-g(X\theta))^T(1-y)\big) + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$$
```
def lrcostFunctionReg(theta, reg, X, y):
m = y.size
h = sigmoid(X.dot(theta))
J = -1*(1/m)*(np.log(h).T.dot(y)+np.log(1-h).T.dot(1-y)) + (reg/(2*m))*np.sum(np.square(theta[1:]))
if np.isnan(J[0]):
return(np.inf)
return(J[0])
def lrgradientReg(theta, reg, X,y):
m = y.size
h = sigmoid(X.dot(theta.reshape(-1,1)))
grad = (1/m)*X.T.dot(h-y) + (reg/m)*np.r_[[[0]],theta[1:].reshape(-1,1)]
return(grad.flatten())
```
#### One-vs-all Classification
```
def oneVsAll(features, classes, n_labels, reg):
    initial_theta = np.zeros((features.shape[1],1))  # 401x1
    all_theta = np.zeros((n_labels, features.shape[1]))  # 10x401
for c in np.arange(1, n_labels+1):
res = minimize(lrcostFunctionReg, initial_theta, args=(reg, features, (classes == c)*1), method=None,
jac=lrgradientReg, options={'maxiter':50})
all_theta[c-1] = res.x
return(all_theta)
theta = oneVsAll(X, y, 10, 0.1)
```
#### One-vs-all Prediction
```
def predictOneVsAll(all_theta, features):
    probs = sigmoid(features.dot(all_theta.T))
# Adding one because Python uses zero based indexing for the 10 columns (0-9),
# while the 10 classes are numbered from 1 to 10.
return(np.argmax(probs, axis=1)+1)
pred = predictOneVsAll(theta, X)
print('Training set accuracy: {} %'.format(np.mean(pred == y.ravel())*100))
```
#### Multiclass Logistic Regression with scikit-learn
```
clf = LogisticRegression(C=10, penalty='l2', solver='liblinear')
# Scikit-learn fits intercept automatically, so we exclude first column with 'ones' from X when fitting.
clf.fit(X[:,1:],y.ravel())
pred2 = clf.predict(X[:,1:])
print('Training set accuracy: {} %'.format(np.mean(pred2 == y.ravel())*100))
```
### Neural Networks
```
def predict(theta_1, theta_2, features):
z2 = theta_1.dot(features.T)
    a2 = np.c_[np.ones((features.shape[0],1)), sigmoid(z2).T]
z3 = a2.dot(theta_2.T)
a3 = sigmoid(z3)
return(np.argmax(a3, axis=1)+1)
pred = predict(theta1, theta2, X)
print('Training set accuracy: {} %'.format(np.mean(pred == y.ravel())*100))
```
# Network metrics and analysis
```
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Centrality metrics
The following metrics are available: Degree, Closeness, Betweenness, Eigenvector
```
#let's start with a random graph:
G=nx.gnm_random_graph(100,1000)
GD=nx.gnm_random_graph(100,1000,directed=True)
nx.draw(GD)
```
## Degree centrality
```
#it works for directed and undirected graphs the same way, returning
#a dictionary whose keys are the nodes and values their centralities
G_deg_cent=nx.degree_centrality(G)
G_deg_cent
# For directed graphs we also get in_degree and out_degree,
#producing as output the same dictionary
GD_in_deg=nx.in_degree_centrality(GD)
GD_out_deg=nx.out_degree_centrality(GD)
```
## Betweenness centrality
betweenness_centrality(G, k=None, normalized=True, weight=None, endpoints=False, seed=None)
Betweenness centrality of a node v is the sum of the fraction of all-pairs shortest paths that pass through v:
$c_B(v) =\sum_{s,t \in V} \frac{\sigma(s, t|v)}{\sigma(s, t)}$
where $V$ is the set of nodes, $\sigma(s, t)$ is the number of shortest $(s, t)$-paths, and $\sigma(s, t|v)$ is the number of those paths passing through some node $v$ other than $s, t$. If $s = t$, $\sigma(s, t) = 1$, and if $v \in \{s, t\}$, $\sigma(s, t|v) = 0$.
k, integer: if given, use k node samples to estimate the betweenness; the higher, the better (and slower :) )
normalized: if True, normalize values
weight: use for weighted graphs
endpoints: whether to include endpoints in the shortest-path computation
```
G_bet=nx.betweenness_centrality(G, normalized=False)
#this holds for nodes; you can also compute betweenness for edges with:
G_e_bet=nx.edge_betweenness_centrality(G)
# Both functions return a dictionary, keyed by node or edge respectively
G_bet
```
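The `k` sampling parameter described above trades accuracy for speed on large graphs. A short sketch of how the approximation is called (the graph and sample size are arbitrary choices for illustration):

```python
import networkx as nx

G = nx.gnm_random_graph(100, 1000, seed=42)
# Estimate betweenness from 20 sampled source nodes instead of all 100;
# `seed` makes the node sampling reproducible.
approx_bet = nx.betweenness_centrality(G, k=20, normalized=True, seed=42)
exact_bet = nx.betweenness_centrality(G, normalized=True)
```

The approximate and exact dictionaries share the same keys; only the values differ, with the error shrinking as `k` grows toward the number of nodes.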
## Eigenvector centrality
Eigenvector centrality computes the centrality for a node based on the centralities of its neighbors. The centrality vector $\mathbf{x}$ satisfies
$\mathbf{Ax} = \lambda \mathbf{x}$
where $A$ is the adjacency matrix of the graph $G$ with eigenvalue $\lambda$, and node $i$'s centrality is the $i$-th component of $\mathbf{x}$. By virtue of the Perron–Frobenius theorem, there is a unique and positive solution if $\lambda$ is the largest eigenvalue of the adjacency matrix $A$.
Parameters:
**G** (graph) – A networkx graph
**max_iter** (integer, optional) – Maximum number of iterations in power method.
**tol** (float, optional) – Error tolerance used to check convergence in power method iteration.
**nstart** (dictionary, optional) – Starting value of eigenvector iteration for each node.
**weight** (None or string, optional) – If None, all edge weights are considered equal. Otherwise holds the name of the edge attribute used as weight.
```
G_eig=nx.eigenvector_centrality(G)
GD_eig=nx.eigenvector_centrality(GD)
G_eig
```
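If the power method fails to converge on a given graph, `max_iter` and `tol` from the parameter list above are the knobs to turn. A quick sketch (the graph and the specific values are arbitrary illustrations):

```python
import networkx as nx

G = nx.gnm_random_graph(100, 1000, seed=0)
# Allow more power-method iterations and a tighter convergence tolerance.
eig = nx.eigenvector_centrality(G, max_iter=500, tol=1e-08)
# Extract the five most central nodes from the returned dictionary.
top5 = sorted(eig, key=eig.get, reverse=True)[:5]
```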
## Exercise
Generate a Barabasi-Albert Graph with 200 nodes and compare the centrality metrics, plotting the node's metrics distributions.
Remark the differences (if any) among the random graph $G$ generated in this lecture.
```
ba=nx.barabasi_albert_graph(200,5)
nx.draw(ba)
deg_cen=nx.degree_centrality(ba)
np_deg_cen = np.array(list(deg_cen.values()))  # list() needed for Python 3 dict views
x, y=np.unique(np_deg_cen, return_counts=True)
plt.plot(x, y, linewidth=1.5)
plt.yscale('log')
plt.xscale('log')
plt.show()
```
## Link Analysis of Directed networks
NetworkX also contains specific algorithms for ranking nodes in directed networks; we focus on PageRank and HITS.
### PageRank
pagerank(G, alpha=0.85, personalization=None, max_iter=100, tol=1e-06, nstart=None, weight='weight', dangling=None)
PageRank computes a ranking of the nodes in the graph G based on the structure of the incoming links. It was originally designed as an algorithm to rank web pages (i.e. you are famous because others think you are).
The eigenvector calculation is done by the power iteration method and has no guarantee of convergence. The iteration will stop after *max_iter* iterations or an error tolerance of *number_of_nodes(G) x tol* has been reached.
The PageRank algorithm was designed for directed graphs but this algorithm does not check if the input graph is directed and will execute on undirected graphs by converting each edge in the directed graph to two edges.
**G** (graph) – A NetworkX graph. Undirected graphs will be converted to a directed graph with two directed edges for each undirected edge.
**alpha** (float, optional) – Damping parameter for PageRank, default=0.85.
**personalization** (dict, optional) – The “personalization vector” consisting of a dictionary with a key for every graph node and nonzero personalization value for each node. By default, a uniform distribution is used.
**max_iter** (integer, optional) – Maximum number of iterations in power method eigenvalue solver.
**tol** (float, optional) – Error tolerance used to check convergence in power method solver.
**nstart** (dictionary, optional) – Starting value of PageRank iteration for each node.
**weight** (key, optional) – Edge data key to use as weight. If None weights are set to 1.
**dangling** (dict, optional) – The outedges to be assigned to any “dangling” nodes, i.e., nodes without any outedges.
The dict key is the node the outedge points to and the dict value is the weight of that outedge. By default, dangling nodes are given outedges according to the personalization vector (uniform if not specified). This must be selected to result in an irreducible transition matrix (see notes under google_matrix). It may be common to have the dangling dict to be the same as the personalization dict.
```
GD_pr=nx.pagerank(GD)
#As before it returns a dictionary:
GD_pr
```
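The `alpha` and `personalization` parameters described above can be sketched as follows; the bias toward node 0 is an arbitrary illustration, not part of this lecture's setup:

```python
import networkx as nx

GD = nx.gnm_random_graph(100, 1000, directed=True, seed=0)
# Bias the teleport step of the random walk heavily toward node 0.
personalization = {n: (1.0 if n == 0 else 0.01) for n in GD.nodes()}
pr = nx.pagerank(GD, alpha=0.85, personalization=personalization)
```

Because the teleport mass concentrates on node 0, its PageRank score is pulled well above the uniform 1/100 baseline; the scores still sum to 1.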
## Hits - finding Hubs and Authorities
hits(G, max_iter=100, tol=1e-08, nstart=None, normalized=True)
The HITS algorithm computes two numbers for a node. Authorities estimates the node value based on the incoming links. Hubs estimates the node value based on outgoing links.
**G** (graph) – A NetworkX graph
**max_iter** (integer, optional) – Maximum number of iterations in power method
**tol** (float, optional) – Error tolerance used to check convergence in power method iteration
**nstart** (dictionary, optional) – Starting value of each node for power method iteration
**normalized** (bool (default=True)) – Normalize results by the sum of all of the values
Returns:
**(hubs,authorities)** – Two dictionaries keyed by node containing the hub and authority values.
```
GD_ha=nx.hits(GD)
```
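Since `nx.hits` returns the two dictionaries as a `(hubs, authorities)` tuple, it is often clearer to unpack them directly. A quick sketch (the graph parameters are arbitrary):

```python
import networkx as nx

GD = nx.gnm_random_graph(100, 1000, directed=True, seed=1)
# Unpack the (hubs, authorities) pair returned by HITS.
hubs, authorities = nx.hits(GD)
# The five strongest hubs, i.e. nodes whose outgoing links point at good authorities.
top_hubs = sorted(hubs, key=hubs.get, reverse=True)[:5]
```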
# Name
Data preparation using SparkSQL on YARN with Cloud Dataproc
# Label
Cloud Dataproc, GCP, Cloud Storage, YARN, SparkSQL, Kubeflow, pipelines, components
# Summary
A Kubeflow Pipeline component to prepare data by submitting a SparkSql job on YARN to Cloud Dataproc.
# Details
## Intended use
Use the component to run an Apache SparkSql job as one preprocessing step in a Kubeflow Pipeline.
## Runtime arguments
Argument| Description | Optional | Data type| Accepted values| Default |
:--- | :---------- | :--- | :------- | :------ | :------
project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No| GCPProjectID | | |
region | The Cloud Dataproc region to handle the request. | No | GCPRegion|
cluster_name | The name of the cluster to run the job. | No | String| | |
queries | The queries to execute the SparkSQL job. Specify multiple queries in one string by separating them with semicolons. You do not need to terminate queries with semicolons. | Yes | List | | None |
query_file_uri | The HCFS URI of the script that contains the SparkSQL queries.| Yes | GCSPath | | None |
script_variables | Mapping of the query’s variable names to their values (equivalent to the SparkSQL command: SET name="value";).| Yes| Dict | | None |
sparksql_job | The payload of a [SparkSqlJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/SparkSqlJob). | Yes | Dict | | None |
job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None |
wait_interval | The number of seconds to pause between polling the operation. | Yes |Integer | | 30 |
## Output
Name | Description | Type
:--- | :---------- | :---
job_id | The ID of the created job. | String
## Cautions & requirements
To use the component, you must:
* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).
* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).
* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.
* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project.
## Detailed Description
This component creates a SparkSQL job from the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).
Follow these steps to use the component in a pipeline:
1. Install the Kubeflow Pipeline SDK:
```
%%capture --no-stderr
!pip3 install kfp --upgrade
```
2. Load the component using KFP SDK
```
import kfp.components as comp
dataproc_submit_sparksql_job_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/dataproc/submit_sparksql_job/component.yaml')
help(dataproc_submit_sparksql_job_op)
```
### Sample
Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.
#### Setup a Dataproc cluster
[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code.
#### Prepare a SparkSQL job
Either put your SparkSQL queries in the `queries` list, or upload your SparkSQL queries into a file in a Cloud Storage bucket and then enter the file's Cloud Storage path in `query_file_uri`. In this sample, we will use a hard-coded query in the `queries` list to select data from a public CSV file in Cloud Storage.
For more details about Spark SQL, see [Spark SQL, DataFrames and Datasets Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html).
#### Set sample parameters
```
PROJECT_ID = '<Please put your project ID here>'
CLUSTER_NAME = '<Please put your existing cluster name here>'
REGION = 'us-central1'
QUERY = '''
DROP TABLE IF EXISTS natality_csv;
CREATE EXTERNAL TABLE natality_csv (
source_year BIGINT, year BIGINT, month BIGINT, day BIGINT, wday BIGINT,
state STRING, is_male BOOLEAN, child_race BIGINT, weight_pounds FLOAT,
plurality BIGINT, apgar_1min BIGINT, apgar_5min BIGINT,
mother_residence_state STRING, mother_race BIGINT, mother_age BIGINT,
gestation_weeks BIGINT, lmp STRING, mother_married BOOLEAN,
mother_birth_state STRING, cigarette_use BOOLEAN, cigarettes_per_day BIGINT,
alcohol_use BOOLEAN, drinks_per_week BIGINT, weight_gain_pounds BIGINT,
born_alive_alive BIGINT, born_alive_dead BIGINT, born_dead BIGINT,
ever_born BIGINT, father_race BIGINT, father_age BIGINT,
record_weight BIGINT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'gs://public-datasets/natality/csv';
SELECT * FROM natality_csv LIMIT 10;'''
EXPERIMENT_NAME = 'Dataproc - Submit SparkSQL Job'
```
#### Example pipeline that uses the component
```
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Dataproc submit SparkSQL job pipeline',
description='Dataproc submit SparkSQL job pipeline'
)
def dataproc_submit_sparksql_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
queries = json.dumps([QUERY]),
query_file_uri = '',
script_variables = '',
sparksql_job='',
job='',
wait_interval='30'
):
dataproc_submit_sparksql_job_op(
project_id=project_id,
region=region,
cluster_name=cluster_name,
queries=queries,
query_file_uri=query_file_uri,
script_variables=script_variables,
sparksql_job=sparksql_job,
job=job,
wait_interval=wait_interval)
```
#### Compile the pipeline
```
pipeline_func = dataproc_submit_sparksql_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
## References
* [Spark SQL, DataFrames and Datasets Guide](https://spark.apache.org/docs/latest/sql-programming-guide.html)
* [SparkSqlJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/SparkSqlJob)
* [Cloud Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs)
## License
By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
```
#RNN-LSTM project on the Google stock price trend, for data ranging from 2005-01-01 to 2020-12-31
#The model predicts the stock price trend for January 2021
#Part 1 - Data Preprocessing
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
dataset_train = pd.read_csv('GOOG_Stock_Train.csv')
training_set = dataset_train.iloc[:, 1:2].values
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range=(0, 1))
training_set_scaled = sc.fit_transform(training_set)
X_train = []
y_train = []
for i in range(60, 4027):
    X_train.append(training_set_scaled[i-60:i, 0])
    y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
#Part 2 - Building the LSTM model with 6 LSTM layers
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
regressor = Sequential()
regressor.add(LSTM(units=70, return_sequences=True, input_shape=(X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=70, return_sequences=False))
regressor.add(Dropout(0.2))
regressor.add(Dense(units=1))
regressor.compile(optimizer='adam', loss='mean_squared_error')
regressor.fit(X_train, y_train, batch_size=32, epochs=100)
#Part 3 - Making the prediction for January 2021
dataset_test = pd.read_csv('GOOG_Stock_Test.csv')
Actual_stock_Price = dataset_test.iloc[:, 1:2].values
dataset_total = pd.concat((dataset_train['Open'], dataset_test['Open']), axis=0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1, 1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 79):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock = sc.inverse_transform(regressor.predict(X_test))
#Visualizing the results
plt.plot(Actual_stock_Price, color='red', label='Actual Stock Price of Jan 2021')
plt.plot(predicted_stock, color='blue', label='Predicted Stock Price of Jan 2021')
plt.title('Google Stock Price Model')
plt.xlabel('Time')
plt.ylabel('Stock Prices')
plt.legend()
plt.show()
#Stock prices are random in nature. Based on past data alone, it is difficult to predict the next day's
#price: prices depend on many factors and show little correlation with previous prices. At best we can
#predict the trend of the stock price (upward/downward), which is what this model attempts.
```
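The visual comparison above can be complemented with a simple error metric. Below is a standalone sketch that computes the root mean squared error between the actual and predicted series; the arrays here are synthetic stand-ins, so substitute `Actual_stock_Price` and `predicted_stock` from the code above.

```python
import numpy as np

def trend_rmse(actual, predicted):
    """Root mean squared error between two price series."""
    actual = np.asarray(actual, dtype=float).ravel()
    predicted = np.asarray(predicted, dtype=float).ravel()
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Synthetic stand-ins for the January 2021 series above
actual = np.array([1728.0, 1740.5, 1735.2, 1752.9])
predicted = np.array([1725.0, 1738.0, 1739.0, 1748.0])
print(trend_rmse(actual, predicted))  # ~3.66
```

A lower RMSE means the predicted curve hugs the actual one more closely, though even a large RMSE can coexist with a correctly predicted trend direction.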

Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Train a model using a custom Docker image
In this tutorial, learn how to use a custom Docker image when training models with Azure Machine Learning.
The example scripts in this article are used to classify pet images by creating a convolutional neural network.
## Set up the experiment
This section sets up the training experiment by initializing a workspace, creating an experiment, and uploading the training data and training scripts.
### Initialize a workspace
The Azure Machine Learning workspace is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create. In the Python SDK, you can access the workspace artifacts by creating a `workspace` object.
Create a workspace object from the config.json file.
```
from azureml.core import Workspace
ws = Workspace.from_config()
```
### Prepare scripts
Create a directory titled `fastai-example`.
```
import os
os.makedirs('fastai-example', exist_ok=True)
```
Then run the cell below to create the training script `train.py` in the directory.
```
%%writefile fastai-example/train.py
from fastai.vision.all import *
path = untar_data(URLs.PETS)
path.ls()
files = get_image_files(path/"images")
len(files)
#(Path('/home/ashwin/.fastai/data/oxford-iiit-pet/images/yorkshire_terrier_102.jpg'),Path('/home/ashwin/.fastai/data/oxford-iiit-pet/images/great_pyrenees_102.jpg'))
def label_func(f): return f[0].isupper()
#To get our data ready for a model, we need to put it in a DataLoaders object. Here we have a function that labels using the file names, so we will use ImageDataLoaders.from_name_func. There are other factory methods of ImageDataLoaders that could be more suitable for your problem, so make sure to check them all in vision.data.
dls = ImageDataLoaders.from_name_func(path, files, label_func, item_tfms=Resize(224))
#We have passed to this function the directory we're working in, the files we grabbed, our label_func and one last piece as item_tfms: this is a Transform applied on all items of our dataset that will resize each image to 224 by 224, by using a random crop on the largest dimension to make it a square, then resizing to 224 by 224. If we didn't pass this, we would get an error later as it would be impossible to batch the items together.
dls.show_batch()
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```
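In the Oxford-IIIT Pet dataset, filenames of cat images start with an uppercase letter, which is what the `label_func` in the script above exploits. A standalone check with hypothetical filenames:

```python
def label_func(fname):
    """True (cat) if the filename starts with an uppercase letter."""
    return fname[0].isupper()

print(label_func("Bengal_101.jpg"))             # cat breed -> True
print(label_func("yorkshire_terrier_102.jpg"))  # dog breed -> False
```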
### Define your environment
Create an environment object and enable Docker.
```
from azureml.core import Environment
fastai_env = Environment("fastai")
fastai_env.docker.enabled = True
```
This specified base image supports the fast.ai library which allows for distributed deep learning capabilities. For more information, see the [fast.ai DockerHub](https://hub.docker.com/u/fastdotai).
When you are using your custom Docker image, you might already have your Python environment properly set up. In that case, set the `user_managed_dependencies` flag to True in order to leverage your custom image's built-in python environment.
```
fastai_env.docker.base_image = "fastdotai/fastai:latest"
fastai_env.python.user_managed_dependencies = True
```
To use an image from a private container registry that is not in your workspace, you must use `docker.base_image_registry` to specify the address of the repository as well as a username and password.
```python
fastai_env.docker.base_image_registry.address = "myregistry.azurecr.io"
fastai_env.docker.base_image_registry.username = "username"
fastai_env.docker.base_image_registry.password = "password"
```
It is also possible to use a custom Dockerfile. Use this approach if you need to install non-Python packages as dependencies and remember to set the base image to None.
Specify docker steps as a string:
```python
dockerfile = r"""
FROM mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04
RUN echo "Hello from custom container!"
"""
```
Set base image to None, because the image is defined by dockerfile:
```python
fastai_env.docker.base_image = None
fastai_env.docker.base_dockerfile = dockerfile
```
Alternatively, load the string from a file:
```python
fastai_env.docker.base_image = None
fastai_env.docker.base_dockerfile = "./Dockerfile"
```
### Create or attach existing AmlCompute
You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, you create `AmlCompute` as your training compute resource.
**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
# use get_status() to get a detailed status for the current AmlCompute
print(compute_target.get_status().serialize())
```
### Create a ScriptRunConfig
This ScriptRunConfig will configure your job for execution on the desired compute target.
```
from azureml.core import ScriptRunConfig
fastai_config = ScriptRunConfig(source_directory='fastai-example',
script='train.py',
compute_target=compute_target,
environment=fastai_env)
```
### Submit your run
When a training run is submitted using a ScriptRunConfig object, the submit method returns an object of type ScriptRun. The returned ScriptRun object gives you programmatic access to information about the training run.
```
from azureml.core import Experiment
run = Experiment(ws,'fastai-custom-image').submit(fastai_config)
run.wait_for_completion(show_output=True)
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-documentation/tree/master/Template/template.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-documentation/blob/master/Template/template.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-documentation/blob/master/Template/template.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
# U.S. Census Data
The United States Census Bureau Topologically Integrated Geographic Encoding and Referencing (TIGER) dataset contains the 2018 boundaries for the primary governmental divisions of the United States. In addition to the fifty states, the Census Bureau treats the District of Columbia, Puerto Rico, and each of the island areas (American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands) as the statistical equivalents of States for the purpose of data presentation. Each feature represents a state or state equivalent.
For full technical details on all TIGER 2018 products, see the [TIGER technical documentation](https://www2.census.gov/geo/pdfs/maps-data/data/tiger/tgrshp2018/TGRSHP2018_TechDoc.pdf).
* [TIGER: US Census States](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_States): `ee.FeatureCollection("TIGER/2018/States")`
* [TIGER: US Census Counties](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_Counties): `ee.FeatureCollection("TIGER/2018/Counties")`
* [TIGER: US Census Tracts](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Tracts_DP1): `ee.FeatureCollection("TIGER/2010/Tracts_DP1")`
* [TIGER: US Census Blocks](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Blocks): `ee.FeatureCollection("TIGER/2010/Blocks")`
* [TIGER: US Census Roads](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2016_Roads): `ee.FeatureCollection("TIGER/2016/Roads")`
* [TIGER: US Census 5-digit ZIP Code](https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_ZCTA5): `ee.FeatureCollection("TIGER/2010/ZCTA5")`
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## TIGER: US Census States
https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_States

### Displaying data
```
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
Map.centerObject(states, 4)
Map.addLayer(states, {}, 'US States')
Map.addLayerControl() #This line is not needed for ipyleaflet-based Map
Map
```
### Displaying vector as raster
```
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
image = ee.Image().paint(states, 0, 2)
Map.centerObject(states, 4)
Map.addLayer(image, {}, 'US States')
Map.addLayerControl()
Map
```
### Select by attribute
#### Select one single state
```
Map = emap.Map(center=[40, -100], zoom=4)
tn = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.eq("NAME", 'Tennessee'))
Map.centerObject(tn, 6)
Map.addLayer(tn, {}, 'Tennessee')
Map.addLayerControl()
Map
tn = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.eq("NAME", 'Tennessee')) \
.first()
props = tn.toDictionary().getInfo()
print(props)
```
#### Select multiple states
```
Map = emap.Map(center=[40, -100], zoom=4)
selected = ee.FeatureCollection('TIGER/2018/States') \
.filter(ee.Filter.inList("NAME", ['Tennessee', 'Alabama', 'Georgia']))
Map.centerObject(selected, 6)
Map.addLayer(selected, {}, 'Selected states')
Map.addLayerControl()
Map
```
#### Printing all values of a column
```
states = ee.FeatureCollection('TIGER/2018/States').sort('ALAND', False)
names = states.aggregate_array("STUSPS").getInfo()
print(names)
areas = states.aggregate_array("ALAND").getInfo()
print(areas)
import matplotlib.pyplot as plt
%matplotlib notebook
plt.bar(names, areas)
plt.show()
```
#### Descriptive statistics of a column
For example, we can calculate the total land area of all states:
```
states = ee.FeatureCollection('TIGER/2018/States')
area_m2 = states.aggregate_sum("ALAND").getInfo()
area_km2 = area_m2 / 1000000
print("Total land area: ", area_km2, " km2")
states = ee.FeatureCollection('TIGER/2018/States')
stats = states.aggregate_stats("ALAND").getInfo()
print(stats)
```
#### Add a new column to the attribute table
```
states = ee.FeatureCollection('TIGER/2018/States').sort('ALAND', False)
states = states.map(lambda x: x.set('AreaKm2', x.area().divide(1000000).toLong()))
first = states.first().toDictionary().getInfo()
print(first)
```
#### Set symbology based on column values
```
Map = emap.Map(center=[40, -100], zoom=4)
states = ee.FeatureCollection('TIGER/2018/States')
visParams = {
'palette': ['purple', 'blue', 'green', 'yellow', 'orange', 'red'],
'min': 500000000.0,
'max': 5e+11,
'opacity': 0.8,
}
image = ee.Image().float().paint(states, 'ALAND')
Map.addLayer(image, visParams, 'TIGER/2018/States')
Map.addLayerControl()
Map
```
#### Download attribute table as a CSV
```
states = ee.FeatureCollection('TIGER/2018/States')
url = states.getDownloadURL(filetype="csv", selectors=['NAME', 'ALAND', 'REGION', 'STATEFP', 'STUSPS'], filename="states")
print(url)
```
#### Formatting the output
```
first = states.first()
props = first.propertyNames().getInfo()
print(props)
props = states.first().toDictionary(props).getInfo()
print(props)
for key, value in props.items():
print("{}: {}".format(key, value))
```
#### Download data as shapefile to Google Drive
```
# function for converting GeometryCollection to Polygon/MultiPolygon
def filter_polygons(ftr):
geometries = ftr.geometry().geometries()
geometries = geometries.map(lambda geo: ee.Feature( ee.Geometry(geo)).set('geoType', ee.Geometry(geo).type()))
polygons = ee.FeatureCollection(geometries).filter(ee.Filter.eq('geoType', 'Polygon')).geometry()
return ee.Feature(polygons).copyProperties(ftr)
states = ee.FeatureCollection('TIGER/2018/States')
new_states = states.map(filter_polygons)
col_names = states.first().propertyNames().getInfo()
print("Column names: ", col_names)
url = new_states.getDownloadURL("shp", col_names, 'states');
print(url)
desc = 'states'
# Set configuration parameters for output vector
task_config = {
'folder': 'gee-data', # output Google Drive folder
'fileFormat': 'SHP',
'selectors': col_names # a list of properties/attributes to be exported
}
print('Exporting {}'.format(desc))
task = ee.batch.Export.table.toDrive(new_states, desc, **task_config)
task.start()
```
## TIGER: US Census Blocks
https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Blocks

```
Map = emap.Map(center=[40, -100], zoom=4)
dataset = ee.FeatureCollection('TIGER/2010/Blocks') \
.filter(ee.Filter.eq('statefp10', '47'))
pop = dataset.aggregate_sum('pop10')
print("The number of census blocks: ", dataset.size().getInfo())
print("Total population: ", pop.getInfo())
Map.setCenter(-86.79, 35.87, 6)
Map.addLayer(dataset, {}, "Census Block", False)
visParams = {
'min': 0.0,
'max': 700.0,
'palette': ['black', 'brown', 'yellow', 'orange', 'red']
}
image = ee.Image().float().paint(dataset, 'pop10')
Map.setCenter(-73.99172, 40.74101, 13)
Map.addLayer(image, visParams, 'TIGER/2010/Blocks')
Map.addLayerControl()
Map
```
## TIGER: US Census Counties 2018
https://developers.google.com/earth-engine/datasets/catalog/TIGER_2018_Counties

```
Map = emap.Map(center=[40, -100], zoom=4)
Map.setCenter(-110, 40, 5)
states = ee.FeatureCollection('TIGER/2018/States')
# .filter(ee.Filter.eq('STUSPS', 'TN'))
# // Turn the strings into numbers
states = states.map(lambda f: f.set('STATEFP', ee.Number.parse(f.get('STATEFP'))))
state_image = ee.Image().float().paint(states, 'STATEFP')
visParams = {
'palette': ['purple', 'blue', 'green', 'yellow', 'orange', 'red'],
'min': 0,
'max': 50,
'opacity': 0.8,
};
counties = ee.FeatureCollection('TIGER/2016/Counties')
# print(counties.first().propertyNames().getInfo())
image = ee.Image().paint(states, 0, 2)
# Map.setCenter(-99.844, 37.649, 4)
# Map.addLayer(image, {'palette': 'FF0000'}, 'TIGER/2018/States')
Map.addLayer(state_image, visParams, 'TIGER/2016/States');
Map.addLayer(ee.Image().paint(counties, 0, 1), {}, 'TIGER/2016/Counties')
Map.addLayerControl()
Map
```
## TIGER: US Census Tracts
https://developers.google.com/earth-engine/datasets/catalog/TIGER_2010_Tracts_DP1
http://magic.lib.uconn.edu/magic_2/vector/37800/demogprofilehousect_37800_0000_2010_s100_census_1_t.htm

```
Map = emap.Map(center=[40, -100], zoom=4)
dataset = ee.FeatureCollection('TIGER/2010/Tracts_DP1')
visParams = {
'min': 0,
'max': 4000,
'opacity': 0.8,
'palette': ['#ece7f2', '#d0d1e6', '#a6bddb', '#74a9cf', '#3690c0', '#0570b0', '#045a8d', '#023858']
}
# print(dataset.first().propertyNames().getInfo())
# Turn the strings into numbers
dataset = dataset.map(lambda f: f.set('dp0010001', ee.Number.parse(f.get('dp0010001'))))
# Map.setCenter(-103.882, 43.036, 8)
image = ee.Image().float().paint(dataset, 'dp0010001')
Map.addLayer(image, visParams, 'TIGER/2010/Tracts_DP1')
Map.addLayerControl()
Map
```
## TIGER: US Census Roads
https://developers.google.com/earth-engine/datasets/catalog/TIGER_2016_Roads

```
Map = emap.Map(center=[40, -100], zoom=4)
fc = ee.FeatureCollection('TIGER/2016/Roads')
Map.setCenter(-73.9596, 40.7688, 12)
Map.addLayer(fc, {}, 'Census roads')
Map.addLayerControl()
Map
```
# gym-anytrading
`AnyTrading` is a collection of [OpenAI Gym](https://github.com/openai/gym) environments for reinforcement learning-based trading algorithms.
Trading algorithms are mostly implemented in two markets: [FOREX](https://en.wikipedia.org/wiki/Foreign_exchange_market) and [Stock](https://en.wikipedia.org/wiki/Stock). AnyTrading aims to provide some Gym environments to improve and facilitate the procedure of developing and testing RL-based algorithms in this area. This purpose is obtained by implementing three Gym environments: **TradingEnv**, **ForexEnv**, and **StocksEnv**.
TradingEnv is an abstract environment which is defined to support all kinds of trading environments. ForexEnv and StocksEnv are simply two environments that inherit and extend TradingEnv. In the future sections, more explanations will be given about them but before that, some environment properties should be discussed.
**Note:** For experts, it is recommended to check out the [gym-mtsim](https://github.com/AminHP/gym-mtsim) project.
## Installation
### Via PIP
```bash
pip install gym-anytrading
```
### From Repository
```bash
git clone https://github.com/AminHP/gym-anytrading
cd gym-anytrading
pip install -e .
## or
pip install --upgrade --no-deps --force-reinstall https://github.com/AminHP/gym-anytrading/archive/master.zip
```
## Environment Properties
First of all, **you can't simply expect an RL agent to do everything for you and just sit back on your chair in such complex trading markets!**
Things need to be simplified as much as possible in order to let the agent learn in a faster and more efficient way. In all trading algorithms, the first thing that should be done is to define **actions** and **positions**. In the two following subsections, I will explain these actions and positions and how to simplify them.
### Trading Actions
If you search on the Internet for trading algorithms, you will find them using numerous actions such as **Buy**, **Sell**, **Hold**, **Enter**, **Exit**, etc.
Referring to the first statement of this section, a typical RL agent can only solve a part of the main problem in this area. If you work in trading markets you will learn that deciding whether to hold, enter, or exit a pair (in FOREX) or stock (in Stocks) is a statistical decision depending on many parameters such as your budget, pairs or stocks you trade, your money distribution policy in multiple markets, etc. It's a massive burden for an RL agent to consider all these parameters and may take years to develop such an agent! In this case, you certainly will not use this environment but you will extend your own.
So after months of work, I finally found out that these actions just make things complicated with no real positive impact. In fact, they just increase the learning time and an action like **Hold** will be barely used by a well-trained agent because it doesn't want to miss a single penny. Therefore there is no need to have such numerous actions and only `Sell=0` and `Buy=1` actions are adequate to train an agent just as well.
### Trading Positions
If you're not familiar with trading positions, refer [here](https://en.wikipedia.org/wiki/Position_\(finance\)). It's a very important concept and you should learn it as soon as possible.
In a simple vision: **Long** position wants to buy shares when prices are low and profit by sticking with them while their value is going up, and **Short** position wants to sell shares with high value and use this value to buy shares at a lower value, keeping the difference as profit.
Again, in some trading algorithms, you may find numerous positions such as **Short**, **Long**, **Flat**, etc. As discussed earlier, I use only `Short=0` and `Long=1` positions.
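These binary actions and positions are naturally expressed as small integer enumerations. Below is a minimal sketch of the idea; the names mirror the values quoted above, but this is an illustration, not the library's source code.

```python
from enum import IntEnum

class Actions(IntEnum):
    Sell = 0
    Buy = 1

class Positions(IntEnum):
    Short = 0
    Long = 1

def opposite(position):
    """A trade flips the position: Short becomes Long and vice versa."""
    return Positions.Long if position == Positions.Short else Positions.Short

print(opposite(Positions.Short).name)  # Long
```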
## Trading Environments
As noted earlier, it's now time to introduce the three environments. Before creating this project, I spent a long time searching for a simple and flexible Gym environment for any trading market but didn't find one. The ones I found were mostly complex codebases with many unclear parameters that you couldn't simply read and comprehend. So I decided to implement this project with a great focus on simplicity, flexibility, and comprehensiveness.
In the three following subsections, I will introduce our trading environments and in the next section, some IPython examples will be mentioned and briefly explained.
### TradingEnv
TradingEnv is an abstract class which inherits `gym.Env`. This class aims to provide a general-purpose environment for all kinds of trading markets. Here I explain its public properties and methods. But feel free to take a look at the complete [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/trading_env.py).
* Properties:
> `df`: An abbreviation for **DataFrame**. It's a **pandas'** DataFrame which contains your dataset and is passed in the class' constructor.
>
> `prices`: Real prices over time. Used to calculate profit and render the environment.
>
> `signal_features`: Extracted features over time. Used to create *Gym observations*.
>
> `window_size`: Number of ticks (current and previous ticks) returned as a *Gym observation*. It is passed in the class' constructor.
>
> `action_space`: The *Gym action_space* property. Containing discrete values of **0=Sell** and **1=Buy**.
>
> `observation_space`: The *Gym observation_space* property. Each observation is a window on `signal_features` from index **current_tick - window_size + 1** to **current_tick**. So `_start_tick` of the environment would be equal to `window_size`. In addition, the initial value of `_last_trade_tick` is **window_size - 1**.
>
> `shape`: Shape of a single observation.
>
> `history`: Stores the information of all steps.
* Methods:
> `seed`: Typical *Gym seed* method.
>
> `reset`: Typical *Gym reset* method.
>
> `step`: Typical *Gym step* method.
>
> `render`: Typical *Gym render* method. Renders the information of the environment's current tick.
>
> `render_all`: Renders the whole environment.
>
> `close`: Typical *Gym close* method.
* Abstract Methods:
> `_process_data`: It is called in the constructor and returns `prices` and `signal_features` as a tuple. In different trading markets, different features need to be obtained. So this method enables our TradingEnv to be a general-purpose environment and specific features can be returned for specific environments such as *FOREX*, *Stocks*, etc.
>
> `_calculate_reward`: The reward function for the RL agent.
>
> `_update_profit`: Calculates and updates the total profit the RL agent has achieved so far. Profit is measured in units of currency, starting from *1.0* unit (Profit = FinalMoney / StartingMoney).
>
> `max_possible_profit`: The maximum possible profit that an RL agent can obtain regardless of trade fees.
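The observation windowing described for `observation_space` can be sketched in NumPy; the feature array and window size here are hypothetical:

```python
import numpy as np

# Hypothetical features: 10 ticks, 2 features per tick
signal_features = np.arange(20).reshape(10, 2)
window_size = 4

def get_observation(current_tick):
    """Window of the last `window_size` ticks, ending at current_tick."""
    return signal_features[current_tick - window_size + 1 : current_tick + 1]

start_tick = window_size  # earliest tick with a full window behind it
obs = get_observation(start_tick)
print(obs.shape)  # (4, 2)
```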
### ForexEnv
This is a concrete class which inherits TradingEnv and implements its abstract methods. Also, it has some specific properties for the *FOREX* market. For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/forex_env.py).
* Properties:
> `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor.
>
> `unit_side`: Specifies the side you start your trading. Containing string values of **left** (default value) and **right**. As you know, there are two sides in a currency pair in *FOREX*. For example in the *EUR/USD* pair, when you choose the `left` side, your currency unit is *EUR* and you start your trading with 1 EUR. It is passed in the class' constructor.
>
> `trade_fee`: A default constant fee which is subtracted from the real prices on every trade.
### StocksEnv
Same as ForexEnv but for the *Stock* market. For more information refer to the [source code](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/envs/stocks_env.py).
* Properties:
> `frame_bound`: A tuple which specifies the start and end of `df`. It is passed in the class' constructor.
>
> `trade_fee_bid_percent`: A default constant fee percentage for bids. For example with trade_fee_bid_percent=0.01, you will lose 1% of your money every time you sell your shares.
>
> `trade_fee_ask_percent`: A default constant fee percentage for asks. For example with trade_fee_ask_percent=0.005, you will lose 0.5% of your money every time you buy some shares.
Besides, you can create your own customized environment by extending TradingEnv or even ForexEnv or StocksEnv with your desired policies for calculating reward, profit, fee, etc.
## Examples
### Create an environment
```
import gym
import gym_anytrading
env = gym.make('forex-v0')
# env = gym.make('stocks-v0')
```
- This will create the default environment. You can change any parameters such as dataset, frame_bound, etc.
### Create an environment with custom parameters
I put two default datasets for [*FOREX*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/FOREX_EURUSD_1H_ASK.csv) and [*Stocks*](https://github.com/AminHP/gym-anytrading/blob/master/gym_anytrading/datasets/data/STOCKS_GOOGL.csv) but you can use your own.
```
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL
custom_env = gym.make('forex-v0',
df = FOREX_EURUSD_1H_ASK,
window_size = 10,
frame_bound = (10, 300),
unit_side = 'right')
# custom_env = gym.make('stocks-v0',
# df = STOCKS_GOOGL,
# window_size = 10,
# frame_bound = (10, 300))
```
- Note that the first element of `frame_bound` should be greater than or equal to `window_size`.
### Print some information
```
print("env information:")
print("> shape:", env.shape)
print("> df.shape:", env.df.shape)
print("> prices.shape:", env.prices.shape)
print("> signal_features.shape:", env.signal_features.shape)
print("> max_possible_profit:", env.max_possible_profit())
print()
print("custom_env information:")
print("> shape:", custom_env.shape)
print("> df.shape:", custom_env.df.shape)
print("> prices.shape:", custom_env.prices.shape)
print("> signal_features.shape:", custom_env.signal_features.shape)
print("> max_possible_profit:", custom_env.max_possible_profit())
```
- Here `max_possible_profit` signifies that if the market didn't have trade fees, you could have earned **4.054414887146586** (or **1.122900180008982**) units of currency by starting with **1.0**. In other words, your money is almost *quadrupled*.
### Plot the environment
```
env.reset()
env.render()
```
- **Short** and **Long** positions are shown in `red` and `green` colors.
- As you see, the starting *position* of the environment is always **Short**.
### A complete example
```
import gym
import gym_anytrading
from gym_anytrading.envs import TradingEnv, ForexEnv, StocksEnv, Actions, Positions
from gym_anytrading.datasets import FOREX_EURUSD_1H_ASK, STOCKS_GOOGL
import matplotlib.pyplot as plt
env = gym.make('forex-v0', frame_bound=(50, 100), window_size=10)
# env = gym.make('stocks-v0', frame_bound=(50, 100), window_size=10)
observation = env.reset()
while True:
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
# env.render()
if done:
print("info:", info)
break
plt.cla()
env.render_all()
plt.show()
```
- You can use the `render_all` method to avoid rendering on each step and save time.
- As you see, the first **10** points (`window_size`=10) on the plot don't have a *position*, because they aren't involved in calculating reward, profit, etc.; they just display the first observations. So the environment's `_start_tick` and initial `_last_trade_tick` are **10** and **9** respectively.
#### Mix with `stable-baselines` and `quantstats`
[Here](https://github.com/AminHP/gym-anytrading/blob/master/examples/a2c_quantstats.ipynb) is an example that mixes `gym-anytrading` with the mentioned famous libraries and shows how to utilize our trading environments in other RL or trading libraries.
### Extend and manipulate TradingEnv
In case you want to process data and extract features outside the environment, this can be done in two ways:
**Method 1 (Recommended):**
```
def my_process_data(env):
start = env.frame_bound[0] - env.window_size
end = env.frame_bound[1]
prices = env.df.loc[:, 'Low'].to_numpy()[start:end]
signal_features = env.df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
return prices, signal_features
class MyForexEnv(ForexEnv):
_process_data = my_process_data
env = MyForexEnv(df=FOREX_EURUSD_1H_ASK, window_size=12, frame_bound=(12, len(FOREX_EURUSD_1H_ASK)))
```
**Method 2:**
```
def my_process_data(df, window_size, frame_bound):
start = frame_bound[0] - window_size
end = frame_bound[1]
prices = df.loc[:, 'Low'].to_numpy()[start:end]
signal_features = df.loc[:, ['Close', 'Open', 'High', 'Low']].to_numpy()[start:end]
return prices, signal_features
class MyStocksEnv(StocksEnv):
def __init__(self, prices, signal_features, **kwargs):
self._prices = prices
self._signal_features = signal_features
super().__init__(**kwargs)
def _process_data(self):
return self._prices, self._signal_features
prices, signal_features = my_process_data(df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL)))
env = MyStocksEnv(prices, signal_features, df=STOCKS_GOOGL, window_size=30, frame_bound=(30, len(STOCKS_GOOGL)))
```
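The same extension pattern applies to the reward logic described earlier. Below is a minimal sketch of overriding `_calculate_reward`; a stub class stands in for `ForexEnv` so the snippet runs without the library installed, and the reward rule itself is purely hypothetical:

```python
# Stub standing in for gym_anytrading's ForexEnv so this sketch runs
# standalone (assumption: the real class exposes `prices` and
# `_current_tick` attributes as used below).
class ForexEnvStub:
    def __init__(self, prices):
        self.prices = prices
        self._current_tick = 1

class MyRewardEnv(ForexEnvStub):
    def _calculate_reward(self, action):
        # Hypothetical reward: the raw price difference over the last tick.
        return self.prices[self._current_tick] - self.prices[self._current_tick - 1]

env = MyRewardEnv([1.10, 1.12, 1.11])
print(env._calculate_reward(action=0))  # ~0.02
```

With the real library, `MyRewardEnv` would inherit from `ForexEnv` (or `StocksEnv`) instead of the stub.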
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import sys
sys.path.append('../../pyutils')
import metrics
import utils
```
# Introduction
In an undirected graph model, each vertex represents a random variable, and the absence of an edge between two vertices means that the two random variables are conditionally independent given the others. The graph gives a visual way of understanding the joint distribution of the entire set of random variables.
This graph is also called a Markov Random Field.
Sparse graphs have a small number of edges and are convenient for interpretation.
Each edge is parametrized by its value (or potential), which encodes the strength of the conditional dependence between the random variables.
Challenges of graphical models are:
- model selection (graph structure)
- estimation of the edge parameters from data (learning)
- computation of marginal random variable probabilities and expectations (inference)
# Markov Graphs and Their Properties
Let a graph $\mathcal{G}$ be a pair $(V,E)$ with $V$ a set of vertices and $E$ a set of edges.
Two vertices $X$ and $Y$ are adjacent if there is an edge between them: $X \sim Y$.
A path $X_1,\text{...},X_n$ is a sequence of joined vertices: $X_{i-1} \sim X_i$ for $i=2,\text{...},n$.
A complete graph is a graph with every pair of vertices joined by an edge.
A subgraph $U \subseteq V$ is a subset of vertices together with their edges.
In a Markov graph, the absence of an edge implies that the random variables are conditionally independent given the other variables:
$$\text{No edge joining $X$ and $Y$} \iff X \perp Y | \text{rest}$$
Let $A,B,C$ be subgraphs. $C$ separates $A$ and $B$ if every path between $A$ and $B$ intersects $C$.
$$\text{$C$ separates $A$ and $B$} \implies A \perp B | C$$
A clique is a complete subgraph. A clique is said to be maximal if no other vertex can be added to it while still yielding a clique.
A probability density function $f$ over $\mathcal{G}$ can be represented as:
$$f(x) = \frac{1}{Z} \prod_{C \in \mathcal{C}} \psi_C(x_C)$$
with $\mathcal{C}$ the set of maximal cliques, and $\psi_C$ the clique potentials. These are not really density functions, but capture dependence in $X_C$ by scoring certain $x_C$ higher than others.
$Z$ is the normalizing constant, also called the partition function:
$$Z = \sum_{x \in \mathcal{X}} \prod_{C \in \mathcal{C}} \psi_C (x_C)$$
This chapter focuses on pairwise Markov graphs. There is a potential function for each edge, and at most second-order interactions are represented. They need fewer parameters and are easier to work with.
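As a toy check of the factorization above, consider a two-node binary graph with a single maximal clique and potential $\psi(x_1, x_2) = \exp(\theta x_1 x_2)$ (the value of $\theta$ here is arbitrary); dividing by the partition function $Z$ makes $f$ sum to 1:

```python
import itertools
import math

theta = 0.5  # arbitrary edge potential parameter
psi = lambda x1, x2: math.exp(theta * x1 * x2)

# Partition function: sum of the clique potential over all 2^2 configurations
Z = sum(psi(x1, x2) for x1, x2 in itertools.product([0, 1], repeat=2))

f = lambda x1, x2: psi(x1, x2) / Z
total = sum(f(x1, x2) for x1, x2 in itertools.product([0, 1], repeat=2))
print(total)  # ~1.0 up to float rounding
```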
# Undirected Graphical Models for Continuous Variables
We assume the observations have a multivariate Gaussian distribution with mean $\mu$ and covariance $\Sigma$. Since the Gaussian distribution represents at most second-order relationships, it encodes a pairwise Markov graph.
The Gaussian distribution has the property that all conditional distributions are also Gaussian.
Let $\Theta = \Sigma^{-1}$. If $\Theta_{ij}=0$, then variables $i$ and $j$ are conditionally independent given the other variables.
## Estimation of the Parameters when the Graph Structure is Known
Given some observations of $X$, let us estimate the parameters of the joint distribution ($\mu$ and $\Sigma$). We suppose that the graph is complete.
Let's define the empirical covariance matrix $S$:
$$S = \frac{1}{N} \sum_{i=1}^N (x_i - \bar{x}) (x_i - \bar{x})^T$$
with $\bar{x}$ the sample mean vector.
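As a quick numerical sanity check (random data, NumPy assumed available), this $1/N$ estimate matches NumPy's biased covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # N = 100 observations of p = 3 variables

x_bar = X.mean(axis=0)
S = (X - x_bar).T @ (X - x_bar) / len(X)  # empirical covariance, 1/N convention

# np.cov with bias=True uses the same 1/N normalization
print(np.allclose(S, np.cov(X.T, bias=True)))  # True
```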
The log-likelihood of the data can be written as:
$$l(\Theta) = \log \det \Theta - \text{trace}(S\Theta)$$
The maximum likelihood estimate of $\Sigma$ is $S$.
Now if some edges are missing, we are trying to maximize $l(\Theta)$ under the constraint that some entries of $\Theta$ are 0.
We add Lagrange constants for all missing edges:
$$l_C(\Theta) = \log \det \Theta - \text{trace}(S\Theta) - \sum_{(j,k) \notin E} \gamma_{jk}\theta_{jk}$$
This can be maximized by solving the following equation:
$$\Theta^{-1} - S - \Gamma = 0$$
with $\Gamma$ matrix of Lagrange parameters.
We can partition the matrices into 2 parts: part 1 the first $p-1$ rows and columns, and part 2 the $p$th row and column.
The equation can be rewritten as:
$$W_{11}\beta - s_{12} - \gamma_{12} = 0$$
We can remove all non-zero elements from $\gamma_{12}$, corresponding to edges constrained to be $0$, because they carry no information. We can reduce $\beta$ and $W_{11}$ in the same way, giving us the new equation:
$$W^*_{11}\beta^* - s^*_{12} = 0$$
with solution:
$$\hat{\beta}^* = W_{11}^{*-1} s^*_{12}$$
The solution is padded with zeros to give $\hat{\beta}$.
Algorithm $17.1$ page 634
```
def part_mats(W, S, j):
W_11 = np.delete(W, j, axis=0)
W_11 = np.delete(W_11, j, axis=1)
s_12 = np.delete(S, j, axis=0)[:,j]
return W_11, s_12
def regroup_mat(W, w_12, j):
w_12 = np.insert(w_12, j, W[j,j])
W[:,j] = w_12
return W
def edit_mats(G, j, W_11, s_12):
N = len(G)
suppr = [i for i in range(N) if G[i,j] == 0]
suppr = [x if x < j else x - 1 for x in suppr]
s_12 = np.delete(s_12, suppr)
W_11 = np.delete(W_11, suppr, axis=0)
W_11 = np.delete(W_11, suppr, axis=1)
return W_11, s_12
def extend_beta(G, j, betar):
N = len(G)
suppr = [i for i in range(N) if G[i,j] == 0]
suppr = [x if x < j else x - 1 for x in suppr]
beta = np.insert(betar, suppr, 0)
return beta
def estim_gauss(S, G, max_iters=100, tol=1e-16):
    # Algorithm 17.1: modified regression for a Gaussian graphical model
    # with known structure G (adjacency matrix) and empirical covariance S.
    N = len(S)
    W = S.copy()
    for it in range(max_iters):
        W_old = W.copy()
        for j in range(N):
            W_11, s_12 = part_mats(W, S, j)
            # Drop the rows/columns for edges constrained to be absent
            W_11r, s_12r = edit_mats(G, j, W_11, s_12)
            betar = np.linalg.inv(W_11r) @ s_12r
            beta = extend_beta(G, j, betar)
            w_12 = W_11 @ beta
            W = regroup_mat(W, w_12, j)
        if np.linalg.norm(W - W_old) < tol:
            break
    print('Iterations:', it)
    return W
S = np.array([
[10, 1, 5, 4],
[1., 10, 2, 6],
[5, 2, 10, 3],
[4, 6, 3, 10]
])
G = np.array([
[1, 1, 0, 1],
[1, 1, 1, 0],
[0, 1, 1, 1],
[1, 0, 1, 1]
])
W = estim_gauss(S, G)
print(np.around(W,2))
print(np.around(np.linalg.inv(W),2))
```
## Estimation of the Graph Structure
Sparse inverse covariance estimation with the graphical lasso - Friedman, J., Hastie, T. and Tibshirani, R. (2008)
We can use lasso regularization to estimate $\Sigma$ in a way that tries to insert zeros in $\Theta$, giving us the graph structure.
Let's maximize the penalized log-likelihood:
$$\log \det \Theta - \text{trace}(S\Theta) - \lambda ||\Theta||_1$$
The gradient equation is:
$$\Theta^{-1} - S - \lambda \text{sign}(\Theta) = 0$$
Similarly to the algorithm above, we reach the equation:
$$W_{11}\beta - s_{12} + \lambda \text{sign}(\beta) = 0$$
This problem is similar to linear regression with lasso, and can be solved using the pathwise coordinate descent method.
Let $V = W_{11}$; the update has the form:
$$\hat{\beta}_j \leftarrow \frac{1}{V_{jj}} S\left(s_{12j} - \sum_{k \neq j} V_{kj} \hat{\beta}_k,\ \lambda\right)$$
with $S$ the soft-threshold operator:
$$S(x,t) = \text{sign}(x)(|x| - t)_+$$
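This operator is small enough to sketch directly in Python:

```python
import math

def soft_threshold(x, t):
    # S(x, t) = sign(x) * max(|x| - t, 0)
    return math.copysign(max(abs(x) - t, 0.0), x)

print(soft_threshold(3.0, 1.0), soft_threshold(-3.0, 1.0), soft_threshold(0.5, 1.0))
# -> 2.0 -2.0 0.0
```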
Algorithm $17.2$ page $636$
## Undirected Graphical Models for Discrete Variables
Pairwise Markov networks with binary variables are very common. They are called Ising models, or Boltzmann machines.
The vertices are referred to as nodes or units. The value at each node can either be observed (visible) or unobserved (hidden).
We consider first the case where all $p$ nodes are visible with edge pairs $(j,k) \in E$. Their joint probability is given by:
$$p(X,\Theta) = \exp \left[ \sum_{(j,k) \in E} \theta_{jk} X_j X_k - \Phi(\Theta) \right]$$
with $\Phi(\Theta)$ the log of the partition function:
$$\Phi(\Theta) = \log \sum_{x \in \mathcal{X}} \left[ \exp \sum_{(j,k) \in E} \theta_{jk} x_j x_k \right]$$
The model requires a constant node $X_0=1$.
The Ising model implies a logistic form for each node conditional on the others:
$$P(X_j=1| X_{-j} = x_{-j}) = \left( 1 + \exp ( -\theta_{j0} - \sum_{(j,k) \in E} \theta_{jk} x_k) \right) ^{-1}$$
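The conditional is just a logistic sigmoid of a weighted sum over the node's neighbours. A minimal sketch (function and variable names are illustrative):

```python
import math

def p_node_is_one(theta_j0, theta_jk, x_neighbours):
    # P(X_j = 1 | rest) = sigmoid(theta_j0 + sum_k theta_jk * x_k)
    z = theta_j0 + sum(t * x for t, x in zip(theta_jk, x_neighbours))
    return 1.0 / (1.0 + math.exp(-z))

print(p_node_is_one(0.0, [], []))  # no bias, no neighbours -> 0.5
```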
## Estimation of the Parameters when the Graph Structure is Known
Given N observations $x_i$, we can estimate the parameters by maximizing the log-likelihood:
$$l(\Theta) = \sum_{i=1}^N \log P_\Theta(X = x_i)$$
$$
\begin{equation}
\begin{split}
l(\Theta) & = \sum_{i=1}^N \log P_\Theta(X = x_i) \\
& = \sum_{i=1}^N \left( \sum_{(j,k) \in E} \theta_{jk} x_{ij} x_{ik} - \Phi(\Theta) \right)
\end{split}
\end{equation}
$$
Setting the gradient to $0$ gives:
$$\hat{E}(X_j,X_k) - E_\Theta(X_j,X_k) = 0$$
with $\hat{E}(X_j,X_k)$ the expectation taken with respect to the empirical distribution of the data:
$$\hat{E}(X_j, X_k) = \frac{1}{N} \sum_{i=1}^N x_{ij}x_{ik}$$
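For binary data this empirical expectation is a simple matrix product. A small illustration with made-up observations:

```python
import numpy as np

# Four observations (rows) of three binary variables (columns); made-up data
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])

E_hat = X.T @ X / len(X)  # E_hat[j, k] = (1/N) * sum_i x_ij * x_ik
print(E_hat[0, 1])  # empirical E(X_0 * X_1) = 0.5
```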
We can find the maximum likelihood estimates using gradient search or Newton methods, but computing $E_\Theta(X_j,X_k)$ involves the enumeration of $p(X,\Theta)$ over the $|\mathcal{X}|=2^p$ values of $X$, which is not feasible for large $p$ (over 30).
When $p$ is large, the gradient is approximated using other methods, like mean field approximation or Gibbs sampling.
## Hidden Nodes
Let's suppose we have a subset of visible variables $X_V$, and the remaining variables are the hidden $X_H$. The log-likelihood becomes:
$$
\begin{equation}
\begin{split}
l(\Theta) & = \sum_{i=1}^N \log P_\Theta(X_V = x_{iV}) \\
& = \sum_{i=1}^N \left( \log \sum_{x_h \in \mathcal{X}_H} \exp \sum_{(j,k) \in E} \theta_{jk} x_{ij} x_{ik} - \Phi(\Theta) \right)
\end{split}
\end{equation}
$$
The gradient becomes:
$$\frac{d l(\Theta)}{d\theta_{jk}} = \hat{E}_V E_\Theta(X_j,X_k|X_V) - E_\Theta(X_j,X_k)$$
It can be computed using Gibbs sampling, but the method can be very slow, even for moderate-sized models. We can add more restrictions to make those computations manageable.
## Estimation of the Graph Structure
As for continuous random variables, we can use lasso to remove edges.
One solution is to use a penalized log-likelihood, but the gradient computation for dense graphs is not manageable.
Another solution fits an $L_1$-penalized logistic regression model to each node as a function of the other nodes, and symmetrizes the edge parameter estimates.
## Restricted Boltzmann Machines
An RBM consists of one layer of visible units and one layer of hidden units, with no connections within a layer.
The restricted form simplifies the Gibbs sampling used to compute the gradient of the log-likelihood.
Using the contrastive-divergence algorithm, they can be trained rapidly.
RBMs can learn to extract interesting features from data.
We can stack several RBMs together and train the whole joint density model.
# First Steps in Altair: Stacked, Grouped, and Layered Bar Charts
### Using _bar charts_ to visualize a simple data set with five items for which some quantity has been measured across five categories, requires careful consideration of the questions that the visualization should help answer.
This notebook is based on [Streit & Gehlenborg, "Points of View: Bar charts and box plots", _Nature Methods_ **11**, 117 (2014)](https://www.nature.com/articles/nmeth.2807). Here we are demonstrating how to use [Altair](https://altair-viz.github.io/index.html) to re-create some of the figures from the paper.
```
import altair as alt
# setup renderer for Jupyter Notebooks (not needed for Jupyter Lab)
alt.renderers.enable('notebook')
```
## Data Set
This is an artificial data set that contains five items for which some quantity was measured in five categories. A real world data set with this structure could be for instance the number of users of Twitter, Facebook, YouTube, Snapchat, and Instagram in North America, South America, Europe, Africa, and Asia.
```
# generate data table used in https://www.nature.com/articles/nmeth.2807
data = alt.Data(values = [{ 'category': 'A', 'item': '1', 'value': '6' },
{ 'category': 'A', 'item': '2', 'value': '8' },
{ 'category': 'A', 'item': '3', 'value': '12' },
{ 'category': 'A', 'item': '4', 'value': '20' },
{ 'category': 'A', 'item': '5', 'value': '22' },
{ 'category': 'B', 'item': '1', 'value': '29' },
{ 'category': 'B', 'item': '2', 'value': '27' },
{ 'category': 'B', 'item': '3', 'value': '21' },
{ 'category': 'B', 'item': '4', 'value': '18' },
{ 'category': 'B', 'item': '5', 'value': '5' },
{ 'category': 'C', 'item': '1', 'value': '18' },
{ 'category': 'C', 'item': '2', 'value': '17' },
{ 'category': 'C', 'item': '3', 'value': '16' },
{ 'category': 'C', 'item': '4', 'value': '16' },
{ 'category': 'C', 'item': '5', 'value': '15' },
{ 'category': 'D', 'item': '1', 'value': '30' },
{ 'category': 'D', 'item': '2', 'value': '12' },
{ 'category': 'D', 'item': '3', 'value': '3' },
{ 'category': 'D', 'item': '4', 'value': '9' },
{ 'category': 'D', 'item': '5', 'value': '20' },
{ 'category': 'E', 'item': '1', 'value': '7' },
{ 'category': 'E', 'item': '2', 'value': '12' },
{ 'category': 'E', 'item': '3', 'value': '19' },
{ 'category': 'E', 'item': '4', 'value': '7' },
{ 'category': 'E', 'item': '5', 'value': '1' }
])
```
## Data Table
An example how we can use Altair to render our data as a table.
```
alt.Chart(data).mark_text().encode(
alt.Row('category:N'),
alt.Column('item:N'),
alt.Text('value:Q')
)
```
## Color Palette
We will be using colors from a [color palette by Okabe & Ito](http://jfly.iam.u-tokyo.ac.jp/color/) that was designed to accommodate different types of color blindness. Select colors from the palette are mapped to the categories. [Mike Mol](https://www.twitter.com/mikemol) provides the color values in [various representations](https://mikemol.github.io/technique/colorblind/2018/02/11/color-safe-palette.html).
```
# map categories to colors from the Okabe & Ito color palette (http://jfly.iam.u-tokyo.ac.jp/color/)
oi_scale = alt.Scale(domain=['A', 'B', 'C', 'D', 'E'],
range=['#0072B2', '#D55E00', '#009E73', '#CC79A7', '#E69F00'])
```
## Stacked Bar Chart
**Recommendation**: Use a stacked bar chart if the focus is on comparing the overall quantities across items but you also need to illustrate contributions of each category to the total.
```
# create stacked bar chart
```
## Grouped Bar Chart
**Recommendation**: Use a grouped bar chart if the focus is on comparison of values across categories within each item, while still enabling comparisons across items.
_Note that if quantities add up to the same total for each item, then a grouped bar chart is equivalent to multiple pie charts, yet a grouped bar chart affords more accurate readings of values and comparisons._
```
# create grouped bar chart
```
## Layered Bar Chart
**Recommendation**: Use a layered bar chart if the focus is on the distribution of values in each category across all items.
_Note that comparisons within each category are more accurate than in stacked bar charts due to common baseline for the values in each category._
```
# create layered bar chart
```
# RGB and CIE
In this project we'll regularly need to calculate distances between pixels in colour space as a proxy for the visual difference between the colours. The simplest way of doing this is to calculate the Euclidean distance between them (also known as the $\ell_2$ norm).
If we have two colours $C_1 = (R_1, G_1, B_1)$ and $C_2 = (R_2, G_2, B_2)$, the Euclidean distance $\Delta C$ is defined as:
${\displaystyle \Delta C ={\sqrt {(R_{2}-R_{1})^{2}+(G_{2}-G_{1})^{2}+(B_{2}-B_{1})^{2}}}}$
We can implement the function in python as follows
```
def colour_distance_1(colour_1, colour_2):
return (
sum(
[
(channel_2 - channel_1) ** 2
for channel_1, channel_2 in zip(colour_1, colour_2)
]
)
** 0.5
)
```
The red, green, and blue channels available to us in RGB space are ideally suited for representing colour on pixelated screens. However, our goal is to represent the _perceptual differences_ between colours, and RGB isn't ideal for this. It's now [pretty well established](https://en.wikipedia.org/wiki/Color_difference) that Euclidean distances in RGB space are a bad representation of the distances that our eyes see.
By stretching the RGB dimensions by different amounts, we can better approximate that difference:
$\displaystyle \Delta C ={ {\sqrt {2\times \Delta R^{2}+4\times \Delta G^{2}+3\times \Delta B^{2}}}}$
Again, here's the python
```
def colour_distance_2(colour_1, colour_2):
r_1, g_1, b_1 = colour_1
r_2, g_2, b_2 = colour_2
return (2 * (r_1 - r_2) ** 2 + 4 * (g_1 - g_2) ** 2 + 3 * (b_1 - b_2) ** 2) ** 0.5
```
We can improve further by adding some extra weirdness to the red and blue channels
${\displaystyle \Delta C={\sqrt {2\times \Delta R^{2}+4\times \Delta G^{2}+3\times \Delta B^{2}+{{{\bar {r}}\times (\Delta R^{2}-\Delta B^{2})} \over {256}}}}}$
Here it is in python
```
def colour_distance_3(colour_1, colour_2):
r_1, g_1, b_1 = colour_1
r_2, g_2, b_2 = colour_2
d_r_sq = (r_1 - r_2) ** 2
d_g_sq = (g_1 - g_2) ** 2
d_b_sq = (b_1 - b_2) ** 2
mean_r = (r_1 + r_2) / 2
d_c_sq = 2 * d_r_sq + 4 * d_g_sq + 3 * d_b_sq + (mean_r * (d_r_sq - d_b_sq) / 256)
return d_c_sq ** 0.5
```
The most general and efficient approach (as far as I know) is to transform the image's RGB coordinates into an entirely new space. The _International Commission on Illumination_ (CIE) produced [CIELAB](https://en.wikipedia.org/wiki/CIELAB_color_space#CIELAB) to better approximate human perception of colour distances. The three coordinates of CIELAB represent:
- The lightness of the color. `L` = 0 yields black and `L` = 100 indicates diffuse white.
- its position between red/magenta and green (`a`, negative values indicate green while positive values indicate magenta)
- its position between yellow and blue (`b`, negative values indicate blue and positive values indicate yellow).
[CIE76](https://en.wikipedia.org/wiki/Color_difference#CIE76) (i.e. Euclidean distance in LAB space) was the original distance proposed with the space. It's been improved upon since, but the differences are minor and, as far as I've seen, are unnecessary complications for such minimal gain.
We can map from RGB to LAB and back again by importing the relevant function from `skimage`
```
import numpy as np
from skimage.color import rgb2lab, lab2rgb
```
In this new space we can use our first, super-simple colour distance function to measure the perceptual difference between colours. Below we're randomly generating two colours, converting them to LAB space and calculating the distance. This distance can be seen as a kind of inverse similarity score (colour pairs with lower distance values are more perceptually similar).
```
rgb_colour_1 = np.random.randint(0, 255, (1, 1, 3)).astype(np.float64)
rgb_colour_2 = np.random.randint(0, 255, (1, 1, 3)).astype(np.float64)
lab_colour_1 = rgb2lab(rgb_colour_1).squeeze()
lab_colour_2 = rgb2lab(rgb_colour_2).squeeze()
colour_distance_1(lab_colour_1, lab_colour_2)
```
### Problem Statement
Given an unsorted array `Arr` with `n` positive integers. Find the $k^{th}$ smallest element in the given array, using Divide & Conquer approach.
**Input**: Unsorted array `Arr` and an integer `k` where $1 \leq k \leq n$ <br>
**Output**: The $k^{th}$ smallest element of array `Arr`<br>
**Example 1**<br>
Arr = `[6, 80, 36, 8, 23, 7, 10, 12, 42, 99]`<br>
k = `10`<br>
Output = `99`<br>
**Example 2**<br>
Arr = `[6, 80, 36, 8, 23, 7, 10, 12, 42, 99]`<br>
k = `5`<br>
Output = `12`<br>
---
### The Pseudocode - `fastSelect(Arr, k)`
1. Break `Arr` into $\frac{n}{5}$ (actually it is $\left \lceil{\frac{n}{5}} \right \rceil $) groups, namely $G_1, G_2, G_3...G_{\frac{n}{5}}$
2. For each group $G_i, \forall 1 \leq i \leq \frac{n}{5} $, do the following:
- Sort the group $G_i$
- Find the middle position i.e., median $m_i$ of group $G_i$
- Add $m_i$ to the set of medians **$S$**
3. The set of medians **$S$** will become as $S = \{m_1, m_2, m_3...m_{\frac{n}{5}}\}$. The "good" `pivot` element will be the median of the set **$S$**. We can find it as $pivot = fastSelect(S, \frac{n}{10})$.
4. Partition the original `Arr` into three sub-arrays - `Arr_Less_P`, `Arr_Equal_P`, and `Arr_More_P` having elements less than `pivot`, equal to `pivot`, and bigger than `pivot` **respectively**.
5. Recurse based on the **sizes of the three sub-arrays**, we will either recursively search in the small set, or the big set, as defined in the following conditions:
- If `k <= length(Arr_Less_P)`, then return `fastSelect(Arr_Less_P, k)`. This means that if the size of the "small" sub-array is at least as large as `k`, then we know that our desired $k^{th}$ smallest element lies in this sub-array. Therefore recursively call the same function on the "small" sub-array. <br><br>
- If `k > (length(Arr_Less_P) + length(Arr_Equal_P))`, then return `fastSelect(Arr_More_P, (k - length(Arr_Less_P) - length(Arr_Equal_P)))`. This means that if `k` is more than the size of "small" and "equal" sub-arrays, then our desired $k^{th}$ smallest element lies in "bigger" sub-array. <br><br>
- Return `pivot` otherwise. This means that if the above two cases do not hold true, then we know that $k^{th}$ smallest element lies in the "equal" sub-array.
---
### Exercise - Write the function definition here
```
def fastSelect(Arr, k):
'''TO DO'''
    # Implement the algorithm explained above to find the k^th smallest element in the given array
pass
```
<span class="graffiti-highlight graffiti-id_dsq4qxt-id_29dh0dm"><i></i><button>Show Solution</button></span>
### Test - Let's test your function
```
Arr = [6, 80, 36, 8, 23, 7, 10, 12, 42]
k = 5
print(fastSelect(Arr, k)) # Outputs 12
Arr = [5, 2, 20, 17, 11, 13, 8, 9, 11]
k = 5
print(fastSelect(Arr, k)) # Outputs 11
Arr = [6, 80, 36, 8, 23, 7, 10, 12, 42, 99]
k = 10
print(fastSelect(Arr, k)) # Outputs 99
```
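For reference, here is one possible implementation of the pseudocode above (the notebook's hidden solution may differ in details):

```python
def fastSelect(Arr, k):
    # Base case: small arrays are sorted directly
    if len(Arr) <= 5:
        return sorted(Arr)[k - 1]

    # Steps 1-3: median of each group of 5, then the median of those medians
    groups = [Arr[i:i + 5] for i in range(0, len(Arr), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    pivot = fastSelect(medians, (len(medians) + 1) // 2)

    # Step 4: three-way partition around the pivot
    less = [x for x in Arr if x < pivot]
    equal = [x for x in Arr if x == pivot]
    more = [x for x in Arr if x > pivot]

    # Step 5: recurse into the side that must contain the k-th smallest
    if k <= len(less):
        return fastSelect(less, k)
    if k > len(less) + len(equal):
        return fastSelect(more, k - len(less) - len(equal))
    return pivot

print(fastSelect([6, 80, 36, 8, 23, 7, 10, 12, 42], 5))  # 12
```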
```
import pandas as pd
import sqlite3
import warnings
warnings.filterwarnings('ignore')

con = sqlite3.connect('/home/ranu/data_to_go/proTeacher/data.sqlite')
cursor = con.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS forums (uid integer PRIMARY KEY, user_link text, username text, joined_date text, no_of_posts text, membership_type text, post_title text, posted_date text, post_text text, user_details text, post_url text);')
pd.read_sql_query('select * from forums', con).shape
df = pd.read_sql_query('select * from forums', con)
df
import pandas as pd
from bs4 import BeautifulSoup
import requests
import re
def fetch_all_posts_in_a_pt_thread(post_url):
res = requests.get(post_url)
soup = BeautifulSoup(res.content)
soup = soup.find('div',attrs={'id':'posts'})
all_posts = soup.find_all('table',attrs={'class':'tborder'})
post = all_posts[0]
all_dicts = []
for post in all_posts:
try:
tmp_dict = fetch_post(post)
all_dicts.append(tmp_dict)
except:
pass
dataframe = pd.DataFrame(all_dicts)
dataframe['post_url'] = post_url
return dataframe
def fetch_post(post_tag):
tmp_dict = {}
user_link = 'http://www.proteacher.net/discussions/' + post_tag.find('a',attrs={'class':'bigusername'}).get('href')
username = post_tag.find('a',attrs={'class':'bigusername'}).text
tmp_dict['user_link'] = user_link
tmp_dict['username'] = username
try:
user_details_string = post_tag.find('div',attrs={'class':'smallfont'}).text.strip()
user_details = user_details_string.replace('\n','').replace('\t','').replace(' ','')
user_details = user_details.split('\r\r')
joined_date = user_details[0]
tmp_dict['joined_date'] = joined_date
no_of_posts = user_details[1]
tmp_dict['no_of_posts'] = no_of_posts
membership_type = user_details[2]
tmp_dict['membership_type'] = membership_type
except:
user_details_string = post_tag.find('div',attrs={'class':'smallfont'}).text.strip()
post_title = post_tag.find('strong').text
tmp_dict['post_title'] = post_title
posted_date = post_tag.find('td',attrs={'align':'left'}).text.split('\t\t\t')[1]
tmp_dict['posted_date'] = posted_date
tmp_dict['post_text'] = post_tag.find('div',attrs={'id':re.compile('post_message_')}).text
tmp_dict['user_details'] = user_details_string
return tmp_dict
fetch_all_posts_in_a_pt_thread('http://www.proteacher.net/discussions/showthread.php?t=656293')
```
```
%pylab notebook
import cv2
import time
```
# Load the effects data
```
crown = cv2.imread('data/crown.png', cv2.IMREAD_UNCHANGED)
glasses = cv2.imread('data/glasses.png', cv2.IMREAD_UNCHANGED)
flower = cv2.imread('data/flower.png', cv2.IMREAD_UNCHANGED)
eye_cartoon = cv2.imread('data/eye.png', cv2.IMREAD_UNCHANGED)
b_hat = cv2.imread('data/b_hat2.png', cv2.IMREAD_UNCHANGED)
balloon = cv2.imread('data/balloon.png', cv2.IMREAD_UNCHANGED)
heart = cv2.imread('data/heart_eye.png', cv2.IMREAD_UNCHANGED)
moustache = cv2.imread('data/mustache.png', cv2.IMREAD_UNCHANGED)
```
# Load the Haar Cascades
```
face_cascade = cv2.CascadeClassifier("utils/haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("utils/haarcascade_eye_tree_eyeglasses.xml")
smile_cascade = cv2.CascadeClassifier("utils/haarcascade_smile.xml")
```
# Add the image categories
```
num_effects = 7
# Function : get_option_set
# Input : x - integer
# Output : a set of options representing the effect
# Description: Return a set of effects to be performed
def get_option_set(x):
return {
0: set([]),
1: set(['crown']),
2: set(['flower']),
3: set(['eye']),
4: set(['glasses']),
5: set(['crown', 'glasses']),
6: set(['heart']),
7: set(['moustache'])
}[x]
```
# Catch user input
```
# Function : catch_ux_input
# Input : event - Event passed by asynchronous call
# x - x coordinate of event
# y - y coordinate of event
# flags - Any special flags set
# param - User data if any
# Output : None
# Description: Catch the asynchronous user input and fill the
# command queue based on the option entry parameter.
def catch_ux_input(event,x,y,flags,param):
global option_entry
if((x > 400) and (y > 440)):
if event == cv2.EVENT_LBUTTONDBLCLK:
option_entry = option_entry + 1
option_entry = option_entry % (num_effects + 1)
option = get_option_set(option_entry)
cmd_q.append(option)
```
# Alpha blending
```
# Function : alpha_blend
# Input : fg - foreground with alpha channel
# bg - background
# Output : blended image
# Description: Blend the foreground on background
# Note : bg and fg must have same dimensions
def alpha_blend(fg, bg):
b, g, r, a = cv2.split(fg)
mask = cv2.merge((a,a,a))
_, mask = cv2.threshold(mask,127,255,cv2.THRESH_BINARY)
mask = mask // 255
fg = cv2.cvtColor(fg, cv2.COLOR_BGRA2BGR)
fg = cv2.multiply(mask, fg)
bg = cv2.multiply(1-mask, bg)
return cv2.add(fg, bg)
# Function : set_effect
# Input : img - video frame
# option - set of all effects to be performed
# Output : output frame
# Description: Perform the requested effect on the video frame
def set_effect(img, option):
# Compute the gray scale version of frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find all the faces in the frame
faces = face_cascade.detectMultiScale(gray, 1.3, 3)
# For each face, apply the associated effects (if any)
for (x,y, w, h) in faces:
roi_g = gray[y:y+h, x:x+w]
roi_c = img[y:y+h, x:x+w]
# Fetch the effect image as foreground
if('crown' in option or 'flower' in option or 'birthday' in option):
if('crown' in option):
fg = crown.copy()
elif('flower' in option):
fg = flower.copy()
elif('birthday' in option):
fg = b_hat.copy()
# Scale the foreground wrt face size
sf = w / fg.shape[1]
dim = (int(round(fg.shape[1]*sf)), int(round(fg.shape[0]*sf)))
fg_rz = cv2.resize(fg, dim)
# Extract the background frame to perform alpha blending
back = img[y-fg_rz.shape[0]:y, x:x+fg_rz.shape[1]]
img[y-fg_rz.shape[0]:y, x:x+fg_rz.shape[1]] = alpha_blend(fg_rz, back)
# Fetch the combination of both eye effect images as foreground
if('glasses' in option or 'eye' in option):
if('glasses' in option):
fg = glasses.copy()
elif('eye' in option):
fg = eye_cartoon.copy()
# Find the eyes in the face region of interest
eye = eye_cascade.detectMultiScale(roi_g, 1.1, 3)
# To avoid multiple false positives
if(len(eye) == 2):
# Find the width in accordance to distance between the eye's end to end
w = abs(eye[0][0] - eye[1][0]) + max(eye[0][2], eye[1][2])
x = min(eye[0][0], eye[1][0])
y = max(eye[0][1], eye[1][1])
# Scale the foreground according to requirement
sf = w / fg.shape[1]
dim = (int(round(fg.shape[1]*sf)), int(round(fg.shape[0]*sf)))
n_dim = (int(round(dim[0]*1.2)), int(round(dim[1]*1.2)))
hdiff_x = int(round((n_dim[0] - dim[0])/1.8))
hdiff_y = int(round((n_dim[1] - dim[1])/1.8))
# Adjust some delta to accommodate more natural looking effects
y_pl = y - hdiff_y
x_pl = x - hdiff_x
fg_rz = cv2.resize(fg, n_dim)
# Get the background roi to perform alpha blending
back = roi_c[y_pl: y_pl+n_dim[1], x_pl:x_pl+n_dim[0]]
roi_c[y_pl: y_pl+n_dim[1], x_pl:x_pl+n_dim[0]] = alpha_blend(fg_rz, back)
# Single eye Effects: replacing the eye with desired object
if ('heart' in option):
eyes = eye_cascade.detectMultiScale(roi_g, 1.1, 10)
for (ex,ey,ew,eh) in eyes:
res = cv2.resize(heart,(ew,eh), interpolation = cv2.INTER_CUBIC)
roi_c[ey:ey+res.shape[0], ex:ex+res.shape[1]] = alpha_blend(res,roi_c[ey:ey+res.shape[0], ex:ex+res.shape[1]])
if ('moustache' in option):
smile = smile_cascade.detectMultiScale(roi_g, 1.1, 20)
for (ex,ey,ew,eh) in smile:
scale_width = int(np.round(ew*1.1))
scale_height = eh//3
res = cv2.resize(moustache,(scale_width, scale_height), interpolation = cv2.INTER_CUBIC)
roi_c[ey:ey+(res.shape[0]), ex:ex+res.shape[1]]= alpha_blend(res,roi_c[ey:ey+(res.shape[0]), ex:ex+res.shape[1]])
return img
```
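The masking logic inside `alpha_blend` (binarize the alpha channel at 127, then keep foreground pixels where the mask is 1 and background pixels where it is 0) can be illustrated with plain NumPy on tiny synthetic 2x2 "images", no OpenCV required:

```python
import numpy as np

# Tiny 2x2 BGRA foreground: left column opaque (alpha 255), right column transparent (alpha 0)
fg = np.zeros((2, 2, 4), dtype=np.uint8)
fg[:, 0] = [10, 20, 30, 255]
fg[:, 1] = [10, 20, 30, 0]
bg = np.full((2, 2, 3), 200, dtype=np.uint8)  # uniform grey background

# Same idea as alpha_blend: threshold alpha into a hard 0/1 mask,
# then combine fg * mask + bg * (1 - mask)
alpha = fg[:, :, 3]
mask = (alpha > 127).astype(np.uint8)[:, :, None]  # broadcasts over the BGR channels
blended = fg[:, :, :3] * mask + bg * (1 - mask)

print(blended[0, 0], blended[0, 1])  # foreground colour on the left, background on the right
```

Note this is a hard (binary) blend, as in the function above; a soft blend would instead weight by `alpha / 255` directly.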
# Start the video capture
```
# Use webcam or input file based on requirement
filename = input("Press Enter to use the camera, or type an input file name: ")
if(filename == ''):
filename=0
print("No file name entered: using the webcam")
# Open the video capture stream
test = cv2.VideoCapture(filename)
# Set the option_entry and command queue as global to be used for mouse callback
global option_entry, cmd_q
option_entry = 0
cmd_q = []
# Set the mouse callback
cv2.namedWindow('frame', cv2.WINDOW_NORMAL) # WINDOW_NORMAL so the window can be resized later
cv2.setMouseCallback('frame',catch_ux_input)
# For calculating FPS
prev_time = time.time()
num_frames = 0
# Dummy option set
option = set([])
```
# Saving video in x264 format
```
fourcc = cv2.VideoWriter_fourcc(*'X264')
output_filename = 'output_'+ str(filename) + '.mp4'
vw_obj = cv2.VideoWriter(output_filename,fourcc, 30.0, (1200,900))
while(True):
num_frames = num_frames + 1
# Read the input stream frame; stop when the stream is exhausted
ret, frame = test.read()
if not ret:
break
# If there is command effect pending, do that
if(len(cmd_q) > 0):
option = cmd_q.pop(0)
try:
# Set the effect
out = set_effect(frame, option)
# Saving output stream
flip_out = cv2.flip(out,1)
# Print the option for user to provide input
font = cv2.FONT_HERSHEY_SCRIPT_COMPLEX
cv2.putText(flip_out,'Change Effect',(400,465), font, 1.2,(127,127,255),2,cv2.LINE_AA)
# Resize the OpenCV window for better visualization
flip_out = cv2.resize(flip_out, (1200, 900))
cv2.resizeWindow('frame', 1200,900)
vw_obj.write(flip_out)
# Display the live effects to the user
cv2.imshow('frame',flip_out)
except Exception:
# Skip frames where an effect could not be applied cleanly
pass
# Continue looping till user presses 'q'
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Compute the frame rate.
tic = time.time()
print("FPS: ", num_frames/(tic - prev_time))
# Release the capture object and destroy all open OpenCV windows
vw_obj.release()
test.release()
cv2.destroyAllWindows()
```
<img align="right" src="images/tf.png" width="128"/>
<img align="right" src="images/uu-small.png" width="128"/>
<img align="right" src="images/dans.png" width="128"/>
---
To get started: consult [start](start.ipynb)
---
# Search Introduction
*Search* in Text-Fabric is a template-based way of looking for structural patterns in your dataset.
Within Text-Fabric we have the unique possibility to combine the ease of formulating search templates for
complicated syntactical patterns with the power of programmatically processing the results.
This notebook will show you how to get up and running.
## Easy command
Search is as simple as saying (just an example)
```python
results = A.search(template)
A.show(results)
```
See all ins and outs in the
[search template docs](https://annotation.github.io/text-fabric/tf/about/searchusage.html).
```
%load_ext autoreload
%autoreload 2
from tf.app import use
A = use("quran:clone", checkout="clone", hoist=globals())
# A = use('quran', hoist=globals())
```
# Basic search command
We start with the most simple form of issuing a query.
Let's look for the words in the first three ayas of suras 1, 2 and 3.
All work involved in searching takes place under the hood.
```
query = """
sura number=1|2|3
aya number<4
word
"""
results = A.search(query)
A.table(results, end=10)
```
The hyperlinks take us to the corresponding aya on Tanzil.
Note that we can choose start and/or end points in the results list.
```
A.table(results, start=8, end=13)
```
We can show the results more fully with `show()`.
```
A.displaySetup(queryFeatures=False)
A.show(results, start=1, end=3)
```
We can show all results condensed by *aya*:
```
A.show(results, condensed=True)
```
# Meaningful queries
Let's turn to a bit more meaningful query:
all ayas with a verb immediately followed by the word for Allah.
```
query = """
aya
word pos=verb
<: word pos=noun posx=proper root=Alh
"""
```
We run it with `A.search()`:
```
results = A.search(query)
A.table(results, start=10, end=20)
```
Here it comes: the `A.show()` function asks you for some limits (it will not show more than 100 at a time), and then it displays them.
It lists the results as follows:
* a heading showing which result in the sequence of all results this is
* an overview of the nodes in the tuple of this result
* a display of all verses that have result material, with the places highlighted that
correspond to a node in the result tuple
```
A.show(results, start=10, end=14, withNodes=True)
```
We can also package the result tuples in things other than ayas, e.g. pages:
```
A.show(results, start=12, end=12, withNodes=True, condensed=True, condenseType="page")
```
---
All chapters:
* **[start](start.ipynb)** introduction to computing with your corpus
* **[display](display.ipynb)** become an expert in creating pretty displays of your text structures
* **search** turbo charge your hand-coding with search templates
* **[exportExcel](exportExcel.ipynb)** make tailor-made spreadsheets out of your results
* **[share](share.ipynb)** draw in other people's data and let them use yours
* **[similarAyas](similarAyas.ipynb)** spot the similarities between lines
* **[rings](rings.ipynb)** ring structures in sura 2
CC-BY Dirk Roorda
# Differentiators in PySINDy
This notebook explores the differentiation methods available in PySINDy. Most of the methods are powered by the [derivative](https://pypi.org/project/derivative/) package.
```
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from scipy.integrate import odeint
import seaborn as sns
import pysindy as ps
```
In the cell below we define all the available differentiators. Note that the different options in `SINDyDerivative` all originate from `derivative`.
* `FiniteDifference` - First order (forward difference) or second order (centered difference) finite difference methods with the ability to drop endpoints. Does *not* assume a uniform time step. Appropriate for smooth data.
* `finite_difference` - Central finite differences of any order. Assumes a uniform time step. Appropriate for smooth data.
* `SmoothedFiniteDifference` - `FiniteDifference` with a smoother (default is Savitzky-Golay) applied to the data before differentiation. Appropriate for noisy data.
* `savitzky_golay` - Perform a least-squares fit of a polynomial to the data, then compute the derivative of the polynomial. Appropriate for noisy data.
* `spline` - Fit the data with a spline (of arbitrary order) then perform differentiation on the spline. Appropriate for noisy data.
* `trend_filtered` - Use total variation regularization to fit the data (computes a global derivative that is a piecewise combination of polynomials of a chosen order). Set `order=0` to obtain the total-variation-regularized derivative. Appropriate for noisy data.
* `spectral` - Compute the spectral derivative of the data via Fourier Transform. Appropriate for very smooth (i.e. analytic) data.
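To make "central finite differences" concrete: the second-order centered difference approximates $y'(x_i) \approx (y_{i+1} - y_{i-1})/(2\,\Delta x)$ on a uniform grid. A minimal NumPy sketch on $\sin$ (the same test function used in the comparison below):

```python
import numpy as np

# Uniform grid, as `finite_difference` assumes
x = np.linspace(0, 2 * np.pi, 1000)
dx = x[1] - x[0]
y = np.sin(x)

# Second-order centered difference on the interior points
y_dot = (y[2:] - y[:-2]) / (2 * dx)

# For smooth data the error shrinks like O(dx^2)
max_err = np.max(np.abs(y_dot - np.cos(x[1:-1])))
print(max_err)
```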
```
diffs = [
('PySINDy Finite Difference', ps.FiniteDifference()),
('Finite Difference', ps.SINDyDerivative(kind='finite_difference', k=1)),
('Smoothed Finite Difference', ps.SmoothedFiniteDifference()),
('Savitzky Golay', ps.SINDyDerivative(kind='savitzky_golay', left=0.5, right=0.5, order=3)),
('Spline', ps.SINDyDerivative(kind='spline', s=1e-2)),
('Trend Filtered', ps.SINDyDerivative(kind='trend_filtered', order=0, alpha=1e-2)),
('Spectral', ps.SINDyDerivative(kind='spectral')),
]
plot_kws = dict(alpha=0.7, linewidth=3)
pal = sns.color_palette("Set1")
```
## Compare differentiation methods directly
First we'll use the methods to numerically approximate derivatives to measurement data directly, without bringing SINDy into the picture. We'll compare the different methods' accuracies when working with clean data ("approx" in the plots) and data with a small amount of white noise ("noisy").
```
noise_level = 0.01
def compare_methods(diffs, x, y, y_noisy, y_dot):
n_methods = len(diffs)
n_rows = (n_methods // 3) + int(n_methods % 3 > 0)
fig, axs = plt.subplots(n_rows, 3, figsize=(15, 3 * n_rows), sharex=True)
for (name, method), ax in zip(diffs, axs.flatten()):
ax.plot(x, y_dot, label='Exact', color=pal[0], **plot_kws)
ax.plot(x, method(y, x), ':', label='Approx.', color='black', **plot_kws)
ax.plot(x, method(y_noisy, x), label='Noisy', color=pal[1], **plot_kws)
ax.set(title=name)
axs[0, 0].legend()
fig.show()
return axs
```
### Sine
```
# True data
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
y_dot = np.cos(x)
# Add noise
seed = 111
np.random.seed(seed)
y_noisy = y + noise_level * np.random.randn(len(y))
axs = compare_methods(diffs, x, y, y_noisy, y_dot)
```
### Absolute value
```
# Shrink window for Savitzky Golay method
diffs[3] = ('Savitzky Golay', ps.SINDyDerivative(kind='savitzky_golay', left=0.1, right=0.1, order=3))
# True data
x = np.linspace(-1, 1, 100)
y = np.abs(x)
y_dot = np.sign(x)
# Add noise
seed = 111
np.random.seed(seed)
y_noisy = y + noise_level * np.random.randn(len(y))
axs = compare_methods(diffs, x, y, y_noisy, y_dot)
```
## Compare differentiators when used in PySINDy
We got some idea of the performance of the differentiation options applied to raw data. Next we'll look at how they work as a single component of the SINDy algorithm.
```
def print_equations(equations_clean, equations_noisy):
print(f"{'':<30} {'Noiseless':<40} {'Noisy':<40}")
for name in equations_clean.keys():
print(f"{name:<30} {'':<40} {'':<40}")
for k, (eq1, eq2) in enumerate(zip(equations_clean[name], equations_noisy[name])):
print(f"{'':<30} {'x' + str(k) + '=' + str(eq1):<40} {'x' + str(k) + '=' + str(eq2):<40}")
print("-------------------------------------------------------------------------------------------")
def plot_coefficients(coefficients, input_features=None, feature_names=None, ax=None, **heatmap_kws):
if input_features is None:
input_features = [f"$\dot x_{k}$" for k in range(coefficients.shape[0])]
else:
input_features = [f"$\dot {fi}$" for fi in input_features]
if feature_names is None:
feature_names = [f"f{k}" for k in range(coefficients.shape[1])]
with sns.axes_style(style="white", rc={"axes.facecolor": (0, 0, 0, 0)}):
if ax is None:
fig, ax = plt.subplots(1, 1)
max_mag = np.max(np.abs(coefficients))
heatmap_args = {
"xticklabels": input_features,
"yticklabels": feature_names,
"center": 0.0,
"cmap": sns.color_palette("vlag", n_colors=20, as_cmap=True),
"ax": ax,
"linewidths": 0.1,
"linecolor": "whitesmoke",
}
heatmap_args.update(**heatmap_kws)
sns.heatmap(
coefficients.T,
**heatmap_args
)
ax.tick_params(axis="y", rotation=0)
return ax
def compare_coefficient_plots(
coefficients_clean,
coefficients_noisy,
input_features=None,
feature_names=None
):
n_cols = len(coefficients_clean)
def signed_sqrt(x):
return np.sign(x) * np.sqrt(np.abs(x))
with sns.axes_style(style="white", rc={"axes.facecolor": (0, 0, 0, 0)}):
fig, axs = plt.subplots(2, n_cols, figsize=(1.9 * n_cols, 8), sharey=True, sharex=True)
max_clean = max(np.max(np.abs(c)) for c in coefficients_clean.values())
max_noisy = max(np.max(np.abs(c)) for c in coefficients_noisy.values())
max_mag = np.sqrt(max(max_clean, max_noisy))
for k, name in enumerate(coefficients_clean.keys()):
plot_coefficients(
signed_sqrt(coefficients_clean[name]),
input_features=input_features,
feature_names=feature_names,
ax=axs[0, k],
cbar=False,
vmax=max_mag,
vmin=-max_mag
)
plot_coefficients(
signed_sqrt(coefficients_noisy[name]),
input_features=input_features,
feature_names=feature_names,
ax=axs[1, k],
cbar=False,
vmax=max_mag,
vmin=-max_mag
)
axs[0, k].set_title(name, rotation=45)
axs[0, 0].set_ylabel("Noiseless", labelpad=10)
axs[1, 0].set_ylabel("Noisy", labelpad=10)
fig.tight_layout()
```
### Linear oscillator
$$ \frac{d}{dt} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix} -0.1 & 2 \\ -2 & -0.1 \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} $$
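The system matrix above has eigenvalues $-0.1 \pm 2i$, so trajectories oscillate with frequency 2 while slowly spiraling into the origin; a quick NumPy check (using only the matrix from the equation):

```python
import numpy as np

# System matrix of the linear oscillator
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
eigvals = np.linalg.eigvals(A)
print(eigvals)  # -0.1 +/- 2j: weak damping, oscillation frequency 2
```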
```
noise_level = 0.1
# Generate training data
def f(x, t):
return [
-0.1 * x[0] + 2 * x[1],
-2 * x[0] - 0.1 * x[1]
]
dt = 0.01
t_train = np.arange(0, 10, dt)
x0_train = [2, 0]
x_train = odeint(f, x0_train, t_train)
x_train_noisy = x_train + noise_level * np.random.randn(*x_train.shape)
fig, ax = plt.subplots(1, 1, figsize=(5, 5))
ax.plot(x_train[:, 0], x_train[:, 1], '.', label="Clean", color=pal[0], **plot_kws)
ax.plot(x_train_noisy[:, 0], x_train_noisy[:, 1], '.', label="Noisy", color=pal[1], **plot_kws)
ax.set(title='Training data', xlabel='$x_0$', ylabel='$x_1$')
ax.legend()
fig.show()
# Allow Trend Filtered method to work with linear functions
diffs[5] = ('Trend Filtered', ps.SINDyDerivative(kind='trend_filtered', order=1, alpha=1e-2))
equations_clean = {}
equations_noisy = {}
coefficients_clean = {}
coefficients_noisy = {}
input_features = ['x', 'y']
threshold = 0.5
for name, method in diffs:
model = ps.SINDy(
differentiation_method=method,
optimizer=ps.STLSQ(threshold=threshold),
t_default=dt,
feature_names=input_features
)
model.fit(x_train)
equations_clean[name] = model.equations()
coefficients_clean[name] = model.coefficients()
model.fit(x_train_noisy)
equations_noisy[name] = model.equations()
coefficients_noisy[name] = model.coefficients()
print_equations(equations_clean, equations_noisy)
feature_names = model.get_feature_names()
compare_coefficient_plots(
coefficients_clean,
coefficients_noisy,
input_features=input_features,
feature_names=feature_names
)
```
### Lorenz system
$$ \begin{aligned} \dot x &= 10(y-x)\\ \dot y &= x(28 - z) - y \\ \dot z &= xy - \tfrac{8}{3} z, \end{aligned} $$
```
noise_level = 0.5
def lorenz(z, t):
return [
10 * (z[1] - z[0]),
z[0] * (28 - z[2]) - z[1],
z[0] * z[1] - (8 / 3) * z[2]
]
# Generate measurement data
dt = .002
t_train = np.arange(0, 10, dt)
x0_train = [-8, 8, 27]
x_train = odeint(lorenz, x0_train, t_train)
x_train_noisy = x_train + noise_level * np.random.randn(*x_train.shape)
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(1, 1, 1, projection="3d")
ax.plot(
x_train[:, 0],
x_train[:, 1],
x_train[:, 2],
color=pal[0],
label='Clean',
**plot_kws
)
ax.plot(
x_train_noisy[:, 0],
x_train_noisy[:, 1],
x_train_noisy[:, 2],
'.',
color=pal[1],
label='Noisy',
alpha=0.3,
)
ax.set(title='Training data', xlabel="$x$", ylabel="$y$", zlabel="$z$")
ax.legend()
fig.show()
equations_clean = {}
equations_noisy = {}
coefficients_clean = {}
coefficients_noisy = {}
input_features = ['x', 'y', 'z']
threshold = 0.5
for name, method in diffs:
model = ps.SINDy(
differentiation_method=method,
optimizer=ps.STLSQ(threshold=threshold),
t_default=dt,
feature_names=input_features
)
model.fit(x_train)
equations_clean[name] = model.equations()
coefficients_clean[name] = model.coefficients()
model.fit(x_train_noisy)
equations_noisy[name] = model.equations()
coefficients_noisy[name] = model.coefficients()
print_equations(equations_clean, equations_noisy)
feature_names = model.get_feature_names()
compare_coefficient_plots(
coefficients_clean,
coefficients_noisy,
input_features=input_features,
feature_names=feature_names
)
```
# The Perceptron
We just employed an optimization method - stochastic gradient descent - without really thinking twice about why it should work. It's worthwhile to pause and see whether we can gain some intuition about why it works at all. We start by considering the E. coli of machine learning algorithms - the Perceptron. After that, we'll give a simple convergence proof for SGD. This chapter is not strictly needed for practitioners but helps to understand why the algorithms that we use work at all.
```
import mxnet as mx
from mxnet import nd, autograd
import matplotlib.pyplot as plt
import numpy as np
mx.random.seed(1)
```
## A Separable Classification Problem
The Perceptron algorithm aims to solve the following problem: given some classification problem of data $x \in \mathbb{R}^d$ and labels $y \in \{\pm 1\}$, can we find a linear function $f(x) = w^\top x + b$ such that $f(x) > 0$ whenever $y = 1$ and $f(x) < 0$ for $y = -1$? Obviously not all classification problems fall into this category, but it's a very good baseline for what can be solved easily. It's also the kind of problem computers could solve in the 1960s. The easiest way to ensure that we have such a problem is to fake it by generating such data. We are going to make the problem a bit more interesting by specifying how well the data is separated.
```
# generate fake data that is linearly separable with a margin epsilon given the data
def getfake(samples, dimensions, epsilon):
wfake = nd.random_normal(shape=(dimensions)) # fake weight vector for separation
bfake = nd.random_normal(shape=(1)) # fake bias
wfake = wfake / nd.norm(wfake) # rescale to unit length
# making some linearly separable data, simply by choosing the labels accordingly
X = nd.zeros(shape=(samples, dimensions))
Y = nd.zeros(shape=(samples))
i = 0
while (i < samples):
tmp = nd.random_normal(shape=(1,dimensions))
margin = nd.dot(tmp, wfake) + bfake
if (nd.norm(tmp).asscalar() < 3) & (abs(margin.asscalar()) > epsilon):
X[i,:] = tmp
Y[i] = 1 if margin.asscalar() > 0 else -1
i += 1
return X, Y
# plot the data with colors chosen according to the labels
def plotdata(X,Y):
for (x,y) in zip(X,Y):
if (y.asscalar() == 1):
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='r')
else:
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='b')
# plot contour plots on a [-3,3] x [-3,3] grid
def plotscore(w,d):
xgrid = np.arange(-3, 3, 0.02)
ygrid = np.arange(-3, 3, 0.02)
xx, yy = np.meshgrid(xgrid, ygrid)
zz = nd.zeros(shape=(xgrid.size, ygrid.size, 2))
zz[:,:,0] = nd.array(xx)
zz[:,:,1] = nd.array(yy)
vv = nd.dot(zz,w) + d
CS = plt.contour(xgrid,ygrid,vv.asnumpy())
plt.clabel(CS, inline=1, fontsize=10)
X, Y = getfake(50, 2, 0.3)
plotdata(X,Y)
plt.show()
```
Now we are going to use the simplest possible algorithm to learn parameters. It's inspired by the [Hebbian Learning Rule](https://en.wikipedia.org/wiki/Hebbian_theory) which suggests that positive events should be reinforced and negative ones diminished. The analysis of the algorithm is due to Rosenblatt and we will give a detailed proof of it after illustrating how it works. In a nutshell, after initializing parameters $w = 0$ and $b = 0$ it updates them by $y x$ and $y$ respectively to ensure that they are properly aligned with the data. Let's see how well it works:
```
def perceptron(w,b,x,y):
if (y * (nd.dot(w,x) + b)).asscalar() <= 0:
w += y * x
b += y
return 1
else:
return 0
w = nd.zeros(shape=(2))
b = nd.zeros(shape=(1))
for (x,y) in zip(X,Y):
res = perceptron(w,b,x,y)
if (res == 1):
print('Encountered an error and updated parameters')
print('data {}, label {}'.format(x.asnumpy(),y.asscalar()))
print('weight {}, bias {}'.format(w.asnumpy(),b.asscalar()))
plotscore(w,b)
plotdata(X,Y)
plt.scatter(x[0].asscalar(), x[1].asscalar(), color='g')
plt.show()
```
As we can see, the model has learned something - all the red dots lie in the positive region and all the blue dots in the negative one. Moreover, the contour plot shows that the values of $w^\top x + b$ become more extreme as we move away from the decision boundary. Did we just get lucky in terms of classification or is there any math behind it? Obviously there is, and there's actually a nice theorem to go with this: the perceptron convergence theorem.
## The Perceptron Convergence Theorem
**Theorem** Given data $x_i$ with $\|x_i\| \leq R$ and labels $y_i \in \{\pm 1\}$ for which there exists some pair of parameters $(w^*, b^*)$ such that $y_i((w^*)^\top x_i + b^*) \geq \epsilon$ for all data, and for which $\|w^*\| \leq 1$ and $(b^*)^2 \leq 1$, then the perceptron algorithm converges after at most $2 (R^2 + 1)/\epsilon^2$ iterations.
The cool thing is that this theorem is *independent of the dimensionality of the data*. Moreover, it is *independent of the number of observations*. Lastly, looking at the algorithm itself, we see that we only need to store the mistakes that the algorithm made - for the data that was classified correctly no update on $(w,b)$ happened.
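Since $(w, b)$ start at zero and change only on mistakes, the final parameters are fully determined by the mistake set $M$: $w = \sum_{i \in M} y_i x_i$ and $b = \sum_{i \in M} y_i$. A small NumPy sketch of this equivalence (synthetic data, independent of the mxnet arrays above):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 2)
Y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # linearly separable labels

w, b = np.zeros(2), 0.0
mistakes = []
for x, y in zip(X, Y):
    if y * (w @ x + b) <= 0:   # mistake: update and record it
        w += y * x
        b += y
        mistakes.append((x, y))

# Reconstruct the parameters from the stored mistakes alone
w_rec = sum(y * x for x, y in mistakes)
b_rec = sum(y for _, y in mistakes)
print(np.allclose(w, w_rec), np.isclose(b, b_rec))
```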
As a first step, let's check how accurate the theorem is.
```
Eps = np.arange(0.025, 0.45, 0.025)
Err = np.zeros(shape=(Eps.size))
for j in range(10):
for (i,epsilon) in enumerate(Eps):
X, Y = getfake(1000, 2, epsilon)
# restart from fresh parameters for every trial
w = nd.zeros(shape=(2))
b = nd.zeros(shape=(1))
for (x,y) in zip(X,Y):
Err[i] += perceptron(w,b,x,y)
Err = Err / 10.0
plt.plot(Eps, Err, label='average number of updates for training')
plt.legend()
plt.show()
```
As we can see, the number of errors (and with it, updates) decreases inversely with the width of the margin. Let's see whether we can put this into equations. The first thing to consider is the size of the inner product between $(w,b)$ and $(w^*, b^*)$, the parameter that solves the classification problem with margin $\epsilon$. Note that we do not need explicit knowledge of $(w^*, b^*)$ for this, just know about its existence. For convenience, we will index $w$ and $b$ by $t$, the number of updates on the parameters. Moreover, whenever convenient we will treat $(w,b)$ as a new vector with an extra dimension and with the appropriate terms such as norms $\|(w,b)\|$ and inner products.
First off, $w_0^\top w^* + b_0 b^* = 0$ by construction. Second, by the update rule we have that
$$\begin{eqnarray}
(w_{t+1}, b_{t+1})^\top (w^*, b^*) = & (w_t, b_t)^\top (w^*, b^*) + y_t \left(x_t^\top w^* + b^*\right)\\
\geq & (w_t, b_t)^\top (w^*, b^*) + \epsilon \\
\geq & (t+1) \epsilon
\end{eqnarray}$$
Here the first equality follows from the definition of the weight updates. The next inequality follows from the fact that $(w^*, b^*)$ separate the problem with margin at least $\epsilon$, and the last inequality is simply a consequence of iterating this inequality $t+1$ times. Growing alignment between the 'ideal' and the actual weight vectors is great, but only if the actual weight vectors don't grow too rapidly. So we need a bound on their length:
$$\begin{eqnarray}
\|(w_{t+1}, b_{t+1})\|^2 \leq & \|(w_t, b_t)\|^2 + 2 y_t x_t^\top w_t + 2 y_t b_t + \|(x_t, 1)\|^2 \\
= & \|(w_t, b_t)\|^2 + 2 y_t \left(x_t^\top w_t + b_t\right) + \|(x_t, 1)\|^2 \\
\leq & \|(w_t, b_t)\|^2 + R^2 + 1 \\
\leq & (t+1) (R^2 + 1)
\end{eqnarray}$$
Now let's combine both inequalities. By Cauchy-Schwarz, i.e. $\|a\| \cdot \|b\| \geq a^\top b$, and the first inequality we have that $t \epsilon \leq (w_t, b_t)^\top (w^*, b^*) \leq \|(w_t, b_t)\| \sqrt{2}$. Using the second inequality we furthermore get $\|(w_t, b_t)\| \leq \sqrt{t (R^2 + 1)}$. Combined this yields
$$t \epsilon \leq \sqrt{2 t (R^2 + 1)}$$
This is a strange inequality - we have a linear term on the left and a sublinear term on the right, so it cannot hold indefinitely for large $t$. The only logical conclusion is that updates must stop before the inequality is violated. Solving for $t$ gives $t \leq 2 (R^2 + 1)/\epsilon^2$, which proves our claim.
**Note** - sometimes the perceptron convergence theorem is written without bias $b$. In this case a lot of things get simplified both in the proof and in the bound, since we can do away with the constant terms. Without going through details, the theorem becomes $t \leq R^2/\epsilon^2$.
**Note** - the perceptron convergence proof crucially relied on the fact that the data is actually separable. If this is not the case, the perceptron algorithm will diverge: it will simply keep trying to get things right by updating $(w,b)$, and since it has no safeguard to keep the parameters bounded, the solution will get worse. This sounds like an 'academic' concern, alas it is not. The avatar in the computer game [Black and White](https://en.wikipedia.org/wiki/Black_%26_White_(video_game%29) used a perceptron to adapt its behavior. Due to the poorly implemented update rule the game quickly became unplayable after a few hours (as one of the authors can confirm).
## Stochastic Gradient Descent
The perceptron algorithm can also be viewed as a stochastic gradient descent algorithm, albeit with a rather strange loss function: $\mathrm{max}(0, -y f(x))$. This is commonly called the perceptron loss (a shifted relative of the hinge loss $\mathrm{max}(0, 1 - y f(x))$). As can be checked quite easily, its gradient is $0$ whenever $y f(x) > 0$, i.e. whenever $x$ is classified correctly, and gradient $-y$ (with respect to $f(x)$) for incorrect classification. For a linear function, this leads exactly to the updates that we have (with the minor difference that we treat $f(x) = 0$ as an example of incorrect classification). To get some intuition, let's plot the loss function.
```
f = np.arange(-5,5,0.1)
zero = np.zeros(shape=(f.size))
lplus = np.max(np.array([f,zero]), axis=0)
lminus = np.max(np.array([-f,zero]), axis=0)
plt.plot(f,lplus, label='max(0,f(x))')
plt.plot(f,lminus, label='max(0,-f(x))')
plt.legend()
plt.show()
```
More generally, a stochastic gradient descent algorithm uses the following template:
```
initialize w
loop over data and labels (x,y):
compute f(x)
compute loss gradient g = partial_w l(y, f(x))
w = w - eta g
```
Here the learning rate $\eta$ may well change as we iterate over the data. Moreover, we may traverse the data in nonlinear order (e.g. we might reshuffle the data), depending on the specific choices of the algorithm. The issue is that as we go over the data, sometimes the gradient might point us into the right direction and sometimes it might not. Intuitively, on average things *should* get better. But to be really sure, there's only one way to find out - we need to prove it. We pick a simple and elegant (albeit a bit restrictive) proof of [Nesterov and Vial](http://dl.acm.org/citation.cfm?id=1377347).
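The template above, instantiated for the perceptron loss with plain NumPy (a sketch on synthetic separable data, independent of the mxnet code earlier; the margin filter mimics what `getfake` does):

```python
import numpy as np

rng = np.random.RandomState(1)
w_true = np.array([1.0, -2.0]) / np.sqrt(5.0)   # unit-norm separating direction
X = rng.randn(200, 2)
# Keep only points at least margin 0.3 away from the separator
X = X[np.abs(X @ w_true) > 0.3]
Y = np.sign(X @ w_true)

w = np.zeros(2)
eta = 1.0
for epoch in range(500):
    updates = 0
    for x, y in zip(X, Y):
        # SGD step: the gradient of max(0, -y w.x) is -y x on a mistake, else 0
        if y * (w @ x) <= 0:
            w -= eta * (-y * x)
            updates += 1
    if updates == 0:   # a full clean pass: converged
        break

train_errors = int(np.sum(Y * (X @ w) <= 0))
print(train_errors)  # 0: the data is separated
```

With a positive margin, the convergence theorem from the previous section guarantees that the number of updates, and hence the number of epochs until a clean pass, is finite.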
The situation we consider are *convex* losses. This is a bit restrictive in the age of deep networks but still quite instructive (in addition to that, nonconvex convergence proofs are a lot messier). For recap - a convex function $f(x)$ satisfies $f(\lambda x + (1-\lambda) x') \leq \lambda f(x) + (1-\lambda) f(x')$, that is, the linear interpolant between function values is *larger* than the function values in between. Likewise, a convex set $S$ is a set where for any points $x, x' \in S$ the line $[x,x']$ is in the set, i.e. $\lambda x + (1-\lambda) x' \in S$ for all $\lambda \in [0,1]$. Now assume that $w^*$ is the minimizer of the expected loss that we are trying to minimize, e.g.
$$w^* = \mathrm{argmin}_w R(w) \text{ where } R(w) = \frac{1}{m} \sum_{i=1}^m l(y_i, f(x_i, w))$$
Let's assume that we actually *know* that $w^*$ is contained in some set convex set $S$, e.g. a ball of radius $R$ around the origin. This is convenient since we want to make sure that during optimization our parameter $w$ doesn't accidentally diverge. We can ensure that, e.g. by shrinking it back to such a ball whenever needed.
Secondly, assume that we have an upper bound on the magnitude of the gradient $g_i := \partial_w l(y_i, f(x_i, w))$ for all $i$ by some constant $L$ (it's called so since this is often referred to as the Lipschitz constant). Again, this is super useful since we don't want $w$ to diverge while we're optimizing. In practice, many algorithms employ e.g. *gradient clipping* to force our gradients to be well behaved, by shrinking the gradients back to something tractable.
Third, to get rid of variance in the parameter $w_t$ that is obtained during the optimization, we use the weighted average over the entire optimization process as our solution, i.e. we use $\bar{w} := \sum_t \eta_t w_t / \sum_t \eta_t$.
Let's look at $r_t := \|w_t - w^*\|^2$, the squared distance between the optimal solution vector $w^*$ and what we currently have. It is bounded as follows:
$$\begin{eqnarray}
\|w_{t+1} - w^*\|^2 = & \|w_t - w^*\|^2 + \eta_t^2 \|g_t\|^2 - 2 \eta_t g_t^\top (w_t - w^*) \\
\leq & \|w_t - w^*\|^2 + \eta_t^2 L^2 - 2 \eta_t g_t^\top (w_t - w^*)
\end{eqnarray}$$
Next we use convexity of $R(w)$. We know that $R(w^*) \geq R(w_t) + \partial_w R(w_t)^\top (w^* - w_t)$ and moreover that the average of function values is larger than the function value of the average, i.e. $\sum_{t=1}^T \eta_t R(w_t) / \sum_t \eta_t \geq R(\bar{w})$. The first inequality allows us to bound the expected decrease in distance to optimality via
$$\mathbf{E}[r_{t+1} - r_t] \leq \eta_t^2 L^2 - 2 \eta_t \mathbf{E}[g_t^\top (w_t - w^*)] \leq
\eta_t^2 L^2 - 2 \eta_t \mathbf{E}[R(w_t) - R(w^*)]$$
Summing over $t$ and using the facts that $r_T \geq 0$ and that $w$ is contained inside a ball of radius $R$ yields:
$$-R^2 \leq L^2 \sum_{t=1}^T \eta_t^2 - 2 \sum_t \eta_t \mathbf{E}[R(w_t) - R(w^*)]$$
Rearranging terms, using convexity of $R$ the second time, and dividing by $\sum_t \eta_t$ yields a bound on how far we are likely to stray from the best possible solution:
$$\mathbf{E}[R(\bar{w})] - R(w^*) \leq \frac{R^2 + L^2 \sum_{t=1}^T \eta_t^2}{2\sum_{t=1}^T \eta_t}$$
Depending on how we choose $\eta_t$ we will get different bounds. For instance, if we make $\eta$ constant, i.e. if we use a constant learning rate, we get the bound $(R^2 + L^2 \eta^2 T)/(2 \eta T)$. This is minimized for $\eta = R/(L\sqrt{T})$, yielding a bound of $RL/\sqrt{T}$. A few things are interesting in this context:
* If we are potentially far away from the optimal solution, we should use a large learning rate (the O(R) dependency).
* If the gradients are potentially large, we should use a smaller learning rate (the O(1/L) dependency).
* If we have a long time to converge, we should use a smaller learning rate, but not too small.
* Large gradients and a large degree of uncertainty as to how far we are away from the optimal solution lead to poor convergence.
* More optimization steps make things better.
None of these insights are terribly surprising, albeit useful to keep in mind when we use SGD in the wild. And this was the very point of going through this somewhat tedious proof. Furthermore, if we use a decreasing learning rate, e.g. $\eta_t = O(1/\sqrt{t})$, then our bounds are somewhat less tight, and we get a bound of $O(\log T / \sqrt{T})$ bound on how far away from optimality we might be. The key difference is that for the decreasing learning rate we need not know when to stop. In other words, we get an anytime algorithm that provides a good result at any time, albeit not as good as what we could expect if we knew how much time to optimize we have right from the beginning.
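The constant-rate bound $(R^2 + L^2 \eta^2 T)/(2 \eta T)$ and its minimizer $\eta = R/(L\sqrt{T})$ can be checked numerically; a sketch with arbitrary illustrative constants:

```python
import numpy as np

R, L, T = 2.0, 5.0, 10_000   # illustrative radius, Lipschitz constant, step count

def bound(eta):
    # Constant learning-rate suboptimality bound from the derivation above
    return (R**2 + L**2 * eta**2 * T) / (2 * eta * T)

eta_star = R / (L * np.sqrt(T))       # closed-form minimizer
etas = np.linspace(1e-4, 0.1, 10_000)  # grid of alternative learning rates

# The minimizer attains the claimed R*L/sqrt(T) value and beats every grid point
print(bound(eta_star), R * L / np.sqrt(T))
```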
## Next
[Softmax regression from scratch](../chapter02_supervised-learning/softmax-regression-scratch.ipynb)
For whinges or inquiries, [open an issue on GitHub.](https://github.com/zackchase/mxnet-the-straight-dope)
```
import seaborn as sns
import matplotlib.pyplot as plt
import sys
import os
import json
import pandas as pd
from pathlib import Path
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
from src.refinement import knowledge
%run utils/attention_graph.py
%run utils/mlflow_query.py
%run utils/percentiles.py
%run utils/loading.py
%run utils/comparison.py
%run utils/refinement.py
mlflow_helper = MlflowHelper(pkl_file=Path("mlflow_run_df.pkl"))
#mlflow_helper.query_all_runs(query_metrics=False)
```
# Experiment Results
## Mimic
```
relevant_mimic_ref_df = mlflow_helper.mimic_run_df(include_noise=False, include_refinements=True)
relevant_mimic_ref_df = relevant_mimic_ref_df[
relevant_mimic_ref_df["data_tags_refinement_type"].fillna("").astype(str).apply(len) > 0
].copy()
relevant_mimic_ref_df['refinement_run'] = relevant_mimic_ref_df["data_tags_refinement_type"].apply(lambda x: x.split("_")[0])
relevant_mimic_ref_df['refinement_type'] = relevant_mimic_ref_df["data_tags_refinement_type"].apply(lambda x: "_".join(x.split("_")[1:]))
relevant_mimic_ref_df
mimic_accuracy_df = mlflow_helper.load_best_metrics_for_ids(run_ids=set(relevant_mimic_ref_df['info_run_id']))
mimic_accuracy_df['refinement_run'] = mimic_accuracy_df["data_tags_refinement_type"].apply(lambda x: x.split("_")[0])
mimic_accuracy_df['refinement_type'] = mimic_accuracy_df["data_tags_refinement_type"].apply(lambda x: "_".join(x.split("_")[1:]))
mimic_accuracy_df['refinement_type_order'] = mimic_accuracy_df['refinement_type'].replace({
'reference':0,
'original':1,
'refinement_0':2,
'refinement_1':3,
'refinement_2':4,})
mimic_accuracy_df
g = sns.lineplot(data=mimic_accuracy_df[
(mimic_accuracy_df['data_params_RefinementConfigreference_file_knowledge'].fillna('').apply(len) > 0) &
(mimic_accuracy_df['data_params_RefinementConfigedges_to_add'].fillna(0.0).astype(float) == 0.0) &
(mimic_accuracy_df['val_top_20_categorical_accuracy_history_best'].fillna(-1) > 0)
].sort_values(by="refinement_type_order"),
x="refinement_type",
y="val_top_20_categorical_accuracy_history_best",
hue="data_params_RefinementConfigoriginal_file_knowledge",
estimator=None,
units="refinement_run",
sort=False,
)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
g = sns.lineplot(data=mimic_accuracy_df[
(mimic_accuracy_df['data_params_RefinementConfigreference_file_knowledge'].fillna('').apply(len) > 0) &
(mimic_accuracy_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) == 0.1) &
(mimic_accuracy_df['val_top_20_categorical_accuracy_history_best'].fillna(-1) > 0)
].sort_values(by="refinement_type_order"),
x="refinement_type",
y="val_top_20_categorical_accuracy_history_best",
hue="data_params_RefinementConfigoriginal_file_knowledge",
estimator=None,
units="refinement_run",
sort=False,
)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
mimic_accuracy_df_p = calculate_accuracies_per_percentiles(
relevant_run_df=relevant_mimic_ref_df,
k=20, num_percentiles=10, num_input_percentiles=10,
percentile_names=[
'avg_input_frequencies_percentile',
'median_input_frequencies_percentile',
'min_input_frequencies_percentile',
'p10_input_frequencies_percentile',
'unknown_inputs_percentile',
'output_frequency_percentile',
'avg_input_frequencies_range',
'median_input_frequencies_range',
'min_input_frequencies_range',
'p10_input_frequencies_range',
'unknown_inputs_range',
],
local_mlflow_dir=mlflow_helper.local_mlflow_dir)
mimic_accuracy_df_p
plot_refinement_improvement(
accuracy_df=mimic_accuracy_df_p,
refinement_df=relevant_mimic_ref_df[relevant_mimic_ref_df["data_params_RefinementConfigedges_to_add"].fillna("0.0") == "0.0"],
reference_refinement_type="original",
)
plot_refinement_improvement(
accuracy_df=mimic_accuracy_df_p,
refinement_df=relevant_mimic_ref_df[relevant_mimic_ref_df["data_params_RefinementConfigedges_to_add"].fillna("0.0") == "0.1"],
reference_refinement_type="reference",
)
```
### GRAM without unknowns
```
def load_icd9_text():
    icd9_df = pd.read_csv("../data/icd9.csv")
    return (
        icd9_df[["child_name", "child_code"]]
        .drop_duplicates()
        .rename(columns={"child_name": "description", "child_code": "code"})
        .set_index("code")
        .to_dict("index")
    )
mimic_df = mlflow_helper.mimic_run_df(include_noise=False, include_refinements=False)
mimic_gram_run_id = mimic_df[
(mimic_df['data_tags_noise_type'].fillna('').apply(len) == 0) &
(mimic_df['data_params_ModelConfigbase_hidden_embeddings_trainable'] == 'False') &
(mimic_df['data_tags_model_type'] == 'gram')
].iloc[0].get('info_run_id')
texts = load_icd9_text()
unknowns = set([x for x,y in texts.items() if
(y["description"].lower().startswith("other")
or y["description"].lower().startswith("unspecified")
or y["description"].lower().endswith("unspecified")
or y["description"].lower().endswith("unspecified type")
or y["description"].lower().endswith("not elsewhere classified"))])
attentions = load_attention_weights(
mimic_gram_run_id,
mlflow_helper.local_mlflow_dir
)
print(sum([len(x) for x in attentions.values()]))
attentions_without_unknowns = {
x:[y for y in ys if y not in unknowns or x == y] for x,ys in attentions.items()
}
print(sum([len(x) for x in attentions_without_unknowns.values()]))
with open('gram_without_unknowns.json', 'w') as f:
    json.dump(attentions_without_unknowns, f)
unknowns2 = set([
x for x,y in texts.items()
if any(z in y["description"].lower() for z in ["other", "unspecified", "elsewhere"])
])
attentions_without_unknowns2 = {
x:[y for y in ys if y not in unknowns2 or x == y] for x,ys in attentions.items()
}
print(sum([len(x) for x in attentions_without_unknowns2.values()]))
with open('gram_without_unknowns2.json', 'w') as f:
    json.dump(attentions_without_unknowns2, f)
attentions_without_unknowns3 = {
x:(
[y for y in ys if y not in unknowns2 or x == y] if x not in unknowns2
else [x]
) for x,ys in attentions.items()
}
print(sum([len(x) for x in attentions_without_unknowns3.values()]))
with open('gram_without_unknowns3.json', 'w') as f:
    json.dump(attentions_without_unknowns3, f)
```
## Huawei
```
relevant_huawei_ref_df = mlflow_helper.huawei_run_df(include_noise=False, include_refinements=True)
relevant_huawei_ref_df = relevant_huawei_ref_df[
relevant_huawei_ref_df["data_tags_refinement_type"].fillna("").astype(str).apply(len) > 0
].copy()
relevant_huawei_ref_df['refinement_run'] = relevant_huawei_ref_df["data_tags_refinement_type"].apply(lambda x: x.split("_")[0])
relevant_huawei_ref_df['refinement_type'] = relevant_huawei_ref_df["data_tags_refinement_type"].apply(lambda x: "_".join(x.split("_")[1:]))
relevant_huawei_ref_df
huawei_accuracy_df = mlflow_helper.load_best_metrics_for_ids(run_ids=set(relevant_huawei_ref_df['info_run_id']))
huawei_accuracy_df['refinement_run'] = huawei_accuracy_df["data_tags_refinement_type"].apply(lambda x: x.split("_")[0])
huawei_accuracy_df['refinement_type'] = huawei_accuracy_df["data_tags_refinement_type"].apply(lambda x: "_".join(x.split("_")[1:]))
huawei_accuracy_df['refinement_type_order'] = huawei_accuracy_df['refinement_type'].replace({
'reference':0,
'original':1,
'refinement_0':2,
'refinement_1':3,
'refinement_2':4,})
huawei_accuracy_df
g = sns.lineplot(data=huawei_accuracy_df[
(huawei_accuracy_df['data_params_RefinementConfigreference_file_knowledge'].fillna('').apply(len) > 0) &
(huawei_accuracy_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) <= 0) &
(huawei_accuracy_df['val_top_5_categorical_accuracy_history_best'].fillna(-1) > 0)
].sort_values(by="refinement_type_order"),
x="refinement_type",
y="val_top_5_categorical_accuracy_history_best",
hue="data_params_RefinementConfigoriginal_file_knowledge",
estimator=None,
units="refinement_run",
sort=False,
)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
g = sns.lineplot(data=huawei_accuracy_df[
(huawei_accuracy_df['data_params_RefinementConfigreference_file_knowledge'].fillna('').apply(len) > 0) &
(huawei_accuracy_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) == 0.1) &
(huawei_accuracy_df['val_top_5_categorical_accuracy_history_best'].fillna(-1) > 0)
].sort_values(by="refinement_type_order"),
x="refinement_type",
y="val_top_5_categorical_accuracy_history_best",
hue="data_params_RefinementConfigoriginal_file_knowledge",
estimator=None,
units="refinement_run",
sort=False,
)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
huawei_accuracy_df_p = calculate_accuracies_per_percentiles(
relevant_run_df=relevant_huawei_ref_df,
k=20, num_percentiles=10, num_input_percentiles=10,
percentile_names=[
'avg_input_frequencies_percentile',
'median_input_frequencies_percentile',
'min_input_frequencies_percentile',
'p10_input_frequencies_percentile',
'unknown_inputs_percentile',
'output_frequency_percentile',
'avg_input_frequencies_range',
'median_input_frequencies_range',
'min_input_frequencies_range',
'p10_input_frequencies_range',
'unknown_inputs_range',
],
local_mlflow_dir=mlflow_helper.local_mlflow_dir)
huawei_accuracy_df_p
plot_refinement_improvement(
accuracy_df=huawei_accuracy_df_p,
refinement_df=relevant_huawei_ref_df[relevant_huawei_ref_df["data_params_RefinementConfigedges_to_add"].fillna("0.0") == "0.0"],
reference_refinement_type="original",
)
plot_refinement_improvement(
accuracy_df=huawei_accuracy_df_p,
refinement_df=relevant_huawei_ref_df[relevant_huawei_ref_df["data_params_RefinementConfigedges_to_add"].fillna("0.0") == "0.1"],
reference_refinement_type="reference",
)
```
# Graph Plotting
```
mimic_example_runs = {
'edges_added': {
'gram': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/gram_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.1')
]['refinement_run'].iloc[0],
'text': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/text_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.1')
]['refinement_run'].iloc[0],
'causal': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/causal_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.1')
]['refinement_run'].iloc[0],
},
'edges_removed': {
'gram': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/gram_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.0')
]['refinement_run'].iloc[0],
'text': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/text_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.0')
]['refinement_run'].iloc[0],
'causal': relevant_mimic_ref_df[
(relevant_mimic_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/causal_original_file_knowledge.json') &
(relevant_mimic_ref_df['data_params_RefinementConfigedges_to_add'].fillna('0.0') == '0.0')
]['refinement_run'].iloc[0],
},
}
mimic_example_runs
huawei_example_runs = {
'edges_added': {
'gram': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_gram_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) == 0.1)
]['refinement_run'].iloc[0],
'text': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_text_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) == 0.1)
]['refinement_run'].iloc[0],
'causal': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_causal_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) == 0.1)
]['refinement_run'].iloc[0],
},
'edges_removed': {
'gram': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_gram_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) <= 0)
].sort_values(by="info_start_time")['refinement_run'].iloc[0],
'text': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_text_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) <= 0)
].sort_values(by="info_start_time")['refinement_run'].iloc[0],
'causal': relevant_huawei_ref_df[
(relevant_huawei_ref_df['data_params_RefinementConfigoriginal_file_knowledge'] == 'data/huawei_causal_original_file_knowledge.json') &
(relevant_huawei_ref_df['data_params_RefinementConfigedges_to_add'].fillna(-1).astype(float) <= 0)
].sort_values(by="info_start_time")['refinement_run'].iloc[0],
},
}
huawei_example_runs
class RefinementConfig:
    min_edge_weight: float = 0.8
    max_train_examples: int = 100
    refinement_metric: str = "mean_outlier_score"
    refinement_metric_maxrank: int = 100
    max_edges_to_remove: int = 100
    max_refinement_metric: int = -1
    mlflow_dir: str = "../gsim01/mlruns/1/"
def plot_for_removed_edges(original_run_id, reference_run_id, local_mlflow_dir, use_node_mapping=False):
    original_attention = load_attention_weights(original_run_id, local_mlflow_dir)
    frequencies = load_input_frequency_dict(original_run_id, local_mlflow_dir)
    config = RefinementConfig()
    config.min_edge_weight = 0.5
    config.max_train_examples = 50
    config.max_refinement_metric = -2
    refined_knowledge = knowledge.KnowledgeProcessor(config).load_refined_knowledge(
        refinement_run_id=original_run_id, reference_run_id=reference_run_id)
    feature_node_mapping = convert_to_node_mapping(
        [x for x in original_attention], use_node_mapping
    )
    colored_connections = calculate_colored_connections(
        reference_connections=set(
            [(c, p) for c, ps in refined_knowledge.items() for p in ps]
        ),
        attention_weights=original_attention,
        feature_node_mapping=feature_node_mapping,
    )
    print("Removed", len(colored_connections), "edges")
    node_mapping = _create_graph_visualization(
        attention_weights=original_attention,
        threshold=0.25,
        run_name="refinement_edges_removed",
        node_mapping=feature_node_mapping,
        colored_connections=colored_connections)
    return node_mapping, frequencies
def plot_for_added_edges(original_run_id, reference_run_id, local_mlflow_dir, use_node_mapping=False):
    original_attention = load_attention_weights(original_run_id, local_mlflow_dir)
    reference_attention = load_attention_weights(reference_run_id, local_mlflow_dir)
    frequencies = load_input_frequency_dict(original_run_id, local_mlflow_dir)
    config = RefinementConfig()
    config.min_edge_weight = 0.5
    config.max_train_examples = 50
    config.max_edges_to_remove = 1000
    config.max_refinement_metric = 2
    refined_knowledge = knowledge.KnowledgeProcessor(config).load_refined_knowledge(
        refinement_run_id=original_run_id, reference_run_id=reference_run_id)
    # keep only the original attention edges that survive into the refined knowledge
    refined_attention = {c: {} for c in refined_knowledge}
    for child in original_attention:
        for parent in original_attention[child]:
            if parent in refined_knowledge.get(child, {}):
                refined_attention[child][parent] = original_attention[child][parent]
    feature_node_mapping = convert_to_node_mapping(
        [x for x in original_attention], use_node_mapping
    )
    colored_connections = calculate_colored_connections(
        reference_connections=set(
            [(c, p) for c, ps in reference_attention.items() for p in ps]
        ),
        attention_weights=refined_attention,
        feature_node_mapping=feature_node_mapping,
    )
    print("Added", len(colored_connections), "edges")
    node_mapping = _create_graph_visualization(
        attention_weights=refined_attention,
        threshold=0.25,
        run_name="refinement_edges_added",
        node_mapping=feature_node_mapping,
        colored_connections=colored_connections)
    return node_mapping, frequencies
_, frequencies = plot_for_removed_edges(
original_run_id=relevant_mimic_ref_df[
(relevant_mimic_ref_df['refinement_run'] == mimic_example_runs['edges_removed']['gram']) &
(relevant_mimic_ref_df['refinement_type'] == "original")
]["info_run_id"].iloc[0],
reference_run_id=relevant_mimic_ref_df[
(relevant_mimic_ref_df['refinement_run'] == mimic_example_runs['edges_removed']['gram']) &
(relevant_mimic_ref_df['refinement_type'] == "reference")
]["info_run_id"].iloc[0],
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
use_node_mapping=False,
)
_, frequencies = plot_for_removed_edges(
original_run_id=relevant_huawei_ref_df[
(relevant_huawei_ref_df['refinement_run'] == huawei_example_runs['edges_removed']['gram']) &
(relevant_huawei_ref_df['refinement_type'] == "original")
]["info_run_id"].iloc[0],
reference_run_id=relevant_huawei_ref_df[
(relevant_huawei_ref_df['refinement_run'] == huawei_example_runs['edges_removed']['gram']) &
(relevant_huawei_ref_df['refinement_type'] == "reference")
]["info_run_id"].iloc[0],
local_mlflow_dir=mlflow_helper.local_mlflow_dir,
use_node_mapping=True,
)
original_run_id = relevant_huawei_ref_df[
(relevant_huawei_ref_df['refinement_run'] == huawei_example_runs['edges_removed']['gram']) &
(relevant_huawei_ref_df['refinement_type'] == "original")
]["info_run_id"].iloc[0]
reference_run_id = relevant_huawei_ref_df[
(relevant_huawei_ref_df['refinement_run'] == huawei_example_runs['edges_removed']['gram']) &
(relevant_huawei_ref_df['refinement_type'] == "reference")
]["info_run_id"].iloc[0]
original_attention = load_attention_weights(original_run_id, mlflow_helper.local_mlflow_dir)
frequencies = load_input_frequency_dict(original_run_id, mlflow_helper.local_mlflow_dir)
config = RefinementConfig()
config.min_edge_weight = 0.5
config.max_train_examples = 1000
config.max_refinement_metric = 0
refined_knowledge = knowledge.KnowledgeProcessor(config).load_refined_knowledge(refinement_run_id=original_run_id, reference_run_id=reference_run_id)
feature_node_mapping = convert_to_node_mapping(
[x for x in original_attention], False,
)
colored_connections = calculate_colored_connections(
reference_connections=set(
[(c,p) for c,ps in refined_knowledge.items() for p in ps]
),
attention_weights=original_attention,
feature_node_mapping=feature_node_mapping,
)
print("Removed", len(colored_connections), "edges")
colored_connections
print('\n'.join([str((x, frequencies[x[0]]['absolute_frequency'])) for x in colored_connections if x[1] == "server"]))
with open("asldkfj.txt", "w") as file:
    file.write("\n".join([x for x, ys in original_attention.items() if "server" in ys and float(ys.get("server", -1)) > 0.5]))
node_mapping = _create_graph_visualization(
attention_weights=original_attention,
threshold=0.25,
run_name="refinement_edges_removed_gram",
node_mapping=feature_node_mapping,
colored_connections=colored_connections)
refined_knowledge_text = refined_knowledge
```
# Multiscalar Segregation Profiles on Urban Street Networks
This notebook examines how different definitions of urban space can affect observed measures of segregation. Using the spatial information theory index (H), we look at how different definitions of the local environment affect values of H for the same study area and population groups. Next, we calculate the multiscalar segregation profiles introduced by [Reardon et al](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2831394/) to examine differences between macro and micro segregation. Finally, we examine how these segregation profiles differ when we measure distance in the local environment as a straight line versus along a pedestrian transport network.
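Before turning to the spatial versions, it may help to see what the aspatial multigroup information theory index actually computes. The following is a toy implementation of the standard Theil formulation, $H = \sum_j t_j (E - E_j) / (T E)$, where $E$ is the overall entropy of the group proportions and $E_j$ the entropy within unit $j$. This is a sketch for intuition only, not the `segregation` package's implementation, and the counts below are hypothetical:

```python
import math

def multigroup_H(counts):
    """Multigroup information theory index H.

    counts: list of per-unit group tallies, e.g. [[white, black], ...],
    one row per spatial unit. Assumes at least two groups are present overall.
    """
    T = sum(sum(row) for row in counts)
    totals = [sum(col) for col in zip(*counts)]

    def entropy(props):
        return -sum(p * math.log(p) for p in props if p > 0)

    E = entropy([g / T for g in totals])
    return sum(
        sum(row) * (E - entropy([g / sum(row) for g in row])) / (T * E)
        for row in counts if sum(row) > 0
    )

# two perfectly segregated units -> H = 1; two identical units -> H = 0
assert abs(multigroup_H([[50, 0], [0, 50]]) - 1.0) < 1e-9
assert abs(multigroup_H([[25, 25], [25, 25]])) < 1e-9
```

The spatial variants below keep this formula but first replace each unit's raw counts with counts aggregated over its local environment.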
**Note:** I *highly* recommend installing `pandana` as a multithreaded library following [these instructions](http://udst.github.io/pandana/installation.html). The notebook will still run if you install from pip or anaconda, but the network computations will take considerably longer.
```
import geopandas as gpd
from pysal.explore.segregation.aspatial import Multi_Information_Theory
from pysal.explore.segregation.spatial import SpatialInformationTheory
from pysal.explore.segregation.network import get_osm_network
from pysal.explore.segregation.util import compute_segregation_profile
from pysal.lib.weights import Queen, Rook, Kernel
from geosnap.data import Community
from geosnap.data.data import get_lehd
from pandana.network import Network
from pysal.viz.splot.libpysal import plot_spatial_weights
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
First, we'll get census 2010 data from `geosnap` and project into state plane (so we have planar units for the kernel weights constructor)
```
dc = Community(source='ltdb', statefips='11')
df = dc.tracts.merge(dc.census[dc.census['year']==2010], on='geoid')
df = df.to_crs(epsg=6487)
df.plot()
```
**Note: there are likely to be some nontrivial edge effects since we're truncating the data artificially at the DC border**
Kernel weights operate on points, so we'll abstract each polygon to its centroid.
```
df_pts = df.copy()
df_pts['geometry'] = df_pts.centroid
```
We define local environments using spatial weights matrices that encode relationships among our units of observation. Weights matrices can take many forms, so we can choose how to parameterize the environment. Here we'll examine two contiguity weights, "queen" and "rook", which mean that the local environment of each census tract is the tract itself plus the adjacent tracts sharing either a vertex or a side (queen) or just a side (rook). We'll also use kernel distance-based weights. This type of weights matrix considers the local environment to be all tracts that fall within a specified distance of the focal tract, but applies a distance-decay function so that tracts further away have a smaller effect than tracts nearby. The network-based weights we'll examine later also work this way.
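To make the distance decay concrete, here is a sketch of a triangular kernel, which (if left unspecified) is the default kernel function in pysal's `Kernel` weights: a neighbor at distance $d$ receives weight $1 - d/h$ for bandwidth $h$, and zero beyond the bandwidth. The distances below are hypothetical, not taken from the DC data:

```python
def triangular_kernel(d, bandwidth):
    """Weight of a neighbor at distance d under a triangular kernel."""
    return max(0.0, 1.0 - d / bandwidth)

# with a 1 km bandwidth, a tract centroid 250 m away counts three times
# as much as one 750 m away, and anything beyond 1 km is ignored entirely
assert triangular_kernel(250, 1000) == 0.75
assert triangular_kernel(750, 1000) == 0.25
assert triangular_kernel(1500, 1000) == 0.0
```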
Here, we'll create 4 different weights matrices: queen, rook, 1km euclidian kernel, and 2km euclidian kernel
```
w_queen = Queen.from_dataframe(df)
w_rook = Rook.from_dataframe(df)
w_kernel_1k = Kernel.from_dataframe(df_pts, bandwidth=1000)
w_kernel_2k = Kernel.from_dataframe(df_pts, bandwidth=2000)
fig, ax = plt.subplots(1,4, figsize=(16,4))
plot_spatial_weights(w_queen, df, ax=ax[0])
ax[0].set_title('queen')
plot_spatial_weights(w_rook, df, ax=ax[1])
ax[1].set_title('rook')
plot_spatial_weights(w_kernel_1k, df, ax=ax[2])
ax[2].set_title('kernel 1k')
plot_spatial_weights(w_kernel_2k, df, ax=ax[3])
ax[3].set_title('kernel 2k')
```
These plots show us which tracts are considered neighbors with each other under each type of weights matrix. Internally, `segregation` uses the `_build_local_environment` function to turn these weights matrices into localized data. The different relationships implied by each matrix result in significantly different local environments, as shown below.
We'll measure *H* as a function of 4 racial categories, and we'll plot the non-Hispanic Black population to get a sense of how these local environments vary.
```
groups = ['n_nonhisp_white_persons', 'n_nonhisp_black_persons', 'n_hispanic_persons', 'n_asian_persons']
from pysal.explore.segregation.spatial.spatial_indexes import _build_local_environment
def plot_local_environment(w, ax):
    d = _build_local_environment(df, groups, w)
    d['geometry'] = df.geometry
    d = gpd.GeoDataFrame(d)
    d.plot('n_nonhisp_black_persons', k=6, scheme='quantiles', ax=ax)
    ax.axis('off')
fig, axs = plt.subplots(1,4, figsize=(16,4))
for i, wtype in enumerate([w_queen, w_rook, w_kernel_1k, w_kernel_2k]):
    plot_local_environment(w=wtype, ax=axs[i])
```
*Again, note that this is slightly misleading, since in this toy example we're not including data from Maryland and Virginia that would otherwise have a big impact on the "local environment" values displayed here.*
And, as we might expect, these different local environments result in different segregation statistics
```
# aspatial
Multi_Information_Theory(df, groups).statistic
# rook neighborhood
SpatialInformationTheory(df, groups, w=w_rook).statistic
# queen neighborhood
SpatialInformationTheory(df, groups, w=w_queen).statistic
# 1 kilometer kernel distance neighborhood
SpatialInformationTheory(df, groups, w=w_kernel_1k).statistic
# 2 kilometer kernel distance neighborhood
SpatialInformationTheory(df, groups, w=w_kernel_2k).statistic
```
As we increase the distance on the kernel density weights, we get a sense of how segregation varies across scales. Following Reardon et al., we can calculate *H* for a set of increasing distances and plot the results to see the variation in macro versus micro segregation. `segregation` provides the `compute_segregation_profile` function for that purpose. You pass a dataframe, a list of distances, and a set of groups for which to calculate the statistics.
```
distances = [1000.,2000.,3000.,4000.,5000.] # note these are floats
euclidian_profile = compute_segregation_profile(df_pts, groups=groups, distances=distances)
euclidian_profile
```
The drawback of kernel density weights is that urban space is not experienced in a straight line; it is instead conditioned by transport networks. In other words, the local street network can have a big impact on how easily a person may come into contact with others. For that reason, `segregation` can also calculate multiscalar segregation profiles using street-network distance. We include the `get_osm_network` function for downloading street network data from OpenStreetMap.
```
# convert back to lat/long and get an OSM street network
df = df.to_crs(epsg=4326)
```
Note: it can take a while to download a street network, so you can save it and read it back in using `pandana`.
```
#net = get_osm_network(df)
#net.save_hdf5('dc_network.h5')
net = Network.from_hdf5('dc_network.h5')
network_linear_profile = compute_segregation_profile(df_pts, groups=groups, network=net, distances=distances)
network_exponential_profile = compute_segregation_profile(df_pts, groups=groups, network=net, distances=distances, decay='exp', precompute=False)
# we're using the same network as before, so no need to precompute again
import matplotlib.pyplot as plt
```
We now have three different segregation profiles:
- an exponential kernel in euclidian space
- an exponential kernel in network space
- a linear kernel in network space
Let's plot all three of them and examine the differences.
```
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(euclidian_profile.keys(), euclidian_profile.values(), c='green', label='euclidian exp')
ax.plot(euclidian_profile.keys(), euclidian_profile.values(), c='green')
ax.scatter(network_linear_profile.keys(), network_linear_profile.values(), c='red', label='net linear')
ax.plot(network_linear_profile.keys(), network_linear_profile.values(), c='red')
ax.scatter(network_exponential_profile.keys(), network_exponential_profile.values(), c='blue', label='net exp')
ax.plot(network_exponential_profile.keys(), network_exponential_profile.values(), c='blue')
plt.xlabel('meters')
plt.ylabel('SIT')
plt.legend()
plt.show()
```
These results are interesting and show that measured levels of segregation differ according to the analyst's operationalization of space and distance. In general, the network kernels tend to estimate higher levels of segregation, and the euclidian profile has the steepest slope.
## Multiscalar Segregation Profiles for Residential and Workplace Areas
### with Block-Level Data
Here, we'll examine how segregation profiles vary by time of day by calculating multiscalar measures for both workplace populations (i.e. daytime) and residential populations (i.e. night time). We'll use more detailed block-level data from LEHD, and we will compare how the profiles differ when we measure using network distance, but weight further observations using a linear decay function versus an exponential decay.
Again, we'll read in the data, convert to a point representation, and project into state plane for our calculations. We'll use the `get_lehd` convenience function from `geosnap` to quickly collect block-level attributes for workplace and residential populations.
```
# you can download this file here: https://www2.census.gov/geo/tiger/TIGER2018/TABBLOCK/tl_2018_11_tabblock10.zip
blks = gpd.read_file('zip://tl_2018_11_tabblock10.zip')
blks = blks.to_crs(epsg=6487)
blks['geometry'] = blks.centroid  # take centroids after projecting, so they are computed in planar units
# we need both workplace area characteristics (wac) and residence area characteristics (rac)
de_wac = get_lehd('wac', state='dc', year='2015')
de_rac = get_lehd('rac', state='dc', year='2015')
# https://lehd.ces.census.gov/data/lodes/LODES7/LODESTechDoc7.3.pdf
# white = CR01
# black = CR02
# asian = CR04
# hispanic = CT02 - guessing these aren't exclusive,
# let's just do white-black in this case
groups_lehd = ['CR01', 'CR02']
blks_wac = blks.merge(de_wac, left_on='GEOID10', right_index=True)
blks_rac = blks.merge(de_rac, left_on='GEOID10', right_index=True)
rac_net = compute_segregation_profile(blks_rac, distances=distances, groups=groups_lehd, network=net, precompute=False)
wac_net = compute_segregation_profile(blks_wac, distances=distances, groups=groups_lehd, network=net, precompute=False)
rac_net_exp = compute_segregation_profile(blks_rac, distances=distances, groups=groups_lehd, network=net, decay='exp', precompute=False)
wac_net_exp = compute_segregation_profile(blks_wac, distances=distances, groups=groups_lehd, network=net, decay='exp', precompute=False)
fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(rac_net_exp.keys(), rac_net_exp.values(), c='blue', label='rac exponential')
ax.plot(rac_net_exp.keys(), rac_net_exp.values(), c='blue')
ax.scatter(wac_net_exp.keys(), wac_net_exp.values(), c='yellow', label='wac exponential')
ax.plot(wac_net_exp.keys(), wac_net_exp.values(), c='yellow')
ax.scatter(rac_net.keys(), rac_net.values(), c='green', label='rac linear')
ax.plot(rac_net.keys(), rac_net.values(), c='green')
ax.scatter(wac_net.keys(), wac_net.values(), c='red', label='wac linear')
ax.plot(wac_net.keys(), wac_net.values(), c='red')
plt.xlabel('meters')
plt.ylabel('SIT')
plt.legend()
plt.show()
```
A few things to take in here:
- These curves are significantly different from the tract-based profiles above.
  - They use different years, different populations, and different aggregation levels, so we expect them to differ (but it raises the question of *which* of these variables is causing the difference).
- Residential segregation is **so** much larger than workplace segregation.
  - In fact, workplace segregation in DC is essentially 0 for environments greater than 1 km.
- The residential curve falls faster for exponential decay (as we might expect).
- There's almost no discernible difference between workplace segregation profiles using different decay types.

What might this say about transport equity and spatial mismatch?
# Data-driven calibration of transition viability thresholds
We used the hierarchy of occupations inherent in the ESCO data set to derive a data-driven threshold for viable transitions along with an additional indicator for transitions that are highly viable.
# 0. Import dependencies and inputs
```
%run ../notebook_preamble_Transitions.ipy
import mapping_career_causeways.plotting_utils as plotting_utils
from scipy.stats import percentileofscore
from itertools import combinations
data = load_data.Data()
def flatten_without_diagonal(W):
    """
    Return all values of the matrix W, except the diagonal, as a flat vector.
    """
    return np.concatenate((
        W[np.triu_indices(W.shape[0], k=1)],
        W[np.tril_indices(W.shape[0], k=-1)]))
# Import occupation table
occupations = data.occupation_hierarchy
# Combined similarity measure
W = load_data.Similarities().W_combined
occupations.head(1)
```
# 1. Calibrate viability threshold
We set the threshold for viability to correspond to the typical similarity between closely related occupations
that belong to the same ISCO unit group. For example, shop assistants and sales processors are both in the ISCO unit group ‘Shop sales assistants’ with the four-digit code 5223, and it is reasonable to assume that the transition between these two occupations should be viable.
We calculated the average within-group occupation similarity for each ISCO unit group that had more than one occupation (using the combined similarity measure) and used the distribution of these within-group averages to make a judgement on the viability threshold. In the interest of obtaining more robust estimates of within-group averages, we used all occupations from the ESCO framework.
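To make the averaging concrete, here is the same within-group computation on a toy similarity matrix (the values, `W_toy`, and `group_ids` are purely illustrative):

```python
import numpy as np
from itertools import combinations

# Hypothetical 4-occupation similarity matrix (rows: origin, columns: destination);
# occupations 0 and 1 are assumed to share an ISCO unit group.
W_toy = np.array([[1.0, 0.8, 0.2, 0.1],
                  [0.7, 1.0, 0.3, 0.2],
                  [0.2, 0.3, 1.0, 0.5],
                  [0.1, 0.2, 0.4, 1.0]])
group_ids = [0, 1]

# Within-group similarities, counting both transition directions
sims = []
for i, j in combinations(group_ids, 2):
    sims.append(W_toy[i, j])
    sims.append(W_toy[j, i])

mean_sim = np.mean(sims)
print(mean_sim)  # (0.8 + 0.7) / 2 = 0.75
```

The cells below perform exactly this, for every ISCO unit group with more than one occupation.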
```
# Occupational hierarchy level that we are using (here, ISCO unit groups)
group_category = 'isco_level_4'
# Get all groups *with more than one occupation*
df = occupations.groupby(group_category).count()
parents_with_children = df[df.id>1].index.to_list()
### Calculate within-group similarity
# similarity values
w_same_group = []
# compared occupation IDs
pairs = []
# list of lists for each group of occupations
w_within_groups = []
for j in range(len(parents_with_children)):
ids = occupations[occupations[group_category] == parents_with_children[j]].id.to_list()
w_within_group = []
for pair in list(combinations(ids,2)):
# Transitions in both directions
w_same_group.append(W[pair])
pairs.append(pair)
w_same_group.append(W[(pair[1],pair[0])])
pairs.append((pair[1],pair[0]))
        # List of lists, storing each group's within-group similarities
w_within_group.append(W[pair])
w_within_group.append(W[(pair[1],pair[0])])
w_within_groups.append(w_within_group)
# Calculate the average within-group similarity for each group of occupations
mean_within_group_sim = [np.mean(y) for y in w_within_groups]
# Median & mean average-within-group similarities
print(np.median(mean_within_group_sim))
print(np.mean(mean_within_group_sim))
# Check the spread of the average-within-group similarities (in terms of standard deviations)
print(f'-2SD: {np.mean(mean_within_group_sim) - 2*np.std(mean_within_group_sim) :.3f}')
print(f'-1.5SD: {np.mean(mean_within_group_sim) - 1.5*np.std(mean_within_group_sim) :.3f}')
print(f'-1SD: {np.mean(mean_within_group_sim) - 1*np.std(mean_within_group_sim) :.3f}')
print(f'0SD: {np.mean(mean_within_group_sim) :.2f}')
print(f'+1SD: {np.mean(mean_within_group_sim) + 1*np.std(mean_within_group_sim) :.3f}')
print(f'+1.5SD: {np.mean(mean_within_group_sim) + 1.5*np.std(mean_within_group_sim) :.3f}')
print(f'+2SD: {np.mean(mean_within_group_sim) + 2*np.std(mean_within_group_sim) :.3f}')
```
Interestingly, there was considerable variation across different ISCO unit groups, and hence we set the viability threshold as the mean minus one standard deviation of these within-group averages (rounded to the first decimal point). This yielded a viability threshold equal to 0.30, with approximately 80 per cent of the within-group transitions above this threshold.
```
VIABILITY_THRESHOLD = np.round(np.mean(mean_within_group_sim) - 1*np.std(mean_within_group_sim), 1)
print(VIABILITY_THRESHOLD)
# Fraction of transitions above this threshold
all_w = [x for y in w_within_groups for x in y]
np.sum(np.array(all_w)>VIABILITY_THRESHOLD)/len(all_w)
# Fraction of ISCO unit groups above this threshold
np.sum(np.array(mean_within_group_sim)>VIABILITY_THRESHOLD)/len(mean_within_group_sim)
```
## 1.1 Visualise the distribution
```
# Distribution of within-group similarities
sns.set_style("ticks")
plt.figure(figsize=(7,5))
sns.distplot(mean_within_group_sim, kde=False, rug=True, bins=20)
# Viability threshold
plt.plot([VIABILITY_THRESHOLD, VIABILITY_THRESHOLD], [0, 60], c='r')
plt.xlabel('Within-group similarity (ISCO unit groups)', fontsize=16)
plt.ylabel('Number of unit groups', fontsize=16)
plt.ylim([0, 60])
plt.tick_params(axis='both', which='major', labelsize=14)
plotting_utils.export_figure('fig_54')
plt.show()
```
### Check examples
```
df_isco_titles = pd.DataFrame(data={
'sim': mean_within_group_sim,
'isco': parents_with_children}).merge(data.isco_titles[['isco', 'isco_title']], how='left')
df_isco_titles.sort_values('sim')
```
# 2. Calibrate highly viable transitions
The ESCO framework defines a further hierarchy of broader and narrower ESCO occupations that goes beyond the
ISCO unit groups (cf. Figure 47, page 85 in the Mapping Career Causeways report). For example, butcher is related to two other, narrower occupations: halal butcher and kosher butcher. We leveraged this hierarchy to
derive an indicator for highly viable transitions by defining ‘broad ESCO groups’ that contain the broad ESCO level
5 occupation and all its narrower occupations (Figure 55, page 94 in the report).
Analogous to the calibration process of the viability threshold, we set the indicator for highly viable transitions equal to the mean minus one standard deviation of the average within-group similarities of all broad ESCO groups rounded to the nearest decimal point.
```
# Occupational hierarchy level that we are using (here, broad ESCO groups)
group_category = 'top_level_parent_id'
# Get all broader top level parent occupations *with children (narrower occupations)*
df = occupations.groupby(group_category).count()
parents_with_children = df[df.id>1].index.to_list()
## Calculate within-group similarity across all broader top-level occupations
# similarity values
w_same_group = []
# compared occupation IDs
pairs = []
# list of lists for each group of occupations
w_within_groups = []
for j in range(len(parents_with_children)):
ids = occupations[occupations[group_category] == parents_with_children[j]].id.to_list()
w_within_group = []
for pair in list(combinations(ids,2)):
w_same_group.append(W[pair])
pairs.append(pair)
w_same_group.append(W[(pair[1],pair[0])])
pairs.append((pair[1],pair[0]))
w_within_group.append(W[pair])
w_within_group.append(W[(pair[1],pair[0])])
w_within_groups.append(w_within_group)
# Calculate the average within-group similarity for each group of occupations
mean_within_group_sim = [np.mean(y) for y in w_within_groups]
# Median & mean average-within-group similarities
print(np.median(mean_within_group_sim))
print(np.mean(mean_within_group_sim))
# Standard deviations
print(f'-2SD: {np.mean(mean_within_group_sim) - 2*np.std(mean_within_group_sim) :.2f}')
print(f'-1.5SD: {np.mean(mean_within_group_sim) - 1.5*np.std(mean_within_group_sim) :.2f}')
print(f'-1SD: {np.mean(mean_within_group_sim) - 1*np.std(mean_within_group_sim) :.2f}')
print(f'0SD: {np.mean(mean_within_group_sim) :.2f}')
print(f'+1SD: {np.mean(mean_within_group_sim) + 1*np.std(mean_within_group_sim) :.2f}')
print(f'+1.5SD: {np.mean(mean_within_group_sim) + 1.5*np.std(mean_within_group_sim) :.2f}')
print(f'+2SD: {np.mean(mean_within_group_sim) + 2*np.std(mean_within_group_sim) :.2f}')
HIGHLY_VIABLE_THRESHOLD = np.round(np.mean(mean_within_group_sim) - 1*np.std(mean_within_group_sim), 1)
print(HIGHLY_VIABLE_THRESHOLD)
```
## 2.1 Visualise the distribution
```
# Distribution of within-group similarities
sns.set_style("ticks")
plt.figure(figsize=(7,5))
sns.distplot(mean_within_group_sim, kde=False, rug=True, bins=15)
plt.plot([HIGHLY_VIABLE_THRESHOLD, HIGHLY_VIABLE_THRESHOLD], [0, 60], c='r')
plt.xlabel('Within-group similarity (Broad ESCO occupation groups)', fontsize=16)
plt.ylabel('Number of broad ESCO groups', fontsize=16)
plt.ylim([0, 45])
plt.tick_params(axis='both', which='major', labelsize=14)
plotting_utils.export_figure('fig_56')
plt.show()
```
### Check examples
```
df_occ_titles = pd.DataFrame(data={
'sim':mean_within_group_sim,
'id': parents_with_children}).merge(data.occ[['id','preferred_label']], how='left')
df_occ_titles.sort_values('sim')
```
# 3. Summarise the viability thresholds
Based on the observations above, we define transition similarities in the following way:
- **Viable** transitions have similarity above 0.30. This corresponds approximately to the mean minus one standard deviation of within-group similarity for four-digit ISCO unit groups.
- **Highly viable transitions** have similarity above 0.40. This corresponds to mean minus one standard deviation for within-group similarity for broad ESCO occupation groups.
```
VIABILITY_THRESHOLD
HIGHLY_VIABLE_THRESHOLD
```
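The two thresholds can be wrapped in a small helper that labels an arbitrary similarity value — a sketch using the rounded values derived above (0.30 and 0.40):

```python
VIABILITY_THRESHOLD = 0.3
HIGHLY_VIABLE_THRESHOLD = 0.4

def classify_transition(similarity):
    """Label a transition by its combined similarity score."""
    if similarity > HIGHLY_VIABLE_THRESHOLD:
        return 'highly viable'
    if similarity > VIABILITY_THRESHOLD:
        return 'viable'
    return 'not viable'

print(classify_transition(0.25))  # not viable
print(classify_transition(0.35))  # viable
print(classify_transition(0.55))  # highly viable
```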
# 4. Visualise the distribution of all similarities
```
# All transition similarities
w = flatten_without_diagonal(W)
# Characterise the thresholds with respect to all possible transitions (between all ESCO occupations)
print(f'Viable transitions are in the {percentileofscore(w, VIABILITY_THRESHOLD):.1f} percentile')
print(f'Highly viable transitions are in the {percentileofscore(w, HIGHLY_VIABLE_THRESHOLD):.1f} percentile')
# Distribution of all similarities
sns.set_style("ticks")
plt.figure(figsize=(7,5))
sns.distplot(w, kde=False)
# Viability thresholds
plt.plot([VIABILITY_THRESHOLD, VIABILITY_THRESHOLD], [0, 2e+6], c='r')
plt.plot([HIGHLY_VIABLE_THRESHOLD, HIGHLY_VIABLE_THRESHOLD], [0, 2e+6], c='b')
plt.xlabel('Occupation similarity', fontsize=16)
plt.ylabel('Number of comparisons (millions)', fontsize=16)
plt.ylim([0, 1.3e+6])
plt.tick_params(axis='both', which='major', labelsize=14)
plotting_utils.export_figure('fig_57')
plt.show()
```
### Lab4: Conditionals
#### Student: Juan Vecino
#### Group: Turno 2
#### Date: 09/10/2020
### Lab 5.2. Discount calculator
```
while True:
items = int(input('Cuantos items vas a comprar: '))
if items <= 10 and items >0:
price = 10
break
elif items>=11 and items<=100:
price = 7
break
elif items>=101 and items <= 1000:
price=5
break
else:
print('Choose another number')
continue
```
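The loop above only fixes the unit price; a natural extension (not part of the original lab) is to wrap the tiers in a function and compute the total bill:

```python
def unit_price(items):
    """Tiered unit price used in the lab: 1-10 -> 10, 11-100 -> 7, 101-1000 -> 5."""
    if 1 <= items <= 10:
        return 10
    if 11 <= items <= 100:
        return 7
    if 101 <= items <= 1000:
        return 5
    raise ValueError('Choose another number')

def total_cost(items):
    return items * unit_price(items)

print(total_cost(5))    # 50
print(total_cost(50))   # 350
```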
### Lab 5.3. Body Mass Index calculator
```
while True:
weight = float(input('Cuanto pesas(kg)? '))
height = float(input('Cuanto mides(m)? '))
    if weight > 0 and height > 0:
break
else:
print('La masa o la altura es negativa por favor vuelve a intentarlo.')
continue
IMC = weight/height**2
print('Tiene un Indice de masa corporal de',IMC)
if IMC < 16.0:
    print('Tienes delgadez extrema')
elif IMC < 18.5:
    print('Tienes delgadez')
elif IMC < 25:
    print('Estas normal')
else:
    print('Tienes sobre peso')
```
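The same calculation can be packaged as a function that validates its inputs and covers the full range of categories — a sketch, with cut-offs following the common WHO-style ranges:

```python
def bmi_category(weight, height):
    """Return the BMI and a label for positive weight (kg) and height (m)."""
    if weight <= 0 or height <= 0:
        raise ValueError('weight and height must be positive')
    bmi = weight / height ** 2
    if bmi < 18.5:
        label = 'underweight'
    elif bmi < 25:
        label = 'normal'
    else:
        label = 'overweight'
    return bmi, label

bmi, label = bmi_category(70, 1.75)
print(f'BMI {bmi:.1f}: {label}')  # BMI 22.9: normal
```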
### Lab 5.4 Password generator
```
import random
# Imports
abecedario_lower = 'qwertyuiopasdfghjklñzxcvbnm'
abecedario_upper = abecedario_lower.upper()
simbolos='|@#¢∞¬÷œ≠'
numero = '1234567890'
password = []
# Add the mandatory character classes
sim = random.sample(simbolos,1)[0]
abe_lower = random.sample(abecedario_lower,1)[0]
abe_upper = random.sample(abecedario_upper,1)[0]
dig = random.sample(numero,1)[0]
condiciones = [sim,abe_lower,abe_upper,dig]
for valor in condiciones:
password.append(valor)
while len(password)!=8:
password.append(random.sample([sim,abe_lower,abe_upper,dig],1)[0])
random.shuffle(password)
''.join(password)
```
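`random.sample` is fine for an exercise, but for real credentials Python's `secrets` module is the appropriate source of randomness. A minimal sketch of the same idea (the symbol set here is an arbitrary choice):

```python
import secrets
import string

SYMBOLS = '|@#!?%&'

def make_password(length=8):
    """Build a password with at least one lower, upper, digit and symbol."""
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, SYMBOLS]
    chars = [secrets.choice(pool) for pool in pools]  # one from each mandatory class
    all_chars = ''.join(pools)
    chars += [secrets.choice(all_chars) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)  # cryptographically sound shuffle
    return ''.join(chars)

pw = make_password()
print(pw, len(pw))
```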
### Lab 5.5. User registration application
```
# Username
username = input('Choose an 8-character user name:\t')
while len(username) != 8:
    username = input('Choose an 8-character user name:\t')
print(f'User name: {username}')
# Password
print('\nChoose a password that:\n-Contains 8 characters\n-Contains at least 1 lower case letter\n-Contains at least 1 upper case letter\n-Contains at least 1 digit\n-Contains at least 1 symbol')
for i in range(3):
print(f'\nTry number {i+1}, remaining:{2-i}')
    contraseña = input('\nChoose an 8-character password:\t')
    condicion = (len(contraseña) == 8
                 and any(i.islower() for i in contraseña)
                 and any(i.isupper() for i in contraseña)
                 and any(i.isdigit() for i in contraseña)
                 and any(i in simbolos for i in contraseña))
if condicion == True:
break
elif i==2:
print('\nTe pongo yo la contraseña')
contraseña=password
print(contraseña)
# Registration
registro={}
registro[username]=contraseña
print(registro)
```
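The inline condition is easier to audit when expressed as a reusable validator — a sketch, reusing the lab's symbol set:

```python
SIMBOLOS = '|@#¢∞¬÷œ≠'

def is_valid_password(pw):
    """8 characters with at least one lower, upper, digit and symbol."""
    return (len(pw) == 8
            and any(c.islower() for c in pw)
            and any(c.isupper() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in SIMBOLOS for c in pw))

print(is_valid_password('aB3#efgh'))  # True
print(is_valid_password('abcdefgh'))  # False
```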
### Lab 5.6. ATM Simulator
```
customer = int(input('How many customers do you have?\t'))
database = {}
x = 0
while x < customer:
ID = int(input('Introduce your ID:\t'))
Savings= int(input('Introduce your savings:\t'))
database[ID]=Savings
x +=1
ID_user = int(input('Introduce your customer ID:'))
while database.get(ID_user) is None:
    print(f'Error, ID {ID_user} is not in our database\n')
    ID_user = int(input('Introduce your customer ID:'))
print(f'Customer ID: {ID_user} Savings: {database[ID_user]}\nOptions:\n -Op 1: Withdraw money\n -Op 2: Deposit money\n -Op 3: Exit')
option = int(input('Op:\t'))
#Withdraw money
if option == 1:
if database[ID_user]>0:
withdraw= int(input('Cuanto quieres sacar'))
if withdraw>0:
database[ID_user]-= withdraw
print(f'\nTu cuenta se ha quedado así:\nCustomer ID: {ID_user} Savings:{database[ID_user]}')
else:print('No te intentes sumar dinero rata ;)')
else:print('No tienes dinero')
#Deposit money
if option==2:
deposit=int(input('Select a quantity to deposit:\t'))
if deposit>0:
database[ID_user] += deposit
print(f'\nTu cuenta se ha quedado así:\nCustomer ID: {ID_user} Savings:{database[ID_user]}')
else: print('Eso sería sacar dinero')
#Exit
if option==3:
print('Exit')
```
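The ATM flow is easier to test when the account operations are pulled out into functions — a sketch in which the interactive `input()` calls are replaced by arguments:

```python
def withdraw(database, user_id, amount):
    """Withdraw a positive amount, refusing overdrafts; return the new balance."""
    if amount <= 0:
        raise ValueError('amount must be positive')
    if database[user_id] < amount:
        raise ValueError('insufficient funds')
    database[user_id] -= amount
    return database[user_id]

def deposit(database, user_id, amount):
    """Deposit a positive amount; return the new balance."""
    if amount <= 0:
        raise ValueError('amount must be positive')
    database[user_id] += amount
    return database[user_id]

accounts = {1234: 100}
print(deposit(accounts, 1234, 50))   # 150
print(withdraw(accounts, 1234, 30))  # 120
```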
### Lab 5.7. Check Tic Tac Toe
```
# Straightforward version
import numpy as np
game=[['X','O',''],['O','X',''],['O','X','X']]
X= []
O= []
# Rows
[X.append(all([i=='X'for i in v]))for v in game]
[O.append(all([i=='O'for i in v]))for v in game]
# Columns
game_t= np.transpose(game)
[X.append(all([i=='X'for i in v]))for v in game_t]
[O.append(all([i=='O'for i in v]))for v in game_t]
#Diagonal
game_d= np.diag(game)
X.append(all([ i=='X' for i in game_d]))
O.append(all([ i=='O' for i in game_d]))
#Diagonal 2
game_d2 = np.diag(np.fliplr(game))
X.append(all([ i=='X' for i in game_d2]))
O.append(all([ i=='O' for i in game_d2]))
if any(X):
    print('Han ganado las X')
elif any(O):
    print('Han ganado las O')
# Much faster version
import numpy as np
game=[['X','O',''],['O','X',''],['O','X','X']]
X= []
O= []
game_t= np.transpose(game)
game_d= np.diag(game)
game_d2 = np.diag(np.fliplr(game))
matriz = [game,game_t,game_d,game_d2]
for tipo in matriz:
[X.append(all([i=='X' for i in v]))for v in tipo]
[O.append(all([i=='O'for i in v]))for v in tipo]
if any(X)==True or any(O)==True:
print('Has ganado')
# Very verbose version
import numpy as np
game=[['X','X',''],['O','X',''],['O','X','X']]
print(np.array(game))
contadorX = 0
contadorO = 0
X=[]
O=[]
# Rows
for v in game:
for i in v:
        if i == 'X':
            contadorX += 1
        elif i == 'O':
            contadorO += 1
    if contadorO == 3:
        O.append(True)
        break
    elif contadorX == 3:
        X.append(True)
        break
else:
X.append(False)
O.append(False)
contadorO = 0
contadorX = 0
continue
# Columns
game_t= np.transpose(game)
for v in game_t:
for i in v:
        if i == 'X':
            contadorX += 1
        elif i == 'O':
            contadorO += 1
    if contadorO == 3:
        O.append(True)
        break
    elif contadorX == 3:
        X.append(True)
        break
else:
X.append(False)
O.append(False)
contadorO = 0
contadorX = 0
continue
#Diagonal
game_d= np.diag(game)
for v in game_d:
    if v == 'X':
        contadorX += 1
    elif v == 'O':
        contadorO += 1
if contadorO == 3:
    O.append(True)
elif contadorX == 3:
    X.append(True)
else:
X.append(False)
O.append(False)
contadorO = 0
contadorX = 0
#Diagonal 2
game_d2= np.diag(np.fliplr(game))
for v in game_d2:
    if v == 'X':
        contadorX += 1
    elif v == 'O':
        contadorO += 1
if contadorO == 3:
    O.append(True)
elif contadorX == 3:
    X.append(True)
else:
X.append(False)
O.append(False)
contadorO = 0
contadorX = 0
if any(X):
    print('Han ganado las X')
elif any(O):
    print('Han ganado las O')
# Playable tic-tac-toe game
import numpy as np
import random
game =[['','',''],['','',''],['','','']]
jugador_1=input('Quien es el jugador 1?\t')
jugador_2=input('Quien es el jugador 2?\t')
empieza = random.sample([jugador_1,jugador_2],1)[0]
print(f'Empieza {empieza}')
print('\nTablero:\n',np.array(game))
X= []
O= []
# Check whether someone has won
def comprobar():
#Filas
any([X.append(all([i=='X'for i in v]))for v in game])
any([O.append(all([i=='O'for i in v]))for v in game])
#Columnas
game_t= np.transpose(game)
any([X.append(all([i=='X'for i in v]))for v in game_t])
any([O.append(all([i=='O'for i in v]))for v in game_t])
#Diagonal
game_d= np.diag(game)
X.append(all([ i=='X' for i in game_d]))
O.append(all([ i=='O' for i in game_d]))
#Diagonal 2
game_d2 = np.diag(np.fliplr(game))
X.append(all([ i=='X' for i in game_d2]))
O.append(all([ i=='O' for i in game_d2]))
if any([i==True for i in X])==True:
print('Han ganado las X')
return True
elif any([i==True for i in O])==True:
print('Han ganado las O')
return True
    # Draw
Z=[]
for v in game:
for i in v:
if (i=='X' or i=='O'):
Z.append(True)
else:
Z.append(False)
if all([i==True for i in Z]):
print('Empate')
return True
# Place the pieces
def poner():
i=1
while comprobar()==None:
print(comprobar())
        # Determine whose turn it is
i+=1
empieza = 'X'
segundo = 'O'
if i%2==0:
print(f'Turno {empieza}')
else:
print(f'Turno {segundo}')
fila = int(input('En que fila quieres poner?\t'))-1
columna = int(input('En que columna quieres poner?\t'))-1
if i%2==0:
game[fila][columna]= empieza
else:
game[fila][columna]= segundo
print('\nTablero:\n',np.array(game))
if comprobar()!=None:
print('Fin de la partida')
break
print(poner())
```
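All three versions above check the same eight lines (three rows, three columns, two diagonals); the whole check can be condensed into one testable function:

```python
import numpy as np

def winner(board):
    """Return 'X', 'O' or None for a 3x3 board of 'X'/'O'/'' strings."""
    b = np.array(board)
    # Rows, columns and both diagonals
    lines = list(b) + list(b.T) + [np.diag(b), np.diag(np.fliplr(b))]
    for player in ('X', 'O'):
        if any(all(cell == player for cell in line) for line in lines):
            return player
    return None

print(winner([['X', 'O', ''], ['O', 'X', ''], ['O', 'X', 'X']]))  # X (main diagonal)
print(winner([['X', 'O', ''], ['', '', ''], ['', '', '']]))       # None
```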
```
import json
import re
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold
from tqdm import tqdm
def process_string(string):
string = re.sub('[^A-Za-z0-9\-\/ ]+', ' ', string).split()
return [y.strip() for y in string]
with open('pos-data-v3.json','r') as fopen:
dataset = json.load(fopen)
texts, labels = [], []
for i in dataset:
try:
texts.append(process_string(i[0])[0].lower())
labels.append(i[-1])
except Exception as e:
print(e, i)
word2idx = {'PAD': 0,'NUM':1,'UNK':2}
tag2idx = {'PAD': 0}
char2idx = {'PAD': 0}
word_idx = 3
tag_idx = 1
char_idx = 1
def parse_XY(texts, labels):
global word2idx, tag2idx, char2idx, word_idx, tag_idx, char_idx
X, Y = [], []
for no, text in enumerate(texts):
text = text.lower()
tag = labels[no]
for c in text:
if c not in char2idx:
char2idx[c] = char_idx
char_idx += 1
if tag not in tag2idx:
tag2idx[tag] = tag_idx
tag_idx += 1
Y.append(tag2idx[tag])
if text not in word2idx:
word2idx[text] = word_idx
word_idx += 1
X.append(word2idx[text])
return X, np.array(Y)
X, Y = parse_XY(texts, labels)
idx2word={idx: tag for tag, idx in word2idx.items()}
idx2tag = {i: w for w, i in tag2idx.items()}
seq_len = 50
def iter_seq(x):
return np.array([x[i: i+seq_len] for i in range(0, len(x)-seq_len, 1)])
def to_train_seq(*args):
return [iter_seq(x) for x in args]
def generate_char_seq(batch):
x = [[len(idx2word[i]) for i in k] for k in batch]
maxlen = max([j for i in x for j in i])
temp = np.zeros((batch.shape[0],batch.shape[1],maxlen),dtype=np.int32)
for i in range(batch.shape[0]):
for k in range(batch.shape[1]):
for no, c in enumerate(idx2word[batch[i,k]]):
temp[i,k,-1-no] = char2idx[c]
return temp
X_seq, Y_seq = to_train_seq(X, Y)
X_char_seq = generate_char_seq(X_seq)
X_seq.shape
import json
with open('crf-lstm-concat-bidirectional-pos.json','w') as fopen:
fopen.write(json.dumps({'idx2tag':idx2tag,'idx2word':idx2word,
'word2idx':word2idx,'tag2idx':tag2idx,'char2idx':char2idx}))
from keras.utils import to_categorical
Y_seq_3d = [to_categorical(i, num_classes=len(tag2idx)) for i in Y_seq]
from sklearn.model_selection import train_test_split
train_X, test_X, train_Y, test_Y, train_char, test_char = train_test_split(X_seq, Y_seq_3d, X_char_seq,
test_size=0.1)
import keras
print(keras.__version__)
from keras.models import Model, Input
from keras.layers import LSTM, Embedding, Dense, TimeDistributed, Dropout, Bidirectional, Reshape, Concatenate, Lambda
from keras_contrib.layers import CRF
from keras import backend as K
from keras.backend.tensorflow_backend import set_session
set_session(tf.InteractiveSession())
max_len = seq_len
input_word = Input(shape=(None,))
input_char = Input(shape=(None,None,))
model_char = Embedding(input_dim=len(char2idx) + 1, output_dim=128)(input_char)
s = K.shape(model_char)
def backend_reshape(x):
return K.reshape(x, (s[0]*s[1],s[2],128))
model_char = Lambda(backend_reshape)(model_char)
model_char = Bidirectional(LSTM(units=50, return_sequences=True, recurrent_dropout=0.1))(model_char)
def sliced(x):
return x[:,-1]
model_char = Lambda(sliced)(model_char)
def backend_reshape(x):
return K.reshape(x, (s[0],s[1],100))
model_char = Lambda(backend_reshape)(model_char)
model_word = Embedding(input_dim=len(word2idx) + 1, output_dim=64, mask_zero=True)(input_word)
concated_word_char = Concatenate(-1)([model_char,model_word])
model = Bidirectional(LSTM(units=50, return_sequences=True, recurrent_dropout=0.1))(concated_word_char)
model = TimeDistributed(Dense(50, activation="relu"))(model)
crf = CRF(len(tag2idx))
output = crf(model)
model = Model(inputs=[input_word, input_char], outputs=output)
model.compile(optimizer="rmsprop", loss=crf.loss_function, metrics=[crf.accuracy])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model).create(prog='dot', format='svg'))
history = model.fit([train_X,train_char], np.array(train_Y), batch_size=32, epochs=2,
validation_split=0.1, verbose=1)
predicted=model.predict([test_X,test_char],verbose=1)
idx2tag = {i: w for w, i in tag2idx.items()}
def pred2label(pred):
out = []
for pred_i in pred:
out_i = []
for p in pred_i:
p_i = np.argmax(p)
out_i.append(idx2tag[p_i])
out.append(out_i)
return out
pred_labels = pred2label(predicted)
test_labels = pred2label(test_Y)
from sklearn.metrics import classification_report
print(classification_report(np.array(test_labels).ravel(), np.array(pred_labels).ravel()))
model.save_weights('crf-lstm-concat-bidirectional-pos.h5')
```
# Full Gait
----
Kevin J. Walchko, created 12 Nov 2016
This notebook calls the actual robot code and plots both the x,y,z positions and the 3 servo angles of a leg during a gait. The command can be modified below to determine the movement.
```
%matplotlib inline
from __future__ import print_function
from __future__ import division
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.insert(0, '../..')
from math import pi, sqrt
from Quadruped import Quadruped
from Gait import DiscreteRippleGait, ContinousRippleGait
```
## Setup
Create a quadruped `robot` with correct leg segment lengths.
```
data = {
# 'serialPort': '/dev/tty.usbserial-A5004Flb',
'legLengths': {
'coxaLength': 45,
'femurLength': 55,
'tibiaLength': 104
},
'legAngleLimits': [[-90, 90], [-90, 90], [-180, 0]],
'legOffset': [150, 150, 150+90]
}
robot = Quadruped(data)
leg = robot.legs[0].foot0
```
Now create a gait and a command to execute for one complete duty cycle. This will produce both the foot positions (x,y,z) and the servo angles ($\theta_1$,$\theta_2$,$\theta_3$) for each step sequence.
```
# Edit this command for linear [mm] or angular [rads] movements.
# cmd = {'linear': [0,0], 'angle': pi/4}
cmd = {'linear': [50,50], 'angle': 0}
height = 25 # height the foot is lifted during movement in mm
# gait = ContinousRippleGait(height, leg)
gait = DiscreteRippleGait(height, leg)
alpha = 1.0
pos = []
for i in range(0,12):
p = gait.eachLeg(i,cmd)
pos.append(p)
print('pos: {:.2f} {:.2f} {:.2f}'.format(*p))
angle = []
for p in pos:
# print('pos: {:.2f} {:.2f} {:.2f}'.format(*p))
a = robot.legs[0].move(*p)
print('angle: {:.2f} {:.2f} {:.2f}'.format(*a))
if a:
angle.append(a)
```
## Plotting the Results
We will now plot the x, y, z, and x-y plane (ground plane) positions of the foot. Each of the leg's servo angles will also be plotted.
```
px = []
py = []
pz = []
for p in pos:
px.append(p[0])
py.append(p[1])
pz.append(p[2])
# pz.append(sqrt(p[0]**2+p[1]**2))
plt.subplot(2,2,1);
plt.plot(px);
plt.ylabel('x');
plt.subplot(2,2,2);
plt.plot(py);
plt.ylabel('y');
plt.subplot(2,2,3);
plt.plot(pz);
plt.ylabel('z');
plt.subplot(2,2,4);
plt.plot(px,py);
plt.ylabel('y');
plt.xlabel('x');
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(px, py, pz);
t1 = []
t2 = []
t3 = []
for a in angle:
t1.append(a[0])
t2.append(a[1])
t3.append(a[2])
plt.subplot(2,2,1);
plt.plot(t1)
plt.ylabel('$\\theta_1$')
plt.subplot(2,2,2)
plt.plot(t2)
plt.ylabel('$\\theta_2$')
plt.subplot(2,2,3);
plt.plot(t3)
plt.ylabel('$\\theta_3$');
# plt.subplot(2,2,4);
# plt.plot(px,py)
# plt.ylabel('y')
# plt.xlabel('x')
print('Number of points: {}'.format(len(angle)))
print('-----------------------')
for a in angle:
print('{:.2f} {:.2f} {:.2f}'.format(*a))
```
-----------
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
# About
This notebook covers a basic EDA and a baseline model for the competition.
## Imports
```
# Imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import tree
from sklearn import metrics
# Visualization
# Ploty Imports
from itertools import combinations
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
# Figures inline and set visualization style
%matplotlib inline
sns.set() #Different type of visualization
# import the necessary modelling algorithms
# Regression
from sklearn.linear_model import LinearRegression,Ridge,Lasso,RidgeCV
from sklearn.ensemble import RandomForestRegressor,BaggingRegressor,GradientBoostingRegressor,AdaBoostRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import ExtraTreesRegressor
import xgboost as xgb
from xgboost.sklearn import XGBRegressor
# Model selection
from sklearn.model_selection import train_test_split,cross_validate
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# Preprocessing
from sklearn.preprocessing import MinMaxScaler,StandardScaler,Imputer,LabelEncoder,PolynomialFeatures
# Evaluation metrics
from sklearn.metrics import mean_squared_log_error,mean_squared_error, r2_score,mean_absolute_error # for regression
from sklearn.metrics import accuracy_score,precision_score,recall_score,f1_score # for classification
# Show multiple statements at once
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
```
## Load Data
```
# Train Data
train = pd.read_csv('input/train.csv')
# Test Data
test = pd.read_csv('input/test.csv')
sub = pd.read_csv('input/sample_submission.csv')
structures = pd.read_csv('input/structures.csv')
```
## EDA
```
# Lets take a look at the main csv files
train.head()
sub.head()
structures.head()
print(f'{train.shape[0]} rows in the train data.')
print(f'{test.shape[0]} rows in the test data.')
print(f"{train['molecule_name'].nunique()} different molecules in the train data.")
print(f"There are {test['molecule_name'].nunique()} different molecules in the test data.")
print(f"There are {structures['atom'].nunique()} unique atoms.")
print(f"There are {train['type'].nunique()} unique types.")
# We take a first look at the dataset
train.info()
print ('#################################################')
print ('#################################################')
test.info()
sub.info()
sub['scalar_coupling_constant'].unique()
```
Let's take a look at a nicer visualization.
```
# Plotly notebook mode
init_notebook_mode(connected=True)
# Plots a molecule in 3D
def plot_molecule(molecule_name, structures_df):
"""Creates a 3D plot of the molecule"""
atomic_radii = dict(C=0.68, F=0.64, H=0.23, N=0.68, O=0.68)
cpk_colors = dict(C='black', F='green', H='white', N='blue', O='red')
    molecule = structures_df[structures_df.molecule_name == molecule_name]
coordinates = molecule[['x', 'y', 'z']].values
x_coordinates = coordinates[:, 0]
y_coordinates = coordinates[:, 1]
z_coordinates = coordinates[:, 2]
elements = molecule.atom.tolist()
radii = [atomic_radii[element] for element in elements]
def get_bonds():
"""Generates a set of bonds from atomic cartesian coordinates"""
ids = np.arange(coordinates.shape[0])
bonds = set()
coordinates_compare, radii_compare, ids_compare = coordinates, radii, ids
for i in range(len(ids)):
coordinates_compare = np.roll(coordinates_compare, -1, axis=0)
radii_compare = np.roll(radii_compare, -1, axis=0)
ids_compare = np.roll(ids_compare, -1, axis=0)
distances = np.linalg.norm(coordinates - coordinates_compare, axis=1)
bond_distances = (radii + radii_compare) * 1.3
mask = np.logical_and(distances > 0.1, distances < bond_distances)
bonds.update(map(frozenset, zip(ids[mask], ids_compare[mask])))
return bonds
def atom_trace():
"""Creates an atom trace for the plot"""
colors = [cpk_colors[element] for element in elements]
markers = dict(color=colors, line=dict(color='lightgray', width=2), size=7, symbol='circle', opacity=0.8)
trace = go.Scatter3d(x=x_coordinates, y=y_coordinates, z=z_coordinates, mode='markers', marker=markers,
text=elements)
return trace
def bond_trace():
""""Creates a bond trace for the plot"""
trace = go.Scatter3d(x=[], y=[], z=[], hoverinfo='none', mode='lines',
marker=dict(color='grey', size=7, opacity=1))
for i, j in bonds:
trace['x'] += (x_coordinates[i], x_coordinates[j], None)
trace['y'] += (y_coordinates[i], y_coordinates[j], None)
trace['z'] += (z_coordinates[i], z_coordinates[j], None)
return trace
bonds = get_bonds()
atoms = zip(range(len(elements)), x_coordinates, y_coordinates, z_coordinates)
annotations = [dict(text=num, x=x, y=y, z=z, showarrow=False, yshift=15)
for num, x, y, z in atoms]
data = [atom_trace(), bond_trace()]
axis_params = dict(showgrid=False, showticklabels=False, zeroline=False, titlefont=dict(color='white'))
layout = go.Layout(scene=dict(xaxis=axis_params, yaxis=axis_params, zaxis=axis_params, annotations=annotations),
margin=dict(r=0, l=0, b=0, t=0), showlegend=False)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
# Plots
plot_molecule('dsgdb9nsd_133885', structures)
plot_molecule('dsgdb9nsd_105227', structures)
plot_molecule('dsgdb9nsd_099964', structures)
```
### Duplicate Check
```
# We check if we have duplicates
train.duplicated().any()
test.duplicated().any()
```
### Label Encoding
Before modeling it would be a good idea to do a Label Encoding to variables which are still as objects in order to improve the model's performance.
```
# Object-typed columns (assumed to be the same in train and test)
categoricals = train.select_dtypes(include='object').columns
for c in categoricals:
lbl = LabelEncoder()
lbl.fit(list(train[c].values))
train[c] = lbl.transform(list(train[c].values))
for c in categoricals:
lbl = LabelEncoder()
lbl.fit(list(test[c].values))
test[c] = lbl.transform(list(test[c].values))
# Check it has been done properly
train.dtypes
test.dtypes
```
## Metric
```
def metric(df, preds):
df["prediction"] = preds
maes = []
for t in df.type.unique():
y_true = df[df.type==t].scalar_coupling_constant.values
y_pred = df[df.type==t].prediction.values
mae = np.log(metrics.mean_absolute_error(y_true, y_pred))
maes.append(mae)
return np.mean(maes)
```
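The competition score is the log of the MAE, averaged over coupling types. A tiny self-contained check of that definition on hypothetical values:

```python
import numpy as np
import pandas as pd

def group_log_mae(df, preds):
    """Mean over coupling types of log(MAE) -- the competition score."""
    df = df.assign(prediction=preds)
    maes = []
    for t in df['type'].unique():
        part = df[df['type'] == t]
        mae = np.mean(np.abs(part['scalar_coupling_constant'] - part['prediction']))
        maes.append(np.log(mae))
    return np.mean(maes)

# Toy frame with two coupling types (hypothetical values)
toy = pd.DataFrame({'type': ['1JHC', '1JHC', '2JHH'],
                    'scalar_coupling_constant': [90.0, 92.0, 3.0]})
score = group_log_mae(toy, [91.0, 91.0, 2.0])
print(score)  # both per-type MAEs are 1.0, so the score is log(1) = 0.0
```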
# Modeling
### Setting K-Folds
```
# Setting a 5-fold stratified cross-validation (note: shuffle=True)
skf = KFold(n_splits=5, shuffle=True, random_state=8)
params = {'booster' : 'gbtree',
#'nthread' : 5,
'objective' : 'reg:linear',
'eval_metric' : 'mae',
'max_depth' : 8,
'eta' : 0.3,
'subsample' : 0.7,
'colsample_bytree' : 0.7
}
# We define the label and drop the target from the training features
y = train['scalar_coupling_constant']
dtrain = xgb.DMatrix(train.drop(columns=['scalar_coupling_constant']), label=y)
res = xgb.cv(params,
dtrain,
num_boost_round = 4000,
folds=skf,
seed=2019,
early_stopping_rounds = 10,
verbose_eval=True)
best_round = res['test-mae-mean'].idxmin()
best_round
res.iloc[best_round]
```
```
import pandas as pd
import numpy as np
from tqdm import tqdm
import gc
import emoji  # required for emoji.UNICODE_EMOJI below
def extract_emojis(string):
    try:
        return ''.join(c for c in str(string) if c in emoji.UNICODE_EMOJI)
    except TypeError:
        print('unexpected input type:', type(string), string)
        exit()
def __check_conditions(df, mean, std, error=(1,1)):
target_mean = np.mean(df['num_tracks'])
target_std = np.std(df['num_tracks'])
if mean > (target_mean + error[0]) or mean < (target_mean - error[0]):
print("error m ",mean,target_mean)
return False
if std > (target_std + error[1]) or std < (target_std - error[1]):
print("error s ",std,target_std)
return False
return True
def get_random_df_constrained(source_df, num_of_pl, min_v, max_v, mean, std, errors=(1.0, 1.0)):
"""
iterates until it created a dataframe that satisfies the conditions.
"""
seed = 0
while True:
df = source_df[((source_df['num_tracks']) >= min_v) & ((source_df['num_tracks']) <= max_v)].sample(
n=num_of_pl, random_state=seed)
if __check_conditions(df, mean=mean, std=std, error=errors):
break
seed+=1
return df,seed
playlists = pd.read_csv("../../../dataset/playlists.csv", delimiter='\t')
playlists_train = playlists.copy()  # test pids are removed from this copy below
interactions = pd.read_csv("../../../dataset/interactions.csv", delimiter='\t')
tracks = pd.read_csv("../../../dataset/tracks.csv", delimiter='\t')
playlists.head()
interactions.head()
len(interactions)
tracks.head()
len(tracks)
from collections import OrderedDict
cates = {'cat1': (10, 50, 1000,28.6,11.2 ), 'cat2_1': (10, 40, 998,23.8,8.7),
'cat2_2':(70,80,2,75,4), 'cat3_1': (10, 50,314,29.4,11.4),
'cat3_2':(51,75,425,62,7.2),'cat3_3':(75,100,261,87,7.1),
'cat4': (40, 100,1000,63,16.5),'cat5': (40, 100,1000,63.5,17.2),
'cat6': (40, 100,1000,63.6,16.7),'cat7': (101, 250,1000,150,38.6),
'cat8': (101, 250,1000,151.7,38.6), 'cat9': (150, 250,1000,189,28),
'cat_10': (150, 250,1000,187.5,27)}
cates = OrderedDict(sorted(cates.items(), key=lambda t: t[0]))
print (" name: (min_value, max_value, how many to extract, mean, std)")
cates
cat_pids = {}
seeds = [0]* len(cates)
count = 0
for cat, info in cates.items():
print(cat)
df, seeds[count] = get_random_df_constrained(playlists_train, min_v=info[0], max_v=info[1], num_of_pl=info[2],
mean=info[3],std=info[4],errors=(1.5, 1.5))
cat_pids[cat] = list(df.pid)
playlists_train = playlists_train.drop(df.index)
count+=1
playlists_train = playlists_train.reset_index(drop=True)
#save
playlists_train.to_csv("../testset/train_playlists.csv",sep='\t')
gc.collect()
del(playlists_train)
```
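The `get_random_df_constrained` helper above is a rejection loop: it keeps resampling with increasing seeds until the sample's mean and standard deviation of `num_tracks` land within a tolerance of the targets. A standard-library sketch of the same idea on synthetic values (the population, targets, and tolerance are all illustrative):

```python
import random
import statistics

# Try seeds in order until a sample of size n has mean and (population) std
# within `tol` of the targets; return the sample and the seed that produced it.
def sample_constrained(values, n, target_mean, target_std, tol=1.5, max_tries=10000):
    for seed in range(max_tries):
        sample = random.Random(seed).sample(values, n)
        if (abs(statistics.mean(sample) - target_mean) <= tol
                and abs(statistics.pstdev(sample) - target_std) <= tol):
            return sample, seed
    raise RuntimeError("no sample satisfied the constraints")

rng = random.Random(0)
population = [rng.randint(10, 50) for _ in range(2000)]
sample, seed = sample_constrained(population, 100, target_mean=30, target_std=11)
```

Recording the winning `seed` is what makes the split reproducible, which is exactly why `cates` and `seeds` are saved below.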
### To generate the same test set you need these two variables: `cates` and `seeds`
```
cates
seeds
```
### After selecting the pids of the playlists to extract, we generate the files
```
playlists_challange = pd.DataFrame()
test_interactions = pd.DataFrame()
evaluation_interactions = pd.DataFrame()
df_eval_itr= pd.DataFrame()
def build_playlists_df(pids, title, num_samples):
df = playlists[playlists['pid'].isin(pids)]
df=df[['name','pid','num_tracks']]
if not title:
df['name']=''
df['num_samples'] = num_samples
df['num_holdouts'] = df['num_tracks'] - df['num_samples']
return df
for cat in cates.keys():
gc.collect()
pids= cat_pids[cat]
if cat == 'cat1':
print(cat,"start", end='')
#Predict tracks for a playlist given its title only
num_samples = 0
position=True
title = True
elif cat == 'cat2_1' or cat=='cat2_2':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and the first track
num_samples = 1
position = True
title = True
elif cat == 'cat3_1' or cat == 'cat3_2' or cat == 'cat3_3':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and the first 5 tracks
num_samples = 5
position = True
title = True
elif cat == 'cat4':
print(cat,"start", end='')
#Predict tracks for a playlist given its first 5 tracks (no title)
num_samples = 5
position= False
title = False
elif cat == 'cat5':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and the first 10 tracks
num_samples = 10
position=True
title = True
elif cat == 'cat6':
print(cat,"start", end='')
#Predict tracks for a playlist given its first ten tracks (no title)
num_samples = 10
position=True
title = False
elif cat == 'cat7':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and the first 25 tracks
num_samples = 25
title = True
position= True
elif cat == 'cat8':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and 25 random tracks
num_samples = 25
title= True
position = False
elif cat == 'cat9':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and the first 100 tracks
num_samples = 100
title= True
position= True
elif cat == 'cat_10':
print(cat,"start", end='')
#Predict tracks for a playlist given its title and 100 random tracks
num_samples = 100
title = True
position = False
else:
raise Exception("unknown category: {}".format(cat))
print("- build playlist sample", end='')
df = build_playlists_df(pids, title, num_samples)
playlists_challange = pd.concat([playlists_challange,df])
print("- interactions", end='')
cat_interactions = interactions[interactions['pid'].isin(pids)]
print("- interactions drop", end='')
interactions = interactions.drop(cat_interactions.index)
print("- extract songs")
if position:
    # seed tracks are the first num_samples positions; the rest are held out
    df_sample = cat_interactions[(cat_interactions['pos'] >= 0) & (cat_interactions['pos'] < num_samples)]
    test_interactions = pd.concat([test_interactions, df_sample])
    df_eval_itr = pd.concat([df_eval_itr, cat_interactions.drop(df_sample.index)])
else:
    # seed tracks are num_samples random tracks per playlist; the rest are held out
    for pid in pids:
        df = cat_interactions[cat_interactions['pid'] == pid]
        df_sample = df.sample(n=num_samples)
        test_interactions = pd.concat([test_interactions, df_sample])
        df_eval_itr = pd.concat([df_eval_itr, df.drop(df_sample.index)])
print('interactions merge')
tids = set(df_eval_itr['tid'])
df = tracks[tracks['tid'].isin(tids)]
df = df[['tid', 'arid']]
df_eval_itr = pd.merge(df_eval_itr, df, on='tid')
print("reset index")
playlists_challange = playlists_challange.reset_index(drop=True)
test_interactions = test_interactions.reset_index(drop=True)
df_eval_itr = df_eval_itr.reset_index(drop=True)
interactions = interactions.reset_index(drop=True)
```
# print files
```
print(playlists_challange.head())
playlists_challange.to_csv("challange_playlists.csv", sep='\t')
del(playlists_challange)
print(df_eval_itr.head())
df_eval_itr.to_csv("evaluation_interactions.csv", sep='\t')
del(df_eval_itr)
print(interactions.head())
interactions.to_csv("train_interactions.csv", sep='\t')
del(interactions)
print(test_interactions.head())
test_interactions.to_csv("test_interactions.csv", sep='\t')
del(test_interactions)
```
| github_jupyter |
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
```
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Numerical Quadrature
**Goal:** Evaluate integrals
$$ \int^b_a f(x) dx$$
Many integrals do not have closed form solutions
$$ \int^b_a \sqrt{1 + \cos^2 x} dx$$
Solution to ordinary differential equations
$$\frac{\text{d}^2 u}{\text{d}t^2} = f\left(u, \frac{\text{d} u}{\text{d}t}, t \right)$$
Defining $v = \frac{\text{d} u}{\text{d}t}$ then leads to
$$\begin{bmatrix}
\frac{\text{d} v}{\text{d}t} \\ \frac{\text{d} u}{\text{d}t} \end{bmatrix} = \begin{bmatrix} f(u, v, t) \\ v \end{bmatrix}$$
which can be solved by integration
$$\begin{bmatrix}
v \\ u \end{bmatrix} = \begin{bmatrix} v(t_0) + \int^t_{t_0} f(u, v, \hat{t}) d\hat{t} \\ u(t_0) + \int^t_{t_0} v d\hat{t} \end{bmatrix}$$
Solving partial differential equations
$$
u_t = \nabla^2 u
$$
## Riemann Sums
Given $f(x)$ and a partition of the interval $[a,b]$ with $\{x_i\}^N_{i=0}$ and $a = x_0 < x_1 < \ldots < x_N = b$ and $x^*_i \in [x_i, x_{i+1}]$ we define the Riemann integral as
$$\int^b_a f(x) dx = \lim_{N\rightarrow \infty} \sum^{N-1}_{i=0} f(x_i^*) (x_{i+1} - x_i)$$
This is a general definition and leads to a number of quadrature approaches based on how we pick $x_i^* \in [x_i, x_{i+1}]$.
### Midpoint Rule
Choose $x_i^*$ such that
$$x_i^* = \frac{x_{i+1} + x_i}{2}$$
so that
$$I[f] = \int^b_a f(x) dx \approx \sum^{N-1}_{i=0} f\left(\frac{x_{i+1} + x_i}{2} \right ) (x_{i+1} - x_i) = Q_m[f]$$
over $\Delta x_i = x_{i+1} - x_i$
$$Q_m[f] = \Delta x f\left(\frac{\Delta x}{2} \right )$$
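A minimal implementation of the composite midpoint rule defined above (plain Python, independent of the plotting demo that follows):

```python
import math

# Composite midpoint rule: evaluate f at each subinterval midpoint and weight
# by the (uniform) subinterval width.
def midpoint(f, a, b, N):
    dx = (b - a) / N
    return dx * sum(f(a + (i + 0.5) * dx) for i in range(N))

approx = midpoint(math.sin, 0.0, math.pi, 100)  # exact value is 2
```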
#### Example: Integrate using midpoint rule
Calculate and illustrate the midpoint rule. Note that we are computing the cumulative integral here:
$$\int^x_0 \sin(\hat{x}) d\hat{x} = -\cos \hat{x} |^x_0 = 1 - \cos x$$
Code demo...
```
# Note that this calculates the cumulative integral from 0.0
f = lambda x: numpy.sin(x)
If = lambda x: 1.0 - numpy.cos(x)
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
num_partitions = 10
x_hat = numpy.linspace(0.0, 2.0 * numpy.pi, num_partitions + 1)
x_star = 0.5 * (x_hat[1:] + x_hat[:-1])
delta_x = x_hat[1] - x_hat[0]
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.zeros(x.shape), 'k--')
axes.plot(x, f(x), 'b')
for i in range(num_partitions):
axes.plot([x_hat[i], x_hat[i]], [0.0, f(x_star[i])], 'k--')
axes.plot([x_hat[i + 1], x_hat[i + 1]], [0.0, f(x_star[i])], 'k--')
axes.plot([x_hat[i], x_hat[i + 1]], [f(x_star[i]), f(x_star[i])], 'k--')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Partition and $f(x)$")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-1.1, 1.1))
Qf = numpy.zeros(x_star.shape)
Qf[0] = f(x_star[0]) * delta_x
for i in range(1, num_partitions):
Qf[i] = Qf[i - 1] + f(x_star[i]) * delta_x
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, If(x), 'r')
# Offset due to indexing above
axes.plot(x_star + delta_x / 2.0, Qf, 'ko')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Integral and Approximated Integral")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-0.1, 2.5))
plt.show()
```
## Quadrature
A method to evaluate $I[f]$ using a discrete, finite number of function evaluations:
$$Q[f] = \sum^M_{i=0} w_i f(x_i)$$
where $w_i$ are weights. A particular quadrature method will specify the weights $w_i$ and the points $x_i$ to evaluate the function $f(x)$ at.
### Error Analysis
Define the error $E[f]$ such that
$$I[f] = Q[f] + E[f]$$
The degree of $Q[f]$ is the integer $n$ such that $E[p_i] = 0~~~ \forall i \leq n$ and $\exists p_{n+1}$ such that $E[p_{n+1}] \neq 0$.
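This definition can be checked numerically. For instance, the single-interval trapezoidal rule (derived in the next section) integrates the monomials $1$ and $x$ over $[0,1]$ exactly but not $x^2$, so its degree is 1:

```python
# Numerically check the degree of the single-interval trapezoidal rule by
# comparing against the exact integrals of x^n over [0, 1].
def trap1(f, a, b):
    return (b - a) * (f(a) + f(b)) / 2.0

exact = {0: 1.0, 1: 0.5, 2: 1.0 / 3.0}  # integrals of x^n over [0, 1]
errors = {n: abs(trap1(lambda x, n=n: x**n, 0.0, 1.0) - exact[n]) for n in exact}
```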
### Newton-Cotes Quadrature
Using $N+1$ equally spaced points, evaluate $f(x)$ at these points and exactly integrate the interpolating polynomial:
$$Q[f] = \int^b_a P_N(x) dx$$
#### Trapezoidal Rule
The trapezoidal rule uses $N = 1$ order polynomials between each pair of points (i.e. piecewise-defined linear polynomials). The coefficients of the polynomial in each interval are
$$p_0 = f(x_i) ~~~~~ p_1 = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i}$$
which gives the interpolating polynomial
$$p_1(x) = \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} ( x- x_i) + f(x_i)$$
Integrating this polynomial we have
$$Q[f] = \int^{x_{i+1}}_{x_i} (p_0 + p_1 (x - x_i)) dx = \left . p_0 x + p_1 \left (\frac{x^2}{2} - x_i x\right) \right |^{x_{i+1}}_{x_i}$$
$$= p_0 \Delta x + p_1 \left (\frac{1}{2} (x_{i+1} + x_i) \Delta x - x_i \Delta x\right) $$
$$= f(x_i) \Delta x + (f(x_{i+1}) - f(x_i))\left (\frac{1}{2} (x_{i+1} + x_i) - x_i\right) $$
$$= f(x_i) \Delta x + (f(x_{i+1}) - f(x_i)) \frac{\Delta x}{2} $$
$$= \frac{\Delta x}{2} (f(x_{i+1}) + f(x_i)) $$
We can also simplify the sum over all the intervals by noting that all but the end points will have total contribution of $\Delta x$ to the entire sum such that
$$Q[f] = \frac{\Delta x}{2} (f(x_0) + f(x_N) ) + \sum^{N-1}_{j=1} \Delta x f(x_j)$$
This is known as the composite trapezoidal rule.
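A compact implementation of this endpoint-plus-interior form:

```python
import math

# Composite trapezoidal rule: endpoints weighted by dx/2, interior points by dx.
def trapezoid(f, a, b, N):
    dx = (b - a) / N
    interior = sum(f(a + j * dx) for j in range(1, N))
    return dx * (0.5 * (f(a) + f(b)) + interior)

approx = trapezoid(math.sin, 0.0, math.pi, 200)  # exact value is 2
```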
Code demo...
```
# Note that this calculates the cumulative integral from 0.0
f = lambda x: numpy.sin(x)
If = lambda x: 1.0 - numpy.cos(x)
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
num_partitions = 20
x_hat = numpy.linspace(0.0, 2.0 * numpy.pi, num_partitions + 1)
delta_x = x_hat[1] - x_hat[0]
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.zeros(x.shape), 'k--')
axes.plot(x, f(x), 'b')
for i in range(num_partitions):
axes.plot([x_hat[i], x_hat[i]], [0.0, f(x_hat[i])], 'k--')
axes.plot([x_hat[i + 1], x_hat[i + 1]], [0.0, f(x_hat[i+1])], 'k--')
axes.plot([x_hat[i], x_hat[i + 1]], [f(x_hat[i]), f(x_hat[i+1])], 'k--')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Partition and $f(x)$")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-1.1, 1.1))
Qf = numpy.zeros(x_hat.shape)
Qf[0] = (f(x_hat[1]) + f(x_hat[0])) * delta_x / 2.0
for i in range(1, num_partitions):
Qf[i] = Qf[i - 1] + (f(x_hat[i + 1]) + f(x_hat[i])) * delta_x / 2.0
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, If(x), 'r')
# Offset due to indexing above
axes.plot(x_hat + delta_x, Qf, 'ko')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Integral and Approximated Integral")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-0.1, 2.5))
plt.show()
```
#### Simpson's Rule
Simpson's rule uses $N = 2$ order polynomials between each point (i.e. piece-wise defined quadratic polynomials).
The polynomial has the form
$$P_2(x) = \frac{2 f(x_i)}{\Delta x^2} \left (x - \frac{\Delta x}{2} \right ) (x - \Delta x) - \frac{4 f\left(x_i + \frac{\Delta x}{2}\right)}{\Delta x^2} x (x - \Delta x) + \frac{2 f(x_{i+1})}{\Delta x^2} x \left (x - \frac{\Delta x}{2} \right )$$
Integrating this polynomial we have
$$Q[f] = \int^{x_{i+1}}_{x_i} P_2(x) dx = \frac{\Delta x}{6} f(x_i) + \frac{2 \Delta x}{3} f\left(x_i + \frac{\Delta x}{2} \right ) + \frac{\Delta x}{6} f(x_{i+1})$$
We can also show this by using the method of undetermined coefficients.
Use the general form of the quadrature rule and determine weights $w_j$ by using functions we know the solution to. These functions can be any representation of polynomials up to order $N=2$ however the monomials $1$, $x$, $x^2$ are the easiest in this case.
$$Q_{\Delta x}[f] = w_0 f(0) + w_1 f(\Delta x / 2) + w_2 f(\Delta x)$$
$$\begin{aligned}
&\text{if}~f = 1: &I[f] = \int^{\Delta x}_{0} 1 dx = \Delta x & & Q[1] &= w_0 + w_1 + w_2 \\
&\text{if}~f = x: &I[f] = \int^{\Delta x}_{0} x dx = \frac{\Delta x^2}{2} & & Q[x] &= w_1 \frac{\Delta x}{2} + w_2\Delta x\\
&\text{if}~f = x^2: &I[f] = \int^{\Delta x}_{0} x^2 dx = \frac{\Delta x^3}{3} & & Q[x^2] &= \frac{\Delta x^2}{4} w_1 + w_2\Delta x^2\\
\end{aligned}$$
We then have the system of equations:
$$\begin{aligned}
w_0 &+& w_1 &+& w_2 &=\Delta x \\
&~& \frac{\Delta x}{2} w_1 &+& \Delta x w_2 &= \frac{\Delta x^2}{2} \\
&~& \frac{\Delta x^2}{4} w_1 &+& \Delta x^2 w_2 &=\frac{\Delta x^3}{3} \\
\end{aligned}$$
or
$$\begin{bmatrix}
1 & 1 & 1 \\
0 & \Delta x / 2 & \Delta x \\
0 & \Delta x^2 / 4 & \Delta x^2 \\
\end{bmatrix} \begin{bmatrix}
w_0 \\ w_1 \\ w_2
\end{bmatrix} = \begin{bmatrix}
\Delta x \\ \Delta x^2 / 2 \\ \Delta x^3 / 3
\end{bmatrix} \Rightarrow \begin{bmatrix}
1 & 1 & 1 \\
0 & 1 / 2 & 1 \\
0 & 1 / 4 & 1 \\
\end{bmatrix} \begin{bmatrix}
w_0 \\ w_1 \\ w_2
\end{bmatrix} = \begin{bmatrix}
\Delta x \\ \Delta x / 2 \\ \Delta x / 3
\end{bmatrix} \Rightarrow \begin{bmatrix}
1 & 1 & 1 \\
0 & 1 / 2 & 1 \\
0 & 0 & -1 \\
\end{bmatrix} \begin{bmatrix}
w_0 \\ w_1 \\ w_2
\end{bmatrix} = \begin{bmatrix}
\Delta x \\ \Delta x / 2 \\ -\Delta x / 6
\end{bmatrix}$$
Leading to
$$ w_2 = \frac{\Delta x}{6} ~~~~ w_1 = \frac{2}{3} \Delta x ~~~~ w_0 = \frac{\Delta x}{6}$$
Another way to write Simpson's rule is to use intervals of three points (similar to one of the ways we did this last time). The formulation here effectively has a $\Delta x$ half of what the intervals show but is easier to program.
Code demo...
```
# Note that this calculates the cumulative integral from 0.0
f = lambda x: numpy.sin(x)
If = lambda x: 1.0 - numpy.cos(x)
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
num_partitions = 10
x_hat = numpy.linspace(0.0, 2.0 * numpy.pi, num_partitions + 1)
delta_x = x_hat[1] - x_hat[0]
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.zeros(x.shape), 'k--')
axes.plot(x, f(x), 'b')
for i in range(num_partitions):
axes.plot([x_hat[i], x_hat[i]], [0.0, f(x_hat[i])], 'k--')
axes.plot([x_hat[i + 1], x_hat[i + 1]], [0.0, f(x_hat[i + 1])], 'k--')
coeff = numpy.polyfit((x_hat[i], x_hat[i] + delta_x / 2.0, x_hat[i + 1]),
(f(x_hat[i]), f(x_hat[i] + delta_x / 2.0), f(x_hat[i+1])), 2)
x_star = numpy.linspace(x_hat[i], x_hat[i+1], 10)
axes.plot(x_star, numpy.polyval(coeff, x_star), 'k--')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Partition and $f(x)$")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-1.1, 1.1))
Qf = numpy.zeros(x_hat.shape)
Qf[0] = delta_x * (1.0 / 6.0 * (f(x_hat[0]) + f(x_hat[1])) + 2.0 / 3.0 * f(x_hat[0] + delta_x / 2.0))
for i in range(1, num_partitions):
Qf[i] = Qf[i - 1] + delta_x * (1.0 / 6.0 * (f(x_hat[i]) + f(x_hat[i+1])) + 2.0 / 3.0 * f(x_hat[i] + delta_x / 2.0))
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, If(x), 'r')
# Offset due to indexing above
axes.plot(x_hat + delta_x, Qf, 'ko')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Integral and Approximated Integral")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-0.1, 2.5))
plt.show()
```
### Quadrature Accuracy
We can also use our polynomial analysis from before to analyze the errors made using both of the aforementioned methods. From Lagrange's theorem we have the remainder term as before which we can use to look at the error
$$R_N(x) = (x - x_0)(x - x_1) \cdots (x- x_N) \frac{f^{(N+1)}(c)}{(N+1)!}$$
and integrate it to find the form and magnitude of the error on a single interval.
To find the total error we must sum the error over all the intervals:
$$I[f] = \sum_{i=0}^N \int^{x_{i+1}}_{x_i} P_N(x) dx + \sum_{i=0}^N \int^{x_{i+1}}_{x_i} R_N(x) dx = Q[f] + E[f]$$
as we defined before.
#### Trapezoidal error
With $N=1$ we have
$$R_1(x) = (x - x_i) (x - x_{i+1}) \frac{f''(c)}{2}$$
Integrating this leads to
$$\int^{x_{i+1}}_{x_i} (x - x_i) (x - x_{i+1}) \frac{f''(c)}{2} dx = -\frac{\Delta x^3}{12} f''(c)$$
giving us a form for the error.
If we sum up across all the intervals the total error is
$$E[f] = -\frac{\Delta x^3}{12} \sum_{i=0}^{N-1} f''(c_i)$$
or, more illustratively,
$$E[f] = -\frac{1}{12} \Delta x^2 (b - a) \left [ \frac{1}{N} \sum^{N-1}_{i=0} f''(c_i) \right ]$$
where the expression in the brackets is the mean value of the second derivative over the interval $[a,b]$. This also shows that the trapezoidal rule converges quadratically as $\Delta x \rightarrow 0$.
#### Simpson's Rule Error
Similarly here we have $N = 2$ and
$$R_2(x) = (x - x_i) \left(x - x_i - \frac{\Delta x}{2} \right) (x - x_{i+1}) \frac{f'''(c)}{3!}$$
The integral of $R_2(x)$ over the interval is zero by symmetry (the cubic integrand is antisymmetric about the interval's midpoint), so the leading error comes from the next term in the expansion. Carrying that term through and summing the contributions we find
$$E[f] = -\frac{1}{180} (b - a) \Delta x^4 f^{(4)}(c)$$
Interestingly, we have gained two orders of accuracy while increasing the polynomial order by only one!
##### Example 1:
If $f(x) = \sin \pi x$, look at the relative accuracy of the midpoint, trapezoidal, and Simpson's rules for a single interval $x\in[0,1]$.
$$\begin{aligned}
\text{Exact:} ~ &I[f] &=& \int^1_0 \sin \pi x = \left . \frac{-\cos \pi x}{\pi} \right |^1_0 = \frac{2}{\pi} \approx 0.636619772 \\
\text{Midpoint:} ~ &Q[f] &=& \Delta x f(1/2) = \sin (\pi / 2) = 1 \\
\text{Trapezoid:} ~ &Q[f] &=& \frac{\Delta x}{2} (\sin(0) + \sin(\pi)) = 0 \\
\text{Simpson's:} ~ &Q[f] &=& \frac{\Delta x}{6} \sin(0) + \frac{2 \Delta x}{3} \sin(\pi / 2) + \frac{\Delta x}{6} \sin(\pi) = \frac{2 \Delta x}{3} = \frac{2}{3}
\end{aligned}$$
Code demo...
```
# Compute the error as a function of delta_x for each method
f = lambda x: numpy.sin(numpy.pi * x)
num_partitions = range(50, 1000, 50)
delta_x = numpy.empty(len(num_partitions))
error_mid = numpy.empty(len(num_partitions))
error_trap = numpy.empty(len(num_partitions))
error_simpson = numpy.empty(len(num_partitions))
for (j, N) in enumerate(num_partitions):
x_hat = numpy.linspace(0.0, 1.0, N + 1)
delta_x[j] = x_hat[1] - x_hat[0]
# Compute Midpoint
x_star = 0.5 * (x_hat[1:] + x_hat[:-1])
Qf = 0.0
for i in range(0, N):
Qf += f(x_star[i]) * delta_x[j]
error_mid[j] = numpy.abs(Qf - 2.0 / numpy.pi)
# Compute trapezoid
Qf = 0.0
for i in range(0, N):
Qf += (f(x_hat[i + 1]) + f(x_hat[i])) * delta_x[j] / 2.0
error_trap[j] = numpy.abs(Qf - 2.0 / numpy.pi)
# Compute simpson's
Qf = 0.0
for i in range(0, N):
Qf += delta_x[j] * (1.0 / 6.0 * (f(x_hat[i]) + f(x_hat[i+1])) + 2.0 / 3.0 * f(x_hat[i] + delta_x[j] / 2.0))
error_simpson[j] = numpy.abs(Qf - 2.0 / numpy.pi)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, error_mid, 'ro', label="Midpoint")
axes.loglog(delta_x, error_trap, 'bo', label="Trapezoid")
axes.loglog(delta_x, error_simpson, 'go', label="Simpson's")
axes.loglog(delta_x, order_C(delta_x[0], error_trap[0], 2.0) * delta_x**2.0, 'b--', label="2nd Order")
axes.loglog(delta_x, order_C(delta_x[0], error_simpson[0], 4.0) * delta_x**4.0, 'g--', label="4th Order")
axes.legend(loc=4)
plt.show()
```
### Recursive Improvement of Accuracy
Say we ran the trapezoidal rule with step size $2 \Delta x$, we then will have
$$\begin{aligned}
\int^{x_2}_{x_0} f(x) dx &= \frac{2 \Delta x}{2} (f_0 + f_2) = h (f_0 + f_2) \Rightarrow \\
\int^b_a f(x)dx &\approx Q_{2\Delta x}[f] = \sum^{N/2-1}_{j=0} \Delta x (f_{2j} + f_{2j+2}) \\
&= \Delta x (f_{0} + f_{2}) + \Delta x (f_{2} + f_{4}) + \cdots + \Delta x (f_{N-2} + f_{N}) \\
&= \Delta x\left ( f_0 + f_N + 2 \sum^{N/2-1}_{j=1} f_{2j} \right )
\end{aligned}
$$
Now compare the two rules for $\Delta x$ and $2 \Delta x$:
$$Q_{\Delta x}[f] = \frac{\Delta x}{2} \left (f_0 + f_N + 2 \sum^{N-1}_{j=1} f_j \right)~~~~~~~~~ Q_{2 \Delta x}[f] = \Delta x \left ( f_0 + f_N + 2 \sum^{N/2-1}_{j=1} f_{2j} \right )$$
$$Q_{\Delta x}[f] = \frac{1}{2} Q_{2\Delta x} + \Delta x(f_1 + f_3 + \cdots + f_{N-1})$$
Here we see we can actually reuse the work we did to calculate $Q_{2 \Delta x}[f]$ to refine the integral.
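A sketch of that reuse: halving the step only requires evaluating $f$ at the newly introduced odd-indexed points and adding them to half the coarse estimate:

```python
import math

# Refine a trapezoidal estimate from step 2*dx to step dx using the identity
# Q_dx = Q_{2dx}/2 + dx * (sum of f at the new odd-indexed points).
def refine_trapezoid(f, a, b, Q_coarse, N_fine):
    dx = (b - a) / N_fine
    return 0.5 * Q_coarse + dx * sum(f(a + j * dx) for j in range(1, N_fine, 2))

a, b = 0.0, math.pi
Q = (b - a) / 2.0 * (math.sin(a) + math.sin(b))  # N = 1 trapezoidal estimate
for N in (2, 4):  # successively halve the step, reusing previous work
    Q = refine_trapezoid(math.sin, a, b, Q, N)
```

Iterating this refinement (and extrapolating the estimates) is the basis of Romberg integration.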
### Arbitrary Intervals (Affine Transforms)
Mapping $\xi \in [-1,1] \rightarrow x \in [a,b]$ can be done through an *affine transform* or *affine map* which is a linear transformation.
$$x = \underbrace{\frac{b - a}{2}}_{\text{scaling}} \xi + \underbrace{\frac{a+b}{2}}_{\text{translation}} ~~~~~ \text{or} ~~~~~ \xi = \left( x - \frac{a + b}{2}\right) \frac{2}{b-a}$$
$$\begin{aligned}
I[f] &= \int^b_a f(x) dx = \int^1_{-1} f(x(\xi)) \frac{dx}{d\xi} d\xi = \frac{b - a}{2} \int^1_{-1} f(x(\xi)) d\xi\\
Q[f] &= \sum_i w_i f(x(\xi_i)) \left . \frac{dx}{d\xi}\right|_{\xi_i}
\end{aligned}$$
## Optimal Quadrature Methods
Can we determine $Q_{\Delta x}[f]$ to maximize degree for a given number of function evaluations (points)?
### Generalized Gaussian Quadrature
Given $g(x) \in P_N(x)$ with roots $\{x_i\}^N_{i=1}$ we have
$$
\int^1_{-1} w(x) x^i g(x) dx = 0 ~~~~ \forall i < N,
$$
i.e. $g(x)$ is orthogonal to the $x^i$ with respect to the weight function $w(x)$.
Recall something similar:
$$
\langle x, y \rangle = \sum^N_{i=1} x_i y_i = ||x|| \cdot ||y|| \cos \theta.
$$
If $\langle x, y \rangle = 0$ then the vectors $x$ and $y$ are orthogonal.
Given the above $g(x)$ there then exists $\{w_i\}$ such that
$$\int^1_{-1} w(x) P_j(x) dx = \sum^N_{i=1} w_i P_j(x_i) ~~~~ \forall j \leq 2 N - 1$$
In other words, given a polynomial basis function and weight and orthogonality to all polynomials of order $i < N$ we can exactly integrate polynomials of order $2 N - 1$. Choosing the correct weighting function and basis leads to a number of useful quadrature approaches:
#### Gauss-Legendre
General Gauss-Legendre quadrature uses $w(x) = 1$ and $g(x) = P_N(x)$, the degree-$N$ Legendre polynomial, which can be shown to have weights
$$w_i = \frac{2}{(1-x_i^2)(P'_N(x_i))^2}$$
where $x_i$ is the $i$th root of $P_N$.
The first few rules yield
<table width="80%">
<tr align="center"><th>$$N$$</th> <th align="center">$$x_i$$</th> <th align="center"> $$w_i$$ </th></tr>
<tr align="center"><td>$$1$$</td> <td> $$0$$ </td> <td> $$2$$ </td> </tr>
<tr align="center"><td>$$2$$</td> <td> $$\pm \sqrt{\frac{1}{3}}$$ </td> <td> $$1$$ </td> </tr>
<tr align="center"><td rowspan=2>$$3$$</td> <td> $$0$$ </td> <td> $$8/9$$ </td> </tr>
<tr align="center"> <td> $$\pm \sqrt{\frac{3}{5}}$$ </td> <td> $$5/9$$</td> </tr>
<tr align="center"><td rowspan=2>$$4$$</td> <td> $$\pm \sqrt{\frac{3}{7} - \frac{2}{7} \sqrt{\frac{6}{5}}}$$</td> <td> $$\frac{18 + \sqrt{30}}{36}$$ </td> </tr>
<tr align="center"> <td> $$\pm \sqrt{\frac{3}{7} + \frac{2}{7} \sqrt{\frac{6}{5}}}$$</td> <td>$$\frac{18 - \sqrt{30}}{36}$$ </td> </tr>
<tr align="center"><td rowspan=3>$$5$$</td> <td> $$0$$ </td> <td> $$\frac{128}{225}$$ </td> </tr>
<tr align="center"> <td> $$\pm \frac{1}{3} \sqrt{5 - 2 \sqrt{\frac{10}{7}}}$$</td> <td> $$\frac{322 + 13\sqrt{70}}{900}$$</td> </tr>
<tr align="center"> <td> $$\pm \frac{1}{3} \sqrt{5 + 2 \sqrt{\frac{10}{7}}}$$</td> <td> $$\frac{322 - 13\sqrt{70}}{900}$$</td> </tr>
</table>
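The tabulated nodes and weights can be cross-checked with `numpy.polynomial.legendre.leggauss`, and combined with the affine map from the previous section to integrate over an arbitrary interval:

```python
import numpy

# Check the tabulated N = 3 nodes/weights, then use the rule plus the affine
# map to integrate sin over [0, pi] (exact integral: 2).
nodes, weights = numpy.polynomial.legendre.leggauss(3)

a, b = 0.0, numpy.pi
x = (b - a) / 2.0 * nodes + (a + b) / 2.0  # map xi in [-1, 1] to x in [a, b]
Q = (b - a) / 2.0 * numpy.sum(weights * numpy.sin(x))
```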
##### Example 2: 2-Point Gauss-Legendre Quadrature
Let $N=2$ on $x \in [-1,1]$
$$Q[f] = w_0 f(x_0) + w_1 f(x_1)$$
Using undetermined coefficients again we have
$$\begin{aligned}
&\text{if}~f = 1: &I[f] = \int^{1}_{-1} 1 dx = 2 & & Q[1] &= w_0 + w_1\\
&\text{if}~f = x: &I[f] = \int^{1}_{-1} x dx = 0 & & Q[x] &= w_0 x_0 + w_1 x_1\\
&\text{if}~f = x^2: &I[f] = \int^{1}_{-1} x^2 dx = \frac{2}{3} & & Q[x^2] &= w_0 x_0^2 + w_1 x_1^2\\
&\text{if}~f = x^3: &I[f] = \int^{1}_{-1} x^3 dx = 0 & & Q[x^3] &= w_0 x_0^3 + w_1 x_1^3\\
\end{aligned}$$
$$\begin{aligned}
&w_0 + w_1 = 2\\
&w_0 x_0 + w_1 x_1 = 0\\
&w_0 x_0^2 + w_1 x_1^2 = \frac{2}{3}\\
&w_0 x_0^3 + w_1 x_1^3 = 0\\
\end{aligned}$$
Note that we need to solve for 4 unknowns $x_0$, $x_1$, $w_0$, and $w_1$. Solving these equations leads to
$$x_0 = -\sqrt{\frac{1}{3}}, x_1 = \sqrt{\frac{1}{3}} ~~~~~\text{and}~~~~~ w_0 = w_1 = 1 $$
Code demo...
```
# Compute Gauss-Legendre based quadrature and affine transforms
f = lambda x: numpy.sin(x)
If = lambda x: 1.0 - numpy.cos(x)
x = numpy.linspace(0.0, 2.0 * numpy.pi, 100)
num_partitions = 10
x_hat = numpy.linspace(0.0, 2.0 * numpy.pi, num_partitions + 1)
delta_x = x_hat[1] - x_hat[0]
xi_map = lambda a,b,xi : (b - a) / 2.0 * xi + (a + b) / 2.0
xi_0 = -numpy.sqrt(1.0 / 3.0)
xi_1 = numpy.sqrt(1.0 / 3.0)
Qf = numpy.zeros(x_hat.shape)
Qf[0] = (f(xi_map(x_hat[0], x_hat[1], xi_0)) + f(xi_map(x_hat[0], x_hat[1], xi_1))) * delta_x / 2.0
for i in range(1, num_partitions):
Qf[i] = Qf[i - 1] + (f(xi_map(x_hat[i], x_hat[i+1], xi_0)) + f(xi_map(x_hat[i], x_hat[i+1], xi_1))) * delta_x / 2.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, If(x), 'r')
# Offset due to indexing above
axes.plot(x_hat + delta_x, Qf, 'ko')
axes.set_xlabel("x")
axes.set_ylabel("$f(x)$")
axes.set_title("Integral and Approximated Integral")
axes.set_xlim((0.0, 2.0 * numpy.pi))
axes.set_ylim((-0.1, 2.5))
plt.show()
```
#### Other Quadrature Families
- Gauss-Chebyshev: If $w(x) = \frac{1}{\sqrt{1 - x^2}}$ and $g(x)$ are Chebyshev polynomials then we know the roots of the polynomials to be $x_i = \cos\left(\frac{2i-1}{2N} \pi \right)$ (the Chebyshev nodes) and we can derive that $w_i = \frac{\pi}{N}$.
- Gauss-Hermite: If $w(x) = e^{-x^2}$ and $g(x)$ are Hermite polynomials $H_i(x)$ then
$$w_i = \frac{2^{N-1} N! \sqrt{\pi}}{N^2 (H_{N-1}(x_i))^2}$$
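A quick Gauss-Hermite sketch using `numpy.polynomial.hermite.hermgauss`: the rule integrates $e^{-x^2} f(x)$ over the whole real line, and with $f(x) = x^2$ the exact value $\sqrt{\pi}/2$ is reproduced to rounding error (an $N$-point rule is exact for polynomials through degree $2N - 1$):

```python
import numpy

# 5-point Gauss-Hermite rule applied to f(x) = x^2; the weight e^{-x^2} is
# built into the rule, so no explicit exponential appears in the sum.
nodes, weights = numpy.polynomial.hermite.hermgauss(5)
Q = numpy.sum(weights * nodes**2)
```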
##### Example 3:
If $f(x) = e^x$ look at the relative accuracy of midpoint, trapezoidal, simpson and 2-point Gauss-Legendre quadrature for a single interval $x \in [-1,1]$.
$$\begin{aligned}
\text{Exact:} ~ &I[f] &=& \int^1_{-1} e^x = \left . e^x \right |^1_{-1} = e - \frac{1}{e} \approx 2.350402387 \\
\text{Midpoint:} ~ &Q[f] &=& 2 e^0 = 2 \\
\text{Trapezoid:} ~ &Q[f] &=& \frac{2}{2} (e^{-1} + e^1) = e + \frac{1}{e} \approx 3.08616127 \\
\text{Simpson's:} ~ &Q[f] &=& \frac{2}{6} e^{-1} + \frac{4}{3} e^0 + \frac{2}{6} e^1 = \frac{4}{3} + \frac{1}{3} (e^{-1} + e^1) \approx 2.362053757 \\
\text{Gauss-Legendre:} ~ &Q[f] &=& e^{-\sqrt{\frac{1}{3}}} + e^{\sqrt{\frac{1}{3}}} \approx 2.342696088
\end{aligned}$$
Code demo... (note that the convergence study below uses $f(x) = \sin \pi x$ on $[0,1]$ rather than the single-interval $e^x$ example above)
```
# Compute the error as a function of delta_x for each method
f = lambda x: numpy.sin(numpy.pi * x)
If = 2.0 / numpy.pi
num_partitions = range(50, 1000, 50)
delta_x = numpy.empty(len(num_partitions))
error_trap = numpy.empty(len(num_partitions))
error_simpson = numpy.empty(len(num_partitions))
error_2 = numpy.empty(len(num_partitions))
error_3 = numpy.empty(len(num_partitions))
error_4 = numpy.empty(len(num_partitions))
for (j, N) in enumerate(num_partitions):
x_hat = numpy.linspace(0.0, 1.0, N)
delta_x[j] = x_hat[1] - x_hat[0]
# Compute trapezoid
Qf = 0.0
for i in range(0, N - 1):
Qf += (f(x_hat[i + 1]) + f(x_hat[i])) * delta_x[j] / 2.0
error_trap[j] = numpy.abs(Qf - If)
# Compute simpson's
Qf = 0.0
for i in range(0, N - 1):
Qf += delta_x[j] * (1.0 / 6.0 * (f(x_hat[i]) + f(x_hat[i+1])) + 2.0 / 3.0 * f(x_hat[i] + delta_x[j] / 2.0))
error_simpson[j] = numpy.abs(Qf - If)
# Compute Gauss-Legendre 2-point
xi_map = lambda a,b,xi : (b - a) / 2.0 * xi + (a + b) / 2.0
xi = [-numpy.sqrt(1.0 / 3.0), numpy.sqrt(1.0 / 3.0)]
w = [1.0, 1.0]
Qf = 0.0
for i in range(0, N - 1):
for k in range(len(xi)):
Qf += f(xi_map(x_hat[i], x_hat[i+1], xi[k])) * w[k]
Qf *= delta_x[j] / 2.0
error_2[j] = numpy.abs(Qf - If)
# Compute Gauss-Legendre 3-point
xi_map = lambda a,b,xi : (b - a) / 2.0 * xi + (a + b) / 2.0
xi = [-numpy.sqrt(3.0 / 5.0), 0.0, numpy.sqrt(3.0 / 5.0)]
w = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
Qf = 0.0
for i in range(0, N - 1):
for k in range(len(xi)):
Qf += f(xi_map(x_hat[i], x_hat[i+1], xi[k])) * w[k]
Qf *= delta_x[j] / 2.0
error_3[j] = numpy.abs(Qf - If)
# Compute Gauss-Legendre 4-point
xi_map = lambda a,b,xi : (b - a) / 2.0 * xi + (a + b) / 2.0
xi = [-numpy.sqrt(3.0 / 7.0 - 2.0 / 7.0 * numpy.sqrt(6.0 / 5.0)),
numpy.sqrt(3.0 / 7.0 - 2.0 / 7.0 * numpy.sqrt(6.0 / 5.0)),
-numpy.sqrt(3.0 / 7.0 + 2.0 / 7.0 * numpy.sqrt(6.0 / 5.0)),
numpy.sqrt(3.0 / 7.0 + 2.0 / 7.0 * numpy.sqrt(6.0 / 5.0))]
w = [(18.0 + numpy.sqrt(30.0)) / 36.0, (18.0 + numpy.sqrt(30.0)) / 36.0,
(18.0 - numpy.sqrt(30.0)) / 36.0, (18.0 - numpy.sqrt(30.0)) / 36.0]
Qf = 0.0
for i in range(0, N - 1):
for k in range(len(xi)):
Qf += f(xi_map(x_hat[i], x_hat[i+1], xi[k])) * w[k]
Qf *= delta_x[j] / 2.0
error_4[j] = numpy.abs(Qf - If)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
# axes.plot(delta_x, error)
axes.loglog(delta_x, error_trap, 'o', label="Trapezoid")
axes.loglog(delta_x, error_simpson, 'o', label="Simpson's")
axes.loglog(delta_x, error_2, 'o', label="G-L 2-point")
axes.loglog(delta_x, error_3, 'o', label="G-L 3-point")
axes.loglog(delta_x, error_4, 'o', label="G-L 4-point")
order_C = lambda delta_x, error, order: numpy.exp(numpy.log(error) - order * numpy.log(delta_x))
axes.loglog(delta_x, order_C(delta_x[0], error_trap[0], 2.0) * delta_x**2.0, 'r--', label="2nd Order")
axes.loglog(delta_x, order_C(delta_x[0], error_simpson[0], 4.0) * delta_x**4.0, 'g--', label="4th Order")
axes.legend(loc=1)
axes.set_xlim((1e-3, 2e-1))
plt.show()
```
## SciPy Integration Routines
SciPy includes a number of integration routines, covering the rules we derived here as well as general-purpose adaptive integrators that estimate and control the error.
```
import scipy.integrate as integrate
# integrate?
```
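For example, `scipy.integrate.quad` adaptively integrates the non-elementary arc-length integrand from the introduction and returns an error estimate alongside the value:

```python
import numpy
import scipy.integrate as integrate

# Adaptive quadrature with an error estimate for the integral of
# sqrt(1 + cos^2 x) over [0, 2*pi].
value, abserr = integrate.quad(lambda x: numpy.sqrt(1.0 + numpy.cos(x)**2),
                               0.0, 2.0 * numpy.pi)
```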
| github_jupyter |
### Regression Example for Python pipefitter
This example shows how to solve a regression problem with a decision tree using the Python pipefitter package, including a grid search for hyperparameter tuning.
```
import swat
import numpy as np
import pandas as pd
```
#### Generate Data
Idea from Chapter 10, "Boosting and Additive Trees", of *The Elements of Statistical Learning* by Trevor Hastie, Robert Tibshirani, and Jerome Friedman.
```
mu, sigma = 0, 1 # mean and standard deviation
np.random.normal(mu, sigma, 10)
allnumpys = list()
for i in range(50):
    st = np.random.normal(mu, sigma, 1000)
    allnumpys.append(st)
data = pd.DataFrame(allnumpys)
data = data.transpose()
data.columns=['a'+str(i) for i in range(50)]
data['label']=1
def f(x):
    sumn = 0
    for i in range(10):
        sumn = sumn + x['a'+str(i)]*x['a'+str(i)] + 2*np.random.normal(0, 1, 1)
    return sumn
data['label']=data.apply(f, axis=1)
data.head()
```
## SAS Viya version
### Create Connections and Load Data
```
casconn = swat.CAS('sasserver.demo.sas.com', 5570, nworkers=1)
casdata = casconn.upload_frame(data)
```
Return first 5 rows of casdata
```
casdata.head()
```
Show information about the table, such as creation time, number of rows, etc.
```
casdata.tableinfo()
```
### Estimator
Import regression models: decision tree, random forest and gradient boosting tree
```
from pipefitter.estimator import DecisionTree, DecisionForest, GBTree
```
Create a DecisionTree object. This object is the high-level object that has no knowledge of CAS or SAS.
```
params = dict(target='label',
inputs=['a'+str(i) for i in range(50)])
dtree = DecisionTree(max_depth=6, **params)
dtree
```
#### Decision Tree Fit and Score of CAS Table
Using the `DecisionTree` instance, we'll first run the `fit` method on the data set. This will return a model object.
```
model = dtree.fit(casdata)
model
```
The ``score`` method can then be called on the resulting model object
```
score = model.score(casdata)
score
```
### HyperParameter Tuning
The hyper-parameter tuning classes allow you to test multiple sets of parameters across
a set of estimators.
```
from pipefitter.model_selection import HyperParameterTuning
```
Define the parameter space
```
param_grid = dict(
max_depth=[6, 10],
leaf_size=[3, 5],
)
hpt = HyperParameterTuning(
estimator=DecisionTree(target='label',
inputs=['a'+str(i) for i in range(50)]),
param_grid=param_grid,
cv=3)
hpt.gridsearch(casdata)
```
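Under the hood, a grid search simply enumerates every combination of the parameter grid and fits one model per combination (times the number of CV folds). A minimal stand-alone sketch of the enumeration step:

```python
from itertools import product

param_grid = dict(max_depth=[6, 10], leaf_size=[3, 5])

# every combination of parameter values, as a list of dicts
keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in product(*(param_grid[k] for k in keys))]

for combo in combos:
    print(combo)
# 2 x 2 = 4 candidate parameter sets; with cv=3, that means 12 fits in total
```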
## SAS 9 Version
### Open sas session and load sas data
```
import saspy
sas = saspy.SASsession(cfgname='tdi')
sasdata = sas.dataframe2sasdata(data)
params = dict(target='label',
inputs=['a'+str(i) for i in range(50)])
dtree = DecisionTree(max_depth=6, **params)
dtree
```
#### Decision Tree Fit and Score of SAS Table
```
model = dtree.fit(sasdata)
model
score = model.score(sasdata)
score
```
### HyperParameter Tuning
```
param_grid = dict(
max_depth=[6, 10],
leaf_size=[3, 5],
)
hpt = HyperParameterTuning(
estimator=DecisionTree(target='label',
inputs=['a'+str(i) for i in range(50)]),
param_grid=param_grid,
cv=3)
hpt.gridsearch(sasdata)
```
# Scraping Reviews
```
#Definitions and imports
from lxml import html
import requests
import pandas as pd
import numpy as np
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn import naive_bayes
from sklearn.metrics import roc_auc_score
from textblob import TextBlob as tb
import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
from nltk.corpus import stopwords
set(stopwords.words('english'))
#Creating an empty dataframe for reviews of Apple EarPods
reviews_df = pd.DataFrame()
#Defining local browser's user agent string to avoid requests being blocked
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
#Creating a list of URLs that point to the first 1000 review pages for Apple EarPods
url_list = []
for i in range(1,1001):
    url_list.append("https://www.amazon.com/Apple-MD827LL-EarPods-Remote-Mic/product-reviews/B0097BEG1C/ref=cm_cr_getr_d_paging_btm_prev_1?ie=UTF8&reviewerType=all_reviews&sortBy=recent&pageNumber={0}".format(i))
#Total number of URLs
len(url_list)
# Loop to fetch reviews from Amazon
# Looping through all 1000 URLs
for u in url_list:
    amazon_url = u
    # Setting header to user agent string
    headers = {'User-Agent': user_agent}
    page = requests.get(amazon_url, headers=headers)
    # Define parser
    parser = html.fromstring(page.content)
    # All reviews are located in a div tag with the attribute data-hook="review"
    xpath_reviews = '//div[@data-hook="review"]'
    reviews = parser.xpath(xpath_reviews)
    # Within the review div, the following 2 items can be located
    # The rating is located in an i tag with data-hook="review-star-rating"
    #xpath_rating = './/i[@data-hook="review-star-rating"]//text()'
    # The body text of the review is located in a span tag with data-hook="review-body"
    xpath_body = './/span[@data-hook="review-body"]//text()'
    # Looping through each outer div tag and appending results to the dataframe
    for review in reviews:
        body = review.xpath(xpath_body)
        review_dict = {'body': body}
        reviews_df = reviews_df.append(review_dict, ignore_index=True)
#Dropping any null cells
reviews_df.replace('', np.nan, inplace=True)
reviews_df.dropna(inplace=True,axis = 0)
reviews_df.head()
#Verifying review count
reviews_df.count()
#Each row is currently a list object, because some reviews have responses, which creates a list of review body texts
#We work around this by keeping the main review, discarding the rest, and then exporting to a csv
import re
reviews_df_text = pd.DataFrame()
for i in reviews_df['body']:
    if i:
        text = i[0].strip()
        #Many reviews have emojis and special characters. This section defines hex code ranges and removes them.
        emoji_pattern = re.compile("["
                                   u"\U0001F600-\U0001F926"  # emoticons
                                   u"\U0001F300-\U0001F5FF"  # symbols & pictographs
                                   u"\U0001F680-\U0001F6FF"  # transport & map symbols
                                   u"\U0001F1E0-\U0001F1FF"  # flags (iOS)
                                   u"\u2764"                 # heavy black heart
                                   u"\ufe0f"                 # variation selector
                                   u"\u2753"                 # question mark
                                   "]+", flags=re.UNICODE)
        text2 = emoji_pattern.sub(r'', text)  # no emoji
        text_dict = {'body': text2}
        reviews_df_text = reviews_df_text.append(text_dict, ignore_index=True)
#Exporting to csv
reviews_df_text.to_csv("Final_Amazon_Reviews.csv",encoding='utf8',index=False)
```
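The emoji-stripping step can be exercised on its own; here is a stand-alone sketch using the same `re.UNICODE` approach (the character ranges are illustrative, not exhaustive):

```python
import re

# illustrative ranges: pictographs, emoticons, transport symbols, flags
emoji_pattern = re.compile(
    "["
    "\U0001F300-\U0001F5FF"  # symbols & pictographs
    "\U0001F600-\U0001F64F"  # emoticons
    "\U0001F680-\U0001F6FF"  # transport & map symbols
    "\U0001F1E0-\U0001F1FF"  # flags (iOS)
    "\u2764"                 # heavy black heart
    "\ufe0f"                 # variation selector
    "]+",
    flags=re.UNICODE)

text = "Love these earbuds \u2764\ufe0f\U0001F600 great sound"
clean = emoji_pattern.sub("", text)
print(clean)  # "Love these earbuds  great sound"
```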
# Classifying sentiments for the reviews
```
#Extracting data from csv since scraping changes dataset on every run.
review_data = pd.read_csv("Final_Amazon_Reviews.csv")
review_data.head()
#Extracting one review as an example
blob_ex = review_data['body'][2].strip()
blob_ex
#Creating a text blob object from the review
test_blob = tb(blob_ex)
#Listing the words in the review
test_blob.words
#Listing tags for each of the words in the review
#For example, 'quality' is marked as a noun
test_blob.tags
#Extracting the sentiment for the review. This returns 2 values
# 1. Polarity: A range from -1 to 1 indicating sentiment where -1 is negative, 0 is neutral and 1 is positive
# 2. Subjectivity: This returns a float between 0 and 1 where 0 is very objective and 1 is very subjective
test_blob.sentiment
#Extracting the sentiment polarity into a variable
sent_val = test_blob.sentiment.polarity
#Verify sentiment value
sent_val
```
### For the purposes of the analysis here, I have considered neutral values to be positive and defined the following rule
##### if sent_val >= 0:
##### print("Positive")
##### else:
##### print("Negative")
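That rule can be written as a small stand-alone helper (a sketch; neutral polarity `0.0` counts as positive, per the decision above):

```python
def polarity_to_label(polarity: float) -> int:
    """Map a TextBlob-style polarity in [-1, 1] to a binary sentiment label.

    Neutral (0.0) is treated as positive, per the rule above.
    """
    return 1 if polarity >= 0 else 0

print(polarity_to_label(0.35))   # 1 (positive)
print(polarity_to_label(0.0))    # 1 (neutral counted as positive)
print(polarity_to_label(-0.6))   # 0 (negative)
```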
```
#Creating an empty dataframe for the review text and its respective sentiment
sent_df = pd.DataFrame()
for i in review_data['body']:
    #Extract review text
    text = i.strip()
    #Creating text blob
    body_text = tb(text)
    #Extract sentiment polarity
    sent_pol = body_text.sentiment.polarity
    if sent_pol >= 0:
        sent_val = 1
    else:
        sent_val = 0
    comb_dict = {'review': text,
                 'sentiment': sent_val}
    #print(sent_val)
    sent_df = sent_df.append(comb_dict, ignore_index=True)
#A combined dataframe with the sentiment values for each review is created
sent_df.head()
#Exporting to csv to keep record
sent_df.to_csv(header=True,path_or_buf="sentiment_one.csv")
```
# Naive Bayesian Classifier
```
#Defining the stopset of english words
#This defines a list of words that are inconsequential to sentiment analysis and can be removed from the data
stopset = set(stopwords.words('english'))
#Defining a TfidfVectorizer to convert the raw text into a sparse matrix
#This matrix essentially holds a frequency for each word and associates the words with either a positive or negative sentiment
vectorizer = TfidfVectorizer(use_idf=True, lowercase=True, strip_accents='ascii', stop_words=stopset)
#Defining the target variable, i.e. the sentiment
y = sent_df.sentiment
#Vectorize the reviews from the dataframe
x = vectorizer.fit_transform(sent_df.review)
#This is the number of records
print(y.shape)
# This returns 2 values.
# 1. Number of input records
# 2. Number of unique words in the dataset
print(x.shape)
# Using sklearn to perform a train test split
X_train, X_test, y_train, y_test = train_test_split(x, y, train_size=0.8, test_size=0.2)
#Defining a Naive Bayes classifier and fitting it to our train data
clf = naive_bayes.MultinomialNB()
clf.fit(X_train, y_train)
# Using roc_auc_score to determine the accuracy of the classifier
roc_auc_score(y_test, clf.predict_proba(X_test)[:,1])
```
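For intuition, ROC AUC is the probability that a randomly chosen positive example receives a higher predicted score than a randomly chosen negative one. A pure-Python sketch of that rank statistic (ties get half credit):

```python
def roc_auc(labels, scores):
    """ROC AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.3, 0.6, 0.7]
print(roc_auc(labels, scores))  # 1.0: every positive outscores every negative
```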
### Creating sample sentences to test the model
```
#Define and vectorise a positive sentence
sample_array_pos = np.array(["I enjoyed this product"])
sample_vector_pos = vectorizer.transform(sample_array_pos)
#Define and vectorize a negative sentence
sample_array_neg = np.array(["I hate it. It's useless and can't do anything"])
sample_vector_neg = vectorizer.transform(sample_array_neg)
#This means that we are analysing this one sample based on evidence collected for over 7000 words.
sample_vector_pos.shape
#Predict sentiment for the positive vector
sentiment = clf.predict(sample_vector_pos)
if sentiment == 1:
    print("Positive")
else:
    print("Negative")
#Predict sentiment for the negative vector
sentiment = clf.predict(sample_vector_neg)
if sentiment == 1:
    print("Positive")
else:
    print("Negative")
```
# Conclusions:
1. The dataset's sentiment fluctuates based on which product reviews were scraped and when they were scraped.
2. Most reviews on Amazon seem to be positive. This makes our model biased and results in a few misclassifications.
3. For the Naive Bayes classifier, the accuracy varies between 85% and 98% depending on the nature of the input data.
4. In general, adding more data tends to increase the accuracy, since the model is exposed to a larger vocabulary.
# Tutorial
This is a brief tutorial of basic Metagraph usage.
First, we import Metagraph:
```
import metagraph as mg
```
## Inspecting Types and Available Algorithms
The default resolver automatically pulls in all registered Metagraph plugins.
The default resolver also links itself with the `metagraph` namespace, allowing
users to ignore the default resolver in most cases.
```
res = mg.resolver
```
A hierarchy of available types is automatically added as properties on `res`.
```
dir(res.types)
```
Alternatively, simply access the types directly from `mg`.
Most attributes of `res` are exposed as attributes on `mg` for convenience.
```
dir(mg.types)
```
Two important concepts in Metagraph are abstract types and concrete types.
Abstract types describe a generic kind of data container with potentially many equivalent representations.
Concrete types describe a specific data object which fits under the abstract type category.
One can think of abstract types as data container specifications and concrete types as implementations of those specifications.
For each abstract type, there are several concrete types.
Within a single abstract type, all concrete types are able to represent equivalent data, but in a different format or data structure.
Here we show the concrete types which represent `Graphs`:
```
dir(mg.types.Graph)
```
Available algorithms are listed under `mg.algos` and grouped by categories.
Note that these are also available under `res.algos`.
```
dir(mg.algos)
dir(mg.algos.traversal)
```
## Example Usage
Let's see how to use Metagraph by first constructing a graph from an edge list.
Begin with an input csv file representing an edge list and weights.
```
data = """
Source,Destination,Weight
0,1,4
0,3,2
0,4,7
1,3,3
1,4,5
2,4,5
2,5,2
2,6,8
3,4,1
4,7,4
5,6,4
5,7,6
"""
```
Read in the csv file and convert to a Pandas `DataFrame`.
```
import pandas as pd
import io
csv_file = io.StringIO(data)
df = pd.read_csv(csv_file)
```
This `DataFrame` represents a graph’s edges, but Metagraph doesn’t know that yet. To use the `DataFrame` within Metagraph, we first need to convert it into an `EdgeMap`.
A `PandasEdgeMap` takes a `DataFrame` plus the labels of the columns representing source, destination, and weight. With these, Metagraph will know how to interpret the `DataFrame` as an `EdgeMap`.
```
em = mg.wrappers.EdgeMap.PandasEdgeMap(df, 'Source', 'Destination', 'Weight', is_directed=False)
em.value
```
## Convert EdgeMap to a Graph
`Graphs` and `EdgeMaps` have many similarities, but `Graphs` are more powerful. `Graphs` can have weights on the nodes, not just on the edges. `Graphs` can also have isolate nodes (nodes with no edges), which `EdgeMaps` cannot have.
Most Metagraph algorithms take a `Graph` as input, so we will convert our `PandasEdgeMap` into a `Graph`. In this case, it will become a `NetworkXGraph`.
```
g = mg.algos.util.graph.build(em)
g
g.value.edges(data=True)
```
## Translate to other Graph formats
Because Metagraph knows how to interpret `g` as a `Graph`, we can easily convert it to other `Graph` formats.
Let's convert it to a `ScipyGraph`. This format stores the edges and weights in a scipy.sparse matrix along with a numpy array mapping the position to a NodeId (in case the nodes are not sequential from 0..n). Any node weights are stored in a separate numpy array.
```
g2 = mg.translate(g, mg.wrappers.Graph.ScipyGraph)
g2
```
The matrix is accessed using `g2.value`. The node list is accessed using `.node_list`.
We can verify the weights and edges by inspecting the sparse adjacency matrix directly.
```
g2.value.toarray()
```
We can also convert `g` into an adjacency matrix representation using a `GrblasGraph`. This also stores the edges and node weights separately.
```
g3 = mg.translate(g, mg.types.Graph.GrblasGraphType)
g3
g3.value
```
We can also visualize the graph.
```
import grblas
grblas.io.draw(g3.value)
```
## Inspect the steps required for translations
Rather than actually converting `g` into other formats, let’s ask Metagraph how it will do the conversion. Each conversion requires a translator (written by plugin developers) to convert between the two formats. However, even if there isn’t a direct translator between two formats, Metagraph will find a path and take several translation steps as needed to perform the task.
The mechanism for viewing the plan is to invoke the translation from ``mg.plan.translate`` rather than ``mg.translate``. Other than the additional ``.plan``, the call signature is identical.
In this first example, there is a direct function which translates between `NetworkXGraphType` and `ScipyGraphType`.
```
mg.plan.translate(g, mg.types.Graph.ScipyGraphType)
```
---
In this next example, there is no direct function to convert a `NetworkXGraphType` into a `GrblasGraphType`. Instead, we first convert to `ScipyGraphType` and then to `GrblasGraphType` before finally arriving at our desired format.
While Metagraph will do the conversion automatically, understanding the steps involved helps users plan for expected computation time and memory usage. If needed, plugin developers can write a plugin to provide a direct translation path.
```
mg.plan.translate(g, mg.types.Graph.GrblasGraphType)
```
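Conceptually, finding a multi-step translation is a shortest-path search over a graph whose nodes are concrete types and whose edges are registered translators. A minimal stand-alone sketch (the translator registry here is an assumption for illustration, not Metagraph's actual table):

```python
from collections import deque

# hypothetical translator registry: type -> directly reachable types
translators = {
    "NetworkXGraphType": ["ScipyGraphType"],
    "ScipyGraphType": ["NetworkXGraphType", "GrblasGraphType"],
    "GrblasGraphType": ["ScipyGraphType"],
}

def translation_path(src, dst):
    """Breadth-first search for the shortest chain of translators."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in translators.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(translation_path("NetworkXGraphType", "GrblasGraphType"))
# ['NetworkXGraphType', 'ScipyGraphType', 'GrblasGraphType']
```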
## Algorithm Example #1: Breadth First Search
Algorithms are described initially in an abstract definition. For bfs_iter, we take a `Graph` and return a `Vector` indicating the NodeIDs in the order visited.
After the abstract definition is written, multiple concrete implementations are written to operate on concrete types.
Let's look at the signature and specific implementations available for bfs_iter.
```
mg.algos.traversal.bfs_iter.signatures
```
We see that there are two implementations available, each with a different type of input graph.
---
Let's perform a breadth-first search with our different representations of `g`. We should get approximately the same answer no matter which implementation is chosen (same NodeIDs within each depth level of the traversal).
```
cc = mg.algos.traversal.bfs_iter(g, 0)
cc
cc2 = mg.algos.traversal.bfs_iter(g2, 0)
cc2
```
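To see what "same NodeIDs within each depth level" means, here is a stand-alone breadth-first traversal of the same edge list used above:

```python
edges = [(0, 1), (0, 3), (0, 4), (1, 3), (1, 4), (2, 4), (2, 5),
         (2, 6), (3, 4), (4, 7), (5, 6), (5, 7)]

# build an undirected adjacency list
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def bfs_levels(start):
    """Return the set of nodes at each depth level from `start`."""
    levels, frontier, seen = [], {start}, {start}
    while frontier:
        levels.append(frontier)
        frontier = {n for f in frontier for n in adj[f]} - seen
        seen |= frontier
    return levels

print(bfs_levels(0))  # [{0}, {1, 3, 4}, {2, 7}, {5, 6}]
```

Any valid BFS order visits node 0, then nodes 1, 3, 4 in some order, then 2 and 7, then 5 and 6 — which is the sense in which the two implementations agree.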
---
Similar to how we can view the plan for translations, we can view the plan for algorithms.
No translation is needed because we already have a concrete implementation which takes a `NetworkXGraph` as input.
```
mg.plan.algos.traversal.bfs_iter(g, 0)
```
---
In the next example, `g2` also satisfies a concrete implementation, so no input translation is required.
```
mg.plan.algos.traversal.bfs_iter(g2, 0)
```
## Algorithm Example #2: Pagerank
Let's look at the same pieces of information, but for pagerank. Pagerank takes a `Graph` and returns a `NodeMap` indicating the rank value of each node in the graph.
First, let's verify the signature and the implementations available.
We see that there are two implementations available, taking a `NetworkXGraph` or `GrblasGraph` as input.
```
mg.algos.centrality.pagerank.signatures
```
---
Let's look at the steps required in the plan if we start with a `ScipyGraph`. Then let's perform the computation.
We see that the `ScipyGraph` will need to be translated to a `GrblasGraph` in order to call the algorithm. **Metagraph will do this for us automatically.**
```
mg.plan.algos.centrality.pagerank(g2)
pr = mg.algos.centrality.pagerank(g2)
pr
```
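For intuition, PageRank can be sketched as a power iteration independent of any Metagraph types (this is the classic algorithm on the same undirected edge list, not the exact implementation either backend uses):

```python
edges = [(0, 1), (0, 3), (0, 4), (1, 3), (1, 4), (2, 4), (2, 5),
         (2, 6), (3, 4), (4, 7), (5, 6), (5, 7)]

# undirected: each edge contributes in both directions
adj = {n: set() for n in range(8)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration: each node shares its rank equally among its neighbors."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in adj}
        for v, nbrs in adj.items():
            share = damping * rank[v] / len(nbrs)
            for u in nbrs:
                new[u] += share
        rank = new
    return rank

ranks = pagerank(adj)
print(sum(ranks.values()))  # ranks form a probability distribution: ≈ 1.0
```

Node 4 has the highest degree, so it ends up with the highest rank.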
The result is a `GrblasNodeMap` which can be inspected by looking at the underlying `.value`.
```
pr.value
```
Let's translate it to a numpy array.
```
pr_array = mg.translate(pr, mg.types.NodeMap.NumpyNodeMapType)
pr_array.value
```
**Helpful tip**: The translation type can also be specified as a string
```
mg.translate(pr, "NumpyNodeMap")
```
Now let's verify that we get the same answer with the NetworkX implementation of Pagerank.
We can ensure the NetworkX implementation is called by passing in a NetworkXGraph. Because no translations
are required, it will choose that implementation.
The result is a `PythonNodeMapType`, which is simply a Python `dict`.
```
pr2 = mg.algos.centrality.pagerank(g)
pr2
```
Translate to a numpy array and verify the same results (within tolerance)
```
pr2_array = mg.translate(pr2, "NumpyNodeMap")
pr2_array.value
abs(pr2_array.value - pr_array.value) < 1e-15
```
```
input_dir = '../input/'
working_dir = '../working/'
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
train = pd.read_csv(os.path.join(input_dir, 'train.csv'))
test = pd.read_csv(os.path.join(input_dir, 'test.csv'))
# Set index
train.index = train['Id'].values
test.index = test['Id'].values
print(train.shape)
print(test.shape)
```
# **EDA**
```
train.info()
train.head(5)
```
The data have 9557 entries, each with 143 columns.
Most of the columns are floats and integers, with a few objects. Let's take a look at the objects.
# **Object value**
```
train.columns[train.dtypes==object]
```
* Id, idhogar - no problem, they are just identifiers
* dependency - dependency rate
* edjefe, edjefa - years of education of the head of household
### Clean Data
1. dependency 'no' -> 0
2. edjefa, edjefe 'no' -> 0, 'yes' -> 1
3. meaneduc NaN -> mean escolari of household
4. v2a1 NaN -> 0
5. v18q1 NaN -> 0
6. rez_esc NaN -> 0
1. dependency
dependency 'no' -> 0
We can just derive 'dependency' from 'SQBdependency':
* the "square" of 'no' is 0.
* the "square" of 'yes' is 1.
```
train['dependency'].unique()
train['SQBdependency'].unique()
train['SQBdependency']
train[(train['dependency']=='no') & (train['SQBdependency']!=0)]
train[(train['dependency']=='yes') & (train['SQBdependency']!=1)]
train[(train['dependency']=='no') & (train['SQBdependency']!=1)]
train['dependency']=np.sqrt(train['SQBdependency'])
```
2. edjefa, edjefe 'no' -> 0, 'yes' -> 1
Basically:
* 'edjefe' and 'edjefa' are both 'no' when the head of the household had 0 years of school
* there's 'edjefe'= 'yes' and 'edjefa'='no' in some cases, all these cases the head of the household had 1 year of school
* there's 'edjefe'= 'no' and 'edjefa'='yes' in some cases, all these cases the head of the household had 1 year of school
* most of the time either 'edjefe' or 'edjefa' is a number while the other is a 'no'
* Let's merge the jefe and jefa education into one, independent of gender
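The merge rule above can be sketched as a plain function operating on one pair of values (the same logic the notebook applies vectorized with `np.select`):

```python
def merge_edjef(edjefe: str, edjefa: str) -> int:
    """Merge head-of-household education into one value, independent of gender.

    'no' -> 0, 'yes' -> 1, otherwise the value is already a year count.
    """
    for value in (edjefe, edjefa):
        if value == 'yes':
            return 1
        if value != 'no':
            return int(value)
    return 0  # both were 'no'

print(merge_edjef('no', 'no'))   # 0
print(merge_edjef('yes', 'no'))  # 1
print(merge_edjef('no', '7'))    # 7
print(merge_edjef('12', 'no'))   # 12
```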
```
train['edjefe'].unique()
train['edjefa'].unique()
train['SQBedjefe'].unique()
train[['edjefe', 'edjefa', 'SQBedjefe']][:20]
```
'SQBedjefe' is just the square of 'edjefe'; it's 0 if the head of the household is a woman.
```
train[['edjefe', 'edjefa', 'SQBedjefe']][train['edjefe']=='yes']
train[(train['edjefe']=='yes') & (train['edjefa']!='no')]
```
escolari = years of schooling
parentesco1 =1 if household head
```
train[(train['edjefa']=='yes') & (train['parentesco1']==1)][['edjefe', 'edjefa', 'parentesco1', 'escolari']]
train[train['edjefe']=='yes'][['edjefe', 'edjefa','age', 'escolari', 'parentesco1','male', 'female', 'idhogar']]
train[(train['edjefe']=='no') & (train['edjefa']=='no')][['edjefe', 'edjefa', 'age', 'escolari', 'female', 'male', 'Id', 'parentesco1', 'idhogar']]
conditions = [
(train['edjefe']=='no') & (train['edjefa']=='no'), #both no
(train['edjefe']=='yes') & (train['edjefa']=='no'), # yes and no
(train['edjefe']=='no') & (train['edjefa']=='yes'), #no and yes
(train['edjefe']!='no') & (train['edjefe']!='yes') & (train['edjefa']=='no'), # number and no
(train['edjefe']=='no') & (train['edjefa']!='no') # no and number
]
choices = [0, 1, 1, train['edjefe'], train['edjefa']]
train['edjefx']=np.select(conditions, choices)
train['edjefx']=train['edjefx'].astype(int)
train[['edjefe', 'edjefa', 'edjefx']][:15]
```
# **missing values**
```
train.columns[train.isna().sum()!=0]
```
Columns with nans:
* v2a1 - monthly rent
* v18q1 - number of tablets
* rez_esc - years behind school
* meaneduc - mean education for adults
* SQBmeaned - square of meaned
3. meaneduc NaN -> mean escolari of household
'meaneduc' and 'SQBmeaned' are related
```
train[train['meaneduc'].isnull()]
train[train['meaneduc'].isnull()][['Id','idhogar','edjefe','edjefa', 'hogar_adul', 'hogar_mayor', 'hogar_nin', 'age', 'escolari']]
```
So, the 5 rows with NaN for 'meaneduc' correspond to just 3 households, where only 18-19-year-olds live. No other people live in these households. We can just take the education levels of these kids ('escolari') and put them into 'meaneduc' and 'SQBmeaned'.
4. v2a1 NaN -> 0
Next, let's look at 'v2a1', the monthly rent payment, that also has missing values.
```
norent=train[train['v2a1'].isnull()]
print("Owns his house:", norent[norent['tipovivi1']==1]['Id'].count())
print("Owns his house paying installments", norent[norent['tipovivi2']==1]['Id'].count())
print("Rented ", norent[norent['tipovivi3']==1]['Id'].count())
print("Precarious ", norent[norent['tipovivi4']==1]['Id'].count())
print("Other ", norent[norent['tipovivi5']==1]['Id'].count())
print("Total ", 6860)
```
The majority in fact owns their houses, only a few have odd situations. We can probably just assume they don't pay rent, and put 0 in these cases.
5. v18q1 NaN -> 0
Let's look at 'v18q1', which indicates how many tablets the household owns.
```
train['v18q1'].unique()
```
6. rez_esc NaN -> 0
rez_esc
: Years behind in school
```
rez_esc_nan=train[train['rez_esc'].isnull()]
rez_esc_nan[(rez_esc_nan['age']<18) & (rez_esc_nan['escolari']>0)][['age', 'escolari']]
```
So all the NaNs here are either adults or children below school age. We can impute 0 again.
```
def data_cleaning(data):
    data['dependency'] = np.sqrt(data['SQBdependency'])
    data['rez_esc'] = data['rez_esc'].fillna(0)
    data['v18q1'] = data['v18q1'].fillna(0)
    data['v2a1'] = data['v2a1'].fillna(0)
    conditions = [
        (data['edjefe']=='no') & (data['edjefa']=='no'),   # both no
        (data['edjefe']=='yes') & (data['edjefa']=='no'),  # yes and no
        (data['edjefe']=='no') & (data['edjefa']=='yes'),  # no and yes
        (data['edjefe']!='no') & (data['edjefe']!='yes') & (data['edjefa']=='no'),  # number and no
        (data['edjefe']=='no') & (data['edjefa']!='no')    # no and number
    ]
    choices = [0, 1, 1, data['edjefe'], data['edjefa']]
    data['edjefx'] = np.select(conditions, choices)
    data['edjefx'] = data['edjefx'].astype(int)
    data.drop(['edjefe', 'edjefa'], axis=1, inplace=True)
    meaneduc_nan = data[data['meaneduc'].isnull()][['Id','idhogar','escolari']]
    me = meaneduc_nan.groupby('idhogar')['escolari'].mean().reset_index()
    for row in meaneduc_nan.iterrows():
        idx = row[0]
        idhogar = row[1]['idhogar']
        m = me[me['idhogar']==idhogar]['escolari'].tolist()[0]
        data.at[idx, 'meaneduc'] = m
        data.at[idx, 'SQBmeaned'] = m*m
    return data
train = data_cleaning(train)
test = data_cleaning(test)
train = data_cleaning(train)
test = data_cleaning(test)
```
### Extract heads of household
```
train = train.query('parentesco1==1')
train = train.drop('parentesco1', axis=1)
test = test.drop('parentesco1', axis=1)
print(train.shape)
```
## Convert one-hot variables into numeric
* 'epared', 'etecho', 'eviv' and 'instlevel' can be converted into numeric
* like (bad, regular, good) -> (0 ,1, 2)
```
def get_numeric(data, status_name):
    # make a list of column names containing 'status_name'
    status_cols = [s for s in data.columns.tolist() if status_name in s]
    print('status column names')
    print(status_cols)
    # make a DataFrame with only status_cols
    status_df = data[status_cols]
    # change its column names like ['epared1', 'epared2', 'epared3'] -> [0, 1, 2]
    status_df.columns = list(range(status_df.shape[1]))
    # get the column name which has the biggest value in every row
    # this is a pandas.Series
    status_numeric = status_df.idxmax(1)
    # set the Series name
    status_numeric.name = status_name
    # add status_numeric as a new column
    data = pd.concat([data, status_numeric], axis=1)
    return data
status_name_list = ['epared', 'etecho', 'eviv', 'instlevel']
for status_name in status_name_list:
    train = get_numeric(train, status_name)
    test = get_numeric(test, status_name)
```
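The idea — collapse a group of one-hot columns into a single ordinal value — can be shown without pandas (a stand-alone sketch of what `idxmax` does per row):

```python
def onehot_to_ordinal(row):
    """Return the index of the active flag in a one-hot encoded row.

    e.g. [epared1, epared2, epared3] == [0, 1, 0] -> 1 ("regular")
    """
    return max(range(len(row)), key=lambda i: row[i])

print(onehot_to_ordinal([0, 1, 0]))  # 1
print(onehot_to_ordinal([0, 0, 1]))  # 2
```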
## Delete needless columns
### redundant columns
* r4t3, tamviv, tamhog, hhsize ... almost the same as hogar_total
* v14a ... almost the same as sanitario1
* v18q, mobilephone ... can be generated by v18q1, qmobilephone
* SQBxxx, agesq ... squared values
* parentescoxxx ... only heads of household are in dataset now
### extra columns
(One-hot variables should be linearly independent. For example, female (or male) column is needless, because whether the sample is female or not can be explained only with male (or female) column.)
* paredother, pisoother, abastaguano, energcocinar1, techootro, sanitario6, elimbasu6, estadocivil7, parentesco12, tipovivi5, lugar1, area1, female
### obsolete columns
* epared1~3, etecho1~3, eviv1~3, instlevel1~9 ... we don't use these columns anymore.
```
needless_cols = ['r4t3', 'tamhog', 'tamviv', 'hhsize', 'v18q', 'v14a', 'agesq',
'mobilephone', 'paredother', 'pisoother', 'abastaguano',
'energcocinar1', 'techootro', 'sanitario6', 'elimbasu6',
'estadocivil7', 'parentesco12', 'tipovivi5',
'lugar1', 'area1', 'female', 'epared1', 'epared2',
'epared3', 'etecho1', 'etecho2', 'etecho3',
'eviv1', 'eviv2', 'eviv3', 'instlevel1', 'instlevel2',
'instlevel3', 'instlevel4', 'instlevel5', 'instlevel6',
'instlevel7', 'instlevel8', 'instlevel9']
SQB_cols = [s for s in train.columns.tolist() if 'SQB' in s]
parentesco_cols = [s for s in train.columns.tolist() if 'parentesco' in s]
needless_cols.extend(SQB_cols)
needless_cols.extend(parentesco_cols)
train = train.drop(needless_cols, axis=1)
test = test.drop(needless_cols, axis=1)
ori_train = pd.read_csv(os.path.join(input_dir, 'train.csv'))
ori_train_X = ori_train.drop(['Id', 'Target', 'idhogar'], axis=1)
train_X = train.drop(['Id', 'Target', 'idhogar'], axis=1)
print('feature columns \n {} -> {}'.format(ori_train_X.shape[1], train_X.shape[1]))
```
## Simple LightGBM
```
# Split data
train_Id = train['Id'] # individual ID
train_idhogar = train['idhogar'] # household ID
train_y = train['Target'] # Target value
train_X = train.drop(['Id', 'Target', 'idhogar'], axis=1) # features
test_Id = test['Id'] # individual ID
test_idhogar = test['idhogar'] # household ID
test_X = test.drop(['Id', 'idhogar'], axis=1) # features
# Union train and test
all_Id = pd.concat([train_Id, test_Id], axis=0, sort=False)
all_idhogar = pd.concat([train_idhogar, test_idhogar], axis=0, sort=False)
all_X = pd.concat([train_X, test_X], axis=0, sort=False)
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, f1_score, make_scorer
import lightgbm as lgb
X_train, X_test, y_train, y_test = train_test_split(train_X, train_y, test_size=0.1, random_state=0)
F1_scorer = make_scorer(f1_score, greater_is_better=True, average='macro')
# gbm_param = {
# 'num_leaves':[210]
# ,'min_data_in_leaf':[9]
# ,'max_depth':[14]
# }
# gbm = GridSearchCV(
# lgb.LGBMClassifier(objective='multiclassova', class_weight='balanced', seed=0)
# , gbm_param
# , scoring=F1_scorer
# )
# params = {'num_leaves': 13, 'min_data_in_leaf': 23, 'max_depth': 11, 'learning_rate': 0.09, 'feature_fraction': 0.74}
gbm = lgb.LGBMClassifier(boosting_type='dart', objective='multiclassova', class_weight='balanced', random_state=0)
# gbm.set_params(**params)
gbm.fit(X_train, y_train)
# gbm.best_params_
import pickle
with open(os.path.join(working_dir, '20180801_lgbm.pickle'), mode='wb') as f:
    pickle.dump(gbm, f)
y_test_pred = gbm.predict(X_test)
cm = confusion_matrix(y_test, y_test_pred)
f1 = f1_score(y_test, y_test_pred, average='macro')
print("confusion matrix: \n", cm)
print("macro F1 score: \n", f1)
pred = gbm.predict(test_X)
pred = pd.Series(data=pred, index=test_Id.values, name='Target')
pred = pd.concat([test_Id, pred], axis=1)
pred.to_csv('submission.csv', index=False)
```
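Macro F1 — the metric used above — averages per-class F1 scores with equal weight per class, which is why the rare classes matter so much here. A pure-Python sketch:

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

y_true = [1, 1, 2, 2, 3, 4]
y_pred = [1, 1, 2, 3, 3, 4]
print(round(macro_f1(y_true, y_pred), 3))  # 0.833 = mean of (1, 2/3, 2/3, 1)
```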
# Multi-Vendor Network Automation with NAPALM
## Part 4 - Make Configuration Changes with NAPALM
> This lesson uses Jupyter notebooks to provide a guided experience with in-line, executable and editable code snippets. Read <a target="_blank" href="/jupyterlessonguides.html">here</a> for more details on how to interact with these.
Everything we've done thus far has been retrieving information from our network device. While this is a very powerful tool to have, we'll eventually need to make changes on our device. Does NAPALM support this?
The good news is - yes, it totally does. However, this part isn't always as abstracted as the "getter" functions we've seen thus far. By retrieving data, we can focus only on very particular use cases, and NAPALM will do the hard work of translating the retrieved data from vendor-specific APIs into a common format. The problem with trying to do this with configuration data is that vendors have wildly different configuration syntaxes, and it's not very useful to only focus on configuring a small subset of a device's capabilities.
> Hopefully this will eventually improve with the advent of [OpenConfig](http://www.openconfig.net/) models, which can be used with the [napalm-yang](https://napalm-yang.readthedocs.io/en/latest/root/supported_models/index.html) project to provide vendor-agnostic configurations. Unfortunately, only a handful of vendors currently support OpenConfig.
So, what we end up having to do is construct a vendor-specific configuration outside of NAPALM, such as with a templating language like Jinja, and then pass this in to one of the configuration functions in NAPALM. Let's do that now.
Let's assume you have a variable titled `vqfx1_config` that contains a partial configuration for setting the description of an interface. Ideally you would have built this from a template using Jinja and something like YAML, but we'll just create this explicitly for now so we can learn how to apply it using NAPALM:
> Try changing the description yourself in the XML structure below, by editing the text between the `<description>` and `</description>` tags.
```
vqfx1_config = """
<configuration>
<interfaces>
<interface>
<name>em0</name>
<unit>
<name>0</name>
<description>This is em0, and it connects to something.</description>
</unit>
</interface>
</interfaces>
</configuration>
"""
```
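The config string above could itself be rendered from a template, as the lesson suggests. Here is a minimal sketch of that idea using the standard library's `string.Template` (Jinja would be the usual choice in practice; the template and variable names here are illustrative, not part of NAPALM):

```python
from string import Template

# A reusable XML skeleton; $interface, $unit and $description are placeholders.
CONFIG_TEMPLATE = Template("""
<configuration>
    <interfaces>
        <interface>
            <name>$interface</name>
            <unit>
                <name>$unit</name>
                <description>$description</description>
            </unit>
        </interface>
    </interfaces>
</configuration>
""")

# Render a device-specific configuration from structured data.
vqfx1_config = CONFIG_TEMPLATE.substitute(
    interface="em0",
    unit="0",
    description="This is em0, and it connects to something.",
)
print(vqfx1_config)
```

In a real workflow the values would come from structured inventory data (for example YAML files), and the template would live in its own file, but the shape is the same: structured data in, vendor-specific configuration text out.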
Again, we need to recreate the NAPALM driver and device objects so we can use them in this notebook:
```
import napalm
driver = napalm.get_network_driver("junos")
device = driver(hostname="vqfx1", username="antidote", password="antidotepassword")
device.open()
```
Next, we can use a function we haven't seen yet, called `load_merge_candidate`, and pass our configuration string as the `config` parameter.
> If we had the config stored in a file, we could use the `filename` parameter instead.
The `merge` portion of that function name is the "strategy" used for applying that config. In short, this will attempt to merge the existing configuration with the changes you're proposing, and only change what needs to change to incorporate the new configuration.
```
device.load_merge_candidate(config=vqfx1_config)
```
As is common on modern network operating systems, the changes we've proposed have gone into a candidate configuration store, which means they've been sent to the device, but the device hasn't started using them yet. We have to `commit` them in order to do that. As is customary in these situations, we can use the `compare_config` function to see the exact diff of what will happen to the configuration if we commit it now:
```
print(device.compare_config())
```
If we didn't like the diff we saw above, we could call `device.discard_config()` to get rid of the changes we proposed as candidate. However, this looks good to us, so we can apply the changes with a `commit_config`:
```
device.commit_config()
```
> The function `load_replace_candidate` is similar, but instead of attempting to merge the configuration, it overwrites the entire configuration with what you pass in. **USE WITH CAUTION** as you will need to make sure your configuration is exactly what it needs to be.
Now that our change is applied, we can use the `get_interfaces` function to see the description applied to the operational state of the device:
```
device.get_interfaces()['em0.0']['description']
```
Often, we make mistakes and need to roll back a change. Let's say we made a typo in the description and just want to undo what we just committed. No worries - the `rollback` function does this for us.
```
device.rollback()
device.get_interfaces()['em0.0']['description']
```
NAPALM also comes with a few built-in templates you can use to perform some common tasks without building your own configuration.
> You can see that even the small number of templates included in the project contains examples that will only work on certain vendors' devices. This is the challenge with trying to unify existing configuration paradigms.
That's it for now! In future versions of the lesson (or perhaps in other lessons) we'll dive deeper into some of the more specific tools within NAPALM for targeted workflows. For now, check out the [NAPALM documentation](https://napalm.readthedocs.io) for more information.
| github_jupyter |
```
import os, cv2, random, json
import numpy as np
import pandas as pd
np.random.seed(2016)
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
from keras.models import Sequential, Model
from keras.layers import Input, Dropout, Flatten, Convolution2D, MaxPooling2D, ZeroPadding2D, Dense, Activation
from keras.layers import merge, Convolution1D, BatchNormalization, Reshape, Permute
from keras.optimizers import RMSprop, Adam, Adamax, Nadam, SGD, Adadelta
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils
from keras.regularizers import l2
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
ROWS = 120
COLS = 320
CHANNELS = 3
DIR = 'data/IMG/'
```
## Parsing the Data Log
First, I load `driving_log.csv` into pandas and extract the center image paths and steering-angle labels.
```
data = pd.read_csv('data/driving_log.csv', header=None,
                   names=['center', 'left', 'right', 'angle', 'throttle', 'brake', 'speed'])
print(data.iloc[0].center)  # .ix is deprecated; .iloc does positional indexing
data.sample()
def img_id(path):
    return path.split('/IMG/')[1]
image_paths = data.center.apply(img_id).values.tolist()
image_paths[:5]
# y_all = data[['angle', 'throttle']].values
y_all = data.angle.values
n_samples = y_all.shape[0]
print("Training Model with {} Samples".format(n_samples))
```
## Reading and Preprocessing the Images with OpenCV
```
def read_image(path):
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    img = img[40:160, 0:320]  # Crop off the top of the image, it is just useless noise
    # img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # img = np.expand_dims(img, axis=2)
    return img[:, :, ::-1]  # BGR -> RGB

X_all = np.ndarray((n_samples, ROWS, COLS, CHANNELS), dtype=np.uint8)
for i, path in enumerate(image_paths):
    img = read_image(DIR + path)
    X_all[i] = img
print(X_all.shape)
print(X_all.shape)
for img in X_all[:3]:
    plt.imshow(img)
    plt.show()
```
## Building a Convnet in Keras
1. Split the data in train/test sets.
2. Build a keras model for regression.
```
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_all, test_size=0.20, random_state=23)

def fit_gen(data, batch_size):
    while 1:
        x = np.ndarray((batch_size, ROWS, COLS, CHANNELS), dtype=np.uint8)
        y = np.zeros(batch_size)
        i = 0
        for line in data.iterrows():
            path = line[1].center.split('/IMG/')[1]
            x[i] = read_image(DIR + path)
            y[i] = line[1].angle
            i += 1
            if i == batch_size:
                i = 0
                yield (x, y)
                x = np.ndarray((batch_size, ROWS, COLS, CHANNELS), dtype=np.uint8)
                y = np.zeros(batch_size)
def rmse(y_true, y_pred):
    return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def get_model():
    lr = 0.0001
    weight_init = 'glorot_normal'
    opt = RMSprop(lr)
    loss = 'mean_squared_error'

    model = Sequential()
    model.add(BatchNormalization(mode=2, axis=1, input_shape=(ROWS, COLS, CHANNELS)))
    model.add(Convolution2D(3, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Convolution2D(9, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Convolution2D(18, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Convolution2D(32, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(80, activation='relu', init=weight_init))
    # model.add(Dropout(0.5))
    model.add(Dense(15, activation='relu', init=weight_init))
    model.add(Dropout(0.25))
    model.add(Dense(1, init=weight_init, activation='linear'))

    model.compile(optimizer=opt, loss=loss)
    return model
model = get_model()
model.summary()
nb_epoch = 30
batch_size = 64
### Creating Validation Data
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_all, test_size=0.20, random_state=23)
# Callbacks
early_stopping = EarlyStopping(monitor='val_loss', patience=8, verbose=1, mode='auto')
save_weights = ModelCheckpoint('new_model.h5', monitor='val_loss', save_best_only=True)
model.fit_generator(fit_gen(data, 32),
                    samples_per_epoch=data.shape[0], nb_epoch=nb_epoch,
                    validation_data=(X_test, y_test), callbacks=[save_weights, early_stopping])
model.fit(X_all, y_all, batch_size=batch_size, nb_epoch=nb_epoch,
          validation_data=(X_test, y_test), verbose=1, shuffle=True, callbacks=[save_weights, early_stopping])
preds = model.predict(X_test, verbose=1)
print( "Test MSE: {}".format(mean_squared_error(y_test, preds)))
print( "Test RMSE: {}".format(np.sqrt(mean_squared_error(y_test, preds))))
js = model.to_json()
with open('model.json', 'w') as outfile:
    json.dump(js, outfile)
```
# WaveCalib Class [v1.1]
```
%matplotlib inline
# imports
import os
from importlib import reload
from astropy.table import Table
from pypit import wavecalib
# Path to PYPIT-Development-suite
pypdev_path = os.getenv('PYPIT_DEV')
```
# User options
## Explore 1D solution
#### Load MasterFrame
```
reload(wavecalib)
Wavecalib = wavecalib.WaveCalib(None, spectrograph='shane_kast_blue')
Wavecalib.load_wv_calib(pypdev_path+'/Cooked/WaveCalib/MasterWaveCalib_ShaneKastBlue_A.json')
Wavecalib.wv_calib.keys()
```
#### Show spectrum with detected lines
```
Wavecalib.show('spec', slit=0)
```
#### Show IDs and fit
```
Wavecalib.show('fit', slit=0)
```
## Redo
#### Check arcparam
The main item to fuss with is `min_ampl`.
```
Wavecalib.arcparam
Wavecalib.arcparam['min_ampl'] = 1000.
```
#### Rerun
```
new_wv_calib = Wavecalib.calibrate_spec(0)
```
----
# Development
```
from pypit import traceslits
from pypit import arcimage
```
## Load up required MasterFrames
```
settings = dict(masters={})
settings['masters']['directory'] = pypdev_path+'/Cooked/MF_shane_kast_blue'
settings['masters']['reuse'] = True
setup = 'A_01_aa'
```
### MasterArc
```
AImg = arcimage.ArcImage(setup=setup, settings=settings)
msarc, header, _ = AImg.load_master_frame()
```
### TraceSlits
```
TSlits = traceslits.TraceSlits.from_master_files(settings['masters']['directory']+'/MasterTrace_A_01_aa')
TSlits._make_pixel_arrays()
```
### Fitstbl
```
fitstbl = Table.read(settings['masters']['directory']+'/shane_kast_blue_setup_A.fits')
```
## Init Wavecalib
```
reload(wavecalib)
Wavecalib = wavecalib.WaveCalib(msarc, spectrograph='shane_kast_blue', settings=settings, setup=setup, fitstbl=fitstbl, sci_ID=1, det=1)
```
## Extract arcs -- Requires msarc and slit info
```
arccen, maskslits = Wavecalib._extract_arcs(TSlits.lcen, TSlits.rcen, TSlits.pixlocn)
arccen.shape
```
## Arcparam -- Requires fitstbl, sci_ID, det
```
arcparam = Wavecalib._load_arcparam()
arcparam
```
## Build wv_calib
```
wv_calib = Wavecalib._build_wv_calib('arclines', skip_QA=True)
wv_calib
```
## Save as master
```
Wavecalib.save_master(wv_calib,outfile='tmp.json') # This doesn't save steps nor arcparam which *is* done in the master() call
```
## One shot
```
reload(wavecalib)
settings['masters']['reuse'] = False
settings['masters']['force'] = False
Wavecalib = wavecalib.WaveCalib(msarc, spectrograph='shane_kast_blue', settings=settings, setup=setup, fitstbl=fitstbl, sci_ID=1, det=1)
wv_calib2, _ = Wavecalib.run(TSlits.lcen, TSlits.rcen, TSlits.pixlocn, skip_QA=True)
wv_calib2.keys()
```
```
import os
import pickle
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import matplotlib.colors as colors
plt.rcParams['figure.figsize'] = (5.0, 0.8)
import matplotlib.patches as mpatches
from util.color_util import *
import pickle
from random import shuffle
import torch.optim as optim
import colorsys
from model.RSA import *
from numpy import dot
from numpy.linalg import norm
from scipy import spatial
from colormath.color_objects import sRGBColor, LabColor
from colormath.color_conversions import convert_color
from colormath.color_diff import delta_e_cie2000
from skimage import io, color
import random
from tabulate import tabulate
RGB = True
EXTEND = True
NUM_EPOCHE = 1000
RETRAIN = False
FOURIER_TRANSFORM = False
MODEL_NAME = "literal_speaker"
SAMPLE_PER_COLOR = 3
COLOR_DIM = 54 if FOURIER_TRANSFORM else 3
# load triples
if EXTEND:
    triple_train = pickle.load(open("../munroe/triple_train.p", "rb"))
    triple_dev = pickle.load(open("../munroe/triple_dev.p", "rb"))
    triple_test = pickle.load(open("../munroe/triple_test.p", "rb"))
else:
    triple_train = pickle.load(open("../munroe/triple_train_reduce.p", "rb"))
    triple_dev = pickle.load(open("../munroe/triple_dev_reduce.p", "rb"))
    triple_test = pickle.load(open("../munroe/triple_test_reduce.p", "rb"))
# load colors
cdict_train_rgb = pickle.load( open( "../munroe/cdict_train.p", "rb" ) )
cdict_dev_rgb = pickle.load( open( "../munroe/cdict_dev.p", "rb" ) )
cdict_test_rgb = pickle.load( open( "../munroe/cdict_test.p", "rb" ) )
cdict_train = dict()
cdict_dev = dict()
cdict_test = dict()
if RGB:
    cdict_train = cdict_train_rgb
    cdict_dev = cdict_dev_rgb
    cdict_test = cdict_test_rgb
else:
    for c in cdict_train_rgb.keys():
        cdict_train[c] = torch.tensor(colors.rgb_to_hsv(cdict_train_rgb[c]))
    for c in cdict_dev_rgb.keys():
        cdict_dev[c] = torch.tensor(colors.rgb_to_hsv(cdict_dev_rgb[c]))
    for c in cdict_test_rgb.keys():
        cdict_test[c] = torch.tensor(colors.rgb_to_hsv(cdict_test_rgb[c]))
# load embeddings for this dataset only
embeddings = pickle.load( open( "../munroe/glove_color.p", "rb" ) )
# generate test sets
test_set = generate_test_set(triple_train, triple_test)
mse = nn.MSELoss(reduction = 'none')
cos = nn.CosineSimilarity(dim=1)
colorLoss = lambda source, target, wg: ((1-cos(wg, target-source)) + mse(target, source+wg).sum(dim=-1)).sum()
net = LiteralSpeaker(color_dim=COLOR_DIM)
if RETRAIN:
    '''
    Skip this if you don't have to retrain!
    Main training loop
    '''
    optimizer = optim.Adam(net.parameters(), lr=0.001)
    debug = False
    sample_per_color = SAMPLE_PER_COLOR
    for i in range(NUM_EPOCHE):
        net.train()
        loss = 0.0
        batch_num = 0
        batch_index = 0
        for batch_emb1, batch_emb2, batch_base_color, batch_base_color_raw, batch_target_color in \
                generate_batch(cdict_train, triple_train, embeddings,
                               sample_per_color=sample_per_color,
                               fourier=FOURIER_TRANSFORM):
            pred = net(batch_emb1, batch_emb2, batch_base_color)
            wg = pred - batch_base_color_raw  # calculate the wg for the loss to use
            batch_loss = colorLoss(batch_base_color_raw, batch_target_color, wg)
            loss += batch_loss
            batch_num += batch_emb1.shape[0]  # sum up the total sample size
            batch_loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            if debug:
                print(f"Batch: {batch_index+1}, train loss: {batch_loss.detach().numpy()}")
            batch_index += 1
        if i % 100 == 0:
            print(f"Epoch: {i+1}, train loss: {loss.detach().numpy()}")
    # save the literal speaker to disk
    checkpoint = {"model": net.state_dict(), "name": MODEL_NAME}
    torch.save(checkpoint, "./save_model/" + MODEL_NAME + ".pth")
else:
    checkpoint = torch.load("./save_model/" + MODEL_NAME + ".pth")
    net.load_state_dict(checkpoint['model'])
net_predict = predict_color(net, test_set, cdict_test, embeddings, sample_per_color=1, fourier=FOURIER_TRANSFORM)
evaluation_metrics = evaluate_color(net_predict, fmt="rgb")
```
## DenseNet Implementation and Training
```
import tensorflow as tf
import numpy as np
```
## Hyperparameters
```
EPOCHS = 10
```
## Implementing the Dense Unit
```
class DenseUnit(tf.keras.Model):
    def __init__(self, filter_out, kernel_size):
        super(DenseUnit, self).__init__()
        # TODO

    def call(self, x, training=False, mask=None):
        # TODO
        pass
```
## Implementing the Dense Layer
```
class DenseLayer(tf.keras.Model):
    def __init__(self, num_unit, growth_rate, kernel_size):
        super(DenseLayer, self).__init__()
        # TODO

    def call(self, x, training=False, mask=None):
        # TODO
        pass
```
## Implementing the Transition Layer
```
class TransitionLayer(tf.keras.Model):
    def __init__(self, filters, kernel_size):
        super(TransitionLayer, self).__init__()
        # TODO

    def call(self, x, training=False, mask=None):
        # TODO
        pass
```
## Defining the Model
```
class DenseNet(tf.keras.Model):
    def __init__(self):
        super(DenseNet, self).__init__()
        # TODO

    def call(self, x, training=False, mask=None):
        # TODO
        pass
```
## Defining the Training and Test Loops
```
# Implement training loop
@tf.function
def train_step(model, images, labels, loss_object, optimizer, train_loss, train_accuracy):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

    train_loss(loss)
    train_accuracy(labels, predictions)

# Implement algorithm test
@tf.function
def test_step(model, images, labels, loss_object, test_loss, test_accuracy):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)

    test_loss(t_loss)
    test_accuracy(labels, predictions)
```
## Preparing the Dataset
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train[..., tf.newaxis].astype(np.float32)
x_test = x_test[..., tf.newaxis].astype(np.float32)
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
```
## Defining the Training Environment
### Model creation, loss function, optimizer, and evaluation metrics
```
# Create model
model = DenseNet()
# Define loss and optimizer
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
# Define performance metrics
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
```
## Running the Training Loop
```
for epoch in range(EPOCHS):
    for images, labels in train_ds:
        train_step(model, images, labels, loss_object, optimizer, train_loss, train_accuracy)

    for test_images, test_labels in test_ds:
        test_step(model, test_images, test_labels, loss_object, test_loss, test_accuracy)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()
```
This notebook was authored by [Sayantan Dasgupta](https://github.com/Arka2001), [Nirmalya Misra](https://github.com/nirmalya8) and [Deepetendu Santra](https://github.com/Dsantra92).
# Introduction to Python programming
## Data Types in python
This notebook is divided into three parts. This first section covers basic Python data types and some very basic commands.
### Integers
```
# What is an integer?
x = 10
print(type(x))
```

### Floating point numbers
```
# What is a floating point number? Basically, a number with a decimal point in it, like 5.20 or 6.324.
x = 6.324
print(type(x))
```
Note: There is no specific byte-width information in Python's data types, such as `int32`, `int64`, or `float64`. Python does not let you access memory the way C does: it stores integer data for you, and you need not (and cannot) worry about the actual number of bytes taken up by the integer/float implementation in memory.
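One practical consequence is that Python integers have arbitrary precision: they grow as needed instead of overflowing at a fixed width.

```python
# A value far beyond the 64-bit range is still an ordinary int.
big = 2 ** 100
print(big)        # 1267650600228229401496703205376
print(type(big))  # <class 'int'>
```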
### Basic Mathematical Operations using integers and float
```
# There are mainly 6 types of mathematical operations in Python
# '+' - addition operator
print(5+6)
# '-' - subtraction operator
print(5-6)
# '*' - Multiplication operator
print(5*6)
# '/' - Division operator
print(5/6)
# '//' - Division operator, but with a little twist ;-)
print(5//6)
# '**' - power operator
print(5**3) # 5^3
```
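The "twist" with `//` is that it performs floor division: the result is rounded down to the nearest whole number (toward negative infinity, not toward zero).

```python
print(5 / 6)    # 0.8333333333333334 (true division always gives a float)
print(5 // 6)   # 0 (floor division rounds down)
print(-5 // 6)  # -1 (rounds toward negative infinity, not toward zero)
```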
### String
```
# String is also an important data type in python, or any programming language whatsoever
# It is denoted by quotation marks, like this
s = "Hello World, 123" # or s = 'Hello World, 123'
# We can access any one of the elements of the string using their respective indices
# In Python, indexing starts from 0
print("First Element: ",s[0])
print("Third element: ",s[2])
```
Note: Unlike C, there is no `\0` character at the end of a string in Python.
```
# Negative indexing
# There is also a special type of indexing supported in Python
# That is negative indexing. This type of indexing starts from the last element of the string with an index of -1
s[-2] #this is the second last element
# String slicing
# Print the till the first 4 elements in s
print(s[:4])
# Print starting from the 5th element in s
print(s[4:])
# String to List and Vice versa
s = "Hello my name is Sayantan"
s_list = s.split(' ')
# s_list = s.split('e')
print(s_list)
#s1 = ' '.join(s_list)
#print(s1)
# String Formatting
age, age2 = 200, 210
print("My age is {} and his age is {}".format(age,age2))
```
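Since Python 3.6 there is also a more concise alternative to `.format()`: f-strings, which embed expressions directly inside the string literal.

```python
age, age2 = 200, 210
# The expressions inside {} are evaluated at runtime.
print(f"My age is {age} and his age is {age2}")
# Arbitrary expressions work too.
print(f"Our combined age is {age + age2}")
```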
### A quick demonstration of Dynamic Typing
```
# Python is a dynamically typed language, which means we do not have to explicitly declare the data type of a variable.
# It is automatically inferred by the interpreter during runtime
x = 5
print(type(x))
# Python is cool with changing data-types
x = "Five"
print(type(x))
```
### Functions
```
# In Python, a function is defined using the 'def' keyword
# You are encouraged to use these kind of type hinting in functions
# The reason will be clear in the next sections
# The function runs without the type hints too
def findMax(a: int, b: int, c: int):
    if(a >= b and a >= c):
        return a
    if(b >= a and b >= c):
        return b
    if(c >= a and c >= b):
        return c

# There is a better way to use multiple if statements,
# which Nirmalya will discuss in the next section
findMax(45, 12, 98)
```
## Control and Looping statements in Python
### Conditionals
```
a = 10
b = 0
a > b

# Check if the condition evaluates to True and perform certain tasks
if a > b:
    print(a)
# Execute if the condition evaluates to False
else:
    print(b)
```
Make sure you maintain proper indentation throughout the code. It is common practice to use a tab or two spaces for indentation. Make sure you don't end up mixing both in the same document.
```
a = 0
b = 0
if a > b:
    print(a)
# Divert the control flow to check another condition
elif a == b:
    print("{} = {}".format(a, b))
else:
    print(b)
```
### Loops
What if you need to run a block of code again an again?
For example, you might need to print your name 100 times. You can use loops for such repetitive tasks.
Initialisation -> Condition -> Block of code -> Update
#### For loop
```
# for <variable> in range([start], stop, [step])
# A [] around a variable in a function docstring indicates that the variable is optional
for i in range(0, 10, 2):
    print(i)
```
#### While loop
```
i = 0
while i < 10:
    print(i)
    i = i + 2  # Make sure you update the iterating variable :)

# Here is a block of code that does not update the looping variable in the while loop
# See for yourself what happens
# i = 0
# while i < 10:
#     print(i)
### Lists
Suppose you need to store a group of objects together: say, a list of the groceries that your mum has asked you to bring from the store.
Let us make a list of vegetables that you need to bring from the market.
```
vegetables = ["Carrot🥕", "Eggplant 🍆", "Potato 🥔", "Corn 🌽"]
```
You are now too tired to count, but you need to find out the number of items in the list. Thankfully, Python has you covered: use the built-in `len` function.
```
len(vegetables)
```
Suddenly you remember you forgot to add some juicy mangoes to the list. Can we add them to the list? Absolutely, yes!!!
Inserting elements:
- append(element)
- insert(index, element)
```
vegetables
# Add a new element to the end of the list
vegetables.append("Mango 🥭")
# Thank god!! We added mango to the list
vegetables
```
We want to do something more delicate. You find out that one kg of potatoes is not enough. You need to add one more, and you don't want to add it at the end: you want to add another potato element right after the potato element that is already in the list.
Before we insert the new element at our desired position, we need to find its index in the list.
```
# We want to insert at the index of old potato
# the index of old potato is 2, you can verify
vegetables[2]
vegetables.insert(2, "Potato 🥔")
vegetables
```
Now that we have bought the 2 potatoes, we want to remove them from the list.
### Remove elements
- remove(element)
- pop(index)
```
# Does it delete the element of the list
# Or returns a new list with potato removed?
vegetables.remove("Potato 🥔")
vegetables
```
Why was the 2nd potato not removed??
```
# We don't want to take any risks anymore
# We want more fine-grained control
# We will delete by index 😤
vegetables.pop(5)
```
### Indexing and Slicing
```
# Now we want to get the first 3 elements of the list
vegetables[:3]
# Everything except the first 2 elements of the list
vegetables[2:]
# negative indexing works on list too!!
# Access the last element of the list
vegetables[-1]
```
### Mutability and Immutability
```
# Tuples
# Tuples are like lists but uneditable
# A list is like writing with a pencil on paper; a tuple is like writing with a pen
# You can edit list after you have created it, with tuple you cannot
a = (1, 2, "Three")
# Let's see what is the second element of a
a[1]
# Let us now try to change it
a[1] = 3  # This raises a TypeError, because tuples are immutable
```
### The `in` keyword
Suppose you want to print all the elements of the list, you can use the `in` keyword. Here are two approaches on how to do it.
```
vegetables
# Approach 1
# Messy and more C/C++-style
for i in range(len(vegetables)):
    print(vegetables[i])

# Approach 2
# Cleaner and more pythonic
for i in vegetables:
    print(i)
```
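There is also a third approach worth knowing: when you need both the index and the element, `enumerate` gives you both without a manual counter.

```python
vegetables = ["Carrot", "Eggplant", "Potato", "Corn"]
# enumerate yields (index, element) pairs.
for i, veg in enumerate(vegetables):
    print(i, veg)
```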
## Law of Large Numbers using Python
**Python Packages**
- You are not expected to code everything from scratch. Every language provides packages/libraries that implement common functionality and save you from reinventing the wheel. One of the things that makes Python so popular is the number of packages it has to offer.
- Python comes with some default packages; these are called standard packages. Some of the standard packages are `math`, `random`, `zipfile`, etc.
**How to use a package in Python?**
To use the functions and other things that a package (more correctly, a module) has to offer, we need to `import` the module.
```
# Import the random module
import random
```
Let us have a look at the names (functions and classes) that come with the `random` module.
```
dir(random)
```
The list is too long. Can you just print the last 5 elements?
**Ask for Help?**
There are a lot of Python packages providing lots of functionality. You are not expected to know every function by heart. If you can, please don't.
<br>
Website to refer to:
- [Python Official Documentation](https://docs.python.org/3/)
- [StackOverFlow](https://stackoverflow.com/questions/tagged/python-3.x)
Please make sure you trust the website before learning from it.
**Uniform Distribution**
A uniform distribution, sometimes also known as a rectangular distribution, is a distribution that has constant probability. [Wolfram Alpha]
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/96/Uniform_Distribution_PDF_SVG.svg/1200px-Uniform_Distribution_PDF_SVG.svg.png" width=500>
```
random.uniform(4, 6)
a = random.uniform(0, 1)
if a > 0.5:
    print("Head")
else:
    print("Tail")

def unbiased_coin_toss():
    # Generate a random number from 0 to 1
    x = random.uniform(0, 1)
    # The probability that the number falls between 0.5 and 1 is 1/2
    if x > 0.5:
        return True
    else:
        return False

unbiased_coin_toss()
```
Implement the coin toss of a biased coin given the probability of heads.
```
def biased_coin_toss(h):
    # Generate a random number from 0 to 1
    x = random.uniform(0, 1)
    if x > (1 - h):
        return True
    else:
        return False
biased_coin_toss(0.9)
```
Now we have a function that emulates tossing a coin. Next, let us verify that the observed probability of heads matches the probability we intended.
```
# Before we go on to run a simulation, let us try something.
# What is the result of the commented-out line below?
sum([1, 2, 3, 4])
# sum([True, False, True, True, False, True])
N = 10
results = []

# Toss the coin 10 times and store the results in a list
for i in range(N):
    result = unbiased_coin_toss()
    results.append(result)

n_heads = sum(results)
p_heads = n_heads / N
print("Probability is {:.1f}".format(p_heads))
# Write a function for the code above
```
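One possible way to wrap the simulation above in a function (a sketch, not the only solution; the function name here is our own):

```python
import random

def estimate_p_heads(n_tosses, toss_fn):
    """Toss the given coin n_tosses times and return the fraction of heads."""
    results = [toss_fn() for _ in range(n_tosses)]
    return sum(results) / n_tosses

def unbiased_coin_toss():
    return random.uniform(0, 1) > 0.5

print(estimate_p_heads(10, unbiased_coin_toss))
```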
Are the results comparable to what you expected?
### The Law of large numbers
**What the law of large numbers says:**<br>
If you repeat an experiment independently a large number of times and average the result, what you obtain should be close to the expected value.
- For a small number of trials, the observed frequency of an event may not reflect the true probability of the event.
- The smaller the sample, the higher the bias.
- What is Monte Carlo?
```
# Simulate 500 tosses of the biased coin
N = 500
results = []

for i in range(N):
    result = biased_coin_toss(0.3)
    results.append(result)

n_heads = sum(results)
p_heads = n_heads / N
print("Probability is {:.3f}".format(p_heads))
```
- Do you notice any improvement in the results?
- Repeat the same experiment for the biased coin and check whether the results match.
- For which value of N is the result correct to 3 decimal places?

In this graph the blue curve indicates the estimated probability of heads after each toss. The graph is taken from a [Turing.jl tutorial](https://turing.ml/dev/tutorials/0-introduction/).
Can you plot the same using Python?
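As a starting point, here is one way to compute the data behind such a curve (a sketch; the helper name is our own, and passing `running` to `matplotlib.pyplot.plot` would draw the graph):

```python
import random

def running_head_proportion(n_tosses):
    """Return the proportion of heads observed after each of n_tosses fair tosses."""
    heads = 0
    running = []
    for i in range(1, n_tosses + 1):
        heads += random.uniform(0, 1) > 0.5
        running.append(heads / i)
    return running

running = running_head_proportion(1000)
# Early estimates swing wildly; later ones settle near the true probability of 0.5.
print(running[0], running[-1])
```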
# Word2Vec
**Learning Objectives**
1. Compile all steps into one function
2. Prepare training data for Word2Vec
3. Model and Training
4. Embedding lookup and analysis
## Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and
[Distributed
Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
Each learning objective will correspond to a _#TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/word2vec.ipynb)
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered `context word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of *n* implies *n* words on each side, with a total window span of 2*n*+1 words centered on a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'<sup>* are target and context vector representations of words and *W* is vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often large (10<sup>5</sup>–10<sup>7</sup> terms).
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a `(target_word, context_word)` pair such that the `context_word` does not appear in the `window_size` neighborhood of the `target_word`. For the example sentence, here are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
# You can use any Python source file as a module by executing an import statement in some other Python source file.
# The import statement combines two operations; it searches for the named module, then it binds the
# results of that search to a name in the local scope.
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
    if token not in vocab:
        vocab[token] = index
        index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams))
```
Take a look at few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
    print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
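The sampler's name comes from the distribution it assumes. Per the TensorFlow documentation, `tf.random.log_uniform_candidate_sampler` draws class `c` with probability `(log(c + 2) - log(c + 1)) / log(range_max + 1)`, which can be sketched in plain Python (the variable names here are ours):

```python
import math

# Sketch of the log-uniform sampling distribution assumed by
# tf.random.log_uniform_candidate_sampler: low-numbered ids (i.e. the most
# frequent words, assuming a frequency-sorted vocabulary) are sampled more often.
range_max = 8  # vocab_size in the example sentence above

def log_uniform_prob(class_id, range_max):
    return (math.log(class_id + 2) - math.log(class_id + 1)) / math.log(range_max + 1)

probs = [log_uniform_prob(c, range_max) for c in range(range_max)]
print([round(p, 3) for p in probs])
# The probabilities telescope, so they sum to 1 over [0, range_max).
```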
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size)
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`.
```
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
```
### Summary
This picture summarizes the procedure of generating a training example from a sentence.

## Lab Task 1: Compile all steps into one function
### Skip-gram Sampling table
A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use the `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` function already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
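As an illustration of the idea behind such a table (this sketches Mikolov et al.'s subsampling heuristic, not the exact formula Keras uses in `make_sampling_table`), a word whose corpus frequency fraction is `f` can be kept with probability `sqrt(t / f)`, clipped to 1:

```python
import math

# Illustrative subsampling heuristic: very frequent words like "the" are
# aggressively downsampled, while rare words are always kept.
t = 1e-3  # subsampling threshold (a tunable assumption)

def keep_probability(word_fraction, t=t):
    return min(1.0, math.sqrt(t / word_fraction))

# Hypothetical frequency fractions for three words:
for word, fraction in [("the", 0.05), ("road", 0.001), ("shimmered", 0.0001)]:
    print(f"{word:10s} keep_prob = {keep_probability(fraction):.3f}")
```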
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
    # Elements of each training example are appended to these lists.
    targets, contexts, labels = [], [], []

    # Build the sampling table for vocab_size tokens.
    # TODO 1a -- your code goes here

    # Iterate over all sequences (sentences) in dataset.
    for sequence in tqdm.tqdm(sequences):

        # Generate positive skip-gram pairs for a sequence (sentence).
        positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
            sequence,
            vocabulary_size=vocab_size,
            sampling_table=sampling_table,
            window_size=window_size,
            negative_samples=0)

        # Iterate over each positive skip-gram pair to produce training examples
        # with positive context word and negative samples.
        # TODO 1b -- your code goes here

            # Build context and label vectors (for one target word)
            negative_sampling_candidates = tf.expand_dims(
                negative_sampling_candidates, 1)

            context = tf.concat([context_class, negative_sampling_candidates], 0)
            label = tf.constant([1] + [0]*num_ns, dtype="int64")

            # Append each element from the training example to global lists.
            targets.append(target_word)
            contexts.append(context)
            labels.append(label)

    return targets, contexts, labels
```
## Lab Task 2: Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
    lines = f.read().splitlines()
for line in lines[:20]:
    print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
# TODO 2a -- your code goes here
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer.
```
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
    lowercase = tf.strings.lower(input_data)
    return tf.strings.regex_replace(lowercase,
                                    '[%s]' % re.escape(string.punctuation), '')

# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10

# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length to pad all samples to the same length.
vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
The `vectorize_layer` can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at few examples from `sequences`.
```
for seq in sequences[:5]:
    print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int-encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of `targets`, `contexts`, and `labels` should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
    sequences=sequences,
    window_size=2,
    num_ns=4,
    vocab_size=vocab_size,
    seed=SEED)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Lab Task 3: Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.
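The scoring step described above can be sketched in NumPy before building the Keras model (an illustration only, with made-up embeddings; this is not the notebook's implementation):

```python
import numpy as np

# The logit for each candidate context word is the dot product of its
# context embedding with the target embedding; training pushes the positive
# pair's logit up and the negatives' logits down.
rng = np.random.default_rng(0)
embedding_dim, num_ns = 4, 4

target_emb = rng.normal(size=(embedding_dim,))               # one target word
context_embs = rng.normal(size=(num_ns + 1, embedding_dim))  # 1 positive + num_ns negatives

logits = context_embs @ target_emb     # shape (num_ns + 1,)
labels = np.array([1] + [0] * num_ns)  # first candidate is the true context

# Per-candidate sigmoid cross-entropy, averaged: the binary-classification
# view of the negative sampling objective.
loss = np.mean(np.log1p(np.exp(-logits)) + (1 - labels) * logits)
print(logits.shape, round(float(loss), 4))
```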
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs, which can then be passed into their corresponding embedding layers. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
```
class Word2Vec(Model):
    def __init__(self, vocab_size, embedding_dim):
        super(Word2Vec, self).__init__()
        self.target_embedding = Embedding(vocab_size,
                                          embedding_dim,
                                          input_length=1,
                                          name="w2v_embedding")
        self.context_embedding = Embedding(vocab_size,
                                           embedding_dim,
                                           input_length=num_ns+1)
        self.dots = Dot(axes=(3, 2))
        self.flatten = Flatten()

    def call(self, pair):
        target, context = pair
        we = self.target_embedding(target)
        ce = self.context_embedding(context)
        dots = self.dots([ce, we])
        return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
``` python
def custom_loss(x_logit, y_true):
    return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
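For intuition, the numerically stable form documented for `tf.nn.sigmoid_cross_entropy_with_logits` can be restated in NumPy (a sketch; the function name here is ours):

```python
import numpy as np

# Stable sigmoid cross-entropy, per the documented TF formulation:
#   loss = max(x, 0) - x*z + log(1 + exp(-|x|))
# which avoids overflow for large-magnitude logits.
def sigmoid_xent(x_logit, y_true):
    return np.maximum(x_logit, 0) - x_logit * y_true + np.log1p(np.exp(-np.abs(x_logit)))

logits = np.array([2.0, -1.0, 0.5])
labels = np.array([1.0, 0.0, 0.0])
print(np.round(sigmoid_xent(logits, labels), 4))
# → [0.1269 0.3133 0.9741]
```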
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
# TODO 3a -- your code goes here
```
Also define a callback to log training statistics for tensorboard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
Tensorboard now shows the Word2Vec model's accuracy and loss.
```
!tensorboard --bind_all --port=8081 --logdir logs
```
Run the following command in **Cloud Shell:**
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace `<instance-zone>`, `<notebook-instance-name>` and `<project-id>`.
In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.

**To quit the TensorBoard, click Kernel > Interrupt kernel**.
## Lab Task 4: Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
# TODO 4a -- your code goes here
```
Create and save the vectors and metadata file.
```
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
    if index == 0:
        continue  # skip 0, it's padding.
    vec = weights[index]
    out_v.write('\t'.join([str(x) for x in vec]) + "\n")
    out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).
```
try:
    from google.colab import files
    files.download('vectors.tsv')
    files.download('metadata.tsv')
except Exception:
    pass
```
## Next steps
This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.
* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).
* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.
* If you’re interested in pre-trained embedding models, you may also be interested in [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras), or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder)
* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
- `cv2.inRange()` for color selection
- `cv2.fillPoly()` for region selection
- `cv2.line()` to draw lines on an image given endpoints
- `cv2.addWeighted()` to coadd / overlay two images
- `cv2.cvtColor()` to grayscale or change color
- `cv2.imwrite()` to output images to file
- `cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """
    Applies an image mask.

    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    `vertices` should be a numpy array of integer points.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)

    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255

    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)

    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=6):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).

    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.

    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 - x1 == 0:
                continue
            k = (y2 - y1) / (x2 - x1)
            if abs(k) < k_threshold_low:
                continue
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def draw_lines_extrapolate(img, lines, color=[255, 0, 0], thickness=6):
    """
    Draw the right/left lane.
    """
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if (x2 - x1) == 0:  # skip vertical segments to avoid division by zero
                pass
            else:  # exclude lines where abs(k) < k_threshold_low
                k = (y2 - y1) / (x2 - x1)
                if k < -k_threshold_low and k > -k_threshold_high:  # in image coordinates, k < 0 is the left lane
                    left.append(line)
                elif k > k_threshold_low and k < k_threshold_high:
                    right.append(line)
    left, right = lines_check(left), lines_check(right)
    if not (left and right):  # handle the case where no lines were caught
        return img
    # print("left lines after check: ", left)
    left_lane, right_lane = line_fit(left, img), line_fit(right, img)
    # print("left lane is: ", left_lane)
    # print("right lane is : ", right_lane)
    cv2.line(img, left_lane[0], left_lane[1], color, thickness)
    cv2.line(img, right_lane[0], right_lane[1], color, thickness)
def lines_check(lines):
    # Iteratively drop the segment whose slope deviates most from the mean,
    # until all slopes are within k_diff_threshold of the mean.
    # print("running lines_check...")
    # print(lines)
    while lines:
        ks = [(line[0, 1] - line[0, 3]) / (line[0, 0] - line[0, 2]) for line in lines]
        ks_error = [abs(k - np.mean(ks)) for k in ks]
        # print(ks_error)
        if max(ks_error) < k_diff_threshold:
            break
        idx = ks_error.index(max(ks_error))
        ks_error.pop(idx)
        lines.pop(idx)
        # print(lines)
    return lines
def line_fit(lines, img):
    """
    input: lines, img
    output: top and bottom points of the lane
    """
    # print("running line_fit...")
    x, y = [], []
    # print("input lines: ", lines)
    for line in lines:
        x.append(line[0, 0])
        x.append(line[0, 2])
        y.append(line[0, 1])
        y.append(line[0, 3])
    line_f = np.poly1d(np.polyfit(y, x, 1))  # fit x as a function of y, since we evaluate x at given y values
    x_top = int(line_f(0))
    x_bot = int(line_f(img.shape[0]))
    return ((x_top, 0), (x_bot, img.shape[0]))
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.

    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    # draw_lines(line_img, lines)
    draw_lines_extrapolate(line_img, lines)
    return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of hough_lines(): an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.

    `initial_img` should be the image before any processing.

    The result image is computed as follows:
    initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
def HSV_mask(img, threshold):
    HSV = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    mask = cv2.inRange(HSV, threshold[0], threshold[1])
    return mask
```
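The slope-based left/right split performed by `draw_lines_extrapolate` above can be illustrated with a standalone sketch (the segment coordinates are made up for this example; the thresholds match those set later in the notebook):

```python
import numpy as np

# In image coordinates y grows downward, so left-lane segments have negative
# slope and right-lane segments have positive slope; near-horizontal segments
# (|k| < k_threshold_low) are rejected as noise.
k_threshold_low, k_threshold_high = 0.5, 100

segments = np.array([
    [[100, 540, 300, 400]],   # left lane:  k = (400-540)/(300-100) = -0.7
    [[600, 400, 850, 540]],   # right lane: k = (540-400)/(850-600) =  0.56
    [[200, 500, 500, 495]],   # near-horizontal: |k| < 0.5, rejected
])

left, right = [], []
for seg in segments:
    x1, y1, x2, y2 = seg[0]
    if x2 == x1:
        continue  # skip vertical segments (undefined slope)
    k = (y2 - y1) / (x2 - x1)
    if -k_threshold_high < k < -k_threshold_low:
        left.append(seg)
    elif k_threshold_low < k < k_threshold_high:
        right.append(seg)

print(len(left), len(right))  # → 1 1
```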
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
```
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# set the parameters
kernel_size = 5 # parameter for gaussian_blur
low_threshold = 50 # parameters for Canny edge finding
high_threshold = 150
# parameter for image masking
imshape = image.shape
vertices = np.array([[(80, imshape[0]),(410,330),(550,330),(900, imshape[0])]],dtype=np.int32)
# parameter for hough function
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in a Hough grid cell)
min_line_len = 40 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
k_threshold_low = 0.5 # used to exclude near-horizontal lines
k_threshold_high = 100 # set but not tuned
k_diff_threshold = 0.1 # used in the line check to filter out lines whose slope differs from the others
# parameter for color mask HSV value
white_threshold = ((60,0,130),(180,50,255))
yellow_threshold = ((10,60,0),(30,255,255))
def img_lane_detect(img):
"""
This function wraps the lane detection pipeline; all parameters should be set/adjusted outside this function.
"""
# white_mask = HSV_mask(img, white_threshold)
# yellow_mask = HSV_mask(img, yellow_threshold)
# color_mask = white_mask | yellow_mask
img_gray = grayscale(img)
blur_gray = gaussian_blur(img_gray, kernel_size) # blur the grayscale image before Canny
edges = canny(blur_gray, low_threshold, high_threshold)
masked_edges = region_of_interest(edges, vertices)
# masked_edges[color_mask==0] = 0 # apply color mask
lined_edges = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
masked_lined_edges = region_of_interest(lined_edges, vertices)
return weighted_img(masked_lined_edges, img, α=0.8, β=1., γ=0.)
for file in os.listdir("test_images/"):
if file == "challenge": # skip the challenge folder
continue
img = mpimg.imread("test_images/"+file)
lined_edges = img_lane_detect(img)
outputfile_name = "test_images_output/"+file
print(outputfile_name)
# switch RGB to BGR for cv2.imwrite
r,g,b = lined_edges[:,:,0], lined_edges[:,:,1], lined_edges[:,:,2]
lined_edges = np.dstack((b,g,r))
cv2.imwrite(outputfile_name, lined_edges) # matplotlib.image does not support saving jpg files
```
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = img_lane_detect(image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
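The averaging step can be sketched roughly like this (an illustrative helper, not the project's actual `draw_lines`; it assumes segments arrive as `(x1, y1, x2, y2)` tuples and that near-horizontal segments were already filtered out, e.g. with `k_threshold_low`):

```python
def average_lane_line(segments, y_bottom, y_top):
    """Fit one line through all segment endpoints and return its
    endpoints at the image rows y_bottom and y_top."""
    xs, ys = [], []
    for x1, y1, x2, y2 in segments:
        xs += [x1, x2]
        ys += [y1, y2]
    # least-squares fit of x = m*y + c (x as a function of y, matching
    # the extrapolation code earlier in the notebook)
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = sum((y - mean_y) * (x - mean_x) for x, y in zip(xs, ys)) \
        / sum((y - mean_y) ** 2 for y in ys)
    c = mean_x - m * mean_y
    return (int(m * y_bottom + c), y_bottom), (int(m * y_top + c), y_top)

# two collinear segments on the line x = y
print(average_lane_line([(0, 0, 10, 10), (20, 20, 30, 30)], 40, 0))  # → ((40, 40), (0, 0))
```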
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
# get images for test
# https://stackoverflow.com/questions/43148590/extract-images-using-opencv-and-python-or-moviepy
clip3 = VideoFileClip('test_videos/challenge.mp4')
# times = [1,3,5,7,9]
# times = [3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2] # get the frame which has draw line problem
# times = [4.25, 4.30, 4.35, 4.40, 4.45, 4.50, 4.55, 4.60]
times = [5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0]
for t in times:
imgpath = os.path.join("test_images/challenge", '{}.jpg'.format(t))
clip3.save_frame(imgpath, t)
# set the parameters
kernel_size = 5 # parameter for gaussian_blur
low_threshold = 150 # parameters for Canny edge finding
high_threshold = 200
# parameters for hough function
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in a Hough grid cell)
min_line_len = 15 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
k_threshold_low = 0.5 # used to exclude near-horizontal lines
k_threshold_high = 100 # set but not tuned
k_diff_threshold = 0.1 # used in the line check to filter out lines whose slope differs from the others
vertices = np.array([[(160, 720),(615,430),(700,430),(1210, 720)]],dtype=np.int32)
# parameter for color mask HSV value
# white_threshold = ((0,0,100),(60,40,255))
# white_threshold = ((10,0,100),(60,30,255))
white_threshold = ((10,0,100),(180,30,255))
yellow_threshold = ((10,60,0),(30,255,255))
def img_lane_detect(img):
"""
This function wraps the lane detection pipeline; all parameters should be set/adjusted outside this function.
"""
img_gray = grayscale(img)
blur_gray = gaussian_blur(img_gray, kernel_size) # blur the grayscale image before Canny
edges = canny(blur_gray, low_threshold, high_threshold)
masked_edges = region_of_interest(edges, vertices)
# white_mask = HSV_mask(img, white_threshold)
# yellow_mask = HSV_mask(img, yellow_threshold)
# color_mask = white_mask | yellow_mask
# masked_edges[color_mask==0] = 0 # apply color mask
lined_edges = hough_lines(masked_edges, rho, theta, threshold, min_line_len, max_line_gap)
masked_lined_edges = region_of_interest(lined_edges, vertices)
return weighted_img(masked_lined_edges, img, α=0.8, β=1., γ=0.)
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = img_lane_detect(image)
return result
for file in os.listdir("test_images/challenge"):
imgpath = os.path.join("test_images/challenge",file)
img = mpimg.imread(imgpath)
# draw ROI
# cv2.line(img, (160, 720),(615,430), (255,0,0),2)
# cv2.line(img, (615,430),(700,430), (255,0,0),2)
# cv2.line(img, (700,430),(1120, 650), (255,0,0),2)
lined_edges = img_lane_detect(img)
output_imgpath = os.path.join("test_images_output/challenge",file)
print(output_imgpath)
# switch RGB to BGR for cv2.imwrite
r,g,b = lined_edges[:,:,0], lined_edges[:,:,1], lined_edges[:,:,2]
lined_edges = np.dstack((b,g,r))
cv2.imwrite(output_imgpath, lined_edges) # matplotlib.image does not support saving jpg files
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
# clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
```
import nibabel as nib
import glob
import sys
sys.path.append('gradient_data/src/')
import os
import scipy.stats
import numpy as np
import statsmodels.sandbox.stats.multicomp
### Comparison of cerebello-cerebral connectivity between cerebellar Gradient 1 and 2 peaks at each area of
### motor and nonmotor representation was calculated as follows:
### 1) Using workbench view, save dscalar map of connectivity from peak of each Gradient (e.g. "L2midcog_fconn1003subjects.dscalar.nii")
### 2) Using workbench view, note the correlation values between seeds:
### Fisher_z
### R1mot R2mot 0.231264
### L1mot L2mot 0.23966
### R12highcog R3highcog 0.313801
### L12highcog L3highcog 0.370525
### R1midcog R2midcog 0.450554
### L1midcog L2midcog 0.330115
### R1midcog R3midcog 0.330242
### L1midcog L3midcog 0.222103
### R1midcog R3midcog_alt 0.245531
### L1midcog L3midcog_alt 0.336851
### R2midcog R3midcog 0.276653
### L2midcog L3midcog 0.154236
### R2midcog R3midcog_alt 0.188767
### L2midcog L3midcog_alt 0.218249
### "Alt" refers to peak of Gradient 2 in lobule IX if peak was in lobule X, and vice versa. These values were
### not included in the final analysis
### 3) Contrast r values between seeds using the equations from Meng et al., 1992:
### wb_command -cifti-math '((var_zr1)-(var_zr2))*((sqrt(1003-3))/(2*(1-0.301)*((1-(((1-0.301)/(2*(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2)))+((1-0.301)/(2*(1-((( tanh(var_zr1)^2)+( tanh(var_zr2)^2))/2)))>1)*(1-(1-0.301)/(2*(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2))))))*((( tanh(var_zr1)^2)+( tanh(var_zr2)^2))/2))/(1-(((tanh(var_zr1)^2)+(tanh(var_zr2)^2))/2)))))' comparison_L2midcog_vs_L1midcog_1003subjects.dscalar.nii -var var_zr1 L2midcog_fconn1003subjects.dscalar.nii -var var_zr2 L1midcog_fconn1003subjects.dscalar.nii
### Substitute 0.301 with each particular correlation value between seeds. Note that z values have to be converted to r values (e.g. L1midcog L2midcog z=0.330115 corresponds to r=0.301)
### 4) Correct comparison maps with FDR as follows:
directory = 'xxx' ### write directory with comparison maps here
directory2 = 'xxx' ### write directory with the hcp template dscalar files here (assumed; this variable is used below but was never defined)
files = os.listdir(directory)
### Check that files are loaded
for x in files:
if x.endswith('motor.dscalar.nii'):
print(x)
for x in files:
if x.endswith('dscalar.nii'):
print('calculating: ' + x)
zvalues = nib.load(os.path.join(directory, x)).get_data()
pvals = scipy.stats.norm.sf(zvalues)
pvals = pvals.T
pvals_onlycortex = pvals[0:59411] ### This selects values only from the cortex
pvals_onlycortex = pvals_onlycortex.T
pvals_onlycortex_FDR = statsmodels.sandbox.stats.multicomp.multipletests(pvals_onlycortex[0], alpha=0.05, method='fdr_bh', is_sorted=False, returnsorted=False)
### Put cortical FDR corrected p values back to the original pvals matrix
pvals_onlycortex_FDR[1].shape = (59411, 1)
pvals[0:59411] = pvals_onlycortex_FDR[1]
np.save(os.path.join(directory, x + '_pvalues_onetailedcortexonly_FDR.npy'), pvals.T[0])
print('finished calculating: ' + x)
### Transform to dscalar format
files = os.listdir(directory) #This is to update list of files in directory
for y in files:
if y.endswith('.npy'):
res = nib.load(directory2 + '/hcp.tmp.lh.dscalar.nii').get_data()
cortL = np.squeeze(np.array(np.where(res != 0)[0], dtype=np.int32))
res = nib.load(directory2 + '/hcp.tmp.rh.dscalar.nii').get_data()
cortR = np.squeeze(np.array(np.where(res != 0)[0], dtype=np.int32))
cortLen = len(cortL) + len(cortR)
del res
emb = np.load(os.path.join(directory, y))
emb.shape = (91282, 1)
tmp = nib.load('comparison_R1midcog_vs_R2midcog_peakfconn1003.dscalar.nii') #Just to take the shape; has to be dscalar with one map, and brain only
tmp_cifti = nib.cifti2.load('comparison_R1midcog_vs_R2midcog_peakfconn1003.dscalar.nii')
data = tmp_cifti.get_data() * 0
mim = tmp.header.matrix[1]
for idx, bm in enumerate(mim.brain_models):
print ((idx, bm.index_offset, bm.brain_structure))
img = nib.cifti2.Cifti2Image(emb.T, nib.cifti2.Cifti2Header(tmp.header.matrix))
img.to_filename(y + '.dscalar.nii')
### 5) Open corrected maps with wb_view and use a threshold of 0.05
```
# TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
- Initialize variables
- Start your own session
- Train algorithms
- Implement a Neural Network
Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "v3b".
* You can find your original work saved in the notebook with the previous version name (it may be either "TensorFlow Tutorial version 3" or "TensorFlow Tutorial version 3a").
* To view the file directory, click on the "Coursera" icon in the top left of this notebook.
#### List of updates
* forward_propagation instruction now says 'A1' instead of 'a1' in the formula for Z2;
and are updated to say 'A2' instead of 'Z2' in the formula for Z3.
* create_placeholders instruction refer to the data type "tf.float32" instead of float.
* in the model function, the x axis of the plot now says "iterations (per fives)" instead of iterations(per tens)
* In the linear_function, comments remind students to create the variables in the order suggested by the starter code. The comments are updated to reflect this order.
* The test of the cost function now creates the logits without passing them through a sigmoid function (since the cost function will include the sigmoid in the built-in tensorflow function).
* In the 'model' function, the minibatch_cost is now divided by minibatch_size (instead of num_minibatches).
* Updated print statements and 'expected output' cells that are used to check functions, for easier visual comparison.
## 1 - Exploring the Tensorflow Library
To start, you will import the library:
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
```
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
```
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
```
Writing and running programs in TensorFlow has the following steps:
1. Create Tensors (variables) that are not yet executed/evaluated.
2. Write operations between those Tensors.
3. Initialize your Tensors.
4. Create a Session.
5. Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.
Now let us look at an easy example. Run the cell below:
```
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
```
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
```
sess = tf.Session()
print(sess.run(c))
```
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
```
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
```
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
### 1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and $b$ is a random vector.
**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
```
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes X to be a random tensor of shape (3,1)
Initializes W to be a random tensor of shape (4,3)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
"""
Note, to ensure that the "random" numbers generated match the expected results,
please create the variables in the order given in the starting code below.
(Do not re-arrange the order).
"""
### START CODE HERE ### (4 lines of code)
X = None
W = None
b = None
Y = None
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = None
result = None
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = \n" + str(linear_function()))
```
**Expected Output**:
```
result =
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
```
### 1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session.
**Exercise**: Implement the sigmoid function below. You should use the following:
- `tf.placeholder(tf.float32, name = "...")`
- `tf.sigmoid(...)`
- `sess.run(..., feed_dict = {x: z})`
Note that there are two typical ways to create and use sessions in tensorflow:
**Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
```
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = None
# compute sigmoid(x)
sigmoid = None
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
None
# Run session and call the output "result"
result = None
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
```
**Expected Output**:
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
**To summarize, you now know how to**:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
### 1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
**Exercise**: Implement the cross entropy loss. The function you will use is:
- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`
Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$
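As a quick sanity check of what this built-in computes, here is the per-example term reproduced in plain Python (an illustrative sketch, separate from the graded TensorFlow solution):

```python
import math

def sigmoid_cross_entropy(z, y):
    a = 1.0 / (1.0 + math.exp(-z))  # sigmoid(z)
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))

# z = 0.2 with label y = 0 gives about 0.7981, matching the first entry
# of the expected output of the exercise below
print(sigmoid_cross_entropy(0.2, 0))
```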
```
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = None
y = None
# Use the loss function (approx. 1 line)
cost = None
# Create a session (approx. 1 line). See method 1 above.
sess = None
# Run the session (approx. 1 line).
cost = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return cost
logits = np.array([0.2,0.4,0.7,0.9])
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
```
**Expected Output**:
```
cost = [ 0.79813886 0.91301525 0.40318605 0.34115386]
```
### 1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
- tf.one_hot(labels, depth, axis)
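For intuition about what `tf.one_hot` produces here, this is the same conversion written out by hand in plain Python (illustrative only; the exercise itself should use `tf.one_hot`):

```python
def one_hot_rows(labels, C):
    # classes as rows, examples as columns, matching the Figure 1 layout
    return [[1.0 if labels[j] == i else 0.0 for j in range(len(labels))]
            for i in range(C)]

print(one_hot_rows([1, 2, 3, 0, 2, 1], C=4))
```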
**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
```
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j has label i, then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = None
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = None
# Create the session (approx. 1 line)
sess = None
# Run the session (approx. 1 line)
one_hot = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = \n" + str(one_hot))
```
**Expected Output**:
```
one_hot =
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
```
### 1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
**Exercise:** Implement the function below to take in a shape and return an array of ones with that shape.
- tf.ones(shape)
```
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = None
# Create the session (approx. 1 line)
sess = None
# Run the session to compute 'ones' (approx. 1 line)
ones = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
```
**Expected Output:**
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
# 2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
- Create the computation graph
- Run the graph
Let's delve into the problem you'd like to solve!
### 2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
```
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
Change the index below and run the cell to visualize some examples in the dataset.
```
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
```
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
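A one-line sanity check of that arithmetic:

```python
# each flattened image vector has num_px * num_px * channels entries
num_px, channels = 64, 3
assert num_px * num_px * channels == 12288
```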
**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
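You can check the softmax/sigmoid relationship numerically: with two classes and logits $(z, 0)$, the softmax probability of the first class equals $\sigma(z)$. A minimal numpy sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))  # shift logits for numerical stability
    return e / e.sum()

z = 1.3
# Softmax over the two logits (z, 0) reduces to sigmoid(z) for the first class.
two_class = softmax(np.array([z, 0.0]))
print(np.isclose(two_class[0], sigmoid(z)))  # True
```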
### 2.1 - Create placeholders
Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session.
**Exercise:** Implement the function below to create the placeholders in tensorflow.
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "tf.float32"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "tf.float32"
Tips:
- You will use None because it lets us be flexible on the number of examples used for the placeholders.
In fact, the number of examples during train/test is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = None
Y = None
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```
**Expected Output**:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
### 2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier initialization for the weights and zero initialization for the biases. The shapes are given below. As an example, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use `seed = 1` to make sure your results match ours.
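As a quick sanity check on why Xavier initialization is chosen, you can draw a large Xavier-style matrix in plain numpy and confirm its variance is roughly $2/(n_{in}+n_{out})$. This sketch mirrors the uniform Glorot/Xavier variant (the exact distribution `xavier_initializer` samples from is an implementation detail):

```python
import numpy as np

n_in, n_out = 12288, 25
rng = np.random.default_rng(1)
# Uniform Xavier/Glorot: limit = sqrt(6 / (n_in + n_out)),
# which gives variance = limit**2 / 3 = 2 / (n_in + n_out).
limit = np.sqrt(6.0 / (n_in + n_out))
W = rng.uniform(-limit, limit, size=(n_out, n_in))
print(np.isclose(W.var(), 2.0 / (n_in + n_out), rtol=0.05))  # True
```

Keeping the variance scaled to the layer sizes prevents activations from exploding or vanishing as depth grows.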
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
W3 = None
b3 = None
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
### 2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation
**Question:** Implement the forward pass of the neural network. The numpy equivalents are provided as comments so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer's output is given as input to the function computing the loss, so you don't need `a3`!
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = None # Z1 = np.dot(W1, X) + b1
A1 = None # A1 = relu(Z1)
Z2 = None # Z2 = np.dot(W2, A1) + b2
A2 = None # A2 = relu(Z2)
Z3 = None # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
```
**Expected Output**:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
### 2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
**Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- In addition, `tf.reduce_mean` takes the average over the examples.
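To see what this one-liner computes, here is a numpy equivalent for intuition (a sketch, not a replacement for the TF op): softmax over the class axis, then the mean cross-entropy over examples.

```python
import numpy as np

def softmax_cross_entropy_mean(logits, labels):
    # logits, labels: shape (num_examples, num_classes), labels one-hot
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_example = -(labels * log_probs).sum(axis=1)       # cross-entropy per row
    return per_example.mean()                             # reduce_mean over examples

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
print(softmax_cross_entropy_mean(logits, labels))
```

Working from the log-probabilities directly (instead of `softmax` followed by `log`) is what makes the fused TF op numerically stable.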
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = None
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
### 2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of the backpropagation and parameter updates are taken care of in one line of code, which is very easy to incorporate into the model.
After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
```python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
```
To make the optimization you would do:
```python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
```
This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.
**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable).
### 2.6 - Building the model
Now, you will bring it all together!
**Exercise:** Implement the model. You will be calling the functions you had previously implemented.
```
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = None
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = None
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = None
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = None
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = None
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = None
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per fives)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
```
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.048222. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
```
parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected Output**:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
**Insights**:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
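If you want to try the L2 idea, the penalty itself is just the sum of squared weights scaled by a hyperparameter added to the cost. A hedged numpy sketch of what that term computes (in the TF1 graph you would build the same expression with `tf.nn.l2_loss` on W1, W2, W3 before defining the optimizer; `lambd` is an assumed hyperparameter you would tune):

```python
import numpy as np

def l2_penalty(weights, lambd=0.01):
    # Sum of squared entries over all weight matrices (biases usually excluded),
    # matching tf.nn.l2_loss, which computes sum(w**2) / 2 per tensor.
    return lambd * sum(np.sum(w ** 2) / 2.0 for w in weights)

W1 = np.ones((2, 3))   # toy weights standing in for the real parameters
W2 = np.ones((1, 2))
print(l2_penalty([W1, W2], lambd=0.1))  # 0.1 * (6/2 + 2/2) = 0.4
```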
### 2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
image = image/255.
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```
You indeed deserved a "thumbs-up" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the various topics covered in the next course on "Structuring Machine Learning Projects".
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
- Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session
- Initialize the session
- Run the session to execute the graph
- You can execute the graph multiple times as you've seen in model()
- The backpropagation and optimization is automatically done when running the session on the "optimizer" object.
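The build-then-run pattern above is the key mental model. As a framework-free sketch, you can mimic it with deferred evaluation in plain Python (purely illustrative; this is not how tensorflow is actually implemented):

```python
# A toy "graph": nodes are zero-argument functions built first, run later.
def constant(value):
    return lambda: value

def add(a, b):
    return lambda: a() + b()

def matmul_scalar(a, b):  # stand-in for tf.matmul on scalars
    return lambda: a() * b()

# 1) Build the graph (nothing is computed yet)
x = constant(3.0)
w = constant(2.0)
b = constant(1.0)
z = add(matmul_scalar(w, x), b)

# 2) "Run the session": only now is the value computed
print(z())  # 7.0
```

Separating graph construction from execution is what lets tensorflow optimize the whole computation and derive gradients automatically before any numbers flow through it.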
| github_jupyter |
# Connecting to google drive
Felix Zaussinger | 11.11.2020
## Core Analysis Goal(s)
1. Auto-connect to our google sheets documents
- https://docs.google.com/spreadsheets/d/1kEEcKdP__1XbYKe5-nVlxzAD_P5-EQx-23IAw9SXOoc/edit#gid=370671396
2. Based on
- https://towardsdatascience.com/how-to-access-google-sheet-data-using-the-python-api-and-convert-to-pandas-dataframe-5ec020564f0e
- https://developers.google.com/sheets/api/quickstart/python.
## Key Insight(s)
1. It works.
```
%load_ext autoreload
%autoreload 2
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
import os.path
import numpy as np
import pandas as pd
import seaborn as sns
import configparser
from src.gecm import io
from pathlib import Path
sns.set_theme(
context='talk', style='ticks', palette='Paired', font='sans-serif',
font_scale=1.05, color_codes=True, rc=None
)
```
#### Define paths
```
abspath = os.path.abspath('')
project_dir = str(Path(abspath).parents[0])
data_raw = os.path.join(project_dir, "data", "raw")
data_processed = os.path.join(project_dir, "data", "processed")
```
#### Authentication
```
# init config file parser. methods: config.getboolean, config.getint, ... .
config = configparser.ConfigParser()
fpath_cf = os.path.join(project_dir, 'config.ini')
config.read(fpath_cf)
# read sections
io.config_describe(config)
# If modifying these scopes, delete the file token.pickle.
SCOPES = [config.get(section="default", option="scopes")]
# <Your spreadsheet ID>
SPREADSHEET_ID = config.get(section="gdrive_spreadsheet_ids", option="spreadsheet_id_farmers")
# <Your worksheet names>
SHEETS_STRING = config.get(section="gdrive_sheet_names", option="sheet_names_farmers")
SHEETS = io.parse_list(config_string=SHEETS_STRING)
# API credentials
credentials_fpath = os.path.join(project_dir, 'google_api_credentials.json')
```
#### Download
```
sheet_dict = {}
for i, sheet_name in enumerate(SHEETS):
print(sheet_name)
# 1) fetch data
data_dict = io.get_google_sheet(
credentials=credentials_fpath,
spreadsheet_id=SPREADSHEET_ID,
range_name=sheet_name,
scopes=SCOPES
)
# 2) convert to data frame
df_raw = io.gsheet2df(data_dict, header=0, stop=11)
df_raw = df_raw.set_index("Round")
# 3) convert to numeric
df = df_raw.copy()
for col in df.columns:
df[col] = pd.to_numeric(df[col], errors="coerce", downcast="integer")
# 4) append to dict
sheet_dict[sheet_name] = df
df_all = pd.concat(sheet_dict.values(), keys=sheet_dict.keys())
df_all
df_all.index = df_all.index.set_names(["player", "round"])
df_all = df_all.reset_index()
df_all["round"] = pd.to_numeric(df_all["round"], errors="coerce", downcast="integer")
df_all.info()
df_final = df_all.set_index(["round", "player"]).sort_index()
# https://stackoverflow.com/questions/25386870/pandas-plotting-with-multi-index
df_final = df_final.unstack(level="player")
df_final
```
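For context on what the project's `io.gsheet2df` helper is doing conceptually: the Sheets API's `values().get()` response carries the rows as a list of lists under a `values` key, and converting that to a DataFrame is straightforward. A hedged sketch with made-up data (the actual helper may handle headers and row truncation differently):

```python
import pandas as pd

def values_to_df(data_dict, header=0):
    # The Sheets API returns rows as lists of strings under "values".
    rows = data_dict["values"]
    columns = rows[header]
    return pd.DataFrame(rows[header + 1:], columns=columns)

fake_response = {"values": [["Round", "Score"], ["1", "10"], ["2", "12"]]}
df = values_to_df(fake_response)
print(df.shape)  # (2, 2)
```

Everything arrives as strings, which is why the notebook runs `pd.to_numeric` on each column afterwards.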
| github_jupyter |
```
import os
import cv2
import math
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, fbeta_score
from keras import optimizers
from keras import backend as K
from keras.models import Sequential, Model
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, EarlyStopping, ReduceLROnPlateau
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization, GlobalAveragePooling2D, Input
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
from numpy.random import seed
set_random_seed(0)
seed(0)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
train = pd.read_csv('../input/imet-2019-fgvc6/train.csv')
labels = pd.read_csv('../input/imet-2019-fgvc6/labels.csv')
test = pd.read_csv('../input/imet-2019-fgvc6/sample_submission.csv')
train["attribute_ids"] = train["attribute_ids"].apply(lambda x:x.split(" "))
train["id"] = train["id"].apply(lambda x: x + ".png")
test["id"] = test["id"].apply(lambda x: x + ".png")
print('Number of train samples: ', train.shape[0])
print('Number of test samples: ', test.shape[0])
print('Number of labels: ', labels.shape[0])
display(train.head())
display(labels.head())
```
### Model parameters
```
# Model parameters
BATCH_SIZE = 128
EPOCHS = 30
LEARNING_RATE = 0.0001
HEIGHT = 64
WIDTH = 64
CANAL = 3
N_CLASSES = labels.shape[0]
ES_PATIENCE = 5
DECAY_DROP = 0.5
DECAY_EPOCHS = 10
classes = list(map(str, range(N_CLASSES)))
def f2_score_thr(threshold=0.5):
def f2_score(y_true, y_pred):
beta = 2
y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())
true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)
predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)
possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)
precision = true_positives / (predicted_positives + K.epsilon())
recall = true_positives / (possible_positives + K.epsilon())
return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))
return f2_score
def custom_f2(y_true, y_pred):
beta = 2
tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
p = tp / (tp + fp + K.epsilon())
r = tp / (tp + fn + K.epsilon())
f2 = (1+beta**2)*p*r / (p*beta**2 + r + 1e-15)
return f2
def step_decay(epoch):
initial_lrate = LEARNING_RATE
drop = DECAY_DROP
epochs_drop = DECAY_EPOCHS
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
train_datagen=ImageDataGenerator(rescale=1./255, validation_split=0.25)
train_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
classes=classes,
target_size=(HEIGHT, WIDTH),
subset='training')
valid_generator=train_datagen.flow_from_dataframe(
dataframe=train,
directory="../input/imet-2019-fgvc6/train",
x_col="id",
y_col="attribute_ids",
batch_size=BATCH_SIZE,
shuffle=True,
class_mode="categorical",
classes=classes,
target_size=(HEIGHT, WIDTH),
subset='validation')
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_dataframe(
dataframe=test,
directory = "../input/imet-2019-fgvc6/test",
x_col="id",
target_size=(HEIGHT, WIDTH),
batch_size=1,
shuffle=False,
class_mode=None)
```
### Model
```
def create_model(input_shape, n_out):
input_tensor = Input(shape=input_shape)
base_model = applications.NASNetMobile(weights=None, include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/nasnetmobile/NASNet-mobile-no-top.h5')
x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.5)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.5)(x)
final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
# warm up model
# first: train only the top layers (which were randomly initialized)
model = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)
for layer in model.layers:
layer.trainable = False
for i in range(-5,0):
model.layers[i].trainable = True
optimizer = optimizers.Adam(lr=LEARNING_RATE)
metrics = ["accuracy", "categorical_accuracy"]
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)
callbacks = [es]
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
```
#### Train top layers
```
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
#### Fine-tune the complete model
```
for layer in model.layers:
layer.trainable = True
metrics = ["accuracy", "categorical_accuracy"]
lrate = LearningRateScheduler(step_decay)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=(ES_PATIENCE))
rlrop = ReduceLROnPlateau(monitor='val_loss', factor=0.25, patience=(ES_PATIENCE-2))
callbacks = [es, rlrop]
optimizer = optimizers.SGD(lr=0.001)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=(3))
callbacks = [es]
optimizer = optimizers.Adam(lr=0.0001)
model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=metrics)
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callbacks,
verbose=2,
max_queue_size=16, workers=3, use_multiprocessing=True)
```
### Complete model graph loss
```
sns.set_style("whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))
ax1.plot(history.history['loss'], label='Train loss')
ax1.plot(history.history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history.history['acc'], label='Train Accuracy')
ax2.plot(history.history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
ax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')
ax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')
ax3.legend(loc='best')
ax3.set_title('Cat Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### Find best threshold value
```
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(valid_generator)
scores = model.predict(im, batch_size=valid_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
print(lastFullValPred.shape, lastFullValLabels.shape)
def find_best_fixed_threshold(preds, targs, do_plot=True):
score = []
thrs = np.arange(0, 0.5, 0.01)
for thr in thrs:
score.append(custom_f2(targs, (preds > thr).astype(int)))
score = np.array(score)
pm = score.argmax()
best_thr, best_score = thrs[pm], score[pm].item()
print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')
if do_plot:
plt.plot(thrs, score)
plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())
plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14);
plt.show()
return best_thr, best_score
threshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)
```
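The decoding in the next section follows a simple rule: keep every class whose score clears the threshold, and fall back to the argmax when none does (so every image gets at least one label). A small numpy sketch of that rule:

```python
import numpy as np

def decode_multilabel(scores, threshold):
    # Keep indices above threshold; fall back to argmax if none qualify.
    valid = [i for i, s in enumerate(scores) if s > threshold]
    return valid if valid else [int(np.argmax(scores))]

print(decode_multilabel(np.array([0.1, 0.6, 0.7]), 0.5))  # [1, 2]
print(decode_multilabel(np.array([0.1, 0.2, 0.3]), 0.5))  # [2]
```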
### Apply model to test set and output predictions
```
test_generator.reset()
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
preds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)
predictions = []
for pred_ar in preds:
valid = []
for idx, pred in enumerate(pred_ar):
if pred > threshold:
valid.append(idx)
if len(valid) == 0:
valid.append(np.argmax(pred_ar))
predictions.append(valid)
filenames = test_generator.filenames
label_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}
results = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})
results['id'] = results['id'].map(lambda x: str(x)[:-4])
results['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))
results["attribute_ids"] = results["attribute_ids"].apply(lambda x: ' '.join(x))
results.to_csv('submission.csv',index=False)
results.head(10)
```
| github_jupyter |
If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it. We also use the `sacrebleu` and `sentencepiece` libraries - you may need to install these even if you already have 🤗 Transformers!
```
#! pip install transformers[sentencepiece] datasets
#! pip install sacrebleu sentencepiece
#! pip install huggingface_hub
```
If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.
To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.
First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then uncomment the following cell and input your token:
```
from huggingface_hub import notebook_login
notebook_login()
```
Then you need to install Git-LFS and set up Git if you haven't already. Uncomment the following instructions and adapt them with your name and email:
```
# !apt install git-lfs
# !git config --global user.email "you@example.com"
# !git config --global user.name "Your Name"
```
Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was introduced in that version:
```
import transformers
print(transformers.__version__)
```
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq).
# Fine-tuning a model on a translation task
In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model for a translation task. We will use the [WMT dataset](http://www.statmt.org/wmt16/), a machine translation dataset composed from a collection of various sources, including news commentaries and parliament proceedings.

We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using Keras.
```
model_checkpoint = "Helsinki-NLP/opus-mt-en-ROMANCE"
```
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`Helsinki-NLP/opus-mt-en-ROMANCE`](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE) checkpoint.
## Loading the dataset
We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`. We use the English/Romanian part of the WMT dataset here.
```
from datasets import load_dataset, load_metric
raw_datasets = load_dataset("wmt16", "ro-en")
metric = load_metric("sacrebleu")
```
The `dataset` object itself is a [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for each of the training, validation and test sets:
```
raw_datasets
```
To access an actual element, you need to select a split first, then give an index:
```
raw_datasets["train"][0]
```
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
```
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=5):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, datasets.ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
display(HTML(df.to_html()))
show_random_elements(raw_datasets["train"])
```
The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):
```
metric
```
You can call its `compute` method with your predictions and labels, which need to be lists of decoded strings (a list of lists for the labels):
```
fake_preds = ["hello there", "general kenobi"]
fake_labels = [["hello there"], ["general kenobi"]]
metric.compute(predictions=fake_preds, references=fake_labels)
```
## Preprocessing the data
Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put it in a format the model expects, as well as generate the other inputs that model requires.
To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure:
- we get a tokenizer that corresponds to the model architecture we want to use,
- we download the vocabulary used when pretraining this specific checkpoint.
That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```
If you are using an mBART tokenizer, you need to set the source and target languages (so the texts are preprocessed properly). You can check the language codes [here](https://huggingface.co/facebook/mbart-large-cc25) if you are using this notebook on a different pair of languages.
```
if "mbart" in model_checkpoint:
    tokenizer.src_lang = "en_XX"
    tokenizer.tgt_lang = "ro_RO"
```
By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library.
You can directly call this tokenizer on one sentence or a pair of sentences:
```
tokenizer("Hello, this is a sentence!")
```
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later), you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested.
Instead of one sentence, we can pass along a list of sentences:
```
tokenizer(["Hello, this is a sentence!", "This is another sentence."])
```
To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:
```
with tokenizer.as_target_tokenizer():
    print(tokenizer(["Hello, this is a sentence!", "This is another sentence."]))
```
If you are using one of the five T5 checkpoints that require a special prefix to put before the inputs, you should adapt the following cell.
```
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
    prefix = "translate English to Romanian: "
else:
    prefix = ""
```
We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator), so we pad examples to the longest length in the batch and not the whole dataset.
```
max_input_length = 128
max_target_length = 128
source_lang = "en"
target_lang = "ro"
def preprocess_function(examples):
    inputs = [prefix + ex[source_lang] for ex in examples["translation"]]
    targets = [ex[target_lang] for ex in examples["translation"]]
    model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=max_target_length, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:
```
preprocess_function(raw_datasets["train"][:2])
```
To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.
```
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
```
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to `map` has changed (and thus that the cached data should not be reused). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to skip the cached files and force the preprocessing to be applied again.
Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently.
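To picture what a batched `map` function receives, here is a plain-Python sketch (illustrative column names, no 🤗 Datasets required): the batch is a dict mapping column names to lists of values, and the function returns a dict of lists of the same length:

```python
# Minimal sketch of a batched `map` function: the batch is a dict of
# lists (one entry per example), processed together rather than one by one.
def fake_batched_map(batch):
    # e.g. prepend a prefix to every source sentence in the batch
    batch["prefixed"] = ["translate: " + s for s in batch["en"]]
    return batch

batch = {"en": ["Hello there", "General Kenobi"]}
out = fake_batched_map(batch)
print(out["prefixed"])  # two prefixed sentences, processed in one call
```

The real `map` call simply feeds such batches to `preprocess_function`, which is why that function indexes `examples["translation"]` as a list.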
## Fine-tuning the model
Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `TFAutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.
```
from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
```
Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case.
Next we set some parameters like the learning rate and the `batch_size`, and customize the weight decay.
The last two lines set everything up so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove them if you didn't follow the installation steps at the top of the notebook; otherwise you can change the value of `push_to_hub_model_id` to something you would prefer.
```
batch_size = 16
learning_rate = 2e-5
weight_decay = 0.01
num_train_epochs = 1
model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-{source_lang}-to-{target_lang}"
```
Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels. Note that our data collators are multi-framework, so make sure you set `return_tensors='tf'` so you get `tf.Tensor` objects back and not something else!
```
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf")
```
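As a rough mental model (plain Python, not the actual `DataCollatorForSeq2Seq` implementation), the collator pads every sequence in the batch to the batch maximum, using the pad token for inputs and `-100` for labels so the padded label positions are ignored by the loss:

```python
# Illustrative sketch of seq2seq collation (not the real DataCollatorForSeq2Seq):
# pad inputs with the pad token id and labels with -100 (the value the loss ignores).
def pad_batch(features, pad_token_id=0, label_pad_id=-100):
    max_in = max(len(f["input_ids"]) for f in features)
    max_lab = max(len(f["labels"]) for f in features)
    return {
        "input_ids": [f["input_ids"] + [pad_token_id] * (max_in - len(f["input_ids"]))
                      for f in features],
        "labels": [f["labels"] + [label_pad_id] * (max_lab - len(f["labels"]))
                   for f in features],
    }

batch = pad_batch([
    {"input_ids": [5, 6, 7], "labels": [8]},
    {"input_ids": [5], "labels": [8, 9, 10]},
])
print(batch["input_ids"])  # [[5, 6, 7], [5, 0, 0]]
print(batch["labels"])     # [[8, -100, -100], [8, 9, 10]]
```

Padding per batch (rather than to a global maximum) is what keeps the tensors small when most sequences are short.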
Now we convert our input datasets to TF datasets using this collator. There's a built-in method for this: `to_tf_dataset()`. Make sure to specify the collator we just created as our `collate_fn`!
Computing the `BLEU` metric can be slow because it requires the model to generate outputs token-by-token. To speed things up, we make a `generation_dataset` that contains only 48 examples from the validation dataset, and use this for `BLEU` computations.
```
train_dataset = tokenized_datasets["train"].to_tf_dataset(
    batch_size=batch_size,
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=True,
    collate_fn=data_collator,
)
validation_dataset = tokenized_datasets["validation"].to_tf_dataset(
    batch_size=batch_size,
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=False,
    collate_fn=data_collator,
)
generation_dataset = (
    tokenized_datasets["validation"]
    .shuffle()
    .select(list(range(48)))
    .to_tf_dataset(
        batch_size=8,
        columns=["input_ids", "attention_mask", "labels"],
        shuffle=False,
        collate_fn=data_collator,
    )
)
```
Now we initialize our optimizer and compile the model. Note that most Transformers models compute loss internally, so we can just leave the loss argument blank to use the internal loss instead. For the optimizer, we can use the `AdamWeightDecay` optimizer from the Transformers library.
```
from transformers import AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(learning_rate=learning_rate, weight_decay_rate=weight_decay)
model.compile(optimizer=optimizer)
```
Now we can train our model. We can also add a few optional callbacks here, which you can remove if they aren't useful to you. In no particular order, these are:
- PushToHubCallback will sync up our model with the Hub - this allows us to resume training from other machines, share the model after training is finished, and even test the model's inference quality midway through training!
- TensorBoard is a built-in Keras callback that logs TensorBoard metrics.
- KerasMetricCallback is a callback for computing advanced metrics. There are a number of common metrics in NLP like ROUGE which are hard to fit into your compiled training loop because they depend on decoding predictions and labels back to strings with the tokenizer, and calling arbitrary Python functions to compute the metric. The KerasMetricCallback will wrap a metric function, outputting metrics as training progresses.
If this is the first time you've seen `KerasMetricCallback`, it's worth explaining what exactly is going on here. The callback takes two main arguments - a `metric_fn` and an `eval_dataset`. It then iterates over the `eval_dataset` and collects the model's outputs for each sample, before passing the `list` of predictions and the associated `list` of labels to the user-defined `metric_fn`. If the `predict_with_generate` argument is `True`, then it will call `model.generate()` for each input sample instead of `model.predict()` - this is useful for metrics that expect generated text from the model, like `ROUGE` and `BLEU`.
This callback allows complex metrics to be computed each epoch that would not function as a standard Keras Metric. Metric values are printed each epoch, and can be used by other callbacks like `TensorBoard` or `EarlyStopping`.
```
from transformers.keras_callbacks import KerasMetricCallback
import numpy as np
def metric_fn(eval_predictions):
    preds, labels = eval_predictions
    prediction_lens = [
        np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds
    ]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # We use -100 to mask labels - replace it with the tokenizer pad token when decoding
    # so that no output is emitted for these
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # Some simple post-processing
    decoded_preds = [pred.strip() for pred in decoded_preds]
    decoded_labels = [[label.strip()] for label in decoded_labels]
    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}
    result["gen_len"] = np.mean(prediction_lens)
    return result


metric_callback = KerasMetricCallback(
    metric_fn=metric_fn, eval_dataset=generation_dataset, predict_with_generate=True
)
```
With the metric callback ready, now we can specify the other callbacks and fit our model:
```
from transformers.keras_callbacks import PushToHubCallback
from tensorflow.keras.callbacks import TensorBoard
tensorboard_callback = TensorBoard(log_dir="./translation_model_save/logs")
push_to_hub_callback = PushToHubCallback(
    output_dir="./translation_model_save",
    tokenizer=tokenizer,
    hub_model_id=push_to_hub_model_id,
)

callbacks = [metric_callback, tensorboard_callback, push_to_hub_callback]

model.fit(
    train_dataset, validation_data=validation_dataset, epochs=1, callbacks=callbacks
)
```
If you used the callback above, you can now share this model with all your friends, family or favorite pets: they can all load it with the identifier `"your-username/the-name-you-picked"` so for instance:
```python
from transformers import TFAutoModelForSeq2SeqLM
model = TFAutoModelForSeq2SeqLM.from_pretrained("your-username/my-awesome-model")
```
# Tutorial Part 2: Learning MNIST Digit Classifiers
In the previous tutorial, we learned some basics of how to load data into DeepChem and how to use the basic DeepChem objects to load and manipulate this data. In this tutorial, you'll put the parts together and learn how to train a basic image classification model in DeepChem. You might ask, why are we bothering to learn this material in DeepChem? Part of the reason is that image processing is an increasingly important part of AI for the life sciences. So learning how to train image processing models will be very useful for using some of the more advanced DeepChem features.
The MNIST dataset contains handwritten digits along with their human annotated labels. The learning challenge for this dataset is to train a model that maps the digit image to its true label. MNIST has been a standard benchmark for machine learning for decades at this point.

For convenience, TensorFlow has provided some loader methods to get access to the MNIST dataset. We'll make use of these loaders.
```
from tensorflow.examples.tutorials.mnist import input_data
# TODO: This is deprecated. Let's replace with a DeepChem native loader for maintainability.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
import deepchem as dc
import tensorflow as tf
from deepchem.models.tensorgraph.layers import Layer, Input, Reshape, Flatten, Conv2D, Label, Feature
from deepchem.models.tensorgraph.layers import Dense, SoftMaxCrossEntropy, ReduceMean, SoftMax
train = dc.data.NumpyDataset(mnist.train.images, mnist.train.labels)
valid = dc.data.NumpyDataset(mnist.validation.images, mnist.validation.labels)
tg = dc.models.TensorGraph(tensorboard=True, model_dir='/tmp/mnist', use_queue=False)
feature = Feature(shape=(None, 784))
# Images are square 28x28 (batch, height, width, channel)
make_image = Reshape(shape=(-1, 28, 28, 1), in_layers=[feature])
conv2d_1 = Conv2D(num_outputs=32, in_layers=[make_image])
conv2d_2 = Conv2D(num_outputs=64, in_layers=[conv2d_1])
flatten = Flatten(in_layers=[conv2d_2])
dense1 = Dense(out_channels=1024, activation_fn=tf.nn.relu, in_layers=[flatten])
dense2 = Dense(out_channels=10, in_layers=[dense1])
label = Label(shape=(None, 10))
smce = SoftMaxCrossEntropy(in_layers=[label, dense2])
loss = ReduceMean(in_layers=[smce])
tg.set_loss(loss)
output = SoftMax(in_layers=[dense2])
tg.add_output(output)
# nb_epoch set to 0 to permit rendering of tutorials online.
# Set nb_epoch=10 for better results
tg.fit(train, nb_epoch=0)
# Note that AUCs will be nonsensical without setting nb_epoch higher!
from sklearn.metrics import roc_curve, auc
import numpy as np
print("Validation")
prediction = np.squeeze(tg.predict_on_batch(valid.X))
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(10):
    fpr[i], tpr[i], thresh = roc_curve(valid.y[:, i], prediction[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
    print("class %s:auc=%s" % (i, roc_auc[i]))
```
Welcome to the Deep Learning Lab :)
Before starting this journey, here are a couple of ways to **load data into Colab** (in case you haven't done it before).
Colab can generate these scripts for you by clicking on the code icon (<>) on the left bar and selecting the code snippet you want.
**Loading files from your local drive**
```
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))
```
You can access uploaded files via the folder icon on the left bar. You can also manipulate files by code, of course :)
```
with open('plain_text_file.txt', 'r') as f:
    for line in f.readlines():
        print(line)
```
Or you can access directly your Google Drive!
```
from google.colab import drive
drive.mount('/gdrive')
%cd /gdrive
!ls
```
# Introduction to Keras
Keras is a high-level, flexible library for deep learning experiments.
It is tightly integrated with Tensorflow, which provides low-level support.
**Warning:** Unless you are confident with what you are doing, in the beginning it is better if you stick with Keras as much as possible.
Use Tensorflow only when there is no alternative.
## Where do I start if I want to learn Keras?
Getting started *guide*: https://keras.io/getting_started/
You may want to choose *Introduction to Keras for Engineers*.
Developer guide: https://keras.io/guides/
API documentation: https://keras.io/api/
```
from tensorflow import keras as K
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
%matplotlib inline
```
## Data manipulation
```
help(make_classification)
N_CLASSES = 3
N_PATTERNS_PER_CLASS = 5000
N_PATTERNS = N_CLASSES * N_PATTERNS_PER_CLASS
X, y = make_classification(n_samples=N_PATTERNS, n_classes=N_CLASSES, n_informative=5)
X.shape, y.shape
X.dtype, y.dtype
test_size = int(0.25 * y.shape[0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=True, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
```
### Enter Keras
Well, actually you can use numpy arrays directly in Keras, you don't have to do much...
You can also use Python generators
But believe me, you will need something more advanced sooner or later.
Let's build a real Keras (actually, a Tensorflow) `Dataset` from those arrays!
```
dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test))
```
Ideally, you would like to use your dataset in a **training loop**
```
for x, y in dataset:
    print(x.shape)
    break
```
mmmm... no batch size?
```
dataset = dataset.shuffle(buffer_size=1024).batch(32)
for x, y in dataset:
    print(x.shape)
    break
```
You can create a dataset from different sources (e.g. files in your hard disk).
For example, try to build a dataset [from `csv` file](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset).
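As a plain-Python warm-up (illustrative names and data, independent of TF), the core idea behind a file-backed dataset is just a generator that parses and yields one example at a time; something like `tf.data.Dataset.from_generator` can then wrap such a generator:

```python
import csv
import io

# Plain-Python sketch of streaming examples from CSV content; a generator
# like this is the kind of thing a tf.data pipeline can wrap.
csv_text = "x1,x2,y\n0.1,0.2,0\n0.3,0.4,1\n"

def row_generator(text):
    reader = csv.DictReader(io.StringIO(text))
    for row in reader:
        features = [float(row["x1"]), float(row["x2"])]  # parse feature columns
        label = int(row["y"])                            # parse the target column
        yield features, label

rows = list(row_generator(csv_text))
print(rows)  # [([0.1, 0.2], 0), ([0.3, 0.4], 1)]
```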
**Exercise**: try to get acquainted with Tensorflow datasets. Try to build data from different sources (tensors, csv files, plain text file, folder structure...). Try to build datasets with one or more elements per iteration.
### Data Preprocessing
```
from tensorflow.keras.layers.experimental.preprocessing import Rescaling, Normalization
```
Classes which take care of preprocessing your dataset. They can also contain a state (e.g. the mean and std of your data), updated with the `adapt` call (analogous to scikit-learn's `fit` method).
**N.B.** the call to `adapt` changes the internal state of the normalizer, not the data!
```
norm_layer = Normalization(axis=-1)
norm_layer.adapt(X_train)
norm_layer.mean, norm_layer.variance
normalized_X_train = norm_layer(X_train)
print(np.mean(normalized_X_train))
print(np.var(normalized_X_train))
```
Rescaling and Normalization operate very similarly. However, `Rescaling` does not require the call to `adapt` since it has no internal state.
```
help(Rescaling)
```
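A rough numpy sketch of what the two layers compute (illustrative, not the Keras implementations): `Rescaling` applies a fixed affine transform, while `Normalization` first needs the statistics that `adapt` would estimate.

```python
import numpy as np

# Sketch of the two operations: Rescaling is stateless (fixed scale/offset),
# Normalization depends on statistics adapted from the data.
X = np.array([[0.0, 10.0], [4.0, 30.0]])

rescaled = X * (1.0 / 255.0) + 0.0             # like Rescaling(scale=1/255)

mean, var = X.mean(axis=0), X.var(axis=0)      # what `adapt` would store
normalized = (X - mean) / np.sqrt(var + 1e-7)  # what the layer then applies

print(normalized.mean(axis=0))  # approximately 0 per feature
```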
## Functional API
This is one of the powerful features of Keras. **Easily build complex models!**
A model is composed by **layers**.
A layer takes an input and returns an output (usually by using adaptive parameters).
A model is built by composing many layers and it also exposes a more complex interface with methods for training, inference etc...
```
model = K.Sequential()
model.add(K.layers.Dense(units=64, activation='relu'))
model.add(K.layers.Dense(units=N_CLASSES, activation='softmax'))
#model.summary() # will fail! what is the input of this model?
```
Keras does not require you to specify the `Input` of a model.
Instead, it tries to dynamically infer the model input layer when you call it with data. However, you can always specify it explicitly.
We need **sparse_categorical_crossentropy** because we are *not* dealing with one-hot targets but with numerical targets.
You can use **categorical_crossentropy** if the targets are one-hot encoded.
Keras can convert to one-hot: `K.utils.to_categorical`.
[You can also use scikit-learn to encode your targets in one-hot form](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html).
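For intuition, here is a minimal numpy version of one-hot encoding, the transformation `K.utils.to_categorical` performs for you:

```python
import numpy as np

# Minimal sketch of one-hot encoding integer class targets.
def to_one_hot(y, num_classes):
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.0  # set a single 1 per row at the class index
    return out

y = np.array([0, 2, 1])
print(to_one_hot(y, 3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```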
### Training
Keras metrics description: https://keras.io/api/metrics/
```
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=64)
model.summary()
```
You can also pass a Keras dataset to the fit. Try it out!
Since I did not specify the `Input` what will happen if I change the input size?
```
# model.fit(X_train[:, :3], y_train, epochs=10, batch_size=64)
```
What's happening under-the-hood of `fit`?
Basically, loop over training set, compute model predictions, compute loss, compute gradients, update model parameters (and much more). We will see how to implement a basic fit from scratch later on. We will need Tensorflow for that.
### Evaluation
*Keras* returns loss and all the metrics you previously specified
```
metrics = model.evaluate(X_test, y_test)
metrics # loss, accuracy
predictions = model.predict(X_test[:20])
print(predictions.shape)
predictions.argmax(axis=1)
```
You can also use metrics standalone by instantiating them and calling `update_state`, `reset_state` and `result`.
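A plain-Python sketch of that stateful interface (illustrative, not a real Keras metric): state accumulates across `update_state` calls, `result` reads it out, and `reset_state` clears it between epochs.

```python
# Plain-Python sketch of the stateful-metric pattern Keras uses.
class RunningAccuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update_state(self, y_true, y_pred):
        # accumulate matches across batches
        self.correct += sum(int(t == p) for t, p in zip(y_true, y_pred))
        self.total += len(y_true)

    def result(self):
        return self.correct / self.total

    def reset_state(self):
        self.correct = self.total = 0

m = RunningAccuracy()
m.update_state([0, 1, 2], [0, 1, 1])
m.update_state([2, 2], [2, 2])
print(m.result())  # 4 correct out of 5 -> 0.8
```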
### Save your model and load it again
Fundamental to manage long training processes and to use your trained model for inference.
`Model serialization` helps also when training on colab (runtime can disconnect after a while).
```
# save weights, optimizer state, model topology
model.save('my_model.h5') # common file format to save models
del model
# if the saved model was compiled, the loaded one will be too (and vice versa)
loaded_model = K.models.load_model('my_model.h5')
```
Alternatively, you can only save and load the weights. Try the `save_weights` and `load_weights`.
Model saving guide: https://keras.io/guides/serialization_and_saving/
### Functional API v2
***Alternative (but similar) way to use the functional API***
None is used in a tensor size when you don't know the size. In the functional API, `batch size` is assumed to be None and added by default.
```
# input layer
inputs = K.Input(shape=(20,)) # here the size is (None, 20)
x = norm_layer(inputs)
x = K.layers.Dense(units=64, activation='relu')(x)
outputs = K.layers.Dense(units=N_CLASSES, activation='softmax')(x)
model = K.Model(inputs=inputs, outputs=outputs)
model.summary()
model.compile(optimizer=K.optimizers.SGD(learning_rate=1e-2), loss=K.losses.SparseCategoricalCrossentropy())
model.fit(X_train, y_train, batch_size=64, epochs=10)
metrics = model.evaluate(X_test, y_test)
metrics
```
## Use validation dataset and plot learning curves
```
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train,
    test_size=int(0.25*y_train.shape[0]), shuffle=True, stratify=y_train)
X_train.shape, y_train.shape, X_valid.shape, y_valid.shape
```
We should actually recompute the `Normalization` statistics, because they were adapted on data that now partly belongs to the validation set! Anyway...
```
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_data=(X_valid, y_valid))
vars(history)
help(K.Model.fit)
help(K.callbacks.History)
```
Mmm... callbacks? Seems interesting!
```
plt.plot(history.history['loss'], 'b-', label='train_loss')
plt.plot(history.history['val_loss'], 'r--', label='validation_loss')
plt.legend(loc='best')
```
**Exercise**: at this point you have more or less all you need to perform basic DL experiments. Try to train your first model on a Keras dataset and see what happens. Do not focus on performance, but rather on setting up your code to be reused later.
## Checkpointing the model
Callbacks are functions that are called at particular moments in time automatically. You can pass callbacks to the `fit` function.
https://keras.io/api/callbacks/
```
help(K.callbacks.ModelCheckpoint)
callback_list = [K.callbacks.ModelCheckpoint(
    filepath='model_{epoch}',
    save_freq='epoch')]
history = model.fit(dataset, epochs=10, callbacks=callback_list)
metrics = model.evaluate(X_test, y_test)
print(metrics)
```
## Eager execution vs. compiled execution!
Compiled means that each line of code adds a component to the **computational graph**: it does not execute what the line states.
After PyTorch came in, the advantages of eager execution became evident, especially when it comes to rapid prototyping and debugging. In eager execution, each line of code is immediately executed and the results returned to the user (imperative programming interface). You can use `print` and debugger to see results of your operations.
For deployment in real world applications, compiled is far more efficient (PyTorch now provides support also for this version).
Keras runs models in compiled mode by default, while TF now uses eager execution by default. Compiled mode means you cannot debug the model line by line or through prints. To enable eager execution, set `run_eagerly=True` in the `compile` call.
In TF you can use a TF Function to pass to compiled graph: https://www.tensorflow.org/guide/function
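A plain-Python analogy of the difference (this is not TF code, just the idea): eager execution runs each operation immediately, while a compiled graph only records operations and executes them when asked.

```python
# Eager style: the result is available right away.
def eager_add(a, b):
    return a + b

# "Graph" style: operations are recorded as nodes and run on demand.
class Graph:
    def __init__(self):
        self.ops = []

    def add(self, a, b):
        self.ops.append(lambda: a + b)  # record, don't execute
        return len(self.ops) - 1        # a handle to the node

    def run(self, node):
        return self.ops[node]()         # execute only now

g = Graph()
node = g.add(2, 3)
print(eager_add(2, 3), g.run(node))  # 5 5
```

This is why you can `print` intermediate values in eager mode but only see symbolic node names in compiled mode.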
## GPU or CPU
TF and Keras automatically use GPU, when available.
You can specify where to send each tensor explicitly, if you prefer.
```
import tensorflow as tf
with tf.device('/CPU:0'):
    a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
a.device
```
Check out if there is a GPU available
```
len(tf.config.list_physical_devices('GPU'))
b = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b.device
```
# Low Level API - Tensorflow
Tensorflow is the "backend" of Keras: https://www.tensorflow.org/overview
```
print(tf.executing_eagerly())
x = tf.constant([1,2,3,4,5,6], dtype=tf.float32)
print(x, x.shape)
print(x.numpy())
print(tf.cast(x, tf.float64))
print(tf.cast(x, tf.int32))
#print(tf.cast(x, tf.string)) # error!
print(tf.strings.as_string(x))
for el in x:
    print(el)
    print(float(el))
    break
new_x = tf.reshape(x, [2,3])
print(new_x, new_x.shape)
# indexing
print(new_x[:, 0])
print(new_x[:, :])
print(new_x[0, 2])
print(new_x[-1, :])
tf.transpose(new_x, [1, 0])
tf.random.uniform(minval=0, maxval=1, shape=(3,4), dtype=tf.float32)
```
Check out also `tf.zeros`, `tf.ones`.
A `Variable` is a tensor with a state you can update
```
w = tf.Variable(x) # set x as initial value for w
print(w.assign(x + 1)) # + operator does broadcast
print(w.assign_add(x))
# print(w.assign_add(1)) # assign_add does not broadcast
```
## Compute Gradients
```
print(w)
with tf.GradientTape() as tape:
    tape.watch(w)
    w_squared = tf.square(w)
grad = tape.gradient(w_squared, w)
print(grad)
```
does `w` need to be a `Variable`?
```
print(x)
with tf.GradientTape() as tape:
    tape.watch(x) # what happens if you remove this line?
    x_squared = tf.square(x)
grad = tape.gradient(x_squared, x)
print(grad)
```
`Variable` is watched automatically (tensorflow supposes that you will be interested in that gradient)
```
print(w)
with tf.GradientTape() as tape:
    w_squared = tf.square(w)
grad = tape.gradient(w_squared, w)
print(grad)
```
Second-order derivatives??
```
# second derivatives
print(w)
with tf.GradientTape() as tape:
    with tf.GradientTape() as tape_inner:
        w_squared = tf.square(w)
    grad = tape_inner.gradient(w_squared, w)
    print(grad)
grad2 = tape.gradient(grad, w)
print(grad2)
```
### From eager to compiled
```
@tf.function # python decorator
def compiled_function(x):
    y = x * 3
    print("Compiled tensor: ", y)
    return y
out = compiled_function(tf.Variable(tf.ones([2, 5], tf.int32)))
print("Eager result: ", out)
```
The compilation prevents the tensor inside the compiled function from being printed.
What is actually printed is the name of the node in the computational graph.
# Combining Keras and TF
Despite being a high-level library for DL, Keras offers you the possibility to customize different parts of your DL pipeline.
Especially if you are willing to deal with TF.
```
class CustomDense(K.layers.Layer):
    def __init__(self, input_dim, units=64):
        super(CustomDense, self).__init__() # this call is needed to set up the Layer
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_dim, units), dtype="float32"),
            trainable=True) # this means that w will be updated
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(
            initial_value=b_init(shape=(units,), dtype="float32"),
            trainable=True)

    def call(self, inputs):
        """
        This method is automatically called when you call an instance of CustomDense,
        e.g. out = custom_dense(x)
        """
        preact = tf.matmul(inputs, self.w) + self.b
        postact = tf.nn.relu(preact)
        return postact
```
or equivalently
```
class CustomDense2(K.layers.Layer):
    def __init__(self, units=64):
        super(CustomDense2, self).__init__()
        self.units = units
        # we are not specifying input size here...
        # ring a bell?

    def build(self, input_shape):
        """
        Lazily called when an input is provided to the model
        """
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True)
        self.b = self.add_weight(
            shape=(self.units,), initializer="random_normal", trainable=True)

    def call(self, inputs):
        preact = tf.matmul(inputs, self.w) + self.b
        postact = tf.nn.relu(preact)
        return postact
```
**Many** Keras predefined `Layers`: https://keras.io/api/layers/
**Many** TF activation functions: https://www.tensorflow.org/api_docs/python/tf/nn/
```
l = CustomDense(4)
x = tf.linspace(0, 20, 20)
out = l(tf.reshape(x, [5,4]))
print(out.shape)
l.weights
```
Ok... now what?
```
# Instantiate an optimizer.
optimizer = K.optimizers.SGD(learning_rate=1e-3)
criterion = K.losses.SparseCategoricalCrossentropy(from_logits=True)
model = CustomDense2(units=N_CLASSES)  # one output logit per class
for step, (x, y) in enumerate(dataset):
    with tf.GradientTape() as tape:
        logits = model(x)
        loss = criterion(y, logits)
    gradients = tape.gradient(loss, model.trainable_weights)
    # you can modify gradients before updating the model
    optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```
A weight with `trainable=False` will not appear in the `trainable_weights` iterator. There is a specific `non_trainable_weights` iterator for those.
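A plain-Python sketch of how such a partition by the trainable flag might look (illustrative only, not the Keras implementation):

```python
# Sketch of partitioning a layer's variables by a `trainable` flag.
class TinyLayer:
    def __init__(self):
        # (name, value, trainable) triples standing in for tf.Variables
        self.variables = [("w", 0.5, True), ("b", 0.1, True),
                          ("running_mean", 0.0, False)]

    @property
    def trainable_weights(self):
        return [v for v in self.variables if v[2]]

    @property
    def non_trainable_weights(self):
        return [v for v in self.variables if not v[2]]

layer = TinyLayer()
print([v[0] for v in layer.trainable_weights])      # ['w', 'b']
print([v[0] for v in layer.non_trainable_weights])  # ['running_mean']
```

This is why the training loop above passes `model.trainable_weights` (not all variables) to the optimizer.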
**Exercise**: build a custom Multi Layer Perceptron (i.e. a feedforward neural network) by leveraging the modules we already created. Try and experiment with this model by training it on a dataset (either a Keras one or a fake one).
**Exercise**: try out an Autoencoder. Encoder and Decoder are both feedforward networks. Try to encode patterns of a dataset, decode them and see how much reconstruction error you got!
# SIT742: Modern Data Science
**(Week 03: Data Wrangling)**
---
- Materials in this module include resources collected from various open-source online repositories.
- You are free to use, change and distribute this package.
- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
Prepared by **SIT742 Teaching Team**
---
# Session 3A - Data Wrangling with Pandas
## Table of Content
* Part 1. Scraping data from the web
* Part 2. States and Territories of Australia
* Part 3. Parsing XML files with BeautifulSoup
**Note**: The data available on those services might change, so you may need to adjust the code to accommodate those changes.
---
## Part 1. Scraping data from the web
Many of you will probably be interested in scraping data from the web for your projects. For example, what if we were interested in working with some historical Canadian weather data? Well, we can get that from: http://climate.weather.gc.ca using their API. Requests are going to be formatted like this:
```
import pandas as pd
url_template = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=5415&Year={year}&Month={month}&timeframe=1&submit=Download+Data"
```
Note that we've requested the data be returned as a CSV, and that we're going to supply the month and year as inputs when we fire off the query. To get the data for March 2012, we need to format it with month=3, year=2012:
```
url = url_template.format(month=3, year=2012)
url
```
This is great! We can use the same `read_csv` function as before and just give it a URL as the filename. Awesome.
Upon inspection, we find that (as of 03/2020) there are 0 rows of metadata at the top of this CSV, but pandas knows CSVs can be messy, so there's a `skiprows` option. We parse the dates again, and set 'Date/Time' to be the index column. Here's the resulting dataframe.
```
weather_mar2012 = pd.read_csv(url, skiprows=0, index_col='Date/Time', parse_dates=True, encoding='latin1')
weather_mar2012.head()
```
As before, we can get rid of any columns that don't contain real data using ${\tt .dropna()}$
```
weather_mar2012 = weather_mar2012.dropna(axis=1, how='any')
weather_mar2012.head()
```
Getting better! The Year/Month/Day/Time columns are redundant, though, and the Data Quality column doesn't look too useful. Let's get rid of those.
```
weather_mar2012 = weather_mar2012.drop(['Year', 'Month', 'Day', 'Time'], axis=1)
weather_mar2012[:5]
```
Great! Now let's figure out how to download the whole year. It would be nice if we could just send that as a single request, but like many APIs this one is limited to prevent people from hogging bandwidth. No problem: we can write a function!
```
def download_weather_month(year, month):
    url = url_template.format(year=year, month=month)
    weather_data = pd.read_csv(url, skiprows=0, index_col='Date/Time', parse_dates=True)
    weather_data = weather_data.dropna(axis=1)
    weather_data.columns = [col.replace('\xb0', '') for col in weather_data.columns]
    weather_data = weather_data.drop(['Year', 'Day', 'Month', 'Time'], axis=1)
    return weather_data
```
Now to test that this function does the right thing:
```
download_weather_month(2020, 1).head()
```
Woohoo! Now we can iteratively request all the months using a single line. This will take a little while to run.
```
data_by_month = [download_weather_month(2012, i) for i in range(1, 13)]  # range(1, 13) covers all 12 months
```
Once that's done, it's easy to concatenate all the dataframes together into one big dataframe using ${\tt pandas.concat()}$. And now we have the whole year's data!
```
weather_2012 = pd.concat(data_by_month)
```
This thing is long, so instead of printing out the whole thing, I'm just going to print a quick summary of the ${\tt DataFrame}$ by calling ${\tt .info()}$:
```
weather_2012.info()
```
And a quick reminder, if we wanted to save that data to a file:
```
weather_2012.to_csv('weather_2012.csv')
!ls
```
And finally, something you should do early in the wrangling process: plot the data.
```
# plot that data
import matplotlib.pyplot as plt
# so now 'plt' means matplotlib.pyplot
dateRange = weather_2012.index
temperature = weather_2012['Temp (C)']
df1 = pd.DataFrame({'Temperature' : temperature}, index=dateRange)
plt.plot(df1.index.to_pydatetime(), df1.Temperature)
plt.title("The 2012 annual temperature in Canada")
plt.xlabel("Month")
plt.ylabel("Temperature")
# nothing to see... in iPython you need to specify where the chart will display, usually it's in a new window
# to see them 'inline' use:
%matplotlib inline
#If you add the %matplotlib inline, then you can skip the plt.show() function.
#How to close python warnings
import warnings
warnings.filterwarnings('ignore')
# that's better, try other plots, scatter is popular, also boxplot
df1 = pd.read_csv('weather_2012.csv', low_memory=False)
df1.plot(kind='scatter',x='Dew Point Temp (C)',y='Rel Hum (%)',color='red')
df1.plot(kind='scatter',x='Temp (C)',y='Wind Spd (km/h)',color='yellow')
# show first several 'weather' columns value
weather_2012['Weather'].head()
#Boxplot sample
climategroup1 = df1[df1['Weather']=='Fog']['Temp (C)']
climategroup2 = df1[df1['Weather']=='Rain']['Temp (C)']
climategroup3 = df1[df1['Weather']=='Clear']['Temp (C)']
climategroup4 = df1[df1['Weather']=='Cloudy']['Temp (C)']
data =[climategroup1,climategroup2,climategroup3,climategroup4]
fig1, ax1 = plt.subplots()
ax1.set_title('Temperature Boxplot based on the Climate group')
ax1.set_ylabel('Temperature')
ax1.set_xlabel('Climate Group')
boxplot = ax1.boxplot(data,
                      notch=True,
                      patch_artist=True,
                      labels=['Fog','Rain','Clear','Cloudy'],
                      boxprops=dict(linestyle='--', linewidth=2, color='black'))
colors = ['cyan', 'pink', 'lightgreen', 'tan', 'pink']
for patch, color in zip(boxplot['boxes'], colors):
    patch.set_facecolor(color)
plt.show()
```
## Part 2. States and Territories of Australia
We are interested in getting State and Territory information from Wikipedia, however we do not want to copy and paste the table : )
Here is the URL
https://en.wikipedia.org/wiki/States_and_territories_of_Australia
We need two libraries to do the task:
Check documentations here:
* [urllib](https://docs.python.org/2/library/urllib.html)
* [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)
```
import sys
if sys.version_info[0] == 3:
    from urllib.request import urlopen
else:
    from urllib import urlopen
from bs4 import BeautifulSoup
```
We first save the link in the variable `wiki`.
```
wiki = "https://en.wikipedia.org/wiki/States_and_territories_of_Australia"
```
Then use urlopen to open the page.
If you get "SSL: CERTIFICATE_VERIFY_FAILED", what you need to do is find where "Install Certificates.command" file is, and click it to upgrade the certificate. Then, you should be able to solve the problem.
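If clicking that file is not an option (e.g. on a remote server), one commonly used workaround is to pass an unverified SSL context to `urlopen`. This disables certificate verification, so treat it as a quick experiment only — a sketch:

```python
import ssl
from urllib.request import urlopen  # urlopen accepts a `context` keyword

# Build a context that skips certificate verification (insecure!)
unverified = ssl._create_unverified_context()
print(unverified.verify_mode == ssl.CERT_NONE)  # True

# Usage would then look like:
# page = urlopen(wiki, context=unverified)
```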
```
page = urlopen(wiki)
if sys.version_info[0] == 3:
    page = page.read()
```
You will meet BeautifulSoup later in this subject, so don't worry if you feel uncomfortable with it now. You can always revisit.
We begin by reading in the source code and creating a Beautiful Soup object with the BeautifulSoup function.
```
soup = BeautifulSoup(page, "lxml")
```
Then we print and see.
```
print(soup.prettify())
```
For those who do not know much about HTML, this might be a bit overwhelming, but essentially it contains lots of tags in angled brackets providing structural and formatting information that we don't care about here. What we need is the table.
Let's first check the title.
```
soup.title.string
```
It looks fine; next we would like to find the table.
Let's have a try to extract all contents within the 'table' tag.
```
all_tables = soup.findAll('table')
print(all_tables)
```
This returns a collection of tag objects. It seems that most of the information is useless and it's getting hard to hunt for the table, so we searched online and found instructions here:
https://adesquared.wordpress.com/2013/06/16/using-python-beautifulsoup-to-scrape-a-wikipedia-table/
The class is "wikitable sortable"!! Have a try then.
```
right_table = soup.find('table', class_='wikitable sortable')
print(right_table)
```
Next we need to extract the table header row by finding the first 'tr' tag.
```
head_row = right_table.find('tr')
print(head_row)
```
Then we extract each header name by iterating through the header cells and extracting their text.
The `.findAll` function in Python returns a list containing all the elements, which you can iterate through.
```
header_list = []
headers = head_row.findAll('th')
for header in headers:
    #print header.find(text = True)
    header_list.append(header.find(text = True).strip())
header_list
```
We can probably iterate through this list and then extract contents. But let's take a simple approach of extracting each column separately.
```
flag = []
state = []
abbrev = []
ISO = []
Capital = []
Population = []
Area = []
Gov = []
Premier = []
for row in right_table.findAll("tr"):
    cells = row.findAll('td')
    if len(cells) > 0 : # and len(cells) < 10:
        flag.append(cells[0].find(text=True))
        state.append(cells[1].find(text=True).strip())
        abbrev.append(cells[2].find(text=True).strip())
        ISO.append(cells[3].find(text=True).strip())
        Capital.append(cells[4].find(text=True).strip())
        Population.append(cells[5].find(text=True).strip())
        Area.append(cells[6].find(text=True).strip())
        Gov.append(cells[7].find(text=True).strip())
        Premier.append(cells[9].find(text=True).strip())
```
Next we can add all the lists to a dataframe.
```
df_au = pd.DataFrame()
df_au[header_list[0]] = flag
df_au[header_list[1]] = state
df_au[header_list[2]] = abbrev
df_au[header_list[3]] = ISO
df_au[header_list[4]] = Capital
df_au[header_list[5]] = Population
df_au[header_list[6]] = Area
df_au[header_list[7]] = Gov
df_au[header_list[8]] = Premier
```
Done !
```
df_au
```
## Part 3. Parsing XML files with BeautifulSoup
Now, we are going to demonstrate how to use BeautifulSoup to extract information from the XML file, called "Melbourne_bike_share.xml".
For the documentation of BeautifulSoup, please refer to its <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all">official website</a>.
```
!pip install wget
import wget
link_to_data = 'https://github.com/tulip-lab/sit742/raw/master/Jupyter/data/Melbourne_bike_share.xml'
DataSet = wget.download(link_to_data)
!ls
from bs4 import BeautifulSoup
btree = BeautifulSoup(open("Melbourne_bike_share.xml"),"lxml-xml")
```
You can also print out the BeautifulSoup object by calling the <font color="blue">prettify()</font> function.
```
print(btree.prettify())
```
It is easy to figure out that the information we would like to extract is stored in the following tags:
<ul>
<li>id </li>
<li>featurename </li>
<li>terminalname </li>
<li>nbbikes </li>
<li>nbemptydoc </li>
<li>uploaddate </li>
<li>coordinates </li>
</ul>
Each record is stored between `<row>` and `</row>` tags. To extract information from those tags, except for "coordinates", we use the <font color="blue">find_all()</font> function. Its documentation can be found <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-all">here</a>.
```
featuretags = btree.find_all("featurename")
featuretags
```
The output shows that the <font color="blue"> find_all() </font> returns all the 50 station names. Now, we need to exclude the tags and just keep the text stored between the tags.
```
for feature in featuretags:
    print (feature.string)
```
Now, we can put all the above code together using list comprehensions.
```
featurenames = [feature.string for feature in btree.find_all("featurename")]
featurenames
```
Similarly, we can use the <font color = "blue">find_all()</font> function to extract the other information.
```
nbbikes = [feature.string for feature in btree.find_all("nbbikes")]
nbbikes
NBEmptydoc = [feature.string for feature in btree.find_all("nbemptydoc")]
NBEmptydoc
TerminalNames = [feature.string for feature in btree.find_all("terminalname")]
TerminalNames
UploadDate = [feature.string for feature in btree.find_all("uploaddate")]
UploadDate
ids = [feature.string for feature in btree.find_all("id")]
ids
```
Now, how can we extract the attribute values from the tags called "coordinates"?
```
latitudes = [coord["latitude"] for coord in btree.find_all("coordinates")]
latitudes
longitudes = [coord["longitude"] for coord in btree.find_all("coordinates")]
longitudes
```
After the extraction, we can put all the information in a Pandas DataFrame.
```
import pandas as pd
dataDict = {}
dataDict['Featurename'] = featurenames
dataDict['TerminalName'] = TerminalNames
dataDict['NBBikes'] = nbbikes
dataDict['NBEmptydoc'] = NBEmptydoc
dataDict['UploadDate'] = UploadDate
dataDict['lat'] = latitudes
dataDict['lon'] = longitudes
df = pd.DataFrame(dataDict, index = ids)
df.index.name = 'ID'
df.head()
```
| github_jupyter |
# Cleaning Pipeline
## Importing Packages, defining AC list and Folders
```
# Importing Packages
import pandas as pd
import numpy as np
import re
import glob
import os
import concurrent.futures # for parallel instances
import warnings
# importing helper functions
from cleaning_functions import cleaning
from cleaning_functions import connect_party
from cleaning_functions import connect_party2
##### For when constituencies come from book.xlsx #####
# Defining Selection of Constituencies
df = pd.read_excel('/home/hennes/Downloads/Book.xlsx')
# get relevant pdf numbers
worklist = df[df['Ready for Cleaning and Merging?'] == 'y']['Constituency number'].tolist()
# give appropriate filename endings to items
for idx, item in enumerate(worklist):
    if len(str(item)) == 1:
        worklist[idx] = f'AC00{item}.csv'
    if len(str(item)) == 2:
        worklist[idx] = f'AC0{item}.csv'
    if len(str(item)) == 3:
        worklist[idx] = f'AC{item}.csv'
worklist = tuple(worklist)
# Defining Folders
folder = '/home/hennes/Internship/constituencies/'
save_folder = '/home/hennes/Internship/constituencies_edit/'
old = '/home/hennes/Internship/old_files/'
candidates = pd.read_excel('/home/hennes/Internship/Party_Data_2019.xlsx')
PC_AC = set(sorted([folder.split('-')[0]+'-'+ folder.split('-')[1] for folder in next(os.walk(old))[1]]))
PC_AC_dict = {e.split('-')[1]: e.split('-')[0] for e in PC_AC}
constituencies = sorted([os.path.split(file)[-1] for file in glob.glob(folder+'*') if file.endswith(".csv")]) # list with all files
constituencies = [file for file in constituencies if file.endswith(worklist)]
# list of ACs belonging to individual PCs
PClist = []
unique_pc = set([e.split('-')[0] for e in PC_AC])
edited_ac = [os.path.split(e)[1] for e in glob.glob(save_folder+'*')]
for x in unique_pc:
    PClist.append([e.split('-')[1] for e in PC_AC
                   if e.split('-')[0] == x and e.split('-')[1]+'.csv' in edited_ac])
# defining function options so that it can be executed with concurrent futures
def pipeline(c):
try:
df = cleaning(c, candidates, A_serial= False, year=2019, max_digits = 5, max_value = 1600)
df.to_csv(save_folder+c, index=False)
except Exception as e:
print(e)
print(f'Problem with {c}. Jump to next one. \n')
return
# for when constituencies are just all csv files that are in folder (and not PC)
constituencies = sorted([os.path.split(file)[-1] for file in glob.glob(folder+'*') if file.endswith(".csv")]) # list with all files
```
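As an aside, the zero-padding loop in the cell above can be written more compactly with `str.zfill`, which is equivalent for the 1–3 digit constituency numbers used here (the numbers below are illustrative):

```python
# Equivalent, more compact zero-padding using str.zfill
nums = [7, 42, 123]  # illustrative constituency numbers
worklist = tuple(f"AC{str(n).zfill(3)}.csv" for n in nums)
print(worklist)  # ('AC007.csv', 'AC042.csv', 'AC123.csv')
```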
## Running Program
```
# for running with multiple cores
# ignore pandas userwarnings
warnings.simplefilter(action='ignore', category=UserWarning)
warnings.simplefilter(action='ignore', category=RuntimeWarning)
with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
    executor.map(pipeline, constituencies)
# for running with single core (better troubleshooting)
# ignore pandas userwarnings
warnings.simplefilter(action='ignore', category=UserWarning)
warnings.simplefilter(action='ignore', category=RuntimeWarning)
for c in constituencies:
    pipeline(c)
# to connect party for PC constituencies when constituencies in save folder still in PC format
for element in constituencies:
    df = pd.read_csv(save_folder+element)
    connect_party(df, element)
# to connect party to AC constituencies when constituencies in save folder already in AC format
for element in constituencies:
    df = pd.read_csv(save_folder+element)
    connect_party2(df)
```
| github_jupyter |
Install spark dependencies
```
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://archive.apache.org/dist/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
!tar xf spark-3.1.2-bin-hadoop3.2.tgz
!pip install -q findspark
```
Set environment variables
```
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.1.2-bin-hadoop3.2"
```
Run a local spark session
```
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
```
unzip the dataset
```
!unzip archive.zip
from pyspark.sql.functions import col, round
```
Load the data into Pyspark
```
college_place_df = spark.read.csv("collegePlace.csv", header=True, inferSchema=True)
```
Extract first 5 rows from the spark dataframe
```
college_place_df.show(5)
```
Display the column names in college_place_df
```
print(college_place_df.columns)
```
Filter the dataframe by Age
```
college_place_21_df = college_place_df.filter(college_place_df.Age == 21)
college_place_22_df = college_place_df.filter(college_place_df.Age == 22)
```
Find the dimensions of each Spark dataframe
```
print((college_place_21_df.count(), len(college_place_21_df.columns)))
print((college_place_22_df.count(), len(college_place_22_df.columns)))
```
Find the average cgpa, number of students placed and total number of students in each stream
```
college_place_by_stream = college_place_df.groupBy("Stream").agg({'CGPA':'avg', 'PlacedOrNot':'sum'})
college_student_by_stream = college_place_df.groupBy("Stream").agg({'PlacedOrNot':'count'})
college_place_by_stream.printSchema()
college_student_by_stream.printSchema()
```
Rename columns in spark dataframe
```
college_place_by_stream = college_place_by_stream.withColumnRenamed("avg(CGPA)","Average_CGPA") \
.withColumnRenamed("sum(PlacedOrNot)","Number_of_Students_Placed")
college_student_by_stream = college_student_by_stream.withColumnRenamed("count(PlacedOrNot)","Number_of_Students")
#college_place_by_stream.printSchema()
#college_student_by_stream.printSchema()
#college_place_by_stream.show()
#college_student_by_stream.show()
```
Join college_place_by_stream and college_student_by_stream
```
college_placement_join = college_place_by_stream.join(college_student_by_stream,['Stream'],"inner")
college_placement_join.show()
```
Change the datatype of columns
```
college_placement_join = college_placement_join.withColumn("Number_of_Students_Placed",col("Number_of_Students_Placed").cast("int"))\
.withColumn("Number_of_Students",col("Number_of_Students").cast("int"))
```
Create derived column - Percentage of students placed
```
college_placement_join = college_placement_join.withColumn("percent_placed", round((col("Number_of_Students_Placed")/col("Number_of_Students"))*100,2))
college_placement_join.show()
```
Find which stream has the highest number of placed students
```
college_placement_join = college_placement_join.sort(college_placement_join.Number_of_Students_Placed.desc())
college_placement_join.show()
```
Sort by percentage of students placed
```
college_placement_join = college_placement_join.sort(college_placement_join.percent_placed.desc())
college_placement_join.show()
```
| github_jupyter |
# Search GliderDAC for Pioneer Glider Data
Use ERDDAP's RESTful advanced search to try to find OOI Pioneer glider water temperatures from the IOOS GliderDAC. Use case from Stace Beaulieu (sbeaulieu@whoi.edu)
```
import pandas as pd
```
### First try just searching for "glider"
```
url = 'https://data.ioos.us/gliders/erddap/search/advanced.csv?page=1&itemsPerPage=1000&searchFor={}'.format('glider')
dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution','Dataset ID'])
dft.head()
```
### Now search for all temperature data in specified bounding box and temporal extent
```
start = '2000-01-01T00:00:00Z'
stop = '2017-02-22T00:00:00Z'
lat_min = 39.
lat_max = 41.5
lon_min = -72.
lon_max = -69.
standard_name = 'sea_water_temperature'
endpoint = 'https://data.ioos.us/gliders/erddap/search/advanced.csv'
import pandas as pd
base = (
'{}'
'?page=1'
'&itemsPerPage=1000'
'&searchFor='
'&protocol=(ANY)'
'&cdm_data_type=(ANY)'
'&institution=(ANY)'
'&ioos_category=(ANY)'
'&keywords=(ANY)'
'&long_name=(ANY)'
'&standard_name={}'
'&variableName=(ANY)'
'&maxLat={}'
'&minLon={}'
'&maxLon={}'
'&minLat={}'
'&minTime={}'
'&maxTime={}').format
url = base(
endpoint,
standard_name,
lat_max,
lon_min,
lon_max,
lat_min,
start,
stop
)
print(url)
dft = pd.read_csv(url, usecols=['Title', 'Summary', 'Institution', 'Dataset ID'])
print('Glider Datasets Found = {}'.format(len(dft)))
dft
```
Define a function that returns a Pandas DataFrame based on the dataset ID. The ERDDAP request variables (e.g. pressure, temperature) are hard-coded here, so this routine should be modified for other ERDDAP endpoints or datasets
```
def download_df(glider_id):
    from pandas import DataFrame, read_csv
    # from urllib.error import HTTPError
    uri = ('https://data.ioos.us/gliders/erddap/tabledap/{}.csv'
           '?trajectory,wmo_id,time,latitude,longitude,depth,pressure,temperature'
           '&time>={}'
           '&time<={}'
           '&latitude>={}'
           '&latitude<={}'
           '&longitude>={}'
           '&longitude<={}').format
    url = uri(glider_id,start,stop,lat_min,lat_max,lon_min,lon_max)
    print(url)
    # Not sure if returning an empty df is the best idea.
    try:
        df = read_csv(url, index_col='time', parse_dates=True, skiprows=[1])
    except Exception:
        df = DataFrame()
    return df
# concatenate the dataframes for each dataset into one single dataframe
df = pd.concat(list(map(download_df, dft['Dataset ID'].values)))
print('Total Data Values Found: {}'.format(len(df)))
df.head()
df.tail()
```
# plot up the trajectories with Cartopy (Basemap replacement)
```
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.feature import NaturalEarthFeature
bathym_1000 = NaturalEarthFeature(name='bathymetry_J_1000',
scale='10m', category='physical')
fig, ax = plt.subplots(
figsize=(9, 9),
subplot_kw=dict(projection=ccrs.PlateCarree())
)
ax.coastlines(resolution='10m')
ax.add_feature(bathym_1000, facecolor=[0.9, 0.9, 0.9], edgecolor='none')
dx = dy = 0.5
ax.set_extent([lon_min-dx, lon_max+dx, lat_min-dy, lat_max+dy])
g = df.groupby('trajectory')
for glider in g.groups:
    traj = df[df['trajectory'] == glider]
    ax.plot(traj['longitude'], traj['latitude'], label=glider)
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
ax.legend();
```
| github_jupyter |
# Start-to-Finish Example: `GiRaFFE_NRPy` 1D tests
### Authors: Terrence Pierre Jacques
### Adapted from [Start-to-Finish Example: Head-On Black Hole Collision](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb)
## This module compiles and runs code tests for all 1D initial data options available in GiRaFFE-NRPy+, evolving one-dimensional GRFFE waves.
### NRPy+ Source Code for this module:
* Main python module for all 1D initial data: [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py)

__Options:__
1. [Fast Wave](Tutorial-GiRaFFEfood_NRPy_1D_tests-fast_wave.ipynb)
1. [Alfven Wave](Tutorial-GiRaFFEfood_NRPy_1D_alfven_wave.ipynb)
1. [Degenerate Alfven Wave](Tutorial-GiRaFFEfood_NRPy_1D_tests-degen_Alfven_wave.ipynb)
1. [Three Alfven Waves](Tutorial-GiRaFFEfood_NRPy_1D_tests-three_waves.ipynb)
1. [FFE Breakdown](Tutorial-GiRaFFEfood_NRPy_1D_tests-FFE_breakdown.ipynb)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids
1. [Step 2](#grffe): Output C code for GRFFE evolution
1. [Step 2.a](#mol): Output macros for Method of Lines timestepping
1. [Step 3](#gf_id): Import `GiRaFFEfood_NRPy` initial data modules
1. [Step 4](#cparams): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
1. [Step 5](#mainc): `GiRaFFE_NRPy_standalone.c`: The Main C Code
1. [Step 6](#compileexec): Compile and execute C codes
1. [Step 7](#plots): Data Visualization
1. [Step 8](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='setup'></a>
# Step 1: Set up core functions and parameters for solving GRFFE equations \[Back to [top](#toc)\]
$$\label{setup}$$
```
import os, sys # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
    sys.path.append(nrpy_dir_path)
# Import needed Python modules
import NRPy_param_funcs as par # NRPy+: Parameter interface
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
#Step 0: Set the spatial dimension parameter to 3.
par.set_parval_from_str("grid::DIM", 3)
DIM = par.parval_from_str("grid::DIM")
# TINYDOUBLE = par.Cparameters("REAL", "TINYDOUBLE", 1e-100)
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
dst_basis = "Cartesian"
# Set coordinate system to dst_basis
par.set_parval_from_str("reference_metric::CoordSystem",dst_basis)
rfm.reference_metric()
```
<a id='gf_id'></a>
# Step 3: Import `GiRaFFEfood_NRPy` initial data modules \[Back to [top](#toc)\]
$$\label{gf_id}$$
With the preliminaries out of the way, we will write the C functions to set up initial data. There are two categories of initial data that must be set: the spacetime metric variables, and the GRFFE plasma variables. We will set up the spacetime first, namely the Minkowski spacetime.
Now, we will write out the initial data function for the GRFFE variables.
```
ID_opts = ["AlfvenWave", "ThreeAlfvenWaves", "DegenAlfvenWave", "FastWave", "FFEBD"]
# for initial_data in ID_opts:
initial_data = "FFEBD"
if initial_data=="AlfvenWave":
    import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests as gid
    gid.GiRaFFEfood_NRPy_1D_tests(stagger = True)
    desc = "Generate Alfven wave 1D initial data for GiRaFFEfood_NRPy."
elif initial_data=="ThreeAlfvenWaves":
    import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_three_waves as gid
    gid.GiRaFFEfood_NRPy_1D_tests_three_waves(stagger = True)
    desc = "Generate three Alfven wave 1D initial data for GiRaFFEfood_NRPy."
elif initial_data=="DegenAlfvenWave":
    import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave as gid
    gid.GiRaFFEfood_NRPy_1D_tests_degen_Alfven_wave(stagger = True)
    desc = "Generate degenerate Alfven wave 1D initial data for GiRaFFEfood_NRPy."
elif initial_data=="FastWave":
    import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_fast_wave as gid
    gid.GiRaFFEfood_NRPy_1D_tests_fast_wave(stagger = True)
    desc = "Generate fast wave 1D initial data for GiRaFFEfood_NRPy."
elif initial_data=="FFEBD":
    import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy_1D_tests_FFE_breakdown as gid
    gid.GiRaFFEfood_NRPy_1D_tests_FFE_breakdown(stagger = True)
    desc = "Generate FFE breakdown 1D initial data for GiRaFFEfood_NRPy."
```
We define Jacobians relative to the center of the destination grid, at a point $x^j_{\rm dst}=$(`xx0,xx1,xx2`)${}_{\rm dst}$ on the destination grid:
$$
{\rm Jac\_dUCart\_dDdstUD[i][j]} = \frac{\partial x^i_{\rm Cart}}{\partial x^j_{\rm dst}},
$$
via exact differentiation (courtesy SymPy), and the inverse Jacobian
$$
{\rm Jac\_dUdst\_dDCartUD[i][j]} = \frac{\partial x^i_{\rm dst}}{\partial x^j_{\rm Cart}},
$$
using NRPy+'s `generic_matrix_inverter3x3()` function. In terms of these, the transformation of BSSN tensors from Cartesian to the destination grid's `"reference_metric::CoordSystem"` coordinates may be written:
$$
B^i_{\rm dst} = \frac{\partial x^i_{\rm dst}}{\partial x^\ell_{\rm Cart}} B^\ell_{\rm Cart},
$$
while for lowered indices we have
$$
A^{\rm dst}_{i} =
\frac{\partial x^\ell_{\rm Cart}}{\partial x^i_{\rm dst}} A^{\rm Cart}_{\ell}\\
$$
```
# Step 3: Transform BSSN tensors in Cartesian basis to destination grid basis, using center of dest. grid as origin
# Step 3.a: Next construct Jacobian and inverse Jacobian matrices:
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()
# Step 3.b: Convert basis of all BSSN *vectors* from Cartesian to destination basis
BU_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUrfm_dDCartUD, gid.BU)
ValenciavU_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUrfm_dDCartUD, gid.ValenciavU)
# Note that the below function should really be "...basis_transform_vectorUDfrom_Cartesian_to_rfmbasis.."
AD_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUCart_dDrfmUD, gid.AD)
print("Initial data type = "+ initial_data)
for i in range(DIM):
    print(gid.ValenciavU[i] - ValenciavU_dst[i])
    print(gid.BU[i] - BU_dst[i])
    print(gid.AD[i] - AD_dst[i])
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
dst_basis = "SymTP"
# Set coordinate system to dst_basis
par.set_parval_from_str("reference_metric::CoordSystem",dst_basis)
rfm.reference_metric()
# Step 3: Transform BSSN tensors in Cartesian basis to destination grid basis, using center of dest. grid as origin
# Step 3.a: Next construct Jacobian and inverse Jacobian matrices:
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()
# Step 3.b: Convert basis of all BSSN *vectors* from Cartesian to destination basis
BU_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUrfm_dDCartUD, gid.BU)
ValenciavU_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUrfm_dDCartUD, gid.ValenciavU)
# Note that the below function should really be "...basis_transform_vectorUDfrom_Cartesian_to_rfmbasis.."
AD_dst = rfm.basis_transform_vectorU_from_Cartesian_to_rfmbasis(Jac_dUCart_dDrfmUD, gid.AD)
import GiRaFFEfood_NRPy.BasisTransform as BT
BT.basis_transform(dst_basis, gid.AD, gid.ValenciavU, gid.BU)
def consistency_check(quantity1,quantity2,string):
    if quantity1-quantity2==0:
        print(string+" is in agreement!")
    else:
        print(string+" does not agree!")
        sys.exit(1)
print("Consistency check between GiRaFFEfood_NRPy tutorial and NRPy+ module:")
for i in range(3):
    consistency_check(ValenciavU_dst[i],BT.ValenciavU_dst[i],"ValenciavU"+str(i))
    consistency_check(AD_dst[i],BT.AD_dst[i],"AD"+str(i))
    consistency_check(BU_dst[i],BT.BU_dst[i],"BU"+str(i))
```
<a id='latex_pdf_output'></a>
# Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-staggered.pdf](Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-staggered.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-GiRaFFE_NRPy-1D_tests-staggered",location_of_template_file=os.path.join(".."))
```
| github_jupyter |
# Quantum Counting
To understand this algorithm, it is important that you first understand both Grover’s algorithm and the quantum phase estimation algorithm. Whereas Grover’s algorithm attempts to find a solution to the Oracle, the quantum counting algorithm tells us how many of these solutions there are. This algorithm is interesting as it combines both quantum search and quantum phase estimation.
## Contents
1. [Overview](#overview)
1.1 [Intuition](#intuition)
1.2 [A Closer Look](#closer_look)
2. [The Code](#code)
2.1 [Initialising our Code](#init_code)
2.2 [The Controlled-Grover Iteration](#cont_grover)
2.3 [The Inverse QFT](#inv_qft)
2.4 [Putting it Together](#putting_together)
3. [Simulating](#simulating)
4. [Finding the Number of Solutions](#finding_m)
5. [Exercises](#exercises)
6. [References](#references)
## 1. Overview <a id='overview'></a>
### 1.1 Intuition <a id='intuition'></a>
In quantum counting, we simply use the quantum phase estimation algorithm to find an eigenvalue of a Grover search iteration. You will remember that an iteration of Grover’s algorithm, $G$, rotates the state vector by $\theta$ in the $|\omega\rangle$, $|s’\rangle$ basis:

The percentage of solutions in our search space affects the difference between $|s\rangle$ and $|s’\rangle$. For example, if there are not many solutions, $|s\rangle$ will be very close to $|s’\rangle$ and $\theta$ will be very small. It turns out that the eigenvalues of the Grover iterator are $e^{\pm i\theta}$, and we can extract this using quantum phase estimation (QPE) to estimate the number of solutions ($M$).
### 1.2 A Closer Look <a id='closer_look'></a>
In the $|\omega\rangle$,$|s’\rangle$ basis we can write the Grover iterator as the matrix:
$$
G =
\begin{pmatrix}
\cos{\theta} & -\sin{\theta}\\
\sin{\theta} & \cos{\theta}
\end{pmatrix}
$$
The matrix $G$ has eigenvectors:
$$
\begin{pmatrix}
-i\\
1
\end{pmatrix}
,
\begin{pmatrix}
i\\
1
\end{pmatrix}
$$
with the aforementioned eigenvalues $e^{\pm i\theta}$. Fortunately, we do not need to prepare our register in either of these states; the state $|s\rangle$ lies in the space spanned by $|\omega\rangle$ and $|s'\rangle$, and thus is a superposition of the two eigenvectors.
$$
|s\rangle = \alpha |\omega\rangle + \beta|s'\rangle
$$
As a result, the output of the QPE algorithm will be a superposition of the two phases, and when we measure the register we will obtain one of these two values! We can then use some simple maths to get our estimate of $M$.
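As a quick numerical sanity check (not part of the original notebook), we can confirm that a rotation matrix by $\theta$ really has eigenvalues $e^{\pm i\theta}$; the angle below is an arbitrary example value:

```python
import numpy as np

theta = 0.7  # arbitrary example angle
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Eigenvalues of a real rotation matrix come in a conjugate pair
eigvals = np.linalg.eigvals(G)
print(eigvals)  # approximately cos(theta) ± i·sin(theta), i.e. e^{±i·theta}
```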

## 2. The Code <a id='code'></a>
### 2.1 Initialising our Code <a id='init_code'></a>
First, let’s import everything we’re going to need:
```
import matplotlib.pyplot as plt
import numpy as np
import math
# importing Qiskit
import qiskit
from qiskit import QuantumCircuit, transpile, assemble, Aer
# import basic plot tools
from qiskit.visualization import plot_histogram
```
In this guide we will choose to ‘count’ on the first 4 qubits of our circuit (we call the number of counting qubits $t$, so $t = 4$), and to ‘search’ through the last 4 qubits ($n = 4$). With this in mind, we can start creating the building blocks of our circuit.
### 2.2 The Controlled-Grover Iteration <a id='cont_grover'></a>
We have already covered Grover iterations in the Grover’s algorithm section. Here is an example with an Oracle we know has 5 solutions ($M = 5$) of 16 states ($N = 2^n = 16$), combined with a diffusion operator:
```
def example_grover_iteration():
    """Small circuit with 5/16 solutions"""
    # Do circuit
    qc = QuantumCircuit(4)
    # Oracle
    qc.h([2,3])
    qc.ccx(0,1,2)
    qc.h(2)
    qc.x(2)
    qc.ccx(0,2,3)
    qc.x(2)
    qc.h(3)
    qc.x([1,3])
    qc.h(2)
    qc.mct([0,1,3],2)
    qc.x([1,3])
    qc.h(2)
    # Diffuser
    qc.h(range(3))
    qc.x(range(3))
    qc.z(3)
    qc.mct([0,1,2],3)
    qc.x(range(3))
    qc.h(range(3))
    qc.z(3)
    return qc
```
Notice the python function takes no input and returns a `QuantumCircuit` object with 4 qubits. In the past the functions you created might have modified an existing circuit, but a function like this allows us to turn the `QuantumCircuit` object into a single gate we can then control.
We can use `.to_gate()` and `.control()` to create a controlled gate from a circuit. We will call our Grover iterator `grit` and the controlled Grover iterator `cgrit`:
```
# Create controlled-Grover
grit = example_grover_iteration().to_gate()
grit.label = "Grover"
cgrit = grit.control()
```
### 2.3 The Inverse QFT <a id='inv_qft'></a>
We now need to create an inverse QFT. This code implements the QFT on $n$ qubits:
```
def qft(n):
    """Creates an n-qubit QFT circuit"""
    circuit = QuantumCircuit(n)
    def swap_registers(circuit, n):
        for qubit in range(n//2):
            circuit.swap(qubit, n-qubit-1)
        return circuit
    def qft_rotations(circuit, n):
        """Performs qft on the first n qubits in circuit (without swaps)"""
        if n == 0:
            return circuit
        n -= 1
        circuit.h(n)
        for qubit in range(n):
            circuit.cp(np.pi/2**(n-qubit), qubit, n)
        qft_rotations(circuit, n)
    qft_rotations(circuit, n)
    swap_registers(circuit, n)
    return circuit
```
Again, note we have chosen to return another `QuantumCircuit` object; this is so we can easily invert the gate. We create the gate with $t = 4$ qubits as this is the number of counting qubits we have chosen in this guide:
```
qft_dagger = qft(4).to_gate().inverse()
qft_dagger.label = "QFT†"
```
### 2.4 Putting it Together <a id='putting_together'></a>
We now have everything we need to complete our circuit! Let’s put it together.
First we need to put all qubits in the $|+\rangle$ state:
```
# Create QuantumCircuit
t = 4 # no. of counting qubits
n = 4 # no. of searching qubits
qc = QuantumCircuit(n+t, t) # Circuit with n+t qubits and t classical bits
# Initialize all qubits to |+>
for qubit in range(t+n):
    qc.h(qubit)
# Begin controlled Grover iterations
iterations = 1
for qubit in range(t):
    for i in range(iterations):
        qc.append(cgrit, [qubit] + [*range(t, n+t)])
    iterations *= 2
# Do inverse QFT on counting qubits
qc.append(qft_dagger, range(t))
# Measure counting qubits
qc.measure(range(t), range(t))
# Display the circuit
qc.draw()
```
Great! Now let’s see some results.
## 3. Simulating <a id='simulating'></a>
```
# Execute and see results
qasm_sim = Aer.get_backend('qasm_simulator')
transpiled_qc = transpile(qc, qasm_sim)
qobj = assemble(transpiled_qc)
job = qasm_sim.run(qobj)
hist = job.result().get_counts()
plot_histogram(hist)
```
We can see two values stand out, having a much higher probability of measurement than the rest. These two values correspond to $e^{i\theta}$ and $e^{-i\theta}$, but we can’t see the number of solutions yet. We need to do a little more processing to get this information, so first let us get our output into something we can work with (an `int`).
We will get the string of the most probable result from our output data:
```
measured_str = max(hist, key=hist.get)
```
Let us now store this as an integer:
```
measured_int = int(measured_str,2)
print("Register Output = %i" % measured_int)
```
## 4. Finding the Number of Solutions (M) <a id='finding_m'></a>
We will create a function, `calculate_M()` that takes as input the decimal integer output of our register, the number of counting qubits ($t$) and the number of searching qubits ($n$).
First we want to get $\theta$ from `measured_int`. You will remember that QPE gives us a measured $\text{value} = 2^t \phi$ from the eigenvalue $e^{2\pi i\phi}$, so to get $\theta$ we need to do:
$$
\theta = \text{value}\times\frac{2\pi}{2^t}
$$
Or, in code:
```
theta = (measured_int/(2**t))*math.pi*2
print("Theta = %.5f" % theta)
```
You may remember that we can get the angle $\theta/2$ from the inner product of $|s\rangle$ and $|s'\rangle$:

$$
\langle s'|s\rangle = \cos{\tfrac{\theta}{2}}
$$
And that $|s\rangle$ (a uniform superposition of computational basis states) can be written in terms of $|\omega\rangle$ and $|s'\rangle$ as:
$$
|s\rangle = \sqrt{\tfrac{M}{N}}|\omega\rangle + \sqrt{\tfrac{N-M}{N}}|s'\rangle
$$
The inner product of $|s\rangle$ and $|s'\rangle$ is:
$$
\langle s'|s\rangle = \sqrt{\frac{N-M}{N}} = \cos{\tfrac{\theta}{2}}
$$
From this, squaring both sides gives $\cos^2(\theta/2) = \frac{N-M}{N}$, so $\sin^2(\theta/2) = 1 - \frac{N-M}{N} = \frac{M}{N}$, and therefore:
$$
N\sin^2{\frac{\theta}{2}} = M
$$
From the [Grover's algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) chapter, you will remember that a common way to create a diffusion operator, $U_s$, is actually to implement $-U_s$. This implementation is used in the Grover iteration provided in this chapter. In a normal Grover search, this phase is global and can be ignored, but now we are controlling our Grover iterations, this phase does have an effect. The result is that we have effectively searched for the states that are _not_ solutions, and our quantum counting algorithm will tell us how many states are _not_ solutions. To fix this, we simply calculate $N-M$.
And in code:
```
N = 2**n
M = N * (math.sin(theta/2)**2)
print("No. of Solutions = %.1f" % (N-M))
```
And we can see we have (approximately) the correct answer! We can approximately calculate the error in this answer using:
```
m = t - 1 # Upper bound: Will be less than this
err = (math.sqrt(2*M*N) + N/(2**(m+1)))*(2**(-m))
print("Error < %.2f" % err)
```
Explaining the error calculation is outside the scope of this article, but an explanation can be found in [1].
Finally, here is the finished function `calculate_M()`:
```
def calculate_M(measured_int, t, n):
    """For Processing Output of Quantum Counting"""
    # Calculate Theta
    theta = (measured_int/(2**t))*math.pi*2
    print("Theta = %.5f" % theta)
    # Calculate No. of Solutions
    N = 2**n
    M = N * (math.sin(theta/2)**2)
    print("No. of Solutions = %.1f" % (N-M))
    # Calculate Upper Error Bound
    m = t - 1  # Will be less than this (out of scope)
    err = (math.sqrt(2*M*N) + N/(2**(m+1)))*(2**(-m))
    print("Error < %.2f" % err)
```
## 5. Exercises <a id='exercises'></a>
1. Can you create an oracle with a different number of solutions? How does the accuracy of the quantum counting algorithm change?
2. Can you adapt the circuit to use more or less counting qubits to get a different precision in your result?
## 6. References <a id='references'></a>
[1] Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition (10th ed.). Cambridge University Press, New York, NY, USA.
```
import qiskit
qiskit.__qiskit_version__
```
| github_jupyter |
# Data Analysis
# FINM August Review
# Homework Solution 3
## Imports
```
import pandas as pd
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS
from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
import seaborn as sns
from arch import arch_model
from arch.univariate import GARCH, EWMAVariance
sns.set(font_scale = 1.2, rc={'figure.figsize':(15,8)})
import warnings
warnings.filterwarnings('ignore')
```
## Data
```
data = pd.read_excel("../data/AugustReviewhw_1_data.xlsx").rename(columns={"Unnamed: 0": "Date"}).set_index("Date")
data.head()
```
## 1. Forecast Regressions
SPY returns are our dependent variable:
```
y = data['SPY US Equity']
```
Define a regression summary function that gives us the parameters and statistics we want:
```
def reg_summary(model):
    reg_summary = model.params.to_frame('Parameters')
    reg_summary['t-stats'] = round(model.tvalues, 3)
    reg_summary.loc[r'$R^{2}$'] = [model.rsquared, "-"]
    return round(reg_summary, 3)
```
### 1.1a
Try to explain future SPY returns using dividend-price ratio:
```
X = sm.add_constant(data['Dvd-Price Ratio']).shift(1)
model1 = sm.OLS(y,X,missing='drop').fit()
reg_summary(model1)
```
### 1.1b
Try to explain future SPY returns using 10-year yields:
```
X = sm.add_constant(data['10-yr Yields']).shift(1)
model2 = sm.OLS(y,X,missing='drop').fit()
reg_summary(model2)
```
### 1.1c
Try to explain future SPY returns using both dividend-price ratio and 10-year yields:
```
X = sm.add_constant(data[['Dvd-Price Ratio','10-yr Yields']]).shift(1)
model3 = sm.OLS(y,X,missing='drop').fit()
reg_summary(model3)
```
### 1.2a
Run autoregressions for both dividend-price ratio and 10-year yields:
```
y = data['Dvd-Price Ratio']
X = sm.add_constant(data['Dvd-Price Ratio']).shift(1)
model4 = sm.OLS(y,X,missing='drop').fit()
reg_summary(model4)
data['Dvd-Price Ratio'].plot()
y = data['10-yr Yields']
X = sm.add_constant(data['10-yr Yields']).shift(1)
model5 = sm.OLS(y,X,missing='drop').fit()
reg_summary(model5)
```
### 1.2b
**Does the autoregressive nature of X present problems for OLS estimates?**
We see that for both the Dividend Ratio and the 10-yr Yield, the $R^{2}$ is large and the $\beta$ is close to 1 and statistically significant.
One of the assumptions behind the classic OLS results (the Gauss-Markov theorem) is that the error term has no serial correlation. Typically, if we use a regressor such as these two variables, with such high serial correlation, this will induce serial correlation in the errors. Accordingly, we may not be able to trust the classic t-stats, as noted in the "Notes" of the package regression summaries (`model.summary()`).
Furthermore, this high serial correlation of X means that in a regression such as that of 1.1a, we may have significant small-sample bias.
Accordingly, time-series regressions where the X has high serial correlation tend to rely on large-sample properties and adjusted t-stats.
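The small-sample bias mentioned above can be illustrated with a quick Monte Carlo sketch. This is not part of the homework data; the true coefficient, sample length, and simulation count below are chosen only for illustration:

```python
import numpy as np

# Simulate many short AR(1) samples with beta close to 1 and
# record the OLS estimate of the autoregressive coefficient.
rng = np.random.default_rng(0)
true_beta, T, n_sims = 0.98, 60, 2000
betas = []
for _ in range(n_sims):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = true_beta * x[t-1] + rng.standard_normal()
    # OLS of x_t on a constant and x_{t-1}
    X = np.column_stack([np.ones(T-1), x[:-1]])
    slope = np.linalg.lstsq(X, x[1:], rcond=None)[0][1]
    betas.append(slope)
print(f"true beta = {true_beta}, mean OLS estimate = {np.mean(betas):.3f}")
```

The mean estimate comes out noticeably below the true value, which is the downward small-sample bias that makes naive t-stats unreliable in regressions like 1.1a.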
```
data['10-yr Yields'].plot()
```
## 2. OOS $R^{2}$
Set up x and y and the minimum number of observations we want:
```
X = sm.add_constant(data[['Dvd-Price Ratio','10-yr Yields']]).shift(1)
y = data['SPY US Equity']
min_obv = 60
```
Starting at $t = 60$, loop through each time and observe the error when predicting future SPY returns with a regression (using dividend-price ratio and 10-year yields as factors) as opposed to using the mean as the prediction:
```
err_x, err_null = [], []
for i in range(min_obv, len(y)):
    ### Data up to t
    currX = X.iloc[:i]
    currY = y.iloc[:i]
    ### Fit the model
    model = sm.OLS(currY, currX, missing='drop').fit()
    ### Use the model to predict next SPY returns using the most recent x values
    pred = model.predict(X.iloc[[i]])[0]
    ### Forecast error of the regression
    err_x.append(y.iat[i] - pred)
    ### Null error is the actual value - the mean of previous values
    err_null.append(y.iat[i] - currY.mean())
### Calculate out-of-sample r2 using the errors we calculated
r_sqr_oos = 1 - np.square(err_x).sum() / np.square(err_null).sum()
print('OOS r-squared: ' + str(round(r_sqr_oos, 4)))
```
## 3. Time-Series Models of Volatility
We are using SPY, and we set up the initial parameters we are given:
```
spy = data['SPY US Equity']
var_1 = (0.15 * (1 / (12**0.5)))**2
theta = 0.97
```
Calculate the Expanding Series and Rolling Window estimates:
```
### Expanding Window
var = (spy**2).shift(1).expanding().mean().to_frame('Expanding Window')
### Rolling Window
var['Rolling Window'] = (spy**2).shift(1).rolling(60).mean()
```
Using the arch package, fit a GARCH(1, 1) model. We will use the parameters generated to find our estimates:
```
garch = arch_model(spy, vol='Garch', p=1, o=0, q=1, dist='Normal', mean='Constant')  # avoid shadowing the imported GARCH class
GARCH_model = garch.fit()
GARCH_model.params
```
We have $\theta$ so we do not need the step below to find our estimates. However, this is an example of how to use the arch package to fit an IGARCH model. The package has several methods that could be useful in other applications.
```
IGARCH = arch_model(spy)
IGARCH.volatility = EWMAVariance(theta)
IGARCH_model = IGARCH.fit()
IGARCH_model.params
```
We calculate our variance estimates for the GARCH and IGARCH models:
```
var[['GARCH','IGARCH']] = None
### Initialize with the given parameter
var.iloc[0,2:] = var_1
### Simulate the rest of the period using the initial variance given
for i in range(1, len(var)):
    ### Forecast variance by plugging the GARCH and IGARCH parameters into their respective equations
    var['IGARCH'].iloc[i] = var['IGARCH'].iloc[i-1] * theta + (1-theta)*(spy.iloc[i-1]**2)
    var['GARCH'].iloc[i] = GARCH_model.params['omega'] + var['GARCH'].iloc[i-1] * GARCH_model.params['beta[1]'] + GARCH_model.params['alpha[1]']*(spy.iloc[i-1]**2)
var.head()
```
Let's plot our volatility estimates now:
```
### Convert variance to volatility
vol = var**.5
vol.plot()
plt.title('Volatility Forecasts')
plt.ylabel('Volatility')
plt.show()
```
Volatility estimates for October of 2008:
```
vol.loc["2008-10"]
```
Volatility estimates for December of 2018:
```
vol.loc["2018-12"]
```
| github_jupyter |
# COMPANY NAME MATCHER
One of the challenges with querying company names from different databases is discrepancies in the way entity names are spelt. For example, company “ABC” is shown as “ABC” in the Bloomberg system, but “AB C” in another data source. We want to make sure they actually refer to the same company.
I have decided to tackle this problem in two stages.
## STAGE 1: ACCURATE REGISTERED BUSINESS ENTITY NAMES
A quick search of "SINGTEL" revealed more than 30 different business entities registered in Singapore -- with some devastatingly similar names like "SINGTEL AUSTRALIA HOLDING PTE LTD" and "SINGTEL AUSTRALIA INVESTMENT LTD.". A logical first step is to access the Accounting and Corporate Regulatory Authority's (ACRA) API to cross check the names in our dataset. We can confirm with a high degree of accuracy if there are perfect matches. We will access the API via [data.gov.sg](https://data.gov.sg/dataset/entities-with-unique-entity-number).
## STAGE 2: INTERNAL MATCHING
For non-perfect matches, we will then create a function to check similarity scores within the data set. We will be using Levenshtein distance as our metric for measuring the difference between company names. The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one word into the other.
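To make the metric concrete, here is a minimal dynamic-programming sketch of Levenshtein distance. The matcher below relies on fuzzywuzzy's scores rather than this function; it is included only to illustrate what is being measured:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning string a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j-1] + 1,               # insertion
                            prev[j-1] + (ca != cb)))     # substitution (0 if equal)
        prev = curr
    return prev[-1]

print(levenshtein("ABC", "AB C"))  # 1 -- a single inserted space
```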
## CONCLUSION/LIMITATIONS
```
import requests
import time
import pandas as pd
import numpy as np
import json
from random import randint
import matplotlib.pyplot as plt
import re
from fuzzywuzzy import process
import pycontractions
from cleanco import cleanco
# read in sample company list
companies = pd.read_csv("../data/name_match.csv")
# Convert cols to lowercase
companies.columns = map(str.lower, companies.columns)
# Check length of df
print("number of company names to check", companies.shape[0])
# Preview first few rows
companies.head()
companies.drop(columns=['manual_category'], inplace=True)
```
# STAGE 1: ACCURATE REGISTERED BUSINESS ENTITY NAMES
```
# Defining a function to pre-process text for ACRA API search
# The API's quirk is it does not search strings in brackets well
def acra_process(search_term):
    search_term = search_term.upper()  # Change to uppercase
    search_term = re.sub(r"[\(\[].*?[\)\]]", "", search_term)  # Remove text in brackets
    search_term = re.sub(r"(?:\s)(PTE+\.*)|(?:\s)(LTD+\.*)", "", search_term)  # Remove PTE and LTD with or without periods
    search_term = search_term.strip()  # Remove spaces at the start and end of the string
    return search_term
# Testing the function
search_term = " SIMPTE. KOOLTD HEWLETT-PACKARD SINGAPORE (SALES) [TESTING Div] (ASIA) PTE. LTD. "
acra_process(search_term)
# Creating an empty list to nest accurate names
revised_namelist = []
match_accuracy = []
# Defining a function to query ACRA's API for the accurate version of a company's name
def acra_name_matcher(name_series, companies_df):
    print("<<FINDING ACCURATE SPELLING OF COMPANY NAMES IN ACRA>>\n", "-"*52)
    # Create a counter to account for poor/no matches
    no_record_count = 0
    # Looping through all company names
    for search_term in companies_df[name_series]:
        # Reformatting text before querying API
        cleaned_term = acra_process(search_term)
        # Using data.gov's ACRA API
        query_string = 'https://data.gov.sg/api/action/datastore_search?resource_id=bdb377b8-096f-4f86-9c4f-85bad80ef93c&q=' + cleaned_term
        resp = requests.get(query_string)
        # Convert JSON into Python object
        data = json.loads(resp.content)
        # Accounting for failures in search
        if data["result"]["records"] == []:
            # CASE 1 -- NO MATCH: for records that cannot be found in ACRA
            revised_namelist.append("No Match")
            print("Zero Matches Found!")
            print("Search Term: ", search_term)
            # Adding one to no-record counter
            no_record_count += 1
            # Scoring zero for accuracy
            match_accuracy.append(0)
        else:
            # Create a DataFrame from search results
            search_results = pd.DataFrame(data["result"]["records"])
            # Creating a list of matches
            list_of_matches = search_results["entity_name"][search_results["entity_name"].str.contains(cleaned_term)].tolist()
            if search_term in list_of_matches:
                # CASE 2 -- PERFECT MATCH: Appending official company name filed with ACRA
                revised_namelist.append(search_term)
                # Scoring 100 for perfect accuracy
                match_accuracy.append(100)
            else:
                # Adding one to no-record counter
                no_record_count += 1
                # CASE 3 -- CLOSE MATCH: for records that are close but not perfect
                revised_namelist.append("Close Match")
                print("Close Matches Found!")
                print("Search Term: ", search_term)
                print("Closest Matches: ", list_of_matches, "\n")
                # Scoring zero for accuracy
                match_accuracy.append(0)
    # PRINT SUMMARY
    print("\n---------\n", "PERFECT MATCHES: ", len(revised_namelist)-no_record_count, "/", companies_df.shape[0])
    print(" NO MATCH or POOR MATCH: ", no_record_count, "/", companies_df.shape[0])
    # CREATE NEW COLUMNS IN DATAFRAME
    companies_df["official_name"] = revised_namelist
    companies_df["match_accuracy"] = match_accuracy
    return companies_df
# Calling the function on our data
acra_name_matcher("company_name", companies)
```
# STAGE 2: INTERNAL MATCHING
```
# Defining a function to pre-process text for an internal matching exercise
def internal_process(search_term):
    search_term = search_term.upper()  # Change to uppercase
    search_term = re.sub(r"[\(\[].*?[\)\]]", "", search_term)  # Remove text in brackets
    search_term = re.sub(r"(?:\s)(PTE+\.*)|(?:\s)(LTD+\.*)", "", search_term)  # Remove PTE and LTD with or without periods
    search_term = search_term.strip()  # Remove spaces at the start and end of the string
    search_term = cleanco(search_term).clean_name()  # Use cleanco's library to remove unexpected legal suffixes like LLP
    return search_term
# Creating an empty list to nest matched values
official_name_list = []
# Defining a function to check for similar company names within the dataset
def comparative_name_match(name_series, official_name_series, accuracy_series, companies_df):
    print("<<MATCHING SIMILAR COMPANIES WITHIN DATASET>>\n", "-"*42)
    # Looping through dataset to isolate currently unmatched companies
    for name in range(companies_df.shape[0]):
        matches = ["No Match", "Close Match"]
        if companies_df[official_name_series].iloc[name] in matches:
            # Calling pre-processing function on company name
            string_to_match = internal_process(companies_df[name_series].iloc[name])
            # Calling pre-processing function on perfectly-matched ACRA names through list comprehension
            # Enumerating so that index is present for use later
            string_tuples = [(index, internal_process(word)) for index, word in enumerate(companies_df[official_name_series])]
            string_options = [tup[1] for tup in string_tuples]
            # Using fuzzywuzzy to compute a similarity score between string and ACRA names
            Ratios = process.extract(string_to_match, string_options)
            # Print summary report
            print("<<Finding Match for: ", string_to_match, " >>")
            print("Top 3 Matches (+ Levenshtein Score): ", Ratios[:3])
            # Extract string with the highest matching percentage
            highest = process.extractOne(string_to_match, string_options)
            print("Top Match: ", highest[0])
            print("Score: ", highest[1])
            # We will accept matches only if scores are 90 and above
            if highest[1] >= 90:
                index = [i for i, string in string_tuples if string == highest[0]]
                new_name = companies_df[official_name_series].iloc[index[0]]
                # Appending new name directly into dataframe
                companies_df.at[name, official_name_series] = new_name
                # Giving the match an accuracy score
                # Minus 10 off accuracy as this is a stage 2 match
                companies_df.at[name, accuracy_series] = highest[1] - 10
                print("Match Appended!", "\n")
            else:
                print("**LOW SCORE WARNING** -- As the Levenshtein Score is below 90, we will not be providing a match for this company.", "\n")
    return companies_df
# Calling the function on our data
comparative_name_match("company_name", "official_name","match_accuracy", companies)
```
# CONCLUSION
| github_jupyter |
# Noise2Void - 2D Example for SEM data
```
# We import all our dependencies.
from n2v.models import N2VConfig, N2V
import numpy as np
from csbdeep.utils import plot_history
from n2v.utils.n2v_utils import manipulate_val_data
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator
from matplotlib import pyplot as plt
import urllib
import os
import zipfile
```
# Download Example Data
Data by Reza Shahidi and Gaspar Jekely, Living Systems Institute, Exeter<br>
Thanks!
```
# create a folder for our data.
if not os.path.isdir('./data'):
    os.mkdir('./data')
# check if data has been downloaded already
zipPath = "data/SEM.zip"
if not os.path.exists(zipPath):
    # download and unzip data
    data = urllib.request.urlretrieve('https://cloud.mpi-cbg.de/index.php/s/pXgfbobntrw06lC/download', zipPath)
    with zipfile.ZipFile(zipPath, 'r') as zip_ref:
        zip_ref.extractall("data")
```
# Training Data Preparation
For training we load __one__ set of low-SNR images and use the <code>N2V_DataGenerator</code> to extract training <code>X</code> and validation <code>X_val</code> patches.
```
# We create our DataGenerator-object.
# It will help us load data and extract patches for training and validation.
datagen = N2V_DataGenerator()
# We load all the '.tif' files from the 'data' directory.
# If you want to load other types of files see the RGB example.
# The function will return a list of images (numpy arrays).
imgs = datagen.load_imgs_from_directory(directory = "data/")
# Let's look at the shape of the images.
print(imgs[0].shape,imgs[1].shape)
# The function automatically added two extra dimensions to the images:
# One at the beginning, is used to hold a potential stack of images such as a movie.
# One at the end, represents channels.
# Let's look at the images.
# We have to remove the added extra dimensions to display them as 2D images.
plt.imshow(imgs[0][0,...,0], cmap='magma')
plt.show()
plt.imshow(imgs[1][0,...,0], cmap='magma')
plt.show()
# We will use the first image to extract training patches and store them in 'X'
X = datagen.generate_patches_from_list(imgs[:1], shape=(96,96))
# We will use the second image to extract validation patches.
X_val = datagen.generate_patches_from_list(imgs[1:], shape=(96,96))
# Patches are created so they do not overlap.
# (Note: this is not the case if you specify a number of patches. See the docstring for details!)
# Non-overlapping patches would also allow us to split them into a training and validation set
# per image. This might be an interesting alternative to the split we performed above.
# Just in case you don't know how to access the docstring of a method:
datagen.generate_patches_from_list?
# Let's look at one of our training and validation patches.
plt.figure(figsize=(14,7))
plt.subplot(1,2,1)
plt.imshow(X[0,...,0], cmap='magma')
plt.title('Training Patch');
plt.subplot(1,2,2)
plt.imshow(X_val[0,...,0], cmap='magma')
plt.title('Validation Patch');
```
# Configure
Noise2Void comes with a special config-object, where we store network-architecture and training specific parameters. See the docstring of the <code>N2VConfig</code> constructor for a description of all parameters.
When creating the config-object, we provide the training data <code>X</code>. From <code>X</code> we extract <code>mean</code> and <code>std</code> that will be used to normalize all data before it is processed by the network. We also extract the dimensionality and number of channels from <code>X</code>.
Compared to supervised training (i.e. traditional CARE), we recommend using N2V with an increased <code>train_batch_size</code> and <code>batch_norm</code>.
To keep the network from learning the identity we have to manipulate the input pixels during training. For this we have the parameter <code>n2v_manipulator</code> with default value <code>'uniform_withCP'</code>. Most pixel manipulators will compute the replacement value based on a neighborhood. With <code>n2v_neighborhood_radius</code> we can control its size.
Other pixel manipulators:
* normal_withoutCP: samples the neighborhood according to a normal gaussian distribution, but without the center pixel
* normal_additive: adds a random number to the original pixel value. The random number is sampled from a gaussian distribution with zero-mean and sigma = <code>n2v_neighborhood_radius</code>
* normal_fitted: uses a random value from a gaussian normal distribution with mean equal to the mean of the neighborhood and standard deviation equal to the standard deviation of the neighborhood.
* identity: performs no pixel manipulation
For faster training multiple pixels per input patch can be manipulated. In our experiments we manipulated about 1.6% of the input pixels per patch. For a patch size of 64 by 64 pixels this corresponds to about 64 pixels. This fraction can be tuned via <code>n2v_perc_pix</code>.
For Noise2Void training it is possible to pass arbitrarily large patches to the training method. From these patches random subpatches of size <code>n2v_patch_shape</code> are extracted during training. Default patch shape is set to (64, 64).
<font color='red'>Warning:</font> to make this example notebook execute faster, we have set <code>train_epochs</code> to only 10 and <code>train_steps_per_epoch</code> to only 10. <br>For better results we suggest values of 100, and a few dozen, respectively.
```
# You can increase "train_steps_per_epoch" to get even better results at the price of longer computation.
config = N2VConfig(X, unet_kern_size=3,
train_steps_per_epoch=10,train_epochs=10, train_loss='mse', batch_norm=True,
train_batch_size=128, n2v_perc_pix=1.6, n2v_patch_shape=(64, 64),
n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=5)
# Let's look at the parameters stored in the config-object.
vars(config)
# a name used to identify the model
model_name = 'n2v_2D'
# the base directory in which our model will live
basedir = 'models'
# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)
```
# Training
Training the model will likely take some time. We recommend to monitor the progress with TensorBoard, which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.
You can start TensorBoard in a terminal from the current working directory with `tensorboard --logdir=.`, then connect to http://localhost:6006/ with your browser.
```
# We are ready to start training now.
history = model.train(X, X_val)
```
### After training, lets plot training and validation loss.
```
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss']);
```
## Export Model to be Used with CSBDeep Fiji Plugins and KNIME Workflows
See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.
```
model.export_TF()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/nateanl/audio/blob/mvdr/examples/beamforming/MVDR_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This is a tutorial on how to apply MVDR beamforming by using [torchaudio](https://github.com/pytorch/audio)
-----------
The multi-channel audio example is selected from [ConferencingSpeech](https://github.com/ConferencingSpeech/ConferencingSpeech2021) dataset.
```
original filename: SSB07200001#noise-sound-bible-0038#7.86_6.16_3.00_3.14_4.84_134.5285_191.7899_0.4735#15217#25.16333303751458#0.2101221178590021.wav
```
Note:
- You need to use the nightly torchaudio in order to use the MVDR and InverseSpectrogram modules.
Steps
- Ideal Ratio Mask (IRM) is generated by dividing the clean/noise magnitude by the mixture magnitude.
- We test all three solutions (``ref_channel``, ``stv_evd``, ``stv_power``) of torchaudio's MVDR module.
- We test the single-channel and multi-channel masks for MVDR beamforming. The multi-channel mask is averaged along channel dimension when computing the covariance matrices of speech and noise, respectively.
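The channel-averaging step for the multi-channel mask can be sketched as follows. The shapes here are illustrative placeholders, not taken from the dataset:

```python
import torch

# Hypothetical multi-channel time-frequency mask: (channel, freq, time)
mask = torch.rand(4, 513, 100)

# Average along the channel dimension before computing covariance matrices
mask_avg = mask.mean(dim=0)
print(mask_avg.shape)  # torch.Size([513, 100])
```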
```
!pip install --pre torchaudio -f https://download.pytorch.org/whl/nightly/torch_nightly.html --force
import torch
import torchaudio
import IPython.display as ipd
```
### Load audios of mixture, reverberated clean speech, and dry clean speech.
```
!curl -LJO https://github.com/nateanl/torchaudio_mvdr_tutorial/raw/main/wavs/mix.wav
!curl -LJO https://github.com/nateanl/torchaudio_mvdr_tutorial/raw/main/wavs/reverb_clean.wav
!curl -LJO https://github.com/nateanl/torchaudio_mvdr_tutorial/raw/main/wavs/clean.wav
mix, sr = torchaudio.load('mix.wav')
reverb_clean, sr2 = torchaudio.load('reverb_clean.wav')
clean, sr3 = torchaudio.load('clean.wav')
assert sr == sr2
noise = mix - reverb_clean
```
## Note: The MVDR Module requires ``torch.cdouble`` dtype for noisy STFT. We need to convert the dtype of the waveforms to ``torch.double``
```
mix = mix.to(torch.double)
noise = noise.to(torch.double)
clean = clean.to(torch.double)
reverb_clean = reverb_clean.to(torch.double)
```
### Initialize the Spectrogram and InverseSpectrogram modules
```
stft = torchaudio.transforms.Spectrogram(n_fft=1024, hop_length=256, return_complex=True, power=None)
istft = torchaudio.transforms.InverseSpectrogram(n_fft=1024, hop_length=256)
```
### Compute the complex-valued STFT of mixture, clean speech, and noise
```
spec_mix = stft(mix)
spec_clean = stft(clean)
spec_reverb_clean = stft(reverb_clean)
spec_noise = stft(noise)
```
### Generate the Ideal Ratio Mask (IRM)
Note: we found that using the mask directly performs better than using its square root. This differs slightly from the textbook definition of the IRM.
```
def get_irms(spec_clean, spec_noise, spec_mix):
    # Power spectrograms (squared magnitudes); spec_mix is kept in the
    # signature for convenience but is not needed for the power-ratio masks
    mag_clean = spec_clean.abs() ** 2
    mag_noise = spec_noise.abs() ** 2
    irm_speech = mag_clean / (mag_clean + mag_noise)
    irm_noise = mag_noise / (mag_clean + mag_noise)
    return irm_speech, irm_noise
```
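As a quick numeric sanity check of the ratio above (the bin values below are made up for illustration), the speech and noise masks always sum to one in each time-frequency bin:

```python
# Toy one-bin example (assumed values): the IRM splits each time-frequency
# bin between speech and noise in proportion to their power.
mag_clean = 4.0  # |S|^2 in one T-F bin
mag_noise = 1.0  # |N|^2 in the same bin

irm_speech = mag_clean / (mag_clean + mag_noise)
irm_noise = mag_noise / (mag_clean + mag_noise)
print(irm_speech, irm_noise)  # 0.8 0.2 -- the two masks sum to 1
```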
### Note: We use the reverberant clean speech as the target here; you can also set it to the dry clean speech
```
irm_speech, irm_noise = get_irms(spec_reverb_clean, spec_noise, spec_mix)
```
### Apply MVDR beamforming by using multi-channel masks
```
results_multi = {}
for solution in ['ref_channel', 'stv_evd', 'stv_power']:
mvdr = torchaudio.transforms.MVDR(ref_channel=0, solution=solution, multi_mask=True)
stft_est = mvdr(spec_mix, irm_speech, irm_noise)
est = istft(stft_est, length=mix.shape[-1])
results_multi[solution] = est
```
### Apply MVDR beamforming by using single-channel masks
(We use the first channel as an example. The channel selection may depend on the design of the microphone array.)
```
results_single = {}
for solution in ['ref_channel', 'stv_evd', 'stv_power']:
mvdr = torchaudio.transforms.MVDR(ref_channel=0, solution=solution, multi_mask=False)
stft_est = mvdr(spec_mix, irm_speech[0], irm_noise[0])
est = istft(stft_est, length=mix.shape[-1])
results_single[solution] = est
```
### Compute Si-SDR scores
```
def si_sdr(estimate, reference, epsilon=1e-8):
estimate = estimate - estimate.mean()
reference = reference - reference.mean()
reference_pow = reference.pow(2).mean(axis=1, keepdim=True)
mix_pow = (estimate * reference).mean(axis=1, keepdim=True)
scale = mix_pow / (reference_pow + epsilon)
reference = scale * reference
error = estimate - reference
reference_pow = reference.pow(2)
error_pow = error.pow(2)
reference_pow = reference_pow.mean(axis=1)
error_pow = error_pow.mean(axis=1)
sisdr = 10 * torch.log10(reference_pow) - 10 * torch.log10(error_pow)
return sisdr.item()
```
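To make explicit what the function above computes, here is a plain-Python version of the same formula for a single 1-D signal (a standalone sketch with made-up sample values, no torch required). Note that Si-SDR is invariant to the scale of the estimate:

```python
import math

def si_sdr_1d(estimate, reference, epsilon=1e-8):
    # Same steps as the tensor version: remove the mean, project the estimate
    # onto the reference to find the optimal scale, then compare powers in dB.
    n = len(reference)
    e_mean = sum(estimate) / n
    r_mean = sum(reference) / n
    e = [x - e_mean for x in estimate]
    r = [x - r_mean for x in reference]
    ref_pow = sum(x * x for x in r) / n
    mix_pow = sum(a * b for a, b in zip(e, r)) / n
    scale = mix_pow / (ref_pow + epsilon)
    target = [scale * x for x in r]
    error = [a - b for a, b in zip(e, target)]
    target_pow = sum(x * x for x in target) / n
    error_pow = sum(x * x for x in error) / n
    return 10 * math.log10(target_pow + epsilon) - 10 * math.log10(error_pow + epsilon)

ref = [0.1, -0.4, 0.7, -0.2, 0.5, -0.6]
print(si_sdr_1d([2 * x for x in ref], ref))  # scaled copy of the reference: very high score
print(si_sdr_1d([x + 0.3 * (-1) ** i for i, x in enumerate(ref)], ref))  # noisy estimate: lower score
```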
### Single-channel mask results
```
for solution in results_single:
print(solution+": ", si_sdr(results_single[solution][None,...], reverb_clean[0:1]))
```
### Multi-channel mask results
```
for solution in results_multi:
print(solution+": ", si_sdr(results_multi[solution][None,...], reverb_clean[0:1]))
```
### Display the mixture audio
```
print("Mixture speech")
ipd.Audio(mix[0], rate=16000)
```
### Display the noise
```
print("Noise")
ipd.Audio(noise[0], rate=16000)
```
### Display the clean speech
```
print("Clean speech")
ipd.Audio(clean[0], rate=16000)
```
### Display the enhanced audio
```
print("multi-channel mask, ref_channel solution")
ipd.Audio(results_multi['ref_channel'], rate=16000)
print("multi-channel mask, stv_evd solution")
ipd.Audio(results_multi['stv_evd'], rate=16000)
print("multi-channel mask, stv_power solution")
ipd.Audio(results_multi['stv_power'], rate=16000)
print("single-channel mask, ref_channel solution")
ipd.Audio(results_single['ref_channel'], rate=16000)
print("single-channel mask, stv_evd solution")
ipd.Audio(results_single['stv_evd'], rate=16000)
print("single-channel mask, stv_power solution")
ipd.Audio(results_single['stv_power'], rate=16000)
```
| github_jupyter |
<h1>Data exploration, preprocessing and feature engineering</h1>
In this and the following notebooks, we will demonstrate how to build an ML pipeline that leverages SKLearn feature transformers and the SageMaker XGBoost algorithm and then, after the model is trained, deploy the pipeline (feature transformer and XGBoost) as a SageMaker Inference Pipeline behind a single endpoint for real-time inference.
In particular, in this notebook we tackle the first steps of data exploration and preparation. We will use [Amazon Athena](https://aws.amazon.com/athena/) to query our dataset and get a first insight into data quality and available features, [AWS Glue](https://aws.amazon.com/glue/) to create a Data Catalog, and [Amazon SageMaker Processing](https://docs.aws.amazon.com/sagemaker/latest/dg/processing-job.html) to build the feature transformer model with SKLearn.
```
# When using Amazon SageMaker Studio, please set this variable to True and execute the cell
use_sm_studio = True
if use_sm_studio:
%cd /root/amazon-sagemaker-build-train-deploy/02_data_exploration_and_feature_eng/
# Check SageMaker Python SDK version
import sagemaker
print(sagemaker.__version__)
def versiontuple(v):
return tuple(map(int, (v.split("."))))
if versiontuple(sagemaker.__version__) < versiontuple('2.5.0'):
raise Exception("This notebook requires at least SageMaker Python SDK version 2.5.0. Please install it via pip.")
import boto3
import time
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sagemaker_session = sagemaker.Session()
bucket_name = sagemaker_session.default_bucket()
prefix = 'endtoendmlsm'
print(region)
print(role)
print(bucket_name)
```
We can now copy the dataset used for this use case to our bucket. We will use the `windturbine_raw_data_header.csv` file made available for this workshop in the `gianpo-public` public S3 bucket. In this notebook, we download the file from that bucket and upload it to your bucket so that AWS services can access the data.
```
import boto3
s3 = boto3.resource('s3')
file_key = 'data/raw/windturbine_raw_data_header.csv'
copy_source = {
'Bucket': 'gianpo-public',
'Key': 'endtoendml/{0}'.format(file_key)
}
s3.Bucket(bucket_name).copy(copy_source, '{0}/'.format(prefix) + file_key)
```
The first thing we need now is to infer a schema for our dataset. Thanks to its [integration with AWS Glue](https://docs.aws.amazon.com/athena/latest/ug/glue-athena.html), we will later use Amazon Athena to run SQL queries against our data stored in S3 without the need to import them into a relational database. To do so, Amazon Athena uses the AWS Glue Data Catalog as a central location to store and retrieve table metadata throughout an AWS account. The Athena execution engine, indeed, requires table metadata that instructs it where to read data, how to read it, and other information necessary to process the data.
To organize our Glue Data Catalog we create a new database named `endtoendml-db`. To do so, we create a Glue client via Boto and invoke the `create_database` method.
However, first we want to make sure these AWS resources do not exist yet, to avoid any errors.
```
from notebook_utilities import cleanup_glue_resources
cleanup_glue_resources()
glue_client = boto3.client('glue')
response = glue_client.create_database(DatabaseInput={'Name': 'endtoendml-db'})
response = glue_client.get_database(Name='endtoendml-db')
response
assert response['Database']['Name'] == 'endtoendml-db'
```
Now we define a Glue Crawler that we point to the S3 path where the dataset resides, and the crawler creates table definitions in the Data Catalog.
To grant the correct set of access permission to the crawler, we use one of the roles created before (`GlueServiceRole-endtoendml`) whose policy grants AWS Glue access to data stored in your S3 buckets.
```
response = glue_client.create_crawler(
Name='endtoendml-crawler',
Role='service-role/GlueServiceRole-endtoendml',
DatabaseName='endtoendml-db',
Targets={'S3Targets': [{'Path': '{0}/{1}/data/raw/'.format(bucket_name, prefix)}]}
)
```
We are ready to run the crawler with the `start_crawler` API and to monitor its status upon completion through the `get_crawler_metrics` API.
```
glue_client.start_crawler(Name='endtoendml-crawler')
while glue_client.get_crawler_metrics(CrawlerNameList=['endtoendml-crawler'])['CrawlerMetricsList'][0]['TablesCreated'] == 0:
print('RUNNING')
time.sleep(15)
assert glue_client.get_crawler_metrics(CrawlerNameList=['endtoendml-crawler'])['CrawlerMetricsList'][0]['TablesCreated'] == 1
```
When the crawler has finished its job, we can retrieve the Table definition for the newly created table.
As you can see, the crawler has been able to correctly identify 12 fields, infer a type for each column and assign a name.
```
table = glue_client.get_table(DatabaseName='endtoendml-db', Name='raw')
table
```
Based on our knowledge of the dataset, we can be more specific with column names and types.
```
del table['Table']['CatalogId']
table['Table']['StorageDescriptor']['Columns'] = [{'Name': 'turbine_id', 'Type': 'string'},
{'Name': 'turbine_type', 'Type': 'string'},
{'Name': 'wind_speed', 'Type': 'double'},
{'Name': 'rpm_blade', 'Type': 'double'},
{'Name': 'oil_temperature', 'Type': 'double'},
{'Name': 'oil_level', 'Type': 'double'},
{'Name': 'temperature', 'Type': 'double'},
{'Name': 'humidity', 'Type': 'double'},
{'Name': 'vibrations_frequency', 'Type': 'double'},
{'Name': 'pressure', 'Type': 'double'},
{'Name': 'wind_direction', 'Type': 'string'},
{'Name': 'breakdown', 'Type': 'string'}]
updated_table = table['Table']
updated_table.pop('DatabaseName', None)
updated_table.pop('CreateTime', None)
updated_table.pop('UpdateTime', None)
updated_table.pop('CreatedBy', None)
updated_table.pop('IsRegisteredWithLakeFormation', None)
glue_client.update_table(
DatabaseName='endtoendml-db',
TableInput=updated_table
)
```
<h2>Data exploration with Amazon Athena</h2>
For data exploration, let's install PyAthena, a Python client for Amazon Athena. Note: PyAthena is not maintained by AWS, please visit: https://pypi.org/project/PyAthena/ for additional information.
```
!pip install s3fs
!pip install pyathena
import pyathena
from pyathena import connect
import pandas as pd
athena_cursor = connect(s3_staging_dir='s3://{0}/{1}/staging/'.format(bucket_name, prefix),
region_name=region).cursor()
athena_cursor.execute('SELECT * FROM "endtoendml-db".raw limit 8;')
pd.read_csv(athena_cursor.output_location)
```
Another SQL query to count how many records we have
```
athena_cursor.execute('SELECT COUNT(*) FROM "endtoendml-db".raw;')
pd.read_csv(athena_cursor.output_location)
```
Let's see what the possible values are for the field "breakdown" and how frequently they occur over the entire dataset.
```
athena_cursor.execute('SELECT breakdown, (COUNT(breakdown) * 100.0 / (SELECT COUNT(*) FROM "endtoendml-db".raw)) \
AS percent FROM "endtoendml-db".raw GROUP BY breakdown;')
pd.read_csv(athena_cursor.output_location)
athena_cursor.execute('SELECT breakdown, COUNT(breakdown) AS bd_count FROM "endtoendml-db".raw GROUP BY breakdown;')
df = pd.read_csv(athena_cursor.output_location)
%matplotlib inline
import matplotlib.pyplot as plt
plt.bar(df.breakdown, df.bd_count)
```
We have discovered that the dataset is quite unbalanced, although we are not going to try balancing it.
```
athena_cursor.execute('SELECT DISTINCT(turbine_type) FROM "endtoendml-db".raw')
pd.read_csv(athena_cursor.output_location)
athena_cursor.execute('SELECT COUNT(*) FROM "endtoendml-db".raw WHERE oil_temperature IS NULL GROUP BY oil_temperature')
pd.read_csv(athena_cursor.output_location)
```
We also realized there are a few null values that need to be managed during the data preparation steps.
For the purpose of keeping the data exploration step short during the workshop, we are not going to execute additional queries. However, feel free to explore the dataset more if you have time.
**Note**: you can go to the Amazon Athena console and check query duration under the History tab: queries usually execute in a few seconds, and Pandas then takes a moment to load the results into a dataframe.
## Create an experiment
Before getting started with preprocessing and feature engineering, we want to leverage Amazon SageMaker Experiments to track the experiments that we will be executing.
We are going to create a new experiment and then a new trial, that represents a multi-step ML workflow (e.g. preprocessing stage1, preprocessing stage2, training stage, etc.). Each step of a trial maps to a trial component in SageMaker Experiments.
We will use the Amazon SageMaker Experiments SDK to interact with the service from the notebooks. Additional info and documentation is available here: https://github.com/aws/sagemaker-experiments
```
!pip install sagemaker-experiments
```
Now we create the experiment, or load it if it already exists.
```
import time
from smexperiments import experiment
from botocore.exceptions import ClientError
experiment_name = 'end-to-end-ml-sagemaker-{0}'.format(str(int(time.time())))
print(experiment_name)
current_experiment = None
try:
current_experiment = experiment.Experiment.load(experiment_name)
print('Experiment loaded.')
except ClientError as ex:
if ex.response['Error']['Code'] == 'ResourceNotFound':
# Create experiment
current_experiment = experiment.Experiment.create(experiment_name=experiment_name,
description='SageMaker workshop experiment')
print('Experiment created.')
else:
raise ex
```
Once we have our experiment, we can create a new trial.
```
import time
trial_name = 'sklearn-xgboost-{0}'.format(str(int(time.time())))
current_trial = current_experiment.create_trial(trial_name=trial_name)
```
From now on, we will use the experiment and the trial as configuration parameters for the preprocessing and training jobs, to make sure we track executions.
```
%store experiment_name
%store trial_name
```
<h2>Preprocessing and Feature Engineering with Amazon SageMaker Processing</h2>
The preprocessing and feature engineering code is implemented in the `source_dir/preprocessor.py` file.
You can go through the code and see that a few categorical columns required one-hot encoding, plus we are filling some NaN values based on domain knowledge.
Once the SKLearn fit() and transform() steps are done, we split our dataset 80/20 into train and validation sets and save them to the output paths, whose content will be automatically uploaded to Amazon S3 by SageMaker Processing. Finally, we also save the featurizer model, as it will be reused later for inference.
```
!pygmentize source_dir/preprocessor.py
```
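The real logic lives in `source_dir/preprocessor.py`; as a rough, standalone sketch of just the 80/20 split step described above (the function and parameter names here are assumed for illustration, not taken from the actual script):

```python
import random

def train_val_split(rows, val_ratio=0.2, seed=42):
    # Shuffle deterministically, then carve off the validation share
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_val = int(len(rows) * val_ratio)
    return rows[n_val:], rows[:n_val]  # (train, validation)

train, val = train_val_split(range(100))
print(len(train), len(val))  # 80 20
```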
Configuring an Amazon SageMaker Processing job through the SageMaker Python SDK requires creating a `Processor` object (in this case a `SKLearnProcessor`, since we are using the default SKLearn container for processing); we can specify how many instances to use and which instance type is requested.
```
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor = SKLearnProcessor(role=role,
base_job_name='end-to-end-ml-sm-proc',
instance_type='ml.m5.large',
instance_count=1,
framework_version='0.20.0')
```
Then, we invoke the `run()` method of the `Processor` object to kick off the job, specifying the script to execute, its arguments, and the configuration of inputs and outputs, as shown below.
```
raw_data_path = 's3://{0}/{1}/data/raw/'.format(bucket_name, prefix)
train_data_path = 's3://{0}/{1}/data/preprocessed/train/'.format(bucket_name, prefix)
val_data_path = 's3://{0}/{1}/data/preprocessed/val/'.format(bucket_name, prefix)
model_path = 's3://{0}/{1}/output/sklearn/'.format(bucket_name, prefix)
# Experiment tracking configuration
experiment_config={
"ExperimentName": current_experiment.experiment_name,
"TrialName": current_trial.trial_name,
"TrialComponentDisplayName": "sklearn-preprocessing",
}
sklearn_processor.run(code='source_dir/preprocessor.py',
inputs=[ProcessingInput(input_name='raw_data', source=raw_data_path, destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data', source='/opt/ml/processing/train', destination=train_data_path),
ProcessingOutput(output_name='val_data', source='/opt/ml/processing/val', destination=val_data_path),
ProcessingOutput(output_name='model', source='/opt/ml/processing/model', destination=model_path)],
arguments=['--train-test-split-ratio', '0.2'],
experiment_config=experiment_config)
```
Once the job is complete, we can take a look at the preprocessed dataset by loading the validation features as follows:
```
file_name = 'val_features.csv'
s3_key_prefix = '{0}/data/preprocessed/val/{1}'.format(prefix, file_name)
sagemaker_session.download_data('./', bucket_name, s3_key_prefix)
import pandas as pd
df = pd.read_csv(file_name)
df.head(10)
```
We can see that the categorical variables have been one-hot encoded, and you can verify that there are no NaN values anymore, as expected.
Note that exploring the dataset locally with Pandas, rather than with Amazon Athena, is feasible only because of the limited size of the dataset.
### Experiment analytics
You can visualize experiment analytics either from Amazon SageMaker Studio Experiments plug-in or using the SDK from a notebook, as follows:
```
from sagemaker.analytics import ExperimentAnalytics
analytics = ExperimentAnalytics(experiment_name=experiment_name)
analytics.dataframe()
```
After the preprocessing and feature engineering are completed, you can move to the next notebook in the **03_train_model** folder to start model training.
### Milestone 1 Mar. 15
Team 0 <br />
Milestone 1 <br />
Due: March 15 <br />
Members: <br />
Baer, Miles: Code <br />
Boyer, Nathaniel: Presentation <br />
Cope, Rex: Paper/Demo <br />
Johnson, Charles: Paper/Demo <br />
Smith, Robert: Code <br />
| Deliverable | Percent Complete | Estimated Completion Date | Percent Complete by Next Milestone |
| :--- | :---: | :---: | :-- |
|Code | 5% | May.1 | 30% |
|Paper | 0% | May.1 | 10% |
|Demo | 0% | May.1 | 10% |
|Presentation | 0% | May.1 | 10% |
<br />
<br />
#### 1. What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? <br />
We have chosen a topic for the project and submitted the project proposal. Miles and Rob have started researching the methods needed to
build the Neural Network.
#### 2. What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? <br />
N/A
#### 3. What are the main deliverable goals to meet before the next milestone report, and who is working on them? <br />
Rob/Miles: Finish research and data gathering. Start building Neural Network <br />
Charles/Rex: Complete outline for Paper <br />
John: Complete outline for presentation
### Milestone 2 Mar. 29
Team 0 <br />
Milestone 2 <br />
Due: March 29 <br />
Members: <br />
Baer, Miles: Code <br />
Boyer, Nathaniel: Presentation <br />
Cope, Rex: Paper/Demo <br />
Johnson, Charles: Paper/Demo <br />
Smith, Robert: Code <br />
| Deliverable | Percent Complete | Estimated Completion Date | Percent Complete by Next Milestone |
| :--- | :---: | :---: | :-- |
|Code | 30% | May.1 | 50% |
|Paper | 10% | Apr. 24 | 30% |
|Demo | 0% | May. 3 | 10% |
|Presentation | 10% | May. 3 | 20% |
<br />
<br />
#### 1. What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? <br />
Rob and Miles have gathered enough data to build the network and start training it.
Charles and Rex have completed the outline except the results section.
John has completed the outline for the presentation.
#### 2. What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? <br />
N/A
#### 3. What are the main deliverable goals to meet before the next milestone report, and who is working on them? <br />
Rob/Miles: Start training and testing Neural Network <br />
Charles/Rex: Complete first draft of introduction and background sections. Start the documentation for the demo <br />
John: Complete design and first draft of presentation.
### Milestone 3 Apr. 12
Team 0 <br />
Milestone 3 <br />
Due: April 12 <br />
Members: <br />
Baer, Miles: Code <br />
Boyer, Nathaniel: Presentation <br />
Cope, Rex: Paper/Demo <br />
Johnson, Charles: Paper/Demo <br />
Smith, Robert: Code <br />
| Deliverable | Percent Complete | Estimated Completion Date | Percent Complete by Next Milestone |
| :--- | :---: | :---: | :-- |
|Code | 100% | May.1 | 100% |
|Paper | 30% | Apr. 24 | 100% |
|Demo | 10% | May. 3 | 25% |
|Presentation | 20% | May. 3 | 30% |
<br />
<br />
#### 1. What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? <br />
Rob and Miles have completed the code and have started training the network.
Charles and Rex have started the paper and documentation for the demo.
John has designed the first draft of presentation.
#### 2. What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? <br />
N/A
#### 3. What are the main deliverable goals to meet before the next milestone report, and who is working on them? <br />
Rob/Miles: Continue training and testing Neural Network <br />
Charles/Rex: Complete paper.<br />
John: Continue work on the presentation.
### Milestone 4 Apr. 26
Team 0 <br />
Milestone 4 <br />
Due: April 26 <br />
Members: <br />
Baer, Miles: Code/Paper/Presentation/Demo <br />
Boyer, Nathaniel: Presentation <br />
Cope, Rex: Presentation/Demo <br />
Johnson, Charles: Presentation/Demo <br />
Smith, Robert: Code/Paper/Presentation/Demo <br />
| Deliverable | Percent Complete | Estimated Completion Date | Percent Complete by Next Milestone |
| :--- | :---: | :---: | :-- |
|Code | 100% | May.1 | 100% |
|Paper | 30% | Apr. 24 | 100% |
|Demo | 10% | May. 3 | 100% |
|Presentation | 65% | May. 3 | 100% |
<br />
<br />
#### 1. What deliverable goals established in the last milestone report were accomplished to the anticipated percentage? <br />
Rob and Miles have completed the code and have finished training the network, but will be trying new methods and networks before the demo. <br />
Rob and Miles started and finished the paper. <br />
Rex started documentation for the demo. <br />
Rob has been cleaning up our scripts and compiling instructions for use for the demo <br />
Miles has designed the first draft of presentation, just need to add results and make it look nicer. (Based on files in GitHub at the moment of submission) <br />
#### 2. What deliverable goals established in the last milestone report were not accomplished to the anticipated percentage? <br />
N/A
#### 3. What are the main deliverable goals to meet before the next milestone report, and who is working on them? <br />
Rob/Miles/Rex/Charles/John: Finish the presentation and the demo <br />
```
import pymc3 as pm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import arviz as az
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
```
#### Code 2.1
```
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
```
#### Code 2.2
$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$
The probability of observing six W's in nine tosses, under a value of p = 0.5:
```
stats.binom.pmf(6, n=9, p=0.5)
```
#### Code 2.3 and 2.5
Computing the posterior using a grid approximation.
In the book the following code is not inside a function, but this way it is easier to play with different parameters
```
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
    """Grid approximation of the posterior for the globe-tossing model."""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
#prior = (p_grid >= 0.5).astype(int) # truncated
#prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
```
#### Code 2.3
```
points = 20
w, n = 6, 9
p_grid, posterior = posterior_grid_approx(points, w, n)
plt.plot(p_grid, posterior, 'o-', label='success = {}\ntosses = {}'.format(w, n))
plt.xlabel('probability of water', fontsize=14)
plt.ylabel('posterior probability', fontsize=14)
plt.title('{} points'.format(points))
plt.legend(loc=0);
```
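As a cross-check (this snippet is an addition, not from the book), the same grid approximation can be computed with the standard library alone, using the binomial pmf directly:

```python
import math

def grid_posterior(points=20, w=6, n=9):
    # Uniform grid over p, flat prior, binomial likelihood, then normalize
    p_grid = [i / (points - 1) for i in range(points)]
    likelihood = [math.comb(n, w) * p**w * (1 - p) ** (n - w) for p in p_grid]
    total = sum(likelihood)  # a flat prior cancels in the normalization
    return p_grid, [l / total for l in likelihood]

p_grid, posterior = grid_posterior()
mode_p = max(zip(posterior, p_grid))[1]
print(mode_p)  # the posterior mode lands close to w/n = 2/3
```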
#### Code 2.6
Computing the posterior using the quadratic approximation
```
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform('p', 0, 1)
w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
mean_q = pm.find_MAP()
std_q = ((1/pm.find_hessian(mean_q, vars=[p]))**0.5)[0]
mean_q['p'], std_q
norm = stats.norm(mean_q, std_q)
prob = .89
z = stats.norm.ppf([(1-prob)/2, (1+prob)/2])
pi = mean_q['p'] + std_q * z
pi
```
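For this simple model the quadratic approximation can also be worked out by hand and compared against the PyMC3 result: with a flat prior the posterior mode is w/n, and the curvature of the log-posterior at the mode gives the Gaussian's standard deviation.

```python
import math

w, n = 6, 9
p_map = w / n  # posterior mode under a flat prior
# Negative second derivative of the log-posterior at the mode:
# -d^2/dp^2 [w*log(p) + (n-w)*log(1-p)] = w/p^2 + (n-w)/(1-p)^2
curvature = w / p_map**2 + (n - w) / (1 - p_map) ** 2
std = 1 / math.sqrt(curvature)
print(round(p_map, 3), round(std, 3))  # should agree with find_MAP / find_hessian
```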
#### Code 2.7
```
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x , w+1, n-w+1),
label='True posterior')
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q['p'], std_q),
label='Quadratic approximation')
plt.legend(loc=0, fontsize=13)
plt.title('n = {}'.format(n), fontsize=14)
plt.xlabel('Proportion water', fontsize=14)
plt.ylabel('Density', fontsize=14);
import sys, IPython, scipy, matplotlib, platform
print("""This notebook was created using:\nPython {}\nIPython {}\nPyMC3 {}\nArviZ {}\nNumPy {}\nSciPy {}\nMatplotlib {}\n""".format(sys.version[:5], IPython.__version__, pm.__version__, az.__version__, np.__version__, scipy.__version__, matplotlib.__version__))
```
# Analysis of Groundtruth Biases
## Preamble
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats.contingency import expected_freq
from scipy.stats import fisher_exact
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
pd.set_option('display.width', 100)
pd.set_option('display.max_columns', 200)
pd.set_option('display.max_rows', 200)
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['font.family'] = 'Helvetica'
plt.rcParams['font.size'] = 12
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['legend.fontsize'] = 12
```
## Load Data
```
FILE_FEATURES = '../../data/corpus-validity/wdvc15_features.csv'
FILE_ANNOTATION = '../../data/corpus-validity/wdvc15_annotations.csv'
OUTPUT_DIR = '../../data/classification/'
usecols = ['revisionId', 'selected', 'timestamp', 'registeredUser', 'rollbackReverted']
df_load = pd.read_csv(FILE_FEATURES, index_col=0, usecols=usecols)
df_load['vandalismManual'] = pd.read_csv(FILE_ANNOTATION, index_col=0)
df_load['timestamp'] = pd.to_datetime(df_load['timestamp'])
```
## Biases by Wikidata Reviewers
```
df = df_load[(df_load['selected'] == 'rollback') | (df_load['selected'] == 'inconspicuous')]
df = df[~df['vandalismManual']]
ct_observed = pd.crosstab(df['registeredUser'], df['rollbackReverted']).iloc[:, ::-1]
ct_expected = ct_observed.copy()
ct_expected.loc[:, :] = expected_freq(ct_observed.values)
table = pd.concat([ct_observed, ct_expected], axis=1, keys=['Observed', 'Expected'])
formatters = {
('Expected', True): '{:.1f}'.format,
('Expected', False): '{:.1f}'.format
}
table.to_latex(OUTPUT_DIR + 'table-bias-reviewers.tex', formatters=formatters)
table.style.format(formatters)
fisher_exact(ct_observed.values)[1]
```
## Reviewer Bias over Time
```
df_load['timestamp'] = pd.to_datetime(df_load['timestamp'])
def plot_reviewer_bias_over_time(ax, df_odds_ratio):
# define colors
odds_color = '#800080'
revision_color = 'xkcd:light grey'
revision_color2 = 'xkcd:black'
# init
xticks = np.arange(0, len(df_odds_ratio))
# odds ratio
ax.plot(xticks, df_odds_ratio['odds_ratio'].values, marker='o',
color=odds_color, label='Odds ratio');
ax.tick_params(axis='y', labelcolor=odds_color);
ax.set_xticks(xticks);
ax.set_xticklabels(df_odds_ratio.index);
ax.set_ylabel('Odds ratio', color=odds_color);
ax.set_yscale('log')
ax.fill_between(xticks, df_odds_ratio['ci_start'], df_odds_ratio['ci_end'], interpolate=True,
color=odds_color, alpha=0.4, label='95% CI')
# revisions
ax2 = ax.twinx()
ax2.bar(xticks, df_odds_ratio['n_revisions'].values,
color=revision_color, label='Edit bins');
ax2.tick_params(axis='y', labelcolor=revision_color2);
ax2.set_ylabel('Number of edits', color=revision_color2);
ax2.yaxis.set_major_locator(ticker.MultipleLocator(50))
# z order
ax.set_zorder(ax2.get_zorder() + 1);
ax.patch.set_visible(False);
handles, labels = ax.get_legend_handles_labels()
handles2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(handles + handles2, labels + labels2, loc='upper left')
    return ax.figure
def compute_odds_ratio(ct):
return ct.iloc[0, 0] / ct.iloc[0, 1] / (ct.iloc[1, 0] / ct.iloc[1, 1])
def format_datetime(datetime):
return datetime.strftime('%m/%y')
def format_index(df):
start = format_datetime(df.index[0])
end = format_datetime(df.index[-1])
return '{} - {}'.format(start, end)
def get_CI(odds_ratio, ct):
SE_ln_odds_ratio = math.sqrt((1 / ct.values).sum())
start = math.exp(math.log(odds_ratio) - 1.96 * SE_ln_odds_ratio)
end = math.exp(math.log(odds_ratio) + 1.96 * SE_ln_odds_ratio)
return start, end
import matplotlib.ticker as ticker
df = df_load[(df_load['selected'] == 'rollback') | (df_load['selected'] == 'inconspicuous')].sort_index()
df = df[~df['vandalismManual']]
df.set_index('timestamp', inplace=True)
df_odds_ratio = pd.DataFrame()
step_size = round(len(df) / 4)
for i in range(0, len(df) - 1, step_size):
df1 = df.iloc[i: i + step_size]
ct = pd.crosstab(df1['registeredUser'], df1['rollbackReverted']).iloc[:, ::-1]
odds_ratio = compute_odds_ratio(ct)
ci_start, ci_end = get_CI(odds_ratio, ct)
index = format_index(df1)
df_odds_ratio.loc[index, 'odds_ratio'] = odds_ratio
df_odds_ratio.loc[index, 'ci_start'] = ci_start
df_odds_ratio.loc[index, 'ci_end'] = ci_end
df_odds_ratio.loc[index, 'n_revisions'] = ct.values.sum()
df_odds_ratio
len(df)
fig, ax = plt.subplots(figsize=(6, 3))
fig = plot_reviewer_bias_over_time(ax, df_odds_ratio)
fig.savefig(OUTPUT_DIR + 'plot-reviewer-bias-over-time.pdf', transparent=True, bbox_inches='tight')
```
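The helper functions above follow the standard textbook formulas; a small worked example with made-up counts (not taken from the corpus) shows how they combine:

```python
import math

# Hypothetical 2x2 contingency table: rows = registered / anonymous users,
# columns = rollback-reverted / not reverted
ct = [[30, 10],
      [20, 40]]

odds_ratio = (ct[0][0] / ct[0][1]) / (ct[1][0] / ct[1][1])
# Standard error of the log odds ratio: sqrt of the sum of reciprocal counts
se_ln = math.sqrt(sum(1 / cell for row in ct for cell in row))
ci = (math.exp(math.log(odds_ratio) - 1.96 * se_ln),
      math.exp(math.log(odds_ratio) + 1.96 * se_ln))
print(odds_ratio, ci)  # odds ratio 6.0; the 95% CI excludes 1 for these counts
```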
```
import matplotlib.pyplot as plt
from matplotlib.image import imread
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.datasets import make_blobs, make_circles, make_moons
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_samples, silhouette_score
df_profiles = pd.read_csv("../data/raw/profiles.csv")
df_profiles
# Standardize the data (numeric columns only; StandardScaler cannot handle text fields)
X_std = StandardScaler().fit_transform(df_profiles.select_dtypes(include=np.number))
# Run local implementation of kmeans
km = KMeans(n_clusters=4)
km.fit(X_std)
centroids = km.cluster_centers_
centroids
df_silhouette = pd.DataFrame(columns = ['cluster_type', 'n_cluster', 'avg_score'])
df_silhouette
col = df_profiles.columns
col
rows = df_profiles.index
ex_rows = rows[2:500]
ex_rows
count = []
for each in ex_rows:
each = int(each)
if each not in count:
count.append(each)
count
for a, b in enumerate(col):
print(a,b)
for i, k in enumerate(count):
# Run the Kmeans algorithm
km = KMeans(n_clusters=k)
labels = km.fit_predict(X_std)
centroids = km.cluster_centers_
# Get silhouette samples
silhouette_vals = silhouette_samples(X_std, labels)
    # Sort per-cluster silhouette values (a distinct loop variable keeps the outer index i intact)
    for cluster in np.unique(labels):
cluster_silhouette_vals = silhouette_vals[labels == cluster]
cluster_silhouette_vals.sort()
# Get the average silhouette score
avg_score = np.mean(silhouette_vals)
if k % 50 == 0:
print(k, avg_score)
df_silhouette.loc[i,'cluster_type'] = 'KMeans'
df_silhouette.loc[i,'n_cluster'] = k
df_silhouette.loc[i,'avg_score'] = avg_score
df_silhouette
df_silhouette.to_csv("../data/task2/silhouette130.csv")
x = []
for i in range(500,1000,25):
x.append(i)
x
for i, k in enumerate(x):
# Run the Kmeans algorithm
km = KMeans(n_clusters=k)
labels = km.fit_predict(X_std)
centroids = km.cluster_centers_
# Get silhouette samples
silhouette_vals = silhouette_samples(X_std, labels)
    # Sort per-cluster silhouette values (a distinct loop variable keeps the outer index i intact)
    for cluster in np.unique(labels):
cluster_silhouette_vals = silhouette_vals[labels == cluster]
cluster_silhouette_vals.sort()
# Get the average silhouette score
avg_score = np.mean(silhouette_vals)
if k % 50 == 0:
print(k, avg_score)
df_silhouette.loc[i,'cluster_type'] = 'KMeans'
df_silhouette.loc[i,'n_cluster'] = k
df_silhouette.loc[i,'avg_score'] = avg_score
df_silhouette
df_silhouette.to_csv("../data/task2/silhouette.csv")
for i, k in enumerate([1000, 5000]):
    # Run the KMeans algorithm
    km = KMeans(n_clusters=k)
    labels = km.fit_predict(X_std)
    centroids = km.cluster_centers_
    # Get silhouette samples
    silhouette_vals = silhouette_samples(X_std, labels)
    # Sort silhouette values within each cluster
    for cluster in np.unique(labels):
        cluster_silhouette_vals = silhouette_vals[labels == cluster]
        cluster_silhouette_vals.sort()
    # Get the average silhouette score
    avg_score = np.mean(silhouette_vals)
    if k % 50 == 0:
        print(k, avg_score)
    df_silhouette.loc[i, 'cluster_type'] = 'KMeans'
    df_silhouette.loc[i, 'n_cluster'] = k
    df_silhouette.loc[i, 'avg_score'] = avg_score
df_silhouette.to_csv("../data/task2/silhouette5000.csv")
for i, k in enumerate([2, 3, 4]):
    # Run the KMeans algorithm
    km = KMeans(n_clusters=k)
    labels = km.fit_predict(X_std)
    centroids = km.cluster_centers_
    # Get silhouette samples
    silhouette_vals = silhouette_samples(X_std, labels)
    # Sort silhouette values within each cluster
    for cluster in np.unique(labels):
        cluster_silhouette_vals = silhouette_vals[labels == cluster]
        cluster_silhouette_vals.sort()
    # Get the average silhouette score
    avg_score = np.mean(silhouette_vals)
```
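The sweeps above repeat the same loop body several times. As a hedged sketch of a more compact version (using a synthetic `make_blobs` dataset as a stand-in for the profiles file, which is not available here), the whole search can be collapsed into one loop that collects `(k, score)` pairs:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the profiles data; the real notebook reads profiles.csv.
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)
X_std = StandardScaler().fit_transform(X)

# One sweep instead of several near-identical loops: collect (k, avg_score) pairs.
results = []
for k in [2, 3, 4, 5]:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    results.append((k, silhouette_score(X_std, labels)))

best_k, best_score = max(results, key=lambda kv: kv[1])
print(best_k, round(best_score, 3))
```

The `silhouette_score` helper returns the mean of `silhouette_samples` directly, which removes the need for the inner per-cluster loop when only the average is wanted.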
# The Joukowski Transformation
This is the assignment for the second module, **"Potential vortices and lift,"** of the [_AeroPython_](https://github.com/barbagroup/AeroPython) course. The first module of the course, "Building blocks of potential flow," consisted of three notebooks and ended with the potential flow around a two-dimensional cylinder, obtained by superposing a doublet and a uniform stream. In the second module you learned that adding one more singularity, a vortex, produces a cylinder that experiences lift. You may well ask: *is this knowledge of any use?*
Here is how it becomes useful. Using simple methods from complex analysis, we will turn the flow around a cylinder into the flow around an airfoil. The trick is to use a conformal [*mapping*](https://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D0%BD%D1%84%D0%BE%D1%80%D0%BC%D0%BD%D0%BE%D0%B5_%D0%BE%D1%82%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D0%B5) (a complex function that preserves angles) to go from the cylinder plane to the airfoil plane.
Let's explore this classical approach!
## Introduction
You learned how to obtain the potential flow around a cylinder by superposing a uniform stream and a [doublet](03_Lesson03_doublet.ipynb). You also learned that adding a vortex produces [lift](06_Lesson06_vortexLift.ipynb). So what? Why does this matter? What can we do with the potential flow around a cylinder?
In the days before computers, aerodynamicists and mathematicians used a powerful tool, complex analysis, to study potential flows without directly solving systems of partial differential equations. With it, and with the known solution for the potential flow around a cylinder, they could easily obtain solutions for many different external potential flows, including the flow around several kinds of airfoils.
Today we no longer use this magical tool. Nevertheless, it is still worth knowing the principle behind the magic: conformal mappings. In this assignment we will walk through every step of the procedure that turns the flow around a cylinder into the potential flow around an airfoil by means of a famous conformal transformation, the *Joukowski function*. And you will see how important the potential flow around a cylinder has been in the history of aerodynamics!
Don't worry. There will be little mathematics. And unlike the pioneers of aerodynamics, you won't have to compute anything by hand. Just follow the instructions, and Python will do the heavy lifting for you.
## 1. Complex numbers in Python
We start by defining two complex planes: one contains the points $z = x + iy$, the other the points $\xi = \xi_x+i\xi_y$. The Joukowski function maps a point in the $z$-plane to a point in the $\xi$-plane:
\begin{equation}
\xi = z + \frac{c^2}{z}
\end{equation}
where $c$ is a constant. Before discussing the Joukowski formula, let's practice a bit with complex numbers in Python.
If we use complex numbers, the Joukowski formula takes a simple form,
and there is no need to compute the real and imaginary parts separately.
Python, and therefore NumPy, supports complex numbers out of the box. The imaginary unit $i=\sqrt{-1}$ is written as `j`, *not* `i`, to avoid confusion with the loop variable `i`.
If this is your first encounter with complex variables, try a few simple operations. For example, type the following into the next cell:
```Python
3 + 2j
```
Now try this:
```Python
a = 3
b = 3
z = a + b * 1j
print('z = ', z)
print('The type of the variable is ', type(z))
```
### Exercises:
Get familiar with complex operations in Python and answer the following questions:
1. $(2.75+3.69i)\times(8.55-6.13i)=$
2. $1.4\times e^{i5.32}=$
3. $\frac{7.51-9.15i}{4.43+9.64i}=$
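These expressions can be evaluated directly in Python. The snippet below is a quick sketch of the mechanics (using the built-in complex type and the standard-library `cmath` module), not a substitute for working the exercises:

```python
import cmath

z1 = (2.75 + 3.69j) * (8.55 - 6.13j)   # complex product
z2 = 1.4 * cmath.exp(5.32j)            # polar form via cmath.exp
z3 = (7.51 - 9.15j) / (4.43 + 9.64j)   # complex division

print(z1, z2, z3)
```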
## 2. Shapes created with the Joukowski formula
Start by writing a function that takes `z` and `c` as parameters and returns the transformed value of `z`.
This transformation can produce several types of curves. Use the function you just wrote to carry out the computations described below, and answer the questions.
For simplicity, assume $c=1$.
1. In the $z$-plane, place a circle of radius $R$ greater than $c=1$, say $R=1.5$, centered at the origin. What shape does this circle map to in the $\xi$-plane?
    1. a circle
    2. an ellipse
    3. a symmetric airfoil
    4. an asymmetric airfoil
2. Now place in the $z$-plane a circle centered at $(x_c,y_c)=(c-R, 0)$ with radius $c \lt R \lt 2c$ (for example, $c=1$; $R=1.2$). What shape does the mapping to the $\xi$-plane produce now?
    1. a circle
    2. an ellipse
    3. a symmetric airfoil
    4. an asymmetric airfoil
3. Place the center of the circle at $(x_c, y_c)=(-\Delta x, \Delta y)$, where $\Delta x$ and $\Delta y$ are small positive values, for example $\Delta x=0.1$ and $\Delta y=0.1$. Set the radius to $R = \sqrt{(c - x_c)^2 + y_c^2}$. What do you get in the $\xi$-plane?
    1. a circle
    2. an ellipse
    3. a symmetric airfoil
    4. an asymmetric airfoil
4. Consider the case of the symmetric airfoil. In polar coordinates $(\theta, r=R)$, which point on the circle corresponds to the trailing edge of the airfoil?
    * $\theta=$?
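A minimal sketch of such a mapping function (the name `joukowski` is our choice, not prescribed by the assignment), applied to the origin-centered circle from question 1:

```python
import numpy as np

def joukowski(z, c=1.0):
    """Conformal map xi = z + c**2 / z from the z-plane to the xi-plane."""
    z = np.asarray(z, dtype=complex)
    return z + c**2 / z

# Question 1 setup: a circle of radius R > c centered at the origin.
R = 1.5
theta = np.linspace(0.0, 2.0 * np.pi, 100)
xi = joukowski(R * np.exp(1j * theta))
```

Plotting `xi.real` against `xi.imag` shows the image curve directly.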
## 3. A computational grid in the $z$-plane in polar coordinates
The Joukowski formula associates each point in the $z$-plane with a point in the $\xi$-plane. As you saw in the previous section, this transformation sometimes produces shapes that look very much like airfoils. _So what?_
It turns out that, according to complex analysis, if a [conformal mapping](https://ru.wikipedia.org/wiki/%D0%9A%D0%BE%D0%BD%D1%84%D0%BE%D1%80%D0%BC%D0%BD%D0%BE%D0%B5_%D0%BE%D1%82%D0%BE%D0%B1%D1%80%D0%B0%D0%B6%D0%B5%D0%BD%D0%B8%D0%B5) is applied to a solution of the Laplace equation, the resulting function is also a solution of the Laplace equation.
This means we can map the potential and the stream function of the flow around a cylinder from the $z$-plane to the $\xi$-plane and obtain the flow around an airfoil. The stream function of the flow around the airfoil is given by
$$
\psi(\xi_x, \xi_y) = \psi(\xi_x(x, y), \xi_y(x, y))
$$
in which the complex coordinate $\xi$ and its components $\xi_x$ and $\xi_y$ are obtained from $z=x+iy$ via the Joukowski formula.
By completing this assignment, you will obtain the flow around a symmetric Joukowski airfoil at zero and nonzero angles of attack. The airfoil shape is produced by placing, in the $z$-plane, a circle centered at $(x_c, y_c)=(-0.15, 0)$ with radius $R = 1.15$ (parameter $c=1$). To reach this goal, work through problems (3) to (6) in order.
First, let's build a computational grid in the $z$-plane and see what its nodes look like in the $\xi$-plane after the transformation. Use polar coordinates in the $z$-plane. If grid nodes are placed inside the circle, the mapping sends them outside the airfoil (check this yourself!). This may seem like a problem. On the other hand, the no-penetration boundary condition is imposed on the circle, so there are no streamlines inside it, and that region can simply be ignored.
### Exercises:
Build a computational grid in the $z$-plane in polar coordinates and map it to the $\xi$-plane with the Joukowski formula.
Place $N_r = 100$ grid nodes in the radial direction, with $R \le r \le 5$, and $N_{\theta}=145$ nodes in the circumferential direction, where $\theta$ ranges from 0 to $2\pi$. If you now plot the result with `pyplot.scatter()`, it should look something like this:
<img src="./resources/Problem3.png" width="600">
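One way to sketch this grid in code (assuming, as stated above, a circle of radius $R=1.15$ centered at $(x_c, y_c)=(-0.15, 0)$ and $c=1$; the variable names are ours):

```python
import numpy as np

R, c = 1.15, 1.0
z_c = -0.15 + 0.0j                 # cylinder center in the z-plane
N_r, N_theta = 100, 145

r = np.linspace(R, 5.0, N_r)       # radial direction, R <= r <= 5
theta = np.linspace(0.0, 2.0 * np.pi, N_theta)
T, Rm = np.meshgrid(theta, r)

Z = z_c + Rm * np.exp(1j * T)      # grid nodes in the z-plane
Xi = Z + c**2 / Z                  # mapped nodes in the xi-plane
```

Passing `Xi.real` and `Xi.imag` to `pyplot.scatter()` then shows the mapped grid.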
## 4. Flow around a symmetric Joukowski airfoil at zero angle of attack
### Stream function and streamlines
Consider the potential flow around a cylinder in the $z$-plane. As noted above, $\psi(\xi) = \psi(\xi(z))$. Therefore the value of the stream function at a point in the $z$-plane equals its value at the corresponding point in the $\xi$-plane. We can plot the streamlines in both planes with `pyplot.contour()`, since the stream function is a scalar function.
Use a free-stream speed of $1$, i.e. $U_{\infty}=1$. To obtain a cylinder of radius $R=1.15$, you first need to compute the strength of the doublet.
You should get streamline patterns similar to those in the figures below.
<img src="./resources/Problem4-fig1.png" width="600">
### Velocity vectors and the pressure coefficient
To compute the pressure coefficient, we need the velocity field. The velocity field in the $z$-plane is easy to obtain from the coordinates of the grid nodes. But can we claim, by analogy with the stream function, that the velocities at corresponding points of the $\xi$-plane equal those in the $z$-plane? _The correct answer is no._
The values of the stream function are unchanged between the original and mapped points because the stream function is a solution of the scalar Laplace equation!
Velocity, however, is a vector, and it is not a solution of the Laplace equation. The components of a vector change when the coordinate system is changed by a conformal transformation.
In our case, the $z$- and $\xi$-planes are two different coordinate systems, and the velocity at a given point of the $z$-plane differs from the velocity at the corresponding point of the $\xi$-plane. Some manipulation is required.
The velocities in the $z$- and $\xi$-planes are written as follows:
\begin{equation}
\left\{
\begin{array}{l}
u_z = \frac{\partial \psi}{\partial y} \\
v_z = -\frac{\partial \psi}{\partial x}
\end{array}
\right.
\text{ and }
\left\{
\begin{array}{l}
u_\xi = \frac{\partial \psi}{\partial \xi_y} \\
v_\xi = - \frac{\partial \psi}{\partial \xi_x}
\end{array}
\right.
\end{equation}
How do we obtain $u_\xi$ and $v_\xi$ from $u_z$ and $v_z$? This can be done with the chain rule. It is time to recall the potential function $\phi$, which is also a solution of the Laplace equation. Thus, the values of both the potential and the stream function are unchanged under a conformal mapping. The same holds for the complex potential $F(\xi) = F(\xi(z))= \phi + i\psi$.
By the chain rule, $dF/d\xi=dF/dz\times dz/d\xi$. Therefore,
\begin{equation}
W_\xi = u_\xi - iv_\xi = \frac{d F}{d \xi} = \frac{d F}{d z}\times\frac{d z}{d \xi} = \frac{d F}{d z}/\frac{d \xi}{d z} = (u_z-iv_z) / \frac{d \xi}{d z}
\end{equation}
and
\begin{equation}
\frac{d \xi}{d z} = \frac{d (z + c^2/z)}{dz} = 1 - \left(\frac{c}{z}\right)^2
\end{equation}
Now, using the two relations obtained above, we can compute the velocities $u_\xi$ and $v_\xi$ in the $\xi$-plane.
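That transformation can be sketched as follows (the function name is our choice; the velocities may be scalars or NumPy arrays over the grid):

```python
import numpy as np

def velocity_in_xi_plane(u_z, v_z, z, c=1.0):
    """Transform velocity components from the z-plane to the xi-plane:
    W_xi = (u_z - i*v_z) / (d xi / d z), with d xi / d z = 1 - (c/z)**2."""
    dxi_dz = 1.0 - (c / z)**2
    W_xi = (u_z - 1j * v_z) / dxi_dz
    return W_xi.real, -W_xi.imag

# Sanity check: far from the cylinder, d xi / d z -> 1, so the velocity is unchanged.
u_far, v_far = velocity_in_xi_plane(1.0, 0.0, 100.0 + 0.0j)
```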
If you use `pyplot.quiver()` to visualize the vector fields, the result should look like this:
<img src="./resources/Problem4-fig2.png" width="600">
And the pressure-coefficient field over the computational domain like this:
<img src="./resources/Problem4-fig3.png" width="600">
### Exercises:
* Write Python code to produce these pictures: the streamlines, the velocity vectors, and the pressure-coefficient distribution.
* Answer the following questions:
    1. What is the strength of the doublet?
    2. What is the velocity at the 62nd point on the airfoil surface? Assume the trailing edge has index 1 and that the index increases counterclockwise.
    3. What is the minimum value of the pressure coefficient on the airfoil surface?
## 5. Flow around a symmetric Joukowski airfoil at a nonzero angle of attack, without circulation
Now we want to place the airfoil at an angle of attack (AoA) relative to the oncoming flow. Of course, we can again use the Joukowski transformation, applying it to the flow around a cylinder. *But how?* Can the desired result be obtained by adding a free stream and a doublet? Actually, no. Going that route, we would not be able to single out a closed streamline as we did before, and it is precisely such a closed streamline that plays the role of the cylinder surface.
To obtain a uniform stream at an angle of attack, we simply rotate the coordinate system. First we create a new coordinate system (another complex plane) $z'$ with its origin at the center of the cylinder, in which the $x'$-axis (the real part of $z'$) is parallel to the stream, as shown in the picture.
<img src='./resources/rotating coordinate.png' width=400>
The relationship between the $z'$- and $z$-planes is:
\begin{equation}
z'=\left[ z-(x_c+iy_c) \right]e^{-i\times AoA}
\end{equation}
Или, для координат $x$, $y$, $x'$ и $y'$:
\begin{equation}
\left\{
\begin{array}{l}
x' = (x-x_c)\cos(AoA) + (y-y_c)\sin(AoA) \\
y' = - (x-x_c)\sin(AoA) + (y-y_c)\cos(AoA)
\end{array}
\right.
\end{equation}
where $(x_c, y_c)$ is the position of the cylinder center and $AoA$ is the angle of attack.
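In code, the rotation above can be sketched as (names are ours; `aoa` is in radians):

```python
import numpy as np

def to_rotated_frame(x, y, x_c, y_c, aoa):
    """Coordinates in the z'-frame, whose x'-axis is parallel to the free stream."""
    xp = (x - x_c) * np.cos(aoa) + (y - y_c) * np.sin(aoa)
    yp = -(x - x_c) * np.sin(aoa) + (y - y_c) * np.cos(aoa)
    return xp, yp

# Rotating the point (1, 0) about the origin with a 90-degree angle of attack
xp, yp = to_rotated_frame(1.0, 0.0, 0.0, 0.0, np.pi / 2)
```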
Now, in the new $z'$-plane, we can obtain the flow around the cylinder by adding a free stream at *zero angle of attack* and a doublet *located at the origin of the coordinate system*. Then we obtain the flow in the $z$- and $\xi$-planes. Again, the stream function keeps the same value at a given point in all three coordinate systems ($z'$, $z$, and $\xi$). In the $z$- and $\xi$-planes you should get streamlines like these:
<img src="./resources/Problem5-fig1.png" width="600">
The velocity vectors must be rotated back when passing from the $z'$-plane to the $z$-plane.
\begin{equation}
u-iv=\frac{d F}{d z}=\frac{d F}{d z'}\times\frac{d z'}{d z}=(u'-iv')e^{-i\times AoA}
\end{equation}
Of course, you can also use the explicit form in the components $x$, $y$, $x'$, and $y'$. Derive the corresponding relations yourself if you prefer the explicit form. Once you have the velocities in the $z$-plane, you can apply the skills from the previous exercise to write down the velocities in the $\xi$-plane. The resulting velocity and pressure-coefficient fields should look as follows:
<img src="./resources/Problem5-fig2.png" width="600">
<img src="./resources/Problem5-fig3.png" width="600">
### Exercises:
* Write Python code to produce the pictures shown above. Use an angle of attack $AoA=20^\circ$.
* Answer the following questions:
    1. Do you think the resulting solution is physical? Justify your answer.
    2. Where on the airfoil are the stagnation points? Assume the trailing edge has index 1 and that the index increases counterclockwise.
    3. What is the value of the lift?
    4. What is the value of the drag coefficient?
    5. What is the velocity at the 50th point on the airfoil surface?
    6. What is the pressure coefficient at the 75th point on the airfoil surface?
## 6. Flow around a symmetric Joukowski airfoil at a nonzero angle of attack, with circulation
The result of the previous exercise is not physical. We need a **vortex**. As you know from [Lesson 6: Lift on a cylinder](06_Lesson06_vortexLift.ipynb), adding a vortex (in other words, circulation) to the potential flow around a cylinder moves the stagnation points and also creates lift.
To make the solution more physical, the [Kutta–Joukowski condition](https://ru.wikipedia.org/wiki/%D0%9F%D0%BE%D1%81%D1%82%D1%83%D0%BB%D0%B0%D1%82_%D0%96%D1%83%D0%BA%D0%BE%D0%B2%D1%81%D0%BA%D0%BE%D0%B3%D0%BE_%E2%80%94_%D0%A7%D0%B0%D0%BF%D0%BB%D1%8B%D0%B3%D0%B8%D0%BD%D0%B0) must be satisfied:
>"Of all the possible flows around a wing with a sharp trailing edge, nature realizes only the one in which the velocity at the sharp trailing edge is finite."
This statement lets us choose the right vortex strength. It must be such that the rear stagnation point on the cylinder moves from $\theta=$AoA to $\theta=0^\circ$ in the $z$-plane. What you learned in Lesson 6 should be enough to compute the required strength yourself.
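For reference, the classical result for a cylinder of radius $R$ in a free stream $U_\infty$ at angle of attack is $\Gamma = 4\pi U_\infty R \sin(AoA)$. A quick sketch with the values used in this assignment:

```python
import numpy as np

U_inf, R = 1.0, 1.15
aoa = np.deg2rad(20.0)

# Vortex strength that moves the rear stagnation point to theta = 0 (Kutta condition)
gamma = 4.0 * np.pi * U_inf * R * np.sin(aoa)
print(gamma)
```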
The streamlines, velocity field, and pressure-coefficient field in the $z$- and $\xi$-planes should look like this:
<img src="./resources/Problem6-fig1.png" width="600">
<img src="./resources/Problem6-fig2.png" width="600">
<img src="./resources/Problem6-fig3.png" width="600">
### Exercises
* Write Python code to produce the pictures shown above.
* Answer the following questions:
    1. What is the strength of the vortex?
    2. What is the value of the lift? (Hint: as we know from Lesson 6, the lift acts in the direction normal to the free stream, in our case along the $y'$-axis.)
    3. Try computing the lift and drag directly, from the formulas $L=-\oint p \times \sin{\theta} dA$ and $D=\oint p \times \cos{\theta} dA$. Does the resulting lift satisfy the Kutta–Joukowski theorem? What is the value of the drag coefficient?
    4. Where on the airfoil are the stagnation points? Assume the trailing edge has index 1 and that the index increases counterclockwise.
    5. What is the velocity at the 92nd point on the airfoil surface?
    6. What is the pressure coefficient at the 111th point on the airfoil surface?
    7. What happens to the pressure coefficient at the trailing edge of the airfoil?
```
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom.css', 'r').read()
return HTML(styles)
css_styling()
```
## Trading Context
In this notebook we will explain how to use a TensorTrade `TradingContext`. We will also explain the backbone behind TensorTrade's major components and how they are used inside the framework. We begin with a simple example of the most basic context: specifying the `base_instrument` we are using and the `instruments` we are exchanging. By default, the framework sets these to `USD` and `BTC`; however, we would like to specify our instruments as `BTC` and `ETH`. We also don't want to pass them as parameters to every component we create, since we know they will be the same for all components. Therefore, we use a trading context to set these parameters across the entire framework.
```
import tensortrade as td
from tensortrade.environments import TradingEnvironment
config = {
"base_instrument": "USD",
"instruments": ["BTC", "ETH"]
}
with td.TradingContext(**config):
env = TradingEnvironment(exchange='simulated',
action_scheme='discrete',
reward_scheme='simple')
```
## Special Keys
The following keys are considered special configuration keys:
* `shared`
* `exchanges`
* `actions`
* `rewards`
* `features`
* `slippage`
There is one for each major component of the library. This ensures that each component gets the parameters it needs. Parameters shared across components are stored either at the top level of the dictionary or inside the `shared` special key. After creating the configuration dictionary, all that needs to be done is to use a `TradingContext` in a `with` statement. Any components defined under the `with` statement are injected with the shared parameters, as well as with the parameters from the special configuration key corresponding to the component being initialized.
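The injection mechanism described above can be pictured with a plain-Python sketch. This is only an illustration of the idea, not TensorTrade's actual internals:

```python
SPECIAL_KEYS = {'shared', 'exchanges', 'actions', 'rewards', 'features', 'slippage'}

def split_config(config):
    """Separate shared (top-level) parameters from per-component sections."""
    shared = {k: v for k, v in config.items() if k not in SPECIAL_KEYS}
    shared.update(config.get('shared', {}))
    components = {k: config[k] for k in SPECIAL_KEYS & config.keys() if k != 'shared'}
    return shared, components

config = {
    "base_instrument": "USD",
    "instruments": ["BTC", "ETH"],
    "actions": {"n_actions": 24},
}
shared, components = split_config(config)
```

Here every component would receive `shared`, while only the action scheme would receive the `actions` section.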
## Initializing a Trading Environment from a Configuration
For example, this configuration sets up a trading environment using a `DiscreteActions` with 24 actions.
```
config = {
"base_instrument": "USD",
"instruments": ["BTC", "ETH"],
"actions": {
"n_actions": 24
}
}
with TradingContext(**config):
env = TradingEnvironment(exchange='simulated',
action_scheme='discrete',
reward_scheme='simple')
config = {
"base_instrument": "USD",
"instruments": ["BTC", "ETH"],
"actions": {
"n_actions_per_instrument": 25,
'max_allowed_slippage_percent': 5.0
}
}
with TradingContext(**config):
action_scheme = MultiDiscreteActions()
```
## Initialize Live Trading Environment with Credentials
```
config = {
"base_instrument": "USD",
"instruments": ["BTC", "ETH"],
"actions": {
"n_actions": 25,
'max_allowed_slippage_percent': 5.0
},
"exchanges": {
"credentials": {
"key": "o3874hfowiulejhrbf",
"b64secret": "fo4hruwvoliejrbvwrl",
"passphrase": "f9ohr8oviu3rbvlufb3iuebfo"
}
}
}
with TradingContext(**config):
env = TradingEnvironment(exchange='coinbase',
action_scheme='discrete',
reward_scheme='simple')
config = {
"base_instrument": "USD",
"instruments": ["BTC", "ETH"],
"actions": {
"n_actions": 25,
'max_allowed_slippage_percent': 5.0
}
}
exchange = StochasticExchange(base_price=2, base_volume=2)
with TradingContext(**config):
env = TradingEnvironment(exchange=exchange,
action_scheme='discrete',
reward_scheme='simple')
```
## Initialize from File
The two file formats currently supported for creating a trading context are JSON and YAML. The following are examples of what these files might look like when customizing action-scheme and exchange parameters.
### YAML
```YAML
base_instrument: "EURO"
instruments: ["BTC", "ETH"]
actions:
n_actions: 24
max_allowed_slippage_percent: 5.0
exchanges:
credentials:
api_key: "487r63835t4323"
api_secret_key: "do8u43hgiurwfnlveio"
min_trade_price: 1e-7
max_trade_price: 1e7
min_trade_size: 1e-4
max_trade_size: 1e4
```
### JSON
```JSON
{
"base_instrument": "EURO",
"instruments": ["BTC", "ETH"],
"exchanges": {
"commission_percent": 0.5,
"base_precision": 0.3,
"instrument_precision": 10,
"min_trade_price": 1e-7,
"max_trade_price": 1e7,
"min_trade_size": 1e-4,
"max_trade_size": 1e4,
"initial_balance": 1e5,
"window_size": 5,
"should_pretransform_obs": true,
"max_allowed_slippage_percent": 3.0,
"slippage_model": "uniform"
}
```
```
yaml_path = "data/config/configuration.yaml"
json_path = "data/config/configuration.json"
with TradingContext.from_yaml(yaml_path):
env = TradingEnvironment(exchange='fbm',
action_scheme='discrete',
reward_scheme='simple')
with TradingContext.from_json(json_path):
env = TradingEnvironment(exchange='fbm',
action_scheme='discrete',
reward_scheme='simple')
```
# Tacotron 2 Training
This notebook is designed to provide a guide on how to train Tacotron2 as part of the TTS pipeline. It contains the following sections
1. Tacotron2 and NeMo - An introduction to the Tacotron2 model
2. LJSpeech - How to train Tacotron2 on LJSpeech
3. Custom Datasets - How to collect audio data to train Tacotron2 for different voices and languages
# License
> Copyright 2020 NVIDIA. All Rights Reserved.
>
> Licensed under the Apache License, Version 2.0 (the "License");
> you may not use this file except in compliance with the License.
> You may obtain a copy of the License at
>
> http://www.apache.org/licenses/LICENSE-2.0
>
> Unless required by applicable law or agreed to in writing, software
> distributed under the License is distributed on an "AS IS" BASIS,
> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> See the License for the specific language governing permissions and
> limitations under the License.
```
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Colab and not running locally, uncomment and run this cell.
# !apt-get install sox libsndfile1 ffmpeg
# !pip install wget unidecode
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[tts]
```
# Tacotron2 and NeMo
Tacotron2 is a neural network that converts text characters into a mel spectrogram. For more details on the model, please refer to Nvidia's [Tacotron2 Model Card](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_tacotron2), or the original [paper](https://arxiv.org/abs/1712.05884).
Tacotron2, like most NeMo models, is defined as a LightningModule, allowing for easy training via PyTorch Lightning, and is parameterized by a configuration, currently defined via a YAML file and loaded using Hydra.
Let's take a look at NeMo's pretrained model and how to use it to generate spectrograms.
```
# Load the Tacotron2Model
from nemo.collections.tts.models import Tacotron2Model
from nemo.collections.tts.models.base import SpectrogramGenerator
# Let's see what pretrained models are available
print(Tacotron2Model.list_available_models())
# We can load the pre-trained model as follows
model = Tacotron2Model.from_pretrained("tts_en_tacotron2")
# Tacotron2 is a SpectrogramGenerator
assert isinstance(model, SpectrogramGenerator)
# SpectrogramGenerators in NeMo have two helper functions:
# 1. parse(str_input: str, **kwargs) which takes an English string and produces a token tensor
# 2. generate_spectrogram(tokens: 'torch.tensor', **kwargs) which takes the token tensor and generates a spectrogram
# Let's try it out
tokens = model.parse(str_input = "Hey, this produces speech!")
spectrogram = model.generate_spectrogram(tokens = tokens)
# Now we can visualize the generated spectrogram
# If we want to generate speech, we have to use a vocoder in conjunction to a spectrogram generator.
# Refer to the TTS Inference notebook on how to convert spectrograms to speech.
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
%matplotlib inline
imshow(spectrogram.cpu().detach().numpy()[0,...], origin="lower")
plt.show()
```
# Training
Now that we looked at the Tacotron2 model, let's see how to train a Tacotron2 Model
```
# NeMo's training scripts are stored inside the examples/ folder. Let's grab the tacotron2.py file
# as well as the tacotron2.yaml file
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/tts/tacotron2.py
!mkdir conf && cd conf && wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/tts/conf/tacotron2.yaml && cd ..
```
Let's take a look at the tacotron2.py file
```python
import pytorch_lightning as pl
from nemo.collections.common.callbacks import LogEpochTimeCallback
from nemo.collections.tts.models import Tacotron2Model
from nemo.core.config import hydra_runner
from nemo.utils.exp_manager import exp_manager
# hydra_runner is a thin NeMo wrapper around Hydra
# It looks for a config named tacotron2.yaml inside the conf folder
# Hydra parses the yaml and returns it as a Omegaconf DictConfig
@hydra_runner(config_path="conf", config_name="tacotron2")
def main(cfg):
# Define the Lightning trainer
trainer = pl.Trainer(**cfg.trainer)
# exp_manager is a NeMo construct that helps with logging and checkpointing
exp_manager(trainer, cfg.get("exp_manager", None))
# Define the Tacotron 2 model, this will construct the model as well as
# define the training and validation dataloaders
model = Tacotron2Model(cfg=cfg.model, trainer=trainer)
# Let's add a few more callbacks
lr_logger = pl.callbacks.LearningRateMonitor()
epoch_time_logger = LogEpochTimeCallback()
trainer.callbacks.extend([lr_logger, epoch_time_logger])
# Call lightning trainer's fit() to train the model
trainer.fit(model)
if __name__ == '__main__':
main() # noqa pylint: disable=no-value-for-parameter
```
Let's take a look at the yaml config
```yaml
name: &name Tacotron2
sample_rate: &sr 22050
# <PAD>, <BOS>, <EOS> will be added by the tacotron2.py script
labels: &labels [' ', '!', '"', "'", '(', ')', ',', '-', '.', ':', ';', '?', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', '[', ']',
'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
'u', 'v', 'w', 'x', 'y', 'z']
n_fft: &n_fft 1024
n_mels: &n_mels 80
fmax: &fmax null
n_stride: &n_window_stride 256
pad_value: &pad_value -11.52
train_dataset: ???
validation_datasets: ???
```
The first part of the yaml defines some parameters used by Tacotron. You can see
that the sample rate is set to 22050 for LJSpeech. You can also see that this
model has characters for labels instead of phones. To use phones as input,
see the GlowTTS yaml and setup for an example.
Looking at the yaml, there is `train_dataset: ???` and `validation_datasets: ???`. The ??? indicates to hydra that these values must be passed via the command line or the script will fail.
Looking further down the yaml, we get to the pytorch lightning trainer parameters.
```yaml
trainer:
gpus: 1 # number of gpus
max_epochs: ???
num_nodes: 1
accelerator: ddp
accumulate_grad_batches: 1
checkpoint_callback: False # Provided by exp_manager
logger: False # Provided by exp_manager
gradient_clip_val: 1.0
flush_logs_every_n_steps: 1000
log_every_n_steps: 200
check_val_every_n_epoch: 25
```
These values can be changed either by editing the yaml or through the command line.
Let's grab some simple audio data and test Tacotron2.
```
!wget https://github.com/NVIDIA/NeMo/releases/download/v0.11.0/test_data.tar.gz && mkdir -p tests/data && tar xzf test_data.tar.gz -C tests/data
# Just like ASR, Tacotron2 requires .json files to define the training and validation data.
!cat tests/data/asr/an4_val.json
# Now that we have some sample data, we can try training Tacotron 2
# NOTE: The sample data is not enough to properly train a Tacotron 2. This will not produce a trained model and is only used to illustrate how to train a Tacotron 2 model.
!python tacotron2.py sample_rate=16000 train_dataset=tests/data/asr/an4_train.json validation_datasets=tests/data/asr/an4_val.json trainer.max_epochs=3 trainer.accelerator=null trainer.check_val_every_n_epoch=1
```
# Training Data
In order to train Tacotron2, it is highly recommended to obtain high quality speech data with the following properties:
- Sampling rate of 22050Hz or higher
- Single speaker
- Speech should contain a variety of speech phonemes
- Audio split into segments of 1-10 seconds
- Audio segments should not have silence at the beginning and end
- Audio segments should not contain long silences inside
After obtaining the speech data and splitting it into training, validation, and test sets, you need to construct .json files that tell NeMo where to find these audio files.
The .json files should adhere to the format required by the `nemo.collections.asr.data.audio_to_text.AudioToCharDataset` class. For example, here is a sample .json file
```json
{"audio_filepath": "/path/to/audio1.wav", "text": "the transcription", "duration": 0.82}
{"audio_filepath": "/path/to/audio2.wav", "text": "the other transcription", "duration": 2.1}
...
```
Please note that the duration is in seconds.
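A small sketch of how such a manifest might be generated programmatically (the paths and transcripts below are placeholders):

```python
import json
import os
import tempfile

entries = [
    {"audio_filepath": "/path/to/audio1.wav", "text": "the transcription", "duration": 0.82},
    {"audio_filepath": "/path/to/audio2.wav", "text": "the other transcription", "duration": 2.1},
]

# One JSON object per line, as in the manifest format shown above.
manifest_path = os.path.join(tempfile.gettempdir(), "train_manifest.json")
with open(manifest_path, "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")
```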
Lastly, update the labels inside the Tacotron 2 yaml config if your data contains a different set of characters.
Then you are ready to run your training script:
```bash
python tacotron2.py train_dataset=YOUR_TRAIN.json validation_datasets=YOUR_VAL.json trainer.gpus=-1
```
#### Feature Engineering & Encoding.
This notebook covers pre-processing and feature engineering; the artifacts created from the calculations made here will later be used to deploy our model and to build our pipelines.
**Regular Feature Engineering tasks:**
- Geohashes. **DONE**
- Difference between total area and constructed area. **DONE**
- Difference between each house's area and its district's mean area **DONE**
**Spatial Feature Engineering tasks:**
- Point of interest features **DONE**
- Distance Bands **DONE**
- GeoClustering
**Encodings**
**Correlations**
```
import pandas as pd
import numpy as np
import seaborn as sns
import geohash2 as gh #pip install geohash2
import gc
lima_data = pd.read_csv('../data/2020_Notebook01_clean_data.csv')
lima_data.describe()
#The coordinates were scraped without the -1
lima_data['latitud'] = lima_data['latitud']*-1
lima_data['longitud'] = lima_data['longitud']*-1
```
#### Feature Engineering
```
lima_data['areas_diff'] = lima_data['Area_total'] - lima_data['Area_constr']
lima_data['areas_proporcion'] = lima_data['Area_total']/lima_data['Area_constr']
drop_prop = lima_data[lima_data['areas_proporcion'] >= 4.0].index
lima_data.drop(drop_prop, inplace=True)
#This is a proxy to detect outliers, but it's not the full outlier processing
tmp_table = lima_data.groupby('Barrio', as_index=False)['Area_total'].mean()
tmp_table = pd.merge(tmp_table, lima_data, on=['Barrio'], how='inner', suffixes=('_TMP','_ORIGIN'))
lima_data['diff_total_mean_district'] = tmp_table['Area_total_ORIGIN'] - tmp_table['Area_total_TMP']
lima_data.describe()
```
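The `diff_total_mean_district` feature above is simply each house's area minus its district's mean area. A pure-Python sketch of the same idea, on toy data rather than the real dataframe:

```python
from collections import defaultdict

def mean_diff_by_group(groups, values):
    """Return each value minus the mean of its group."""
    totals = defaultdict(lambda: [0.0, 0])
    for g, v in zip(groups, values):
        totals[g][0] += v
        totals[g][1] += 1
    means = {g: s / n for g, (s, n) in totals.items()}
    return [v - means[g] for g, v in zip(groups, values)]

# Toy example with two hypothetical districts
districts = ["Miraflores", "Miraflores", "Surco"]
areas = [100.0, 300.0, 250.0]
print(mean_diff_by_group(districts, areas))  # [-100.0, 100.0, 0.0]
```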
#### GeoHashes
```
for num in range(5, 10):
    lima_data['geohash_grado{}'.format(num)] = lima_data.apply(lambda x: gh.encode(x['latitud'], x['longitud'], precision=num), axis=1)
top_geohashes = lima_data['geohash_grado7'].value_counts().iloc[0:7].keys().tolist()
#Which districts have the most housing listings?
lima_data[lima_data['geohash_grado7'].isin(top_geohashes)]['Barrio'].value_counts()
```
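Under the hood, a geohash interleaves successive bisections of longitude and latitude and packs every 5 bits into a base-32 character. A minimal sketch of the encoder (not the actual `geohash2` implementation):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash_encode(lat, lon, precision=7):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code, ch, bit, even = [], 0, 0, True  # even-numbered bits refine longitude
    while len(code) < precision:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                ch = (ch << 1) | 1
                lon_lo = mid
            else:
                ch <<= 1
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                ch = (ch << 1) | 1
                lat_lo = mid
            else:
                ch <<= 1
                lat_hi = mid
        even = not even
        bit += 1
        if bit == 5:  # every 5 bits become one base-32 character
            code.append(BASE32[ch])
            bit, ch = 0, 0
    return "".join(code)

print(geohash_encode(57.64911, 10.40744, 11))  # u4pruydqqvj (classic example)
```

Longer prefixes mean smaller cells, which is why `geohash_grado7` groups houses into neighborhood-sized buckets.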
#### CAVEAT! Many posts don't have exact coordinates, only nearby reference points, which is why the same location appears across different houses. We found 77 repeated coordinates shared by different houses. In La Molina the reference point is El Molicentro; Surco and San Isidro have their own reference points as well.
```
lima_data[lima_data['geohash_grado7'] == top_geohashes[0]][['latitud','longitud']].value_counts()
print(lima_data[lima_data['geohash_grado7'] == top_geohashes[1]][['latitud','longitud']].value_counts())
print(lima_data[lima_data['geohash_grado7'] == top_geohashes[2]][['latitud','longitud']].value_counts())
lima_data.filter(regex='^(geohash*)').describe()
```
## Spatial Feature Engineering
```
import geopandas
import osmnx as ox
from shapely.geometry import Point
import matplotlib.pyplot as plt  # needed for the buffer plot below
import warnings
warnings.filterwarnings('ignore')
#We build an extra feature that is not included in the notebook 2 output. Since we have price per total square meter,
#we can derive price per constructed square meter by pivoting values.
lima_data['Precio_m2_constr'] = (lima_data['Precio_m2_total'] * lima_data['Area_total'])/lima_data['Area_constr']
#We restrict the data to Lima, Callao and Cañete.
filter_1 = (lima_data['Ciudad'].isin(['Lima','Callao','Cañete']))
filter_2 = (lima_data['latitud']!= 0)
lima_data.where(filter_1 & filter_2 , inplace=True)
#Import the Lima Shape File per districts.
peru_districts = geopandas.read_file('../data/GeoShape_Distritos/DISTRITOS.shp')
lima_districts = peru_districts[peru_districts['DEPARTAMEN'] == 'LIMA']
#Aggregate Polygons geometries to get Lima Department Polygon:
lima_department = geopandas.GeoSeries([lima_districts[lima_districts['DEPARTAMEN']=='LIMA'].unary_union])
#Aggregate polygon geometries to get the Lima Province polygon:
lima_province = geopandas.GeoSeries([lima_districts[lima_districts['PROVINCIA']=='LIMA'].unary_union])
#We only want districts inside Lima.
lima_filter = (peru_districts['DEPARTAMEN'] == 'LIMA')
lima_districts = peru_districts[lima_filter]
#We need shapely object. We can transform lats and longs into a shapely.Point object
gdf_lima = geopandas.GeoDataFrame(
lima_data, geometry=geopandas.points_from_xy(lima_data.longitud, lima_data.latitud))
gdf_lima = gdf_lima[gdf_lima['geometry'].apply(lambda x: x.within(lima_districts.values[0]))] #Validator.
#Temporal drop: Since there are many outliers we'll drop some values.
# -------------------------------------------------------------------
gdf_lima = gdf_lima[gdf_lima['Precio_m2_total'] < 9000]
gdf_lima = gdf_lima[gdf_lima['Precio_m2_constr']<11000]
#Re-setting index
gdf_lima = gdf_lima.reset_index(drop=True).reset_index().rename(columns={'index':'ID'})
#Checking the coordinate reference system
gdf_lima = gdf_lima.set_crs('epsg:4326')
gdf_lima.crs
```
### Map Matching.
We will extract points of interest from OSMnx and transfer those features to our housing dataset. First we restrict our total area of interest, then we join that dataframe with our origins. We chose these categories: 'school','restaurant','parking','marketplace','bank','college','university','bus_station','police','clinic'
```
gdf_ch = gdf_lima.unary_union.convex_hull #Lima's Geometry
#amenities = ox.geometries_from_polygon(gdf_ch, tags={"amenity":['school','restaurant','parking','marketplace','bank','college','university','bus_station','police','clinic']})[["amenity","name","geometry"]]
#Inside Lima Filter
#amenities = amenities[amenities['geometry'].apply(lambda x: x.within(lima_province.values[0]))]
#amenities.to_csv('amenities.csv')
#Load our Amenities Dataset
from shapely import wkt
amenities = pd.read_csv('../data/to_load/amenities.csv')
geometry = amenities['geometry'].map(wkt.loads)
amenities.drop('geometry', axis=1, inplace=True)
amenities = geopandas.GeoDataFrame(amenities, crs='epsg:4326', geometry = geometry)
#Creating Buffer.
#--------------------------------------BUFFER HYPERPARAM--------------------------------------------
BUFFER_SIZE = 0.01
gdf_lima['buffer_500m'] = gdf_lima.buffer(BUFFER_SIZE)
#Plot a few buffers to be sure they make sense.
buffer = gdf_lima.iloc[0:6,:]['buffer_500m']
fig, ax = plt.subplots(1, figsize=(12,12))
lima_province.plot(ax=ax,color='grey', alpha=0.70)
buffer.plot(ax=ax)
```
#### Creating All Point Of interest Features:
**CAVEAT:** 0.005 is a hyperparameter that needs to be validated.
```
### CREATING POINT OF INTEREST FEATURES
dict_poi = {}
gdf_lima['buffer_500m'] = gdf_lima.buffer(0.005)
for amenity in amenities['amenity'].unique():
    amenity_df = amenities[amenities['amenity'] == amenity]
    joined = geopandas.sjoin(amenity_df, gdf_lima.set_geometry('buffer_500m')[['ID','buffer_500m']], op='within')
    poi_count = joined.reset_index().groupby("ID")["osmid"].count().to_frame('{}_poi_count'.format(amenity))
    dict_poi[amenity] = poi_count
#Merge with Origin Data (GDF)
#GDF size = 3477,25 - dict-bank 2776,1
for amenity_name, poi_count in dict_poi.items():
    temp_df = dict_poi[amenity_name].reset_index()
    gdf_lima = gdf_lima.merge(temp_df, on=['ID'], how='left').fillna(0.0)
```
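Conceptually, each `*_poi_count` column is just the number of amenities falling inside a house's buffer. A simplified planar sketch of that idea (not the `sjoin` call above, and using made-up coordinates):

```python
import math

def poi_counts(houses, pois, radius):
    """Count POIs within `radius` of each house (planar distance, in degrees here)."""
    return [sum(1 for px, py in pois if math.hypot(hx - px, hy - py) <= radius)
            for hx, hy in houses]

# Hypothetical house and bank locations (lon, lat)
houses = [(-77.03, -12.05), (-76.95, -12.10)]
banks = [(-77.031, -12.051), (-77.50, -12.50)]
print(poi_counts(houses, banks, radius=0.005))  # [1, 0]
```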
#### Distance Bands and Distance Buffers:
We will use distance-band weights objects, treating other houses as "neighbors" if they are within the distance threshold. In other words, for each house we take the average of some feature over the neighbors within the spatial distance we define. For now we will just build features based on Dormitorios and Area_total.
```
import libpysal.weights as weights
#THRESHOLD HYPERPARAMETER
THRESHOLD_HYPERPARAM = 500
d500_w = weights.DistanceBand.from_dataframe(gdf_lima, threshold=THRESHOLD_HYPERPARAM , silence_warnings=True)
d500_w.transform = 'r'
local_average_bedrooms = weights.lag_spatial(d500_w, gdf_lima[['Dormitorios']].values)
gdf_lima['local_average_bedrooms'] = local_average_bedrooms
area_500_w = weights.DistanceBand.from_dataframe(gdf_lima, threshold=500, silence_warnings=True)
area_500_w.transform = 'r'
local_average_total_area = weights.lag_spatial(area_500_w, gdf_lima[['Area_total']].values)
gdf_lima['local_average_total_area'] = local_average_total_area
```
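The spatial lag with a row-standardized (`'r'`) transform is simply the mean of a feature over each observation's neighbors within the threshold. A pure-Python sketch of that computation on toy points:

```python
import math

def distance_band_lag(points, values, threshold):
    """Mean of `values` over each point's neighbors within `threshold`."""
    lag = []
    for i, (xi, yi) in enumerate(points):
        neigh = [values[j] for j, (xj, yj) in enumerate(points)
                 if j != i and math.hypot(xi - xj, yi - yj) <= threshold]
        lag.append(sum(neigh) / len(neigh) if neigh else float("nan"))
    return lag

points = [(0, 0), (1, 0), (5, 0)]   # toy coordinates
bedrooms = [2, 4, 10]
print(distance_band_lag(points, bedrooms, threshold=2))  # [4.0, 2.0, nan]
```

The `nan` for an isolated point mirrors the "island" warnings that `DistanceBand` silences above.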
### Dropping Geometry columns:
```
gdf_lima.drop(['ID','geometry','buffer_500m'],axis=1,inplace=True)
lima_data = pd.DataFrame(gdf_lima)
lima_data.info()
```
#### Correlations
```
import seaborn as sns
import matplotlib.pyplot as plt
corr = lima_data.corr()
sns.heatmap(corr)
```
##### Target Selection
We have 4 ways to define our target data.
- Precio: Total price of the house
- Precio_m2_total: Price / total square meters
- Precio_m2_constr: Price / constructed square meters
- Precio_cat: Categories prices
```
X = lima_data.drop(lima_data.filter(regex='(P?p?recio)').columns,axis=1)
Y = lima_data.filter(regex='(P?p?recio)')
print('columnas Y: {}'.format(Y.columns.values))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y['Precio_m2_total'], test_size=0.1, random_state = 215)
pd.concat([X_train,y_train],axis=1).to_csv('../data/2020_Notebook02_train_output.csv',index=False)
pd.concat([X_test,y_test],axis=1).to_csv('../data/2020_Notebook02_test_output.csv',index=False)
```
| github_jupyter |
# Scalar and vector
> Marcos Duarte, Renato Naville Watanabe
> [Laboratory of Biomechanics and Motor Control](http://pesquisa.ufabc.edu.br/bmclab)
> Federal University of ABC, Brazil
Python handles very well all mathematical operations with numeric scalars and vectors and you can use [Sympy](http://sympy.org) for similar stuff but with abstract symbols. Let's briefly review scalars and vectors and show how to use Python for numerical calculation.
For a review about scalars and vectors, see chapter 2 of [Ruina and Rudra's book](http://ruina.tam.cornell.edu/Book/index.html).
## Scalar
>A **scalar** is a one-dimensional physical quantity, which can be described by a single real number.
For example, time, mass, and energy are examples of scalars.
### Scalar operations in Python
Simple arithmetic operations with scalars are indeed simple:
```
import math
a = 2
b = 3
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
print('sqrt(b) =', math.sqrt(b))
```
If you have a set of numbers, or an array, it is probably better to use Numpy; it will be faster for large data sets, and combined with Scipy, has many more mathematical functions.
```
import numpy as np
a = 2
b = [3, 4, 5, 6, 7, 8]
b = np.array(b)
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
print('np.sqrt(b) =', np.sqrt(b)) # use numpy functions for numpy arrays
```
Numpy performs the arithmetic operations of the single number in `a` with all the numbers of the array `b`. This is called broadcasting in computer science.
Even if you have two arrays (as long as they have the same size), Numpy handles the operations for you:
```
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print('a =', a, ', b =', b)
print('a + b =', a + b)
print('a - b =', a - b)
print('a * b =', a * b)
print('a / b =', a / b)
print('a ** b =', a ** b)
```
## Vector
>A **vector** is a quantity with magnitude (or length) and direction expressed numerically as an ordered list of values according to a coordinate reference system.
For example, position, force, and torque are physical quantities defined by vectors.
For instance, consider the position of a point in space represented by a vector:
<br>
<figure><img src="./../images/vector3D.png" width=300/><figcaption><center><i>Figure. Position of a point represented by a vector in a Cartesian coordinate system.</i></center></figcaption></figure>
The position of the point (the vector) above can be represented as a tuple of values:
$$ (x,\: y,\: z) \; \Rightarrow \; (1, 3, 2) $$
or in matrix form:
$$ \begin{bmatrix} x \\y \\z \end{bmatrix} \;\; \Rightarrow \;\; \begin{bmatrix} 1 \\3 \\2 \end{bmatrix}$$
We can use the Numpy array to represent the components of vectors.
For instance, for the vector above is expressed in Python as:
```
a = np.array([1, 3, 2])
print('a =', a)
```
This is exactly like the arrays in the last example for scalars, so all the operations we performed would give the same values.
However, as we are now dealing with vectors, some of those operations no longer make sense: for vectors there is no multiplication, division, power, or square root in the element-wise way we calculated before.
A vector can also be represented as:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} = a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}} $$
</span>
<br>
<figure><img src="./../images/vector3Dijk.png" width=300/><figcaption><center><i>Figure. A vector representation in a Cartesian coordinate system. The versors <span class="notranslate"> $\hat{\mathbf{i}},\, \hat{\mathbf{j}},\, \hat{\mathbf{k}}\,$ </span> are usually represented in the color sequence <b>rgb</b> (red, green, blue) for easier visualization.</i></center></figcaption></figure>
Where <span class="notranslate"> $\hat{\mathbf{i}},\, \hat{\mathbf{j}},\, \hat{\mathbf{k}}\,$ </span> are unit vectors, each representing a direction and <span class="notranslate"> $ a_x\hat{\mathbf{i}},\: a_y\hat{\mathbf{j}},\: a_z\hat{\mathbf{k}} $ </span> are the vector components of the vector $\overrightarrow{\mathbf{a}}$.
A unit vector (or versor) is a vector whose length (or norm) is 1.
The unit vector of a non-zero vector $\overrightarrow{\mathbf{a}}$ is the unit vector codirectional with $\overrightarrow{\mathbf{a}}$:
<span class="notranslate">
$$ \mathbf{\hat{u}} = \frac{\overrightarrow{\mathbf{a}}}{||\overrightarrow{\mathbf{a}}||} = \frac{a_x\,\hat{\mathbf{i}} + a_y\,\hat{\mathbf{j}} + a_z\, \hat{\mathbf{k}}}{\sqrt{a_x^2+a_y^2+a_z^2}} $$
</span>
### Magnitude (length or norm) of a vector
The magnitude (length) of a vector is often represented by the symbol $||\;||$, also known as the norm (or Euclidean norm) of a vector and it is defined as:
<span class="notranslate">
$$ ||\overrightarrow{\mathbf{a}}|| = \sqrt{a_x^2+a_y^2+a_z^2} $$
</span>
The function `numpy.linalg.norm` calculates the norm:
```
a = np.array([1, 2, 3])
np.linalg.norm(a)
```
Or we can use the definition and compute directly:
```
np.sqrt(np.sum(a*a))
```
Then, the versor for the vector <span class="notranslate"> $ \overrightarrow{\mathbf{a}} = (1, 2, 3) $ </span> is:
```
a = np.array([1, 2, 3])
u = a/np.linalg.norm(a)
print('u =', u)
```
And we can verify its magnitude is indeed 1:
```
np.linalg.norm(u)
```
But the representation of a vector as a tuple of values is only valid for a vector with its origin coinciding with the origin $ (0, 0, 0) $ of the coordinate system we adopted.
For instance, consider the following vector:
<br>
<figure><img src="./../images/vector2.png" width=260/><figcaption><center><i>Figure. A vector in space.</i></center></figcaption></figure>
Such a vector cannot be represented by $ (b_x, b_y, b_z) $ because this would be for the vector from the origin to the point B. To represent exactly this vector we need the two vectors <span class="notranslate"> $ \mathbf{a} $ </span> and <span class="notranslate"> $ \mathbf{b} $ </span>. This fact is important when we perform some calculations in Mechanics.
### Vector addition and subtraction
The addition of two vectors is another vector:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} + \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) + (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) =
(a_x+b_x)\hat{\mathbf{i}} + (a_y+b_y)\hat{\mathbf{j}} + (a_z+b_z)\hat{\mathbf{k}} $$
</span>
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/2/28/Vector_addition.svg" width=300 alt="Vector addition"/><figcaption><center><i>Figure. Vector addition (image from Wikipedia).</i></center></figcaption></figure>
The subtraction of two vectors is also another vector:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} - \overrightarrow{\mathbf{b}} = (a_x\hat{\mathbf{i}} + a_y\hat{\mathbf{j}} + a_z\hat{\mathbf{k}}) - (b_x\hat{\mathbf{i}} + b_y\hat{\mathbf{j}} + b_z\hat{\mathbf{k}}) =
(a_x-b_x)\hat{\mathbf{i}} + (a_y-b_y)\hat{\mathbf{j}} + (a_z-b_z)\hat{\mathbf{k}} $$
</span>
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/2/24/Vector_subtraction.svg" width=160 alt="Vector subtraction"/><figcaption><center><i>Figure. Vector subtraction (image from Wikipedia).</i></center></figcaption></figure>
Consider two 2D arrays (rows and columns) representing the position of two objects moving in space. The columns represent the vector components and the rows the values of the position vector in different instants.
Once again, it's easy to perform addition and subtraction with these vectors:
```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
print('a =', a, '\nb =', b)
print('a + b =', a + b)
print('a - b =', a - b)
```
Numpy can handle an N-dimensional array with its size limited only by the available memory in your computer.
And we can perform operations on each vector, for example, calculate the norm of each one.
First let's check the shape of the variable `a` using the attribute `shape` or the function `numpy.shape`:
```
print(a.shape)
print(np.shape(a))
```
This means the variable `a` has 2 rows and 3 columns.
We have to tell the function `numpy.linalg.norm` to calculate the norm for each vector, i.e., to operate through the columns of the variable `a` using the parameter `axis`:
```
np.linalg.norm(a, axis=1)
```
## Dot product
Dot product (or scalar product or inner product) between two vectors is a mathematical operation algebraically defined as the sum of the products of the corresponding components (magnitudes in each direction) of the two vectors. The result of the dot product is a single number (a scalar).
The dot product between vectors <span class="notranslate">$\overrightarrow{\mathbf{a}}$</span> and $\overrightarrow{\mathbf{b}}$ is:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}}+a_y\,\hat{\mathbf{j}}+a_z\,\hat{\mathbf{k}}) \cdot (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = a_x b_x + a_y b_y + a_z b_z $$
</span>
Because by definition:
<span class="notranslate">
$$ \hat{\mathbf{i}} \cdot \hat{\mathbf{i}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{k}} \cdot \hat{\mathbf{k}}= 1 \quad \text{and} \quad \hat{\mathbf{i}} \cdot \hat{\mathbf{j}} = \hat{\mathbf{i}} \cdot \hat{\mathbf{k}} = \hat{\mathbf{j}} \cdot \hat{\mathbf{k}} = 0 $$
</span>
The geometric equivalent of the dot product is the product of the magnitudes of the two vectors and the cosine of the angle between them:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} \cdot \overrightarrow{\mathbf{b}} = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:cos(\theta) $$
</span>
Which is also equivalent to state that the dot product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ parallel to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ parallel to $\overrightarrow{\mathbf{b}}$).
The dot product between two vectors can be visualized in this interactive animation:
```
from IPython.display import IFrame
IFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/DotProduct/DotProduct.html',
width='100%', height=400)
```
The Numpy function for the dot product is `numpy.dot`:
```
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print('a =', a, '\nb =', b)
print('np.dot(a, b) =', np.dot(a, b))
```
Or we can use the definition and compute directly:
```
np.sum(a*b)
```
For 2D arrays, the `numpy.dot` function performs matrix multiplication rather than the dot product; so let's use the `numpy.sum` function:
```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
np.sum(a*b, axis=1)
```
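Rearranging the geometric definition, the angle between two vectors can be recovered from the dot product and the norms. A small sketch in plain Python:

```python
import math

def angle_between(a, b):
    """Angle in degrees between vectors a and b, from the dot-product identity."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(dot / (norm_a * norm_b)))

print(angle_between((1, 0, 0), (0, 1, 0)))  # 90.0
print(angle_between((1, 0, 0), (1, 1, 0)))  # ~45.0
```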
## Vector product
Cross product or vector product between two vectors is a mathematical operation in three-dimensional space which results in a vector perpendicular to both of the vectors being multiplied and a length (norm) equal to the product of the perpendicular components of the vectors being multiplied (which is equal to the area of the parallelogram that the vectors span).
The cross product between vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = (a_x\,\hat{\mathbf{i}} + a_y\,\hat{\mathbf{j}} + a_z\,\hat{\mathbf{k}}) \times (b_x\,\hat{\mathbf{i}}+b_y\,\hat{\mathbf{j}}+b_z\,\hat{\mathbf{k}}) = (a_yb_z-a_zb_y)\hat{\mathbf{i}} + (a_zb_x-a_xb_z)\hat{\mathbf{j}}+(a_xb_y-a_yb_x)\hat{\mathbf{k}} $$
</span>
Because by definition:
<span class="notranslate">
$$ \begin{array}{l l}
\hat{\mathbf{i}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}} \times \hat{\mathbf{k}} = 0 \\
\hat{\mathbf{i}} \times \hat{\mathbf{j}} = \hat{\mathbf{k}}, \quad \hat{\mathbf{j}} \times \hat{\mathbf{k}} = \hat{\mathbf{i}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{i}} = \hat{\mathbf{j}} \\
\hat{\mathbf{j}} \times \hat{\mathbf{i}} = -\hat{\mathbf{k}}, \quad \hat{\mathbf{k}} \times \hat{\mathbf{j}}= -\hat{\mathbf{i}}, \quad \hat{\mathbf{i}} \times \hat{\mathbf{k}} = -\hat{\mathbf{j}}
\end{array} $$
</span>
The direction of the vector resulting from the cross product between the vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is given by the right-hand rule.
The geometric equivalent of the magnitude of the cross product is the product of the magnitudes of the two vectors and the sine of the angle between them:
<span class="notranslate">
$$ ||\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}|| = ||\overrightarrow{\mathbf{a}}||\:||\overrightarrow{\mathbf{b}}||\:sin(\theta) $$
</span>
Which is also equivalent to state that the cross product between two vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$ is the magnitude of $\overrightarrow{\mathbf{a}}$ times the magnitude of the component of $\overrightarrow{\mathbf{b}}$ perpendicular to $\overrightarrow{\mathbf{a}}$ (or the magnitude of $\overrightarrow{\mathbf{b}}$ times the magnitude of the component of $\overrightarrow{\mathbf{a}}$ perpendicular to $\overrightarrow{\mathbf{b}}$).
The definition above, also implies that the magnitude of the cross product is the area of the parallelogram spanned by the two vectors:
<br>
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/4/4e/Cross_product_parallelogram.svg" width=160 alt="Vector subtraction"/><figcaption><center><i>Figure. Area of a parallelogram as the magnitude of the cross product (image from Wikipedia).</i></center></figcaption></figure>
The cross product can also be calculated as the determinant of a matrix:
<span class="notranslate">
$$ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = \left| \begin{array}{ccc}
\hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\
a_x & a_y & a_z \\
b_x & b_y & b_z
\end{array} \right|
= a_y b_z \hat{\mathbf{i}} + a_z b_x \hat{\mathbf{j}} + a_x b_y \hat{\mathbf{k}} - a_y b_x \hat{\mathbf{k}}-a_z b_y \hat{\mathbf{i}} - a_x b_z \hat{\mathbf{j}} \\
\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} = (a_yb_z-a_zb_y)\hat{\mathbf{i}} + (a_zb_x-a_xb_z)\hat{\mathbf{j}} + (a_xb_y-a_yb_x)\hat{\mathbf{k}} $$
</span>
The same result as before.
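The component formula can be implemented directly, which is handy for checking results by hand:

```python
def cross(a, b):
    """Cross product of two 3D vectors, from the component formula above."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. i x j = k
```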
The cross product between two vectors can be visualized in this interactive animation:
```
IFrame('https://faraday.physics.utoronto.ca/PVB/Harrison/Flash/Vectors/CrossProduct/CrossProduct.html',
width='100%', height=400)
```
The Numpy function for the cross product is `numpy.cross`:
```
print('a =', a, '\nb =', b)
print('np.cross(a, b) =', np.cross(a, b))
```
For 2D arrays with vectors in different rows:
```
a = np.array([[1, 2, 3], [1, 1, 1]])
b = np.array([[4, 5, 6], [7, 8, 9]])
np.cross(a, b, axis=1)
```
### Gram–Schmidt process
The [Gram–Schmidt process](http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process) is a method for orthonormalizing (orthogonal unit versors) a set of vectors using the scalar product. The Gram–Schmidt process works for any number of vectors.
For example, given three vectors, $\overrightarrow{\mathbf{a}}, \overrightarrow{\mathbf{b}}, \overrightarrow{\mathbf{c}}$, in the 3D space, a basis $\{\hat{e}_a, \hat{e}_b, \hat{e}_c\}$ can be found using the Gram–Schmidt process by:
The first versor is in the $\overrightarrow{\mathbf{a}}$ direction (or in the direction of any of the other vectors):
$$ \hat{e}_a = \frac{\overrightarrow{\mathbf{a}}}{||\overrightarrow{\mathbf{a}}||} $$
The second versor, orthogonal to $\hat{e}_a$, can be found considering we can express vector $\overrightarrow{\mathbf{b}}$ in terms of the $\hat{e}_a$ direction as:
$$ \overrightarrow{\mathbf{b}} = \overrightarrow{\mathbf{b}}^\| + \overrightarrow{\mathbf{b}}^\bot $$
Then:
$$ \overrightarrow{\mathbf{b}}^\bot = \overrightarrow{\mathbf{b}} - \overrightarrow{\mathbf{b}}^\| = \overrightarrow{\mathbf{b}} - (\overrightarrow{\mathbf{b}} \cdot \hat{e}_a ) \hat{e}_a $$
Finally:
$$ \hat{e}_b = \frac{\overrightarrow{\mathbf{b}}^\bot}{||\overrightarrow{\mathbf{b}}^\bot||} $$
The third versor, orthogonal to $\{\hat{e}_a, \hat{e}_b\}$, can be found by expressing the vector $\overrightarrow{\mathbf{c}}$ in terms of the $\hat{e}_a$ and $\hat{e}_b$ directions as:
$$ \overrightarrow{\mathbf{c}} = \overrightarrow{\mathbf{c}}^\| + \overrightarrow{\mathbf{c}}^\bot $$
Then:
$$ \overrightarrow{\mathbf{c}}^\bot = \overrightarrow{\mathbf{c}} - \overrightarrow{\mathbf{c}}^\| $$
Where:
$$ \overrightarrow{\mathbf{c}}^\| = (\overrightarrow{\mathbf{c}} \cdot \hat{e}_a ) \hat{e}_a + (\overrightarrow{\mathbf{c}} \cdot \hat{e}_b ) \hat{e}_b $$
Finally:
$$ \hat{e}_c = \frac{\overrightarrow{\mathbf{c}}^\bot}{||\overrightarrow{\mathbf{c}}^\bot||} $$
Let's implement the Gram–Schmidt process in Python.
For example, consider the positions (vectors) $\overrightarrow{\mathbf{a}} = [1,2,0], \overrightarrow{\mathbf{b}} = [0,1,3], \overrightarrow{\mathbf{c}} = [1,0,1]$:
```
import numpy as np
a = np.array([1, 2, 0])
b = np.array([0, 1, 3])
c = np.array([1, 0, 1])
```
The first versor is:
```
ea = a/np.linalg.norm(a)
print(ea)
```
The second versor is:
```
eb = b - np.dot(b, ea)*ea
eb = eb/np.linalg.norm(eb)
print(eb)
```
And the third versor is:
```
ec = c - np.dot(c, ea)*ea - np.dot(c, eb)*eb
ec = ec/np.linalg.norm(ec)
print(ec)
```
Let's check the orthonormality between these versors:
```
print('Versors:', '\nea =', ea, '\neb =', eb, '\nec =', ec)
print('\nTest of orthogonality (scalar product between versors):',
'\nea x eb:', np.dot(ea, eb),
'\neb x ec:', np.dot(eb, ec),
'\nec x ea:', np.dot(ec, ea))
print('\nNorm of each versor:',
'\n||ea|| =', np.linalg.norm(ea),
'\n||eb|| =', np.linalg.norm(eb),
'\n||ec|| =', np.linalg.norm(ec))
```
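The step-by-step computation above generalizes to a small function, sketched here for vectors given as plain lists of components (linearly independent, as the process requires):

```python
def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (lists of floats)."""
    basis = []
    for v in vectors:
        w = list(v)
        for e in basis:
            proj = sum(vi * ei for vi, ei in zip(v, e))  # v . e
            w = [wi - proj * ei for wi, ei in zip(w, e)]  # remove parallel part
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

ea, eb, ec = gram_schmidt([[1, 2, 0], [0, 1, 3], [1, 0, 1]])
print(sum(x * y for x, y in zip(ea, eb)))  # ~0 (orthogonal)
```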
Or, we can simply use the built-in QR factorization function from NumPy:
```
vectors = np.vstack((a,b,c)).T
Q, R = np.linalg.qr(vectors)
print(Q)
ea, eb, ec = Q[:, 0], Q[:, 1], Q[:, 2]
print('Versors:', '\nea =', ea, '\neb =', eb, '\nec =', ec)
print('\nTest of orthogonality (scalar product between versors):')
print(np.dot(Q.T, Q))
print('\nTest of orthogonality (scalar product between versors):',
'\nea x eb:', np.dot(ea, eb),
'\neb x ec:', np.dot(eb, ec),
'\nec x ea:', np.dot(ec, ea))
print('\nNorm of each versor:',
'\n||ea|| =', np.linalg.norm(ea),
'\n||eb|| =', np.linalg.norm(eb),
'\n||ec|| =', np.linalg.norm(ec))
```
Which results in the same basis, except for possible sign flips.
## Problems
1. Given the vectors, $\overrightarrow{\mathbf{a}}=[1, 0, 0]$ and $\overrightarrow{\mathbf{b}}=[1, 1, 1]$, calculate the dot and cross products between them.
2. Calculate the unit vectors for $[2, −2, 3]$ and $[3, −3, 2]$ and determine an orthogonal vector to these two vectors.
3. Given the vectors $\overrightarrow{\mathbf{a}}$=[1, 0, 0] and $\overrightarrow{\mathbf{b}}$=[1, 1, 1], calculate $ \overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}} $ and verify that this vector is orthogonal to vectors $\overrightarrow{\mathbf{a}}$ and $\overrightarrow{\mathbf{b}}$. Also, calculate $\overrightarrow{\mathbf{b}} \times \overrightarrow{\mathbf{a}}$ and compare it with $\overrightarrow{\mathbf{a}} \times \overrightarrow{\mathbf{b}}$.
4. Given the vectors $[1, 1, 0]; [1, 0, 1]; [0, 1, 1]$, calculate a basis using the Gram–Schmidt process.
5. Write a Python function to calculate a basis using the Gram–Schmidt process (implement the algorithm!) considering that the input are three variables where each one contains the coordinates of vectors as columns and different positions of these vectors as rows. For example, sample variables can be generated with the command `np.random.randn(5, 3)`.
6. Study the sample problems **1.1** to **1.9**, **1.11** (using Python), **1.12**, **1.14**, **1.17**, **1.18** to **1.24** of Ruina and Rudra's book
7. From Ruina and Rudra's book, solve the problems **1.1.1** to **1.3.16**.
If you are new to scalars and vectors, you should solve these problems first by hand and then use Python to check the answers.
## References
- Ruina A, Rudra P (2015) [Introduction to Statics and Dynamics](http://ruina.tam.cornell.edu/Book/index.html). Oxford University Press.
| github_jupyter |
# Strong Lens Finding Challenge
In this notebook, we illustrate how to use CMU DeepLens to classify ground based images from the strong lens finding challenge http://metcalf1.difa.unibo.it/blf-portal/gg_challenge.html
```
import sys
sys.path.append('..')
%load_ext autoreload
%autoreload 2
%pylab inline
```
## Download and extract data
The training sets from the Strong Lensing Challenge can be downloaded from the challenge page with the following procedure:
```
$ cd [data_dir]
$ wget http://metcalf1.difa.unibo.it/blf-portal/data/GroundBasedTraining.tar.gz
$ tar -xvzf GroundBasedTraining.tar.gz
$ cd GroundBasedTraining
$ tar -xvzf Data.0.tar.gz
$ tar -xvzf Data.1.tar.gz
```
See the details about the content of this archive in the README file.
Once the data is downloaded, we use the following script to turn it into a convenient astropy table.
```
from astropy.table import Table
from astropy.io import fits  # pyfits is deprecated; astropy.io.fits is a drop-in replacement
import numpy as np
# Path to the downloaded files
download_path = '[data-dir]'  # To be adjusted on your machine
# Path to export the data
export_path = '[data-dir]'  # To be adjusted on your machine
# Loads the catalog
cat = Table.read(download_path+'GroundBasedTraining/classifications.csv')
ims = np.zeros((20000, 4, 101, 101))
# Loads the images
for i, id in enumerate(cat['ID']):
    print(i)
    for j, b in enumerate(['R', 'I', 'G', 'U']):
        ims[i, j] = fits.getdata(download_path+'GroundBasedTraining/Public/Band'+str(j+1)+'/imageSDSS_'+b+'-'+str(id)+'.fits')
# Concatenate images to catalog
cat['image'] = ims
# Export catalog as HDF5
cat.write(export_path+'catalogs.hdf5', path='/ground', append=True)
print("Done!")
```
## Load data and separate training and testing sets
We now load the astropy table compiled above, and we apply some very minor pre-processing (clipping and scaling)
```
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table

# Load the table created in the previous section
d = Table.read('catalogs.hdf5', path='/ground')  # path to be adjusted on your machine

# We use the full set for training,
# as we can test on the independent challenge testing set
x = np.array(d['image']).reshape((-1, 4, 101, 101))
y = np.array(d['is_lens']).reshape((-1, 1))

# [Warning: we reuse part of the training set as our validation set;
# don't do that if you don't have an independent testing set]
xval = np.array(d['image'][15000:]).reshape((-1, 4, 101, 101))
yval = np.array(d['is_lens'][15000:]).reshape((-1, 1))

# Clipping and scaling parameters applied to the data as preprocessing
vmin = -1e-9
vmax = 1e-9
scale = 100

# Flag pixels equal to the sentinel value 100 and zero them out
mask = np.where(x == 100)
mask_val = np.where(xval == 100)
x[mask] = 0
xval[mask_val] = 0

# Simple clipping and rescaling of the images
x = np.clip(x, vmin, vmax) / vmax * scale
xval = np.clip(xval, vmin, vmax) / vmax * scale
x[mask] = 0
xval[mask_val] = 0

# Illustration of a lens in the 4 bands provided
im = x[0].T
for k in range(4):
    plt.subplot(2, 2, k + 1)
    plt.imshow(im[:, :, k])
    plt.colorbar()
plt.show()
```
## Training the model
In this section, we train the CMU DeepLens model on the dataset prepared above.
Note that all the data-augmentation steps required to properly train the model are performed on the fly during training, so the user does not need to augment the dataset beforehand.
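The kind of symmetry-preserving transformation involved can be sketched as follows. This is a hypothetical helper for illustration only, not part of the `deeplens` package, which handles augmentation internally:

```python
import numpy as np

def augment_batch(batch, rng=None):
    """Random 90-degree rotations and flips; label-preserving for lens stamps."""
    if rng is None:
        rng = np.random.RandomState(0)
    out = []
    for im in batch:  # each image has shape (bands, height, width)
        im = np.rot90(im, k=rng.randint(4), axes=(1, 2))  # random 90-degree rotation
        if rng.rand() < 0.5:
            im = im[:, :, ::-1]  # random horizontal flip
        out.append(im)
    return np.stack(out)
```

Because rotations and flips only permute pixels, the total flux of each stamp is unchanged, which makes such transformations cheap to apply online at every epoch.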
```
import matplotlib.pyplot as plt
from deeplens.resnet_classifier import deeplens_classifier

model = deeplens_classifier(learning_rate=0.001,       # initial learning rate
                            learning_rate_steps=3,     # number of learning rate updates during training
                            learning_rate_drop=0.1,    # amount by which the learning rate is updated
                            batch_size=128,            # size of the mini-batch
                            n_epochs=120)              # number of epochs for training

# Train the model; the validation set is provided for evaluation
model.fit(x, y, xval, yval)

# Save the model parameters
model.save('deeplens_params.npy')

# Completeness and purity evaluated on the validation set
# [Warning: not very meaningful, as it overlaps the training set]
model.eval_purity_completeness(xval, yval)

# Plot ROC curve on the validation set [Warning: not very meaningful]
tpr, fpr, th = model.eval_ROC(xval, yval)
plt.title('ROC on training set')
plt.plot(fpr, tpr)
plt.xlabel('FPR'); plt.ylabel('TPR')
plt.xlim(0, 0.3); plt.ylim(0.86, 1.)
plt.grid('on')

# Obtain predicted probabilities for each image
p = model.predict_proba(xval)
```
## Classify Testing set
In this section, we test the model on one of the test datasets provided as part of the challenge.
```
import matplotlib.pyplot as plt
from astropy.table import Table, join
from sklearn.metrics import roc_curve, roc_auc_score
from deeplens.utils.blfchallenge import classify_ground_challenge

# Utility function to classify the challenge data with a given model;
# applies the same clipping and normalisation as during training
cat = classify_ground_challenge(model, '/data2/BolognaSLChallenge/Dataset3')

# Export the classified catalog, ready to submit to the challenge
cat.write('deeplens_ground_classif.txt', format='ascii.no_header')

# Load the ground-truth catalog
cat_truth = Table.read('ground_catalog.4.csv', format='csv', comment="#")

# Merge with the results of the classification
cat = join(cat_truth, cat, 'ID')

# Rename columns for convenience
cat['prediction'] = cat['is_lens']
cat['is_lens'] = (cat['no_source'] == 0).astype('int')

# Compute and plot the ROC curve
fpr_test, tpr_test, thc = roc_curve(cat['is_lens'], cat['prediction'])
plt.plot(fpr_test, tpr_test, label='CMU DeepLens')
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.legend(loc=4)
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.title('ROC evaluated on testing set')
plt.grid('on')

# AUROC metric on the whole testing set
roc_auc_score(cat['is_lens'], cat['prediction'])
```
As a point of reference, our winning submission achieved an AUROC of 0.9814321.
# Agenda
- Traditional Programming vs ML
- Types of Machine Learning
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
- Learning problem definition and formalization
# Traditional Programming vs ML
<img src= "images/ML_vs_programming.png" width =550/>
[Image Source](https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/01_overview/01-ml-overview__notes.pdf)
A more formal definition of ML:
> A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. (Tom Mitchell)
<img src = "images/dogs_vs_cats.jpeg" width = 1200 />
[Image Source](http://adilmoujahid.com/posts/2016/06/introduction-deep-learning-python-caffe/?utm_content=buffer82f2c&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer)
- **E**: a collected and labeled dataset of dog and cat photos
- **T**: classifying whether a photo shows a cat or a dog
- **P**: the accuracy of the classification
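To make P concrete, a minimal sketch with toy labels (hypothetical predictions, not a real cats-vs-dogs model):

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0]  # 1 = dog, 0 = cat (hypothetical labels)
y_pred = [1, 0, 0, 1, 0]  # predictions from some classifier
print(accuracy_score(y_true, y_pred))  # → 0.8
```

As the program gains experience E (more labeled photos), we expect this number to improve.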
[Your Turn!](https://forms.gle/2RwC8opUi6bDVpER6)
# Types of Machine Learning
<img src = "images/Types of Machine Learning.png" width = 850 />
## Supervised Learning
<img src ="images/iris_data.png" width = 750/>
[Image Source](https://python.astrotech.io/_images/iris-dataset.png)
```
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

data, target = load_iris(return_X_y=True, as_frame=True)
iris_df = pd.concat([data, target], axis=1)
iris_df.loc[99]
```
<img src ="images/supervised.jpg" width = 750/>
## Unsupervised Learning
__Clustering__
<img src = "images/clustering-handson-ml.png" width = 750 />
```
import pandas as pd
from sklearn.datasets import load_iris

data_np = load_iris()["data"]
feature_names = load_iris()['feature_names']
pd.DataFrame(data=data_np, columns=feature_names)
```
[Image Source](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
__Dimension Reduction__
<img src= "images/images_dimension_reduction.png" width = 750/>
[Image Source](https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html)
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

digits = datasets.load_digits(n_class=6)
num = np.random.randint(0, 1000)
img = digits.data[num]
plt.imshow(img.reshape(8, 8), cmap="gray")
print(img, digits.target[num])
```
__Anomaly Detection__
<img src = "images/anomally.png" width = 750/>
[Image Source](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
## Reinforcement Learning
<img src = "images/reinforcement.png" width = 750/>
[Types of ML](https://forms.gle/twHHWvgNk5QhMsEs7)
## Building ML Systems
<img src = "images/workflow.png" width =750/>
[Image Source](https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/01_overview/01-ml-overview__notes.pdf)
## More Explicitly: Components of ML Systems
- Define the problem to be solved.
- Collect data.
- Choose an algorithm class.
- Choose an optimization objective for learning the model.
- Choose a metric for evaluation.
__Note!__
- Optimization and model evaluation are different tasks!
- For optimization, you might maximize the log-likelihood, minimize the mean squared error, maximize information gain, etc.
- For model evaluation, you might check accuracy, precision, recall, etc.
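The distinction matters in practice: two models can look identical under the evaluation metric yet very different under the optimization objective. A small sketch with toy binary predictions (values chosen purely for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

y_true = [0, 1, 1, 0]
p_a = np.array([0.4, 0.6, 0.6, 0.4])  # hesitant model
p_b = np.array([0.1, 0.9, 0.9, 0.1])  # confident model
for name, p in (('hesitant', p_a), ('confident', p_b)):
    y_pred = (p >= 0.5).astype(int)
    # same accuracy (evaluation metric), very different log loss (optimization objective)
    print(name, accuracy_score(y_true, y_pred), log_loss(y_true, p))
```

Both models reach perfect accuracy, but the hesitant one has a much larger log loss, which is exactly what an optimizer minimizing log-likelihood would keep pushing down.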
# Supervised Learning in depth
## Notation and Conventions
For the features we will use $\mathbf{X}$
$$ \begin{equation*}
\mathbf{X} =
\begin{pmatrix}
x_{1,1} & x_{1,2} & \cdots & x_{1,p} \\
x_{2,1} & x_{2,2} & \cdots & x_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
x_{N,1} & x_{N,2} & \cdots & x_{N,p}
\end{pmatrix}
\end{equation*}
$$
and for the target we will use $\mathbf{y}$:
$$
\begin{align}
y &= \begin{bmatrix}
y_{1} \\
y_{2} \\
\vdots \\
y_{N}
\end{bmatrix}
\end{align}
$$
```
## show data
data
## show target
target
```
## Formulation of the problem
Suppose we observe
- a quantitative response $\mathbf{Y}$
- predictors (features) $X_{1}, X_{2}, \cdots, X_{p}$
- We assume that there is some relationship between $\mathbf{Y}$ and $\mathbf{X} = (X_{1}, X_{2}, \cdots, X_{p})$
$$ \mathbf{Y} = f(\mathbf{X}) + \varepsilon$$
Problem: $f$ is unknown! (How many different functions can exist?)
<img src = "images/formulation.png" width = 750/>
[Image Source](https://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf)
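This generative view is easy to simulate; a minimal sketch with an arbitrary choice of $f$ (a sine here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))   # predictors
f = lambda X: np.sin(X[:, 0])           # the (normally unknown) true f
eps = rng.normal(scale=0.2, size=200)   # irreducible error
Y = f(X) + eps                          # observed response
print(X.shape, Y.shape)
```

In real problems we only ever see `X` and `Y`; the whole supervised-learning task is to recover an estimate of `f` from those noisy pairs.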
## How to Estimate f?
- Parametric
- Non-Parametric Models
__Parametric Models__
- Assumes a certain form for $f$. For example, linear models: $f(X) = \beta_{1}X_{1} + \cdots + \beta_{p}X_{p}$
- Simplifies the problem.
- Usually too simple to capture complicated relations.
- Either not flexible enough (leading to underfitting) or too flexible (leading to overfitting).
<img src = "images/poly.png" width =750/>
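The flexibility trade-off can be reproduced with polynomial regression of increasing degree; a minimal sketch on simulated data (the degrees are chosen only for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=50)

for degree in (1, 4, 15):  # too rigid, roughly right, very flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    print(degree, round(model.score(X, y), 3))  # training R^2 grows with flexibility
```

Training fit always improves with degree; whether the extra flexibility helps on unseen data is exactly the underfitting/overfitting question.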
__Non-Parametric Models__
- Makes no assumptions about the form of $f$.
- Usually flexible enough to detect the true relation between $X$ and $y$.
- Needs a very large number of observations; otherwise it is prone to overfitting.
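A classic non-parametric example is k-nearest neighbours, which stores the training observations instead of fitting a fixed-form $f$; a minimal sketch on the iris data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_trn, X_tst, y_trn, y_tst = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_trn, y_trn)
print(knn.score(X_tst, y_tst))  # held-out accuracy
```

Nothing about the decision boundary's shape is assumed in advance; it is determined entirely by the stored training points and the choice of `n_neighbors`.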
## Some examples of ML algorithms we will cover in this class
- Linear Models
- Tree based models
- Ensemble Models
- Instance-based models
- K-Means, Hierarchical Models
- Dimension Reduction Algorithms
- Artificial Neural Networks
# Review of today's lecture
- We've seen the differences between traditional programming and ML.
- We've discussed three main types of ML:
    - Supervised
    - Unsupervised
    - Reinforcement
- We've learned the components of the ML workflow.
- We've learned the formulation of the ML problem in the supervised setting.
[Week-2 Exit Ticket](https://forms.gle/Upqvu9AVxeomzt7E6)
# Extra Resources
[Dimension Reduction](https://www.youtube.com/watch?v=wvsE8jm1GzE&ab_channel=GoogleDevelopers)
[Sebastian Raschka - Introduction to ML](https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/01_overview/01-ml-overview__notes.pdf)
```
# TensorFlow 1.x-style code (tf.Session and meta-graph checkpoints)
import tensorflow as tf
import numpy as np
sess = tf.Session()
new_saver = tf.train.import_meta_graph('model.ckpt.meta')
new_saver.restore(sess, 'model.ckpt')
graph = tf.get_default_graph()
print(graph)
[(n.op, n.name) for n in tf.get_default_graph().as_graph_def().node if "Variable" in n.op]
vec = graph.get_tensor_by_name('Variable:0')
norm = tf.sqrt(tf.reduce_sum(tf.square(vec), 1, keep_dims=True))
normalized_embeddings = vec / norm
final_embeddings = normalized_embeddings.eval(session=sess)
final_embeddings.shape
import tensorflow as tf
import numpy as np
import math
import collections
import random
import pickle
import glob,os
from tempfile import gettempdir
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
PATH_TO_STORE_THE_DICTIONARY="C:/Users/admin/chenw2k/dict_file_c"
with open(PATH_TO_STORE_THE_DICTIONARY, "rb") as f:
    [count, dictionary, reverse_dictionary, vocabulary_size] = pickle.load(f)
def plot_with_labels(low_dim_embs, labels, filename):
    assert low_dim_embs.shape[0] >= len(labels), 'More labels than embeddings'
    plt.figure(figsize=(18, 18))  # in inches
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i, :]
        plt.scatter(x, y)
        plt.annotate(label,
                     xy=(x, y),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
    plt.savefig(filename)
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000, method='exact')
plot_only = 200
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
labels = [reverse_dictionary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels, 'wtf1.png')
np.save('java_w2v', final_embeddings)
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.layers import *
from keras.models import Model, load_model
from keras.initializers import Constant, TruncatedNormal
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras.optimizers import Adam
from keras_self_attention import SeqSelfAttention
from sklearn.utils import shuffle
final_embeddings = np.load('emb_cf.npy')
print(final_embeddings)
import pickle
f = open("cf_data.pkl" , "rb")
[sent1, sent2, label] = pickle.load(f)
assert len(sent1) == len(sent2) and len(sent2) == len(label)
print(len(sent1))
def get_rnn_data(a, b):
    # inputs keyed by the names of the model's Input layers
    x = {
        'sentence1': a,
        'sentence2': b,
    }
    return x

f2 = open("bugs_test.pickle", "rb")
[s1, s2, lbl] = pickle.load(f2)
lbl = to_categorical(np.asarray(lbl))
s1 = pad_sequences(s1, maxlen=128)
s2 = pad_sequences(s2, maxlen=128)
s1, s2, lbl = shuffle(s1, s2, lbl)
X_test = get_rnn_data(s1, s2)
Y_test = lbl
ccc = 0
ddd = 0
for _ in sent1:
    for __ in _:
        if __ == 0:
            ccc += 1
        else:
            ddd += 1
print(ccc, ddd)
#print(label)
label = to_categorical(np.asarray(label))
sent1 = pad_sequences(sent1, maxlen=128)
sent2 = pad_sequences(sent2, maxlen=128)
sent1, sent2, label = shuffle(sent1, sent2, label)
X_train = get_rnn_data(sent1[:4800],sent2[:4800])
Y_train = label[:4800]
X_valid = get_rnn_data(sent1[4800:6400],sent2[4800:6400])
Y_valid = label[4800:6400]
print(label)
X_test = get_rnn_data(sent1[6400:], sent2[6400:])
Y_test = label[6400:]
X_train
MAX_SEQUENCE_LENGTH = 128
EMBEDDING_DIM = 64
VOCABULARY_SIZE = 100000
embedding_layer = Embedding(VOCABULARY_SIZE,
EMBEDDING_DIM,
embeddings_initializer=Constant(final_embeddings),
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
def model_cnn(x):
    filter_sizes = [1, 2, 3, 5]
    num_filters = 128
    x = Reshape((128, 64, 1))(x)
    maxpool_pool = []
    for i in range(len(filter_sizes)):
        conv = Conv2D(num_filters, kernel_size=(filter_sizes[i], 64),
                      kernel_initializer='he_normal', activation='relu')(x)
        maxpool_pool.append(MaxPool2D(pool_size=(128 - filter_sizes[i] + 1, 1))(conv))
    z = Concatenate(axis=1)(maxpool_pool)
    z = Flatten()(z)
    z = Dropout(0.2)(z)
    return z
def LSTMAttn(x, col):
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
    u1 = SeqSelfAttention(attention_activation='sigmoid')(x)
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
    u2 = SeqSelfAttention(attention_activation='sigmoid')(x)
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
    u3 = SeqSelfAttention(attention_activation='sigmoid')(x)
    x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
    u4 = SeqSelfAttention(attention_activation='sigmoid')(x)
    x = concatenate([u1, u2, u3, u4])
    return x
inp1 = Input(shape=(128,), dtype='int32', name="sentence1")
inp2 = Input(shape=(128,), dtype='int32', name="sentence2")
emb1 = embedding_layer(inp1)
emb2 = embedding_layer(inp2)
x = concatenate([LSTMAttn(emb1, "sent1"), LSTMAttn(emb2, "sent2")])
#x = concatenate([model_cnn(emb1), model_cnn(emb2)])
x = Flatten()(x)
preds = Dense(2, activation='softmax', name='densejoke')(x)  # softmax pairs with categorical_crossentropy
model = Model(inputs=[inp1,inp2], outputs=preds)
model.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=0.001),
metrics=['acc'])
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc',
patience=1,
verbose=1,
factor=0.5,
min_lr=0.00001)
file_path="checkpoint_codeforces_weights.hdf5"
checkpoint = ModelCheckpoint(file_path, monitor='val_acc', verbose=1, save_best_only=True, mode='max', save_weights_only=True)
early = EarlyStopping(monitor="val_acc", mode="max", patience=3)
model_callbacks = [checkpoint, early, learning_rate_reduction]
# resume from a previous checkpoint if one exists
if os.path.exists(file_path):
    model.load_weights(file_path)
model.fit(X_train, Y_train,
batch_size=10,
epochs=30,
verbose=1,
validation_data=(X_valid, Y_valid),
callbacks = model_callbacks
)
model.evaluate(X_test, Y_test, batch_size=10)
model.metrics_names
Y_nasha = model.predict(X_test, batch_size=64)
Y_nasha = np.argmax(Y_nasha,axis=1)
Y_test_nasha = np.argmax(Y_test, axis=1)
def count0(l):
    r = 0
    for _ in l:
        if _ == 0:
            r += 1
    return r * 1.0 / len(l)
ssb = []
for i, (x, y) in enumerate(zip(Y_nasha, Y_test_nasha)):
    if x != y:
        ssb.append(count0(X_test['sentence1'][i]))
        ssb.append(count0(X_test['sentence2'][i]))
import matplotlib.pyplot as plt
plt.hist(ssb, bins=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.title("Zero-padding fraction, misclassified pairs")
plt.show()
kkb = []
for i, (x, y) in enumerate(zip(sent1, sent2)):
    kkb.append(count0(x))
    kkb.append(count0(y))
plt.hist(kkb, bins=[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
plt.title("Zero-padding fraction, all pairs")
plt.show()
```
<a href="https://colab.research.google.com/github/bereml/iap/blob/master/libretas/2f_speech_cnn2d.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Speech command recognition with a 2D CNN
Course: [Introducción al Aprendizaje Profundo](http://turing.iimas.unam.mx/~ricardoml/course/iap/) (Introduction to Deep Learning). Instructors: [Bere](https://turing.iimas.unam.mx/~bereml/) and [Ricardo](https://turing.iimas.unam.mx/~ricardoml/) Montalvo Lezama.
---
---
In this notebook we will work through an example of audio classification using spectrograms and 2D convolutional networks.
[Speech Commands](https://arxiv.org/abs/1804.03209) is a dataset of spoken commands (words) with more than 100k examples. Each audio clip lasts around 1 second and has a single voice channel with a sampling rate of 16000 Hz.
<img src="https://d3i71xaburhd42.cloudfront.net/da6e404d8911b0e5785019a79dc8607e0b313dc4/7-Figure1-1.png" style="width: 200px;" />
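With the parameters defined below (`N_FFT = 400`, hop length 200), a 1-second clip at 16 kHz yields spectrograms with 201 frequency bins and 81 frames, which is the `(201, 81)` input size used later when inspecting the architecture. A quick sanity check of that arithmetic (assuming the centered, one-sided STFT that torchaudio computes by default):

```python
n_fft, sample_rate, secs = 400, 16000, 1
hop_length = n_fft // 2             # 200, as defined below
samples = secs * sample_rate        # 16000 samples per clip
freq_bins = n_fft // 2 + 1          # one-sided spectrum → 201 frequency bins
frames = samples // hop_length + 1  # centered frames → 81 time steps
print(freq_bins, frames)  # → 201 81
```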
## 1. Setup
```
# Colab
! pip install torchinfo
try:
import torchaudio
except:
! pip install torchaudio
```
### 1.1. Libraries
```
# randomness utilities
import random
# take n elements from a sequence
from itertools import islice as take
# audio
import librosa
import librosa.display
# plotting
import matplotlib.pyplot as plt
# multidimensional arrays
import numpy as np
# neural networks
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# audio for neural networks
import torchaudio
import torchaudio.transforms as T
# vision models
import torchvision.models as tvm
# data loading
from torch.utils.data import DataLoader
from torchaudio.datasets import SPEECHCOMMANDS
# architecture inspection
from torchinfo import summary
# progress bars
from tqdm.auto import trange
```
### 1.2. Helpers
```
# data directory
DATA_DIR = '../datos/speech_commands'
# batch size
BATCH_SIZE = 32

# audio parameters
SECS = 1
SAMPLE_RATE = 16000

# FFT parameters
N_FFT = 400
HOP_LENGTH = N_FFT // 2

# SpeechCommands classes
CLASSES = (
    'backward', 'bed', 'bird', 'cat', 'dog',
    'down', 'eight', 'five', 'follow', 'forward',
    'four', 'go', 'happy', 'house', 'learn',
    'left', 'marvin', 'nine', 'no', 'off',
    'on', 'one', 'right', 'seven', 'sheila',
    'six', 'stop', 'three', 'tree', 'two',
    'up', 'visual', 'wow', 'yes', 'zero'
)
NUM_CLASSES = len(CLASSES)
CLASS_IDX = {c: i for i, c in enumerate(CLASSES)}

def set_seed(seed=0):
    """Initializes pseudo-random number generators."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

# reproducibility
set_seed()
```
## 2. Data
### 2.1. Download
```
! mkdir -p {DATA_DIR}
ds = SPEECHCOMMANDS(DATA_DIR, download=True)
len(ds)
! ls -R {DATA_DIR} | head -55
```
### 2.2. Dataset
```
def identity(x):
    return x

def label2index(label):
    return CLASS_IDX[label]

class SPEECHCOMMANDS2(SPEECHCOMMANDS):

    def __init__(self, root, download=False, subset=None,
                 waveform_tsfm=identity, label_tsfm=identity):
        super().__init__(root=root, download=download, subset=subset)
        self.waveform_tsfm = waveform_tsfm
        self.label_tsfm = label_tsfm

    def __getitem__(self, i):
        waveform, sample_rate, label, *_ = super().__getitem__(i)
        x = self.waveform_tsfm(waveform)
        y = self.label_tsfm(label)
        return x, y, label, sample_rate

class WaveformPadTruncate(nn.Module):

    def __init__(self, secs=SECS, sample_rate=SAMPLE_RATE):
        super().__init__()
        self.samples = secs * sample_rate

    def forward(self, waveform):
        samples = waveform.shape[1]
        # pad with zeros
        if samples < self.samples:
            difference = self.samples - samples
            padding = torch.zeros(1, difference)
            waveform = torch.cat([waveform, padding], 1)
        # truncate at a random offset
        elif samples > self.samples:
            start = random.randint(0, waveform.shape[1] - self.samples)
            waveform = waveform.narrow(1, start, self.samples)
        return waveform

# create a Dataset
ds = SPEECHCOMMANDS2(
    # data directory
    root=DATA_DIR,
    # waveform transform
    waveform_tsfm=WaveformPadTruncate(),
    # label transform
    label_tsfm=label2index,
)

# create a DataLoader
dl = DataLoader(
    # dataset
    ds,
    # batch size
    batch_size=BATCH_SIZE,
)

# inspect one batch
x, y, labels, sr = next(iter(dl))
print(f'x shape={x.shape} dtype={x.dtype}')
print(f'y shape={y.shape} dtype={y.dtype}')
```
### 2.3. Data loaders
```
def build_dl(subset=None, spectrogram='spec', shuffle=False):
    waveform_tsfm = [
        WaveformPadTruncate()
    ]
    if spectrogram == 'spec':
        waveform_tsfm.extend([
            T.Spectrogram(n_fft=N_FFT),
            T.AmplitudeToDB(),
        ])
    elif spectrogram == 'mel':
        waveform_tsfm.extend([
            T.MelSpectrogram(sample_rate=SAMPLE_RATE, n_fft=N_FFT),
            T.AmplitudeToDB(),
        ])
    elif spectrogram == 'mfcc':
        waveform_tsfm.extend([
            T.MFCC(sample_rate=SAMPLE_RATE),
        ])
    else:
        raise NotImplementedError(
            f'Spectrogram not implemented: {spectrogram}')
    waveform_tsfm = nn.Sequential(*waveform_tsfm)

    # create a Dataset
    ds = SPEECHCOMMANDS2(
        # data directory
        root=DATA_DIR,
        # subset
        subset=subset,
        # waveform transform
        waveform_tsfm=waveform_tsfm,
        # label transform
        label_tsfm=label2index,
    )

    # create a DataLoader
    dl = DataLoader(
        # dataset
        ds,
        # batch size
        batch_size=BATCH_SIZE,
        # shuffle
        shuffle=shuffle,
        # parallel workers
        num_workers=2
    )
    return dl
```
### 2.4. Spectrograms
```
dl = build_dl(spectrogram='spec')

# inspect the batch
x, y, labels, sr = next(iter(dl))
print(f'x shape={x.shape} dtype={x.dtype}')
print(f'y shape={y.shape} dtype={y.dtype}')

# inspect one example
spec = x[0].squeeze().numpy()
label = labels[0]
librosa.display.specshow(spec, sr=SAMPLE_RATE, hop_length=HOP_LENGTH)
plt.title(f'Spectrogram: {label}')
plt.xlabel('time')
plt.ylabel('frequency')
cbar = plt.colorbar()
cbar.set_label('magnitude', rotation=90)
plt.show()
```
### 2.5. Log-Mel spectrograms
```
dl = build_dl(spectrogram='mel')

# inspect the batch
x, y, labels, sr = next(iter(dl))
print(f'x shape={x.shape} dtype={x.dtype}')
print(f'y shape={y.shape} dtype={y.dtype}')

# inspect one example
spec = x[0].squeeze().numpy()
label = labels[0]
librosa.display.specshow(spec, sr=SAMPLE_RATE, hop_length=HOP_LENGTH)
plt.title(f'Log-Mel spectrogram: {label}')
plt.xlabel('time')
plt.ylabel('frequency')
cbar = plt.colorbar()
cbar.set_label('magnitude', rotation=90)
plt.show()
```
### 2.6. MFCC spectrograms
```
dl = build_dl(spectrogram='mfcc')

# inspect the batch
x, y, labels, sr = next(iter(dl))
print(f'x shape={x.shape} dtype={x.dtype}')
print(f'y shape={y.shape} dtype={y.dtype}')

# inspect one example
spec = x[0].squeeze().numpy()
label = labels[0]
librosa.display.specshow(spec, sr=SAMPLE_RATE, hop_length=HOP_LENGTH)
plt.title(f'MFCC spectrogram: {label}')
plt.xlabel('time')
plt.ylabel('coefficients')
cbar = plt.colorbar()
cbar.set_label('magnitude', rotation=90)
plt.show()
```
## 3. Training loop
```
def train_epoch(dl, model, opt, device):
    # for each batch
    for x, y_true, *_ in dl:
        # move to device
        x = x.to(device)
        y_true = y_true.to(device)
        # compute logits
        y_lgts = model(x)
        # compute the loss
        loss = F.cross_entropy(y_lgts, y_true)
        # zero the gradients
        opt.zero_grad()
        # backpropagate
        loss.backward()
        # update parameters
        opt.step()

def eval_epoch(dl, model, device, num_batches=None):
    # keep these operations out of the computation graph
    with torch.no_grad():
        # histories
        losses, accs = [], []
        # evaluate the epoch on num_batches;
        # if num_batches is None, all batches are used
        for x, y_true, *_ in take(dl, num_batches):
            # move to device
            x = x.to(device)
            y_true = y_true.to(device)
            # compute logits
            y_lgts = model(x)
            # compute scores
            y_prob = F.softmax(y_lgts, 1)
            # compute classes
            y_pred = torch.argmax(y_prob, 1)
            # compute the loss
            loss = F.cross_entropy(y_lgts, y_true)
            # compute the accuracy
            acc = (y_true == y_pred).type(torch.float32).mean()
            # save histories
            losses.append(loss.item())
            accs.append(acc.item())
        # average
        loss = np.mean(losses) * 100
        acc = np.mean(accs) * 100
        return loss, acc

def train(model, trn_dl, tst_dl, lr=1e-4, epochs=20,
          trn_batches=None, tst_batches=None):
    # histories
    loss_hist, acc_hist = [], []
    # optimizer
    opt = optim.Adam(model.parameters(), lr=lr)
    # use the GPU if available
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # move to device
    model.to(device)
    # training loop
    for epoch in trange(epochs):
        # model in training mode
        model.train()
        # train the epoch
        train_epoch(trn_dl, model, opt, device)
        # model in evaluation mode
        model.eval()
        # evaluate the epoch on the training set
        trn_loss, trn_acc = eval_epoch(trn_dl, model, device, trn_batches)
        # evaluate the epoch on the test set
        tst_loss, tst_acc = eval_epoch(tst_dl, model, device, tst_batches)
        # save histories
        loss_hist.append([trn_loss, tst_loss])
        acc_hist.append([trn_acc, tst_acc])
        # print progress
        print(f'E{epoch:02} '
              f'loss=[{trn_loss:6.2f},{tst_loss:6.2f}] '
              f'acc=[{trn_acc:5.2f},{tst_acc:5.2f}]')
    return loss_hist, acc_hist
```
## 4. Architecture
```
class CNN(nn.Module):

    def __init__(self, n_in_channels=1, n_classes=NUM_CLASSES, n_channel=32):
        super().__init__()
        self.cnn = nn.Sequential(
            # conv block 1
            nn.Conv2d(in_channels=n_in_channels,
                      out_channels=n_channel,
                      kernel_size=3),
            nn.BatchNorm2d(n_channel),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            # conv block 2
            nn.Conv2d(in_channels=n_channel,
                      out_channels=n_channel,
                      kernel_size=3),
            nn.BatchNorm2d(num_features=n_channel),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            # conv block 3
            nn.Conv2d(in_channels=n_channel,
                      out_channels=2*n_channel,
                      kernel_size=3),
            nn.BatchNorm2d(num_features=2*n_channel),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
            # conv block 4
            nn.Conv2d(in_channels=2*n_channel,
                      out_channels=2*n_channel,
                      kernel_size=3),
            nn.BatchNorm2d(num_features=2*n_channel),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.flat = nn.Flatten()
        self.fc = nn.Linear(2*n_channel, n_classes)

    def forward(self, x):
        x = self.cnn(x)
        x = self.pool(x)
        x = self.flat(x)
        x = self.fc(x)
        return x

def build_cnn():
    return CNN()

cnn = build_cnn()
cnn

x = torch.zeros(1, 1, 201, 81)
y = cnn(x)
print(f'{x.shape} => {y.shape}')

summary(cnn, (1, 1, 201, 81), device='cpu', verbose=0,
        col_names=['input_size', 'output_size', 'num_params'])
```
## 5. Training
```
def train_model(spectrogram, build_model, lr=1e-4, epochs=5):
    set_seed()
    trn_dl = build_dl('training', spectrogram, shuffle=True)
    val_dl = build_dl('validation', spectrogram, shuffle=False)
    model = build_model()
    loss_hist, acc_hist = train(
        model, trn_dl, val_dl, lr=lr, epochs=epochs)
    return loss_hist, acc_hist

train_model(spectrogram='spec', build_model=build_cnn)
```
## 6. Assignment
Adapt a state-of-the-art architecture and train a model with one of the spectrogram variants.
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<sup>Toggle cell visibility <a href="javascript:code_toggle()">here</a>.</sup>''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import sympy as sym
import scipy.signal as signal
from ipywidgets import widgets, interact
import control as cn
```
## Root locus
The root locus is a plot of the locations of the closed-loop system poles as a chosen parameter (typically the gain) varies. The curves start at the open-loop poles and end at the open-loop zeros (or at infinity). The location of the closed-loop poles not only indicates system stability, but also other closed-loop response properties such as overshoot, rise time and settling time.
---
### How to use this interactive example?
1. Select the plant type: *P0* (proportional, zeroth order), *P1* (proportional, first order), *I0* (integral, zeroth order) or *I1* (integral, first order). The transfer function of the P0 plant is $k_p$ (in this example $k_p=2$), of the P1 plant $\frac{k_p}{\tau s+1}$ (in this example $k_p=1$ and $\tau=2$), of the I0 plant $\frac{k_i}{s}$ (in this example $k_i=\frac{1}{10}$) and of the I1 plant $\frac{k_i}{s(\tau s +1)}$ (in this example $k_i=1$ and $\tau=10$).
2. Select the control algorithm type by clicking the *P*, *PI*, *PD* or *PID* button.
3. Use the sliders to change the proportional ($K_p$), integral ($T_i$) and derivative ($T_d$) gains.
4. Use the $t_{max}$ slider to change the range of values shown on the x axis.
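The mechanics can also be checked numerically. As a minimal sketch (assuming the I1 plant $\frac{1}{s(10s+1)}$ described above under pure proportional control with unity feedback), the closed-loop poles are the roots of the characteristic polynomial $10s^2 + s + K_p$:

```python
import numpy as np

# unity feedback around Kp / (s(10s+1)) gives the characteristic
# polynomial 10 s^2 + s + Kp; sweep the gain Kp
for Kp in (0.001, 0.025, 0.5):
    print(Kp, np.roots([10.0, 1.0, Kp]))
```

For small gains the two poles are real and close to the open-loop poles at $0$ and $-0.1$; they meet at $K_p = 1/40$ and then split into a complex-conjugate pair, which is exactly the branch structure the interactive root-locus plot traces.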
```
A = 10
a=0.1
s, P, I, D = sym.symbols('s, P, I, D')
obj = 1/(A*s)
PID = P + P/(I*s) + P*D*s#/(a*D*s+1)
system = obj*PID/(1+obj*PID)
num = [sym.fraction(system.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[0], gen=s)))]
den = [sym.fraction(system.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system.factor())[1], gen=s)))]
system_func_open = obj*PID
num_open = [sym.fraction(system_func_open.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[0], gen=s)))]
den_open = [sym.fraction(system_func_open.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[1], gen=s)))]
# make figure
fig = plt.figure(figsize=(9.8, 4), num='Root locus')
plt.subplots_adjust(wspace=0.3)
# add axes
ax = fig.add_subplot(121)
ax.grid(which='both', axis='both', color='lightgray')
ax.set_title('Time response')
ax.set_xlabel('t [s]')
ax.set_ylabel('input, output')
rlocus = fig.add_subplot(122)
# plot the step function and responses (initialisation)
input_plot, = ax.plot([], [], 'C0', lw=1, label='input signal')
response_plot, = ax.plot([], [], 'C1', lw=2, label='output signal')
ax.legend()
rlocus_plot, = rlocus.plot([], [], 'r')
plt.show()
system_open = None
system_close = None
def update_plot(KP, TI, TD, Time_span):
global num, den, num_open, den_open
global system_open, system_close
num_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num]
den_temp = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den]
system = signal.TransferFunction(num_temp, den_temp)
system_close = system
num_temp_open = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in num_open]
den_temp_open = [float(i.subs(P,KP).subs(I,TI).subs(D,TD)) for i in den_open]
system_open = signal.TransferFunction(num_temp_open, den_temp_open)
rlocus.clear()
r, k, xlim, ylim = cn.root_locus_modified(system_open, Plot=False)
# r, k = cn.root_locus(system_open, Plot=False)
#rlocus.scatter(r)
#plot closed loop poles and zeros
poles = np.roots(system.den)
rlocus.plot(np.real(poles), np.imag(poles), 'kx')
zeros = np.roots(system.num)
if zeros.size > 0:
rlocus.plot(np.real(zeros), np.imag(zeros), 'ko', alpha=0.5)
# plot open loop poles and zeros
poles = np.roots(system_open.den)
rlocus.plot(np.real(poles), np.imag(poles), 'x', alpha=0.5)
zeros = np.roots(system_open.num)
if zeros.size > 0:
rlocus.plot(np.real(zeros), np.imag(zeros), 'o')
#plot root locus
for index, col in enumerate(r.T):
rlocus.plot(np.real(col), np.imag(col), 'b', alpha=0.5)
rlocus.set_title('Root locus')
rlocus.set_xlabel('Re')
rlocus.set_ylabel('Im')
rlocus.grid(which='both', axis='both', color='lightgray')
rlocus.axhline(linewidth=.3, color='g')
rlocus.axvline(linewidth=.3, color='g')
rlocus.set_ylim(ylim)
rlocus.set_xlim(xlim)
time = np.linspace(0, Time_span, 300)
u = np.ones_like(time)
u[0] = 0
time, response = signal.step(system, T=time)
response_plot.set_data(time, response)
input_plot.set_data(time, u)
ax.set_ylim([min([np.min(u), min(response),-.1]),min(100,max([max(response)*1.05, 1, 1.05*np.max(u)]))])
ax.set_xlim([-0.1,max(time)])
plt.show()
controller_ = PID
object_ = obj
def calc_tf():
global num, den, controller_, object_, num_open, den_open
system_func = object_*controller_/(1+object_*controller_)
num = [sym.fraction(system_func.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[0], gen=s)))]
den = [sym.fraction(system_func.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func.factor())[1], gen=s)))]
system_func_open = object_*controller_
num_open = [sym.fraction(system_func_open.factor())[0].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[0], gen=s)))]
den_open = [sym.fraction(system_func_open.factor())[1].expand().coeff(s, i) for i in reversed(range(1+sym.degree(sym.fraction(system_func_open.factor())[1], gen=s)))]
update_plot(Kp_widget.value, Ti_widget.value, Td_widget.value, time_span_widget.value)
def transfer_func(controller_type):
global controller_
proportional = P
integral = P/(I*s)
differential = P*D*s/(a*D*s+1)
if controller_type =='P':
controller_func = proportional
Kp_widget.disabled=False
Ti_widget.disabled=True
Td_widget.disabled=True
elif controller_type =='PI':
controller_func = proportional+integral
Kp_widget.disabled=False
Ti_widget.disabled=False
Td_widget.disabled=True
elif controller_type == 'PD':
controller_func = proportional+differential
Kp_widget.disabled=False
Ti_widget.disabled=True
Td_widget.disabled=False
else:
controller_func = proportional+integral+differential
Kp_widget.disabled=False
Ti_widget.disabled=False
Td_widget.disabled=False
controller_ = controller_func
calc_tf()
def transfer_func_obj(object_type):
global object_
if object_type == 'P0':
object_ = 2
elif object_type == 'P1':
object_ = 1/(2*s+1)
elif object_type == 'I0':
object_ = 1/(10*s)
elif object_type == 'I1':
object_ = 1/(s*(10*s+1))
calc_tf()
style = {'description_width': 'initial'}
def buttons_controller_clicked(event):
controller = buttons_controller.options[buttons_controller.index]
transfer_func(controller)
buttons_controller = widgets.ToggleButtons(
options=['P', 'PI', 'PD', 'PID'],
description='Select controller type:',
disabled=False,
style=style)
buttons_controller.observe(buttons_controller_clicked)
def buttons_object_clicked(event):
object_ = buttons_object.options[buttons_object.index]
transfer_func_obj(object_)
buttons_object = widgets.ToggleButtons(
options=['P0', 'P1', 'I0', 'I1'],
description='Select plant type:',
disabled=False,
style=style)
buttons_object.observe(buttons_object_clicked)
Kp_widget = widgets.FloatLogSlider(value=.5,min=-3,max=2.1,step=.001,description=r'\(K_p\)',
disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')
Ti_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.8,step=.001,description=r'\(T_{i} \)',
disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')
Td_widget = widgets.FloatLogSlider(value=1.,min=-3,max=1.8,step=.001,description=r'\(T_{d} \)',
disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.3f')
time_span_widget = widgets.FloatSlider(value=10.,min=.5,max=50.,step=0.1,description=r'\(t_{max} \)',
disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1f')
transfer_func(buttons_controller.options[buttons_controller.index])
transfer_func_obj(buttons_object.options[buttons_object.index])
display(buttons_object)
display(buttons_controller)
interact(update_plot, KP=Kp_widget, TI=Ti_widget, TD=Td_widget, Time_span=time_span_widget);
```
| github_jupyter |
# Introduction to gQuant
**gQuant** is a set of open-source examples for Quantitative Analysis tasks:
- Data preparation & feature engineering
- Alpha seeking modeling
- Technical indicators
- Backtesting
It is GPU-accelerated by leveraging [**RAPIDS.ai**](https://rapids.ai) technology, and has Multi-GPU and Multi-Node support.
gQuant computing components are oriented around its plugins and task graph.
## Download example datasets
Before getting started, let's download the example datasets if not present.
```
! ((test ! -f './data/stock_price_hist.csv.gz' || test ! -f './data/security_master.csv.gz') && \
cd .. && bash download_data.sh) || echo "Dataset is already present. No need to re-download it."
```
## About this notebook
In this tutorial, we are going to use gQuant to do a simple quant job. The job tasks are listed below:
1. load csv stock data.
2. filter out the stocks that have an average volume smaller than 50.
3. sort the stock symbols and datetime.
4. in two branches, compute the mean volume and mean return.
5. add rate of return as a feature into the table.
6. read the file containing the stock symbol names, and join the computed dataframes.
7. output the result in csv files.
## TaskGraph playground
Run the following gQuant code to start an empty TaskGraph, where a computation graph can be created. You can follow the steps listed below.
```
import sys; sys.path.insert(0, '..')
from gquant.dataframe_flow import TaskGraph
task_graph = TaskGraph()
task_graph.draw()
```
## Step by Step to build your first task graph
### Create Task node to load the included stock csv file
<img src="images/loader_csv.gif" align="center">
### Explore the data and visualize it
<img src='images/explore_data.gif' align='center'>
### Clean up the Task nodes for next steps
<img src='images/clean.gif' align='center'>
### Filter the data and compute the rate of return feature
<img src='images/get_return_feature.gif' align='center'>
### Save current TaskGraph for a composite Task node
<img src='images/add_composite_node.gif' align='center'>
### Clean up the redundant feature computation Task nodes
<img src='images/clean_up_feature.gif' align='center'>
### Compute the average volume and returns
<img src='images/average.gif' align='center'>
### Dump the dataframe to csv files
<img src='images/csv_out.gif' align='center'>
In case you cannot follow along, you can load the tutorial task graphs from file. The first one is the graph that calculates the return feature.
```
task_graph = TaskGraph.load_taskgraph('../taskgraphs/get_return_feature.gq.yaml')
task_graph.draw()
```
Load the full graph and click on the `run` button to see the result
```
task_graph = TaskGraph.load_taskgraph('../taskgraphs/tutorial_intro.gq.yaml')
task_graph.draw()
```
## About Task graphs, nodes and plugins
Quant processing operators are defined as nodes that operate on **cuDF**/**dask_cuDF** dataframes.
A **task graph** is a list of tasks composed of gQuant nodes.
The cell below contains the task graph described before.
```
import warnings; warnings.simplefilter("ignore")
csv_average_return = 'average_return.csv'
csv_average_volume = 'average_volume.csv'
csv_file_path = './data/stock_price_hist.csv.gz'
csv_name_file_path = './data/security_master.csv.gz'
from gquant.dataframe_flow import TaskSpecSchema
# load csv stock data
task_csvdata = {
TaskSpecSchema.task_id: 'stock_data',
TaskSpecSchema.node_type: 'CsvStockLoader',
TaskSpecSchema.conf: {'file': csv_file_path},
TaskSpecSchema.inputs: {}
}
# filter out the stocks that has average volume smaller than 50
task_minVolume = {
TaskSpecSchema.task_id: 'volume_filter',
TaskSpecSchema.node_type: 'ValueFilterNode',
TaskSpecSchema.conf: [{'min': 50.0, 'column': 'volume'}],
TaskSpecSchema.inputs: {'in': 'stock_data.cudf_out'}
}
# sort the stock symbols and datetime
task_sort = {
TaskSpecSchema.task_id: 'sort_node',
TaskSpecSchema.node_type: 'SortNode',
TaskSpecSchema.conf: {'keys': ['asset', 'datetime']},
TaskSpecSchema.inputs: {'in': 'volume_filter.out'}
}
# add rate of return as a feature into the table
task_addReturn = {
TaskSpecSchema.task_id: 'add_return_feature',
TaskSpecSchema.node_type: 'ReturnFeatureNode',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {'stock_in': 'sort_node.out'}
}
# read the stock symbol name file and join the computed dataframes
task_stockSymbol = {
TaskSpecSchema.task_id: 'stock_name',
TaskSpecSchema.node_type: 'StockNameLoader',
TaskSpecSchema.conf: {'file': csv_name_file_path },
TaskSpecSchema.inputs: {}
}
# In two branches, compute the mean volume and mean return separately
task_volumeMean = {
TaskSpecSchema.task_id: 'average_volume',
TaskSpecSchema.node_type: 'AverageNode',
TaskSpecSchema.conf: {'column': 'volume'},
TaskSpecSchema.inputs: {'stock_in': 'add_return_feature.stock_out'}
}
task_returnMean = {
TaskSpecSchema.task_id: 'average_return',
TaskSpecSchema.node_type: 'AverageNode',
TaskSpecSchema.conf: {'column': 'returns'},
TaskSpecSchema.inputs: {'stock_in': 'add_return_feature.stock_out'}
}
task_leftMerge1 = {
TaskSpecSchema.task_id: 'left_merge1',
TaskSpecSchema.node_type: 'LeftMergeNode',
TaskSpecSchema.conf: {'column': 'asset'},
TaskSpecSchema.inputs: {'left': 'average_return.stock_out',
'right': 'stock_name.stock_name'}
}
task_leftMerge2 = {
TaskSpecSchema.task_id: 'left_merge2',
TaskSpecSchema.node_type: 'LeftMergeNode',
TaskSpecSchema.conf: {'column': 'asset'},
TaskSpecSchema.inputs: {'left': 'average_volume.stock_out',
'right': 'stock_name.stock_name'}
}
# output the result in csv files
task_outputCsv1 = {
TaskSpecSchema.task_id: 'output_csv1',
TaskSpecSchema.node_type: 'OutCsvNode',
TaskSpecSchema.conf: {'path': csv_average_return},
TaskSpecSchema.inputs: {'df_in': 'left_merge1.merged'}
}
task_outputCsv2 = {
TaskSpecSchema.task_id: 'output_csv2',
TaskSpecSchema.node_type: 'OutCsvNode',
TaskSpecSchema.conf: {'path': csv_average_volume },
TaskSpecSchema.inputs: {'df_in': 'left_merge2.merged'}
}
```
In Python, a gQuant task-spec is defined as a dictionary with the following fields:
- `id`
- `type`
- `conf`
- `inputs`
- `filepath`
- `module`
As a best practice, we recommend using the `TaskSpecSchema` class for these fields, instead of strings.
The `id` for a given task must be unique within a task graph. To use the result(s) of other task(s) as input(s) of a different task, we use the id(s) of the former task(s) in the `inputs` field of the next task.
The `type` field contains the node type to use for the compute task. gQuant includes a collection of node classes. These can be found in `gquant.plugin_nodes`. Click [here](#node_class_example) to see a gQuant node class example.
The `conf` field is used to parameterise a task. It lets you access user-set parameters within a plugin (such as `self.conf['min']` in the example above). Each node defines the `conf` json schema. The gQuant UI can use this schema to generate the proper form UI for the inputs. It is recommended to use the UI to configure the `conf`.
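To make the role of the `conf` schema concrete, here is a hypothetical sketch (not gQuant's actual API): a node's conf schema expressed as a plain JSON-schema-style dict, plus a minimal check that a given conf provides the required keys with compatible types.

```python
# Hypothetical sketch, not gQuant's real schema machinery: a JSON-schema-style
# dict describing a min-value filter's conf, and a hand-rolled validator.
conf_schema = {
    'type': 'object',
    'properties': {
        'column': {'type': 'string'},
        'min': {'type': 'number'},
    },
    'required': ['column', 'min'],
}

def check_conf(conf, schema):
    """Return a list of problems; an empty list means the conf satisfies the schema."""
    type_map = {'string': str, 'number': (int, float)}
    problems = [k for k in schema['required'] if k not in conf]
    problems += [k for k, spec in schema['properties'].items()
                 if k in conf and not isinstance(conf[k], type_map[spec['type']])]
    return problems

assert check_conf({'column': 'volume', 'min': 50.0}, conf_schema) == []
assert check_conf({'column': 'volume'}, conf_schema) == ['min']
```

A schema like this is what lets the UI render a form with the right field types before the task ever runs.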
The `filepath` field is used to specify a python module where a custom plugin is defined. It is optional if the plugin is in `plugin_nodes` directory, and mandatory when the plugin is somewhere else. In a different tutorial, we will learn how to create custom plugins.
The `module` field is optional; it tells gQuant the name of the module that the node type comes from. If it is not specified, gQuant searches for the node type among all the customized modules.
A custom node schema will look something like this:
```
custom_task = {
TaskSpecSchema.task_id: 'custom_calc',
TaskSpecSchema.node_type: 'CustomNode',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: ['some_other_node'],
TaskSpecSchema.filepath: 'custom_nodes.py'
}
```
Below, we compose our task graph and visualize it as a graph.
```
from gquant.dataframe_flow import TaskGraph
# list of nodes composing the task graph
task_list = [
task_csvdata, task_minVolume, task_sort, task_addReturn,
task_stockSymbol, task_volumeMean, task_returnMean,
task_leftMerge1, task_leftMerge2,
task_outputCsv1, task_outputCsv2]
task_graph = TaskGraph(task_list)
task_graph.draw(show='ipynb')
```
We can visualize the ports by setting `show_ports` to `True`
```
task_graph.draw(show='ipynb', show_ports=True)
```
It is recommended to visualize the graph with the gQuant widget, so you can interact with it, by calling `draw` without arguments
```
task_graph.draw()
```
We will use `save_taskgraph` method to save the task graph to a **yaml file**.
That will allow us to re-use it in the future.
```
task_graph_file_name = '01_tutorial_task_graph.gq.yaml'
task_graph.save_taskgraph(task_graph_file_name)
```
Here is a snippet of the content in the resulting yaml file:
```
%%bash -s "$task_graph_file_name"
head -n 19 $1
```
The yaml file describes the computation tasks. We can load it and visualize it as a graph.
```
task_graph = TaskGraph.load_taskgraph(task_graph_file_name)
task_graph.draw()
```
## Building a task graph
Running the task graph is the next logical step. Nevertheless, it can optionally be built before running it.
By calling the `build` method, the graph is traversed without running the dataframe computations. This can be useful to inspect the column names and types, validate that the plugins can be instantiated, and check for errors.
The output of `build` is a dictionary containing an instance of each task.
In the example below, we inspect the column names and types for the inputs and outputs of the `left_merge1` task:
```
from pprint import pprint
task_graph.build()
print('Output of build task graph are instances of each task in a dictionary:\n')
print(str(task_graph))
# Output columns in 'left_merge_1' node
print('Output columns in outgoing dataframe:\n')
pprint(task_graph['left_merge1'].columns_setup())
```
## Running a task graph
To execute the graph computations, we use the `run` method. If the `Output_Collector` task node is not added to the graph, an output list can be fed to the `run` method. The result can be displayed in rich mode if the `formated` argument is turned on.
`run` also takes an optional `replace` argument, which is used and explained later on.
```
outputs = ['stock_data.cudf_out', 'output_csv1.df_out', 'output_csv2.df_out']
task_graph.run(outputs=outputs, formated=True)
```
The result can be used as a tuple or dictionary.
```
result = task_graph.run(outputs=outputs)
csv_data_df, csv_1_df, csv_2_df = result
result['output_csv2.df_out']
```
We can profile each computation node's running time by turning on the profiler.
```
outputs = ['stock_data.cudf_out', 'output_csv1.df_out', 'output_csv2.df_out']
csv_data_df, csv_1_df, csv_2_df = task_graph.run(outputs=outputs, profile=True)
```
Most of the time is spent on the CSV file processing. This is because we have to convert the time strings to the proper format on the CPU. Let's inspect the content of `csv_1_df` and `csv_2_df`.
```
print('csv_1_df content:')
print(csv_1_df)
print('\ncsv_2_df content:')
print(csv_2_df)
```
Also, please notice that two resulting csv files have been created:
- average_return.csv
- average_volume.csv
```
print('\ncsv files created:')
!find . -iname "average_*.csv"
```
## Subgraphs
A nice feature of task graphs is that we can evaluate any **subgraph**. For instance, if you are only interested in the `average volume` result, you can run only the tasks which are relevant for that computation.
If we do not want to re-run tasks, we can also use the `replace` argument of the `run` function with a `load` option.
The `replace` argument needs to be a dictionary where each key is the task/node id. The values are a replacement task-spec dictionary (i.e. each key is a spec overload, and its value is what to overload with).
In the example below, instead of re-running the `stock_data` node to load a csv file into a `cudf` dataframe, we will use its dataframe output to load from it.
```
replace = {
'stock_data': {
'load': {
'cudf_out': csv_data_df
},
'save': True
}
}
(volume_mean_df, ) = task_graph.run(outputs=['average_volume.stock_out'],
replace=replace)
print(volume_mean_df)
```
As a convenience, we can save checkpoints for any of the nodes on disk, and re-load them if needed. Only the `save` option needs to be set to `True`. This step can take a while, depending on disk IO speed.
In the example above, the `replace` spec directs `run` to save the `stock_data` output on disk. If `load` were a boolean, the data would be loaded from disk, presuming it was saved there in a prior run.
The default directory for saving is `<current_workdir>/.cache/<node_id>.hdf5`.
`replace` is also used to override parameters in the tasks. For instance, if we wanted to use the value `40.0` instead of `50.0` in the task `volume_filter`, we would do something similar to:
```
replace_spec = {
'volume_filter': {
'conf': {
'min': 40.0
}
},
'some_task': etc...
}
```
```
replace = {'stock_data': {'load': True},
'average_return': {'save': True}}
(return_mean_df, ) = task_graph.run(outputs=['average_return.stock_out'], replace=replace)
print('Return mean Dataframe:\n')
print(return_mean_df)
```
Now, we might want to load the `return_mean_df` from the saved file and evaluate only tasks that we are interested in.
In the cells below, we compare different load approaches:
- in-memory,
- from disk,
- and not loading at all.
When working interactively, or in situations requiring iterative and explorative task graphs, a significant amount of time is saved by re-loading only the data that does not need to be recalculated.
```
%%time
print('Using in-memory dataframes for load:')
replace = {'stock_data': {'load': {
'cudf_out': csv_data_df
}},
'average_return': {'load':
{'stock_out': return_mean_df}}
}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
%%time
print('Using cached dataframes on disk for load:')
replace = {'stock_data': {'load': True},
'average_return': {'load': True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
%%time
print('Re-running dataframes calculations instead of using load:')
replace = {'stock_data': {'load': True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
```
An idiomatic way to save data, if not on disk, or load data, if present on disk, is demonstrated below.
```
%%time
import os
loadsave_csv_data = 'load' if os.path.isfile('./.cache/stock_data.hdf5') else 'save'
loadsave_return_mean = 'load' if os.path.isfile('./.cache/average_return.hdf5') else 'save'
replace = {'stock_data': {loadsave_csv_data: True},
'average_return': {loadsave_return_mean: True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
```
## Delete temporary files
A few cells above, we generated a .yaml file containing the example task graph, and also a couple of CSV files.
Let's keep our directory clean, and delete them.
```
%%bash -s "$task_graph_file_name" "$csv_average_return" "$csv_average_volume"
rm -f $1 $2 $3
```
<a id='node_class_example'></a>
---
## Node class example
Implementing custom nodes in gQuant is very straightforward.
Data scientists only need to override five methods of the parent class `Node`:
- `init`
- `columns_setup`
- `ports_setup`
- `conf_schema`
- `process`
The `init` method is usually used to define the required column names.
`ports_setup` defines the input and output ports of the node.
The `columns_setup` method calculates the output column names and types.
The `conf_schema` method defines the JSON schema for the node conf, so the client can generate the proper UI for it.
The `process` method takes input dataframes and computes the output dataframe.
In this way, dataframes are strongly typed, and errors can be detected early before the time-consuming computation happens.
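As a rough sketch of that five-method contract (a hypothetical stand-in on plain Python lists, not gQuant's actual `Node` base class), a minimal value-filter node could look like this:

```python
# Hypothetical stand-in, not gQuant's real base class: a stripped-down Node
# illustrating the five-method contract, with a value-filter subclass that
# operates on plain lists of dicts instead of cuDF dataframes.
class Node:
    def __init__(self, conf):
        self.conf = conf
        self.init()

    def init(self):                # required-column bookkeeping
        pass

    def ports_setup(self):         # input and output port names
        return {'inputs': ['in'], 'outputs': ['out']}

    def conf_schema(self):         # JSON schema for the conf
        return {}

    def columns_setup(self):       # output column names/types
        return {}

    def process(self, inputs):     # dataframes in, dataframes out
        raise NotImplementedError


class MinValueFilter(Node):
    def init(self):
        self.required = {self.conf['column']}

    def columns_setup(self):
        # output columns are identical to the input columns
        return {'out': 'same as in'}

    def process(self, inputs):
        rows = inputs['in']
        col, lo = self.conf['column'], self.conf['min']
        return {'out': [r for r in rows if r[col] >= lo]}


node = MinValueFilter({'column': 'volume', 'min': 50.0})
out = node.process({'in': [{'volume': 10.0}, {'volume': 80.0}]})
assert out['out'] == [{'volume': 80.0}]
```

The real implementation differs in its port and column plumbing, but the division of responsibilities is the same.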
The implementation details of `ValueFilterNode` can be inspected below:
```
import inspect
from gquant.plugin_nodes.transform import ValueFilterNode
print(inspect.getsource(ValueFilterNode))
```
| github_jupyter |
# Symbolic Hermitian Commutator
```
import sympy
from sympy.codegen.ast import Assignment
```
Define symbols for the minimum number of real variables required to store the 3x3 Hermitian matrices we need to calculate (3 real diagonal entries plus 3 complex off-diagonal entries, i.e. 9 real numbers per matrix):
$\dfrac{\partial f}{\partial t} = \imath \left[ H, f \right]$
Matrix entries for $f$
```
fee_R, fuu_R, ftt_R = sympy.symbols('fee_R fuu_R ftt_R', real=True)
feu_R, fet_R, fut_R = sympy.symbols('feu_R fet_R fut_R', real=True)
feu_I, fet_I, fut_I = sympy.symbols('feu_I fet_I fut_I', real=True)
```
Matrix entries for $\dfrac{\partial f}{\partial t}$
```
Dfee_R, Dfuu_R, Dftt_R = sympy.symbols('Dfee_R Dfuu_R Dftt_R', real=True)
Dfeu_R, Dfet_R, Dfut_R = sympy.symbols('Dfeu_R Dfet_R Dfut_R', real=True)
Dfeu_I, Dfet_I, Dfut_I = sympy.symbols('Dfeu_I Dfet_I Dfut_I', real=True)
```
Matrix entries for $H$
```
Hee_R, Huu_R, Htt_R = sympy.symbols('Hee_R Huu_R Htt_R', real=True)
Heu_R, Het_R, Hut_R = sympy.symbols('Heu_R Het_R Hut_R', real=True)
Heu_I, Het_I, Hut_I = sympy.symbols('Heu_I Het_I Hut_I', real=True)
```
Define matrices $f$ and $H$ to be Hermitian by construction:
```
F = sympy.Matrix([[fee_R, feu_R + sympy.I * feu_I, fet_R + sympy.I * fet_I],
[feu_R - sympy.I * feu_I, fuu_R, fut_R + sympy.I * fut_I],
[fet_R - sympy.I * fet_I, fut_R - sympy.I * fut_I, ftt_R]])
H = sympy.Matrix([[Hee_R, Heu_R + sympy.I * Heu_I, Het_R + sympy.I * Het_I],
[Heu_R - sympy.I * Heu_I, Huu_R, Hut_R + sympy.I * Hut_I],
[Het_R - sympy.I * Het_I, Hut_R - sympy.I * Hut_I, Htt_R]])
```
Calculate commutator $[H,f] = H \cdot f - f \cdot H$
```
Commutator = H*F - F*H
```
Calculate $\dfrac{\partial f}{\partial t} = \imath \left[ H, f \right]$
```
dFdt = sympy.I * Commutator
```
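As a sanity check (not part of the original derivation): for Hermitian $H$ and $f$, the commutator $[H, f]$ is anti-Hermitian, so $\imath [H, f]$ is again Hermitian. This can be verified symbolically on a 2x2 case built the same way:

```python
import sympy

# Sanity check: for Hermitian H and f, i*[H, f] equals its own conjugate
# transpose. Verified symbolically on a 2x2 Hermitian pair.
a_R, b_R, c_R, c_I = sympy.symbols('a_R b_R c_R c_I', real=True)
p_R, q_R, r_R, r_I = sympy.symbols('p_R q_R r_R r_I', real=True)
H2 = sympy.Matrix([[a_R, c_R + sympy.I * c_I],
                   [c_R - sympy.I * c_I, b_R]])
F2 = sympy.Matrix([[p_R, r_R + sympy.I * r_I],
                   [r_R - sympy.I * r_I, q_R]])
dF2dt = sympy.I * (H2 * F2 - F2 * H2)
# dF2dt minus its conjugate transpose should simplify to the zero matrix
assert sympy.simplify(dF2dt - dF2dt.H) == sympy.zeros(2, 2)
```

This is why storing only the real and imaginary parts listed above loses no information: $\dfrac{\partial f}{\partial t}$ is guaranteed Hermitian.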
Assign elements of the $\dfrac{\partial f}{\partial t}$ matrix to the real values we need to store the matrix
```
iE = 0
iU = 1
iT = 2
dfee_R_dt = Assignment(Dfee_R, sympy.re(dFdt[iE,iE]))
dfuu_R_dt = Assignment(Dfuu_R, sympy.re(dFdt[iU,iU]))
dftt_R_dt = Assignment(Dftt_R, sympy.re(dFdt[iT,iT]))
dfeu_R_dt = Assignment(Dfeu_R, sympy.re(dFdt[iE,iU]))
dfet_R_dt = Assignment(Dfet_R, sympy.re(dFdt[iE,iT]))
dfut_R_dt = Assignment(Dfut_R, sympy.re(dFdt[iU,iT]))
dfeu_I_dt = Assignment(Dfeu_I, sympy.im(dFdt[iE,iU]))
dfet_I_dt = Assignment(Dfet_I, sympy.im(dFdt[iE,iT]))
dfut_I_dt = Assignment(Dfut_I, sympy.im(dFdt[iU,iT]))
```
Define a function to return a code string for calculating the independent variables we need to store $\dfrac{\partial f}{\partial t}$
```
def get_rhs_code():
lines = []
lines.append(sympy.ccode(dfee_R_dt))
lines.append(sympy.ccode(dfuu_R_dt))
lines.append(sympy.ccode(dftt_R_dt))
lines.append(sympy.ccode(dfeu_R_dt))
lines.append(sympy.ccode(dfet_R_dt))
lines.append(sympy.ccode(dfut_R_dt))
lines.append(sympy.ccode(dfeu_I_dt))
lines.append(sympy.ccode(dfet_I_dt))
lines.append(sympy.ccode(dfut_I_dt))
for l in lines:
print(l + "\n")
return "\n".join(lines)
code_string = get_rhs_code()
```
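For reference, `sympy.ccode` applied to an `Assignment` emits a single C statement. A tiny example, independent of the matrices above:

```python
import sympy
from sympy.codegen.ast import Assignment

# Minimal illustration of the ccode/Assignment pattern used above:
# one symbolic assignment rendered as a C statement.
x, y = sympy.symbols('x y', real=True)
stmt = sympy.ccode(Assignment(y, 2 * x + 1))
assert stmt.startswith('y = ') and stmt.endswith(';')
```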
| github_jupyter |
```
!pip install -q jax==0.2.11 #jaxlib==0.1.64
import jax.numpy as np
from jax import grad, jit, vmap
from jax import random
# for model stuff
import jax.experimental.optimizers as optimizers
import jax.experimental.stax as stax
from jax import jit
import jax
import jax.numpy as np
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
!pip install -q IMNN
!pip uninstall tensorflow -y -q
!pip install -Uq tfp-nightly[jax] > /dev/null
!pip install -q tensorflow-gpu==2.2.0
!pip install -q jax-cosmo
import tensorflow as tf
tf.__version__
import jax
print("jax version: ", jax.__version__)
import jax.numpy as np
import tensorflow_probability.substrates.jax as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz
rng = jax.random.PRNGKey(2)
import seaborn as sns
sns.set()
# for model stuff
import jax.experimental.optimizers as optimizers
import jax.experimental.stax as stax
from jax import jit
# for imnn
import imnn
import imnn.lfi
#import jax_cosmo as jc
import matplotlib.pyplot as plt
import tensorflow_probability
tfp = tensorflow_probability.substrates.jax
rng = jax.random.PRNGKey(0)
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
N=20
def scipy_compute_r2(N):
_Di = np.tile(toeplitz(np.arange(N)), (N, N))
_Dj = np.concatenate(
[np.concatenate(
[np.tile(np.abs(i - j),(N, N))
for i in range(N)],
axis=0)
for j in range(N)],
axis=1)
_distance_squared = _Di * _Di + _Dj * _Dj
return _distance_squared
def compute_r2(N):
_r2 = np.tile(np.abs(np.expand_dims(np.arange(N), 0)
- np.expand_dims(np.arange(N), 1)), (N, N)) ** 2. + np.abs(np.expand_dims(np.repeat(np.arange(N), N), 0)
- np.expand_dims(np.repeat(np.arange(N), N), 1)) ** 2.
return _r2
r2 = compute_r2(N).astype(np.float32)
def ξ_G(β):
return np.exp(
-np.expand_dims(r2, tuple(np.arange(β.ndim)))
/ 4. / np.expand_dims(β, (-2, -1))**2.)
def get_G_field(β):
pass
def fill_zeros(k, value):
from functools import partial
def fnk(k):
return jax.lax.cond(np.less_equal(k, 1e-5), lambda _: value, lambda k: k+value, operand=k)
if len(k.shape) == 1:
return jax.vmap(fnk)(k)
else:
return jax.vmap(partial(fill_zeros, value=value))(k)
def xi_LN(r, α, β, PixelNoise=0.01):
xi = 1/(np.power(α+1e-12,2)) * (np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 1)
# Add pixel noise at zero separation:
xi = fill_zeros(xi, PixelNoise**2)
#xi[np.where(r<1e-5)] += PixelNoise**2
return xi
def dxi_LN_dalpha(r, α, β):
_deriv = 2/(α+1e-12) * np.exp(-0.25*np.power(r/β,2)) * np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 2/np.power(α+1e-12,3) * (np.exp(np.power(α,2)*np.exp(-0.25*np.power(r/β,2))) - 1)
return _deriv
def dxi_LN_dbeta(r, β, α):
return (0.5*np.power(r, 2) * np.exp(np.power(α, 2) * np.exp(-0.25 * np.power(r/β,2)) - 0.25*np.power(r/β,2)))*np.power(1./β,3)
θ_fid = np.array([1.0, 0.5], dtype='float32')
r = np.sqrt(r2)
Nsq=N**2
def dξdβ(β):
return 0.5 * r2 *(1./β**3)* np.exp(-0.25 * r2 * (1/β)**2)
def known_fisher(θ, Nsq):
α,β = θ
# get covariance
_ξ = ξ_G(β)
# get derivative
dξ_dβ = dξdβ(β)
Cinv = np.linalg.inv(_ξ)
# fisher matrix entries
Faa = 2*Nsq / α**2
Fab = (1. / α) * np.trace(Cinv @ dξ_dβ)
Fba = Fab
Fbb = 0.5 * np.trace(Cinv @ dξ_dβ @ Cinv @ dξ_dβ)
return np.array([[Faa, Fab], [Fba, Fbb]])
f = - known_fisher(θ_fid, Nsq)
print(f)
analytic_detF = np.linalg.det(f)
print(analytic_detF)
def _f_NL(α, β):
return tfd.TransformedDistribution(
distribution=tfd.MultivariateNormalTriL(
loc=np.zeros((N**2,)),
scale_tril=np.linalg.cholesky(ξ_G(β))),
bijector= tfb.Chain([
# tfb.Exp(),
# tfb.AffineScalar(
# shift=-0.5 * np.expand_dims(α, -1)**2.,
# scale=np.expand_dims(α, -1))]))
#tfb.Scale(1. / np.expand_dims(α, (-1))),
#tfb.Expm1(),
tfb.Exp(),
tfb.AffineScalar(
# shift=-0.5 * np.expand_dims(α, -1)**2.,
scale=np.expand_dims(α, -1))]))
def loglike(α, β, key):
f_NL = _f_NL(α, β)
return f_NL.log_prob(f_NL.sample(seed=key))
@jax.jit
def dlnLdθ(α, β, key):
return jax.grad(loglike, argnums=(0, 1))(α, β, key)
def numeric_F(α, β, key, n_samples):
keys = np.array(jax.random.split(key, num=n_samples))
return np.cov(np.array(jax.vmap(dlnLdθ)(np.repeat(α, n_samples), np.repeat(β, n_samples), keys)), rowvar=True)
θ_fid = np.array([0.95, 0.55], dtype=np.float32)
rng, key = jax.random.split(rng)
_a,_b = θ_fid
# F_expected = np.mean(np.array([numeric_F(np.array(_a), np.array(_b), key, 1000)
# for i in range(10)
# ]), axis=0)
F_expected = numeric_F(np.array(_a), np.array(_b), key, 2000)
detF_expected = np.linalg.det(F_expected)
print('expected F: ', F_expected)
print('expected det F: ', (detF_expected))
analytic_detF / detF_expected
keys = jax.random.split(rng, num=1000)
# Compile a function that computes the Hessian of the likelihood
hessian_loglik = jax.jit(jax.hessian(loglike, argnums=(0,1)))
# Evaluate the Hessian at the fiducial cosmology to retrieve the Fisher matrix
# This is a bit slow at first....
num_f = np.mean(np.array([-np.array(hessian_loglik(_a,_b, k)) for k in keys]), axis=0)
np.linalg.det(num_f) / analytic_detF
```
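The tiled-Toeplitz construction of `compute_r2` above can be cross-checked against a brute-force pairwise distance computation (this check is an addition, not part of the original notebook):

```python
import numpy as np
from scipy.linalg import toeplitz

# Cross-check: the tiled-Toeplitz build of the pairwise squared distances on
# an N x N pixel grid agrees with an explicit construction over all pixel pairs.
def tiled_r2(N):
    Di = np.tile(toeplitz(np.arange(N)), (N, N))   # |column difference|
    flat_row = np.arange(N * N) // N               # row index of each pixel
    Dj = np.abs(flat_row[:, None] - flat_row[None, :])  # |row difference|
    return Di * Di + Dj * Dj

def brute_r2(N):
    pts = np.array([(k // N, k % N) for k in range(N * N)])
    diff = pts[:, None, :] - pts[None, :, :]
    return (diff ** 2).sum(axis=-1)

assert np.array_equal(tiled_r2(4), brute_r2(4))
```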
# define LN class
```
# Define a log_normal field class (Florent's paper -> JAX)
class LogNormalField:
@staticmethod
def compute_rsquared(nside):
"""
Compute the correlation function of the underlying gaussian field
Parameters:
nside : int
Image is nside x nside pixels
"""
import jax.numpy as np
from scipy.linalg import toeplitz
_Di = np.tile(toeplitz(np.arange(nside)),(nside,nside))
_Dj = np.concatenate(
[np.concatenate(
[np.tile(np.abs(i-j),(nside,nside)) for i in range(nside)],
axis=0)
for j in range(nside)],axis=1)
_distance_squared = _Di*_Di+_Dj*_Dj
return _distance_squared
# The lognormal correlation function where the gaussian field has a gaussian power spectrum,
# and the gaussian correlation function xi_G.
@staticmethod
def xi_G(rsq, beta):
"""
Calculates the two-point correlation function of a gaussian field with gaussian power spectrum
Parameters:
rsq : float
separation^2
beta : float
Gaussian smoothing width of gaussian field
"""
import jax.numpy as np
# ADD IN SIGMA PARAM HERE
xi = np.exp(-0.25*rsq/(beta**2))
return xi
@staticmethod
def fill_zeros(k, value):
from functools import partial
def fnk(k):
return jax.lax.cond(np.less_equal(k, 1e-5), lambda _: value, lambda k: k+value, operand=k)
if len(k.shape) == 1:
return jax.vmap(fnk)(k)
else:
return jax.vmap(partial(LogNormalField.fill_zeros, value=value))(k)
@staticmethod
def xi_LN(r, beta, alpha, PixelNoise):
"""
Calculates the lognormal two-point correlation function
Parameters:
r : float
Pair separation
beta : float
Gaussian smoothing width of underlying gaussian field
alpha : float
Nongaussianity parameter in lognormal transformation
PixelNoise : float
Standard deviation of added noise per pixel
"""
import jax.numpy as np
xi = 1/(np.power(alpha+1e-12,2)) * (np.exp(np.power(alpha,2)*np.exp(-0.25*np.power(r/beta,2))) - 1)
# Add pixel noise at zero separation:
xi = LogNormalField.fill_zeros(xi, PixelNoise**2)
#xi[np.where(r<1e-5)] += PixelNoise**2
return xi
@staticmethod
def dxi_LN_dalpha(r, beta, alpha, PixelNoise):
import jax.numpy as np
return 2/(alpha+1e-12) * np.exp(-0.25*np.power(r/beta,2)) * np.exp(np.power(alpha,2)*np.exp(-0.25*np.power(r/beta,2))) - 2/np.power(alpha+1e-12,3) * (np.exp(np.power(alpha,2)*np.exp(-0.25*np.power(r/beta,2))) - 1)
@staticmethod
def dxi_LN_dbeta(r, beta, alpha, PixelNoise):
import jax.numpy as np
return (0.5*np.power(r,2)/np.power(beta,3)) * np.exp(-0.25*np.power(r/beta,2)) * np.exp(np.power(alpha,2)*np.exp(-0.25*np.power(r/beta,2)))
def __init__(self,Lside,rmax,nbin):
"""
Parameters:
rmax : float
Maximum pair separation considered
nbin : int
Number of bins for shell-averaged correlation function
"""
import jax.numpy as np
self.rmax = rmax
self.nbin = nbin
self.Lside = Lside
# compute the separations and indices on a grid
self.rsq = self.compute_rsquared(Lside)
self.r = np.sqrt(self.rsq)
self.bins = np.arange(nbin)*rmax/nbin
self.index = np.digitize(self.r,self.bins)
self.average_r = np.array([self.r[self.index == n].mean() for n in range(nbin) if np.sum(self.index == n)>0])
@staticmethod
def G_to_LN(gaussian, alpha):
import jax.numpy as np
# Make lognormal (variance of gaussian field is unity by construction)
# Divide by 1/alpha so that the signal-to-noise ratio is independent of alpha
return np.exp(alpha * gaussian) #1./alpha * (np.exp(alpha * gaussian-0.5*alpha**2)-1)
def run_simulation(self, key, alpha, beta, PixelNoise=None):
"""
Create a lognormal field from a gaussian field with a Gaussian correlation function
"""
# split keys, one for field and one for noise
key1,key2 = jax.random.split(key)
Lside = self.Lside
rsq = self.rsq
# Compute the Gaussian correlation function
xiG = self.xi_G(rsq,beta)
# Compute the Gaussian random field
field = (jax.random.multivariate_normal(key1, np.zeros(Lside*Lside), xiG)).reshape(Lside,Lside)
# Make lognormal (variance of gaussian field is unity by construction)
field = self.G_to_LN(field, alpha)
# Add noise
if PixelNoise is not None:
field += jax.random.normal(key2, shape=(Lside,Lside))*PixelNoise  # PixelNoise is a standard deviation, as in run_diff_simulation
return field
def pymc3_model(self, field_data, alphamin, alphamax, betamin, betamax, PixelNoise):
import numpy as np
import pymc3 as pm
LN_model = pm.Model()
Lside = self.Lside
rsq = self.rsq
zero = np.zeros(Lside*Lside)
PixelNoiseVector = PixelNoise*np.ones(Lside*Lside)
InvNoiseCovariance = np.diag(1/(PixelNoiseVector**2))
field_data = field_data.reshape(Lside*Lside)
with LN_model:
# (TLM) TODO: add in μ,σ for full BHM
# Uniform priors for unknown model parameters (alpha,beta):
alpha_p = pm.Uniform("alpha", lower=alphamin, upper=alphamax)
beta_p = pm.Uniform("beta", lower=betamin, upper=betamax)
# Compute (beta-dependent) gaussian field correlation function:
xi = pm.math.exp(-0.25*rsq/(beta_p*beta_p))
# Gaussian field values are latent variables:
gaussian = pm.MvNormal("gaussian",mu=zero,cov=xi,shape=Lside*Lside)
# Expected value of lognormal field, for given (alpha, beta, gaussian):
muLN = 1/alpha_p * (pm.math.exp(alpha_p * gaussian-0.5*alpha_p*alpha_p)-1)
# Likelihood (sampling distribution) of observations, given the mean lognormal field:
Y_obs = pm.MvNormal("Y_obs", mu=muLN, tau=InvNoiseCovariance, observed=field_data)
return LN_model
def run_diff_simulation(self, alpha, beta, PixelNoise, step, seed):
"""
Run simulations for finite differencing
"""
import numpy as np
from scipy.stats import multivariate_normal
Lside = self.Lside
rsq = self.rsq
alphap = alpha*(1+step)
alpham = alpha*(1-step)
betap = beta*(1+step)
betam = beta*(1-step)
# Compute the gaussian correlation function
xiG = self.xi_G(rsq,beta)
xiG_betap = self.xi_G(rsq,betap)
xiG_betam = self.xi_G(rsq,betam)
# Compute Gaussian random fields with the same phases
Gfield = multivariate_normal(mean=np.zeros(Lside*Lside), cov=xiG).rvs(random_state=seed).reshape(Lside,Lside)
Gfield_betap = multivariate_normal(mean=np.zeros(Lside*Lside), cov=xiG_betap).rvs(random_state=seed).reshape(Lside,Lside)
Gfield_betam = multivariate_normal(mean=np.zeros(Lside*Lside), cov=xiG_betam).rvs(random_state=seed).reshape(Lside,Lside)
# Make lognormal (variance of gaussian field is unity by construction)
field = self.G_to_LN(Gfield, alpha)
field_betap = self.G_to_LN(Gfield_betap, alpha)
field_betam = self.G_to_LN(Gfield_betam, alpha)
field_alphap = self.G_to_LN(Gfield, alphap)
field_alpham = self.G_to_LN(Gfield, alpham)
# Add noise
noise = np.random.normal(loc=0.0,scale=PixelNoise,size=(Lside,Lside))
field += noise
field_betap += noise
field_betam += noise
field_alphap += noise
field_alpham += noise
return field, field_alphap, field_alpham, field_betap, field_betam
def compute_corrfn(self,field):
"""
Compute two-point correlation function
"""
import numpy as np
index = self.index
nbin = self.nbin
# compute the correlations
correlations = np.outer(field,field)
corrfn = np.array([correlations[index==n].mean() for n in range(nbin) if len(correlations[index==n])>0])
return corrfn
def compute_corrfn_derivatives(self, field, field_alphap, field_alpham, field_betap, field_betam, step):
"""
Compute derivatives of the two-point correlation function
"""
# Compute correlation functions
corrfn = self.compute_corrfn(field)
corrfn_dalphap = self.compute_corrfn(field_alphap)
corrfn_dalpham = self.compute_corrfn(field_alpham)
corrfn_dbetap = self.compute_corrfn(field_betap)
corrfn_dbetam = self.compute_corrfn(field_betam)
# Compute derivatives by second-order central finite differences
dcorrfn_dalpha = (corrfn_dalpham - 2*corrfn + corrfn_dalphap)/(step**2)
dcorrfn_dbeta = (corrfn_dbetam - 2*corrfn + corrfn_dbetap )/(step**2)
return dcorrfn_dalpha, dcorrfn_dbeta
def covariance(self,fields):
"""
Compute covariance from a number of fields
Parameter:
fields : int
lognormal field objects contributing to the covariance matrix
"""
import numpy as np
nsims = len(fields)
nbins = self.nonzerobins
print('Number of simulations',nsims)
print('Number of non-zero pair bins',nbins)
corrfns = np.array([fields[i]['corrfn'] for i in range(nsims)])
meanxi = np.mean(corrfns,axis=0)
covxi = np.cov(corrfns.T)
return meanxi, covxi
# Utility properties
@staticmethod
def var_th(alpha, PixelNoise):
import numpy as np
return 1/np.power(alpha+1e-12,2)*(np.exp(alpha**2)-1)+PixelNoise**2
@staticmethod
def skew_th(alpha):
import numpy as np
return (np.exp(alpha**2)+2)*np.sqrt(np.exp(alpha**2)-1)
@staticmethod
def dskew_dalpha(alpha):
import numpy as np
return 2*alpha*np.exp(alpha**2) * ( np.sqrt(np.exp(alpha**2)-1) - 0.5*(np.exp(alpha**2)+2)/(np.sqrt(np.exp(alpha**2)-1)) )
@staticmethod
def kurtosis_th(alpha):
import numpy as np
return np.exp(4*alpha**2)+2*np.exp(3*alpha**2)+3*np.exp(2*alpha**2)-6
@staticmethod
def dkurtosis_dalpha(alpha):
import numpy as np
return 8*alpha*np.exp(4*alpha**2)+6*alpha*np.exp(3*alpha**2)+6*alpha*np.exp(2*alpha**2)
@staticmethod
def max(field):
import numpy as np
return np.max(field)
@staticmethod
def min(field):
import numpy as np
return np.min(field)
@staticmethod
def var(field):
import numpy as np
return np.var(field)
@staticmethod
def mean(field):
import numpy as np
return np.mean(field)
@staticmethod
def skew(field):
from scipy.stats import skew
return skew(field.flatten())
@staticmethod
def kurtosis(field):
from scipy.stats import kurtosis
return kurtosis(field.flatten())
# xi has empty bins removed. Note the number of non-empty elements
@property
def nonzerobins(self):
return len(self.average_r)
@property
def dt(self):
import numpy as np
return np.dtype([('field', np.float64, (self.Lside,self.Lside)), ('corrfn', np.float64, (self.nonzerobins))])
# end class LogNormalField
```
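The `var_th` property above corresponds to the normalized transform `f = (exp(α·G − α²/2) − 1)/α` (the variant commented out in `G_to_LN`), for which `Var[f] = (e^{α²} − 1)/α²` when `G ~ N(0, 1)`. A quick Monte Carlo sanity check of that identity (a standalone numpy sketch, not part of the class):

```python
import numpy as np

def var_th(alpha):
    # theoretical variance of the zero-mean lognormal transform
    # f = (exp(alpha*G - alpha**2/2) - 1) / alpha, with G ~ N(0, 1)
    return (np.exp(alpha**2) - 1) / alpha**2

def mc_var(alpha, n=2_000_000, seed=0):
    # Monte Carlo estimate of the same variance
    g = np.random.default_rng(seed).standard_normal(n)
    f = (np.exp(alpha * g - 0.5 * alpha**2) - 1) / alpha
    return f.var()
```

For `α = 1` the theoretical value is `e − 1 ≈ 1.718`, and the Monte Carlo estimate agrees to within the sampling error.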
# define simulator
```
Lside = N
alpha = 1.0
beta = 0.5
PixelNoise = 0.0
# Setup for correlation function
nbin = 4*Lside
ndata = 4*Lside
rmax = Lside*np.sqrt(2)
LN=LogNormalField(Lside,rmax,nbin)
field = LN.run_simulation(key, 1.0, 0.5, PixelNoise=None)
plt.imshow(np.squeeze(field))
plt.colorbar()
# simulator args
simulator_args = {'N': N, 'squeeze': False, 'pad': 2 }
# LN field distribution
def _f_NL(α, β):
return tfd.TransformedDistribution(
distribution=tfd.MultivariateNormalTriL(
loc=np.zeros((N**2,)),
scale_tril=np.linalg.cholesky(ξ_G(β))),
bijector=tfb.Chain([
tfb.Scale(1. / np.expand_dims(α, (-1))),
tfb.Expm1(),
tfb.AffineScalar(
shift=-0.5 * np.expand_dims(α, -1)**2.,
scale=np.expand_dims(α, -1))]))
# draw from the joint distribution
def simulator(rng, n, α, β,):
dist = _f_NL(α, β)
if n is not None:
return dist.sample(n, seed=rng)
else:
return dist.sample(seed=rng)
# simulator uses Florent's LN field simulator
# wrapper for IMNN and ABC sampler
def imnn_simulator(rng, θ, simulator_args=simulator_args):
A,B = θ
N = simulator_args['N']
pad = simulator_args['pad']
#noise = 0.01
def fn(key, A, B):
if simulator_args['squeeze']:
return np.expand_dims(
np.pad(
#simulator(key, None, A, B).reshape(N,N),
LN.run_simulation(key, A, B),
[pad,pad],
),
0)
else:
return (np.expand_dims(
np.expand_dims(
np.pad(
#simulator(key, None, A, B).reshape(N,N),
LN.run_simulation(key, A, B),
[pad,pad]
),
0),
0))
if A.shape == B.shape:
if len(A.shape) == 0:
return fn(rng, A, B)
else:
keys = jax.random.split(rng, num=A.shape[0] + 1)
rng = keys[0]
keys = keys[1:]
return jax.vmap(
lambda key, A, B: imnn_simulator(key, (A,B), simulator_args=simulator_args)
)(keys, A, B)
else:
if len(A.shape) > 0:
keys = jax.random.split(rng, num=A.shape[0] + 1)
rng = keys[0]
keys = keys[1:]
return jax.vmap(
lambda key, A: imnn_simulator(key, (A,B), simulator_args=simulator_args)
)(keys, A)
elif len(B.shape) > 0:
keys = jax.random.split(rng, num=B.shape[0])
return jax.vmap(
lambda key, B: imnn_simulator(key, (A,B), simulator_args=simulator_args)
)(keys, B)
pad = 2 #2**2
input_shape = (1,1, N+pad*2,N+pad*2)
print('input shape: ', input_shape)
θ_fid = np.array([0.95, 0.55], dtype=np.float32)
# IMNN params
n_s = 5000
n_d = 5000
λ = 100.0
ϵ = 0.1
n_params = 2
n_summaries = 2
```
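`compute_corrfn` above estimates the two-point function by binning every pixel-pair product by separation with `np.digitize` and averaging within each shell (dropping empty bins). A minimal 1D sketch of the same estimator, with a hypothetical `shell_average` helper and toy separations:

```python
import numpy as np

def shell_average(field, r, bins):
    """Bin the outer product of field values by pair separation,
    mirroring LogNormalField.compute_corrfn (empty bins are dropped)."""
    index = np.digitize(r, bins)
    correlations = np.outer(field, field)
    return np.array([correlations[index == n].mean()
                     for n in range(len(bins))
                     if np.any(index == n)])

# toy 1D example: 4 points on a line, separations |i - j|
x = np.arange(4)
r = np.abs(x[:, None] - x[None, :]).astype(float)
field = np.ones(4)      # constant field -> every shell averages to 1
bins = np.arange(4)     # bin edges 0, 1, 2, 3, as in self.bins
xi = shell_average(field, r, bins)
```

For a constant unit field every non-empty shell average is exactly 1, which makes the binning logic easy to verify before applying it to simulated fields.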
# neural network model
```
# define stax model
from jax.nn.initializers import normal
def InceptBlock2(filters, strides, do_5x5=True, do_3x3=True,
padding="SAME", W_init=None):
"""InceptNet convolutional striding block.
filters: tuple: (f1,f2,f3)
filters1: for conv1x1
filters2: for conv1x1,conv3x3
filters3: for conv1x1,conv5x5"""
filters1, filters2, filters3 = filters
conv1x1 = stax.serial(stax.Conv(filters1, (1,1), strides, padding=padding, W_init=W_init))
filters4 = filters2
conv3x3 = stax.serial(stax.Conv(filters2, (1,1), strides=None, padding=padding, W_init=W_init),
stax.Conv(filters4, (3,3), strides, padding=padding, W_init=W_init))
filters5 = filters3
conv5x5 = stax.serial(stax.Conv(filters3, (1,1), strides=None, padding=padding, W_init=W_init),
stax.Conv(filters5, (5,5), strides, padding=padding, W_init=W_init))
maxpool = stax.serial(stax.MaxPool((3,3), padding=padding),
stax.Conv(filters4, (1,1), strides, padding=padding, W_init=W_init))
if do_3x3:
if do_5x5:
return stax.serial(
stax.FanOut(4), # fan out to the 4 parallel branches below
stax.parallel(conv1x1, conv3x3, conv5x5, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
else:
return stax.serial(
stax.FanOut(3), # fan out to the 3 parallel branches below
stax.parallel(conv1x1, conv3x3, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
else:
return stax.serial(
stax.FanOut(2), # fan out to the 2 parallel branches below
stax.parallel(conv1x1, maxpool),
stax.FanInConcat(),
stax.LeakyRelu)
def Reshape(newshape):
"""Layer function for a reshape layer."""
init_fun = lambda rng, input_shape: (newshape,())
apply_fun = lambda params, inputs, **kwargs: np.reshape(inputs,newshape)
return init_fun, apply_fun
from jax.nn.initializers import glorot_normal, normal, ones, zeros
def LogLayer(a_init=ones, b_init=ones,
c_init=ones, d_init=ones, C=3., scalar=True):
"""custom layer for log-normalizing field inputs"""
_a_init = lambda rng,shape: a_init(rng, shape)
_b_init = lambda rng,shape: b_init(rng, shape)
_c_init = lambda rng,shape: c_init(rng, shape)
_d_init = lambda rng,shape: d_init(rng, shape)
def init_fun(rng, input_shape):
if scalar:
shape = ()
else:
shape = input_shape
k1, rng = jax.random.split(rng)
k2, rng = jax.random.split(rng)
k3, rng = jax.random.split(rng)
k4, rng = jax.random.split(rng)
a,b = _a_init(k1, shape), _b_init(k2, shape)
c,d = _c_init(k3, shape)*C, _d_init(k4, shape)
return input_shape, (a,b,c,d)
def apply_fun(params, inputs, **kwargs):
a,b,c,d = params
return a * np.log(np.abs(b)*inputs + c) + d
return init_fun, apply_fun
def AsinhLayer(a_init=ones, b_init=ones,
c_init=ones, d_init=ones, scalar=True):
"""custom layer for Asinh-normalizing field inputs"""
_a_init = lambda rng,shape: a_init(rng, shape)
_b_init = lambda rng,shape: b_init(rng, shape)
_c_init = lambda rng,shape: c_init(rng, shape)
_d_init = lambda rng,shape: d_init(rng, shape)
def init_fun(rng, input_shape):
if scalar:
shape = ()
else:
shape = input_shape
k1, rng = jax.random.split(rng)
k2, rng = jax.random.split(rng)
k3, rng = jax.random.split(rng)
k4, rng = jax.random.split(rng)
a,b = _a_init(k1, shape), _b_init(k2, shape)
c,d = _c_init(k3, shape), _d_init(k4, shape)
return input_shape, np.stack((a,b,c,d), 0)
def apply_fun(params, inputs, **kwargs):
a,b,c,d = params
return a * np.arcsinh(b*inputs + c) + d
return init_fun, apply_fun
def ScalarLayer(C=None, c_init=ones):
"""Layer construction function for a reshape layer."""
if C is None:
C = 1.0
_c_init = lambda rng,shape: c_init(rng, shape)*C
def init_fun(rng, input_shape):
shape = input_shape
k1, rng = jax.random.split(rng)
constant = _c_init(k1, shape)
return input_shape, (constant)
def apply_fun(params, inputs, **kwargs):
cnst = params
return inputs*cnst
return init_fun, apply_fun
```
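The `AsinhLayer` above applies `a·arcsinh(b·x + c) + d`. The motivation is that `arcsinh` is linear near zero and logarithmic for large `|x|`, so it compresses the heavy tail of a lognormal field while, unlike `log`, remaining defined and odd for negative inputs. A quick numpy illustration of those three properties:

```python
import numpy as np

# near zero, arcsinh(x) ≈ x (linear response)
lin_err = abs(np.arcsinh(1e-3) - 1e-3)

# for large x, arcsinh(x) ≈ log(2x) (logarithmic compression of the tail)
log_err = abs(np.arcsinh(1e3) - np.log(2 * 1e3))

# unlike log, arcsinh accepts negative inputs and is odd
sym = np.arcsinh(-2.0) + np.arcsinh(2.0)
```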
# build model
```
fs = 128
model = stax.serial(
AsinhLayer(scalar=True),
InceptBlock2((fs,fs,fs), strides=(1,1)),
InceptBlock2((fs,fs,fs), strides=(2,2)),
#InceptBlock2((fs,fs,fs), strides=(1,1)),
InceptBlock2((fs,fs,fs), strides=(2,2)),
#InceptBlock2((fs,fs,fs), strides=(1,1)),
InceptBlock2((fs,fs,fs), strides=(2,2), do_5x5=False),
#InceptBlock2((fs,fs,fs), strides=(1,1), do_5x5=False),
InceptBlock2((fs,fs,fs), strides=(3,3), do_5x5=False, do_3x3=False), # add in dense layers here ?
stax.Dense(50),
stax.LeakyRelu,
stax.Dense(50),
stax.LeakyRelu,
stax.Dense(n_summaries),
#stax.Conv(n_summaries, (1,1), strides=(1,1), padding="SAME"),
stax.Flatten,
Reshape((n_summaries,))
)
# # build model
# fs = 150
# model = stax.serial(
# AsinhLayer(scalar=True),
# # stax.Flatten,
# # stax.Dense(400),
# # stax.LeakyRelu,
# # stax.Dense(400),
# # stax.LeakyRelu,
# # Reshape((1,1,20,20)),
# #InceptBlock2((fs,fs,fs), strides=(1,1)),
# InceptBlock2((fs,fs,fs), strides=(2,2)),
# InceptBlock2((fs,fs,fs), strides=(2,2)),
# InceptBlock2((fs,fs,fs), strides=(5,5), do_5x5=False),
# #InceptBlock2((fs,fs,fs), strides=(2,2), do_5x5=False),
# #InceptBlock2((fs,fs,fs), strides=(3,3), do_5x5=False, do_3x3=False), # add in dense layers here ?
# stax.Flatten,
# stax.Dense(fs),
# stax.LeakyRelu,
# stax.Dense(fs),
# stax.LeakyRelu,
# stax.Dense(n_summaries),
# stax.Flatten,
# Reshape((n_summaries,))
# )
rng, initial_model_key = jax.random.split(rng)
rng, fitting_key = jax.random.split(rng)
optimiser = optimizers.adam(step_size=1e-3)
# adam w learn rate decay
batch_size = 1
num_batches = 3000
optimiser = optimizers.adam(
lambda t: np.select([t < batch_size*(num_batches//3),
t < batch_size*(2*num_batches//3),
t > batch_size*(2*num_batches//3)],
[1e-3, 3e-4, 1e-4]))
```
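The `np.select` passed to `optimizers.adam` above implements a piecewise-constant step size that drops at one third and two thirds of training. The same schedule in plain Python (names hypothetical), which makes the switch points explicit:

```python
def lr_schedule(t, total_steps=3000, batch_size=1):
    """Piecewise-constant learning rate: 1e-3 -> 3e-4 -> 1e-4,
    switching at 1/3 and 2/3 of the total number of steps."""
    if t < batch_size * (total_steps // 3):
        return 1e-3
    elif t < batch_size * (2 * total_steps // 3):
        return 3e-4
    else:
        return 1e-4
```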
# load IMNN state
```
# load the optimizer state from the folder and feed to IMNN init function and train a couple times
import cloudpickle as pickle
import os
def unpckl_me(path):
file = open(path, 'rb')
return pickle.load(file)
folder_name = 'simple-LN-incept'
loadstate = unpckl_me(os.path.join(folder_name, 'IMNN_state'))
_state = jax.experimental.optimizers.pack_optimizer_state(loadstate)
rng, key = jax.random.split(rng)
IMNN = imnn.IMNN(
n_s=2000,
n_d=2000,
n_params=n_params,
n_summaries=n_summaries,
input_shape=input_shape,
θ_fid=θ_fid,
model=model,
optimiser=optimiser,
key_or_state=_state,
simulator=lambda rng, θ: imnn_simulator(
rng, θ, simulator_args={**simulator_args, **{"squeeze": False}}),
#host=jax.devices()[0],
#devices=[jax.devices()[0]],
#n_per_device=1000
)
%%time
rng, key = jax.random.split(rng)
IMNN.fit(λ=10., ϵ=0.1, rng=key, print_rate=None, min_iterations=2, best=True)
```
```
ax = IMNN.plot(expected_detF=detF_expected)
ax[0].set_yscale('log')
np.linalg.det(IMNN.F) / detF_expected
```
# save IMNN attributes
```
import cloudpickle as pickle
import os
def save_weights(IMNN, folder_name='./model', weights='final'):
# create output directory
if not os.path.exists(folder_name):
os.mkdir(folder_name)
def pckl_me(obj, path):
with open(path, 'wb') as file_pi:
pickle.dump(obj, file_pi)
file_pi.close()
# save IMNN (optimiser) state:
savestate = jax.experimental.optimizers.unpack_optimizer_state(IMNN.state)
pckl_me(savestate, os.path.join(folder_name, 'IMNN_state'))
# save weights
if weights == 'final':
np.save(os.path.join(folder_name, 'final_w'), IMNN.final_w)
else:
np.save(os.path.join(folder_name, 'best_w'), IMNN.best_w)
# save initial weights
np.save(os.path.join(folder_name, 'initial_w'), IMNN.initial_w)
# save training history
pckl_me(IMNN.history, os.path.join(folder_name, 'history'))
# save important attributes as a dict
imnn_attributes = {
'n_s': IMNN.n_s,
'n_d': IMNN.n_d,
'input_shape': IMNN.input_shape,
'n_params' : IMNN.n_params,
'n_summaries': IMNN.n_summaries,
'θ_fid': IMNN.θ_fid,
'F': IMNN.F,
'invF': IMNN.invF,
'C': IMNN.C,
'invC': IMNN.invC,
'validate': IMNN.validate,
'simulate': IMNN.simulate,
}
pckl_me(imnn_attributes, os.path.join(folder_name, 'IMNN_attributes'))
print('saved weights and attributes to the file ', folder_name)
def load_weights(IMNN, folder_name='./model', weights='final', load_attributes=True):
def unpckl_me(path):
file = open(path, 'rb')
return pickle.load(file)
# load and assign weights
if weights=='final':
weights = np.load(os.path.join(folder_name, 'final_w.npy'), allow_pickle=True)
IMNN.final_w = weights
else:
weights = np.load(os.path.join(folder_name, 'best_w.npy'), allow_pickle=True)
IMNN.best_w = weights
# re-pack and load the optimiser state
loadstate = unpckl_me(os.path.join(folder_name, 'IMNN_state'))
IMNN.state = jax.experimental.optimizers.pack_optimizer_state(loadstate)
# load history
IMNN.history = unpckl_me(os.path.join(folder_name, 'history'))
# load important attributes
if load_attributes:
IMNN.initial_w = np.load(os.path.join(folder_name, 'initial_w.npy'), allow_pickle=True)
attributes = unpckl_me(os.path.join(folder_name, 'IMNN_attributes'))
IMNN.θ_fid = attributes['θ_fid']
IMNN.n_s = attributes['n_s']
IMNN.n_d = attributes['n_d']
IMNN.input_shape = attributes['input_shape']
IMNN.F = attributes['F']
IMNN.invF = attributes['invF']
IMNN.C = attributes['C']
IMNN.invC = attributes['invC']
print('loaded IMNN with these attributes: ', attributes)
save_weights(IMNN, folder_name='simple-LN-incept', weights='best')
!zip -r /content/simple-LN-incept.zip /content/simple-LN-incept
from google.colab import files
files.download("/content/simple-LN-incept.zip")
```
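`save_weights`/`load_weights` above rely on pickling nested structures of numpy arrays. A minimal stdlib round-trip check of that pattern (temporary path and attribute names are illustrative):

```python
import os, pickle, tempfile
import numpy as np

def pckl(obj, path):
    # serialize an object to disk, as pckl_me does above
    with open(path, 'wb') as f:
        pickle.dump(obj, f)

def unpckl(path):
    # load it back, as unpckl_me does above
    with open(path, 'rb') as f:
        return pickle.load(f)

attrs = {'theta_fid': np.array([0.95, 0.55]), 'n_params': 2}
path = os.path.join(tempfile.mkdtemp(), 'IMNN_attributes')
pckl(attrs, path)
loaded = unpckl(path)
```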
# obtain target data from Florent's analysis
```
!git clone https://github.com/florent-leclercq/correlations_vs_field.git
dataid = 1 # for florent's sims as target data
rng, key = jax.random.split(rng)
θ_target = np.array([1.0, 0.5])
_α,_β = θ_target
target = dict(
f_NL=LN.run_simulation(key, _α, _β).flatten(),
α=np.array(1.0),
β=np.array(0.5))
##dat = np.load("./correlations_vs_field/data/Sims20_05_10_80_80_500_80_1_123456.npy")
#target['f_NL'] = dat[dataid]['field'].flatten()
plt.imshow(target["f_NL"].reshape((N, N)))
plt.colorbar()
target['f_NL'].shape
# put data in the proper shape for IMNN
θ_target = np.array([1.0, 0.5])
δ_target = np.expand_dims(np.expand_dims(np.expand_dims(np.pad(target["f_NL"].reshape((N, N)), [2,2]), 0), 0),0)
estimates = IMNN.get_estimate(δ_target)
```
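The reshaping of `δ_target` above pads the field and adds leading singleton axes so it matches the `input_shape = (1, 1, N+2·pad, N+2·pad)` defined earlier. The shape arithmetic, checked on toy values (`N = 20`, `pad = 2` assumed):

```python
import numpy as np

N, pad = 20, 2                            # assumed values from this notebook
field = np.zeros((N, N))
padded = np.pad(field, [pad, pad])        # pad 2 pixels on every side -> (24, 24)
batched = padded[np.newaxis, np.newaxis]  # add batch and channel axes -> (1, 1, 24, 24)
```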
# priors
```
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import tensorflow_probability.substrates.jax as tfpj
tfdj = tfpj.distributions
tfbj = tfpj.bijectors
import tensorflow.keras.backend as K
tfp.__version__
# prior for sim draws in jax
prior = tfpj.distributions.Blockwise(
[tfpj.distributions.Uniform(low=low, high=high)
for low, high in zip([0.4, 0.2], [1.5, 0.8])])
prior.low = np.array([0.4, 0.2])
prior.high = np.array([1.5, 0.8])
# set up prior in non-jax tfp
samp_prior = tfp.distributions.Blockwise(
[tfp.distributions.Uniform(low=low, high=high)
for low, high in zip([0.4, 0.2], [1.5, 0.8])])
GA = imnn.lfi.GaussianApproximation(
parameter_estimates=estimates,
invF=np.expand_dims(np.linalg.inv(IMNN.F), 0),
prior=prior,
gridsize=100)
```
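The `Blockwise` prior above is a product of independent uniforms on [0.4, 1.5] × [0.2, 0.8], so its log density is the constant −Σ log(high − low) inside the box and −∞ outside. A numpy sketch of that density (a standalone check, not the tfp object):

```python
import numpy as np

low = np.array([0.4, 0.2])
high = np.array([1.5, 0.8])

def uniform_box_log_prob(theta):
    """log p(theta) for independent Uniform(low, high) components,
    matching a Blockwise of Uniform distributions."""
    theta = np.asarray(theta)
    inside = np.all((theta >= low) & (theta <= high))
    return -np.sum(np.log(high - low)) if inside else -np.inf
```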
### initialize ABC
```
ABC = imnn.lfi.ApproximateBayesianComputation(
target_data=δ_target,
prior=prior,
simulator=lambda rng, θ : imnn_simulator(rng, θ, simulator_args={**simulator_args, **{'squeeze':False}}),
compressor=IMNN.get_estimate,
gridsize=100,
F=np.expand_dims(IMNN.F, 0))
from tqdm import trange
import numpy as onp
def affine_sample(log_prob, n_params, n_walkers, n_steps, walkers1, walkers2):
# initialize current state
current_state1 = tf.Variable(walkers1)
current_state2 = tf.Variable(walkers2)
# initial target log prob for the walkers (and set any nans to -inf)...
logp_current1 = log_prob(current_state1)
logp_current2 = log_prob(current_state2)
logp_current1 = tf.where(tf.math.is_nan(logp_current1), tf.ones_like(logp_current1)*tf.math.log(0.), logp_current1)
logp_current2 = tf.where(tf.math.is_nan(logp_current2), tf.ones_like(logp_current2)*tf.math.log(0.), logp_current2)
# holder for the whole chain
chain = [tf.concat([current_state1, current_state2], axis=0)]
# MCMC loop
with trange(1, n_steps) as t:
for epoch in t:
# first set of walkers:
# proposals
partners1 = tf.gather(current_state2, onp.random.randint(0, n_walkers, n_walkers))
z1 = 0.5*(tf.random.uniform([n_walkers], minval=0, maxval=1)+1)**2
proposed_state1 = partners1 + tf.transpose(z1*tf.transpose(current_state1 - partners1))
# target log prob at proposed points
logp_proposed1 = log_prob(proposed_state1)
logp_proposed1 = tf.where(tf.math.is_nan(logp_proposed1), tf.ones_like(logp_proposed1)*tf.math.log(0.), logp_proposed1)
# acceptance probability
p_accept1 = tf.math.minimum(tf.ones(n_walkers), z1**(n_params-1)*tf.exp(logp_proposed1 - logp_current1) )
# accept or not
accept1_ = (tf.random.uniform([n_walkers], minval=0, maxval=1) <= p_accept1)
accept1 = tf.cast(accept1_, tf.float32)
# update the state
current_state1 = tf.transpose( tf.transpose(current_state1)*(1-accept1) + tf.transpose(proposed_state1)*accept1)
logp_current1 = tf.where(accept1_, logp_proposed1, logp_current1)
# second set of walkers:
# proposals
partners2 = tf.gather(current_state1, onp.random.randint(0, n_walkers, n_walkers))
z2 = 0.5*(tf.random.uniform([n_walkers], minval=0, maxval=1)+1)**2
proposed_state2 = partners2 + tf.transpose(z2*tf.transpose(current_state2 - partners2))
# target log prob at proposed points
logp_proposed2 = log_prob(proposed_state2)
logp_proposed2 = tf.where(tf.math.is_nan(logp_proposed2), tf.ones_like(logp_proposed2)*tf.math.log(0.), logp_proposed2)
# acceptance probability
p_accept2 = tf.math.minimum(tf.ones(n_walkers), z2**(n_params-1)*tf.exp(logp_proposed2 - logp_current2) )
# accept or not
accept2_ = (tf.random.uniform([n_walkers], minval=0, maxval=1) <= p_accept2)
accept2 = tf.cast(accept2_, tf.float32)
# update the state
current_state2 = tf.transpose( tf.transpose(current_state2)*(1-accept2) + tf.transpose(proposed_state2)*accept2)
logp_current2 = tf.where(accept2_, logp_proposed2, logp_current2)
# append to chain
chain.append(tf.concat([current_state1, current_state2], axis=0))
# stack up the chain
chain = tf.stack(chain, axis=0)
return chain
class ConditionalMaskedAutoregressiveFlow(tf.Module):
def __init__(self, n_dimensions=None, n_conditionals=None, n_mades=1, n_hidden=[50,50], input_order="random",
activation=tf.keras.layers.LeakyReLU(0.01),
all_layers=True,
kernel_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
bias_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None,
bias_constraint=None):
super(ConditionalMaskedAutoregressiveFlow, self).__init__()
# extract init parameters
self.n_dimensions = n_dimensions
self.n_conditionals = n_conditionals
self.n_mades = n_mades
# construct the base (normal) distribution
self.base_distribution = tfd.MultivariateNormalDiag(loc=tf.zeros(self.n_dimensions), scale_diag=tf.ones(self.n_dimensions))
# put the conditional inputs to all layers, or just the first layer?
if all_layers == True:
all_layers = "all_layers"
else:
all_layers = "first_layer"
# construct stack of conditional MADEs
self.MADEs = [tfb.AutoregressiveNetwork(
params=2,
hidden_units=n_hidden,
activation=activation,
event_shape=[n_dimensions],
conditional=True,
conditional_event_shape=[n_conditionals],
conditional_input_layers=all_layers,
input_order=input_order,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
#name="MADE_{}".format(i)) for i in range(n_mades)
)]
# bijector for x | y (chain the conditional MADEs together)
def bijector(self, y):
# start with an empty bijector
MAF = tfb.Identity()
# pass through the MADE layers (passing conditional inputs each time)
for i in range(self.n_mades):
MAF = tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=lambda x, made=self.MADEs[i]: made(x, conditional_input=y))(MAF)  # bind the i-th MADE now to avoid the late-binding closure bug
return MAF
# construct distribution P(x | y)
def __call__(self, y):
return tfd.TransformedDistribution(
self.base_distribution,
bijector=self.bijector(y))
# log probability ln P(x | y)
def log_prob(self, x, y):
return self.__call__(y).log_prob(x)
# sample n samples from P(x | y)
def sample(self, n, y):
# base samples
base_samples = self.base_distribution.sample(n)
# biject the samples
return self.bijector(y).forward(base_samples)
num_models = 2
cmaf_models = [ConditionalMaskedAutoregressiveFlow(n_dimensions=2,
n_conditionals=2, n_hidden=[50,50,50]) for i in range(num_models)]
cmaf_models += [ConditionalMaskedAutoregressiveFlow(n_dimensions=2,
n_conditionals=2, n_hidden=[50,50]) for i in range(2)]
maf_optimizers = [tf.keras.optimizers.Adam(learning_rate=1e-3) for i in range(len(cmaf_models))]
@tf.function
def train_step(x, y):
_ls = []
for m in range(len(cmaf_models)):
with tf.GradientTape() as tape:
_l = K.mean(-cmaf_models[m].log_prob(x, y) - samp_prior.log_prob(y))
_ls.append(_l)
grads = tape.gradient(_l, cmaf_models[m].trainable_variables)
maf_optimizers[m].apply_gradients(zip(grads, cmaf_models[m].trainable_variables))
#train_acc_metric.update_state(y, logits)
return _ls
@tf.function
def val_step(x, y):
_val_l = []
for m in range(len(cmaf_models)):
_val_l.append(K.mean(-cmaf_models[m].log_prob(x, y) - samp_prior.log_prob(y)))
return _val_l
@tf.function
def loss(x, y):
    return K.mean(-cmaf_models[0].log_prob(x, y))
# create dataset for this demo
num_sims = 1000
keys = jax.random.split(rng, num=num_sims)
def get_params_summaries(key, n_samples, ϑ_samp):
keys = np.array(jax.random.split(key, num=n_samples))
sim = lambda rng, θ: imnn_simulator(
rng, θ, simulator_args={
**simulator_args,
**{"squeeze": False}})
d = jax.vmap(sim)(keys, ϑ_samp)
t = IMNN.get_estimate(d)
t1 = t.flatten()[~np.isnan(t.flatten())]
ϑ_samp = ϑ_samp.flatten()[~np.isnan(t.flatten())]
# scale all summaries
t1 = t1.reshape((len(t1)//2, 2))
#_tstd = np.std(t1, axis=0)
#_tmu = np.mean(t1, axis=0)
#t1 = (t1 - _tmu) / _tstd
# return x,y; e.g. t and ϑ (conditional)
# learn p(x | y) = p(t | ϑ)
return t1, ϑ_samp.reshape(len(ϑ_samp)//2, 2) #, _tmu, _tstd
def get_dataset(data, batch_size=20, buffer_size=1000, split=0.75):
x,y = data
idx = int(len(x)*split)
x_train = x[:idx]
y_train = y[:idx]
x_val = x[idx:]
y_val = y[idx:]
# Prepare the training dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=buffer_size).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
return train_dataset, val_dataset
def proposal_distribution(prior, MAF):
"""return geometric mean of proposal distribution"""
n_samples = 1000
batch_size = 100
buffer_size = n_samples
key1,key2 = jax.random.split(rng)
ϑ_samp = prior.sample(sample_shape=(n_samples,), seed=key1)
ts, ϑ_samp = get_params_summaries(key2, n_samples, ϑ_samp)
data = (ts, ϑ_samp)
train_dataset, val_dataset = get_dataset(data, batch_size=batch_size, buffer_size=buffer_size)
import time
from tqdm import tqdm
epochs = 2000
train_losses = []
val_losses = []
for epoch in tqdm(range(epochs)):
#print("\nStart of epoch %d" % (epoch,))
start_time = time.time()
# shuffle training data anew every 100th epoch
if epoch % 100 == 0:
train_dataset = train_dataset.shuffle(buffer_size=buffer_size)
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_values = np.array(train_step(x_batch_train, y_batch_train))
train_losses.append(loss_values)
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_loss = np.array(val_step(x_batch_val, y_batch_val))
val_losses.append(val_loss)
import seaborn as sns
%matplotlib inline
sns.set_theme()
plt.figure(figsize=(8,3.5))
plt.subplot(131)
plt.plot(np.array(train_losses).T[0], label='train')
plt.plot(np.array(val_losses).T[0], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.subplot(132)
plt.plot(np.array(train_losses).T[1], label='train')
plt.plot(np.array(val_losses).T[1], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.subplot(133)
plt.plot(np.array(train_losses).T[3], label='train')
plt.plot(np.array(val_losses).T[3], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.legend()
plt.tight_layout()
plt.show()
# get summaries for observed data from IMNN
# δ_target is the 20x20 simulation
estimates = IMNN.get_estimate(δ_target)#IMNN.get_estimate(np.expand_dims(δ_target, (0, 1, 2)))
# define cmaf model log prob with prior defined above
@tf.function
def my_log_prob(y, x=estimates):
# here all cmafs are trained to draw from p(x|y) <=> p(t|ϑ)
# use the first cmaf's data likelihood p(x|y); the prior acts on the parameters y
_like = cmaf_models[0].log_prob(x,y)
_prior = samp_prior.log_prob(y)
return _like + _prior
# sample using affine
n_steps = 2000
n_walkers = 500
burnin_steps = 1800
n_params = 2
# initialize walkers...
walkers1 = tf.random.normal([n_walkers, 2], 0.5, 0.5)
walkers2 = tf.random.normal([n_walkers, 2], 0.5, 0.5)
# walkers1 = tf.random.uniform([n_walkers, 2], 0.1, 1.25)
# walkers2 = tf.random.uniform([n_walkers, 2], 0.1, 1.25)
chain = affine_sample(my_log_prob, n_params, n_walkers, n_steps, walkers1, walkers2)
skip = 4
post = np.stack([chain.numpy()[burnin_steps::skip,:,0].flatten(), chain.numpy()[burnin_steps::skip,:,1].flatten()], axis=-1)
# make ABC object for the marginal plot
_ABC = imnn.lfi.ApproximateBayesianComputation(
target_data=δ_target,
prior=prior,
simulator=lambda rng, θ: imnn_simulator(rng, θ, simulator_args={**simulator_args, **{'squeeze':False}}),
compressor=IMNN.get_estimate,
gridsize=100,
F=np.expand_dims(IMNN.F, 0))
margs = _ABC.get_marginals(accepted_parameters = [np.array(post)], smoothing=None, gridsize=400)
MAF = imnn.lfi.LikelihoodFreeInference(
prior=prior,
gridsize=100)
MAF.n_targets=2
MAF.put_marginals(
margs[1][0]);
MAF.marginal_plot(
known=θ_target,
label="Delfi estimate",
axis_labels=["A", "B"]);
```
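`affine_sample` above implements the two-group Goodman-Weare stretch move: each walker proposes `partner + z·(walker − partner)` with `z = (u+1)²/2 ∈ [1/2, 2]` and accepts with probability `min(1, z^{d−1} e^{Δlogp})`. A compact numpy version of the same scheme, run on a 1D standard normal target (an illustrative sketch, not the TF implementation above):

```python
import numpy as np

def stretch_move(log_prob, walkers, n_steps, d=1, seed=0):
    """Goodman-Weare affine-invariant sampler with two half-ensembles,
    mirroring the z ~ (u+1)^2/2 proposal and z^(d-1) acceptance above."""
    rng = np.random.default_rng(seed)
    half = len(walkers) // 2
    groups = [walkers[:half].copy(), walkers[half:].copy()]
    chain = []
    for _ in range(n_steps):
        for g in (0, 1):
            cur, other = groups[g], groups[1 - g]
            # each walker stretches towards a random partner from the other group
            partners = other[rng.integers(0, len(other), len(cur))]
            z = 0.5 * (rng.uniform(size=(len(cur), 1)) + 1) ** 2
            prop = partners + z * (cur - partners)
            log_accept = (d - 1) * np.log(z[:, 0]) + log_prob(prop) - log_prob(cur)
            accept = np.log(rng.uniform(size=len(cur))) < log_accept
            cur[accept] = prop[accept]
        chain.append(np.concatenate(groups).copy())
    return np.concatenate(chain[n_steps // 2:])  # drop the first half as burn-in

log_prob = lambda x: -0.5 * np.sum(x**2, axis=-1)  # standard normal target
walkers0 = np.random.default_rng(1).normal(size=(100, 1))
samples = stretch_move(log_prob, walkers0, 1000)
```

With walkers initialised near the target, the post-burn-in samples recover the target's mean and variance to within Monte Carlo error.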
# sample from the MAF posterior and re-train
```
n_samples = 1000
key1,key2 = jax.random.split(rng)
idx = np.arange(len(post))
ϑ_samp = post[45000:][onp.random.choice(idx, size=n_samples)]
# keep only draws with all parameters > 0
idx = np.where(np.all(ϑ_samp > 0, axis=1))[0]
ϑ_samp = ϑ_samp[idx]
n_samples = len(ϑ_samp)
ts, ϑ_samp = get_params_summaries(key2, n_samples, ϑ_samp)
new_data = (ts, ϑ_samp)
# this should shuffle the dataset
new_train_dataset, new_val_dataset = get_dataset(new_data, batch_size=batch_size, buffer_size=len(new_data[0]))
# concatenate datasets
train_dataset = train_dataset.concatenate(new_train_dataset)
#val_dataset = val_dataset.concatenate(new_val_dataset)
epochs = 1000
train_losses = []
val_losses = []
for epoch in tqdm(range(epochs)):
start_time = time.time()
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
loss_values = np.array(train_step(x_batch_train, y_batch_train))
train_losses.append(loss_values)
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_loss = np.array(val_step(x_batch_val, y_batch_val))
val_losses.append(val_loss)
plt.figure(figsize=(8,3.5))
plt.subplot(131)
plt.plot(np.array(train_losses).T[0], label='train')
plt.plot(np.array(val_losses).T[0], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.subplot(132)
plt.plot(np.array(train_losses).T[1], label='train')
plt.plot(np.array(val_losses).T[1], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.subplot(133)
plt.plot(np.array(train_losses).T[3], label='train')
plt.plot(np.array(val_losses).T[3], label='val')
plt.ylabel(r'$p(t\ |\ \vartheta, w)$')
plt.legend()
plt.tight_layout()
plt.show()
# get summaries for observed data from IMNN
# δ_target is the padded target simulation defined above
# do for each model
MAFs = []
for m,cmaf_model in enumerate(cmaf_models):
@tf.function
def my_log_prob(y, x=estimates):
# here all cmafs are trained to draw from p(x|y) <=> p(t|ϑ)
# data likelihood p(x|y) under the m-th cmaf
_like = cmaf_models[m].log_prob(x,y)
_prior = samp_prior.log_prob(y)
return _like + _prior
# sample using affine
n_steps = 2000
n_walkers = 500
burnin_steps = 1800
n_params = 2
# initialize walkers...
walkers1 = tf.random.normal([n_walkers, 2], 0.5, 0.5)
walkers2 = tf.random.normal([n_walkers, 2], 0.5, 0.5)
# walkers1 = tf.random.uniform([n_walkers, 2], 0.1, 1.25)
# walkers2 = tf.random.uniform([n_walkers, 2], 0.1, 1.25)
chain = affine_sample(my_log_prob, n_params, n_walkers, n_steps, walkers1, walkers2)
skip = 4
post = np.stack([chain.numpy()[burnin_steps::skip,:,0].flatten(), chain.numpy()[burnin_steps::skip,:,1].flatten()], axis=-1)
margs = ABC.get_marginals(accepted_parameters = [np.array(post)], smoothing=None, gridsize=400)
_MAF = imnn.lfi.LikelihoodFreeInference(
prior=prior,
gridsize=100)
_MAF.n_targets=2
_MAF.put_marginals(
margs[1][0]);
MAFs.append(_MAF)
_MAF.marginal_plot(
known=θ_target,
label="Delfi estimate",
axis_labels=["A", "B"]);
%%time
rng, key = jax.random.split(rng)
ABC(ϵ=0.1, rng=key, n_samples=10000, min_accepted=500,
smoothing=1., max_iterations=2000);
#np.save('./marginals/abc_distances', ABC.distances.accepted)
# ax = GA.marginal_plot(
# known=np.array([1.0, 0.5]),
# label="Gaussian approximation",
# axis_labels=[r"$\alpha$", r"$\beta$"],
# colours="C1")
# ABC contour plots
ax = ABC.marginal_plot(ax=None, colours='C4', label='ABC with IMNN', known=θ_target)
# ABC scatter plots
ax[0,0].hist(ABC.parameters.accepted[0][:, 0], color='purple', histtype='step', density=True, label='ABC with IMNN')
ax[1,0].scatter(ABC.parameters.accepted[0][:, 0], ABC.parameters.accepted[0][:, 1], s=8, alpha=0.6,
c=np.log(ABC.distances.accepted[0]), cmap='Purples', edgecolors=None, linewidths=0, marker='.')
ax[1,1].hist(ABC.parameters.accepted[0][:, 1], color='purple',
histtype='step', density=True, orientation='horizontal')
for m,_MAF in enumerate(MAFs):
if m == 0:
label = 'Delfi Estimate'
else:
label=None
_MAF.marginal_plot(
ax=ax,
label=label,
colours="C1", linestyle='solid');
plt.savefig('/mnt/home/tmakinen/repositories/field-plots/LN_delfi_vs_abc.png', dpi=800)
plt.legend()
plt.show()
# params for GA contour
row=1
column=0
target=0
levels=[0.68, 0.95]
all_margs = [GA.marginals for GA in [GA_1, GA_new]]
alphas = [0.75, 1.0]
linestyles = [':', 'solid']
GA_labels = ['initial GA', 'final GA']
ga_handles = []
for m,margs in enumerate(all_margs):
    cs = ax[row, column].contour(
ranges[column],
ranges[row],
margs[row][column][target].T,
colors='#00c133',
linestyles=linestyles[m],
levels=GA_new.get_levels(
margs[row][column][target],
[ranges[column], ranges[row]],
levels=levels), alpha=alphas[m],
)
# proxy for legend
    _ga, = ax[row, column].plot(-1, 2, color='#00c133',
linestyle=linestyles[m],
alpha=alphas[m],label=GA_labels[m])
ga_handles.append(_ga)
ranges = ABC.ranges
# save ABC marginals
np.save('./marginals/abc_ranges.npy', ranges)
row=1
column=0
target=0
levels=[0.68, 0.95, 0.99]
margs = ABC.marginals
abc_2d_marg = ABC.marginals[row][column][target]
abc_levels = ABC.get_levels(margs[row][column][target],
[ranges[column], ranges[row]], levels=levels)
maf_2d_margs = []
maf_levels = []
for i,m in enumerate(MAFs):
maf_2d_margs.append(m.marginals[row][column][target])
maf_levels.append(m.get_levels(margs[row][column][target],
[ranges[column], ranges[row]], levels=levels))
abc_levels
np.save('./marginals/abc_2d_marginal_field_%d.npy'%(dataid), abc_2d_marg)
np.save('./marginals/abc_2d_marginal_field_levels_%d.npy'%(dataid), abc_levels)
for i,m in enumerate(maf_2d_margs):
np.save('./marginals/maf_%d_2d_marginal_field_%d.npy'%(i+1, dataid), maf_2d_margs[i])
np.save('./marginals/maf_%d_2d_marginal_field_levels_%d.npy'%(i + 1, dataid), maf_levels[i])
np.save('./marginals/abc_marginal_field_%d.npy'%(dataid), ABC.parameters.accepted)
class ConditionalMaskedAutoregressiveFlow(tf.Module):
def __init__(self, n_dimensions=None, n_conditionals=None, n_mades=1, n_hidden=[50,50], input_order="random",
activation=tf.keras.layers.LeakyReLU(0.01),
all_layers=True,
kernel_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
bias_initializer=tf.keras.initializers.RandomNormal(mean=0.0, stddev=1e-5, seed=None),
kernel_regularizer=None, bias_regularizer=None, kernel_constraint=None,
bias_constraint=None):
# extract init parameters
self.n_dimensions = n_dimensions
self.n_conditionals = n_conditionals
self.n_mades = n_mades
# construct the base (normal) distribution
self.base_distribution = tfd.MultivariateNormalDiag(loc=tf.zeros(self.n_dimensions), scale_diag=tf.ones(self.n_dimensions))
# put the conditional inputs to all layers, or just the first layer?
if all_layers == True:
all_layers = "all_layers"
else:
all_layers = "first_layer"
# construct stack of conditional MADEs
self.MADEs = [tfb.AutoregressiveNetwork(
params=2,
hidden_units=n_hidden,
activation=activation,
event_shape=[n_dimensions],
conditional=True,
conditional_event_shape=[n_conditionals],
conditional_input_layers=all_layers,
input_order=input_order,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
bias_constraint=bias_constraint,
name="MADE_{}".format(i)) for i in range(n_mades)]
# bijector for x | y (chain the conditional MADEs together)
def bijector(self, y):
# start with an empty bijector
MAF = tfb.Identity()
        # pass through the MADE layers (passing conditional inputs each time);
        # bind each MADE through a default argument -- a bare `lambda x: self.MADEs[i](...)`
        # would capture `i` late, so every layer would end up calling the last MADE
        for i in range(self.n_mades):
            MAF = tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=lambda x, made=self.MADEs[i]: made(x, conditional_input=y))(MAF)
return MAF
# construct distribution P(x | y)
def __call__(self, y):
return tfd.TransformedDistribution(
self.base_distribution,
bijector=self.bijector(y))
# log probability ln P(x | y)
def log_prob(self, x, y):
return self.__call__(y).log_prob(x)
# sample n samples from P(x | y)
def sample(self, n, y):
# base samples
base_samples = self.base_distribution.sample(n)
# biject the samples
return self.bijector(y).forward(base_samples)
```
```
# Load dependencies
import pandas as pd
import sys
sys.path.insert(0,'../../statistics_helper/')
from openpyxl import load_workbook
from excel_utils import *
```
# Estimating the biomass of Cnidarians
To estimate the total biomass of cnidarians, we combine estimates for the two main groups which we assume dominate the biomass of cnidarians: planktonic cnidarians (i.e., jellyfish) and corals. We describe the procedure for estimating the biomass of each group.
## Planktonic cnidarians
Our estimate of the total biomass of planktonic cnidarians is based on [Lucas et al.](http://dx.doi.org/10.1111/geb.12169), which assembled a large dataset of abundance measurements of different types of gelatinous zooplankton. Globally, they estimate ≈0.04 Gt C of gelatinous zooplankton, of which 92% is contributed by cnidarians. Therefore, we estimate the total biomass of planktonic cnidarians at ≈0.04 Gt C.
```
planktonic_cnidarian_biomass = 0.04e15
```
## Corals
The procedure we take to estimate the total biomass of corals in coral reefs is to first calculate the total surface area of coral tissue globally, and then convert this value to biomass by the carbon mass density of coral tissue per unit surface area. We estimate the total surface area of corals worldwide using two approaches.
The first approach estimates the total surface area of corals using the total area of coral reefs (in $m^2$) from [Harris et al.](http://dx.doi.org/10.1016/j.margeo.2014.01.011).
```
# Total surface area of coral reefs
coral_reef_area = 0.25e12
```
We estimate that 20% of the reef area is covered by corals based on [De'ath et al.](http://dx.doi.org/10.1073/pnas.1208909109).
```
# Coverage of coral reef area by corals
coverage = 0.2
```
This gives us the projected area of corals. Corals have a complex 3D structure that increases their surface area. To take this effect into account, we use a recent study that estimated the ratio between coral tissue surface area and projected area at ≈5 ([Holmes & Glen](http://dx.doi.org/10.1016/j.jembe.2008.07.045)).
```
# The conversion factor from projected surface area to actual surface area
sa_3d_2a = 5
```
Multiplying these factors, we get an estimate for the total surface area of corals:
```
# Calculate the total surface area of corals
method1_sa = coral_reef_area*coverage*sa_3d_2a
print('Our estimate of the global surface area of corals based on our first method is ≈%.1f×10^11 m^2' % (method1_sa/1e11))
```
The second approach uses an estimate of the global calcification rate in coral reefs based on [Vecsei](http://dx.doi.org/10.1016/j.gloplacha.2003.12.002).
```
# Global annual calcification rate of corals [g CaCO3 yr^-1]
annual_cal = 0.75e15
```
We divide this rate by the surface area specific calcification rate of corals based on values from [McNeil](http://dx.doi.org/10.1029/2004GL021541) and [Kuffner et al.](http://dx.doi.org/10.1007/s00338-013-1047-8). Our best estimate for the surface area specific calcification rate is the geometric mean of values from the two sources above.
```
from scipy.stats import gmean
# Surface area specific calcification rate from McNeil, taken from figure 1 [g CaCO3 m^-2 yr^-1]
mcneil_cal_rate = 1.5e4
# Surface area specific calcification rate from Kuffner et al., taken from first
# Sentence of Discussion [g CaCO3 m^-2 yr^-1]
kuffner_cal_rate = 0.99e4
# Our best estimate for the surface area specific calcification rate is the geometric mean of the two values
best_cal_rate = gmean([mcneil_cal_rate,kuffner_cal_rate])
# Calculate the surface area of corals
method2_sa = annual_cal/best_cal_rate
print('Our estimate of the global surface area of corals based on our second method is ≈%.1f×10^11 m^2' % (method2_sa/1e11))
```
As our best estimate for the global surface area of corals we use the geometric mean of the estimates from the two methods:
```
best_sa = gmean([method1_sa,method2_sa])
print('Our best estimate of the global surface area of corals is ≈%.1f×10^11 m^2' % (best_sa/1e11))
```
To convert the total surface area to biomass, we use estimates for the tissue biomass per unit surface area of corals from [Odum & Odum](http://dx.doi.org/10.2307/1943285):
```
# Tissue biomass based on Odum & Odum [g C m^-2]
carbon_per_sa = 400
# Multiply our best estimate for the surface area of corals by the tissue biomass
coral_biomass = best_sa*carbon_per_sa
print('Our best estimate for the biomass of corals is ≈%.2f Gt C' %(coral_biomass/1e15))
```
An important caveat of this analysis is that it doesn't include the contribution of corals outside coral reefs, such as those located on seamounts. Nevertheless, we account for the biomass of corals outside formal coral reefs when calculating the total benthic biomass.
Our best estimate of the total biomass of cnidarians is the sum of the biomass of planktonic cnidarians and corals:
```
best_estimate = planktonic_cnidarian_biomass + coral_biomass
print('Our best estimate for the biomass of cnidarians is ≈%.1f Gt C' %(best_estimate/1e15))
```
# Estimating the total number of cnidarians
To estimate the total number of cnidarians, we divide the total biomass of jellyfish by the characteristic carbon content of a single jellyfish. We do not consider corals, as they are colonial organisms and it is therefore hard to robustly define an individual. To estimate the characteristic carbon content of a single jellyfish, we rely on the data from Lucas et al. We calculate the mean and median carbon content of all the species considered in the study, and use the geometric mean of the median and mean carbon contents as our best estimate of the characteristic carbon content of a single jellyfish.
```
# Load data from Lucas et al.
data = pd.read_excel('carbon_content_data.xls', 'Biometric equations', skiprows=1)
# Calculate the median and mean carbon contents
median_cc = (data['mg C ind-1'].median()*1e-3)
mean_cc = (data['mg C ind-1'].mean()*1e-3)
# Calculate the geometric mean of the median and mean carbon contents
best_cc = gmean([median_cc,mean_cc])
# Calculate the total number of jellyfish
tot_cnidaria_num = planktonic_cnidarian_biomass/best_cc
print('Our best estimate for the total number of cnidarians is ≈%.1e.' %tot_cnidaria_num)
# Feed results to the chordate biomass data
old_results = pd.read_excel('../animal_biomass_estimate.xlsx',index_col=0)
result = old_results.copy()
result.loc['Cnidarians',(['Biomass [Gt C]','Uncertainty'])] = (best_estimate/1e15,None)
result.to_excel('../animal_biomass_estimate.xlsx')
# Feed results to Table 1 & Fig. 1
update_results(sheet='Table1 & Fig1',
row=('Animals','Cnidarians'),
col=['Biomass [Gt C]', 'Uncertainty'],
values=[best_estimate/1e15,None],
path='../../results.xlsx')
# Feed results to Table S1
update_results(sheet='Table S1',
row=('Animals','Cnidarians'),
col='Number of individuals',
values= tot_cnidaria_num,
path='../../results.xlsx')
# We need to use the results on the biomass of gelatinous zooplankton
# for our estimate of the total biomass of marine arthropods, so we
# feed these results to the data used in the estimate of the total
# biomass of marine arthropods
path = '../arthropods/marine_arthropods/marine_arthropods_data.xlsx'
marine_arthropods_data = pd.read_excel(path,'Other macrozooplankton')
marine_arthropods_data.loc[0] = pd.Series({
'Parameter': 'Biomass of gelatinous zooplankton',
'Value': planktonic_cnidarian_biomass,
'Units': 'g C',
'Uncertainty': None
})
writer = pd.ExcelWriter(path, engine = 'openpyxl')
book = load_workbook(path)
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
marine_arthropods_data.to_excel(writer, sheet_name = 'Other macrozooplankton',index=False)
writer.save()
```
# Logistic Regression
Sources: DataCamp, [StatQuest](https://www.youtube.com/watch?v=yIYKR4sgzI8)
Despite its name, logistic regression is used in **classification** problems, not regression problems.
We will see how logreg works with binary classification problems, that is, when we have two possible labels for the target variable.
Given a feature, logreg outputs a probability p for the target variable.
If p is below a certain threshold, we consider the label to be 0, and above that threshold we consider it to be 1. Defining that threshold is our task.
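As a hedged sketch of that thresholding step (the probabilities below are made up, not output from any model in this notebook):

```
import numpy as np

# Hypothetical predicted probabilities p for four observations
probs = np.array([0.12, 0.48, 0.51, 0.97])

# Apply a threshold of 0.5 (an assumption -- thresholds are tuned per problem)
threshold = 0.5
labels = (probs >= threshold).astype(int)
print(labels)  # [0 0 1 1]
```

Lowering the threshold trades false negatives for false positives, which is exactly the trade-off the ROC curve later in this notebook visualizes.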
#### Differences with linear regression (StatQuest):
1) With linear regression you can predict continuous variables (like weight); logistic regression returns the probability of a given label (like "obese").
2) Also, instead of fitting a line to the data, like linear regression, logistic regression fits an "S"-shaped logistic function.
3) We can have simple models (e.g.: Obesity is predicted by Weight) or more complex models ( e.g.: Obesity is predicted by Weight, Genotype and Age).
We can calculate the influence that each predictor variable has on the target variable, but unlike linear regression, we can't easily compare a complex model to a simple model.
Instead, we just test to see if a variable's effect on the prediction is significantly different from 0, with Wald's Test.
One big difference between linreg and logreg is how the line is fit to the data. With linreg, we fit the line using least squares: we find the line that minimizes the sum of the squares of the residuals. We also use residuals to calculate R squared and to compare simple models to complicated models.
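To make the least-squares idea concrete, here is a minimal sketch with invented data (the numbers are purely illustrative):

```
import numpy as np

# Invented example data
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit y = a*x + b by least squares
a, b = np.polyfit(x, y, 1)
residuals = y - (a * x + b)

# R squared = 1 - SS_res / SS_tot
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(a, b, r_squared)
```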
Logreg doesn't have the same concept of a residual, so it can't use least squares and it can't calculate R squared. Instead, it uses something called "maximum likelihood".
You pick a probability, scaled by weight, of observing an obese mouse, and you use that to calculate the likelihood of observing all of your datapoints. Lastly, you multiply all of the likelihoods together. That's the likelihood of the data given this line. Then you shift the line and calculate a new likelihood of the data, then shift the line and calculate it again and again. Finally, the curve with the maximum value for the likelihood is selected.
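That procedure can be sketched numerically. The data and the two candidate curves below are invented for illustration; the point is only that the likelihood lets us compare curves:

```
import numpy as np

# Invented data: feature values and binary labels (say, obese / not obese)
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1, 1])

def log_likelihood(w, b):
    # Likelihood of the labels under the curve sigmoid(w*x + b);
    # the product of per-point likelihoods becomes a sum of logs
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# "Shift the line and recompute": the steeper curve fits this data better
print(log_likelihood(0.5, 0.0), log_likelihood(3.0, 0.0))
```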
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
data = datasets.load_breast_cancer()
type(data)
print(data.keys())
print(data['DESCR'])
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=33)
logreg = LogisticRegression(solver='liblinear')
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(y_pred)
# Now we plot our ROC curve
y_pred_prob = logreg.predict_proba(X_test)[:,1]
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
# And we plot it
plt.plot([0, 1], [0, 1], 'k--') # this creates a 45-degree dashed line
plt.plot(fpr, tpr, label = 'Logistic Regression')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Logistic Regression ROC Curve')
plt.show()
```
We used the predicted probabilities of the model assigning a value of 1 to the observation in question. This is because to compute the ROC we do not merely want the predictions on the test set, but we want the probability that our logreg model outputs before using a threshold to predict the label.
To do this, we call the model's predict_proba method and pass it the test data. predict_proba returns an array with two columns: each column contains the probabilities for the respective target values (0 or 1, negative or positive). We choose the second column, the one with index 1, that is, the probabilities of the predicted labels being '1' (positive).
I was curious about this.
```
see_two_cols_pred_proba = logreg.predict_proba(X_test)
print(see_two_cols_pred_proba)
sum_cols = see_two_cols_pred_proba[:,0] + see_two_cols_pred_proba[:,1]
print(sum_cols)
```
As expected, the sum of the probabilities for '0' and '1' equals 1.
We now compute the area under the curve:
```
auc = roc_auc_score(y_test, y_pred_prob)
print(auc)
```
## AUC using cross-validation
```
cv_scores = cross_val_score(logreg, X, y, cv=5, scoring='roc_auc')
print(cv_scores)
```
Say you have a binary classifier that in fact is just randomly making guesses. It would be correct approximately 50% of the time, and the resulting ROC curve would be a diagonal line in which the True Positive Rate and False Positive Rate are always equal. The area under this ROC curve would be 0.5. This is one way in which the AUC is an informative metric to evaluate a model. If the AUC is significantly greater than 0.5, the model is better than random guessing.
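We can sanity-check that claim with plain NumPy, using the rank interpretation of the AUC (the probability that a randomly chosen positive gets a higher score than a randomly chosen negative). This is a sketch, not sklearn's implementation, and the sample size is arbitrary:

```
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)   # random labels
y_score = rng.random(size=2000)          # a classifier that guesses at random

# AUC as P(score of a positive > score of a negative), over all pairs
pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(auc)  # close to 0.5, as expected for random guessing
```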
<h1> Creating a custom Word2Vec embedding on your data </h1>
This notebook illustrates:
<ol>
<li> Creating a training dataset
<li> Running word2vec
<li> Examining the created embedding
<li> Export the embedding into a file you can use in other models
<li> Training the text classification model of [txtcls2.ipynb](txtcls2.ipynb) with this custom embedding.
</ol>
```
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
```
# Creating a training dataset
The training dataset simply consists of words separated by spaces, extracted from your documents. The words appear in the order that they occur in the documents, and words from successive documents are appended together. In other words, there is no "document separator".
<p>
The only preprocessing that I do is to replace anything that is not a letter or hyphen by a space.
<p>
Recall that word2vec is unsupervised. There is no label.
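As a rough plain-Python sketch of that preprocessing (mirroring the REGEXP_REPLACE pattern used in the BigQuery query below, which keeps letters, spaces, dollar signs, and hyphens):

```
import re

def clean(text):
    # Anything that is not a letter, space, dollar sign, or hyphen becomes
    # a space; then lowercase, matching the query's REGEXP_REPLACE + LOWER
    return re.sub(r'[^a-zA-Z $-]', ' ', text).lower()

print(clean('Show HN: My $5 e-mail tool!'))
```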
```
import google.datalab.bigquery as bq
query="""
SELECT
CONCAT( LOWER(REGEXP_REPLACE(title, '[^a-zA-Z $-]', ' ')),
" ",
LOWER(REGEXP_REPLACE(text, '[^a-zA-Z $-]', ' '))) AS text
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 100
AND LENGTH(text) > 100
"""
df = bq.Query(query).execute().result().to_dataframe()
df[:5]
with open('word2vec/words.txt', 'w') as ofp:
for txt in df['text']:
ofp.write(txt + " ")
```
This is what the resulting file looks like:
```
!cut -c-1000 word2vec/words.txt
```
## Running word2vec
We can run the existing tutorial code as-is.
```
%bash
cd word2vec
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
g++ -std=c++11 \
-I/usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public \
-shared word2vec_ops.cc word2vec_kernels.cc \
-o word2vec_ops.so -fPIC -I $TF_INC -O2 -D_GLIBCXX_USE_CXX11_ABI=0 -L$TF_LIB -ltensorflow_framework
```
The actual evaluation dataset doesn't matter. Let's just make sure that some words from the input also appear in the eval set. The analogy dataset is of the form
<pre>
Athens Greece Cairo Egypt
Baghdad Iraq Beijing China
</pre>
i.e. four words per line where the model is supposed to predict the fourth given the first three. But we'll just make up a junk file.
```
%writefile word2vec/junk.txt
: analogy-questions-ignored
the user plays several levels
of the game puzzle
vote down the negative
%bash
cd word2vec
rm -rf trained
python word2vec.py \
--train_data=./words.txt --eval_data=./junk.txt --save_path=./trained \
--min_count=1 --embedding_size=10 --window_size=2
```
## Examine the created embedding
Let's load up the embedding file in TensorBoard. Start up TensorBoard, switch to the "Projector" tab and then click on the button to "Load data". Load the vocab.txt that is in the output directory of the model.
```
from google.datalab.ml import TensorBoard
TensorBoard().start('word2vec/trained')
```
Here, for example, is the word "founders" in context -- it's near doing, creative, difficult, and fight, which sounds about right ... The numbers next to the words reflect the count -- we should try to get a large enough vocabulary that we can use --min_count=10 when training word2vec, but that would also take too long for a classroom situation. <img src="embeds.png" />
```
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
```
## Export the embedding vectors into a text file
Let's export the embedding into a text file, so that we can use it the way we used the Glove embeddings in txtcls2.ipynb.
Notice that we have written out our vocabulary and vectors into two files. We just have to merge them now.
```
!wc word2vec/trained/*.txt
!head -3 word2vec/trained/*.txt
import pandas as pd
vocab = pd.read_csv("word2vec/trained/vocab.txt", sep="\s+", header=None, names=('word', 'count'))
vectors = pd.read_csv("word2vec/trained/vectors.txt", sep="\s+", header=None)
vectors = pd.concat([vocab, vectors], axis=1)
del vectors['count']
vectors.to_csv("word2vec/trained/embedding.txt.gz", sep=" ", header=False, index=False, index_label=False, compression='gzip')
!zcat word2vec/trained/embedding.txt.gz | head -3
```
## Training model with custom embedding
Now, you can use this embedding file instead of the Glove embedding used in [txtcls2.ipynb](txtcls2.ipynb)
```
%bash
gsutil cp word2vec/trained/embedding.txt.gz gs://${BUCKET}/txtcls2/custom_embedding.txt.gz
%bash
OUTDIR=gs://${BUCKET}/txtcls2/trained_model
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gsutil cp txtcls1/trainer/*.py $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/txtcls1/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=1.4 \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--glove_embedding=gs://${BUCKET}/txtcls2/custom_embedding.txt.gz \
--train_steps=36000
```
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Data Science Fundamentals in Python
## Softmax function
In mathematics, the softmax function, or normalized exponential function, is a generalization of the logistic function that "squashes" a K-dimensional vector $\mathbf{z}$ of arbitrary real values into a K-dimensional vector $\sigma(\mathbf{z})$ of real values, where each entry is in the range (0, 1] and all the entries add up to 1.
https://en.wikipedia.org/wiki/Softmax_function
```
import numpy as np
z = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]
softmax = lambda x : np.exp(x)/np.sum(np.exp(x))
softmax(z)
```
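One caveat: the one-liner above overflows for large inputs (np.exp(1000) is inf). A common numerically stable variant, sketched here, subtracts the maximum first; this leaves the result unchanged because softmax is invariant to adding a constant to every entry:

```
import numpy as np

def stable_softmax(x):
    # Subtracting max(x) prevents overflow in exp() without changing the
    # output, since exp(x - c) / sum(exp(x - c)) == exp(x) / sum(exp(x))
    shifted = np.array(x) - np.max(x)
    e = np.exp(shifted)
    return e / e.sum()

print(stable_softmax([1000.0, 1001.0, 1002.0]))  # the naive version returns nan here
```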
## Monte Carlo method
Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: **optimization**, **numerical integration**, and **generating draws from a probability distribution**.
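As a minimal example of the numerical-integration use case (the integrand and sample count here are arbitrary choices), we can estimate ∫₀¹ x² dx = 1/3 by averaging x² over uniform random samples:

```
import numpy as np

rng = np.random.default_rng(42)
samples = rng.random(200_000)      # uniform samples on [0, 1)

# E[x^2] under the uniform distribution equals the integral of x^2 over [0, 1]
estimate = np.mean(samples ** 2)
print(estimate)  # close to 1/3
```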
```
# from https://github.com/dandrewmyers/numerical/blob/master/mc_pi.py
# This is a numerical estimate of pi based on the Monte Carlo method, using a unit circle inscribed in a square.
# Outputs the pi estimate in the terminal, plus a plot, based on the number of random points entered by the user.
# This code is based on a previous version at https://gist.github.com/louismullie/3769218.
import numpy as np
import matplotlib.pyplot as plt
import time as time
# input total number of random points
total_random_points = int(input("\nNumber of random points for Monte Carlo estimate of Pi?\n>"))
# start time of calculation
start_time = time.time()
# number of random points inside unit cicrle and total random points
inside_circle = 0
# create empty x and y arrays for eventual scatter plot of generated random points
x_plot_array = np.empty(shape=(1,total_random_points))
y_plot_array = np.empty(shape=(1,total_random_points))
# generate random points and count points inside unit circle
# top right quadrant of unit circle only
for i in range(0, total_random_points):
# print(f'\nIteration: {i + 1}')
# generate random x, y in range [0, 1]
x = np.random.rand()
x_plot_array = np.append(x_plot_array, [x])
#print(f'{x_plot_array}')
# print(f'x value: {x}')
y = np.random.rand()
y_plot_array = np.append(y_plot_array, [y])
#print(f'{y_plot_array}')
# print(f'y value: {y}')
# calc x^2 and y^2 values
x_squared = x**2
y_squared = y**2
# count if inside unit circle, top right quadrant only
if np.sqrt(x_squared + y_squared) < 1.0:
inside_circle += 1
# print(f'Points inside circle {inside_circle}')
# print(f'Number of random points {i+1}')
# calc approximate pi value
pi_approx = inside_circle / (i+1) * 4
# print(f'Approximate value for pi: {pi_approx}')
# final numeric output for pi estimate
print ("\n--------------")
print (f"\nApproximate value for pi: {pi_approx}")
print (f"Difference to exact value of pi: {pi_approx-np.pi}")
print (f"Percent Error: (approx-exact)/exact*100: {(pi_approx-np.pi)/np.pi*100}%")
print (f"Execution Time: {time.time() - start_time} seconds\n")
# plot output of random points and circle, top right quadrant only
random_points_plot = plt.scatter(x_plot_array, y_plot_array, color='blue', s=.1)
circle_plot = plt.Circle( ( 0, 0 ), 1, color='red', linewidth=2, fill=False)
ax = plt.gca()
ax.cla()
ax.add_artist(random_points_plot)
ax.add_artist(circle_plot)
plt.show()
```
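The loop above is easy to follow but slow for large point counts. For comparison, a vectorized NumPy sketch of the same estimate (sample count chosen arbitrarily) computes all points at once:

```
import numpy as np

n = 1_000_000
rng = np.random.default_rng(1)
x = rng.random(n)
y = rng.random(n)

# Fraction of points inside the top-right quarter of the unit circle, times 4
pi_approx = 4.0 * np.mean(x**2 + y**2 < 1.0)
print(pi_approx)  # close to 3.14159
```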
# Quick tour with QM9 [](https://colab.research.google.com/github/Teoroo-CMC/PiNN/blob/TF2/docs/notebooks/Quick_tour.ipynb)
This notebook showcases a simple example of training a neural network potential on the QM9 dataset with PiNN.
```
# Install PiNN & download QM9 dataset
!pip install git+https://github.com/Teoroo-CMC/PiNN@TF2
!mkdir -p /tmp/dsgdb9nsd && curl -sSL https://ndownloader.figshare.com/files/3195389 | tar xj -C /tmp/dsgdb9nsd
import os, warnings
import tensorflow as tf
from glob import glob
from ase.collections import g2
from pinn.io import load_qm9, sparse_batch
from pinn import get_model, get_calc
# CPU is used for documentation generation, feel free to use your GPU!
os.environ['CUDA_VISIBLE_DEVICES'] = ''
# We heavily use indexed slices to do sparse summations,
# which causes tensorflow to complain,
# we believe it's safe to ignore this warning.
index_warning = 'Converting sparse IndexedSlices'
warnings.filterwarnings('ignore', index_warning)
```
## Getting the dataset
PiNN adapts TensorFlow's dataset API to handle different datasets.
For this and the following notebooks the QM9 dataset (https://doi.org/10.6084/m9.figshare.978904) is used.
To follow the notebooks, download the dataset and change the directory accordingly.
The dataset will be automatically split into subsets according to the split_ratio.
Note that to use the dataset with the estimator, the datasets should be a function, instead of a dataset object.
```
filelist = glob('/tmp/dsgdb9nsd/*.xyz')
dataset = lambda: load_qm9(filelist, splits={'train':8, 'test':2})
train = lambda: dataset()['train'].repeat().shuffle(1000).apply(sparse_batch(100))
test = lambda: dataset()['test'].repeat().apply(sparse_batch(100))
```
## Defining the model
In PiNN, models are defined at two levels: models and networks.
- A model (model_fn) defines the target, loss and training detail.
- A network defines the structure of the neural network.
In this example, we will use the potential model, and the PiNet network.
The configuration of a model is stored in a nested dictionary as shown below.
Available options of the network and model can be found in the documentation.
```
!rm -rf /tmp/PiNet_QM9
params = {'model_dir': '/tmp/PiNet_QM9',
'network': {
'name': 'PiNet',
'params': {
'depth': 4,
'rc':4.0,
'atom_types':[1,6,7,8,9]
},
},
'model': {
'name': 'potential_model',
'params': {
'learning_rate': 1e-3
}
}
}
model = get_model(params)
```
## Configuring the training process
The defined model is indeed a [tf.Estimator](https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator) object, thus, the training can be easily controlled
```
train_spec = tf.estimator.TrainSpec(input_fn=train, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=test, steps=100)
```
## Train and evaluate
```
tf.estimator.train_and_evaluate(model, train_spec, eval_spec)
```
## Using the model
The trained model can be used as an ASE calculator.
```
from ase.collections import g2
from pinn import get_calc
params = {'model_dir': '/tmp/PiNet_QM9',
'network': {
'name': 'PiNet',
'params': {
'depth': 4,
'rc':4.0,
'atom_types':[1,6,7,8,9]
},
},
'model': {
'name': 'potential_model',
'params': {
'learning_rate': 1e-3
}
}
}
calc = get_calc(params)
calc.properties = ['energy']
atoms = g2['C2H4']
atoms.set_calculator(calc)
atoms.get_forces(), atoms.get_potential_energy()
```
## Conclusion
You have trained your first PiNN model, though the accuracy is not so satisfying
(RMSE=21 Hartree!). Also, the training speed is slow as it's limited by the IO and
pre-processing of data.
We will show in following notebooks that:
- Proper scaling of the energy will improve the accuracy of the model.
- The training speed can be enhanced by caching and pre-processing the data.
# Regression - Experimenting with additional models
In the previous notebook, we used simple regression models to look at the relationship between features of a bike rentals dataset. In this notebook, we'll experiment with more complex models to improve our regression performance.
Let's start by loading the bicycle sharing data as a **Pandas** DataFrame and viewing the first few rows. We'll also split our data into training and test datasets.
```
# Import modules we'll need for this notebook
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# load the training dataset
bike_data = pd.read_csv('data/daily-bike-share.csv')
bike_data['day'] = pd.DatetimeIndex(bike_data['dteday']).day
numeric_features = ['temp', 'atemp', 'hum', 'windspeed']
categorical_features = ['season','mnth','holiday','weekday','workingday','weathersit', 'day']
bike_data[numeric_features + ['rentals']].describe()
print(bike_data.head())
# Separate features and labels
# After separating the dataset, we now have numpy arrays named **X** containing the features, and **y** containing the labels.
X, y = bike_data[['season','mnth', 'holiday','weekday','workingday','weathersit','temp', 'atemp', 'hum', 'windspeed']].values, bike_data['rentals'].values
# Split data 70%-30% into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
print ('Training Set: %d rows\nTest Set: %d rows' % (X_train.shape[0], X_test.shape[0]))
```
Now we have the following four datasets:
- **X_train**: The feature values we'll use to train the model
- **y_train**: The corresponding labels we'll use to train the model
- **X_test**: The feature values we'll use to validate the model
- **y_test**: The corresponding labels we'll use to validate the model
Now we're ready to train a model by fitting a suitable regression algorithm to the training data.
## Experiment with Algorithms
The linear regression algorithm we used last time to train the model has some predictive capability, but there are many kinds of regression algorithm we could try, including:
- **Linear algorithms**: Not just the Linear Regression algorithm we used above (which is technically an *Ordinary Least Squares* algorithm), but other variants such as *Lasso* and *Ridge*.
- **Tree-based algorithms**: Algorithms that build a decision tree to reach a prediction.
- **Ensemble algorithms**: Algorithms that combine the outputs of multiple base algorithms to improve generalizability.
> **Note**: For a full list of Scikit-Learn estimators that encapsulate algorithms for supervised machine learning, see the [Scikit-Learn documentation](https://scikit-learn.org/stable/supervised_learning.html). There are many algorithms to choose from, but for most real-world scenarios, the [Scikit-Learn estimator cheat sheet](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) can help you find a suitable starting point.
### Try Another Linear Algorithm
Let's try training our regression model by using a **Lasso** algorithm. We can do this by just changing the estimator in the training code.
```
from sklearn.linear_model import Lasso
# Fit a lasso model on the training set
model = Lasso().fit(X_train, y_train)
print (model, "\n")
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
```
### Try a Decision Tree Algorithm
As an alternative to a linear model, there's a category of machine learning algorithms that takes a tree-based approach: the features in the dataset are examined in a series of evaluations, each of which results in a *branch* in a *decision tree* based on the feature value. At the end of each series of branches are leaf nodes with the predicted label value.
It's easiest to see how this works with an example. Let's train a Decision Tree regression model using the bike rental data. After training the model, the code below will print the model definition and a text representation of the tree it uses to predict label values.
```
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import export_text
# Train the model
model = DecisionTreeRegressor().fit(X_train, y_train)
print (model, "\n")
# Visualize the model tree
tree = export_text(model)
print(tree)
```
So now we have a tree-based model; but is it any good? Let's evaluate it with the test data.
```
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
```
The tree-based model doesn't seem to have improved over the linear model, so what else could we try?
### Try an Ensemble Algorithm
Ensemble algorithms work by combining multiple base estimators to produce an optimal model, either by applying an aggregate function to a collection of base models (sometimes referred to as *bagging*) or by building a sequence of models that build on one another to improve predictive performance (referred to as *boosting*).
For example, let's try a Random Forest model, which applies an averaging function to multiple Decision Tree models for a better overall model.
```
from sklearn.ensemble import RandomForestRegressor
# Train the model
model = RandomForestRegressor().fit(X_train, y_train)
print (model, "\n")
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
```
For good measure, let's also try a *boosting* ensemble algorithm. We'll use a Gradient Boosting estimator, which, like a Random Forest algorithm, builds multiple trees; but instead of building them all independently and taking the average result, each tree is built on the outputs of the previous one in an attempt to incrementally reduce the *loss* (error) in the model.
```
# Train the model
from sklearn.ensemble import GradientBoostingRegressor
# Fit a Gradient Boosting model on the training set
model = GradientBoostingRegressor().fit(X_train, y_train)
print (model, "\n")
# Evaluate the model using the test data
predictions = model.predict(X_test)
mse = mean_squared_error(y_test, predictions)
print("MSE:", mse)
rmse = np.sqrt(mse)
print("RMSE:", rmse)
r2 = r2_score(y_test, predictions)
print("R2:", r2)
# Plot predicted vs actual
plt.scatter(y_test, predictions)
plt.xlabel('Actual Labels')
plt.ylabel('Predicted Labels')
plt.title('Daily Bike Share Predictions')
# overlay the regression line
z = np.polyfit(y_test, predictions, 1)
p = np.poly1d(z)
plt.plot(y_test,p(y_test), color='magenta')
plt.show()
```
## Summary
Here we've tried a number of new regression algorithms to improve performance. In the next notebook, we'll look at *tuning* these algorithms to improve it further.
## Further Reading
To learn more about Scikit-Learn, see the [Scikit-Learn documentation](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics).
# UNIX Bash for Data Science
Some sources on frequently used UNIX commands:
* http://faculty.tru.ca/nmora/Frequently%20used%20UNIX%20commands.pdf
* https://www.tjhsst.edu/~dhyatt/superap/unixcmd.html
## UNIX Manual
```
command = input('Which UNIX command manual do you want?: ')
!echo ' ' && echo Manual for: $command && echo '' && man $command | head -20
```
### Basic file and directory commands
mkdir: make directory
rmdir: remove directory
cd: change directory
cp: copy file
mv: move file
rm: remove file
cmp: compare 2 files
more: output per window
chmod: change file permissions
#### Symbols
**.** - working directory
**..** - parent directory to working directory
**~** - home directory
**/** - root directory
***** - string of characters wildcard
**?** - one character wildcard
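These symbols can be tried out in a scratch directory (the path below is just an example; prefix each line with `!` to run it from a notebook cell):

```shell
# Scratch directory to demonstrate the wildcards above
mkdir -p /tmp/wildcard_demo && cd /tmp/wildcard_demo
touch report1.txt report2.txt notes.md

ls *.txt        # '*' matches any string: report1.txt report2.txt
ls report?.txt  # '?' matches exactly one character
ls .            # '.'  is the working directory
ls ..           # '..' is its parent directory
```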
#### Directing and piping commands
command > file - redirects the output of 'command' to 'file' instead of to standard output (screen)
command >> file - appends the output of 'command' to 'file' instead of to standard output (screen)
command < file - takes input for 'command' from file
command1 | command2 - pipe standard output of command1 to standard input of command2
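A quick sketch of redirection and piping, using a throwaway file in `/tmp` (an assumed scratch location; prefix each line with `!` in a notebook cell):

```shell
cd /tmp
echo "banana" >  redirect_demo.txt   # '>'  overwrites the file
echo "apple"  >> redirect_demo.txt   # '>>' appends to it
sort < redirect_demo.txt             # '<'  feeds the file to sort's stdin
cat redirect_demo.txt | wc -l        # '|'  pipes cat's output into wc
```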
### Getting around the filesystem
File permissions in numeric format and their meaning :
0 – no permissions
1 – execute only
2 – write only
3 – write and execute
4 – read only
5 – read and execute
6 – read and write
7 – read, write and execute
e.g. `chmod 400 filename` makes the file read-only for its owner
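A minimal sketch of the numeric modes in action, on a throwaway `/tmp` file (prefix each line with `!` in a notebook cell):

```shell
cd /tmp
touch perm_demo.txt
chmod 640 perm_demo.txt   # 6 = read+write (owner), 4 = read (group), 0 = none (others)
ls -l perm_demo.txt       # permission string shows -rw-r-----
chmod 400 perm_demo.txt   # 4 = read-only for the owner, as in the example above
```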
```
!pwd
!ls
!ls -ltr # sorted by modification date
!ls ..
!ls -FGlAhp | grep ''
!file utils_.ipynb
!find .
```
### Print to terminal
```
filename = '../_data/shakespeare.txt'
!echo $filename
!head -n 5 $filename && tail $filename
!cat $filename | head
!cat $filename | more # output per window
!less $filename # output per window
# CTRL+F – forward one window
# CTRL+B – backward one window
```
### Create new file
```
!echo this is a file made in terminal > ../_data/new_file2.txt
!echo appended text on next line >> ../_data/new_file2.txt
!cat ../_data/new_file2.txt
!touch ../_data/new_file.txt
!echo "this is a file made by touch at $(date)" > ../_data/new_file.txt
!cat ../_data/new_file.txt
%%writefile new_file.txt
%%writefile -a append_file.txt
```
### Sort
```
!head $filename | sort | head
# !head -n 5 $filename | sort # by the first letter in each line
!head $filename | sort -t' ' -k2 # sort by 2nd column (-k2) with delimiter ' ' (-t)
```
### Wordcount (wc)
-l: lines
-c: characters
```
!wc $filename && echo chars: && wc -c $filename && echo lines: && wc -l $filename
!sort $filename | uniq -u | wc -l
```
### Split to words
MacOS version, need to use: `\'$', $'`
- Convert a DOS file to Unix (`\r\n` at the end of each line): `sed 's/.$//' filename`
- replace spaces by returns: 's/ /\'$'\n/g'
- replace carriage return by nothing: $'s/\r//g'
```
!sed 's/.$//' $filename > ../_data/shakespeare_unix.txt
!head ../_data/shakespeare_unix.txt
!head ../_data/shakespeare.txt
```
### Reverse lines of content
```
!sed -n '1!G;h;$p' ../_data/shakespeare_unix.txt | head
!sed -e 's/ /\'$'\n/g' -e $'s/\r//g' < $filename | head
```
### Most frequent words
- remove empty lines: sed `'/^$/d'`, **`^: start of line, $: end of line, d: delete`**
- sort words: sort
- count consecutive words: uniq -c
- sort numerically and reverse order: sort -nr
```
!sed -e 's/ /\'$'\n/g' -e $'s/\r//g' $filename | sed '/^$/d'| sort | uniq -c | sort -nr | head -15
!sed -e 's/ /\'$'\n/g' -e $'s/\r//g' $filename | sed '/^$/d'| sort | uniq -c | sort -nr > ../_data/count_vs_words
!head ../_data/count_vs_words
```
### xargs
```
!head ../_data/count_vs_words | xargs echo
```
### grep
- **g**lobal **r**egular **e**xpression **p**rint
```
!grep -i 'CDROMS' ../_data/shakespeare.txt
!grep -A 10 -i 'CDROMS' ../_data/shakespeare.txt
# Recursive search
!grep -r 'shakespeare' *
!grep 'Liberty' $filename # add -i to make case insensitive
!grep -i 'liberty' $filename | wc -l # count the matching lines
```
### sed
- **s**tream **ed**itor
- like grep + replacement
- **`s/from/to/g`**, **s**: substitute, **g**: global (replace all occurrences on each line)
```
!sed -e 's/parchment/manuscript/g' $filename > ../_data/temp_shakespeare.txt
!grep -i 'manuscript' ../_data/temp_shakespeare.txt
!grep -i 'parchment' ../_data/shakespeare.txt
```
### find
```
!find .. | grep -i shakespeare
```
#### All UNIX commands can be used in combination with Python. Moreover, they can be shortened with aliases.
### Use a pretrained image classifier to find, for each test image, its nearest-neighbor train image in the underlying latent (feature) space. Then associate to the test image a random utterance (annotation) created for its corresponding neighbor.
- Saves the results in a csv file
- The notebook is quite simple as all the heavy lifting happens in the imported classes
- IMHO, the results here are impressive (i.e., fitting on average), and this further highlights that
emotion-explaining is really a subjective and open-ended task.
```
import torch
import numpy as np
import pandas as pd
import os.path as osp
from PIL import Image
from artemis.analysis.feature_extraction import extract_visual_features
from artemis.neural_models.distances import k_euclidean_neighbors
from artemis.in_out.basics import unpickle_data, splitall, pickle_data
## Change to YOUR PATHS
wiki_art_img_dir = '/home/optas/DATA/Images/Wiki-Art/rescaled_max_size_to_600px_same_aspect_ratio'
references_file = '/home/optas/DATA/OUT/artemis/preprocessed_data/for_neural_nets/artemis_gt_references_grouped.pkl'
save_file = '/home/optas/DATA/OUT/artemis/neural_nets/speakers/nearest_neighbor/samples_from_best_model_test_split.pkl'
method = 'resnet34' # you could use resnet101, vgg etc. see extract_visual_features.py
img_dim = 256 # size to rescale each image
gpu_id = '3'
random_seed = 2021
save_results = True
gt_data = next(unpickle_data(references_file))
train_data = gt_data['train']
test_data = gt_data['test']
print('Train N-Images vs. Test N-Images', len(train_data), len(test_data))
train_images = wiki_art_img_dir + '/' + train_data.art_style + '/' + train_data.painting + '.jpg'
test_images = wiki_art_img_dir + '/' + test_data.art_style + '/' + test_data.painting + '.jpg'
assert len(set(train_images)) == len(train_images)
assert len(set(test_images)) == len(test_images)
# Extract features
device = torch.device("cuda:" + gpu_id)
train_feats = extract_visual_features(train_images, img_dim, method=method, device=device)
test_feats = extract_visual_features(test_images, img_dim, method=method, device=device)
# Push features to GPU to do nearest-neighbors
train_feats = torch.from_numpy(train_feats).to(device)
test_feats = torch.from_numpy(test_feats).to(device)
n_dists, n_ids = k_euclidean_neighbors(2, test_feats, train_feats) # take the 2 nearest neighbors to remove possible dups
# replace duplicate (among train/test) with 2nd NN
# this should be obsolete, since I de-duplicated the entire WikiArt on version#2 (CVPR)
# but if you are applying to another dataset it is not a bad idea
# (sorry do not have time to beautify this up now...)
f_ids = n_ids[:, 0]
f_dists = n_dists[:, 0]
too_close_mask = n_dists[:,0] < 1 # you can finetune this threshold
f_ids[too_close_mask] = n_ids[:, 1][too_close_mask] #F : final ids
f_dists[too_close_mask] = n_dists[:,1][too_close_mask]
print((f_dists < 1).sum())
f_ids = f_ids.cpu().numpy()
f_dists = f_dists.cpu().numpy()
# visualize random pair with random utterance
r = np.random.randint(len(test_images))
print('TEST')
display(Image.open(test_images[r]).resize((256, 256)))
print('TRAIN')
fr = f_ids[r]
display(Image.open(train_images[fr]).resize((256, 256)))
lt = train_data.iloc[fr].references_pre_vocab
lt[np.random.choice(len(lt), 1)[0]] # show a random caption of the training
# Make a pandas dataframe with the corresponding captions (sorry do not have time to beautify this up now...)
all_p = []
all_a = []
all_u = []
np.random.seed(random_seed)
for r in range(len(test_images)):
    toks = splitall(test_images[r])
    a_painting = toks[-1][:-len('.jpg')]
    a_style = toks[-2]
    all_p.append(a_painting)
    all_a.append(a_style)
    fr = f_ids[r]
    lt = train_data.iloc[fr].references_pre_vocab
    utter = lt[np.random.choice(len(lt), 1)[0]]  # pick randomly one
    all_u.append(utter)
all_p = pd.DataFrame(all_p)
all_a = pd.DataFrame(all_a)
all_u = pd.DataFrame(all_u)
nn_captions = pd.concat([all_a, all_p, pd.Series([None] * len(all_u)), all_u], axis=1)
nn_captions.columns = ['art_style', 'painting', 'grounding_emotion', 'caption']
# Optional: align the NN-captions with those of another generated set (useful for Panos to compare in for-loop fashion)
if False:
    sampled_captions_file = '/home/optas/DATA/OUT/artemis/neural_nets/speakers/default_training/03-16-2021-22-21-45/samples_from_best_model_test_split.pkl'
    caption_data = next(unpickle_data(sampled_captions_file))[0]
    sampling_config_details, other_captions, attn = caption_data  # here I use the captions as saved from the sample_speaker.py
    print(other_captions.head(2))
    cc = other_captions[['art_style', 'painting']]
    nn_captions = pd.merge(cc, nn_captions)  # re-order according to the cc order
    assert all(other_captions.painting == nn_captions.painting)
    assert all(other_captions.art_style == nn_captions.art_style)
nn_captions.head()
rs = nn_captions.sample(1)
print(rs['caption'].iloc[0])
display(Image.open(osp.join(wiki_art_img_dir, rs['art_style'].iloc[0], rs['painting'].iloc[0] + '.jpg')))
if save_results:
    sampling_configuration = 'Nearest-Neighbor'
    atn_values = None
    pickle_data(save_file, [[sampling_configuration, nn_captions, atn_values]])
```
Paper<br>
https://arxiv.org/abs/2203.09035<br>
<br>
GitHub<br>
https://github.com/datvuthanh/HybridNets<br>
<br>
<a href="https://colab.research.google.com/github/kaz12tech/ai_demos/blob/master/HybridNets_demo.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Environment Setup
## Install Libraries
```
!apt-get install git unzip wget ffmpeg libsm6 libxext6 -y
!pip install --upgrade --no-cache-dir gdown albumentations opencv-python
```
## Get the Source Code from GitHub
```
!git clone https://github.com/datvuthanh/HybridNets.git
```
## Install Requirements
```
%cd HybridNets
!pip install -r requirements.txt
```
# Set Up the Pretrained Model and Datasets
## Download with gdown
```
!gdown 1-wbnw9_71KM9RW2f7LX9Dva6YT5vRHOF
!gdown 19CEnZzgLXNNYh1wCvUlNi8UfiBkxVRH0
!gdown 1NZM-xqJJYZ3bADgLCdrFOa5Vlen3JlkZ
!gdown 1o-XpIvHJq0TVUrwlwiMGzwP1CtFsfQ6t
```
## Extract the Datasets
```
!unzip -q -o datasets.zip
!unzip -q da_seg_annotations.zip -d datasets
!unzip -q det_annotations.zip -d datasets
!unzip -q ll_seg_annotations.zip -d datasets
```
## Download the Pretrained Model
```
!mkdir weights
!curl -L -o weights/hybridnets.pth https://github.com/datvuthanh/HybridNets/releases/download/v1.0/hybridnets.pth
```
# train
```
# --num_gpus 0 to run on CPU only
!python train.py -p bdd100k -c 3 --batch_size 2 --num_gpus 1
```
# predict
```
# --cuda False to run on CPU only
!python hybridnets_test.py --cuda True
!python hybridnets_test_videos.py --cuda True
```
## Display the Results
```
from moviepy.editor import *
from moviepy.video.fx.resize import resize
clip = VideoFileClip("/content/HybridNets/demo_result/output.mp4")
clip = resize(clip, height=420)
clip.ipython_display()
```
# Predict on an Uploaded Video
Video used:<br>
https://pixabay.com/ja/videos/%E9%AB%98%E9%80%9F%E9%81%93%E8%B7%AF-%E3%83%88%E3%83%A9%E3%83%95%E3%82%A3%E3%83%83%E3%82%AF-%E8%BB%8A-8357/
```
%cd /content/HybridNets
!rm -rf upload
!mkdir -p upload
%cd upload
import os
from google.colab import files
uploaded = files.upload()
uploaded = list(uploaded.keys())
file_name = uploaded[0]
# remove spaces from the file name
rename = file_name.replace(" ", "")
os.rename(file_name, rename)
# trim the clip
clip_video = "/content/HybridNets/upload/subclip.mp4"
clip = VideoFileClip(rename)
# extract seconds 1 to 7
sub_clip = clip.subclip(1, 7)
sub_clip.write_videofile(clip_video)
```
# predict
```
%cd /content/HybridNets
!python hybridnets_test_videos.py \
--cuda True\
--source /content/HybridNets/upload \
--output /content/HybridNets/upload
```
## Display the Results
```
clip = VideoFileClip("/content/HybridNets/upload/output.mp4")
clip = resize(clip, height=420)
clip.ipython_display()
```
# Load data
```
import os
import numpy as np
from lib.eval.regression import normalize
from lib.zero_shot import get_gap_ids
from lib.utils import mkdir_p
# setup
seed = 123
rng = np.random.RandomState(seed)
data_dir = 'data/' #'../wgan/data/'
codes_dir = os.path.join(data_dir, 'codes/')
figs_dir = 'figs/'
mkdir_p(figs_dir)
n_c = 10
zshot = True
model_names = ['PCA', 'VAE', '$\\beta$-VAE', 'InfoGAN']
exp_names = [m.lower() for m in model_names]
n_models = len(model_names)
train_fract, dev_fract, test_fract = 0.8, 0.1, 0.1
# load inputs (model codes)
m_codes = []
for n in exp_names:
    try:
        m_codes.append(np.load(os.path.join(codes_dir, n + '.npy')))
    except IOError:
        # .npz, e.g. pca with keys: codes, explained_variance
        m_codes.append(np.load(os.path.join(codes_dir, n + '.npz'))['codes'])
# load targets (ground truths)
gts = np.load(os.path.join(data_dir, 'teapots.npz'))['gts']
n_samples = gts.shape[0]
n_z = gts.shape[1]
n_train, n_dev, n_test = int(train_fract*n_samples), int(dev_fract*n_samples), int(test_fract*n_samples)
# create 'gap' in data if zeroshot (unseen factor combinations)
if zshot:
    try:
        gap_ids = np.load(os.path.join(data_dir, 'gap_ids.npy'))
    except IOError:
        gap_ids = get_gap_ids(gts)

def create_gap(data):
    return np.delete(data, gap_ids, 0)

# split inputs and targets into sets: [train, dev, test, (zeroshot)]
def split_data(data):
    train = data[:n_train]
    dev = data[n_train: n_train + n_dev]
    test = data[n_train + n_dev: n_train + n_dev + n_test]
    if zshot:
        return [create_gap(train), create_gap(dev), create_gap(test), data[gap_ids]]
    return [train, dev, test, None]

gts = split_data(gts)
for i in range(n_models):
    m_codes[i] = split_data(m_codes[i])

# normalize input and target datasets [train, dev, test, (zeroshot)]
def normalize_datasets(datasets):
    datasets[0], mean, std, _ = normalize(datasets[0], remove_constant=False)
    datasets[1], _, _, _ = normalize(datasets[1], mean, std, remove_constant=False)
    datasets[2], _, _, _ = normalize(datasets[2], mean, std, remove_constant=False)
    if zshot:
        datasets[3], _, _, _ = normalize(datasets[3], mean, std, remove_constant=False)
    return datasets

gts = normalize_datasets(gts)
for i in range(n_models):
    m_codes[i] = normalize_datasets(m_codes[i])
```
# Regression
### Fit regressor, visualise and quantify criteria
```
import matplotlib.pyplot as plt
# %matplotlib inline
from lib.eval.hinton import hinton
from lib.eval.regression import *
def fit_visualise_quantify(regressor, params, err_fn, importances_attr, test_time=False, save_plot=False):
    # lists to store scores (one entry per model)
    m_disent_scores = []
    m_complete_scores = []
    # arrays to store errors (+1 for avg)
    train_errs = np.zeros((n_models, n_z + 1))
    dev_errs = np.zeros((n_models, n_z + 1))
    test_errs = np.zeros((n_models, n_z + 1))
    zshot_errs = np.zeros((n_models, n_z + 1))
    # init plot (Hinton diag)
    fig, axs = plt.subplots(1, n_models, figsize=(12, 6), facecolor='w', edgecolor='k')
    axs = axs.ravel()
    for i in range(n_models):
        # init inputs
        X_train, X_dev, X_test, X_zshot = m_codes[i][0], m_codes[i][1], m_codes[i][2], m_codes[i][3]
        # R_ij = relative importance of c_i in predicting z_j
        R = []
        for j in range(n_z):
            # init targets [shape=(n_samples, 1)]
            y_train = gts[0][:, j]
            y_dev = gts[1][:, j]
            y_test = gts[2][:, j] if test_time else None
            y_zshot = gts[3][:, j] if zshot else None
            # fit model
            model = regressor(**params[i][j])
            model.fit(X_train, y_train)
            # predict
            y_train_pred = model.predict(X_train)
            y_dev_pred = model.predict(X_dev)
            y_test_pred = model.predict(X_test) if test_time else None
            y_zshot_pred = model.predict(X_zshot) if zshot else None
            # calculate errors
            train_errs[i, j] = err_fn(y_train_pred, y_train)
            dev_errs[i, j] = err_fn(y_dev_pred, y_dev)
            test_errs[i, j] = err_fn(y_test_pred, y_test) if test_time else None
            zshot_errs[i, j] = err_fn(y_zshot_pred, y_zshot) if zshot else None
            # extract relative importance of each code variable in predicting z_j
            r = getattr(model, importances_attr)[:, None]  # [n_c, 1]
            R.append(np.abs(r))
        R = np.hstack(R)  # columnwise, predictions of each z
        # disentanglement
        disent_scores = entropic_scores(R.T)
        c_rel_importance = np.sum(R, 1) / np.sum(R)  # relative importance of each code variable
        disent_w_avg = np.sum(np.array(disent_scores) * c_rel_importance)
        disent_scores.append(disent_w_avg)
        m_disent_scores.append(disent_scores)
        # completeness
        complete_scores = entropic_scores(R)
        complete_avg = np.mean(complete_scores)
        complete_scores.append(complete_avg)
        m_complete_scores.append(complete_scores)
        # informativeness (append averages)
        train_errs[i, -1] = np.mean(train_errs[i, :-1])
        dev_errs[i, -1] = np.mean(dev_errs[i, :-1])
        test_errs[i, -1] = np.mean(test_errs[i, :-1]) if test_time else None
        zshot_errs[i, -1] = np.mean(zshot_errs[i, :-1]) if zshot else None
        # visualise
        hinton(R, '$\mathbf{z}$', '$\mathbf{c}$', ax=axs[i], fontsize=18)
        axs[i].set_title('{0}'.format(model_names[i]), fontsize=20)
    plt.rc('text', usetex=True)
    if save_plot:
        fig.tight_layout()
        plt.savefig(os.path.join(figs_dir, "hint_{0}_{1}.pdf".format(regressor.__name__, n_c)))
    else:
        plt.show()
    print_table_pretty('Disentanglement', m_disent_scores, 'c', model_names)
    print_table_pretty('Completeness', m_complete_scores, 'z', model_names)
    print("Informativeness:")
    print_table_pretty('Training Error', train_errs, 'z', model_names)
    print_table_pretty('Validation Error', dev_errs, 'z', model_names)
    if test_time:
        print_table_pretty('Test Error', test_errs, 'z', model_names)
    if zshot:
        print_table_pretty('Zeroshot Error', zshot_errs, 'z', model_names)
```
### Lasso
```
from sklearn.linear_model import Lasso
alpha = 0.02
params = [[{"alpha": alpha}] * n_z] * n_models # constant alpha for all models and targets
importances_attr = 'coef_' # weights
err_fn = nrmse # norm root mean sq. error
test_time = True
save_plot = False
fit_visualise_quantify(Lasso, params, err_fn, importances_attr, test_time, save_plot)
```
### Random Forest
```
from sklearn.ensemble import RandomForestRegressor
n_estimators = 10
all_best_depths = [[12, 10, 10, 10, 10] , [12, 10, 3, 3, 3], [12, 10, 3, 3, 3], [4, 5, 2, 5, 5]]
# populate params dict with best_depths per model per target (z gt)
params = [[] for _ in range(n_models)]  # independent lists ([[]] * n_models would alias a single list)
for i, z_max_depths in enumerate(all_best_depths):
    for z_max_depth in z_max_depths:
        params[i].append({"n_estimators": n_estimators, "max_depth": z_max_depth, "random_state": rng})
importances_attr = 'feature_importances_'
err_fn = nrmse # norm root mean sq. error
test_time = True
save_plot = False
fit_visualise_quantify(RandomForestRegressor, params, err_fn, importances_attr, test_time, save_plot)
```
# Figs
## z vs. c
```
from matplotlib.transforms import offset_copy
zs = [0,0] + list(range(n_z))
all_import_codes = [[5,8,4,0,2,1,1],[2,5,9,6,1,8,3],[5,7,2,6,1,9,3],[0,8,1,3,4,9,2]]
n_samples = 5000
fig, axs = plt.subplots(len(zs), n_models, figsize=(20, 25), facecolor='w', edgecolor='k', sharey=True, sharex=True)
for i, import_codes in zip(range(n_models), all_import_codes):
    X_train = m_codes[i][0]
    for j, (z, c) in enumerate(zip(zs, import_codes)):
        X = X_train[:, c:c+1]
        y = gts[0][:, z]
        X, y = subset_of_data(X, y, n_samples)
        if i == 0:  # set row labels
            axs[j, i].set_ylabel('$z_{0}$'.format(z), fontsize=28)
        if j == 0:  # set column titles
            axs[j, i].set_title('{0}'.format(model_names[i]), fontsize=28)
        axs[j, i].set_xlabel('$c_{0}$'.format(c), fontsize=28)
        axs[j, i].scatter(y, X, color='black', linewidth=0)
        axs[j, i].legend(loc=1, fontsize=21)
        axs[j, i].set_ylim([-3.5, 3.5])
        axs[j, i].set_xlim([-2, 2])
        axs[j, i].grid(True)
        axs[j, i].set_axisbelow(True)
plt.rc('text', usetex=True)
fig.tight_layout()
#plt.show()
plt.savefig(os.path.join(figs_dir, "cvsz.pdf"))
```
## Visual disentanglement
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
#%matplotlib inline
base = "/home/cian/wgan/papers/images"
models = ["vae", "bvae", "infogan"]
fs = ["traverse{0}".format(m) for m in models]
paths = [os.path.join(base, f) for f in fs]
zs = [0,0] + list(range(5))
all_important_cs = [[2,5,9,6,1,8,3],[5,7,2,6,1,9,3],[0,8,1,3,4,9,2]]
all_xlabels = [[-3, 0, 3], [-3, 0, 3], [-1, 0, 1]]
xlocs = [0, 320, 640]
ylabels = ['$z_{0}$'.format(z) for z in zs]
ylocs = [64*(z+1) - 32 for z in range(len(ylabels))]
for p, important_cs, xlabels in zip(paths, all_important_cs, all_xlabels):
    ylbls = [ylabels[i] + "$(c_{0})$".format(important_cs[i]) for i in range(len(ylabels))]
    image = mpimg.imread(p + ".png")
    plt.imshow(image)
    plt.yticks(ylocs, ylbls)
    plt.xticks(xlocs, xlabels, fontsize=8)
    plt.tight_layout()
    plt.savefig(os.path.join(base, p + ".pdf"), dpi=700)
```
[kernel #1](https://www.kaggle.com/sentdex/dogs-vs-cats-redux-kernels-edition/full-classification-example-with-convnet)
[kernel #2](https://www.kaggle.com/jeffd23/dogs-vs-cats-redux-kernels-edition/catdognet-keras-convnet-starter)
[kernel #3](https://www.kaggle.com/cgallay/dogs-vs-cats-redux-kernels-edition/cat-dog-notebook)
https://pythonprogramming.net/cnn-tensorflow-convolutional-nerual-network-machine-learning-tutorial/
https://pythonprogramming.net/tflearn-machine-learning-tutorial/
```
import cv2
import os
from tqdm import tqdm # progress bar for long operations
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
image_shape = (128, 128)
LR = 0.001
Y_train = []
X_train = []
for filename in tqdm(os.listdir("train")):
    Y_train.append(int(filename.startswith("cat")))
    full_path = os.path.join("train", filename)
    image = cv2.imread(full_path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, image_shape)
    X_train.append([np.array(image).reshape(image_shape).astype(np.float32)])
Y_train = np.array(Y_train)
Y_train = np.concatenate((Y_train.reshape(-1, 1), 1 - Y_train.reshape(-1, 1)), axis=1)
X_train = np.concatenate(tuple(X_train), axis=0).reshape(-1, image_shape[0], image_shape[1], 1)
np.save("X_train", X_train)
np.save("Y_train", Y_train)
X_test = []
for filename in tqdm(os.listdir("test")):
    full_path = os.path.join("test", filename)
    image = cv2.imread(full_path, cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, image_shape)
    X_test.append([np.array(image).reshape(image_shape).astype(np.float32)])
X_test = np.concatenate(tuple(X_test), axis=0).reshape(-1, image_shape[0], image_shape[1], 1)
np.save("X_test", X_test)
# reload arrays if shape is not changed
X_train = np.load("X_train.npy")
Y_train = np.load("Y_train.npy")
X_test = np.load("X_test.npy")
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.1)
convnet = input_data(shape=[None, image_shape[0], image_shape[1], 1], name='input')
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 2, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=LR, loss='categorical_crossentropy', name='targets')
model = tflearn.DNN(convnet, tensorboard_dir='log')
model.fit({'input': X_train}, {'targets': Y_train}, n_epoch=4, validation_set=({'input': X_val}, {'targets': Y_val}), snapshot_step=500, show_metric=True)
```
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# Redshift - Connect with SQL Magic and IAM Credentials
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Redshift/Redshift_Connect_with_SQL_Magic_and_IAM_Credentials.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #redshift #database #snippet #operations #naas #jupyternotebooks
**Author:** [Caleb Keller](https://www.linkedin.com/in/calebmkeller/)
## Input
- ipython-sql
- boto3
- psycopg2
- sqlalchemy-redshift
If you're running in Naas, you can execute the cell below to install the necessary libraries.
```
!pip install -q ipython-sql boto3 psycopg2-binary sqlalchemy-redshift
```
When using the `ipython-sql` library's *SQL magics*, I like to put `%reload_ext sql` at the top. This loads the extension if it isn't already loaded, or reloads it if it is. Using reload instead of load keeps it from erroring out if you do something like run all cells.
```
%reload_ext sql
import boto3
import psycopg2
import getpass
import pandas as pd
from urllib import parse
```
## Model
The SQL magic, powered by SQLAlchemy, needs the connection string in a specific format. The function below does several things:
1. It takes in your AWS IAM credentials.
2. It uses those credentials to get temporary database credentials.
3. It creates a SQL alchemy connection string from those credentials.
```
def rs_connect(dbname, dbhost, clusterid, dbport, dbuser, region_name='us-east-1'):
''' Connect to Redshift using IAM credentials'''
aaki = getpass.getpass('aws_access_key_id')
asak = getpass.getpass('aws_secret_access_key')
aws_session = boto3.session.Session(aws_access_key_id=aaki, aws_secret_access_key=asak, region_name=region_name)
aaki = ''; asak = ''
aws_rs = aws_session.client('redshift')
response = aws_rs.get_cluster_credentials(DbUser=dbuser, DbName=dbname, ClusterIdentifier=clusterid, AutoCreate=False)
''' Convert those credentials into Database user credentials '''
dbuser = response['DbUser']
dbpwd = response['DbPassword']
''' Generate the SQLAlchemy Connection string '''
connectionString = 'redshift+psycopg2://{username}:{password}@{host}:{port}/{db}?sslmode=prefer'.format(username=parse.quote_plus(dbuser), password=parse.quote_plus(dbpwd), host=dbhost, port=dbport, db=dbname)
dbuser = None; dbpwd = None; response = None
return connectionString
```
## Output
Run the below and replace the parameters with your own server's information.
```
connectionString = rs_connect('database_name', 'host', 'cluster_id', 5439, 'database_user')
%sql $connectionString
%%sql
select
your,
sql,
goes,
here
from
your.brain
```
Article from the author: <a href="https://calebmkeller.medium.com/jupyter-sql-magic-connection-to-redshift-using-iam-credentials-8a9c53ce29db" target="_blank">Jupyter SQL magic connection to Redshift using IAM credentials</a>.
For more on SQL magics read up on them with the below links:
- https://towardsdatascience.com/jupyter-magics-with-sql-921370099589
- https://blog.dominodatalab.com/lesser-known-ways-of-using-notebooks/
<a href="https://colab.research.google.com/github/happy-jihye/Natural-Language-Processing/blob/main/3_Faster_Sentiment_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 3 - Faster Sentiment Analysis
- Pytorch / TorchText
- In tutorial 2 we presented various models aimed at improving accuracy; in this tutorial we will instead train the **FastText model** from the paper [Bag of Tricks for Efficient Text Classification](https://arxiv.org/abs/1607.01759), which speeds up computation considerably.
> 2021/03/16 Happy-jihye
>
> **Reference** : [pytorch-sentiment-analysis/3 - Faster Sentiment Analysis](https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/3%20-%20Faster%20Sentiment%20Analysis.ipynb)
## 1. Preparing Data
```
!apt install python3.7
!pip install -U torchtext==0.6.0
!python -m spacy download en
```
### 1) FastText
- One of the key ideas of the FastText paper is to compute the n-grams of the input sentence and append them to the end of the sentence. There are various n-grams (bi-grams, tri-grams, etc.); this tutorial uses bi-grams.
- A **bi-gram** is a pair of words/tokens that appear consecutively in a sentence.
- Taking "how are you ?" as an example, the bi-grams are "how are", "are you", and "you ?".
- The **generate_bigrams** function below appends the bi-grams of an already tokenized sentence to the end of the token list.
```
def generate_bigrams(x):
n_grams = set(zip(*[x[i:] for i in range(2)]))
for n_gram in n_grams:
x.append(' '.join(n_gram))
return x
generate_bigrams(['This', 'film', 'is', 'terrible'])
```
- The TorchText Field has a **preprocessing argument**. A function passed to this argument is applied to each tokenized sentence before it is indexed.
- Since we are not using an RNN in this tutorial, there is no need to set `include_lengths` to True.
```
import torch
from torchtext import data
TEXT = data.Field(tokenize = 'spacy',
tokenizer_language = 'en',
preprocessing = generate_bigrams)
LABEL = data.LabelField(dtype = torch.float) # pos -> 1 / neg -> 0
```
### 2) IMDb Dataset
- A dataset of 50,000 movie reviews.
- After downloading the IMDb dataset, we process the data with the previously defined Fields (TEXT, LABEL).
- We split the [IMDB](https://pytorch.org/text/stable/datasets.html#imdb) dataset from torchtext.datasets into train_data, valid_data, and test_data.
```
from torchtext import datasets
import random
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
print(f'number of training examples: {len(train_data)}')
print(f'number of validation examples: {len(valid_data)}')
print(f'number of testing examples: {len(test_data)}')
```
### 3) Build Vocabulary and load the pre-trained word embeddings
```
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
print(f"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}")
```
### 4) Create the iterators
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 64
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
## 2. Build Model
- This tutorial does not use an RNN model; it uses only two layers, an embedding layer and a linear layer, so the model has few parameters.

- In the figure above, each word is embedded by the blue embedding layer; the pink part then averages the embeddings of all words, and the resulting average is passed to the silver linear layer.
- This average can be computed with the avg_pool2d function. Sentences themselves are 1-dimensional, but their word embeddings form a 2-dimensional grid, so avg_pool2d can be used to average over the words.

- The **avg_pool2d** function uses a filter of size embedded.shape[1] (the sentence length).

- The average is computed by sliding the filter one step to the right at a time.
- In the example above, if the input was a [4x5] tensor, averaging yields a [1x5] tensor.
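As a quick sanity check (not part of the original tutorial), the following sketch shows that avg_pool2d with a (sentence length, 1) filter is equivalent to taking the mean over the word dimension:

```
import torch
import torch.nn.functional as F

# toy input: batch size 1, sentence length 4, embedding dim 5
embedded = torch.arange(20, dtype=torch.float32).reshape(1, 4, 5)

# a (sentence length, 1) filter averages over the word dimension
pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1)

print(pooled.shape)                                  # torch.Size([1, 5])
print(torch.allclose(pooled, embedded.mean(dim=1)))  # True
```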
```
import torch.nn as nn
import torch.nn.functional as F
class FastText(nn.Module):
def __init__(self, vocab_size, embedding_dim, output_dim, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.fc = nn.Linear(embedding_dim, output_dim)
def forward(self, text):
# text = [sentence length, batch size]
embedded = self.embedding(text)
# embedded = [sentence length, batch size, embedding dim]
embedded = embedded.permute(1, 0, 2)
#embedded = [batch size, sentence length, embedding dim]
pooled = F.avg_pool2d(embedded, (embedded.shape[1],1)).squeeze(1)
# pooled = [batch size, embedding_dim]
return self.fc(pooled)
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
OUTPUT_DIM = 1
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = FastText(INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM, PAD_IDX)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
- As in tutorial 2, we use pre-trained embedding vectors.
```
pretrained_embeddings = TEXT.vocab.vectors
print(pretrained_embeddings.shape)
model.embedding.weight.data.copy_(pretrained_embeddings)
```
- We initialize the embedding weights of the unknown and padding tokens to zero.
```
# PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] : 1
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token] #0
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
print(model.embedding.weight.data)
```
## 3. Train the Model
#### optimizer
- We update the model using **Adam**.
- **SGD**, used in the previous tutorial, updates all parameters with the same learning rate, which makes choosing a learning rate difficult. Adam instead adapts the learning rate per parameter, applying a lower learning rate to frequently updated parameters and a higher one to infrequently updated ones.
```
import torch.optim as optim
optimizer =optim.Adam(model.parameters())
```
#### loss function
- We use **binary cross entropy with logits** as the loss function.
- Since we need to predict a label of either 0 or 1, we use a **sigmoid** (logit) function.
- [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) performs both the sigmoid and the binary cross entropy steps.
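To make that equivalence concrete, here is a small sketch (not from the original tutorial) showing that BCEWithLogitsLoss matches applying a sigmoid followed by plain BCELoss:

```
import torch
import torch.nn as nn

logits = torch.tensor([0.5, -1.0, 2.0])
labels = torch.tensor([1.0, 0.0, 1.0])

# fused: sigmoid + binary cross entropy in one numerically stable step
fused = nn.BCEWithLogitsLoss()(logits, labels)
# manual: apply sigmoid first, then plain BCE
manual = nn.BCELoss()(torch.sigmoid(logits), labels)

print(torch.allclose(fused, manual, atol=1e-6))  # True
```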
```
criterion = nn.BCEWithLogitsLoss()
# GPU
model = model.to(device)
criterion = criterion.to(device)
```
**accuracy function**
- The sigmoid layer outputs a value between 0 and 1, but we need a 0/1 label, so we round the output with [torch.round](https://pytorch.org/docs/stable/generated/torch.round.html).
- Accuracy is measured by counting how many predictions match the labels.
```
def binary_accuracy(preds, y):
rounded_preds = torch.round(torch.sigmoid(preds))
# rounded_preds : [batch size]
# y : batch.label
correct = (rounded_preds == y).float()
acc = correct.sum() / len(correct)
return acc
```
### 1) Train
```
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
# reset the gradients for every batch
optimizer.zero_grad()
# feed batch.text (a batch of sentences) to the model (forward is called automatically)
# predictions has shape [batch size, 1], so squeeze it to [batch size]
predictions = model(batch.text).squeeze(1)
# compute the loss by comparing the predictions with batch.label
loss = criterion(predictions, batch.label)
# compute the accuracy
acc = binary_accuracy(predictions, batch.label)
# backpropagate with backward()
loss.backward()
# update the parameters with the optimizer
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
### 2) Evaluate
```
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
# "evaluation mode" : dropout이나 batch nomalizaation을 끔
model.eval()
# pytorch에서 gradient가 계산되지 않도록 해서 memory를 적게 쓰고 computation 속도를 높임
with torch.no_grad():
for batch in iterator :
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
- A helper function to measure how long each epoch takes
```
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
### Train the model through multiple epochs
- After training, we can see that the training time is greatly reduced.
- The accuracy also shows that this model performs about as well as the previous models.
```
N_EPOCHS = 5
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut3-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
model.load_state_dict(torch.load('tut3-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
# Test
```
import torch
model.load_state_dict(torch.load('tut3-model.pt'))
import spacy
nlp = spacy.load('en_core_web_sm')
def predict_sentiment(model, sentence):
model.eval()
tokenized = generate_bigrams([tok.text for tok in nlp.tokenizer(sentence)])
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
predict_sentiment(model, "This film is terrible")
predict_sentiment(model, "This film is great")
predict_sentiment(model, "This movie is fantastic")
```
# Demo: Singapore rail route segmentation and link length calculation
## Purpose
- Download SG's rail network from [RailRouter SG](https://railrouter.sg) and segment the LineString by stations. This gives the user the (near-exact) shapes and actual lengths of each rail station link.
## Notation
- route: a rail way line, e.g., Circle Line
- pattern: a service running on a route, e.g., Marina Bay <-> Stadium
- Thus, a route may contain several operating patterns.
## Required Libraries
- segment_rail_routes (in the repo)
- geopandas
- shapely
- osmnx (optional for plotting)
## Notes
- For links with multiple lengths in the data:
> The problem with using mean is that we don't actually have a LineString for that averaged line shape. The route with min does not seem realistic in some part of the network, especially in the BP LRT. As a result, we use max (the longest track) of a given link right now and select the respective LineString as the geometry.
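The "keep the longest track per link" rule above can be sketched with pandas. The column names (`link`, `length`, `geometry`) below are illustrative, not taken from segment_rail_routes:

```
import pandas as pd

df = pd.DataFrame({
    "link": ["A-B", "A-B", "B-C"],
    "length": [1.0, 1.3, 0.8],
    "geometry": ["line_1", "line_2", "line_3"],
})

# pick the row with the maximum length within each link,
# which also selects the matching LineString geometry
longest = df.loc[df.groupby("link")["length"].idxmax()]
print(longest["geometry"].tolist())  # ['line_2', 'line_3']
```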
# 1. Setup
```
# import module
import os
import segment_rail_routes as srr
# all rail route files
# source: https://raw.githubusercontent.com/cheeaun/railrouter-sg/master/data/v2/
rr_files = [
'lrt-bukit-panjang-lrt.json',
'lrt-punggol-lrt-east-loop.json',
'lrt-punggol-lrt-west-loop.json',
'lrt-sengkang-lrt-east-loop.json',
'lrt-sengkang-lrt-west-loop.json',
'mrt-circle-line.json',
'mrt-downtown-line.json',
'mrt-east-west-line.json',
'mrt-north-east-line.json',
'mrt-north-south-line.json'
]
```
# 2. Get Output File for All Links
- Overwriting an existing geojson with fiona may throw a `CPLE_NotSupportedError`. To bypass this, remove the existing files manually.
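One way to handle that manually is a small helper that deletes a stale output file before re-writing it (the helper name and example path are illustrative):

```
import os

def remove_if_exists(path):
    """Delete a stale output file so fiona can write a fresh one."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False

# e.g. remove_if_exists('data/links.geojson') before calling srr.get_output
```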
```
data_path = 'data/' # path to save output
links = srr.get_all_links(rr_files) # downloads data
output = srr.get_output(links_df=links, out_path=data_path) # format output
output.head()
```
# 3. Specific Route Exploration
- You are good if you just want the final output file, but you may continue with the following if you want to take a look at each pattern.
```
# specify which route file to use
i = 9
rr_file = rr_files[i]
rr_data, patterns = srr.download_rail_route(rr_file, verbose=True)
# specify which pattern to use
j = 0
pattern = patterns[j]
gdf, path, sttn, lines = srr.get_segmentation(pattern)
# for plotting, consider setting up the following:
# otherwise, use None for both params
# an output path if you want to save the figure (optional)
out_path = 'data/'
# SG road network for basemap plotting (optional)
# [REQUIREMENT] run `!pip install osmnx` if not yet installed
# [INFO] the download will TAKE SOME TIME (like minutes)
# [INFO] if you don't want to plot the basemap but want the route shape
# to be proportional to reality, convert the CRS to EPSG:3857
roadnet = srr.load_roadnet()
srr.plot_segmentation(path, sttn, lines, roadnet, out_path=None)
```
# 4. Construct Rail Graph
- You may then use networkx to construct your length-weighted graph of the rail network and do path query with it.
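A toy sketch of that networkx usage follows; the station names and lengths are made up for illustration, not taken from the output file:

```
import networkx as nx

links = [
    ("Marina Bay", "Raffles Place", 1.2),
    ("Raffles Place", "City Hall", 0.6),
    ("City Hall", "Dhoby Ghaut", 0.9),
]

# build a length-weighted undirected graph of the rail links
G = nx.Graph()
for u, v, length_km in links:
    G.add_edge(u, v, weight=length_km)

# query the shortest path by track length
path = nx.shortest_path(G, "Marina Bay", "Dhoby Ghaut", weight="weight")
print(path)  # ['Marina Bay', 'Raffles Place', 'City Hall', 'Dhoby Ghaut']
```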
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
#### Version Check
Plotly's Python API is updated frequently. Run `pip install plotly --upgrade` to update your Plotly version.
```
import plotly
plotly.__version__
```
#### Show Legend
By default the legend is displayed on Plotly charts with multiple traces.
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='default-legend')
```
Add `showlegend=True` to the `layout` object to display the legend on a plot with a single trace.
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
data = [trace0]
layout = go.Layout(showlegend=True)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='show-legend')
```
#### Hide Legend
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(showlegend=False)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='hide-legend')
```
#### Hide Legend Entries
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
showlegend=False
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='hide-legend-entry')
```
#### Legend Names
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
name='Positive'
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
name='Negative'
)
data = [trace0, trace1]
fig = go.Figure(data=data)
py.iplot(fig, filename='legend-names')
```
#### Horizontal Legend
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(orientation="h")
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='horizontal-legend')
```
#### Legend Position
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(x=-.1, y=1.2)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='position-legend')
```
#### Style Legend
```
import plotly.plotly as py
import plotly.graph_objs as go
trace0 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[1, 2, 3, 4, 5],
)
trace1 = go.Scatter(
x=[1, 2, 3, 4, 5],
y=[5, 4, 3, 2, 1],
)
data = [trace0, trace1]
layout = go.Layout(
legend=dict(
x=0,
y=1,
traceorder='normal',
font=dict(
family='sans-serif',
size=12,
color='#000'
),
bgcolor='#E2E2E2',
bordercolor='#FFFFFF',
borderwidth=2
)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='style-legend')
```
#### Grouped Legend
```
import plotly.plotly as py
data = [
{
'x': [1, 2, 3],
'y': [2, 1, 3],
'legendgroup': 'group', # this can be any string, not just "group"
'name': 'first legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [2, 2, 2],
'legendgroup': 'group',
'name': 'first legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [4, 9, 2],
'legendgroup': 'group2',
'name': 'second legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(142, 124, 195)'
}
},
{
'x': [1, 2, 3],
'y': [5, 5, 5],
'legendgroup': 'group2',
'name': 'second legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(142, 124, 195)'
}
}
]
py.iplot(data, filename='basic-legend-grouping')
```
You can also hide entries in grouped legends:
```
import plotly.plotly as py
data = [
{
'x': [1, 2, 3],
'y': [2, 1, 3],
'legendgroup': 'group',
'name': 'first legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(164, 194, 244)'
}
},
{
'x': [1, 2, 3],
'y': [2, 2, 2],
'legendgroup': 'group',
'name': 'first legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(164, 194, 244)'
},
'showlegend': False
},
{
'x': [1, 2, 3],
'y': [4, 9, 2],
'legendgroup': 'group2',
'name': 'second legend group',
'mode': 'markers',
'marker': {
'color': 'rgb(142, 124, 195)'
}
},
{
'x': [1, 2, 3],
'y': [5, 5, 5],
'legendgroup': 'group2',
'name': 'second legend group - average',
'mode': 'lines',
'line': {
'color': 'rgb(142, 124, 195)'
},
'showlegend': False
}
]
py.iplot(data, filename='hiding-entries-from-grouped-legends')
```
#### Reference
See https://plot.ly/python/reference/#layout-legend for more information!
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'legend.ipynb', 'python/legend/', 'Legends | plotly',
'How to configure and style the legend in Plotly with Python.',
title = 'Legends | plotly',
name = 'Legends',
thumbnail='thumbnail/legend.jpg', language='python',
page_type='example_index', has_thumbnail='false', display_as='layout_opt', order=14,
ipynb='~notebook_demo/14')
```
```
from ranks import *
from diversity_stats import *
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import os
from matplotlib.transforms import Affine2D
import json
sns.set(style='whitegrid')
def extract_data(label, data_dict):
diversity_list = list()
for iter_nb in range(4):
diversity_list.append(data_dict[iter_nb][label]['diversity_score'])
if 4 in data_dict.keys():
diversity_list.append(data_dict[4][label]['diversity_score'])
return diversity_list
def extract_data_new(label, data_dict):
diversity_list = list()
std_error_list = list()
for iter_nb in range(4):
diversity_list.append(data_dict[iter_nb][label]['mean_max_diversity'])
std_error_list.append(data_dict[iter_nb][label]['mean_max_diversity_std_error'])
if 4 in data_dict.keys():
diversity_list.append(data_dict[4][label]['mean_max_diversity'])
std_error_list.append(data_dict[4][label]['mean_max_diversity_std_error'])
return [diversity_list, std_error_list]
def extract_data_bis(label, data_dict):
diversity_list = list()
std_error_list = list()
for iter_nb in range(5):
diversity_list.append(data_dict[iter_nb][label]['diversity_score']['mean_diversity'])
std_error_list.append(data_dict[iter_nb][label]['diversity_score']['mean_diversity_std_error'])
return [diversity_list, std_error_list]
output_path = '/home/manuto/Documents/world_bank/bert_twitter_labor/twitter-labor-data/data/fig/diversity'
labels=['is_hired_1mo', 'is_unemployed', 'lost_job_1mo','job_offer', 'job_search']
iter_nb = range(5)
for label in labels:
fig,ax = plt.subplots(figsize=(4,4))
trans1 = Affine2D().translate(-0.1, 0.0) + ax.transData
trans2 = Affine2D().translate(+0.1, 0.0) + ax.transData
diversity_list_our_method = extract_data(label, our_method_diversity_calibrated)
diversity_list_adaptive = extract_data(label, adaptive_diversity_calibrated)
diversity_list_uncertainty = extract_data(label, uncertainty_diversity_calibrated)
ax.scatter(iter_nb, diversity_list_our_method, c='b', label='our method', transform=trans1)
ax.scatter(iter_nb, diversity_list_adaptive, c='r', label='adaptive retrieval')
ax.scatter(range(4), diversity_list_uncertainty, c='g', label='uncertainty sampling', transform=trans2)
ax.tick_params(which='both',direction='in',pad=3)
# ax.locator_params(axis='y',nbins=6)
ax.set_ylabel('Diversity',fontweight='bold')
ax.set_xlabel('Iteration number',fontweight='bold')
ax.set_title(label.replace('_',' ').replace('1mo','').title(),fontweight='bold')
ax.legend(loc='best',fontsize=9)
# ax.set_ylim([-0.05,1.05])
plt.savefig(os.path.join(output_path,f'diversity_topP_{label}.png'),bbox_inches='tight', format='png' ,dpi=1200, transparent=False)
for label in labels:
fig,ax = plt.subplots(figsize=(4,4))
diversity_list_our_method, std_list_our_method = extract_data_new(label, our_method_distance_with_seed)
diversity_list_adaptive, std_list_adaptive = extract_data_new(label, adaptive_distance_with_seed)
diversity_list_uncertainty, std_list_uncertainty = extract_data_new(label, uncertainty_distance_with_seed)
trans1 = Affine2D().translate(-0.1, 0.0) + ax.transData
trans2 = Affine2D().translate(+0.1, 0.0) + ax.transData
#ax.scatter(iter_nb, mean_list_our_method, c='b', label='our method')
ax.errorbar(iter_nb, diversity_list_our_method, std_list_our_method, linestyle='None', marker='o', transform=trans1, label='our method', c='b')
# ax.fill_between(iter_nb, list(np.array(mean_list_our_method) - np.array(std_list_our_method)), list(np.array(mean_list_our_method) + np.array(std_list_our_method)), color='b', alpha=0.2)
#ax.scatter(iter_nb, mean_list_adaptive, c='r', label='adaptive retrieval')
ax.errorbar(iter_nb, diversity_list_adaptive, std_list_adaptive, linestyle='None', marker='o', label='adaptive retrieval', c='r')
# ax.fill_between(iter_nb, list(np.array(mean_list_adaptive) - np.array(std_list_adaptive)), list(np.array(mean_list_adaptive) + np.array(std_list_adaptive)), color='r', alpha=0.2)
#ax.scatter(range(4), mean_list_uncertainty, c='g', label='uncertainty sampling')
ax.errorbar(range(4), diversity_list_uncertainty, std_list_uncertainty, linestyle='None', marker='o', transform=trans2, label='uncertainty sampling', c='g')
# ax.set_yscale('log')
ax.tick_params(which='both',direction='in',pad=3)
# ax.locator_params(axis='y',nbins=6)
ax.set_ylabel('Diversity',fontweight='bold')
ax.set_xlabel('Iteration number',fontweight='bold')
ax.set_title(label.replace('_',' ').replace('1mo','').title(),fontweight='bold')
# ax.set_ylim([0.1, 300])
ax.legend(loc='best',fontsize=9)
plt.savefig(os.path.join(output_path,f'{label}_distance_with_seed.png'),bbox_inches='tight', format='png' ,dpi=1200, transparent=False)
```
# diversity positives evaluation
```
data_path = '/home/manuto/Documents/world_bank/bert_twitter_labor/twitter-labor-data/data/evaluation_metrics/US/diversity/positives_evaluation'
with open(os.path.join(data_path, 'diversity_positives_top10000.json'), 'r') as JSON:
json_dict = json.load(JSON)
json_dict
extract_data_bis('is_hired_1mo', json_dict['adaptive'])
extract_data_bis('job_search', diversity_positives_top10k_dict['exploit_explore_retrieval'])
for num, label in enumerate(labels):
fig,ax = plt.subplots(figsize=(4,4))
diversity_list_our_method, std_list_our_method = extract_data_bis(label, diversity_positives_top10k_dict['exploit_explore_retrieval'])
diversity_list_adaptive, std_list_adaptive = extract_data_bis(label, diversity_positives_top10k_dict['adaptive'])
diversity_list_uncertainty, std_list_uncertainty = extract_data_bis(label, diversity_positives_top10k_dict['uncertainty'])
diversity_list_uncertainty_uncalibrated, std_list_uncertainty_uncalibrated = extract_data_bis(label, diversity_positives_top10k_dict['uncertainty_uncalibrated'])
trans1 = Affine2D().translate(-0.2, 0.0) + ax.transData
trans2 = Affine2D().translate(-0.1, 0.0) + ax.transData
trans3 = Affine2D().translate(+0.1, 0.0) + ax.transData
#ax.scatter(iter_nb, mean_list_our_method, c='b', label='our method')
ax.errorbar(iter_nb, diversity_list_our_method, std_list_our_method, linestyle='None', marker='o', transform=trans1, label='our method', c='b')
# ax.fill_between(iter_nb, list(np.array(mean_list_our_method) - np.array(std_list_our_method)), list(np.array(mean_list_our_method) + np.array(std_list_our_method)), color='b', alpha=0.2)
#ax.scatter(iter_nb, mean_list_adaptive, c='r', label='adaptive retrieval')
ax.errorbar(iter_nb, diversity_list_adaptive, std_list_adaptive, linestyle='None', marker='v', label='adaptive retrieval', c='r', transform=trans2)
# ax.fill_between(iter_nb, list(np.array(mean_list_adaptive) - np.array(std_list_adaptive)), list(np.array(mean_list_adaptive) + np.array(std_list_adaptive)), color='r', alpha=0.2)
#ax.scatter(range(4), mean_list_uncertainty, c='g', label='uncertainty sampling')
ax.errorbar(iter_nb, diversity_list_uncertainty, std_list_uncertainty, linestyle='None', marker='s', label='uncertainty sampling', c='g')
ax.errorbar(iter_nb, diversity_list_uncertainty_uncalibrated, std_list_uncertainty_uncalibrated, linestyle='None', marker='P', transform=trans3, label='uncertainty sampling (uncalibrated)', c='m')
# ax.set_yscale('log')
ax.tick_params(which='both',direction='in',pad=3)
# ax.locator_params(axis='y',nbins=6)
ax.set_ylabel('Diversity',fontweight='bold')
ax.set_xlabel('Iteration number',fontweight='bold')
ax.set_title(label.replace('_',' ').replace('1mo','').title(),fontweight='bold')
# ax.set_ylim([0.1, 300])
ax.legend(loc='best',fontsize=9)
if num == 4:
ax.legend(loc='best',fontsize=9)
else:
ax.get_legend().remove()
sns.despine()
plt.savefig(os.path.join(output_path,f'{label}_diversity_positives_top10k.png'),bbox_inches='tight', format='png' ,dpi=1200, transparent=False)
```
# Homework 03 - Python introduction
### Exercise 1 - Terminology
Describe the following terms with your own words:
***Function:*** A function is a chunk of code that takes input arguments (if required), performs a sequence of operations on the provided input arguments, and returns an output.
***Variable:*** A variable consists of a name and data loaded into memory; the name points to the data object's location in memory.
***Calling a function:*** A function is called by writing its name and passing values or existing variables as its input arguments.
***String:*** A string is a sequence of characters delimited by quotation marks that mark the start and the end of the sequence.
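The four terms can be tied together in one short illustrative snippet:

```
def greet(name):              # a function with one input argument
    return "Hello, " + name   # returns a string

message = greet("Ada")        # calling the function; 'message' is a variable
print(message)                # Hello, Ada
```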
### Exercise 2 - spurious correlations
Reproduce a *spurious correlations* plot using `plt.plot()`.
First plot both time series as percentage of the last data point (i.e. years on the x-axis, relative data on the y-axis). Then plot the relation between both data sets as scatter plot by using `plt.plot(dataset1, dataset2, 'o')`.
Analyse both plots, come up with a wrong conclusion and explain why it is wrong.
Don't forget to label the axis!
<img src="https://i1.wp.com/flowingdata.com/wp-content/uploads/2012/06/label-your-axes.png?w=500&ssl=1" alt="It is important to always label the axes" style="width: 400px;"/>

Source: https://www.tylervigen.com/spurious-correlations
```
sociology_doctorates = [601, 579, 572, 617, 566, 547, 597, 580, 536, 579, 576, 601, 664]
space_launches = [54, 46, 42, 50, 43, 41, 46, 39, 37, 45, 45, 41, 54]
years = list(range(1997, 2009 + 1))
```
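As an optional sanity check (not required by the exercise, assuming numpy is available), the strength of the correlation can be computed directly with the Pearson correlation coefficient:

```
import numpy as np

sociology_doctorates = [601, 579, 572, 617, 566, 547, 597, 580, 536, 579, 576, 601, 664]
space_launches = [54, 46, 42, 50, 43, 41, 46, 39, 37, 45, 45, 41, 54]

# off-diagonal entry of the 2x2 correlation matrix
r = np.corrcoef(sociology_doctorates, space_launches)[0, 1]
print(round(r, 2))  # a strong positive correlation despite no causal link
```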
#### Reproduction of the spurious correlation plot
```
import matplotlib
matplotlib.rcParams['text.usetex'] = True
from matplotlib import pyplot as plt
import numpy as np
fig, ax1 = plt.subplots(figsize = (12,5))
ax2 = ax1.twinx()
fig.text(0.5, 1.05, 'Worldwide non-commercial space launches',
fontsize = 28, color = 'red', ha = 'center', va = 'bottom')
fig.text(0.5, 1.0, 'correlates with',
fontsize = 18, color = 'grey', ha = 'center', va = 'bottom')
fig.text(0.5, 0.92, 'Sociology doctorates awarded (US)',
fontsize = 28, color = 'black', ha = 'center', va = 'bottom')
ax1.plot(years, space_launches, color='red', marker='o', linestyle='solid',
label = 'Worldwide non-commercial space launches')
ax2.plot(years, sociology_doctorates, color='black', marker='o', linestyle='solid',
label = 'Sociology doctorates awarded (US)')
ax1.set_ylabel('Worldwide non-commercial space launches', color = 'red', fontsize = 12)
ax2.set_ylabel('Sociology doctorates awarded (US)', color = 'black', fontsize = 12)
ax1.set_xticks(np.arange(min(years), max(years)+1, 1.0))
ax1.set_yticks(np.arange(30, 61, 10))
ax2.set_yticks(np.arange(500, 701, 50))
ax1.spines['left'].set_visible(False)
ax1.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax1.set_yticklabels([f'{i} Launches' for i in range(30,70,10)], color = 'red', fontsize = 12)
ax2.set_yticklabels([f'{i} Degrees awarded' for i in range(500,750,50)], color = 'black', fontsize = 12)
ax1.yaxis.set_ticks_position('none')
ax2.yaxis.set_ticks_position('none')
ax1.grid(axis='y')
#plt.title('Infections')
#plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.05),
ax1.legend(loc='upper center', bbox_to_anchor=(0.7, -0.1), frameon = False, fontsize = 12)
ax2.legend(loc='upper center', bbox_to_anchor=(0.3, -0.1), frameon = False, fontsize = 12)
```
#### Plotting timeseries of relative data (with reference of last data point)
```
soc_doc_rel = [sd_i/sociology_doctorates[-1] for sd_i in sociology_doctorates]
spa_lau_rel = [sl_i/space_launches[-1] for sl_i in space_launches]
fig, ax = plt.subplots(figsize = (12,5))
ax.plot(years, spa_lau_rel, color='red', marker='o', linestyle='solid',
label = 'Worldwide non-commercial space launches')
ax.plot(years, soc_doc_rel, color='black', marker='o', linestyle='solid',
label = 'Sociology doctorates awarded (US)')
ax.set_ylabel('Relative numbers', color = 'black', fontsize = 12)
ax.set_xlabel('Year (yyyy)', color = 'black', fontsize = 12)
ax.set_xticks(np.arange(min(years), max(years)+1, 1.0))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.12), frameon = False, ncol = 2)
```
#### Scatter plot for relative values of both data sets
```
fig, ax = plt.subplots(figsize = (5,5))
ax.plot(spa_lau_rel, soc_doc_rel, marker='o', linestyle='none')
ax.set_xlabel('Worldwide non-commercial space launches', color = 'black', fontsize = 12)
ax.set_ylabel('Sociology doctorates awarded (US)', color = 'black', fontsize = 12)
```
#### Conclusion?
It is not entirely clear what "wrong" conclusion is expected here. Independently of whether absolute or relative numbers are plotted, the upward and downward movements of both variables coincide over time, although the amplitude of the variability is larger for the number of space launches. In any case, correlation, coincidence, and causality are not the same thing, which is the principal point of the spurious-correlations blog.
### Exercise 3 - Flatten the curve
Use the logistic growth model to plot an oversimplified version of the [#flattenthecurve](https://www.biospace.com/getasset/fc2b8ad6-697f-49d5-827e-50f4901baf53/) [graphs](https://evilspeculator.com/wp-content/uploads/2020/03/flattenthecurve.jpg).
Write a function `new_infections(t, k)` which returns the number of new infections given by the following formula:
$i_{\textrm{new}}(t) := \frac{e^\left(-k \cdot P \cdot t\right) \cdot k \cdot P^2 \cdot \left(-1 + \frac{P}{i_0}\right)}{\left(1 + e^\left(-k \cdot P \cdot t \right) \cdot \left(-1 + \frac{P}{i_0}\right) \right)^2}$
Plot the number of infections for $t=0,\ldots,250$, $P=1\,000\,000$, $i_0=1$ and $k= \frac{3}{P \cdot 10}$.
Also plot a second constant line and label it with "healthcare system capacity".
Then analyse the plot for different values of $k$ and explain in detail why one should not use this model/plot to predict the outcome of an epidemic.
Bonus question: Is there something one can still learn from it?
##### Motivation
The motivation is not important for the programming task. There is no need to understand all details in order to do the exercise.
For a fixed population with $P$ individuals, $i(t)$ is the number of infections at time $t$. We assume that every individual stays infectious once infected.
Choosing a random pair of individuals from the population, a new infection will take place with probability $2 \cdot \frac{i(t)}{P} \cdot \frac{P - i(t)}{P}$. If we assume that every individual will meet approximately $c$ others in every time step (and infect them if they are not yet infected), there is a total number of contacts $\frac{c \cdot P}{2}$ in every time step. That means we expect a total number of new infections:
$i_{\textrm{new}}(t) = \frac{c \cdot P}{2} \cdot 2 \cdot \frac{i(t)}{P} \cdot \frac{P - i(t)}{P} = \underbrace{\frac{c}{P}}_{=:k} \cdot \left(i(t) \cdot \left(P - i(t)\right) \right)$
This leads to the differential equation:
$i'(t) = i_{\textrm{new}}(t) = k \cdot \left(i(t) \cdot \left(P - i(t)\right) \right)$
A solution is given by:
$i(t) = \frac{P}{\left(1 + e^\left(-k \cdot P \cdot t \right) \cdot \left(\frac{P}{i_0} - 1\right)\right)}$
Differentiating $i(t)$ gives the number of new infections:
$i_{\textrm{new}}(t) := i'(t) = \frac{e^\left(-k \cdot P \cdot t\right) \cdot k \cdot P^2 \cdot \left(-1 + \frac{P}{i_0}\right)}{\left(1 + e^\left(-k \cdot P \cdot t \right) \cdot \left(-1 + \frac{P}{i_0}\right) \right)^2}$
[3blue1brown](https://www.youtube.com/watch?v=gxAaO2rsdIs) has a great video on the topic. If you are interested in a model which is a bit closer to the real world, but still quite simple, have a look at the [SIR model](https://www.youtube.com/watch?v=Qrp40ck3WpI). A large part of this is inspired by the [German Wikipedia page](https://de.wikipedia.org/wiki/Logistische_Funktion).
##### Solution
Write your code here:
*Function definition*
```
import math
def new_infections(t, k, P, i_0):
i_new = (math.exp(-k*P*t) * k * P**2 * (-1 + P/i_0)) / (1 + math.exp(-k*P*t) * (-1 + P/i_0))**2
return(i_new)
```
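A quick sanity check (not part of the exercise): `new_infections` should agree with a central-difference numerical derivative of the logistic solution $i(t)$ from the motivation section. The definitions are repeated so the cell is self-contained:

```python
import math

def infections(t, k, P, i_0):
    # logistic solution i(t) from the motivation section
    return P / (1 + math.exp(-k*P*t) * (P/i_0 - 1))

def new_infections(t, k, P, i_0):
    return (math.exp(-k*P*t) * k * P**2 * (-1 + P/i_0)) / (1 + math.exp(-k*P*t) * (-1 + P/i_0))**2

P, i_0 = 1000000, 1
k = 3 / (P * 10)
t, h = 46.0, 1e-3  # t near the peak of the curve, h the finite-difference step

analytic = new_infections(t, k, P, i_0)
numeric = (infections(t + h, k, P, i_0) - infections(t - h, k, P, i_0)) / (2 * h)
print(analytic, numeric)
```

The two values agree to many digits, which confirms that the closed-form $i_{\textrm{new}}(t)$ really is the derivative of $i(t)$.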
*Calculate and plot new infections*
```
P = 1000000
i_0 = 1
c = [x / 10.0 for x in range(5, 35, 5)]
k = [c_i/(P*10) for c_i in c]
t = range(250)
capacity = 10000
for k_i in k:
new_infections_i = [new_infections(t_i,k_i,P, i_0) for t_i in t]
plt.plot(t,new_infections_i, label = f'k = {k_i}')
plt.plot(t,len(t)*[capacity], 'black', label = "healthcare system capacity")
plt.xlabel("Time since first infection")
plt.ylabel("Number of new infections")
plt.legend()
plt.show()
```
##### Bonus: Interactive plot
If you want an interactive widget to control the parameter c, you can use the following code.
**Warning:** After running `%matplotlib notebook` you cannot plot in other cells any longer. Restart the Jupyter kernel and then refresh the browser window to disable it again.
```Python
%matplotlib notebook
from ipywidgets import interact
line, = plt.plot(t, [new_infections(t_i, (3.0/10.0)/P, P, i_0) for t_i in t])
# write here more plotting code (axis label etc)
def update(c=3.0/10.0):
    line.set_ydata([new_infections(t_i, c/P, P, i_0) for t_i in t])
interact(update);
```
```
%matplotlib notebook
from ipywidgets import interact
line, = plt.plot(t, [new_infections(t_i, 1/P, P, i_0) for t_i in t], label = "Number of infections")
plt.plot(t,len(t)*[capacity], 'black', label = "healthcare system capacity")
plt.xlabel("Time since first infection")
plt.ylabel("Number of new infections")
plt.legend()
def update(c=3.0/10.):
line.set_ydata([new_infections(t_i, c/P, P, i_0) for t_i in t])
interact(update, c = (0,3,0.01));
```
### Exercise 4 - Fibonacci
Write a function `fibonacci(n)` which calculates the n-th [Fibonacci number](https://en.wikipedia.org/wiki/Fibonacci_number), defined by $f(0)=0$, $f(1)=1$ and $f(n) = f(n-1)+f(n-2)$.
Use the function to calculate $f(100)$.
```
def fibonacci(n):
f = [0,1]
for i in range(n):
f = [f[-1], f[-1] + f[-2]]
return(f[0])
f_100 = fibonacci(100)
print(f_100)
```
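For comparison (not part of the exercise), the same function can be written recursively; memoisation via `functools.lru_cache` keeps the naive recursion from taking exponential time:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # each fib(n) is computed once and then cached
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # → 354224848179261915075
```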
<a href="https://colab.research.google.com/github/JeremyQuijano/ph582_final_project_group_f/blob/main/main.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('gdrive')
%cd gdrive/Shareddrives/ML\ project # change it to the path containing m12f folder https://drive.google.com/drive/folders/1DcmgGiFeJaOAw_6Ogk8Pujg51yNAYNmO?usp=sharing
# !zip -r /content/gdrive/Shareddrives/ML\ project/m12f.zip /content/gdrive/Shareddrives/ML\ project/m12f
!pip install umap-learn
```
# Import libraries
```
import gizmo_analysis as gizmo
import utilities as ut
import os
import matplotlib
from matplotlib import colors
from matplotlib.colors import Normalize
from matplotlib.gridspec import GridSpec
import matplotlib.pyplot as plt
import numpy as np
from astropy.visualization import ImageNormalize, LogStretch
import umap
from time import time
from sklearn.manifold import TSNE
from scipy.interpolate import UnivariateSpline, interp1d
from astropy.convolution import Gaussian1DKernel, convolve
from sklearn.cluster import DBSCAN
matplotlib.rcParams['xtick.labelsize'] = 11
matplotlib.rcParams['ytick.labelsize'] = 11
matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['legend.fontsize'] = 13
matplotlib.rcParams['axes.titlesize'] = 20
```
# Functions to plot results and calculate some stellar properties
```
def plot_results(data, c, clabel, xlabel='t-SNE 2d - one', ylabel='t-SNE 2d - two', **kw):
if 'clim' in kw:
vmin, vmax = kw['clim']
else:
vmin, vmax = [np.min(c), np.max(c)]
if data.shape[1] == 2:
plt.figure(figsize=(10, 8))
plt.scatter(data[:, 0], data[:, 1], c=c, cmap='Spectral', s=1, vmin=vmin, vmax=vmax)
plt.gca().set_aspect('equal', 'datalim')
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.colorbar(label=clabel)
if 'method' in kw:
plt.title(kw['method'], fontsize=11)
if 'fname' in kw:
plt.tight_layout()
plt.savefig(kw['fname']+'.png', dpi=120)
plt.show()
plt.close()
def limits(data, low=0.001, high=0.999, bins=200, **kwargs):
if len(data) == 0:
return [0, 0]
if min(data) == max(data):
return [0, 0]
    lim_min, lim_max = [np.percentile(data, 100 * low), np.percentile(data, 100 * high)]  # np.percentile expects percentages in [0, 100]
return [lim_min, lim_max]
def parse_limits(lim_input, data):
if not isinstance(lim_input, str):
real_lim = lim_input
else:
if '%' in lim_input:
if lim_input.split('%')[-1] == '':
low = float(lim_input.split('%')[0])
percents = [low/100, 1-low/100]
else:
percents = [float(elem)/100 for elem in lim_input.split('%')]
else:
percents = [0.01, 0.99]
if len(data) == 1:
real_lim = limits(data[0], low=percents[0], high=percents[-1])
else:
real_lim = [limits(data[0], low=percents[0], high=percents[-1]),
limits(data[1], low=percents[0], high=percents[-1])]
return real_lim
def density_plot(x, y, xlabel='x', ylabel='z', title='stellar disk', clabel='Number of points per pixel', name='xz.png',
figsize=(12, 8), cbar_flag=True, **kwargs):
if ('clim' in kwargs) and ('c' in kwargs):
climits = parse_limits(kwargs.get('clim'), [kwargs.get('c')])
elif 'clim' in kwargs:
climits = kwargs.get('clim')
else:
climits = [0, 750]
if 'axlim' in kwargs:
lim = parse_limits(kwargs.get('axlim'), [x, y])
if np.shape(lim) == (2,):
xlimits = lim
ylimits = lim
else:
xlimits, ylimits = lim
else:
xlimits = limits(np.ravel(x))
ylimits = limits(np.ravel(y))
if 'stretch' in kwargs:
if kwargs.get('stretch') == 'log':
if climits[0] == 0:
climits[0] += 1
norm = colors.LogNorm(vmin=climits[0], vmax=climits[1])
elif kwargs.get('stretch') == 'lin':
norm = colors.Normalize(vmin=climits[0], vmax=climits[1])
else:
norm = colors.LogNorm(vmin=climits[0], vmax=climits[1])
if 'cmap' in kwargs:
cmap=plt.get_cmap(kwargs.get('cmap'))
else:
cmap=plt.get_cmap('Blues')
if 'bins' in kwargs:
bins = kwargs.get('bins') # should be different at x and y?
else:
bins = 1024
fig = plt.figure()
fig.subplots_adjust(left=0.2)
ax = fig.add_subplot(1, 1, 1)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_title(title)
if 'c' in kwargs:
hist = np.histogram2d(x, y, bins=bins, range=[[xlimits[0], xlimits[1]],[ylimits[0], ylimits[1]]])
hist_weights = np.histogram2d(x, y, bins=bins, range=[[xlimits[0], xlimits[1]], [ylimits[0], ylimits[1]]],
weights=kwargs.get('c'))
dens = ax.imshow((hist_weights[0]/hist[0]).T, origin='lower',
extent=[xlimits[0], xlimits[1], ylimits[0], ylimits[1]], cmap=cmap, norm=norm, aspect='auto')
else:
hist = np.histogram2d(x, y, bins=bins, range=[[xlimits[0], xlimits[1]],[ylimits[0], ylimits[1]]])
dens = ax.imshow(hist[0].T, origin='lower', extent=[xlimits[0], xlimits[1], ylimits[0], ylimits[1]], cmap=cmap,
norm=norm, aspect='auto')
if cbar_flag:
cbar = plt.colorbar(dens, orientation='vertical', fraction=0.1, aspect=60, label=clabel)
cbar.ax.tick_params(labelsize=8)
plt.show()
fig.savefig(name, dpi=320)
plt.close()
def ang_mom(r, v):
'''
:param r: vector of coordinates
:param v: vector of velocities
:return: vector of angular momentum
'''
jz = r[0]*v[1] - r[1]*v[0]
jy = -r[0]*v[2] + r[2]*v[0]
jx = r[1]*v[2] - r[2]*v[1]
return [jx, jy, jz]
def ang_mom_circ(r, v, phi, rmax=200, bins=100):
'''
:param r: radius
:param v: total velocity
:param phi: potential
:return: j_circ(E)
'''
r_phi = np.array(sorted([x for x in zip(r, phi)]))
r_edges = np.linspace(0, rmax, bins+1) # check what is max(r); should I apply limits?
r_bin = 0.5*(r_edges[1:] + r_edges[:-1])
phi_bin = np.zeros_like(r_bin)
for i in range(len(r_bin)-1):
idxs_bin = np.where((r_phi.T[0] > r_edges[i]) & (r_phi.T[0] < r_edges[i+1]))
phi_bin[i] = np.mean(r_phi.T[1][idxs_bin])
phi_spl = UnivariateSpline(r_bin, phi_bin)
phi_der = phi_spl.derivative()
# j(E) interpolation
r = np.linspace(0, rmax, 1000)
j_circ_E_spl = interp1d(0.5 * r * phi_der(r) + phi_spl(r), np.sqrt(r * phi_der(r)) * r,
fill_value=0, bounds_error=False)
j_circ_E = j_circ_E_spl(0.5 * v ** 2 + phi)
return j_circ_E
```
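The `ang_mom` helper above is the cross product $\vec j = \vec r \times \vec v$ written out component by component; a quick standalone check against `np.cross`:

```python
import numpy as np

def ang_mom(r, v):
    # specific angular momentum j = r x v, component-wise
    jz = r[0]*v[1] - r[1]*v[0]
    jy = -r[0]*v[2] + r[2]*v[0]
    jx = r[1]*v[2] - r[2]*v[1]
    return [jx, jy, jz]

r = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
print(ang_mom(r, v))  # → [-3.0, 6.0, -3.0]
print(np.cross(r, v))
```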
# Read snapshots from path/output directory
```
part = gizmo.io.Read.read_snapshots(['star'], 'index', 600,
simulation_directory='m12f',
assign_host_principal_axes=True, # assign host principal axes: puts the coordinate origin at the host galaxy center
assign_formation_coordinates=True) # to track formation coordinates
```
# Divide dataset into sample A ($R_{birth}$ < 20 kpc) and sample B (30 kpc < $R_{birth}$ < 500 kpc)
```
idxs_star = np.where(part['star'].prop('host.distance.principal.total')<20) #sample A
# idxs_star = np.where((part['star'].prop('host.distance.principal.total')>30) & (part['star'].prop('host.distance.principal.total')<500)) #sample B
print('Number of stars in the sample: ', len(idxs_star[0]))
# define arrays with stellar properties and take every k-th star to decrease sample size and speed up calculations
k = 1000 # for sample A
# k = 100 # for sample B
feh = part['star'].prop('metallicity.fe')[idxs_star][::k]
ofe = part['star'].prop('metallicity.o-metallicity.fe')[idxs_star][::k]
mgfe = part['star'].prop('metallicity.mg-metallicity.fe')[idxs_star][::k]
sife = part['star'].prop('metallicity.si-metallicity.fe')[idxs_star][::k]
cafe = part['star'].prop('metallicity.ca-metallicity.fe')[idxs_star][::k]
cfe = part['star'].prop('metallicity.c-metallicity.fe')[idxs_star][::k]
nefe = part['star'].prop('metallicity.ne-metallicity.fe')[idxs_star][::k]
nfe = part['star'].prop('metallicity.n-metallicity.fe')[idxs_star][::k]
sfe = part['star'].prop('metallicity.s-metallicity.fe')[idxs_star][::k]
ages = part['star'].prop('age')[idxs_star][::k]
x = part['star'].prop('host.distance.principal').T[0][idxs_star][::k]
y = part['star'].prop('host.distance.principal').T[1][idxs_star][::k]
z = part['star'].prop('host.distance.principal').T[2][idxs_star][::k]
R = np.sqrt(x**2 + y**2 + z**2)
vx = part['star'].prop('host.velocity.principal').T[0][idxs_star][::k]
vy = part['star'].prop('host.velocity.principal').T[1][idxs_star][::k]
vz = part['star'].prop('host.velocity.principal').T[2][idxs_star][::k]
vR = part['star'].prop('host.velocity.principal.cylindrical').T[0][idxs_star][::k]
vphi = part['star'].prop('host.velocity.principal.cylindrical').T[1][idxs_star][::k]
Rbirth = part['star'].prop('form.host.distance.total')[idxs_star][::k]
xb = part['star'].prop('form.host.distance.principal').T[0][idxs_star][::k]
yb = part['star'].prop('form.host.distance.principal').T[1][idxs_star][::k]
zb = part['star'].prop('form.host.distance.principal').T[2][idxs_star][::k]
#%% angular momentum
j = ang_mom([x, y, z], [vx, vy, vz])
phi = part['star'].prop('potential')[idxs_star][::k]
v = np.sqrt(vx**2 + vy**2 + vz**2)
E = v**2 + phi
# j_circ = ang_mom_circ(R, v, phi) # only for sample A
data_subset = np.array([feh, ofe, mgfe, cafe, sife, nfe, nefe, sfe, cfe]).T
```
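The repeated `[idxs_star][::k]` pattern above first selects the sample via the index arrays from `np.where` and then keeps every k-th element to thin it out; a tiny illustration on toy data:

```python
import numpy as np

a = np.arange(20)
idxs = np.where(a % 2 == 0)  # tuple of index arrays, like idxs_star above
k = 3
subset = a[idxs][::k]        # select, then keep every k-th element
print(subset)  # → [ 0  6 12 18]
```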
# plot stellar distribution in coordinate and chemical space
```
density_plot(part['star'].prop('host.distance.principal').T[0][idxs_star],
part['star'].prop('host.distance.principal').T[1][idxs_star],
xlabel='x', ylabel='y', name='xy.png', axlim=[[-15, 15], [-15, 15]],
clim=[1, 3000], bins=512)
#%% 2d hist + age
density_plot(part['star'].prop('host.distance.principal').T[0][idxs_star],
part['star'].prop('host.distance.principal').T[1][idxs_star],
c=part['star'].prop('age')[idxs_star],
xlabel='x', ylabel='y', name='xy_age.png', axlim=[[-15, 15], [-15, 15]],
clim=[0, 14], stretch='lin', bins=512, cmap='Spectral', clabel='age')
#%% 2d hist FeH-OFe
density_plot(part['star'].prop('metallicity.fe')[idxs_star],
part['star'].prop('metallicity.o-metallicity.fe')[idxs_star],
xlabel='[Fe/H]', ylabel='[O/Fe]', name='feh_ofe_80kpc.png', axlim=[[-4, 1.2], [0.2, 0.8]],
clim=[1, 3000], bins=512)
```
# t-SNE
```
tsne = TSNE(n_components=2, verbose=1, perplexity=100, n_iter=500, learning_rate=1000,
random_state=42)
tsne_results = tsne.fit_transform(data_subset)
plot_results(tsne_results, ages, 'age, Gyr', method=str(tsne), clim=[0., 14.])
```
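The axes of a t-SNE embedding carry no physical meaning; only the relative grouping of points is interpretable. A minimal run on synthetic data (hypothetical blobs standing in for the 9-dimensional abundance space, not the simulation data):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
# two well-separated blobs in 9 dimensions, mimicking the abundance-space input
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 9)),
               rng.normal(3.0, 0.1, size=(50, 9))])

emb = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(X)
print(emb.shape)  # → (100, 2)
```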
# UMAP
```
reducer = umap.UMAP(random_state=42, n_neighbors=20, min_dist=0.01, metric='minkowski', n_components=2)
reducer.fit(data_subset)
embedding = reducer.transform(data_subset)
plt.scatter(embedding[:, 0], embedding[:, 1], c=ages, cmap='Spectral', s=1)
plt.gca().set_aspect('equal', 'datalim')
```
# DBSCAN
```
clustering = DBSCAN(eps=0.01, min_samples=5).fit(data_subset)
len(set(clustering.labels_)) # number of distinct labels; DBSCAN marks noise as -1, so this may be the number of clusters plus one
plt.figure()
plt.scatter(x[clustering.labels_>1], z[clustering.labels_>1], c=clustering.labels_[clustering.labels_>1], s=1.)
plt.xlim(-20, 20)
plt.ylim(-20, 20)
plt.show()
plt.close()
```
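Because DBSCAN labels noise points as -1, counting clusters needs to exclude that label. A sketch on toy data (hypothetical points, not the simulation sample):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
pts = np.vstack([
    rng.normal(0, 0.05, size=(30, 2)),   # dense cluster
    rng.normal(5, 0.05, size=(30, 2)),   # second dense cluster
    np.array([[100.0, 100.0]]),          # an isolated point -> labelled -1 (noise)
])
labels = DBSCAN(eps=0.5, min_samples=5).fit(pts).labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)  # → 2
```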
<a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_18_Data_formats_III_(XML).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
```
# Chapter 18 - Data Formats III (XML)
In this chapter, we will learn how to work with one of the most popular structured data formats: [XML](http://www.w3schools.com/xml/). XML is used a lot in Natural Language Processing (NLP), and it is important that you know how to work with it. In theory, you could load XML data just as you read in a text file, but the structure is much too complicated to extract information manually. Therefore, we use an existing library.
### At the end of this chapter, you will be able to
* read an XML file using `etree.parse`
* read XML from string using `etree.fromstring`
* convert an XML element to a string using `etree.tostring`
* use the following methods and attributes of an XML element (of type `lxml.etree._Element`):
* to access elements: methods `find`, `findall`, and `getchildren`
* to access attributes: method `get`
* to access element information: attributes `tag` and `text`
* [not needed for assignment] create your own XML and write it to a file
### If you want to learn more about this chapter, you might find the following links useful:
* [XML](http://www.w3schools.com/xml/)
* [detailed XML introduction](http://www.dfki.de/~uschaefer/esslli09/xmlquerylang.pdf)
* [NAF XML](http://www.newsreader-project.eu/files/2013/01/techreport.pdf)
* [Xpath](http://www.w3schools.com/xml/xpath_syntax.asp)
* Other structured data formats: [JSON-LD](http://json-ld.org/), [MicroData](https://www.w3.org/TR/microdata/), [RDF](https://www.w3.org/RDF/)
If you have **questions** about this chapter, please contact us at cltl.python.course@gmail.com.
## 1. Introduction to XML
Natural language processing (NLP) is all about text data. More specifically, we usually want to annotate (manually or automatically) textual data with information about:
* [part of speech](https://en.wikipedia.org/wiki/Part_of_speech)
* [word senses](https://en.wikipedia.org/wiki/Word_sense)
* [syntactic information (in particulay dependencies)](https://en.wikipedia.org/wiki/Dependency_grammar)
* [entities](https://en.wikipedia.org/wiki/Entity_linking)
* [semantic role labelling](https://en.wikipedia.org/wiki/Semantic_role_labeling)
* Events
* many many many more.....
How should we represent such annotated data? This is what you get from using NLTK tools:
```
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
text = nltk.word_tokenize("Tom Cruise is an actor.")
text_pos_tagged = nltk.pos_tag(text)
print(text_pos_tagged)
print(type(text_pos_tagged), type(text_pos_tagged[0]))
```
In this example, we see that the format is a list of [tuples](https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences). The first element of each tuple is the word and the second element is the part of speech tag. Great, so far, this works.
However, we usually want to store more information. For instance, we may want to indicate that *Tom Cruise* is a named entity. We could also represent syntactic information about this sentence. Now we start to run into difficulties because some annotations are for single words and some are for combinations of words. In addition, we have more than one annotation per token. Data structures such as CSV and TSV are not great at **representing** linguistic information. So is there a format that is better at it? The answer is yes and the format is XML.
## 2. Terminology
Let's look at an example (the line numbers are there for explanation purposes). On purpose, we start with a non-linguistic, hopefully intuitive example. In the folder `../Data/xml_data` this XML is stored as the file `course.xml`. You can inspect this file using a text editor (e.g. [Atom](https://atom.io/), [BBEdit](https://www.barebones.com/products/bbedit/download.html) or [Notepad++](https://notepad-plus-plus.org)).
```xml
1. <Course>
2. <person role="coordinator">Van der Vliet</person>
3. <person role="instructor">Van Miltenburg</person>
4. <person role="instructor">Van Son</person>
5. <person role="instructor">Postma</person>
7. <person role="student">Baloche</person>
8. <person role="student">De Boer</person>
9. <animal role="student">Rubber duck</animal>
10. <person role="student">Van Doorn</person>
11. <person role="student">De Jager</person>
12. <person role="student">King</person>
13. <person role="student">Kingham</person>
14. <person role="student">Mózes</person>
15. <person role="student">Rübsaam</person>
16. <person role="student">Torsi</person>
17. <person role="student">Witteman</person>
18. <person role="student">Wouterse</person>
19. <person/>
20. </Course>
```
### 2.1 Elements
Lines 1 to 19 all show examples of [XML elements](http://www.w3schools.com/xml/xml_elements.asp). Each XML element contains a **starting tag** (e.g. ```<person>```) and an **end tag** (e.g. ```</person>```). An element can contain:
* **text** *Van der Vliet* on line 2
* **attributes**: *role* attribute in lines 2 to 18
* **elements**: elements can contain other elements, e.g. *person* elements inside the *Course* element. The terminology to talk about this is as follows. In this example, we call `person` the `child` of `Course` and `Course` the `parent` of `person`.
Please note that on line 19 the **starting tag** and **end tag** are combined. This happens when an element has no children and/or no text. The syntax for an element is then **``` <START_TAG/>```**.
### 2.2 Root element
A special element is the **root** element. In our example, `Course` is our [root element](https://en.wikipedia.org/wiki/Root_element). The element starts at line 1 (```<Course>```) and ends at line 20 (```</Course>```). Notice the difference between the begin tag (no '/') and the end tag (with '/'). The root element is special in that it is the only element without a parent; every other element in the document is nested inside it.
### 2.3 Attributes
Elements can contain [attributes](http://www.w3schools.com/xml/xml_attributes.asp), which contain information about the element. In this case, this information is the `role` a person has in the course. All attributes are located in the start tag of an XML element.
## 3. Working with XML in Python
Now that we know the basics of XML, we want to be able to access it in Python. In order to work with XML, we will use the [**lxml**](http://lxml.de/) library.
```
from lxml import etree
```
We will focus on the following methods/attributes:
* **to parse the XML from file or string**: the methods `etree.parse()` and `etree.fromstring()`
* **to access the root element**: the method `getroot()`
* **to access elements**: the methods `find()`, `findall()`, and `getchildren()`
* **to access attributes**: the method `get()`
* **to access element information**: the attributes `tag` and `text`
### 3.1 Parsing XML from file or string
The **`etree.fromstring()`** is used to parse XML from a string:
```
xml_string = """
<Course>
<person role="coordinator">Van der Vliet</person>
<person role="instructor">Van Miltenburg</person>
<person role="instructor">Van Son</person>
<person role="instructor">Marten Postma</person>
<person role="student">Baloche</person>
<person role="student">De Boer</person>
<animal role="student">Rubber duck</animal>
<person role="student">Van Doorn</person>
<person role="student">De Jager</person>
<person role="student">King</person>
<person role="student">Kingham</person>
<person role="student">Mózes</person>
<person role="student">Rübsaam</person>
<person role="student">Torsi</person>
<person role="student">Witteman</person>
<person role="student">Wouterse</person>
<person/>
</Course>
"""
tree = etree.fromstring(xml_string)
print(type(tree))
# printing the tree only shows a reference to the tree, but not the tree itself
# To access information, you will have to use the methods we introduce below.
print(tree)
```
The **`etree.parse()`** method is used to load XML files on your computer:
```
tree = etree.parse('../Data/xml_data/course.xml')
print(tree)
print(type(tree))
```
As you can see, `etree.parse()` returns an `ElementTree`, whereas `etree.fromstring()` returns an `Element`. One of the important differences is that the `ElementTree` class serialises as a complete document, as opposed to a single `Element`. This includes top-level processing instructions and comments, as well as a DOCTYPE and other DTD content in the document. For now, it's not too important that you know what these are; just remember that there is a difference between `ElementTree` and `Element`.
### 3.1 Accessing root element
While `etree.fromstring()` gives you the root element right away, `etree.parse()` does not. In order to access the root element of `ElementTree`, we first need to use the **`getroot()`** method. Note that this does not show the XML element itself, but only a reference. In order to show the element itself, we can use the **`etree.dump()`** method.
**Hint:** etree.dump is helpful for inspecting (parts of) an xml structure you have loaded from a file. You will see examples of this later.
```
root = tree.getroot()
print('root', type(root), root)
print()
print('etree.dump example')
etree.dump(root, pretty_print=True)
```
As with any python object, we can use the built-in function **`dir()`** to list all methods of an element (which has the type **`lxml.etree._Element`**) , some of which will be illustrated below.
```
print(type(root))
dir(root)
```
### 3.3 Accessing elements
There are several ways of accessing XML elements. The **`find()`** method returns the *first* matching child.
```
first_person_el = root.find('person')
# Printing the element itself again only shows a reference
print(first_person_el)
#instead, we use etree.dump:
etree.dump(first_person_el, pretty_print=True)
```
In order to get a list of all person children, we can use the **`findall()`** method.
Notice that this does not return the `animal` since we are looking for `person` elements.
```
all_person_els = root.findall('person')
# Check how many we found
print(len(all_person_els))
all_person_els
```
Sometimes, we simply want all the children, regardless of their tags. This can be achieved using the **`getchildren()`** method. The list created below will contain all elements under the root (including the animal element).
```
all_child_els = root.getchildren()
print(len(all_child_els))
all_child_els
```
### 3.4 Accessing element information
We will now show how to access the attributes, text, and tag of an element.
The **`get()`** method is used to access the attribute of an element.
**Attention**: If an attribute does not exist, it will return `None`. Hence, there will not be an error.
```
first_person_el = root.find('person')
role_first_person_el = first_person_el.get('role')
attribute_not_found = first_person_el.get('blabla')
print('role first person element:', role_first_person_el)
print('value if not found:', attribute_not_found)
```
The **text** of an element is found in the attribute **`text`**:
```
print(first_person_el.text)
```
The **tag** of an element is found in the attribute **`tag`**:
```
print(first_person_el.tag)
```
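Putting `tag`, `get()`, and `text` together: the loop below prints the tag, `role` attribute, and text of every child of a small course tree. It uses the standard library's `xml.etree.ElementTree`, whose interface is nearly identical to lxml's for these methods, so the cell runs without lxml installed:

```python
import xml.etree.ElementTree as ET

xml_string = """
<Course>
  <person role="coordinator">Van der Vliet</person>
  <animal role="student">Rubber duck</animal>
  <person/>
</Course>
"""
root = ET.fromstring(xml_string)

for child in root:  # iterating over an element yields its children
    # get() and .text return None when the attribute or text is missing
    print(child.tag, child.get('role'), child.text)
```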
## 4 How to deal with more than one layer
In our previous example, we had an XML with only one nested layer (**person**). However, XML can deal with many more.
Let's look at such an example and think about how you would access the first **`target`** element, i.e.
```xml
<target id="t1" />
```
```xml
<NAF xml:lang="en" version="v3">
<terms>
<term id="t1" type="open" lemma="Tom" pos="N" morphofeat="NNP">
<term id="t2" type="open" lemma="Cruise" pos="N" morphofeat="NNP">
<term id="t3" type="open" lemma="be" pos="V" morphofeat="VBZ">
<term id="t4" type="open" lemma="an" pos="R" morphofeat="DT">
<term id="t5" type="open" lemma="actor" pos="N" morphofeat="NN">
</terms>
<entities>
<entity id="e3" type="PERSON">
<references>
<span>
<target id="t1" />
<target id="t2" />
</span>
</references>
</entity>
</entities>
</NAF>
```
Again, we use `etree.fromstring()` to load XML from a string:
```
naf_string = """
<NAF xml:lang="en" version="v3">
<text>
<wf id="w1" offset="0" length="3" sent="1" para="1">tom</wf>
<wf id="w2" offset="4" length="6" sent="1" para="1">cruise</wf>
<wf id="w3" offset="11" length="2" sent="1" para="1">is</wf>
<wf id="w4" offset="14" length="2" sent="1" para="1">an</wf>
<wf id="w5" offset="17" length="5" sent="1" para="1">actor</wf>
</text>
<terms>
<term id="t1" type="open" lemma="Tom" pos="N" morphofeat="NNP"/>
<term id="t2" type="open" lemma="Cruise" pos="N" morphofeat="NNP"/>
<term id="t3" type="open" lemma="be" pos="V" morphofeat="VBZ"/>
<term id="t4" type="open" lemma="an" pos="R" morphofeat="DT"/>
<term id="t5" type="open" lemma="actor" pos="N" morphofeat="NN"/>
</terms>
<entities>
<entity id="e3" type="PERSON">
<references>
<span>
<target id="t1" />
<target id="t2" />
</span>
</references>
</entity>
</entities>
</NAF>
"""
naf = etree.fromstring(naf_string)
print(type(naf))
etree.dump(naf, pretty_print=True)
```
Please note that the structure is as follows:
* the **`NAF`** element is the parent of the elements **`text`**, **`terms`**, and **`entities`**
* the **`wf`** elements are children of the **`text`** element, which provides us with information about the position of words in the text, e.g. that *tom* is the first word in the text (**`id="w1"`**) and in the first sentence (**`sent="1"`**)
* the **`term`** elements are children of the **`terms`** element, which provide us with information about lemmatization and part of speech
* the **`entity`** element is a child of the **`entities`** element. We learn from the **`entity`** element that the terms **`t1`** and **`t2`** (i.e. *Tom Cruise*) form an entity of type **`PERSON`**.
One way of accessing the first **`target`** element is by going one level at a time:
```
entities_el = naf.find('entities')
entity_el = entities_el.find('entity')
references_el = entity_el.find('references')
span_el = references_el.find('span')
target_el = span_el.find('target')
etree.dump(target_el, pretty_print=True)
```
Is there a better way? The answer is yes! Using a path expression, we can find our `target` element in a single call:
```
target_el = naf.find('entities/entity/references/span/target')
etree.dump(target_el, pretty_print=True)
```
You can also use **`findall()`** to find *all* `target` elements:
(Note that **`findall()`** returns a list of XML elements. We can use a loop to iterate over them and print them individually.)
```
for target_el in naf.findall('entities/entity/references/span/target'):
etree.dump(target_el, pretty_print=True)
```
## 5. Extracting information from XML
Often, we want to extract information from an XML file so that we can analyze and possibly manipulate it in Python. It can be very useful to use Python containers for this. In the following example, we want to collect all tokens (i.e. words as they appear in the text) together with their part of speech.
To do this, we have to extract information from the token layer and combine it with information from the term layer.
```
path_to_tokens = 'text/wf'
path_to_terms = 'terms/term'
# We define dictionaries to map identifiers to tokens ('word forms') and term tags including pos information.
# We can use the identifiers to map the tokens to the pos tags in the next step
tokens = naf.findall(path_to_tokens)
terms = naf.findall(path_to_terms)
id_token_dict = dict()
id_pos_dict = dict()
# map ids to tokens
for token in tokens:
id_token_dict[token.get('id')] = token.text
#map ids to terms
for term in terms:
id_pos_dict[term.get('id')] = term.get('pos')
#use ids to map tokens to pos tags
for token_id, token in id_token_dict.items():
# term identifiers start with a t, token identifiers with a w. The numbers correspond.
term_id = token_id.replace('w', 't')
pos = id_pos_dict[term_id]
print(token, pos)
```
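Since the token and term identifiers run in parallel (`w1` ↔ `t1`, `w2` ↔ `t2`, …), a more compact variant is to zip the two layers directly. This is a sketch, not part of the assignment: it assumes tokens and terms occur in the same document order, and it is shown here with the standard library's `xml.etree.ElementTree`, whose `findall`/`get`/`text` API matches lxml's for this purpose:

```python
import xml.etree.ElementTree as ET

naf_string = """
<NAF>
  <text>
    <wf id="w1">tom</wf>
    <wf id="w2">cruise</wf>
  </text>
  <terms>
    <term id="t1" pos="N"/>
    <term id="t2" pos="N"/>
  </terms>
</NAF>
"""
naf = ET.fromstring(naf_string)

# zip relies on tokens and terms being aligned in document order
for token, term in zip(naf.findall('text/wf'), naf.findall('terms/term')):
    print(token.text, term.get('pos'))
```

This avoids the two intermediate dictionaries, at the cost of relying on the alignment assumption; the dictionary-based version above is more robust when identifiers do not line up.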
## 6. EXTRA: Creating your own XML
Please note that this section is optional, meaning that you don't need to understand this section in order to complete the assignment.
There are three main steps:
* **Step a:** Create an XML object with a root element
* **Step b:** Creating child elements and adding them
* **Step c:** Writing to a file
### Step a: Create an XML object with a root element
You create a new XML object by:
* creating the **`root`** element -> using **`etree.Element`**
* creating the main XML object -> using **`etree.ElementTree`**
You do not have to fully understand how this works. Please make sure you can reuse this code snippet when you create your own XML.
```
our_root = etree.Element('Course')
our_tree = etree.ElementTree(our_root)
```
We can inspect what we have created by using `etree.dump()`. As you can see, we currently only have the root node `Course` in our document.
```
etree.dump(our_root, pretty_print=True)
```
As you see, we created an XML object, containing only the root element **Course**.
### Step b: Creating child elements and adding them
There are two ways to add child elements to the root element. The first is to create an element using the **`etree.Element()`** method and using `append()` to add it to the root:
```
# Define tag, attributes and text of the new element
tag = 'person' # what the start and end tag will be
attributes = {'role': 'student'} # dictionary of attributes, can be more than one
name_student = 'Lee' # the text of the elements
# Create new Element
new_person_element = etree.Element(tag, attrib=attributes)
new_person_element.text = name_student
# Add to root
our_root.append(new_person_element)
# Inspect the current XML
etree.dump(our_root, pretty_print=True)
```
However, this is so common that there is a shorter and much more efficient way to do this: by using **`etree.SubElement()`**. It accepts the same arguments as the `etree.Element()` method, but additionally requires the parent as first argument:
```
# Define tag, attributes and text of the new element
tag = 'person'
attributes = {'role': 'student'}
name_student = 'Pitt'
# Add to root
another_person_element = etree.SubElement(our_root, tag, attrib=attributes) # parent is our_root
another_person_element.text = name_student
# Inspect the current XML
etree.dump(our_root, pretty_print=True)
```
As we have seen before, XML can have multiple nested layers. Creating these works the same way as adding child elements to the root, but now we specify one of the other elements as the parent (in this case, `new_person_element`).
```
# Define tag, attributes and text of the new element
tag = 'pet'
attributes = {'role': 'joy'}
name_pet = 'Romeo'
# Add to new_person_element
new_pet_element = etree.SubElement(new_person_element, tag, attrib=attributes) # parent is new_person_element
new_pet_element.text = name_pet
# Inspect the current XML
etree.dump(our_root, pretty_print=True)
```
### Step c: Writing to a file
This is how we can write our self-made XML to a file. Please inspect `../Data/xml_data/selfmade.xml` using a text editor to check if it worked.
```
with open('../Data/xml_data/selfmade.xml', 'wb') as outfile:
our_tree.write(outfile,
pretty_print=True,
xml_declaration=True,
encoding='utf-8')
```
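Besides inspecting the file in a text editor, you can verify the round trip programmatically. The sketch below builds a tiny tree, writes it to a temporary file (instead of the course data path), and parses it back; it uses the standard library's `xml.etree.ElementTree` so it runs anywhere, but the same `write()` arguments work with lxml:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# build a minimal tree like the one above
root = ET.Element('Course')
person = ET.SubElement(root, 'person', attrib={'role': 'student'})
person.text = 'Lee'
tree = ET.ElementTree(root)

# write it out, then parse it back to confirm the round trip worked
path = os.path.join(tempfile.mkdtemp(), 'selfmade.xml')
tree.write(path, xml_declaration=True, encoding='utf-8')

reread = ET.parse(path).getroot()
print(reread.tag, reread[0].get('role'), reread[0].text)
```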
## Exercises
### Exercise 1:
Have another look at the XML below. Then print the following information:
* the names of all students
* the names of all instructors whose name starts with 'Van'
* all names containing a space
* the role of 'Rubber duck'
```
xml_string = """
<Course>
<person role="coordinator">Van der Vliet</person>
<person role="instructor">Van Miltenburg</person>
<person role="instructor">Van Son</person>
<person role="instructor">Marten Postma</person>
<person role="student">Baloche</person>
<person role="student">De Boer</person>
<animal role="student">Rubber duck</animal>
<person role="student">Van Doorn</person>
<person role="student">De Jager</person>
<person role="student">King</person>
<person role="student">Kingham</person>
<person role="student">Mózes</person>
<person role="student">Rübsaam</person>
<person role="student">Torsi</person>
<person role="student">Witteman</person>
<person role="student">Wouterse</person>
<person/>
</Course>
"""
tree = etree.fromstring(xml_string)
print(type(tree))
```
### Exercise 2:
In the folder `../Data/xml_data` there is an XML file called `framenet.xml`, which is a simplified version of the data provided by the [FrameNet project](https://framenet.icsi.berkeley.edu/fndrupal/).
FrameNet is a lexical database describing **semantic frames**, which are representations of events or situations and the participants in it. For example, cooking typically involves a person doing the cooking (`Cook`), the food that is to be cooked (`Food`), something to hold the food while cooking (`Container`) and a source of heat (`Heating_instrument`). In FrameNet, this is represented as a frame called `Apply_heat`. The `Cook`, `Food`, `Heating_instrument` and `Container` are called **frame elements (FEs)**. Words that evoke this frame, such as *fry*, *bake*, *boil*, and *broil*, are called **lexical units (LUs)** of the `Apply_heat` frame. FrameNet also contains relations between frames. For example, `Apply_heat` has relations with the `Absorb_heat`, `Cooking_creation` and `Intentionally_affect` frames. In FrameNet, frame descriptions are stored in XML format.
`framenet.xml` contains the information about the frame `Waking_up`. Parse the XML file and print the following:
* the name of the frame
* the names of all lexical units
* the definitions of all lexical units
* the related frames with their type of relation to `Waking_up` (e.g. `Event` with the `Inherits from` relation)
```
```
### Exercise 3:
An exercise where we collect information from multiple files. Not created yet.
```
```
# Multiscale Object Detection
In `Chapter Anchor Boxes`, we generated multiple anchor boxes centered on each pixel of the input image. These anchor boxes are used to sample different regions of the input image. However, if anchor boxes are generated centered on each pixel of the image, soon there will be too many anchor boxes for us to compute. For example, we assume that the input image has a height and a width of 561 and 728 pixels respectively. If five different shapes of anchor boxes are generated centered on each pixel, over two million anchor boxes ($561 \times 728 \times 5$) need to be predicted and labeled on the image.
It is not difficult to reduce the number of anchor boxes. An easy way is to apply uniform sampling on a small portion of pixels from the input image and generate anchor boxes centered on the sampled pixels. In addition, we can generate anchor boxes of varied numbers and sizes on multiple scales. Notice that smaller objects are more likely to be positioned on the image than larger ones. Here, we will use a simple example: Objects with shapes of $1 \times 1$, $1 \times 2$, and $2 \times 2$ may have 4, 2, and 1 possible position(s) on an image with the shape $2 \times 2$. Therefore, when using smaller anchor boxes to detect smaller objects, we can sample more regions; when using larger anchor boxes to detect larger objects, we can sample fewer regions.
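The counts above are easy to check with plain arithmetic. The only assumption is that a $k_h \times k_w$ object can be placed at $(n - k_h + 1)(n - k_w + 1)$ integer positions on an $n \times n$ image:

```python
# anchors if every one of the 561x728 pixels gets 5 boxes
h, w, shapes = 561, 728, 5
print(h * w * shapes)  # 2042040, i.e. over two million

# possible positions of a k_h x k_w object on an n x n image
n = 2
for k_h, k_w in [(1, 1), (1, 2), (2, 2)]:
    print((n - k_h + 1) * (n - k_w + 1))  # 4, 2, 1
```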
To demonstrate how to generate anchor boxes on multiple scales, let us read an image first. It has a height of 561 and a width of 728 pixels.
```
%matplotlib inline
import sys
sys.path.insert(0, '..')
import d2l
import torch
img = d2l.plt.imread('../img/catdog.jpg')
h, w = img.shape[0:2]
h, w
```
In `Chapter Convolutional Layers`, the 2D array output of the convolutional neural network (CNN) is called a feature map. We can determine the midpoints of anchor boxes uniformly sampled on any image by defining the shape of the feature map.
The function `display_anchors` is defined below. We are going to generate anchor boxes `anchors` centered on each unit (pixel) on the feature map `fmap`. Since the coordinates of axes $x$ and $y$ in anchor boxes `anchors` have been divided by the width and height of the feature map `fmap`, values between 0 and 1 can be used to represent relative positions of anchor boxes in the feature map. Since the midpoints of anchor boxes `anchors` overlap with all the units on feature map `fmap`, the relative spatial positions of the midpoints of the `anchors` on any image must have a uniform distribution. Specifically, when the width and height of the feature map are set to `fmap_w` and `fmap_h` respectively, the function will conduct uniform sampling for `fmap_h` rows and `fmap_w` columns of pixels and use them as midpoints to generate anchor boxes with size `s` (we assume that the length of list `s` is 1) and different aspect ratios (`ratios`).
```
def display_anchors(fmap_w, fmap_h, s):
d2l.set_figsize((3.5, 2.5))
# The values from the first two dimensions will not affect the output
fmap = torch.zeros((1, 10, fmap_w, fmap_h))
anchors = d2l.MultiBoxPrior((h,w), sizes=s, aspect_ratios = (2, 0.5))
anchors = anchors.reshape((h, w, 3, 4))
centers = d2l.get_centers(h, w, fmap_h, fmap_w)
anch = torch.cat([anchors[centers[i, 1], centers[i, 0], :, :] for i in range(len(centers))])
bbox_scale = torch.tensor((w, h, w, h)).float()
d2l.show_bboxes(d2l.plt.imshow(img).axes, d2l.center_2_hw(anch) * bbox_scale)
```
We will first focus on the detection of small objects. In order to make it easier to distinguish upon display, the anchor boxes with different midpoints here do not overlap. We assume that the size of the anchor boxes is 0.15 and the height and width of the feature map are 4. We can see that the midpoints of anchor boxes from the 4 rows and 4 columns on the image are uniformly distributed.
```
display_anchors(fmap_w=4, fmap_h=4, s=[0.15])
```
We are going to reduce the height and width of the feature map by half and use a larger anchor box to detect larger objects. When the size is set to 0.4, overlaps will occur between regions of some anchor boxes.
```
display_anchors(fmap_w=2, fmap_h=2, s=[0.4])
```
Finally, we are going to reduce the height and width of the feature map by half and increase the anchor box size to 0.8. Now the midpoint of the anchor box is the center of the image.
```
display_anchors(fmap_w=1, fmap_h=1, s=[0.8])
```
Since we have generated anchor boxes of different sizes on multiple scales, we will use them to detect objects of various sizes at different scales. Now we are going to introduce a method based on convolutional neural networks (CNNs).
At a certain scale, suppose we generate $h \times w$ sets of anchor boxes with different midpoints based on $c_i$ feature maps with the shape $h \times w$ and the number of anchor boxes in each set is $a$. For example, for the first scale of the experiment, we generate 16 sets of anchor boxes with different midpoints based on 10 (number of channels) feature maps with a shape of $4 \times 4$, and each set contains 3 anchor boxes.
Next, each anchor box is labeled with a category and offset based on the classification and position of the ground-truth bounding box. At the current scale, the object detection model needs to predict the category and offset of $h \times w$ sets of anchor boxes with different midpoints based on the input image.
We assume that the $c_i$ feature maps are the intermediate output of the CNN
based on the input image. Since each feature map has $h \times w$ different
spatial positions, the same position will have $c_i$ units. According to the
definition of receptive field in the
`Chapter Convolutional Neural Network`, the $c_i$ units of the feature map at the same spatial position have
the same receptive field on the input image. Thus, they represent the
information of the input image in this same receptive field. Therefore, we can
transform the $c_i$ units of the feature map at the same spatial position into
the categories and offsets of the $a$ anchor boxes generated using that position
as a midpoint. It is not hard to see that, in essence, we use the information
of the input image in a certain receptive field to predict the category and
offset of the anchor boxes close to the field on the input image.
When the feature maps of different layers have receptive fields of different sizes on the input image, they are used to detect objects of different sizes. For example, we can design a network to have a wider receptive field for each unit in the feature map that is closer to the output layer, to detect objects with larger sizes in the input image.
We will implement a multiscale object detection model in the following section.
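As a preview of how such a predictor can be built, one common design (a sketch under our own assumptions, not necessarily the exact implementation of the following section) is a $3 \times 3$ convolution that keeps the $h \times w$ spatial layout and emits, at every position, one channel per (anchor, class-or-background) pair:

```python
import torch
from torch import nn

def cls_predictor(num_inputs, num_anchors, num_classes):
    # one output channel per (anchor, class-or-background) pair at each position;
    # padding=1 with a 3x3 kernel preserves the h x w feature-map shape
    return nn.Conv2d(num_inputs, num_anchors * (num_classes + 1),
                     kernel_size=3, padding=1)

fmap = torch.zeros((1, 10, 4, 4))     # c_i = 10 channels, 4x4 feature map
pred = cls_predictor(10, 3, 1)(fmap)  # a = 3 anchors, 1 foreground class
print(pred.shape)                     # torch.Size([1, 6, 4, 4])
```

Reading the output at spatial position $(i, j)$ then gives the class scores of the $a$ anchors whose midpoint is that position.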
## Summary
* We can generate anchor boxes with different numbers and sizes on multiple scales to detect objects of different sizes on multiple scales.
* The shape of the feature map can be used to determine the midpoint of the anchor boxes that uniformly sample any image.
* We use the information of the input image in a certain receptive field to predict the category and offset of the anchor boxes close to that field on the image.
## Exercises
* Given an input image, assume $1 \times c_i \times h \times w$ to be the shape of the feature map while $c_i, h, w$ are the number, height, and width of the feature map. What methods can you think of to convert this variable into the anchor box's category and offset? What is the shape of the output?
<h1> Exploring tf.transform </h1>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Only specific combinations of TensorFlow and Apache Beam are supported by tf.transform, so make sure to use a compatible combination:
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.9.0 or higher
```
%%bash
pip install apache-beam[gcp]==2.9.0 tensorflow_transform==0.8.0
```
<b>Restart the kernel</b> after you do a pip install (click on the <b>Reset</b> button in Datalab)
```
%%bash
pip freeze | grep -e 'flow\|beam'
import tensorflow as tf
import tensorflow_transform as tft
import shutil
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
## Input source: BigQuery
Get data from BigQuery but defer filtering etc. to Beam.
Note that the dayofweek column is now strings.
```
from google.cloud import bigquery
def create_query(phase, EVERY_N):
"""
phase: 1=train 2=valid
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),4) < 2".format(base_query)
else:
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),4) = {1}".format(base_query, phase)
else:
query = "{0} AND MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))),{1}) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df_valid = bigquery.Client().query(query).to_dataframe()
display(df_valid.head())
df_valid.describe()
```
## Create ML dataset using tf.transform and Dataflow
Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.
```
%%writefile requirements.txt
tensorflow-transform==0.8.0
```
TODO: test that `transform_data` is of type PCollection, and whether the `_ =` assignment is necessary.
```
import datetime
import tensorflow as tf
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def is_valid(inputs):
try:
pickup_longitude = inputs['pickuplon']
dropoff_longitude = inputs['dropofflon']
pickup_latitude = inputs['pickuplat']
dropoff_latitude = inputs['dropofflat']
hourofday = inputs['hourofday']
dayofweek = inputs['dayofweek']
passenger_count = inputs['passengers']
fare_amount = inputs['fare_amount']
return (fare_amount >= 2.5 and pickup_longitude > -78 and pickup_longitude < -70 \
and dropoff_longitude > -78 and dropoff_longitude < -70 and pickup_latitude > 37 \
and pickup_latitude < 45 and dropoff_latitude > 37 and dropoff_latitude < 45 \
and passenger_count > 0)
except:
return False
def preprocess_tft(inputs):
import datetime
print(inputs)
result = {}
result['fare_amount'] = tf.identity(inputs['fare_amount'])
result['dayofweek'] = tft.string_to_int(inputs['dayofweek']) # builds a vocabulary
result['hourofday'] = tf.identity(inputs['hourofday']) # pass through
result['pickuplon'] = (tft.scale_to_0_1(inputs['pickuplon'])) # scaling numeric values
result['pickuplat'] = (tft.scale_to_0_1(inputs['pickuplat']))
result['dropofflon'] = (tft.scale_to_0_1(inputs['dropofflon']))
result['dropofflat'] = (tft.scale_to_0_1(inputs['dropofflat']))
result['passengers'] = tf.cast(inputs['passengers'], tf.float32) # a cast
result['key'] = tf.as_string(tf.ones_like(inputs['passengers'])) # arbitrary TF func
# engineered features
latdiff = inputs['pickuplat'] - inputs['dropofflat']
londiff = inputs['pickuplon'] - inputs['dropofflon']
result['latdiff'] = tft.scale_to_0_1(latdiff)
result['londiff'] = tft.scale_to_0_1(londiff)
dist = tf.sqrt(latdiff * latdiff + londiff * londiff)
result['euclidean'] = tft.scale_to_0_1(dist)
return result
def preprocess(in_test_mode):
import os
import os.path
import tempfile
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam import tft_beam_io
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-taxi-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EVERY_N = 100000
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
EVERY_N = 10000
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 6,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up raw data metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'dayofweek,key'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'fare_amount,pickuplon,pickuplat,dropofflon,dropofflat'.split(',')
})
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.int64, [], dataset_schema.FixedColumnRepresentation())
for colname in 'hourofday,passengers'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# save the raw data metadata
raw_data_metadata | 'WriteInputMetadata' >> tft_beam_io.WriteMetadata(
os.path.join(OUTPUT_DIR, 'metadata/rawdata_metadata'),
pipeline=p)
# read training data from bigquery and filter rows
raw_data = (p
| 'train_read' >> beam.io.Read(beam.io.BigQuerySource(query=create_query(1, EVERY_N), use_standard_sql=True))
| 'train_filter' >> beam.Filter(is_valid))
raw_dataset = (raw_data, raw_data_metadata)
# analyze and transform training data
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
# save transformed training data to disk in efficient tfrecord format
transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# read eval data from bigquery and filter rows
raw_test_data = (p
| 'eval_read' >> beam.io.Read(beam.io.BigQuerySource(query=create_query(2, EVERY_N), use_standard_sql=True))
| 'eval_filter' >> beam.Filter(is_valid))
raw_test_dataset = (raw_test_data, raw_data_metadata)
# transform eval data
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
# save transformed training data to disk in efficient tfrecord format
transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
file_name_suffix='.gz',
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# save transformation function to disk for use at serving time
transform_fn | 'WriteTransformFn' >> transform_fn_io.WriteTransformFn(
os.path.join(OUTPUT_DIR, 'metadata'))
preprocess(in_test_mode=False) # change to True to run locally
%%bash
# ls preproc_tft
gsutil ls gs://${BUCKET}/taxifare/preproc_tft/
```
<h2> Train off preprocessed data </h2>
```
%%bash
rm -rf taxifare_tft.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:$PWD/taxifare_tft
python -m trainer.task \
--train_data_paths="gs://${BUCKET}/taxifare/preproc_tft/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/preproc_tft/eval*" \
--output_dir=./taxi_trained \
--train_steps=10 --job-dir=/tmp \
--num_workers=6 \
--metadata_path=gs://${BUCKET}/taxifare/preproc_tft/metadata
!ls $PWD/taxi_trained/export/exporter
%%writefile /tmp/test.json
{"dayofweek":"Thu","hourofday":17,"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2}
%%bash
model_dir=$(ls $PWD/taxi_trained/export/exporter/)
gcloud ai-platform local predict \
--model-dir=./taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
```
Copyright 2016-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Q-Learning
```
import random
from random import randint
import pandas as pd
import numpy as np
# display the current position on the board
def printTable(p):
table = pd.DataFrame(np.zeros((4,4),dtype=int),columns=None)
table.iloc[3,3]='X'
table.iloc[3,1]='T'
table.iloc[0,3]='T'
T = pd.DataFrame({'linhas':[0,1,2,3,0,1,2,3,0,1,2,3,0,1,2,3],'colunas':[0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3]})
table = table.replace(0,'_')
table.iloc[T['linhas'][p],T['colunas'][p]] = 'o'
print(table.to_string(index=False,header=False))
print('')
printTable(np.random.randint(0,16))
```
```
R = np.array([[-1, 0, -1, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[ 0, -1, 0, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, 0, -1, 0, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1],
[-1, -1, 0, -1, -1, -1, -1, 0, -1, -1, -1, -1, -1, -1, -1, -1],
[ 0, -1, -1, -1, -1, 0, -1, -1, 0, -1, -1, -1, -1, -1, -1, -1],
[-1, 0, -1, -1, 0, -1, 0, -1, -1, 0, -1, -1, -1, -1, -1, -1],
[-1, -1, 0, -1, -1, 0, -1, 0, -1, -1, 0, -1, -1, -1, -1, -1],
[-1, -1, -1, 0, -1, -1, 0, -1, -1, -1, -1, 0, -1, -1, -1, -1],
[-1, -1, -1, -1, 0, -1, -1, -1, -1, 0, -1, -1, 0, -1, -1, -1],
[-1, -1, -1, -1, -1, 0, -1, -1, 0, -1, 0, -1, -1, 0, -1, -1],
[-1, -1, -1, -1, -1, -1, 0, -1, -1, 0, -1, 0, -1, -1, 0, -1],
[-1, -1, -1, -1, -1, -1, -1, 0, -1, -1, 0, -1, -1, -1, -1, 20],
[-1, -1, -1, -1, -1, -1, -1, -1, 0, -1, -1, -1, -1, 0, -1, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, 0, -1, -1, 0, -1, 0, -1],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, -1, -1, 0, -1, 20],
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 50]])
Rtable = pd.DataFrame(data=R)
Rtable
# discount factor
gamma = 1.
# learning rate
alpha = 0.5
# reset the Q matrix to zeros
Q = np.zeros((16,16))
# learning
def agentLearn(episodes):
for i in range(episodes):
s = randint(0,15)
path = [s]
while True:
a = randint(0,15)
if R[s,a]>=0:
if a==7 or a==12:
print("It's a trap!")
path.append(a)
printTable(a)
break
else:
Q[s,a] = (1.-alpha)*Q[s,a] + alpha*(R[s,a] + gamma*max(Q[a,]))
s = a
path.append(s)
printTable(s)
if s==15:
print("EXIT!")
printTable(s)
break
print('Path:',path)
agentLearn(1)
# check how the "memory" (Q matrix) is doing
Qtable = pd.DataFrame(data=Q,dtype=int)
Qtable
# learning
def agentLearnFast(episodes):
for i in range(episodes):
s = randint(0,15)
while True:
a = randint(0,15)
if R[s,a]>=0:
if a==7 or a==12:
break
else:
Q[s,a] = (1.-alpha)*Q[s,a] + alpha*(R[s,a] + gamma*max(Q[a,]))
s = a
if s==15:
break
return Q
# reset the Q matrix to zeros
Q = np.zeros((16,16))
# learning
Q = agentLearnFast(100)
# check how the "memory" (Q matrix) is doing
Qtable = pd.DataFrame(data=Q,dtype=int)
Qtable
# exiting the maze
def agentExit(Q):
n = 0
nmax = 30
s = randint(0,15)
while True:
ns = random.choice(list(filter(lambda kv: kv[1] == max(Q[s,]), enumerate(Q[s,]))))[0]
if R[s,ns]>=0:
s = ns
n +=1
print(s)
if s == 7 or s == 12:
print("It's a trap!")
break
elif s == 15:
print("EXIT!")
break
if n == nmax:
print("LOST!")
break
print('Total steps:',n)
agentExit(Q)
```
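The update used inside `agentLearn` is the standard tabular Q-learning rule, Q(s, a) ← (1 − α) Q(s, a) + α (r + γ max Q(s′, ·)). As a minimal sanity check, the sketch below applies one update to a hypothetical two-state problem, where action 1 from state 0 yields reward 10 and lands in a terminal state:

```python
import numpy as np

alpha, gamma = 0.5, 1.0
Q = np.zeros((2, 2))

# one transition: state 0, action 1, reward 10, next state 1 (terminal, Q all zero)
r, s, a = 10, 0, 1
Q[s, a] = (1. - alpha) * Q[s, a] + alpha * (r + gamma * Q[a].max())
print(Q[0, 1])  # 5.0, i.e. half of the reward after a single update
```

Repeating the update moves Q[0, 1] toward 10, which is why `agentLearnFast(100)` converges to a usable table.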
# Sample usage of Boyle module
## Preparing environment
```
# Create new environment
Boyle.mk("matrex_samples")
# Activate new environment and load modules available in that environment
Boyle.activate("matrex_samples")
# Install new dependency
Boyle.install({:matrex, "~> 0.6"})
```
# Matrex usage
* Library: https://github.com/versilov/matrex
* Original notebook: https://github.com/versilov/matrex/blob/master/Matrex.ipynb
## Access behaviour
```
m = Matrex.magic(3)
m[2][3]
m[1..2]
m[:rows]
m[:size]
m[:max]
m[2][:max]
m[:argmax]
m[2][:argmax]
```
## Math operators overloading and logistic regression implementation
```
import Matrex
defmodule LinearRegression do
def lr_cost_fun(%Matrex{} = theta, {%Matrex{} = x, %Matrex{} = y, lambda} = _params, iteration \\ 0)
when is_number(lambda) do
m = y[:rows]
h = Matrex.dot_and_apply(x, theta, :sigmoid)
l = Matrex.ones(theta[:rows], theta[:cols]) |> Matrex.set(1, 1, 0)
regularization =
Matrex.dot_tn(l, Matrex.square(theta))
|> Matrex.scalar()
|> Kernel.*(lambda / (2 * m))
# Compute the cost and add regularization parameter
j =
y
|> Matrex.dot_tn(Matrex.apply(h, :log), -1)
|> Matrex.subtract(
Matrex.dot_tn(
Matrex.subtract(1, y),
Matrex.apply(Matrex.subtract(1, h), :log)
)
)
|> Matrex.scalar()
|> (fn
:nan -> :nan
x -> x / m + regularization
end).()
# Compute gradient
grad =
x
|> Matrex.dot_tn(Matrex.subtract(h, y))
|> Matrex.add(Matrex.multiply(theta, l), 1.0, lambda)
|> Matrex.divide(m)
{j, grad}
end
# The same cost function, implemented with operators from the `Matrex.Operators` module.
# Works about 2 times slower than the standard implementation, but is much more readable.
# It is here to demonstrate the possibilities of the library.
def lr_cost_fun_ops(%Matrex{} = theta, {%Matrex{} = x, %Matrex{} = y, lambda} = _params)
when is_number(lambda) do
# Turn off original operators. Use this with caution!
import Kernel, except: [-: 1, +: 2, -: 2, *: 2, /: 2, <|>: 2]
import Matrex
import Matrex.Operators
# This line is needed only when used from iex, to remove ambiguity of t/1 function.
import IEx.Helpers, except: [t: 1]
m = y[:rows]
h = sigmoid(x * theta)
l = ones(size(theta)) |> set(1, 1, 0.0)
j = (-t(y) * log(h) - t(1 - y) * log(1 - h) + lambda / 2 * t(l) * pow2(theta)) / m
grad = (t(x) * (h - y) + (theta <|> l) * lambda) / m
{scalar(j), grad}
end
end
System.cmd("git", ["clone", "https://github.com/versilov/matrex", "resources/matrex"])
ls "resources/matrex/test/data"
x = Matrex.load("resources/matrex/test/data/X.mtx.gz")
x[1100..1115]
|> list_of_rows()
|> Enum.map(&(reshape(&1, 20, 20)
|> transpose()))
|> reshape(4, 4)
|> heatmap()
y = Matrex.load("resources/matrex/test/data/Y.mtx")
theta = Matrex.zeros(x[:cols], 1)
lambda = 0.01
iterations = 100
solutions =
  # Train one classifier per digit (the ten digits we wish to recognize)
  1..10
  |> Task.async_stream(
    fn digit ->
      # Prepare a labels matrix with only the current digit labeled with 1.0
      y3 = Matrex.apply(y, fn val -> if(val == digit, do: 1.0, else: 0.0) end)

      # Use the fmincg() optimizer (ported to Elixir with Matrex functions)
      # with the previously defined cost function.
      {sX, fX, _i} =
        Matrex.Algorithms.fmincg(&LinearRegression.lr_cost_fun/2, theta, {x, y3, lambda}, iterations)

      # Return the digit itself and the best found solution, a 401x1 column matrix
      {digit, List.last(fX), sX}
    end,
    max_concurrency: 4
  )
  # Merge all 10 found solution column matrices into one 10x401 solutions matrix
  |> Enum.map(fn {:ok, {_d, _l, theta}} -> Matrex.to_list(theta) end)
  |> Matrex.new()
predictions =
  x
  |> Matrex.dot_nt(solutions)
  |> Matrex.apply(:sigmoid)

accuracy =
  1..predictions[:rows]
  |> Enum.reduce(0, fn row, acc ->
    if y[row] == predictions[row][:argmax], do: acc + 1, else: acc
  end)
  |> Kernel./(predictions[:rows])
  |> Kernel.*(100)
```
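For comparison, the prediction and accuracy steps above (sigmoid of `x · solutionsᵀ`, then a row-wise argmax) can be sketched in NumPy. This is only an analogy, not part of the Matrex example: the shapes and data below are made up (5 samples, 4 features, 3 classes instead of 5000/401/10), and labels are kept 1-based to mirror Matrex's `:argmax`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up stand-ins for the example's data and trained parameters.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
solutions = rng.normal(size=(3, 4))     # one row of parameters per class
y = np.array([1, 2, 3, 1, 2])           # true labels, 1-based like Matrex

predictions = sigmoid(x @ solutions.T)          # (5, 3): one probability column per class
predicted = predictions.argmax(axis=1) + 1      # +1 because Matrex's :argmax is 1-based
accuracy = np.mean(predicted == y) * 100        # percentage of correctly labeled rows
```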
## Enumerable protocol
```
Enum.member?(m, 2.0)
Enum.count(m)
Enum.sum(m)
```
## Saving and loading matrix
```
Matrex.random(5) |> Matrex.save("rand.mtx")
Matrex.load("rand.mtx")
Matrex.magic(5) |> Matrex.divide(Matrex.eye(5)) |> Matrex.save("nan.csv")
Matrex.load("nan.csv")
```
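The same kind of save/load round trip can be mimicked with NumPy's `savetxt`/`loadtxt`; this is just an analogy to the Matrex calls above, shown with an in-memory buffer instead of a file on disk.

```python
import io
import numpy as np

m = np.arange(1.0, 10.0).reshape(3, 3)  # a small 3x3 matrix

buf = io.StringIO()                     # in-memory stand-in for a .csv file
np.savetxt(buf, m, delimiter=",")
buf.seek(0)
loaded = np.loadtxt(buf, delimiter=",")
# The round trip preserves every value of the matrix.
```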
## NaN and Infinity
```
m = Matrex.eye(3)
n = Matrex.divide(m, Matrex.zeros(3))
n[1][1]
n[1][2]
```
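The same arithmetic can be reproduced in NumPy for comparison: dividing the identity matrix elementwise by a zero matrix gives `inf` on the diagonal (1/0) and `nan` everywhere else (0/0).

```python
import numpy as np

m = np.eye(3)
with np.errstate(divide="ignore", invalid="ignore"):  # silence the 1/0 and 0/0 warnings
    n = m / np.zeros((3, 3))

# n[0, 0] is inf (1/0); n[0, 1] is nan (0/0)
```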
```
%matplotlib inline
from __future__ import division
from pdb import set_trace as BP
import os
import numpy as np
import tables
import matplotlib.pyplot as plt
```
# Select an experiment and load it
This will iterate over all experiments in the `output/` directory and select the most recent one that matches the pattern.
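The selection step can be sketched in isolation: list the directories whose name contains the pattern and take the one with the newest modification time. The directory names below are made up, and the modification times are set explicitly so the ordering is deterministic.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Three fake experiment directories with explicitly increasing mtimes.
    for mtime, name in enumerate(["learning-and-inference-a",
                                  "unrelated-run",
                                  "learning-and-inference-b"]):
        path = os.path.join(root, name)
        os.mkdir(path)
        os.utime(path, (mtime, mtime))  # (atime, mtime) in seconds since the epoch

    pattern = "learning-and-inference"
    matching = [os.path.join(root, d) for d in os.listdir(root) if pattern in d]
    newest = max(matching, key=os.path.getmtime)  # the most recently modified match
```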
```
#fname = 'output/learning-and-inference-param-bars-gsc.py.2019-03-26+15:25//result.h5'
fname = None

if fname is None:
    found = False
    count = -1
    pattern = "learning-and-inference"
    matching_experiments = ['output/' + dirname for dirname in os.listdir("output") if pattern in dirname]
    matching_experiments = sorted(matching_experiments, key=os.path.getmtime)
    print("Found %d experiments matching pattern" % len(matching_experiments))
    # Walk backwards from the newest experiment until one contains a result file.
    while not found:
        try:
            selected_experiment = matching_experiments[count]
            fname = os.path.join(selected_experiment, "result.h5")
            h5 = tables.open_file(fname)
            found = True
        except (IndexError, IOError):
            count -= 1
else:
    h5 = tables.open_file(fname)

node_names = h5.root.__members__
print('Loading %s' % fname)

N_plot = 10
test_data = h5.root.test_data[:][-1]
if test_data.shape[0] >= N_plot:
    test_data = test_data[:N_plot, :]
else:
    N_plot = test_data.shape[0]

Hprime_start, gamma_start = h5.root.Hprime_start[:][-1], h5.root.gamma_start[:][-1]
D, H = h5.root.W[:][-1].shape
D1 = int(np.sqrt(D))
clim_abs = np.max(np.abs(np.array([np.min(test_data), np.max(test_data)])))
clim = [-clim_abs, clim_abs]

# Collect per-data-point inference results, keyed by the data point index.
data_points = {}
n = 0
while True:
    if (not any(['test_n%i' % n in s for s in node_names])) or (n >= N_plot):
        break
    results = {}
    results['y'] = test_data[n, :]
    results['Hprime'] = int(h5.get_node('/test_n%i_Hprime' % n)[:][-1])
    results['gamma'] = int(h5.get_node('/test_n%i_gamma' % n)[:][-1])
    topK = int([s for s in node_names if 'test_n%i_p_top' % n in s][0].split('top')[1])
    results['topK'] = topK
    results['p_topK'] = h5.get_node('/test_n%i_p_top%i' % (n, topK))[:][-1]
    results['comps'] = {}
    for k in range(topK):
        results['comps']['top%i' % k] = h5.get_node('/test_n%i_comps_top%i' % (n, k))[:][-1]
    results['comps']['gt'] = h5.get_node('/test_n%i_comps_gt' % n)[:][-1]
    data_points['%i' % n] = results
    n += 1
print('Loaded results for %i test data points' % n)
h5.close()
```
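One detail worth noting in the loading loop: `topK` is recovered purely from the HDF5 node names, by finding the node that follows the `test_n<i>_p_top<K>` pattern and parsing `K` off its name. That string handling can be exercised on its own; the node names below are made up to match the pattern.

```python
# Fake node names following the 'test_n<i>_p_top<K>' convention used in the loop.
node_names = ["test_n0_Hprime", "test_n0_gamma", "test_n0_p_top5", "test_n0_comps_top0"]

n = 0
# Pick the node carrying the top-K posterior values and parse K off its name.
topK = int([s for s in node_names if "test_n%i_p_top" % n in s][0].split("top")[1])
# topK == 5
```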
# Draw visualization
This will visualize the output of `bars-learning-and-inference.py`.
```
print("Starting inference with H'/γ = %i/%i\n" % (Hprime_start, gamma_start))

figs = {}
figsize = [12., 12.]
fontsize_large = 20
fontsize_small = 10
fparams = {'clim': clim, 'interpolation': 'nearest', 'cmap': 'jet'}
tparams = {'fontsize': fontsize_large, 'horizontalalignment': 'center'}

for key in data_points:
    figs[key] = {}
    n = int(key)
    topK = int(data_points[key]['topK'])
    plt.figure(figsize=figsize)

    # Top row: the data point y itself, with its label on the left.
    figs[key]['ax_y'] = plt.subplot2grid((topK+2, H+2), (0, 2), colspan=H)
    figs[key]['ax_y'].imshow(data_points[key]['y'].reshape(D1, D1), **fparams)
    plt.axis('off')
    figs[key]['axyl'] = plt.subplot2grid((topK+2, H+2), (0, 0))
    figs[key]['axyl'].text(0., .45, r'$\vec{y}^{(%i)}$' % (n+1), **tparams)
    tparams['fontsize'] = fontsize_small
    figs[key]['axyl'].text(0., .05, r"$H'/\gamma=%i/%i$" % (data_points[key]['Hprime'],
                                                            data_points[key]['gamma']),
                           **tparams)
    tparams['fontsize'] = fontsize_large
    plt.axis('off')

    for k in range(topK):
        comps = data_points[key]['comps']['top%i' % k]
        comps_gt = data_points[key]['comps']['gt']
        # A scalar 0 stored in place of a component array means "no components".
        if type(comps) == np.int64:
            if comps == 0:
                n_comps = 0
        else:
            n_comps = comps.shape[1]
        if type(comps_gt) == np.int64:
            if comps_gt == 0:
                n_comps_gt = 0
        else:
            n_comps_gt = comps_gt.shape[1]

        # Left column: row labels for the ground truth and each candidate state.
        figs[key]['ax_Wbargtl'] = plt.subplot2grid((topK+2, H+2), (1, 0))
        if 'gsc' in fname:
            tmp = r'$\bar{W}(\vec{s}_{gt},\vec{z}_{gt})$'
        else:
            tmp = r'$\bar{W}(\vec{s}_{gt})$'
        figs[key]['ax_Wbargtl'].text(0, .45, tmp, **tparams)
        plt.axis('off')
        figs[key]['ax%il' % k] = plt.subplot2grid((topK+2, H+2), (k+2, 0))
        figs[key]['ax%il' % k].text(0., .55, r'$\bar{W}(\vec{s}_%i)$' % (k+1), **tparams)
        tparams['fontsize'] = fontsize_small
        figs[key]['ax%il' % k].text(0., .25, r'$p(\vec{s}_%i\vert\vec{y}^{(%i)},'
                                    r'\Theta)=%.2e$' % (k+1, n+1, data_points[key]['p_topK'][k]),
                                    **tparams)
        tparams['fontsize'] = fontsize_large
        plt.axis('off')

        # Image grid: ground-truth components (first row) and the components
        # of each of the top-K inferred states.
        for h in range(H):
            if k == 0 and h < n_comps_gt:
                figs[key]['ax_Wbargt'] = plt.subplot2grid((topK+2, H+1), (k+1, h+2))
                figs[key]['ax_Wbargt'].imshow(comps_gt[:, h].reshape(D1, D1), **fparams)
                plt.axis('off')
            if h < n_comps:
                figs[key]['ax%i%i' % (k, h)] = plt.subplot2grid((topK+2, H+1), (k+2, h+2))
                figs[key]['ax%i%i' % (k, h)].imshow(comps[:, h].reshape(D1, D1), **fparams)
                plt.axis('off')

plt.show()
```