# Improve accuracy of PDF batch processing with Amazon Textract and Amazon A2I
In this chapter and its accompanying notebook, you will learn through an example how to use Amazon Textract in asynchronous mode to extract content from multiple PDF files in batch, send specific content from these PDF documents to an Amazon A2I human review loop where reviewers can verify and modify the values, and then send the results to an Amazon DynamoDB table for downstream processing.
**Important Note:** This is an accompanying notebook for Chapter 16 - Improve accuracy of PDF batch processing with Amazon Textract and Amazon A2I from the Natural Language Processing with AWS AI Services book. Please make sure to read the instructions provided in the book prior to attempting this notebook.
### Step 0 - Create a private human review workforce
This step requires you to use the AWS Console. However, we highly recommend that you follow it, especially the creation of your own task using the custom template we will use for this notebook. We will create a private workteam and add only one user (you) to it.
To create a private team:
1. Go to AWS Console > Amazon SageMaker > Labeling workforces
1. Click "Private" and then "Create private team".
1. Enter the desired name for your private workteam.
1. Enter your own email address in the "Email addresses" section.
1. Enter the name of your organization and a contact email to administer the private workteam.
1. Click "Create Private Team".
1. The AWS Console should now return to AWS Console > Amazon SageMaker > Labeling workforces. Your newly created team should be visible under "Private teams". Next to it you will see an ARN which is a long string that looks like arn:aws:sagemaker:region-name-123456:workteam/private-crowd/team-name. Please copy this ARN to paste in the cell below.
1. You should get an email from no-reply@verificationemail.com that contains your workforce username and password.
1. In AWS Console > Amazon SageMaker > Labeling workforces, click on the URL in Labeling portal sign-in URL. Use the email/password combination from Step 8 to log in (you will be asked to create a new, non-default password).
1. This is your private worker's interface. When we create a verification task in Verify your task using a private team below, your task should appear in this window. You can invite your colleagues to participate in the labeling job by clicking the "Invite new workers" button.
Please refer to the [Amazon SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html) if you need more details.
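If you'd rather fetch and sanity-check the workteam ARN programmatically than copy it from the console, a sketch along these lines can help. The `is_workteam_arn` helper is our own addition (not part of the book), and the commented `list_workteams` call assumes your boto3 credentials and region are already configured:

```python
import re

def is_workteam_arn(arn):
    """Loose sanity check for a private workteam ARN (our own helper)."""
    return re.match(
        r"^arn:aws:sagemaker:[a-z0-9-]+:\d{12}:workteam/private-crowd/.+$",
        arn) is not None

# To look the ARN up via the API instead of the console, you could run:
# import boto3
# for wt in boto3.client('sagemaker').list_workteams()['Workteams']:
#     print(wt['WorkteamName'], wt['WorkteamArn'])

print(is_workteam_arn(
    "arn:aws:sagemaker:us-east-1:123456789012:workteam/private-crowd/team-name"))  # True
```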
### Step 1 - Import libraries and initialize variables
```
# Step 1 - Cell 1
import urllib
import boto3
import os
import json
import time
import uuid
import sagemaker
import pandas as pd
from sagemaker import get_execution_role
from sagemaker.s3 import S3Uploader, S3Downloader
textract = boto3.client('textract')
s3 = boto3.resource('s3')
bucket = "<S3-bucket-name>"
prefix = 'chapter16/input'
# Enter the Workteam ARN you created from point 7 in Step 0 above
WORKTEAM_ARN= '<your-private-workteam-arn>'
# Step 1 - Cell 2
# Upload the SEC registration documents
s3_client = boto3.client('s3')
for secfile in os.listdir():
    if secfile.endswith('pdf'):
        response = s3_client.upload_file(secfile, bucket, prefix+'/'+secfile)
        print("Uploaded {} to S3 bucket {} in folder {}".format(secfile, bucket, prefix))
```
### Step 2 - Start Amazon Textract Text Detection Job
```
# Step 2 - Cell 1
input_bucket = s3.Bucket(bucket)
jobids = {}
# Step 2 - Cell 2
for doc in input_bucket.objects.all():
    if doc.key.startswith(prefix) and doc.key.endswith('pdf'):
        tres = textract.start_document_text_detection(
            DocumentLocation={
                "S3Object": {
                    "Bucket": bucket,
                    "Name": doc.key
                }
            }
        )
        jobids[doc.key.split('/')[2]] = tres['JobId']
# Step 2 - Cell 3
for j in jobids:
    print("Textract detection Job ID for {} is {}".format(j, str(jobids[j])))
```
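Note that `start_document_text_detection` returns immediately with a `JobId`; the detection itself runs asynchronously, so a job must reach a terminal status before Step 3 fetches its results. Below is a minimal polling sketch — our own helper, not from the book — where `get_status` is any zero-argument callable; for Textract it would wrap `textract.get_document_text_detection(JobId=job_id)['JobStatus']`:

```python
import time

def wait_for_job(get_status, poll_seconds=5, max_polls=120):
    """Poll get_status() until it returns a terminal state.

    get_status: zero-argument callable returning 'IN_PROGRESS',
    'SUCCEEDED', or 'FAILED' -- e.g. for Textract:
        lambda: textract.get_document_text_detection(JobId=job_id)['JobStatus']
    """
    for _ in range(max_polls):
        status = get_status()
        if status in ('SUCCEEDED', 'FAILED'):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError('job did not finish in time')

# Example with a fake status source (no AWS call is made here):
statuses = iter(['IN_PROGRESS', 'IN_PROGRESS', 'SUCCEEDED'])
print(wait_for_job(lambda: next(statuses), poll_seconds=0))  # SUCCEEDED
```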
### Step 3 - Get Amazon Textract Text Detection Results
```
# Step 3 - Cell 1
class TextExtractor():
    def extract_text(self, jobId):
        """ Extract text from document corresponding to jobId and
            generate a list of pages containing the text
        """
        textract_result = self.__get_textract_result(jobId)
        pages = {}
        self.__extract_all_pages(jobId, textract_result, pages, [])
        return pages

    def __get_textract_result(self, jobId):
        """ retrieve textract result with job Id """
        result = textract.get_document_text_detection(
            JobId=jobId
        )
        return result

    def __extract_all_pages(self, jobId, textract_result, pages, page_numbers):
        """ extract page content: build the pages array,
            recurse if response is too big (when NextToken is provided by textract)
        """
        blocks = [x for x in textract_result['Blocks'] if x['BlockType'] == "LINE"]
        content = {}
        line = 0
        for block in blocks:
            line += 1
            content['Text' + str(line)] = block['Text']
            content['Confidence' + str(line)] = block['Confidence']
            if block['Page'] not in page_numbers:
                page_numbers.append(block['Page'])
                pages[block['Page']] = {
                    "Number": block['Page'],
                    "Content": content
                }
            else:
                pages[block['Page']]['Content'] = content
        nextToken = textract_result.get("NextToken", "")
        if nextToken != '':
            textract_result = textract.get_document_text_detection(
                JobId=jobId,
                NextToken=nextToken
            )
            self.__extract_all_pages(jobId,
                                     textract_result,
                                     pages,
                                     page_numbers)
# Step 3 - Cell 2
text_extractor = TextExtractor()
indoc = {}
df_indoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence', 'CorrectedText', 'Comments'])
for x in jobids:
    pages = text_extractor.extract_text(jobids[x])
    contdict = pages[1]['Content']
    for row in range(1, len(contdict) // 2 + 1):
        df_indoc.loc[len(df_indoc.index)] = [x, row, contdict['Text'+str(row)], round(contdict['Confidence'+str(row)], 1), '', '']
# Uncomment the line below if you want to review the contents of this dataframe
#df_indoc.to_csv('extract.csv')
# Step 3 - Cell 3
# The lines in each document that are of importance for the human loop to review
bounding_dict = {'lines': '9:11:12:13:15:16:17:18:19:20:21:22:23:24:25'}
# Step 3 - Cell 4
# Let us now create a new dataframe that only contains the subset of lines we need from the bounding_dict
df_newdoc = pd.DataFrame(columns = ['DocName','LineNr','DetectedText','Confidence','CorrectedText','Comments'])
for idx, row in df_indoc.iterrows():
    if str(row['LineNr']) in bounding_dict['lines'].split(':'):
        df_newdoc.loc[len(df_newdoc.index)] = [row['DocName'], row['LineNr'], row['DetectedText'], row['Confidence'], row['CorrectedText'], row['Comments']]
df_newdoc
```
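Cell 2 above depends on the `Content` dict interleaving `TextN`/`ConfidenceN` keys, two per detected line. A small pure-Python helper (our own illustration, not part of the notebook; the sample values are made up) makes that pairing explicit:

```python
def content_to_rows(content):
    """Turn {'Text1': ..., 'Confidence1': ..., 'Text2': ...} into a list
    of (line_nr, text, confidence) tuples, mirroring the dataframe-building
    loop in Cell 2 above."""
    n_lines = len(content) // 2  # two keys (Text, Confidence) per line
    return [
        (i, content['Text' + str(i)], round(content['Confidence' + str(i)], 1))
        for i in range(1, n_lines + 1)
    ]

sample = {'Text1': 'FORM D', 'Confidence1': 99.87,
          'Text2': 'OMB APPROVAL', 'Confidence2': 91.24}
print(content_to_rows(sample))  # [(1, 'FORM D', 99.9), (2, 'OMB APPROVAL', 91.2)]
```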
### Step 4 - Create the Amazon A2I human review Task UI
We will customize a sample tabular template from the Amazon A2I sample Task UI template page - https://github.com/aws-samples/amazon-a2i-sample-task-uis
```
# Step 4 - Cell 1
# Initialize A2I variables
a2i_prefix = "chapter16/a2i-results"
# Define IAM role
role = get_execution_role()
print("RoleArn: {}".format(role))
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
# Amazon SageMaker client
sagemaker_client = boto3.client('sagemaker')
# Amazon Augmented AI (A2I) client
a2i = boto3.client('sagemaker-a2i-runtime')
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flowDefinitionName = 'fd-pdf-docs-' + timestamp
# Task UI name - this value is unique per account and region. You can also provide your own value here.
taskUIName = 'ui-pdf-docs-' + timestamp
# Flow definition outputs
OUTPUT_PATH = f's3://' + bucket + '/' + a2i_prefix
# Step 4 - Cell 2
# We will use the tabular liquid template and customize it for our requirements
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<style>
table, tr, th, td {
border: 1px solid black;
border-collapse: collapse;
padding: 5px;
}
</style>
<crowd-form>
<div>
<h1>Instructions</h1>
<p>Please review the SEC registration form inputs, and make corrections where appropriate. </p>
</div>
<div>
<h3>Original Registration Form - Page 1</h3>
<classification-target>
<img style="width: 70%; max-height: 40%; margin-bottom: 10px" src="{{ task.input.image | grant_read_access }}"/>
</classification-target>
</div>
<br>
<h1> Please enter your modifications below </h1>
<table>
<tr>
<th>Line Nr</th>
<th style="width:500px">Detected Text</th>
<th style="width:500px">Confidence</th>
<th>Change Required</th>
<th style="width:500px">Corrected Text</th>
<th>Comments</th>
</tr>
{% for pair in task.input.document %}
<tr>
<td>{{ pair.linenr }}</td>
<td><crowd-text-area name="predicteddoc{{ pair.linenr }}" value="{{ pair.detectedtext }}"></crowd-text-area></td>
<td><crowd-text-area name="confidence{{ pair.linenr }}" value="{{ pair.confidence }}"></crowd-text-area></td>
<td>
<p>
<input type="radio" id="agree{{ pair.linenr }}" name="rating{{ pair.linenr }}" value="agree" required>
<label for="agree{{ pair.linenr }}">Correct</label>
</p>
<p>
<input type="radio" id="disagree{{ pair.linenr }}" name="rating{{ pair.linenr }}" value="disagree" required>
<label for="disagree{{ pair.linenr }}">Incorrect</label>
</p>
</td>
<td>
<p>
<input style="width:500px" rows="3" type="text" name="correcteddoc{{ pair.linenr }}" value="{{pair.detectedtext}}" required/>
</p>
</td>
<td>
<p>
<input style="width:500px" rows="3" type="text" name="comments{{ pair.linenr }}" placeholder="Explain why you changed the value"/>
</p>
</td>
</tr>
{% endfor %}
</table>
<br>
<br>
</crowd-form>
"""
# Step 4 - Cell 3
# Define the method to initialize and create the Task UI
def create_task_ui():
    response = sagemaker_client.create_human_task_ui(
        HumanTaskUiName=taskUIName,
        UiTemplate={'Content': template})
    return response
# Step 4 - Cell 4
# Execute the method to create the Task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)
```
### Step 5 - Create the Amazon A2I flow definition
In this section, we're going to create a flow definition. Flow definitions allow us to specify:
* The workforce that your tasks will be sent to.
* The instructions that your workforce will receive. This is called a worker task template.
* Where your output data will be stored.
This notebook is going to use the API, but you can optionally create this workflow definition in the console as well.
For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
```
# Step 5 - Cell 1
create_workflow_definition_response = sagemaker_client.create_flow_definition(
    FlowDefinitionName=flowDefinitionName,
    RoleArn=role,
    HumanLoopConfig={
        "WorkteamArn": WORKTEAM_ARN,
        "HumanTaskUiArn": humanTaskUiArn,
        "TaskCount": 1,
        "TaskDescription": "Review the contents and correct values as indicated",
        "TaskTitle": "SEC Registration Form Review"
    },
    OutputConfig={
        "S3OutputPath": OUTPUT_PATH
    }
)
flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use
# Step 5 - Cell 2
for x in range(60):
    describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)
    print(describeFlowDefinitionResponse['FlowDefinitionStatus'])
    if describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active':
        print("Flow Definition is active")
        break
    time.sleep(2)
```
### Step 6 - Activate the Amazon A2I flow definition
```
# Step 6 - Cell 1
# We will display the PDF first page for reference on what is being edited by the human loop
reg_images = {}
for image in os.listdir():
    if image.endswith('png'):
        reg_images[image.split('_')[0]] = S3Uploader.upload(image, 's3://{}/{}'.format(bucket, prefix))
# Step 6 - Cell 2
# Activate human loops for all three documents. These will be delivered for review sequentially in the Task UI.
# We will also send only low-confidence detections to A2I so the human team can update the text to what it should actually be
humanLoopName = {}
docs = df_newdoc.DocName.unique()
# confidence threshold
confidence_threshold = 95
for doc in docs:
    doc_list = []
    humanLoopName[doc] = str(uuid.uuid4())
    for idx, line in df_newdoc.iterrows():
        # Send only those lines whose confidence score is less than threshold
        if line['DocName'] == doc and line['Confidence'] <= confidence_threshold:
            doc_list.append({'linenr': line['LineNr'], 'detectedtext': line['DetectedText'], 'confidence': line['Confidence']})
    ip_content = {"document": doc_list,
                  'image': reg_images[doc.split('.')[0]]
                  }
    start_loop_response = a2i.start_human_loop(
        HumanLoopName=humanLoopName[doc],
        FlowDefinitionArn=flowDefinitionArn,
        HumanLoopInput={
            "InputContent": json.dumps(ip_content)
        }
    )
# Step 6 - Cell 3
completed_human_loops = []
for doc in humanLoopName:
    resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])
    print(f'HumanLoop Name: {humanLoopName[doc]}')
    print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
    print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
    print('\n')
# Step 6 - Cell 4
workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])
# Step 6 - Cell 5
completed_human_loops = []
for doc in humanLoopName:
    resp = a2i.describe_human_loop(HumanLoopName=humanLoopName[doc])
    print(f'HumanLoop Name: {humanLoopName[doc]}')
    print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
    print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
    print('\n')
    if resp["HumanLoopStatus"] == "Completed":
        completed_human_loops.append(resp)
# Step 6 - Cell 7
import re
import pandas as pd
for resp in completed_human_loops:
    splitted_string = re.split('s3://' + bucket + '/', resp['HumanLoopOutput']['OutputS3Uri'])
    output_bucket_key = splitted_string[1]
    response = s3_client.get_object(Bucket=bucket, Key=output_bucket_key)
    content = response["Body"].read()
    json_output = json.loads(content)
    loop_name = json_output['humanLoopName']
    for i in json_output['humanAnswers']:
        x = i['answerContent']
    docname = list(humanLoopName.keys())[list(humanLoopName.values()).index(loop_name)]
    for i, r in df_newdoc.iterrows():
        if r['DocName'] == docname:
            df_newdoc.at[i, 'CorrectedText'] = x['correcteddoc'+str(r['LineNr'])] if 'correcteddoc'+str(r['LineNr']) in x else ''
            df_newdoc.at[i, 'Comments'] = x['comments'+str(r['LineNr'])] if 'comments'+str(r['LineNr']) in x else ''
# Step 6 - Cell 8
df_newdoc.head(30)
```
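The last cells above match A2I answers back to dataframe rows through key names like `correcteddoc9` and `comments9` that mirror the input names in the Task UI template. That merge logic can be isolated into a small testable helper — our own sketch, with made-up sample values:

```python
def apply_answers(rows, answers):
    """Merge an A2I answerContent dict into rows keyed by line number.

    rows: list of dicts with 'LineNr', 'CorrectedText', 'Comments' keys
    answers: answerContent with keys like 'correcteddoc9' / 'comments9'
    (the same naming convention as the task template above).
    """
    for row in rows:
        n = str(row['LineNr'])
        row['CorrectedText'] = answers.get('correcteddoc' + n, '')
        row['Comments'] = answers.get('comments' + n, '')
    return rows

rows = [{'LineNr': 9, 'CorrectedText': '', 'Comments': ''}]
answers = {'correcteddoc9': 'Acme Corp', 'comments9': 'fixed OCR typo'}
print(apply_answers(rows, answers))
```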
### Step 7 - Save changes to Amazon DynamoDB
```
# Step 7 - Cell 1
# Create the Amazon DynamoDB table - note that a new DynamoDB table is created every time you execute this cell
# Get the service resource.
dynamodb = boto3.resource('dynamodb')
tablename = "SEC-registration-"+str(uuid.uuid4())
# Create the DynamoDB table.
table = dynamodb.create_table(
    TableName=tablename,
    KeySchema=[
        {
            'AttributeName': 'row_nr',
            'KeyType': 'HASH'
        }
    ],
    AttributeDefinitions=[
        {
            'AttributeName': 'row_nr',
            'AttributeType': 'N'
        },
    ],
    ProvisionedThroughput={
        'ReadCapacityUnits': 5,
        'WriteCapacityUnits': 5
    }
)
# Wait until the table exists; this will take a minute or so
table.meta.client.get_waiter('table_exists').wait(TableName=tablename)
# Confirm that the table was created
print("Table successfully created")
# Step 7 - Cell 2
# Load the Amazon DynamoDB table
for idx, row in df_newdoc.iterrows():
    table.put_item(
        Item={
            'row_nr': idx,
            'doc_name': str(row['DocName']),
            'line_nr': str(row['LineNr']),
            'detected_line': str(row['DetectedText']),
            'confidence': str(row['Confidence']),
            'corrected_line': str(row['CorrectedText']),
            'change_comments': str(row['Comments'])
        }
    )
print("Items were successfully created in DynamoDB table")
```
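The item layout used in the `put_item` loop above can be factored into a small helper, which makes it easy to check that every attribute except the numeric key is stored as a string. This is our own sketch with made-up sample data, not code from the book:

```python
def to_dynamo_item(idx, row):
    """Item payload for one dataframe row (same fields as Cell 2 above).
    Everything except the numeric hash key is stored as a string."""
    return {
        'row_nr': idx,
        'doc_name': str(row['DocName']),
        'line_nr': str(row['LineNr']),
        'detected_line': str(row['DetectedText']),
        'confidence': str(row['Confidence']),
        'corrected_line': str(row['CorrectedText']),
        'change_comments': str(row['Comments']),
    }

item = to_dynamo_item(0, {'DocName': 'doc1.pdf', 'LineNr': 9,
                          'DetectedText': 'FORM D', 'Confidence': 99.9,
                          'CorrectedText': '', 'Comments': ''})
print(item['doc_name'], item['line_nr'])  # doc1.pdf 9
```

If you load many rows, boto3's `table.batch_writer()` context manager groups the writes into fewer network calls than one `put_item` per row.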
### End of Notebook
Please go back to Chapter 16 - Improve accuracy of PDF batch processing with Amazon Textract and Amazon A2I in the Natural Language Processing with AWS AI Services book to proceed further.
import modules and get command-line parameters if running as script
```
from probrnn import models, data, inference
import numpy as np
import json
from matplotlib import pyplot as plt
from IPython.display import clear_output
```
parameters for the model and training
```
params = {
    "N_ITERATIONS": 10 ** 5,
    "VALIDATE_EACH": 100,
    "SAVE_EACH": 1000,
    "LOG_EVERY": 50,
    "LEARNING_RATE": 0.0001,
    "N_HIDDEN": 256,
    "N_BINS": 50,
    "BATCH_SIZE": 50,
}
```
Get some correlated toy data
```
datastruct = data.CoupledToyData(n_bins=params["N_BINS"])
x, _ = next(datastruct._gen(1))
x = datastruct.get_readable(x)
plt.figure()
plt.plot(x)
plt.show()
```
do some training
```
model = models.NADE(datastruct, params=params)
training = models.Training(
    model,
    "../models/toy_nade_bivariate",
    "../models/toy_nade_bivariate_training.json",
)

def print_function(trer, i, batch):
    if i % 10 == 0:
        clear_output()
        print("loss: {}; iteration {}".format(np.mean(trer[-100:]), i))

training.train(print_function)
```
visualize the training errors
```
with open("../models/toy_nade_bivariate_training.json") as f:
    errs = json.load(f)
plt.figure()
plt.plot(np.array(errs["training_error"])[:, 0],
         np.array(errs["training_error"])[:, 1])
plt.plot(np.array(errs["validation_error"])[:, 0],
         np.array(errs["validation_error"])[:, 1], 'r')
plt.legend(["training", "validation"])
plt.show()
```
plot some weight traces
```
for x in errs.keys():
    if x != "training_error" and x != "validation_error" and "train" not in x:
        plt.figure()
        for key in errs[x].keys():
            if key == "mean":
                plt.plot(errs[x][key], 'b', linewidth=5.0)
            elif key == "random":
                plt.plot(errs[x][key], 'c')
            else:
                plt.plot(errs[x][key], 'b', linestyle='--')
        plt.title("variable: {}".format(x))
        plt.show()
```
load trained model
```
load_name = "../models/toy_nade_bivariate_12000"
model = models.NADE(datastruct, fn=load_name)
print(json.dumps(model.params, indent=4))
```
try some sampling
```
x = model.sample(200)
plt.plot(x[::2])
plt.plot(x[1::2])
plt.show()
```
try some imputation
```
x = datastruct.simulate()
x_missing = np.zeros(x.shape[0] * 2)
x_missing[::2] = x[:, 0]
x_missing[1::2] = np.nan
estimate = inference.NaiveSIS(model, x_missing, 1000, binned=False, quiet=False).estimate()
plt.figure()
plt.plot(estimate[::2])
plt.plot(estimate[1::2])
plt.show()
```
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.log_softmax(self.fc4(x), dim=1)
        return x
```
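As a quick sanity check on the architecture above, the number of trainable parameters follows directly from the layer sizes: each `nn.Linear(n_in, n_out)` holds an `n_in × n_out` weight matrix plus one bias per output unit. A few lines of plain arithmetic confirm the total:

```python
def linear_params(n_in, n_out):
    # weight matrix entries plus one bias per output unit
    return n_in * n_out + n_out

layers = [(784, 256), (256, 128), (128, 64), (64, 10)]
total = sum(linear_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 242762
```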
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
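You can verify this broadcasting behavior outside PyTorch too; NumPy follows the same rule, so stand-in arrays with the matching shapes show both the `(64, 64)` blow-up and the effect of the reshape fix:

```python
import numpy as np

top_class = np.zeros((64, 1), dtype=int)  # stand-in for the (64, 1) tensor
labels = np.zeros(64, dtype=int)          # stand-in for the (64,) labels

print((top_class == labels).shape)                            # (64, 64)
print((top_class == labels.reshape(*top_class.shape)).shape)  # (64, 1)
```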
```
equals = top_class == labels.view(*top_class.shape)
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor; to get the actual value as a float we'll need to do `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
    # validation pass here
    for images, labels in testloader:
        ...
```
>**Exercise:** Implement the validation loop below. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        test_loss = 0
        accuracy = 0
        # Turn off gradients for validation, saves memory and computations
        with torch.no_grad():
            for images, labels in testloader:
                log_ps = model(images)
                test_loss += criterion(log_ps, labels)
                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))
        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))
        print("Epoch: {}/{}.. ".format(e+1, epochs),
              "Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
              "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
              "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
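Early stopping boils down to tracking the best validation loss seen so far and giving up after it fails to improve for a few epochs. A minimal framework-agnostic sketch (the class and parameter names here are our own, not a PyTorch API):

```python
class EarlyStopper:
    """Stop when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, val_loss):
        """Return True if training should stop after this epoch."""
        if val_loss < self.best:
            self.best = val_loss  # improvement: remember it, reset counter
            self.bad_epochs = 0   # (this is where you'd save a checkpoint)
            return False
        self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopper(patience=2)
for loss in [0.9, 0.7, 0.8, 0.85, 0.86]:
    if stopper.step(loss):
        print('stopping, best =', stopper.best)  # stopping, best = 0.7
        break
```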
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)
        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)
        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)
        return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
    # set model to evaluation mode
    model.eval()
    # validation pass here
    for images, labels in testloader:
        ...

# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss.
```
class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 64)
        self.fc4 = nn.Linear(64, 10)
        # Dropout module with 0.2 drop probability
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # make sure input tensor is flattened
        x = x.view(x.shape[0], -1)
        # Now with dropout
        x = self.dropout(F.relu(self.fc1(x)))
        x = self.dropout(F.relu(self.fc2(x)))
        x = self.dropout(F.relu(self.fc3(x)))
        # output so no dropout here
        x = F.log_softmax(self.fc4(x), dim=1)
        return x
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        optimizer.zero_grad()
        log_ps = model(images)
        loss = criterion(log_ps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    else:
        test_loss = 0
        accuracy = 0
        # Turn off gradients for validation, saves memory and computations
        with torch.no_grad():
            model.eval()
            for images, labels in testloader:
                log_ps = model(images)
                test_loss += criterion(log_ps, labels)
                ps = torch.exp(log_ps)
                top_p, top_class = ps.topk(1, dim=1)
                equals = top_class == labels.view(*top_class.shape)
                accuracy += torch.mean(equals.type(torch.FloatTensor))
        model.train()
        train_losses.append(running_loss/len(trainloader))
        test_losses.append(test_loss/len(testloader))
        print("Epoch: {}/{}.. ".format(e+1, epochs),
              "Training Loss: {:.3f}.. ".format(train_losses[-1]),
              "Test Loss: {:.3f}.. ".format(test_losses[-1]),
              "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
    output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
# Point Spread Function Photometry with Photutils
The PSF photometry module of photutils is intended to be a fully modular tool, such that users are able to completely customise the photometry procedure, e.g., by using different source detection algorithms, background estimators, PSF models, etc. Photutils provides implementations for each subtask involved in the photometry process; however, users are still able to plug in their own implementations without having to touch the photutils core classes!
This modularity is accomplished through an object-oriented design, which provides a more convenient user experience while at the same time allowing the developers to think in terms of classes and objects rather than isolated functions.
Photutils provides three basic classes to perform PSF photometry: `BasicPSFPhotometry`, `IterativelySubtractedPSFPhotometry`, and `DAOPhotPSFPhotometry`. In this notebook, we will go through them, explaining their differences and particular uses.
# Artificial Starlist
First things first! Let's create an artificial list of stars using photutils in order to explain the PSF procedures through examples.
```
from photutils.datasets import make_random_gaussians
from photutils.datasets import make_noise_image
from photutils.datasets import make_gaussian_sources
num_sources = 150
min_flux = 500
max_flux = 5000
min_xmean = 16
max_xmean = 240
sigma_psf = 2.0
starlist = make_random_gaussians(num_sources, [min_flux, max_flux],
[min_xmean, max_xmean],
[min_xmean, max_xmean],
[sigma_psf, sigma_psf],
[sigma_psf, sigma_psf],
random_state=1234)
shape = (256, 256)
image = (make_gaussian_sources(shape, starlist) +
make_noise_image(shape, type='poisson', mean=6., random_state=1234) +
make_noise_image(shape, type='gaussian', mean=0., stddev=2., random_state=1234))
```
Note that we also added Poisson and Gaussian background noises with the function `make_noise_image`.
Let's keep in mind this fact:
```
type(starlist)
starlist
```
Pretty much all lists of sources in `photutils` are returned or passed in as `astropy` `Table` objects, so this is something to get used to.
Let's also plot our list of stars.
```
%matplotlib inline
from matplotlib import rcParams
import matplotlib.pyplot as plt
rcParams['image.cmap'] = 'magma'
rcParams['image.aspect'] = 1 # to get images with square pixels
rcParams['figure.figsize'] = (20,10)
rcParams['image.interpolation'] = 'nearest'
rcParams['image.origin'] = 'lower'
rcParams['font.size'] = 14
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
# The `BasicPSFPhotometry` class
As the name suggests, this is a basic class which provides the minimum tools necessary to perform photometry in crowded (or non-crowded) fields. Let's take a look at its attributes and methods.
BasicPSFPhotometry has the following mandatory attributes:
* group_maker : callable or instance of any GroupStarsBase subclass
* bkg_estimator : callable, instance of any BackgroundBase subclass, or None
* psf_model : astropy.modeling.Fittable2DModel instance
* fitshape : integer or length-2 array-like
And the following optional attributes:
* finder : callable or instance of any StarFinderBase subclasses or None
* fitter : Astropy Fitter instance
* aperture_radius : float or int
## Group Maker
`group_maker` can be instantiated using any GroupStarsBase subclass, such as `photutils.psf.DAOGroup` or `photutils.psf.DBSCANGroup`, or even using a `callable` provided by the user.
`photutils.psf.DAOGroup` is a class which implements the `GROUP` algorithm proposed by Stetson and used in DAOPHOT. This class takes a single attribute at initialization, namely:
* crit_separation : int or float
Distance, in units of pixels, such that any two stars separated by less than this distance will be placed in the same group.
As its description shows, `crit_separation` plays a crucial role in deciding whether or not a given star belongs to some group of stars. Usually, `crit_separation` is set to a positive real number multiplied by the FWHM of the PSF.
`photutils.psf.DBSCANGroup` is a generalized version of `photutils.psf.DAOGroup`; in fact, it is a wrapper around the `sklearn.cluster.DBSCAN` class. Its usage is very similar to `photutils.psf.DAOGroup`, and we refer to the photutils API doc page for more information: https://photutils.readthedocs.io/en/latest/api/photutils.psf.DBSCANGroup.html#photutils.psf.DBSCANGroup
The user is welcome to check the narrative docs on the photutils RTD webpage: https://photutils.readthedocs.io/en/latest/photutils/grouping.html
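To build some intuition for what a `group_maker` does, here is a toy sketch (not the photutils implementation): any two stars closer than `crit_separation` share a group, and membership propagates transitively.

```python
import math

def toy_group(positions, crit_separation):
    """Cluster (x, y) positions: pairs closer than crit_separation share a
    group, and membership propagates transitively (simple BFS)."""
    groups, unassigned = [], list(range(len(positions)))
    while unassigned:
        queue, group = [unassigned.pop(0)], []
        while queue:
            i = queue.pop()
            group.append(i)
            close = [j for j in unassigned
                     if math.dist(positions[i], positions[j]) < crit_separation]
            for j in close:
                unassigned.remove(j)
            queue.extend(close)
        groups.append(sorted(group))
    return groups

stars = [(0, 0), (1, 0), (10, 10), (11, 10), (50, 50)]
print(toy_group(stars, crit_separation=3.0))  # → [[0, 1], [2, 3], [4]]
```

`DAOGroup` implements Stetson's criterion more carefully, but the transitive-membership idea is the same.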
Now, let's instantiate a `group_maker` from `DAOGroup`:
```
from photutils import psf
from astropy.stats import gaussian_sigma_to_fwhm
daogroup = psf.DAOGroup(crit_separation=2.*sigma_psf*gaussian_sigma_to_fwhm)
```
Now, the object `daogroup` is ready to be passed to `BasicPSFPhotometry`.
## Background Estimation
Background estimation is needed in the photometry process in order to reduce the bias, introduced primarily by the background (e.g., Poisson noise), in the flux estimation.
Photutils provides several classes to perform both scalar background estimation, i.e., when the background is flat and does not vary strongly across the image, and spatially varying background estimation, i.e., when there exists a gradient field associated with the background.
The user is welcome to refer to the Background Estimation narrative docs on the photutils webpage for a detailed explanation: https://photutils.readthedocs.io/en/latest/photutils/background.html
In this notebook, we will use the class `MMMBackground` which is intended to estimate scalar background. This class is based on the background estimator used in `DAOPHOT`.
`MMMBackground` takes a `SigmaClip` object as an attribute, which is used to sigma-clip the image before performing background estimation. For our scenario, we will just instantiate an object of `MMMBackground` with default attribute values:
```
from photutils import MMMBackground
mmm_bkg = MMMBackground()
mmm_bkg.sigma_clip.sigma
mmm_bkg.sigma_clip.iters
```
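`MMMBackground` follows DAOPHOT's "mean, median, mode" (MMM) logic, whose core ingredient is the classic mode approximation mode ≈ 3·median − 2·mean applied to sigma-clipped data. A toy version (an illustration with a hand-rolled clip, not the photutils code):

```python
import numpy as np

def toy_mmm(data, sigma=3.0, iters=5):
    """Toy MMM-style estimate: iteratively sigma-clip, then mode ~ 3*median - 2*mean."""
    data = np.asarray(data, dtype=float)
    for _ in range(iters):
        med, std = np.median(data), np.std(data)
        data = data[np.abs(data - med) < sigma * std]
    return 3 * np.median(data) - 2 * np.mean(data)

rng = np.random.default_rng(1234)
sky = rng.normal(10.0, 2.0, size=10_000)                   # true background level ~10
contaminated = np.concatenate([sky, np.full(500, 200.0)])  # bright source pixels
print(np.mean(contaminated))   # badly biased upward by the sources
print(toy_mmm(contaminated))   # close to the true sky level
```

The clipping step is essential: without it, the bright source pixels would drag the mean (and hence the mode estimate) far from the true sky level.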
## PSF Models
The attribute ``psf_model`` represents an analytical function with unknown parameters (e.g., peak center and flux) which describes the underlying point spread function. ``psf_model`` is usually a subclass of `astropy.modeling.Fittable2DModel`. In this notebook, we will use `photutils.psf.IntegratedGaussianPRF` as our underlying PSF model.
Note that the underlying PSF model has to have parameters with the following names ``x_0``, ``y_0``, and ``flux``, to describe the center peak position and the flux, respectively.
```
from photutils.psf import IntegratedGaussianPRF
gaussian_psf = IntegratedGaussianPRF(sigma=2.0)
```
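The "integrated" part matters: each pixel value is the Gaussian integrated over the pixel area rather than sampled at its center, which has a closed form in terms of error functions. A sketch of the per-pixel value (this follows the standard formula, not photutils' exact code):

```python
import math

def gaussian_prf(x, y, x0=0.0, y0=0.0, sigma=2.0, flux=1.0):
    """Gaussian PSF integrated over the unit pixel centered at integer (x, y)."""
    s = math.sqrt(2) * sigma
    fx = math.erf((x - x0 + 0.5) / s) - math.erf((x - x0 - 0.5) / s)
    fy = math.erf((y - y0 + 0.5) / s) - math.erf((y - y0 - 0.5) / s)
    return flux / 4.0 * fx * fy

# Summing over a grid that fully contains the source recovers ~all the flux
total = sum(gaussian_prf(x, y) for x in range(-15, 16) for y in range(-15, 16))
print(total)  # ≈ 1.0
```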
## Finder
Finder is an optional attribute, meaning that if it is `None`, then the user should provide a table with the center positions of each star when calling the `BasicPSFPhotometry` object.
Later, we will see examples of both cases, i.e., when Finder is `None` and when it is not.
The finder attribute is used to perform source detection. It can be any subclass of `photutils.StarFinderBase`, such as `photutils.DAOStarFinder` or `photutils.IRAFStarFinder`, which implement DAOPHOT-like and IRAF-like source detection algorithms, respectively. The user can also supply her/his own source detection algorithm, as long as the input/output formats are compatible with `photutils.StarFinderBase`.
`photutils.DAOStarFinder`, for instance, receives the following mandatory attributes:
* threshold : float
The absolute image value above which to select sources.
* fwhm : float
The full-width half-maximum (FWHM) of the major axis of the Gaussian kernel in units of pixels.
Now, let's instantiate our `DAOStarFinder` object:
```
from photutils.detection import DAOStarFinder
daofinder = DAOStarFinder(threshold=2.5*mmm_bkg(image), fwhm=sigma_psf*gaussian_sigma_to_fwhm)
```
Note that we chose the `threshold` to be a multiple of the background level, and we assumed the `fwhm` to be known from our list of stars.
More details about source detection can be found on the `photutils.detection` narrative docs: https://photutils.readthedocs.io/en/latest/photutils/detection.html
## Fitter
Fitter should be an instance of a fitter implemented in `astropy.modeling.fitting`. Since the PSF model is almost always nonlinear, the fitter should be able to handle nonlinear optimization problems. In this notebook, we will use the `LevMarLSQFitter`, which combines the Levenberg-Marquardt optimization algorithm with the least-squares statistic. The default value for fitter is `LevMarLSQFitter()`.
Look at http://docs.astropy.org/en/stable/modeling/index.html for more details on fitting.
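To get a feel for what the fitter is doing, here is the same flavor of nonlinear least-squares problem solved with SciPy's MINPACK-based Levenberg-Marquardt routine (an illustration, not the astropy fitter itself): recovering a 1D Gaussian's parameters from noisy samples.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss1d(params, x):
    amp, x0, sigma = params
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

rng = np.random.default_rng(42)
x = np.linspace(-10, 10, 200)
true_params = (5.0, 1.5, 2.0)
y = gauss1d(true_params, x) + rng.normal(0, 0.05, x.size)

# Residual function -> Levenberg-Marquardt ('lm') minimizes its sum of squares
fit = least_squares(lambda p: gauss1d(p, x) - y, x0=[1.0, 0.0, 1.0], method='lm')
print(fit.x)  # close to the true (5.0, 1.5, 2.0)
```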
NOTE: At this point it should be stated that photutils does not have a standard way to compute uncertainties on the fitted parameters. However, this will change in the near future with the addition of a new affiliated package to the Astropy environment, namely `SABA: Sherpa-Astropy Bridge`, which makes it possible to use astropy models together with Sherpa fitters.
## Fitshape and Aperture Radius
There are two attributes left: `fitshape` (mandatory) and `aperture_radius` (optional).
`fitshape` corresponds to the size of the rectangular region necessary to enclose one single source. The pixels inside that region will be used in the fitting process. `fitshape` should be an odd integer or a tuple of odd integers.
```
import numpy as np
fitshape = 11
```
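The odd-size requirement exists so that the cutout has a single, well-defined central pixel on which the star can be centered. A hedged sketch of how a fitting region might be cut from the image (illustrative only, not photutils' internals):

```python
import numpy as np

def cutout(data, x0, y0, fitshape=11):
    """Extract an odd-sized fitshape x fitshape region centered on (x0, y0)."""
    assert fitshape % 2 == 1, "fitshape must be odd so a central pixel exists"
    half = fitshape // 2
    yc, xc = int(round(y0)), int(round(x0))
    return data[yc - half: yc + half + 1, xc - half: xc + half + 1]

data = np.arange(100 * 100).reshape(100, 100)
region = cutout(data, x0=50, y0=40)
print(region.shape)                  # (11, 11)
print(region[5, 5] == data[40, 50])  # True: the star sits on the central pixel
```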
The aperture radius corresponds to the radius used to compute initial guesses for the fluxes of the sources. If this value is `None`, then one FWHM will be used, if it can be determined from the `psf_model`.
## Example with unknown positions and unknown fluxes
Now we are ready to take a look at an actual example. Let's first create our `BasicPSFPhotometry` object putting together the pieces that we defined along the way:
```
from photutils.psf import BasicPSFPhotometry
basic_photometry = BasicPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,
psf_model=gaussian_psf, fitshape=fitshape,
finder=daofinder)
```
To actually perform photometry on our image that we defined previously, we should use `basic_photometry` as a function call:
```
photometry_results = basic_photometry(image)
photometry_results
```
Let's plot the residual image along with the original image:
```
fig, (ax1, ax2) = plt.subplots(1,2)
im1 = ax1.imshow(basic_photometry.get_residual_image())
ax1.set_title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,
ax=ax1, mappable=im1)
im2 = ax2.imshow(image)
ax2.set_title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04,
ax=ax2, mappable=im2)
```
Looking at the residual image, we observe that the photometry process was able to fit many stars, but not all. This is probably due to the inability of the source detection algorithm to determine the number of sources in every crowded group. Therefore, let's play with the source detection classes to see whether we can improve the photometry process.
Let's use `IRAFStarFinder` and play with the optional parameters. A complete description of these parameters can be found in the `photutils.detection` API documentation: https://photutils.readthedocs.io/en/latest/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder
```
from photutils.detection import IRAFStarFinder
iraffind = IRAFStarFinder(threshold=2.5*mmm_bkg(image),
fwhm=sigma_psf*gaussian_sigma_to_fwhm,
minsep_fwhm=0.01, roundhi=5.0, roundlo=-5.0,
sharplo=0.0, sharphi=2.0)
```
Now let's set the `finder` attribute of our `BasicPSFPhotometry` object with `iraffind`:
```
basic_photometry.finder = iraffind
```
Let's repeat the photometry process:
```
photometry_results = basic_photometry(image)
photometry_results
plt.subplot(1,2,1)
plt.imshow(basic_photometry.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
As we can see, the residual looks more Gaussian, with only three groups that were not fitted well. The reason is that those sources may be too close together to be distinguishable by the source detection algorithm.
## Example with known positions and unknown fluxes
Let's assume that somehow we know the true positions of the stars and we only would like to perform fitting on the fluxes. Then we should use the optional argument `positions` when calling the photometry object:
```
from astropy.table import Table
positions = Table(names=['x_0', 'y_0'], data=[starlist['x_mean'], starlist['y_mean']])
photometry_results = basic_photometry(image=image, positions=positions)
plt.subplot(1,2,1)
plt.imshow(basic_photometry.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
Let's do a scatter plot between ground-truth fluxes and estimated fluxes:
```
photometry_results.sort('id')
plt.scatter(starlist['flux'], photometry_results['flux_fit'])
plt.xlabel('Ground-truth fluxes')
plt.ylabel('Estimated fluxes')
```
Let's also plot the relative error on the fluxes estimation as a function of the ground-truth fluxes.
```
plt.scatter(starlist['flux'], (photometry_results['flux_fit'] - starlist['flux'])/starlist['flux'])
plt.xlabel('Ground-truth flux')
plt.ylabel('Estimate Relative Error')
```
As we can see, the relative error becomes smaller as the flux increases.
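This trend is what photon statistics predicts: for Poisson-dominated data the relative flux error scales roughly as 1/sqrt(flux). A quick sanity-check simulation (assuming pure Poisson counts, which is an idealization of our image):

```python
import numpy as np

rng = np.random.default_rng(0)
rel_errors = {}
for flux in (500, 5000, 50000):
    counts = rng.poisson(flux, size=10_000)
    rel_errors[flux] = counts.std() / counts.mean()
    print(flux, rel_errors[flux], 1 / np.sqrt(flux))  # measured vs predicted
```

Note that 500 and 5000 are exactly the `min_flux` and `max_flux` bounds of our artificial star list.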
# `IterativelySubtractedPSFPhotometry`
`IterativelySubtractedPSFPhotometry` is a subclass of `BasicPSFPhotometry` which adds iteration functionality to the photometry procedure. It has the same attributes as `BasicPSFPhotometry`, except that it includes an additional `niters`, which represents the number of times to loop through the photometry process, subtracting the best-fit stars each time.
Hence, the process implemented in `IterativelySubtractedPSFPhotometry` resembles the loop used by DAOPHOT: `FIND`, `GROUP`, `NSTAR`, `SUBTRACT`, `FIND`. On its own, `IterativelySubtractedPSFPhotometry` doesn't implement the specific algorithms used in DAOPHOT, but it does implement the *structure* to enable this (and `DAOPhotPSFPhotometry`, discussed below, does).
The attribute `niters` can be `None`, which means that the photometry procedure will continue until no more sources are detected.
One final detail: the attribute `finder` (specifying the star-finder algorithm) for `IterativelySubtractedPSFPhotometry` cannot be `None` (as it can be for `BasicPSFPhotometry`). This is because it would not make sense to have an iterative process where the star finder changes completely at each step. If you want to do that you're better off manually looping over a series of calls to different `BasicPSFPhotometry` objects.
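The overall control flow can be sketched in miniature (a toy 1D analogue with made-up helpers, not the photutils code): keep running the finder on the residual, record what it finds together with the iteration number, subtract, and stop when nothing is found or `niters` is exhausted.

```python
import numpy as np

def toy_finder(residual, threshold=5.0):
    """Toy FIND step: indices of local maxima above threshold."""
    return [i for i in range(1, len(residual) - 1)
            if residual[i] > threshold
            and residual[i] >= residual[i - 1]
            and residual[i] >= residual[i + 1]]

def iterative_photometry(data, niters=None, threshold=5.0):
    residual = data.astype(float).copy()
    detected, it = [], 0
    while niters is None or it < niters:
        peaks = toy_finder(residual, threshold)
        if not peaks:          # with niters=None, stop when nothing is found
            break
        for i in peaks:
            detected.append((it + 1, i, residual[i]))  # (iter_detected, position, flux)
            residual[i] = 0.0  # toy SUBTRACT: remove the "fitted" source
        it += 1
    return detected, residual

data = np.zeros(20)
data[[4, 12]] = [50.0, 30.0]
sources, residual = iterative_photometry(data)
print(sources)  # → [(1, 4, 50.0), (1, 12, 30.0)]
```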
## Example with unknown positions and unknown fluxes
Let's instantiate an object of `IterativelySubtractedPSFPhotometry`:
```
from photutils.psf import IterativelySubtractedPSFPhotometry
itr_phot = IterativelySubtractedPSFPhotometry(group_maker=daogroup, bkg_estimator=mmm_bkg,
psf_model=gaussian_psf, fitshape=fitshape,
finder=iraffind, niters=2)
```
Let's now perform photometry on our artificial image:
```
photometry_results = itr_phot(image)
photometry_results
```
Observe that there is a new column, `iter_detected`, which records the iteration in which each source was detected.
Let's plot the residual image:
```
plt.subplot(1,2,1)
plt.imshow(itr_phot.get_residual_image())
plt.title('Residual Image')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
plt.subplot(1,2,2)
plt.imshow(image)
plt.title('Simulated data')
plt.colorbar(orientation='horizontal', fraction=0.046, pad=0.04)
```
# `DAOPhotPSFPhotometry`
There is also a class called `DAOPhotPSFPhotometry` that is a subclass of `IterativelySubtractedPSFPhotometry`. `DAOPhotPSFPhotometry` essentially implements the DAOPHOT photometry algorithm using `IterativelySubtractedPSFPhotometry`. So instead of giving it arguments like `finder`, you provide parameters specific for the DAOPhot-like sub-tasks (e.g., the FWHM the star-finder is optimized for).
We leave the use of this class as an exercise to the user to play with the parameters which would optimize the photometry procedure.
```
from photutils.psf import DAOPhotPSFPhotometry
dao_phot = DAOPhotPSFPhotometry(...)
photometry_results = dao_phot(image)
photometry_results
```
## Documentation
Narrative and API docs of the classes used here can be found in https://photutils.readthedocs.io/en/latest/
# Future Work
The PSF photometry module in photutils is still under development, and feedback from users is much appreciated. Please open an issue on the photutils GitHub issue tracker with any suggestions for improvements, wanted functionality, bugs, etc.
Near future implementations in the photutils.psf module include:
* FWHM estimation: a Python equivalent to DAOPHOT psfmeasure.
* Uncertainties computation: uncertainties are very critical and it's very likely that we are going to use astropy saba package to integrate uncertainty computation into photutils.psf.
# 🦌 RuDOLPH 350M
<b><font color="white" size="+2">Official colab of [RuDOLPH: One Hyper-Modal Transformer can be creative as DALL-E and smart as CLIP](https://github.com/sberbank-ai/ru-dolph)</font></b>
<font color="white" size="-0.75."><b>RuDOLPH</b> is a fast and light text-image-text transformer (350M GPT-3) for generating text like <b>GPT</b>, generating image (e.g.: image by text, image by image prompt) like <b>DALL-E</b>, generating image captions, image classification in Zero-Shot mode and image ranking like <b>CLIP</b>.
<b>RuDOLPH 350M</b> is designed for quick and easy fine-tuning for various tasks: from generating images by text description and image classification, to visual question answering and more. This colab demonstrates the power of Hyper-Modal Transformers.</font>
Hyper-modality means generalized multi-modality: for example, a model that consists of two multi-modal parts (text-to-image and image-to-text) becomes a text-and-image hyper-modal model.
<font color="white" size="-0.75."><b>RuDOLPH for fast zero-shot text-to-image generation.</b> In the first phase, we generate 288 images from text in 5 minutes! The diffusion decoder is based on the [Jack000](https://github.com/Jack000/) solution, and Real-ESRGAN is used for high-quality image rendering.</font>
# Install all
```
!pip install rudolph==0.0.1rc8 > /dev/null
!pip install bitsandbytes-cuda111 > /dev/null
!pip install wandb > /dev/null
!pip install pytorch-lightning > /dev/null
```
# Download data
```
!pip install --upgrade gdown
import gdown
# a file
url = "http://drive.google.com/uc?id=17bPt7G3N_vGKCCxppIOPbPlhv1qUnv0o"
output = "food.zip"
gdown.download(url, output, quiet=False)
!unzip /content/food.zip
```
# Train this deer 🦌🦌🦌
```
import os
import sys
import random
from collections import Counter
import PIL
import torch
import numpy as np
import pandas as pd
import bitsandbytes as bnb
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from tqdm import tqdm
from wordcloud import WordCloud
from matplotlib import pyplot as plt
from torch.utils.data import Dataset, DataLoader
from rudalle import get_tokenizer, get_vae
from rudalle.utils import seed_everything
import pytorch_lightning as pl
from rudolph.model.utils import get_attention_mask
from rudolph.model import get_rudolph_model, ruDolphModel, FP16Module
from rudolph.pipelines import generate_codebooks, self_reranking_by_image, self_reranking_by_text, show, generate_captions, generate_texts, zs_clf
from rudolph import utils
device = 'cuda'
model = get_rudolph_model('350M', fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)
class Args():
def __init__(self, model):
self.device = model.get_param('device')
self.l_text_seq_length = model.get_param('l_text_seq_length')
self.r_text_seq_length = model.get_param('r_text_seq_length')
self.image_tokens_per_dim = model.get_param('image_tokens_per_dim')
self.image_seq_length = model.get_param('image_seq_length')
self.epochs = 5
self.save_path='checkpoints/'
self.model_name = 'awesomemodel_'
self.save_every = 500
self.bs = 2
self.clip = 1.0
self.lr = 2e-5
self.freeze = False
self.wandb = False
self.train_steps = 10
self.lt_loss_weight = 0.01
self.img_loss_weight = 1
self.rt_loss_weight = 7
self.image_size = self.image_tokens_per_dim * 8
args = Args(model)
if not os.path.exists(args.save_path):
os.makedirs(args.save_path)
class FoodDataset(Dataset):
def __init__(self, file_path, csv_path, tokenizer, shuffle=True):
self.tokenizer = tokenizer
self.samples = []
self.image_transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.RandomResizedCrop(args.image_size, scale=(1., 1.), ratio=(1., 1.)),
T.ToTensor()
])
df = pd.read_csv(csv_path)
df.columns = ['index', 'belok', 'fats', 'uglevod', 'kkal', 'name', 'path']
for belok, fats, uglevod, kkal, caption, f_path in zip(
df['belok'],df['fats'], df['uglevod'], df['kkal'], df['name'], df['path']
):
            caption = f'блюдо: {caption}; белков: {belok}; жиров: {fats}; углеводов: {uglevod}; ккал: {kkal};'
if len(caption)>10 and len(caption)<100 and os.path.isfile(f'{file_path}/{f_path}'):
self.samples.append([file_path, f_path, caption.lower()])
if shuffle:
np.random.shuffle(self.samples)
print('Shuffled')
def __len__(self):
return len(self.samples)
def load_image(self, file_path, img_name):
return PIL.Image.open(f'{file_path}/{img_name}')
def __getitem__(self, item):
item = item % len(self.samples)
file_path, img_name, text = self.samples[item]
try:
image = self.load_image(file_path, img_name)
image = self.image_transform(image)
except Exception as err:
print(err)
random_item = random.randint(0, len(self.samples) - 1)
return self.__getitem__(random_item)
text = text.lower().strip()
encoded = self.tokenizer.encode_text(text, text_seq_length=args.r_text_seq_length)
return encoded, image
```
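The caption template above packs the nutritional columns into a single Russian prompt string ("блюдо" = dish, "белков" = proteins, "жиров" = fats, "углеводов" = carbs, "ккал" = kcal). For example, with a hypothetical row:

```python
# Hypothetical row values, just to show the resulting prompt format
row = {"name": "Борщ", "belok": 3.5, "fats": 4.0, "uglevod": 6.2, "kkal": 75}
caption = (f"блюдо: {row['name']}; белков: {row['belok']}; "
           f"жиров: {row['fats']}; углеводов: {row['uglevod']}; ккал: {row['kkal']};")
print(caption.lower())
# → блюдо: борщ; белков: 3.5; жиров: 4.0; углеводов: 6.2; ккал: 75;
```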
# Let's look at what is inside the food dataset 🤔
```
dataset = FoodDataset(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
args.train_steps = len(dataset)//args.bs
class FoodDataModule(pl.LightningDataModule):
def __init__(self, file_path, csv_path, tokenizer):
super().__init__()
def setup(self, stage=None):
self.train_dataset = FoodDataset(file_path='/content/food',
csv_path ='/content/food/food.csv',
tokenizer=tokenizer)
def train_dataloader(self):
return DataLoader(
self.train_dataset,
batch_size=args.bs,
shuffle=True,
)
data_module = FoodDataModule(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
idx = random.randint(0, len(dataset)-1)
encoded, image = dataset[idx]
print(tokenizer.decode_text(encoded))
plt.imshow(image.permute(1,2,0).cpu().numpy());
idx = random.randint(0, len(dataset)-1)
encoded, image = dataset[idx]
print(tokenizer.decode_text(encoded))
plt.imshow(image.permute(1,2,0).cpu().numpy());
df = pd.read_csv('/content/food/food.csv')
wc, c = WordCloud(), Counter()
for text in df['name']:
try:
c.update(wc.process_text(text))
except:
continue
wc.fit_words(c)
plt.figure(figsize=(7,7));
plt.imshow(wc, interpolation='bilinear');
plt.axis("off");
import seaborn as sns
text_value_counts = pd.DataFrame(df['name'].value_counts())
ax = sns.histplot(data=text_value_counts, x="name");
ax.set_title('Duplicated text count histogram');
ax.set_xlabel('duplicates count');
```
# Train this deer 🦌
```
class Rudolph_(pl.LightningModule):
def __init__(self, args, vae):
super().__init__()
self.model = get_rudolph_model('350M', fp16=False, device=self.device)
#self.vae = get_vae(dwt=False).to(self.device)
print(self.device)
def forward(self,
input_ids,
lt_loss_weight=0.1,
img_loss_weight=0.8,
rt_loss_weight=0.1,
return_loss=True):
        total_seq_length = args.l_text_seq_length + args.image_seq_length + args.r_text_seq_length  # image_seq_length already counts all image tokens
masks = torch.ones(args.bs, args.r_text_seq_length, dtype=torch.int32)
attention_mask = get_attention_mask(masks, args.bs, args.l_text_seq_length, args.image_tokens_per_dim,
args.r_text_seq_length, self.device)
loss, loss_values = self.model.forward(input_ids,
attention_mask,
lt_loss_weight=lt_loss_weight,
img_loss_weight=img_loss_weight,
rt_loss_weight=rt_loss_weight,
return_loss=True)
return loss
def training_step(self, batch):
text, images = batch[0], batch[1]
image_input_ids = vae.get_codebook_indices(images).to(self.device)
r_text = text.to(self.device)
l_text = torch.zeros((args.bs, args.l_text_seq_length), dtype=torch.long).to(self.device)
input_ids = torch.cat((l_text, image_input_ids, r_text), dim=1)
loss = self.forward(input_ids,
lt_loss_weight=args.lt_loss_weight,
img_loss_weight=args.img_loss_weight,
rt_loss_weight=args.rt_loss_weight,
return_loss=True)
self.log("train_loss", loss, prog_bar=True, logger=True)
return {"loss": loss}
def training_epoch_end(self, outputs):
pass
def _freeze(self,
params,
freeze_emb=False,
freeze_ln=False,
freeze_attn=True,
freeze_ff=True,
freeze_other=False):
        # Expects an iterable of (name, parameter) pairs, e.g. self.named_parameters()
        for name, p in params:
            name = name.lower()
            if 'ln' in name or 'norm' in name:
                p.requires_grad = not freeze_ln
            elif 'embeddings' in name:
                p.requires_grad = not freeze_emb
            elif 'mlp' in name:
                p.requires_grad = not freeze_ff
            elif 'attn' in name:
                p.requires_grad = not freeze_attn
            else:
                p.requires_grad = not freeze_other
    def configure_optimizers(self):
        if args.freeze:
            # Freeze the selected parameter groups in place, then optimize the rest
            self._freeze(self.named_parameters())
        optimizer = torch.optim.Adam(
            filter(lambda p: p.requires_grad, self.parameters()), lr=args.lr)
        #bnb.optim.Adam8bit(self.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer,
max_lr=args.lr,
final_div_factor=500,
steps_per_epoch=args.train_steps,
epochs=args.epochs
)
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
from pytorch_lightning.loggers import WandbLogger
# I use wandb as the logger; replace it with TensorBoard if needed
wandb_logger = WandbLogger(project="rudolf")
from pytorch_lightning.callbacks import ModelCheckpoint, EarlyStopping
from pytorch_lightning.loggers import TensorBoardLogger
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints",
    filename="best-checkpoint",
    save_top_k=1,
    verbose=True,
    monitor="train_loss",  # mode="min" needs a metric to monitor
    mode="min"
)
model = Rudolph_(args,vae)
data_module = FoodDataModule(file_path='/content/food' ,csv_path ='/content/food/food.csv',tokenizer=tokenizer)
trainer = pl.Trainer(
logger=wandb_logger,
checkpoint_callback=checkpoint_callback,
max_epochs=2,
accelerator="gpu",
progress_bar_refresh_rate=30
)
trainer.fit(model,data_module)
trainer.save_checkpoint('/rudolf')
```
# Image2text: let's test the trained model
```
def _fix_pl(path):
d = torch.load(path)["state_dict"]
checkpoint = {}
for key in d.keys():
checkpoint[key.replace('model.','')] = d[key]
torch.save(checkpoint,'fixed.pt')
template = 'блюдо:'
import requests
from PIL import Image
import torch
device = 'cuda'
model = get_rudolph_model('350M', fp16=True, device=device)
tokenizer = get_tokenizer()
vae = get_vae(dwt=False).to(device)
# the exact checkpoint path may differ, since PyTorch Lightning versions its run directories
_fix_pl('/content/rudolf/1033wc66/checkpoints/epoch=1-step=474-v1.ckpt')
model.load_state_dict(torch.load('fixed.pt'))
img_by_url = 'https://kulinarenok.ru/img/steps/31445/1-7.jpg' #@param {type:"string"}
# img_by_url = 'https://img.delo-vcusa.ru/2020/11/Borshh-s-yablokami.jpg'
img_by_url = Image.open(requests.get(img_by_url, stream=True).raw).resize((128, 128))
#@markdown number of images
captions_num = 4 #@param{type:'slider'}
display(img_by_url)
texts = generate_captions(img_by_url, tokenizer, model, vae, template=template,
top_k=16, captions_num=captions_num, bs=16, top_p=0.6, seed=43,
temperature=0.8, limit_eos=False)
ppl_text, ppl_image = self_reranking_by_image(texts, img_by_url, tokenizer, model, vae, bs=16, seed=42)
for idx in ppl_image.argsort()[:8]:
print(texts[idx])
```
# Quantitative omics
The exercises of this notebook correspond to different steps of the data analysis of quantitative omics data. We use data from transcriptomics and proteomics experiments.
## Installation of libraries and necessary software
Copy the files *me_bestprobes.csv* and _AllQuantProteinsInAllSamples.csv_ into the folder that contains this jupyter notebook or upload them to http://localhost:8888/tree
Install the necessary libraries (only needed once) by executing (shift-enter) the following cell:
```
install.packages("DAAG", repos='http://cran.us.r-project.org')
install.packages("MASS", repos='http://cran.us.r-project.org')
install.packages("matrixStats", repos='http://cran.us.r-project.org')
if (!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install(c("Biobase","preprocessCore","qvalue","limma"))
```
## Loading data and libraries
This requires that the installation above has finished without errors.
```
library("MASS")
library("DAAG")
library("matrixStats")
library("Biobase")
library("preprocessCore")
library("qvalue")
library("limma")
me_Kalinka <- read.csv("me_bestprobes.csv",row.names=1)
CanceriTRAQ <- read.csv("AllQuantProteinsInAllSamples.csv",row.names=1)
```
### Exercise 1
We apply different ways of normalization to a typical microarray data set.
Get the data ```geneData``` from the ```Biobase``` package. Normalize the columns (by division on the normal scale or subtraction on the log scale) by a) the mean, b) the median, c) the mean of the log-values, and d) the median of the log-values. Examine the results thoroughly by comparing the distributions in histograms, density plots, ranked plots, and ```qqnorm```. Also compare replicates directly using scatter plots.
```
data(geneData)
geneData[geneData<=0] <- NA
logDat <- log2(geneData)
```
##### Question I: <u>Would you plot the data on log-scale or on normal scale?</u>
_Answer_
##### Question II: <u>What does qqnorm tell us?</u>
_Answer_
##### Question III: <u>What is the problem when normalizing by the mean on normal scale?</u>
_Answer_
##### Question IV: <u>What is the difference between normalization b) and d)?</u>
_Answer_
### Exercise 2
Here, we will determine differentially regulated genes from the comparison between different sample groups of geneData.
a) Take the log-transformed ```geneData``` set and perform t-tests for all genes between sample groups (B, I, K, N, P, T) and (C, G, J, O, R, U, V). You can copy and modifiy the code from the lecture. Do not forget to correct for multiple testing. Plot a histogram of the p-values and generate a volcano plot.
b) In order to see whether the t-tests also provide results for any comparison, take randomly chosen samples of 6 versus 6 groups and redo the statistical tests.
c) Carry out a principal component analysis on the entire data set and look for the groups that you tested for significantly different genes (loading plot) in a).
```
data(geneData)
geneData[geneData<=0] <- NA
logDat <- log2(geneData)
logDat <- logDat[complete.cases(logDat),]
pvals <- vector(,nrow(logDat))
for(i in 1:nrow(logDat)) {
pvals[i] <- t.test(logDat[i, c("B", "I", "K", "N", "P", "T")], logDat[i, c("C", "G", "J", "O", "R", "U", "V")])$p.value
}
pvals2 <- apply(logDat, 1, function(x) t.test(x[c("B", "I", "K", "N", "P", "T")] , x[c("C", "G", "J", "O", "R", "U", "V")])$p.value)
hist(pvals, 100)
fdrs <- p.adjust(pvals, method = "BH")
plot(rowMeans(logDat[, c("B", "I", "K", "N", "P", "T")]) -
rowMeans(logDat[, c("C", "G", "J", "O", "R", "U", "V")]),
-log10(fdrs))
abline(h=1)
abline(v=c(-2,2))
samples <- sample(LETTERS, 12)
g1 <- samples[1:6]
g2 <- samples[7:12]
pvals <- vector(,nrow(logDat))
for(i in 1:nrow(logDat)) {
pvals[i] <- t.test(logDat[i, g1], logDat[i, g2])$p.value
}
pvals2 <- apply(logDat, 1, function(x) t.test(x[g1] , x[g2])$p.value)
hist(pvals, 100)
fdrs <- p.adjust(pvals, method = "BH")
plot(rowMeans(logDat[, g1]) -
rowMeans(logDat[, g2]),
-log10(fdrs))
abline(h=1)
abline(v=c(-2,2))
pca.out <- princomp(logDat)
plot(pca.out$loadings)
text(pca.out$loadings, colnames(logDat), pos=2)
# ...
```
##### Question I: <u>How many differentially regulated genes do you find in a) and in b) (p-value below 0.01)?</u>
_Answer_
##### Question II: <u>Why does a volcano plot look like a volcano?</u>
_Answer_
##### Question III: <u>What does the PCA tell you about part a) of this exercise?</u>
_Answer_
### Exercise 3
In bottom-up LC-MS experiments, the output consists of peptides, which can be shared between different proteins. This is why the results usually report protein groups instead of single proteins. Here, you will apply different operations to the reported protein groups.
Read the file _ExampleFile.csv_ and extract the column with the protein accession numbers.
a) Pick out one of the values and apply ```strsplit``` to separate database name (e.g. TREMBL, SWISS-PROT) from accession id.
b) Take a value with multiple protein accessions and extract only the accession ids.
c) Apply ```strsplit``` to the entire column and try to extract the accession ids.
d) Count the number of proteins per protein group and plot their distribution as a histogram.
```
A <- read.csv("ExampleFile.csv")
protaccs <- A$Protein.Accessions
protaccs[60:65]
# a)
example_str <- strsplit(as.character(protaccs[63]),":",fixed = T)
example_str[[1]][2]
# b)
unlist(strsplit(strsplit(as.character(protaccs[63]),":",fixed = T)[[1]][2],";",fixed=T))
# c) Still some SWISS-PROT in the array though
allprots <- list()
for (i in 1:length(protaccs)) {
str1 <- strsplit(as.character(protaccs[i]),":",fixed = T)
# print(str1[[1]])
if (length(str1[[1]])>1)
allprots[[i]] <- unlist(strsplit(str1[[1]][2],";",fixed=T))
}
# d) Count proteins per group and plot the distribution
hist(sapply(allprots, length), 50)
table(sapply(allprots, length))
```
##### Question I: <u>What is the difference between TREMBL and SWISS-PROT annotations?</u>
_Answer_
##### Question II: <u>What is the advantage of measuring multiple peptides of a protein?</u>
_Answer_
##### Question III: <u>How many proteins does the largest protein group contain?</u>
_Answer_
### Exercise 4
We will test different normalization methods on micro-array data from _Drosophila melanogaster_ development (https://www.nature.com/articles/nature09634).
a) Make a boxplot and compare the different developmental stages.
Make a scatter plot and change the sample numbers to see how the samples compare quantitatively.
Look at the MA plot and make sure you understand what it shows.
b) Carry out median normalization and look at the plots of the normalized data.
c) Carry out quantile normalization ```normalize.quantiles(microarray)``` and look at the plots again.
```
microarray <- me_Kalinka[,2:ncol(me_Kalinka)]
#boxplot(microarray)
sample1 <- 1
sample2 <- 7
plot(rowMeans(microarray,na.rm=T), microarray[,sample2]-microarray[,sample1], cex=0.5, pch=15, col="#00000033",
     xlab="Mean log-intensity (A)", ylab=paste("Sample", sample2, "- Sample", sample1, "(M)"))
abline(h=0)
# add different normalizations here
# plot again
```
##### Question I: <u>Can you spot the difference between the developmental states from the boxplot?</u>
_Answer_
##### Question II: <u>What complicates normalization of such a data set with large differences?</u>
_Answer_
##### Question III: <u>What sometimes rather drastic changes do you observe in the data when using quantile normalization?</u>
_Answer_
##### Question IV: <u>Which normalization would you recommend?</u>
_Answer_
### Exercise 5
In this exercise, you will apply statistical tests to proteomics data.
Carry out t-tests between the two cancer subtypes of the ```CanceriTRAQ``` data (from https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0137048). Plot the p-values (corrected for multiple testing) in a volcano plot and compare the results to the ones in the _IsoProt_ paper (https://pubs.acs.org/doi/10.1021/acs.jproteome.8b00968)
Compare the results for the two types of correction for multiple testing: "Benjamini-Hochberg" and the ```qvalue``` library ("Storey" method). You can make a scatter plot of the FDRs (corrected p-values) on a log scale, and also compare by making two volcano plots.
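For reference, the arithmetic behind the Benjamini-Hochberg adjustment that `p.adjust(..., method = "BH")` performs can be sketched in a few lines. This is shown in Python rather than R, purely to spell out the procedure; it is a minimal sketch, not a replacement for the library call:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (FDRs) for a list of p-values."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices sorted by p-value
    adjusted = [0.0] * n
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity of the adjusted values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = min(running_min, 1.0)
    return adjusted

print(bh_adjust([0.01, 0.02, 0.03, 0.5]))
```

The Storey q-value method additionally estimates the proportion of true null hypotheses, which typically makes its FDR estimates less conservative than Benjamini-Hochberg.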
```
CanceriTRAQRed <- CanceriTRAQ[rowSums(is.na(CanceriTRAQ))<3,]
# Add your code here:
```
##### Question I: <u>What does the first line of code do?</u>
_Answer_
##### Question II: <u>How many p-values <0.05 and 0.1 do you get? How many after correction for multiple testing?</u>
_Answer_
##### Question III: <u>What would be needed to increase the number of significantly changing proteins?</u>
_Answer_
##### Question IV: <u>How many p-values below 0.05 would a randomized data set of the same size give without correction for multiple testing?</u>
_Answer_
##### Question V: <u>Name the difference you observe when comparing the two methods ("Benjamini-Hochberg" and "Storey")</u>
_Answer_
### Exercise 6
The ```limma``` package provides better estimates of the p-values by moderating the observed variances of the features toward the general trend in the data (an empirical Bayes approach). We will further use different tools for biological interpretation.
Carry out limma testing on the cancer data and compare the results to the ones from the t-tests.
Take the 50 most regulated proteins and upload them to the following two web services for biological interpretation:
- DAVID: http://david.ncifcrf.gov
- GOrilla http://cbl-gorilla.cs.technion.ac.il/
```
## limma
# Set replicate numbers
Reps <- c(1,1,1,1,2,2,2,2)
Data <- CanceriTRAQ
NumCond <- max(Reps)
design <- model.matrix(~0+factor(Reps-1))
colnames(design)<-paste("i",c(1:NumCond),sep="")
contrasts<-NULL
First <- 1
for (i in (1:NumCond)[-First]) contrasts<-append(contrasts,paste(colnames(design)[i],"-",colnames(design)[First],sep=""))
contrast.matrix<-makeContrasts(contrasts=contrasts,levels=design)
print(dim(Data))
lm.fitted <- lmFit(Data,design)
lm.contr <- contrasts.fit(lm.fitted,contrast.matrix)
lm.bayes<-eBayes(lm.contr)
#topTable(lm.bayes)
# These are the (uncorrected) p-values from the moderated t-test from the limma package:
plvalues <- lm.bayes$p.value
head(sort(p.adjust(plvalues, method="BH")))
```
##### Question I: <u>How many regulated proteins do you find this time (FDR < 0.05)?</u>
_Answer_
##### Question II: <u>Which are the most enriched Gene ontology terms (GO terms, BP) in both web sites?</u>
_Answer_
##### Question III: <u>Which pathways are likely to distinguish the two cancer subtypes?</u>
_Answer_
<table><tr>
<td style="background-color:#ffffff;text-align:left;"><a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="30%" align="left"></a></td>
<td style="background-color:#ffffff;"> </td>
<td style="background-color:#ffffff;vertical-align:text-middle;text-align:right;">
<table><tr style="background-color:white;">
<td> Visit</td>
<td><a href="http://qworld.lu.lv" target="_blank"><img src="../images/web-logo.png" width="35px"></a></td>
<td width="10pt"></td>
<td> Join</td>
<td><a href="https://qworldworkspace.slack.com/" target="_blank"><img src="../images/slack-icon.png" width="80px"></a></td>
<td width="10pt"></td>
<td>Follow</td>
<td><a href="https://www.facebook.com/qworld19/" target="_blank"><img src="../images/facebook-icon.png" width="40px"></a></td>
<td><a href="https://twitter.com/QWorld19" target="_blank"><img src="../images/twitter-icon.png" width="40px"></a></td>
</tr></table>
</td>
</tr></table>
<h2> Credits </h2>
<font style="color: #cd7f32;"><b>Bronze</b></font> was created by <a href="http://abu.lu.lv" target="_blank"><b>Dr. Abuzer Yakaryilmaz</b></a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) in October 2018, and most of it has been developed by him.
<b>Dr. Maksims Dimitrijevs</b> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>) and <b>Dr. Özlem Salehi Köken</b> (<a href="http://qworld.lu.lv/index.php/qturkey/" target="_blank">QTurkey</a>) have revised all notebooks, proposed certain changes, and prepared a couple of new notebooks.
The first recorded lectures were prepared by <b>Dr. Abuzer Yakaryilmaz</b>, <b>Dr. Özlem Salehi Köken</b>, and <b>Anastasija Trizna</b> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>).
Since <b>July 7, 2019</b>, Bronze has been hosted in a public GitLab repository (https://gitlab.com/qkitchen/basics-of-quantum-computing), and it is expected to receive contributions from the public as well.
<hr>
<h3>Bronze 2020</h3>
Bronze has been revised throughout 2020.
We thank the participants of the [QTraining for Bronze program](https://qworld.lu.lv/index.php/qtraining-for-bronze-2020/) for their corrections and suggestions.
<hr>
<h3>Bronze 2019</h3>
We thank the <b><i><a href="https://qworld.lu.lv/index.php/qdrive/" target="_blank">QDrive</a> mentors and participants</i></b> for their very helpful corrections and suggestions.
We thank <b><i><a href="https://pl.linkedin.com/in/adamglos92" target="_blank">Adam Glos</a></i></b> (<a href="http://qworld.lu.lv/index.php/qpoland/" target="_blank">QPoland</a>) for his comments on Bronze 2018.
<hr>
<h3>Bronze 2018</h3>
We thank <b><i>Katrina Kizenbaha</i></b> from Riga TechGirls for her revisions of our notebooks on Python.
We thank <b><i>Martins Kalis</i></b> (QLatvia) for his technical comments on Python, Qiskit, and our notebooks.
We thank <b><i>Maksims Dimitrijevs</i></b> (QLatvia) for his careful reading and corrections of our notebooks.
We thank QLatvia members and former members <b><i>Martins Kalis</i></b>, <b><i>Maksims Dimitrijevs</i></b>, <b><i>Aleksejs Naumovs</i></b>, <b><i>Andis Draguns</i></b>, and <b><i>Matiss Apinis</i></b> for their help and support.
We thank <b><i>the students (<a href="https://www.df.lu.lv">DF@LU</a>) attending the quantum programming meetings</i></b> held every Friday (Fall 2018) for their comments while working with our notebooks.
<hr>
```
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
cd 'drive/My Drive/Colab Notebooks/machine_translation'
from dataset import MTDataset
from model import Encoder, Decoder
from language import Language
from utils import preprocess
from train import train
from eval import validate
from translate import translate
sentences_inp_train, sentences_trg_train = preprocess('datasets/train/train.en', 'datasets/train/train.vi', max_len=20)
sentences_inp_val, sentences_trg_val = preprocess('datasets/dev/tst2012.en', 'datasets/dev/tst2012.vi', max_len=20)
train_inp = Language(sentences_inp_train)
train_trg = Language(sentences_trg_train)
val_inp = Language(sentences_inp_val, train=False, word2id=train_inp.word2id, id2word=train_inp.id2word)
val_trg = Language(sentences_trg_val, train=False, word2id=train_trg.word2id, id2word=train_trg.id2word)
train_set = MTDataset(train_inp.wordvec, train_trg.wordvec)
val_set = MTDataset(val_inp.wordvec, val_trg.wordvec)
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
val_loader = DataLoader(val_set, batch_size=64)
Tx, Ty = train_inp.max_len, train_trg.max_len
vocab_size_inp, vocab_size_trg = train_inp.vocab_size, train_trg.vocab_size
embedding_dim = 256
hidden_size = 1024
if torch.cuda.is_available():
device='cuda'
else:
device='cpu'
encoder = Encoder(vocab_size_inp, embedding_dim, hidden_size).to(device=device)
decoder = Decoder(hidden_size, vocab_size_trg, embedding_dim).to(device=device)
optimizer = torch.optim.Adam(params=list(encoder.parameters()) + list(decoder.parameters()))
criterion = nn.CrossEntropyLoss()
scheduler = StepLR(optimizer, step_size=2, gamma=0.5)
train(encoder, decoder, train_loader, val_loader, optimizer, criterion, train_trg.id2word, scheduler, 10, 200, device)
torch.save(encoder.state_dict(), 'encoder.pth')
torch.save(decoder.state_dict(), 'decoder.pth')
import string
exclude = list(string.punctuation) + list(string.digits)
test_sen = 'hello i am a student'
test_sen = ''.join([char for char in test_sen if char not in exclude]).strip().lower()
test_sen = '<START> ' + test_sen + ' <END>'
length = len(test_sen.split())
diff = train_inp.max_len - length
test_sen = test_sen + ''.join([' <PAD>']*diff)
test_vec = [train_inp.word2id[s] for s in test_sen.split()]
test_tensor = torch.Tensor(test_vec).to(device=device, dtype=torch.long).unsqueeze(0)
with torch.no_grad():
encoder.eval()
decoder.eval()
enc_out, enc_hidden_backward, enc_hidden_forward = encoder(test_tensor)
dec_hidden = enc_hidden_backward
dec_input = torch.Tensor([train_trg.word2id['<START>']]).to(device=device, dtype=torch.long)
for t in range(1, Ty):
out, dec_hidden = decoder(dec_input, dec_hidden, enc_out)
dec_input = torch.max(out, dim=-1)[1].squeeze(1)
next_id = dec_input.squeeze().clone().cpu().numpy()
next_word = train_trg.id2word[next_id]
if next_word == '<END>':
break
print(next_word)
translate('i am a student', train_inp.word2id, train_trg.word2id, train_trg.id2word, encoder, decoder, 20, device)
decoder.load_state_dict(torch.load('decoder.pth'))
train_inp.id2word[4112]
train_trg.sentences[0]
from nltk.translate.bleu_score import corpus_bleu, sentence_bleu, SmoothingFunction
ref, hyp, bleu = validate()
hyp[0]
ref1 = 'the cat is on the mat'.split()
ref2 = 'there is a cat on the mat'.split()
hyp = 'the cat the cat on the mat'.split()
corpus_bleu([[ref1, ref2]], [hyp])
ref3 = 'i am student ngo anh tu'.split()
ref4 = 'my name is student ngo anh tu'.split()
hyp2 = 'there is a student ngo anh tu'.split()
corpus_bleu([[ref1, ref2], [ref3, ref4]], [hyp, hyp2])
sentence_bleu([ref1, ref2], hyp)
sentence_bleu([ref3, ref4], hyp2)
validate()
```

<a href="https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fcurriculum-notebooks&branch=master&subPath=Mathematics/FractionMultiplication/FractionMultiplication.ipynb&depth=1" target="_parent"><img src="https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true" width="123" height="24" alt="Open in Callysto"/></a>
```
import uiButtons
%uiButtons
```
# Fractions and Multiplication
## Visualizing Fraction Multiplication
## Introduction
An important skill to have when it comes to fractions is knowing how to multiply them together.<br>
As we know, fractions are of the form $\frac{a}{b}$ with $a$ and $b$ integers and $b\neq 0$. <br>
You can think of $\frac{a}{b}$ as the number you get when you do $a\div b$. <br>
If we think of a fraction as a division problem then it makes sense that it works well with multiplication.<br>
Unlike addition, multiplying fractions is easy and straightforward. <br>
In this notebook we will look into two forms of fraction multiplication:
- multiplying two fractions together (e.g. $\dfrac{4}{7} \times \dfrac{2}{3}$ )
- multiplying a fraction by an integer (e.g. $\dfrac{4}{7} \times 3$ )
## Procedure
As mentioned earlier, multiplying two fractions together is simple.<br>
Let's say we want to multiply the fractions $\dfrac{4}{7}$ and $\dfrac{2}{3}$.<br>
All we have to do is multiply the numerators (top numbers) together, then multiply the denominators (bottom numbers) together. Let's take a look:
$$
\frac{4}{7} \times \frac{2}{3}=\frac{4\times 2}{7\times 3}=\frac{8}{21}
$$
Let's try another example. Take the fractions $\dfrac{3}{5}$ and $\dfrac{2}{3}$. To multiply them we multiply the numerators together and the denominators together:
$$
\frac{3\times 2}{5\times 3}=\frac{6}{15}
$$
In this example, you might notice that the result is not in lowest terms: both 6 and 15 are divisible by 3, so we get $\dfrac{6}{15} = \dfrac25$. In a later notebook, we'll focus on mechanics like this. For now, we want to focus on a visual understanding of the problem.
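If you want to check such a calculation programmatically, Python's standard-library `fractions` module multiplies fractions this way and reduces the result to lowest terms automatically. This is a quick sketch for verification, not part of the lesson's widgets:

```python
from fractions import Fraction

# 3/5 × 2/3: numerators are multiplied together, denominators are multiplied together
product = Fraction(3, 5) * Fraction(2, 3)
print(product)  # 2/5 — Fraction reduces 6/15 to lowest terms automatically
```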
Now that we know how to multiply two fractions, let's think about what it actually means.<br>
Recall that a fraction simply represents a part of something. We can think of multiplying fractions together as taking a part of another part. In other words $\dfrac{1}{2}\times\dfrac{1}{2}$ is like saying $\dfrac{1}{2}$ of $\dfrac{1}{2}$ (one half **of** one half). If we have $\dfrac{1}{2}$ of a pizza and we want $\dfrac{1}{2}$ of that half what do we end up with?<br>
<img src="./images/pizza.png" width="400px">
We get $\dfrac{1}{4}$ because $\dfrac{1}{2}\times\dfrac{1}{2}=\dfrac{1}{4}$.<br>
Watch the video below to help us further visualize this concept.
```
%%html
<div align="middle">
<iframe id="vid1" width="640" height="360" src="https://www.youtube.com/embed/hr_mTd-oJ-M" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p><a href="https://www.youtube.com/channel/UC4a-Gbdw7vOaccHmFo40b9g" target="_blank">Click here</a> for more videos by Khan Academy</p>
</div>
<script>
$(function() {
var reachable = false;
var myFrame = $('#vid1');
var videoSrc = myFrame.attr("src");
myFrame.attr("src", videoSrc)
.on('load', function(){reachable = true;});
setTimeout(function() {
if(!reachable) {
var ifrm = myFrame[0];
ifrm = (ifrm.contentWindow) ? ifrm.contentWindow : (ifrm.contentDocument.document) ? ifrm.contentDocument.document : ifrm.contentDocument;
ifrm.document.open();
ifrm.document.write('If the video does not start click <a href="' + videoSrc + '" target="_blank">here</a>');
ifrm.document.close();
}
}, 2000)
});
</script>
```
## Interactive visualization
The widget below allows you to visualize fraction multiplication as shown in the video. To begin, enter a fraction in the boxes below.
```
%%html
<script src="./d3/d3.min.js"></script>
<!-- <script src="https://d3js.org/d3.v3.min.js"></script> -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<style>
.fractionInput {
max-width: 40px;
}
.fractionBar {
width: 40px;
height: 3px;
background-color: #000000;
}
.ingredientsInput {
margin-left: 10px;
margin-right: 10px;
max-width: 40px;
/* float: right; */
}
#speech {
margin: 50px;
font-size: 150%;
}
li {
margin-bottom: 15px;
}
</style>
%%html
<div class="fractionInputs" style="margin:20px">
<h1 id="leftInputFractionText" style="float: left; display: none"></h1>
<div id="opperandInput" style="float: left; display: block">
<input type="text" class="fractionInput form-control form-control-sm" id="oppNumerator" placeholder="0" style="margin-bottom: -10px;">
<hr align="left" class="fractionBar">
<input type="text" class="fractionInput form-control form-control-sm" id="oppDenominator" placeholder="1" style="margin-top: -10px;">
</div>
<button type="button" id="continueBtn" class="btn btn-primary buttons" style="margin: 30px">Continue</button>
</div>
<div class="canvasDiv" style="clear: left">
<svg height="500" width="500" viewbox="0 0 500 500" mlns="http://www.w3.org/2000/svg" id="mainCanvas" style="float: left">
<rect id="mainBox" height="480" width="480" x="10" y="10" style="outline: solid #000000 3px; fill:#ffffff"></rect>
<rect id="leftOpperand" height="480" width="0" x="10" y="10"></rect>
<rect id="rightOpperand" height="0" width="480" x="10" y="10"></rect>
</svg>
</div>
<div>
<p id="speech">Enter a fraction inside the boxes provided then click continue.</p>
</div>
<div style="clear: left; margin-left: 10px">
<button type="button" id="resetFractionBoxBtn" class="btn btn-primary buttons">Reset</button>
</div>
```
## Multiplying a fraction by an integer
In this section we will talk about multiplying a fraction like $\dfrac{4}{7}$, with an integer such as $3$. A good example of when this could be useful is when you need to double a recipe. <br>
Doing multiplication of this form is simply a special case of multiplying two fractions together since any integer, such as $3$ in this case, can be rewritten as $\dfrac{3}{1}$. On a calculator, try inputting any number divided by $1$, and you will always get back the original number. <br>
Let's demonstrate this with an example. To multiply the fraction $\dfrac{4}{7}$ and the integer $3$, remember that we can write $3$ as $\dfrac31$. We get
$$
\frac{4}{7}\times\frac{3}{1} = \frac{4\times 3}{7\times 1}= \frac{12}{7}
$$
**Note that $\dfrac{3}{1}$ is an improper fraction. Improper fractions follow all the same rules for multiplication as proper fractions.**
The big takeaway is that the denominator does not change, as it is simply multiplied by $1$. This means we did not change the "whole"; we only changed how many parts of the "whole" we have (the numerator). In effect, all we did was triple our fraction, since our constant was $3$. <br>
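The same `fractions` module illustrates the integer case: multiplying by an integer only scales the numerator. Again, this is just a verification sketch:

```python
from fractions import Fraction

result = Fraction(4, 7) * 3  # the integer 3 is treated as the fraction 3/1
print(result)                # 12/7 — numerator tripled, denominator unchanged
```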
Let's practice what we just learned with a recipe example. Below you will see the ingredient list for the famous **Fresh Tomato and Basil Pasta Salad** recipe. This recipe makes enough for 4 servings, but we would like to double the recipe in order to serve 8 people. Apply what we have learned so far to double the ingredients list for the **tomato and basil pasta salad** in order to make 8 servings.
(Enter your answers in the provided boxes. Fractions should be written using the _forward slash_ key "/", e.g. 5/8. When you're done, click _Check Answers_ to see if you are correct!)
```
%%html
<div class="ingredientsList">
<h1>Fresh Tomato and Basil Pasta Salad</h1>
<img src="./images/pastaSalad.jpg" width=250 style="float: left; margin-right: 50px; box-shadow: 5px 6px 25px 3px grey">
<ul style="max-width: 700px; margin-bottom">
<li><label>3 medium ripe tomatoes, chopped --></label><input id="tomatoes" class="ingredientsInput"></input><label>tomatoes</label></li>
<li><label>1/3 cup thinly sliced fresh basil --></label><input id="basil" class="ingredientsInput"></input><label>cup</label></li>
<li><label>2 Tbsp. olive oil --></label><input id="olivOil" class="ingredientsInput"></input><label>Tbsp.</label></li>
<li><label>1 clove garlic, minced --></label><input id="garlic" class="ingredientsInput"></input><label>clove</label></li>
<li><label>1/2 tsp. salt --></label><input id="salt" class="ingredientsInput"></input><label>tsp.</label></li>
<li><label>1/4 tsp. pepper --></label><input id="pepper" class="ingredientsInput"></input><label>tsp.</label></li>
<li><label>8 oz. rotini pasta pasta, uncooked --></label><input id="pasta" class="ingredientsInput"></input><label>oz.</label></li>
<li><label>3/4 cup Parmesan Style Grated Topping --></label><input id="parmesan" class="ingredientsInput"></input><label>cup</label></li>
</ul>
<button type="button" id="checkAnswerBtn">Check Answers</button>
<button type="button" id="resetBtn">Reset</button>
</div>
<div>
<h2 id="answerStatus"></h2>
</div>
```
## Conclusion
Throughout this notebook we looked at how easy multiplying fractions together really is. We also looked at how to multiply a fraction by a constant. Let's recap what we have learned:
- When multiplying two fractions together we multiply the numerators together and the denominators together: $\dfrac{a}{b}\times\dfrac{c}{d}=\dfrac{a \times c}{b \times d} = \dfrac{ac}{bd}$
- A constant can always be rewritten as the constant over 1: $c = \dfrac{c}{1}$
- To multiply a fraction by a constant, multiply the numerator by the constant and keep the denominator the same: $\dfrac{a}{b}\times c=\dfrac{a\times c}{b}=\dfrac{ac}{b}$
- Multiplying two fractions together is the same as saying _a part of a part_: $\dfrac{a}{b}\times\dfrac{c}{d}$ is like saying $\dfrac{a}{b}$ **of** $\dfrac{c}{d}$ (The equation $\dfrac{3}{5}\times\dfrac{1}{4}$ is the same as _three fifths **of** one quarter_)
```
%%html
<script>
var leftOpperand = {
id: 'leftOpperand',
numerator: Number(0),
denominator: Number(0),
colour: '#ff0066'
};
var rightOpperand = {
id: 'rightOpperand',
numerator: Number(0),
denominator: Number(0),
colour: '#0000ff'
};
var currentState = 0;
var getOpperandInput = function(numeratorInput, denominatorInput, opperand) {
opperand.numerator = document.getElementById(numeratorInput).value;
opperand.denominator = document.getElementById(denominatorInput).value;
}
var verticalDivide = function(xVal, lineNum) {
var i = xVal;
while(lineNum > 0){
addLine(Number(i + 10), Number(i + 10), 10, Number(d3.select('#mainBox').attr('height')) + 10);
i += xVal;
lineNum --;
}
};
var horizontalDivide = function(xVal, lineNum) {
var i = Number(xVal);
while(lineNum > 0){
addLine(10, Number(d3.select('#mainBox').attr('width')) + 10, Number(i + 10), Number(i +10));
i += xVal;
lineNum --;
}
};
var addLine = function (x1, x2, y1, y2) {
var dashed = '0,0';
var stroke = 2;
d3.select('#mainCanvas').append('line')
.attr('class', 'divLine ')
.attr('x1', x1)
.attr('x2', x2)
.attr('y1', y1)
.attr('y2', y2)
.style('stroke', 'black')
.style('stroke-width', stroke);
};
var fillBox = function(box, width, height, colour, opacity) {
d3.select('#' + box.id)
.style('fill', colour)
.style('opacity', opacity)
.transition().delay(function (d, i) {
return i * 300;
}).duration(500)
.attr('width', width)
.attr('height', height);
};
var changeOpacity = function(box, opacity) {
d3.select('#' + box.id).transition().delay(function (d, i) {
return i * 300;
}).duration(500)
.style('opacity', opacity);
d3.selectAll('.divLine').transition().delay(function (d, i) {
return i * 100;
}).duration(200)
.style('opacity', opacity);
};
var resetInputs = function() {
d3.select('#continueBtn').attr('disabled', null);
d3.selectAll('.divLine').remove();
d3.select('#leftOpperand').attr('width', 0);
d3.select('#rightOpperand').attr('height', 0);
d3.select('#leftInputFractionText').text('').style('display', 'none');
clearInput('oppNumerator');
clearInput('oppDenominator');
leftOpperand.numerator = Number(0);
leftOpperand.denominator = Number(0);
rightOpperand.numerator = Number(0);
rightOpperand.denominator = Number(0);
};
var isValid = function(numerator, denominator) {
if (numerator < 0 || numerator > 12) {
return false;
}
if (denominator <= 0 || denominator > 12) {
return false;
}
return (numerator < denominator);
};
var updateMathJax = function() {
MathJax.Hub.Queue(["Typeset",MathJax.Hub]);
};
var showInputBox = function(inputId) {
d3.select('#' + inputId).style('display', 'block');
};
var hideInputBox = function(inputId) {
d3.select('#' + inputId).style('display', 'none');
};
var clearInput = function(inputId) {
document.getElementById(inputId).value = '';
}
var stateControler = function(state) {
currentState = state;
setSpeech(state);
switch(state) {
case 0 :
resetInputs();
showInputBox('opperandInput');
break;
case 1 :
getOpperandInput('oppNumerator', 'oppDenominator', leftOpperand);
d3.select('#leftInputFractionText')
.text('$\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\times$')
.style('display', 'block');
updateMathJax();
verticalDivide(Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator), Number(leftOpperand.denominator - 1));
hideInputBox('opperandInput');
break;
case 2 :
fillBox(leftOpperand, Number(d3.select('#mainBox').attr('width')/leftOpperand.denominator) * leftOpperand.numerator, Number(d3.select('#mainBox').attr('height')), leftOpperand.colour, 1);
clearInput('oppNumerator');
clearInput('oppDenominator');
showInputBox('opperandInput');
break;
case 3 :
getOpperandInput('oppNumerator', 'oppDenominator', rightOpperand);
d3.select('#leftInputFractionText')
.text('$\\frac{'+leftOpperand.numerator+'}{'+leftOpperand.denominator+'} \\times$' + '$\\frac{'+rightOpperand.numerator+'}{'+rightOpperand.denominator+'}$');
updateMathJax();
changeOpacity(leftOpperand, 0);
horizontalDivide(Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator), Number(rightOpperand.denominator - 1));
hideInputBox('opperandInput');
break;
case 4 :
fillBox(rightOpperand, Number(d3.select('#mainBox').attr('width')), Number(d3.select('#mainBox').attr('height')/rightOpperand.denominator) * rightOpperand.numerator, rightOpperand.colour, 0.5);
break;
case 5 :
changeOpacity(leftOpperand, 1);
d3.select('#continueBtn').attr('disabled', true);
break;
default:
console.log('not a valid of state, returning to state 0');
stateControler(0);
}
};
var speech = [
"Enter a fraction in the boxes provided, then click continue.",
"Great! Now we see that the square has been divided into rectangles of equal size. The number of rectangles is given by the denominator. Click continue when ready.",
"Some of the equal parts have been filled in with pink. The numerator equals the number of pink rectangles. The ratio of the area in pink to the total area is our fraction. Enter another fraction to multiply then click continue.",
"Let's focus on the second fraction. The first one is temporarily hidden for clarity. As before, the number of rectangles we see equals the denominator. Click continue when ready.",
"Now we have a blue section representing the numerator of the second fraction. Click continue to multiply these two fractions.",
"Awesome! The first fraction is back and overlaid with the second fraction. The number of rectangles in the purple section is the numerator of our answer. Notice that this is the product of the numerators. The total number of rectangles is the denominator of the product, and this is just the product of the two denominators!"
];
function setSpeech(state) {
d3.select('#speech').text(speech[state]);
};
document.getElementById('continueBtn').onclick = function() {
if(!isValid(Number(document.getElementById('oppNumerator').value), Number(document.getElementById('oppDenominator').value))){
alert('Make sure your fractions are proper and the denominators are less than or equal to 12');
}
else {
stateControler(currentState + 1);
}
};
document.getElementById('resetFractionBoxBtn').onclick = function() {
resetInputs();
stateControler(0);
};
</script>
%%html
<script type="text/javascript">
var x = 2; // Recipe multiplier
getInput('checkAnswerBtn').onclick = function() {
if(checkAnswers()) {
d3.select('#answerStatus').text('Correct!! Good job.');
} else {
d3.select('#answerStatus').text('Not quite, keep trying!');
}
};
getInput('resetBtn').onclick = function() {
var inputs = document.getElementsByClassName('ingredientsInput');
for(var i = 0; i < inputs.length; i++) {
inputs[i].value = '';
}
d3.selectAll('.ingredientsInput').style('background-color', '#ffffff');
d3.select('#answerStatus').text('');
};
function checkAnswers() {
var isCorrect = true;
if(!checkAnswer('tomatoes', x*3))
isCorrect = false;
if(!checkAnswer('basil', x*(1/3)))
isCorrect = false;
if(!checkAnswer('olivOil', x*2))
isCorrect = false;
if(!checkAnswer('garlic', x*1))
isCorrect = false;
if(!checkAnswer('salt', x*(1/2)))
isCorrect = false;
if(!checkAnswer('pepper', x*(1/4)))
isCorrect = false;
if(!checkAnswer('pasta', x*8))
isCorrect = false;
if(!checkAnswer('parmesan', x*(3/4)))
isCorrect = false;
return isCorrect;
};
function checkAnswer(id, ans) {
if(eval(getInput(id).value) === ans) {
return answerCorrect(id);
}
return answerIncorrect(id);
};
function answerCorrect(id) {
d3.select('#' + id).style('background-color', '#76D177');
return true;
}
function answerIncorrect(id) {
d3.select('#' + id).style('background-color', '#BB4646');
return false;
}
function getInput(id) {
return document.getElementById(id);
};
</script>
```
[](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)
# 1 - Sequence to Sequence Learning with Neural Networks
In this series we'll be building a machine learning model to go from one sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.
In this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper.
## Introduction
The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.

The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:
$$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$
We're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).
Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.
Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.
Now we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:
$$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$
Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.
In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.
$$\hat{y}_t = f(s_t)$$
The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or a certain number of words have been generated.
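The inference stopping rule can be sketched with a toy decoding loop. Here `decode_step` is a hypothetical stand-in for a trained decoder that maps the previous token to the next one:

```python
def greedy_decode(decode_step, sos="<sos>", eos="<eos>", max_len=10):
    """Generate tokens until <eos> is produced or max_len is reached.

    decode_step is a hypothetical stand-in for the trained decoder."""
    tokens = []
    prev = sos
    for _ in range(max_len):
        prev = decode_step(prev)
        if prev == eos:
            break
        tokens.append(prev)
    return tokens

# Toy "decoder" that emits a fixed sentence and then <eos>
sentence = iter(["good", "morning", "<eos>"])
print(greedy_decode(lambda prev: next(sentence)))  # → ['good', 'morning']
```

The `max_len` cap guards against a model that never emits `<eos>`.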
Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.
## Preparing Data
We'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
```
We'll set the random seeds for deterministic results.
```
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
spaCy has a model for each language ("de_core_news_sm" for German and "en_core_web_sm" for English); each needs to be loaded so we can access its tokenizer.
**Note**: the models must first be downloaded using the following on the command line:
```
python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
```
We load the models as such:
```
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
```
Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.
In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens.
```
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
```
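To see the effect of the reversal without loading the spaCy models, here is the same idea with a naive whitespace tokenizer (an assumption for illustration only; the real `tokenize_de` uses spaCy's tokenizer):

```python
def naive_tokenize_reversed(text):
    # Whitespace split stands in for spacy_de.tokenizer here
    return text.split()[::-1]

print(naive_tokenize_reversed("guten morgen !"))  # → ['!', 'morgen', 'guten']
```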
torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61).
We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
```
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
```
Next, we download and load the train, validation and test data.
The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence.
`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
```
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
```
We can double check that we've loaded the right number of examples:
```
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
```
We can also print out an example, making sure the source sentence is reversed:
```
print(vars(train_data.examples[0]))
```
The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.
Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.
Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
```
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
```
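A minimal sketch of what `build_vocab` with `min_freq = 2` does under the hood (illustrative only; torchtext's actual implementation also handles frequency ordering and the other special tokens for us):

```python
from collections import Counter

def build_vocab(token_lists, min_freq=2,
                specials=("<unk>", "<pad>", "<sos>", "<eos>")):
    # Count every token across the (training) examples
    counts = Counter(tok for tokens in token_lists for tok in tokens)
    # Keep only tokens that appear at least min_freq times
    itos = list(specials) + sorted(t for t, c in counts.items() if c >= min_freq)
    stoi = {t: i for i, t in enumerate(itos)}
    return stoi, itos

stoi, itos = build_vocab([["good", "morning"], ["good", "night"]])
print("good" in stoi)     # appears twice, so it is kept
print("morning" in stoi)  # appears once, so it falls back to <unk>
```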
The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
We also need to define a `torch.device`. This is used to tell torchtext whether to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchtext iterators handle this for us!
We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
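Numericalization and padding can be sketched in plain Python (assuming a toy `stoi` mapping; torchtext's iterators do all of this for us):

```python
stoi = {"<unk>": 0, "<pad>": 1, "<sos>": 2, "<eos>": 3, "good": 4, "morning": 5}

def numericalize(tokens):
    # Wrap with <sos>/<eos> and map each token to its vocabulary index
    return [stoi.get(t, stoi["<unk>"]) for t in ["<sos>"] + tokens + ["<eos>"]]

def pad_batch(sentences):
    # Pad every numericalized sentence to the length of the longest one
    max_len = max(len(s) for s in sentences)
    return [s + [stoi["<pad>"]] * (max_len - len(s)) for s in sentences]

batch = pad_batch([numericalize(["good", "morning"]), numericalize(["good"])])
print(batch)  # → [[2, 4, 5, 3], [2, 4, 3, 1]]
```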
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
## Building the Seq2Seq Model
We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.
### Encoder
First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers.
For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:
$$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$
The hidden states in the second layer are given by:
$$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$
Using a multi-layer RNN also means we'll need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.
Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.
$$\begin{align*}
h_t &= \text{RNN}(e(x_t), h_{t-1})\\
(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})
\end{align*}$$
We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.
Extending our multi-layer equations to LSTMs, we get:
$$\begin{align*}
(h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\
(h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.
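The stacking of final states per layer can be checked directly with a small shape sketch on random data (the dimensions here are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
src = torch.randn(5, 3, 8)  # [src len, batch size, emb dim]
outputs, (hidden, cell) = rnn(src)
print(outputs.shape)  # top-layer hidden state per time-step: [5, 3, 16]
print(hidden.shape)   # final hidden state per layer: [2, 3, 16]
print(cell.shape)     # final cell state per layer: [2, 3, 16]
```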
So our encoder looks something like this:

We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:
- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.
- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions.
- `hid_dim` is the dimensionality of the hidden and cell states.
- `n_layers` is the number of layers in the RNN.
- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.
We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/).
The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.
One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.
In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros.
The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).
As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`.
The sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.
```
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
```
### Decoder
Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.

The `Decoder` class does a single step of decoding, i.e. it outputs a single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.
$$\begin{align*}
(s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\
(s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.
We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$.
$$\hat{y}_{t+1} = f(s_t^L)$$
The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.
Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.
**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
```
### Seq2Seq
For the final part of the implementation, we'll implement the seq2seq model. This will handle:
- receiving the input/source sentence
- using the encoder to produce the context vectors
- using the decoder to produce the predicted output/target sentence
Our full model will look like this:

The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).
For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case; we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the encoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.
Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teacher forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence.
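The teacher-forcing decision is just a Bernoulli draw per time-step; a quick sketch of how often it fires for a given ratio:

```python
import random

random.seed(0)
teacher_forcing_ratio = 0.75
# One draw per decoding step: True means "feed the ground-truth token"
draws = [random.random() < teacher_forcing_ratio for _ in range(10_000)]
print(sum(draws) / len(draws))  # close to 0.75
```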
The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.
We then feed the input/source sentence, `src`, into the encoder and receive our final hidden and cell states.
The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`trg_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder.
During each iteration of the loop, we:
- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder
- receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder
- place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`
- decide if we are going to "teacher force" or not
- if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`
- if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor
Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.
**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} &= [\text{<sos>}, y_1, y_2, y_3, \text{<eos>}]\\
\text{outputs} &= [0, \hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
\end{align*}$$
Later on when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} &= [y_1, y_2, y_3, \text{<eos>}]\\
\text{outputs} &= [\hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
\end{align*}$$
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
```
# Training the Seq2Seq Model
Now we have our model implemented, we can begin training it.
First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
```
Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
```
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
```
We also define a function that will calculate the number of trainable parameters in the model.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
```
optimizer = optim.Adam(model.parameters())
```
Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
```
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
```
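What `ignore_index` does can be sketched in plain Python: padding positions contribute nothing to the sum and are excluded from the count (illustrative only; `CrossEntropyLoss` works on logits rather than precomputed per-token losses):

```python
PAD_IDX = 1  # assumed index of <pad> for this sketch

def masked_mean_loss(per_token_losses, targets, pad_idx=PAD_IDX):
    # Average the loss only over non-padding target positions
    kept = [l for l, t in zip(per_token_losses, targets) if t != pad_idx]
    return sum(kept) / len(kept)

print(masked_mean_loss([2.0, 4.0, 9.0], [5, 3, PAD_IDX]))  # → 3.0
```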
Next, we'll define our training loop.
First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} &= [\text{<sos>}, y_1, y_2, y_3, \text{<eos>}]\\
\text{outputs} &= [0, \hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
\end{align*}$$
Here, when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} &= [y_1, y_2, y_3, \text{<eos>}]\\
\text{outputs} &= [\hat{y}_1, \hat{y}_2, \hat{y}_3, \text{<eos>}]
\end{align*}$$
At each iteration:
- get the source and target sentences from the batch, $X$ and $Y$
- zero the gradients calculated from the last batch
- feed the source and target into the model to get the output, $\hat{Y}$
- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
- we slice off the first column of the output and target tensors as mentioned above
- calculate the gradients with `loss.backward()`
- clip the gradients to prevent them from exploding (a common issue in RNNs)
- update the parameters of our model by doing an optimizer step
- sum the loss value to a running total
Finally, we return the loss that is averaged over all batches.
```
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
```
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Next, we'll create a function that we'll use to tell us how long an epoch takes.
```
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
We can finally start training our model!
At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters that achieved the best validation loss.
We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
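Perplexity is just the exponential of the cross-entropy loss, which is why small loss changes are easier to spot:

```python
import math

for loss in (4.0, 3.9):
    print(f"loss {loss:.2f} -> ppl {math.exp(loss):.2f}")
# A drop of 0.1 in loss lowers perplexity from ~54.60 to ~49.40
```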
```
N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut1-model.pt')
    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
```
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.
# mlforecast
> Scalable machine learning based time series forecasting.
**mlforecast** is a framework to perform time series forecasting using machine learning models, with the option to scale to massive amounts of data using remote clusters.
[](https://github.com/Nixtla/mlforecast/actions/workflows/ci.yaml)
[](https://github.com/Nixtla/mlforecast/actions/workflows/lint.yaml)
[](https://pypi.org/project/mlforecast/)
[](https://pypi.org/project/mlforecast/)
[](https://anaconda.org/conda-forge/mlforecast)
[](https://codecov.io/gh/Nixtla/mlforecast)
[](https://github.com/Nixtla/mlforecast/blob/main/LICENSE)
## Install
### PyPI
`pip install mlforecast`
#### Optional dependencies
If you want more functionality you can instead use `pip install mlforecast[extra1,extra2,...]`. The current extra dependencies are:
* **aws**: adds the functionality to use S3 as the storage in the CLI.
* **cli**: includes the validations necessary to use the CLI.
* **distributed**: installs [dask](https://dask.org/) to perform distributed training. Note that you'll also need to install either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).
For example, if you want to perform distributed training through the CLI using S3 as your storage you'll need all three extras, which you can get using: `pip install mlforecast[aws,cli,distributed]`.
### conda-forge
`conda install -c conda-forge mlforecast`
Note that this installation comes with the required dependencies for the local interface. If you want to:
* Use s3 as storage: `conda install -c conda-forge s3path`
* Perform distributed training: `conda install -c conda-forge dask` and either [LightGBM](https://github.com/microsoft/LightGBM/tree/master/python-package) or [XGBoost](https://xgboost.readthedocs.io/en/latest/install.html#python).
## How to use
The following provides a very basic overview, for a more detailed description see the [documentation](https://nixtla.github.io/mlforecast/).
### Programmatic API
```
#hide
import os
import shutil
from pathlib import Path
from IPython.display import display, Markdown
os.chdir('..')
def display_df(df):
    display(Markdown(df.to_markdown()))
```
Store your time series in a pandas dataframe with an index named **unique_id** that identifies each time series, a column **ds** that contains the datestamps, and a column **y** with the values.
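For illustration, a minimal hand-built dataframe in that layout (the ids and values here are made up):

```python
import pandas as pd

# Two toy series, "id_0" and "id_1", each with four daily observations.
series = pd.DataFrame(
    {
        "ds": list(pd.date_range("2021-01-01", periods=4, freq="D")) * 2,
        "y": [1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0],
    },
    index=pd.Index(["id_0"] * 4 + ["id_1"] * 4, name="unique_id"),
)
print(series.head())
```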
```
from mlforecast.utils import generate_daily_series
series = generate_daily_series(20)
display_df(series.head())
```
Then create a `TimeSeries` object with the features that you want to use. These include lags, transformations on the lags, and date features. The lag transformations are defined as [numba](http://numba.pydata.org/) *jitted* functions that transform an array; if they take additional arguments, you supply a tuple (`transform_func`, `arg1`, `arg2`, ...).
```
from mlforecast.core import TimeSeries
from window_ops.expanding import expanding_mean
from window_ops.rolling import rolling_mean
ts = TimeSeries(
    lags=[7, 14],
    lag_transforms={
        1: [expanding_mean],
        7: [(rolling_mean, 7), (rolling_mean, 14)],
    },
    date_features=['dayofweek', 'month'],
)
ts
```
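To make the tuple form concrete, here is a plain-NumPy stand-in for what a `(rolling_mean, window)` transform computes on a single array (the real `window_ops` version is numba-jitted, but the windowed-average semantics are the same; window 3 is used below to keep the output short):

```python
import numpy as np

def rolling_mean(x, window):
    # out[i] is the mean of the `window` values ending at i;
    # NaN until a full window is available.
    out = np.full(len(x), np.nan)
    for i in range(window - 1, len(x)):
        out[i] = x[i - window + 1 : i + 1].mean()
    return out

x = np.arange(1.0, 9.0)       # [1, 2, ..., 8]
print(rolling_mean(x, 3))     # [nan, nan, 2., 3., 4., 5., 6., 7.]
```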
Next define a model. If you want to use the local interface this can be any regressor that follows the scikit-learn API. For distributed training there are `LGBMForecast` and `XGBForecast`.
```
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(random_state=0)
```
Now instantiate your forecast object with the model and the time series. There are two types of forecasters, `Forecast` which is local and `DistributedForecast` which performs the whole process in a distributed way.
```
from mlforecast.forecast import Forecast
fcst = Forecast(model, ts)
```
To compute the features and train the model using them call `.fit` on your `Forecast` object.
```
fcst.fit(series)
```
To get the forecasts for the next 14 days call `.predict(14)` on the forecaster. This will update the target with each prediction and recompute the features to get the next one.
```
predictions = fcst.predict(14)
display_df(predictions.head())
```
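The recursive strategy described above can be sketched as follows (hypothetical `model` and `compute_features` callables for illustration, not mlforecast internals):

```python
def recursive_predict(model, history, horizon, compute_features):
    """Multi-step forecasting: each prediction is appended to the
    history so it can feed the next step's features."""
    preds = []
    for _ in range(horizon):
        feats = compute_features(history)  # e.g. lags computed from history
        y_hat = model(feats)
        preds.append(y_hat)
        history = history + [y_hat]        # prediction becomes a new lag
    return preds

# Toy usage with a naive last-value model and the raw history as "features":
print(recursive_predict(lambda f: f[-1], [1, 2, 3], 4, lambda h: h))  # [3, 3, 3, 3]
```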
### CLI
If you want to compute quick baselines, avoid some boilerplate, or simply prefer CLIs, you can use the `mlforecast` binary with a configuration file like the following:
```
!cat sample_configs/local.yaml
```
The configuration is validated using `FlowConfig`.
This configuration will use the data in `data.prefix/data.input` to train and write the results to `data.prefix/data.output` both with `data.format`.
```
data_path = Path('data')
data_path.mkdir()
series.to_parquet(data_path/'train')
!mlforecast sample_configs/local.yaml
list((data_path/'outputs').iterdir())
#hide
shutil.rmtree(data_path)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from IPython.display import HTML
from datetime import datetime
# General
import os
# Drawing
import cartopy
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io import shapereader
from matplotlib.cm import get_cmap
import matplotlib.cm as cm
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from math import floor
from matplotlib import patheffects
import matplotlib
if os.name == 'nt':
matplotlib.rc('font', family='Arial')
else: # might need tweaking, must support black triangle for N arrow
matplotlib.rc('font', family='DejaVu Sans')
from datetime import date
plt.ioff()
from IPython.display import display, Javascript
Javascript('document.title="{}"'.format("Coronavirus Enforcement"))
DATA_URL = 'https://www.scotland.police.uk/spa-media/ewloducq/coronavirus-enforcement-information-to-30-june-2021.xlsx'
def datesFromData(url):
    raw_data = pd.read_excel(url, sheet_name=1)
    earlyDate = min(raw_data["Date"]).strftime("%d %B %Y")
    lateDate = max(raw_data["Date"]).strftime("%d %B %Y")
    return earlyDate, lateDate
today = date.today()
date_formatted = today.strftime("%d %B %Y")
earliestDate, latestDate = datesFromData(DATA_URL)
EXPLANATION = """\
<div class="app-sidebar">
<p><em>Compare the prevalence of different intervention results - geospatially.</em><p>
<p>As a result of the 2020 introduction of the: <a href="https://www.legislation.gov.uk/ssi/2020/103/contents/made">The Health Protection (Coronavirus) (Restrictions) (Scotland) Regulations 2020</a>
and <a href="https://www.legislation.gov.uk/ukpga/2020/7/contents/enacted">Coronavirus Act 2020</a>,
Police Scotland were mandated to develop a &ldquo;Coronavirus Interventions&rdquo; (CVI) recording system.</p>
<p>Police Scotland gather data in reference to the public co-operation levels with the new legislation.
However, <b>it should be noted</b>, the system relies on Police officers manually updating the system - with the specific co-operation level they <i>"experienced"</i> when they encounter a contravention of the legislation.</p>
<p>As such, the CVI data is indicative only and actual figures may be higher. CVI data is published <a href="https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/">weekly</a>
and broken down by date, Police Scotland division, subdivision and the following five categories of CVI:
<ul>
<li>Total number of people dispersed when informed</li>
<li>Total number of people dispersed but only when instructed</li>
<li>Total number of people removed from place or premise</li>
<li>Total number of people issued a fixed penalty notice (FPN)</li>
<li>Total number of people arrested</li>
</ul></p>
<p> The map can display CVI data from """ + earliestDate + """ to """ + latestDate + """, for each of the above categories,
in terms of: total numbers, numbers per 100,000 people, <a href="https://github.com/groegercesg/CovidEnforcementScotland#officer-numbers">numbers per 100 officers*</a> and average daily arrests within a Police Scotland division.</p>
</div>
"""
CREATED = """ \
<em>Created by: <a href="https://callumgroeger.com">Callum Groeger</a> | """ + date_formatted + """ </em>
<br>
"""
PROJECTION = """ \
<em>Projection: British National Grid (BNG) | License: MIT </em>
<br>
"""
DATA = """ \
<em>Data: Coronavirus Interventions (<a href="https://www.scotland.police.uk/about-us/covid-19-police-scotland-response/enforcement-and-response-data/">Police Scotland</a>),
Population Estimates 2019 (<a href="https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019">National Records of Scotland</a>),
Police Divisions (<a href="https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231">Scottish Government</a>),
Police Staffing Q1 2021 (<a href="https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/">Police Scotland</a>)
</em>
"""
GIF_ADDRESS = 'gif.gif'
HTML("""\
<style>
.app-title {
font-size: 2.5em;
}
.app-subtitle {
font-size: 1.5em;
}
.app-subtitle a {
color: #106ba3;
}
.app-subtitle a:hover {
text-decoration: underline;
}
.app-sidebar p {
margin-bottom: 1em;
line-height: 1.7;
}
.app-sidebar a {
color: #106ba3;
}
.app-sidebar a:hover {
text-decoration: underline;
}
</style>
""")
class App:
def __init__(self, df):
self._df = df
self._dfBASE = df.copy(deep=True)
# Get dropdown options, cut out the first five - as this is just Divisions
available_indicators = list(self._df)
del available_indicators[0:4]
# Loading GIF
with open(GIF_ADDRESS, 'rb') as f:
img = f.read()
# create loading bar widget, ready to display when running long function
self.loading_bar = widgets.Image(value=img)
self.loading_bar.layout.object_fit = 'contain'
self._dropdown1 = self._create_indicator_dropdown(available_indicators, 0)
self._dropdown2 = self._create_indicator_dropdown([("Total", 0), ("Per 100,000", 1), ("Per 100 officers", 2), ("Daily Average", 3)], 0)
self._plot_container = widgets.Output()
self._date_slider, date_slider_box = self._create_date_slider(
df, 'Date'
)
self._app_container = widgets.VBox([
widgets.HBox([
self._dropdown1,
self._dropdown2
]),
self._plot_container,
date_slider_box
], layout=widgets.Layout(align_items='center', flex='3 0 auto'))
# flex: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Properties-of-the-items
self.container = widgets.VBox([
widgets.HTML(
(
'<h1 class="app-title">Police Scotland Coronavirus Interventions 2020-1</h1>'
'<h2 class="app-subtitle"><a href="https://github.com/groegercesg/CovidEnforcementScotland">Link to Github</a></h2>'
),
layout=widgets.Layout(margin='0 0 2em 0')
# margin: https://minrk-ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html#Shorthand-CSS-properties
),
widgets.HBox([
self._app_container,
widgets.HTML(EXPLANATION, layout=widgets.Layout(margin='0 0 0 2em')) # 0
], layout=widgets.Layout(margin='0 0 2em 0')),
# layout options for center: align_items='center', align_content='center'
widgets.HTML(
(
'<hr>'
)),
widgets.HBox([
widgets.HTML(CREATED),
widgets.HTML(PROJECTION),
widgets.HTML(DATA)
], layout=widgets.Layout(display='flex', flex_flow='column', align_items='center', width='100%'))
], layout=widgets.Layout(flex='1 1 auto', margin='0 auto 0 auto', max_width='1024px'))
self._update_app()
def _create_date_slider(self, df, column_name):
dates = df[column_name]
options = [(date.strftime(' %d %b %Y '), date) for date in dates]
index = (0, len(options)-1)
date_slider_label = widgets.Label('Date range: ')
date_slider = widgets.SelectionRangeSlider(
options=options,
index=index,
orientation='horizontal',
continuous_update=False,
layout=widgets.Layout(width='500px')
)
date_slider.observe(self._on_change, names=['value'])
date_slider_box = widgets.HBox([date_slider_label, date_slider],
layout=widgets.Layout(flex='1 1 auto', width='auto'))
# We need to manually set the description of our SelectionRangeSlider
# We can do this physically with Inspect Element
# .widget-inline-hbox .widget-readout {
# text-align: center;
# max-width: 200px;
# Discussion at: https://github.com/jupyter-widgets/ipywidgets/issues/2318
return date_slider, date_slider_box
def groupByDailyAverage(self, df, days):
df['Daily Average Asked / Informed'] = df.apply (lambda row: row['Asked / Informed']/days if days > 0 else 0, axis=1)
df['Daily Average Warned / Instructed'] = df.apply (lambda row: row['Warned / Instructed']/days if days > 0 else 0, axis=1)
df['Daily Average Removed from Place or Premises'] = df.apply (lambda row: row['Removed from Place or Premises']/days if days > 0 else 0, axis=1)
df['Daily Average FPN'] = df.apply (lambda row: row['FPN']/days if days > 0 else 0, axis=1)
df['Daily Average Arrested'] = df.apply (lambda row: row['Arrested']/days if days > 0 else 0, axis=1)
return df
def groupByDivision(self, df):
division_grouped = df.groupby('Division Letter', as_index=False
).agg(
{"Asked / Informed": "sum",
"Warned / Instructed": "sum",
"Removed from Place or Premises": "sum",
"FPN": "sum",
"Arrested": "sum",
})
return division_grouped
def groupByOfficerNumber(self, df):
# Process data of police numbers
# Data from: https://www.scotland.police.uk/about-us/police-scotland/police-scotland-officer-numbers/
officer_dict = {'A': 1115,
'C': 626,
'D': 919,
'E': 1099,
'G': 2434,
'J': 902,
'K': 613,
'L': 553,
'N': 661,
'P': 759,
'Q': 1388,
'U': 818,
'V': 382
}
div_officer_data = pd.DataFrame(officer_dict.items(), columns=['Division Letter', 'Officer Numbers'])
# Merge Data
dfMerge = pd.merge(df, div_officer_data, on='Division Letter')
dfMerge['Asked / Informed per 100 officers'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Warned / Instructed per 100 officers'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Removed from Place or Premises per 100 officers'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['FPN per 100 officers'] = dfMerge.apply (lambda row: row['FPN']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
dfMerge['Arrested per 100 officers'] = dfMerge.apply (lambda row: row['Arrested']/(row['Officer Numbers'] / 100) if row['Officer Numbers'] > 0 else 0, axis=1)
return dfMerge
def groupByPopulation(self, df):
# Process Population Data
# Data from: https://www.nrscotland.gov.uk/statistics-and-data/statistics/statistics-by-theme/population/population-estimates/mid-year-population-estimates/mid-2019
raw_pop_data = pd.read_csv(os.path.join(os.getcwd(), 'datasets', 'Population', 'mid-year-pop-est-19-data_Table 2.csv'))
# Keep only the specific columns
raw_pop_data = raw_pop_data[['Unnamed: 1','Unnamed: 2']]
# Rename them inplace
raw_pop_data.rename(columns={'Unnamed: 1': 'Council areas', 'Unnamed: 2': 'Population'}, inplace=True)
# Drop upper rows that are bad
raw_pop_data = raw_pop_data.drop(raw_pop_data.index[[0,1,2,3,4]]).reset_index(drop=True)
# Drop from certain row, minus 1 for the row above position
raw_pop_data = raw_pop_data[:(raw_pop_data[raw_pop_data['Council areas'] == 'NHS Board areas'].index[0] - 1)]
# Strip out all the commas in Objects of the Population column
raw_pop_data["Population"].replace(',','', regex=True, inplace=True)
# Convert string to int
raw_pop_data["Population"] = raw_pop_data["Population"].astype(str).astype(int)
# Group Pop Data
# We group the council areas into our police divisions
# First, set our index
raw_pop_data.set_index('Council areas')
# Create our division dictionary
div_dict = {'A': ["Moray", "Aberdeenshire", "Aberdeen City"],
'C': ["Stirling", "Clackmannanshire", "Falkirk"],
'D': ["Angus", "Dundee City", "Perth and Kinross"],
'E': ["City of Edinburgh"],
'G': ["East Renfrewshire", "Glasgow City", "East Dunbartonshire"],
'J': ["Scottish Borders", "East Lothian", "Midlothian", "West Lothian"],
'K': ["Inverclyde", "Renfrewshire"],
'L': ["Argyll and Bute", "West Dunbartonshire"],
'N': ["Na h-Eileanan Siar", "Orkney Islands", "Highland", "Shetland Islands"],
'P': ["Fife"],
'Q': ["South Lanarkshire", "North Lanarkshire"],
'U': ["South Ayrshire", "East Ayrshire", "North Ayrshire"],
'V': ["Dumfries and Galloway"]
}
div_pop = {}
def divisionPopulation(row):
incomingRow = row.tolist()
for div, councils in div_dict.items():
for council in councils:
if (council == incomingRow[0]):
if div in div_pop:
div_pop[div] += incomingRow[1]
else:
div_pop[div] = incomingRow[1]
raw_pop_data.apply(lambda row: divisionPopulation(row), axis=1)
div_pop_data = pd.DataFrame(div_pop.items(), columns=['Division Letter', 'Population'])
# Merge Data
dfMerge = pd.merge(df, div_pop_data, on='Division Letter')
dfMerge['Asked / Informed per 100k'] = dfMerge.apply (lambda row: row['Asked / Informed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Warned / Instructed per 100k'] = dfMerge.apply (lambda row: row['Warned / Instructed']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Removed from Place or Premises per 100k'] = dfMerge.apply (lambda row: row['Removed from Place or Premises']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['FPN per 100k'] = dfMerge.apply (lambda row: row['FPN']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
dfMerge['Arrested per 100k'] = dfMerge.apply (lambda row: row['Arrested']/(row['Population'] / 100000) if row['Population'] > 0 else 0, axis=1)
return dfMerge
# The class method, we use this to gather the data then pre-process it
@classmethod
def from_url(cls, url):
raw_data = pd.read_excel(url, sheet_name=1)
raw_data.drop(['Unnamed: 9', 'Unnamed: 10', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13', 'Unnamed: 14', 'Unnamed: 15', 'Unnamed: 16', 'Unnamed: 17'], axis=1, inplace=True)
# Taking account of NaNs
# Explanation:
# The xlsx to pandas dataframe conversion seems to have taken "NA" for a division "N" and an Area Command "Inverness"
# and interpret that "NA" as actually: "NaN". Which is very annoying. So the below overwrites the SD letter of area commands
# that are inverness and turns them back to "NA"
raw_data.loc[raw_data["Area Commands"] == "Inverness", "SD Letter"] = raw_data["SD Letter"].fillna("NA")
if (raw_data.isnull().sum().sum() != 0):
raise ValueError("We have NaNs in our dataframe")
return cls(raw_data)
def _create_indicator_dropdown(self, indicators, initial_index):
# Handling for the two different types of Dropdown options storage
if isinstance(indicators[initial_index], tuple):
valuePos = initial_index
elif isinstance(indicators[initial_index], str):
valuePos = indicators[initial_index]
else:
raise ValueError("Unknown dropdown input type")
dropdown = widgets.Dropdown(options=indicators, value=valuePos)
dropdown.observe(self._on_change, names=['value'])
return dropdown
def utm_from_lon(self, lon):
"""
utm_from_lon - UTM zone for a longitude
Not right for some polar regions (Norway, Svalbard, Antarctica)
:param float lon: longitude
:return: UTM zone number
:rtype: int
"""
return floor( ( lon + 180 ) / 6) + 1
def scale_bar(self, ax, proj, length, location=(0.5, 0.05), linewidth=3,
units='km', m_per_unit=1000):
"""
http://stackoverflow.com/a/35705477/1072212
ax is the axes to draw the scalebar on.
proj is the projection the axes are in
location is center of the scalebar in axis coordinates ie. 0.5 is the middle of the plot
length is the length of the scalebar in km.
linewidth is the thickness of the scalebar.
units is the name of the unit
m_per_unit is the number of meters in a unit
"""
# find lat/lon center to find best UTM zone
x0, x1, y0, y1 = ax.get_extent(proj.as_geodetic())
# Projection in metres
utm = ccrs.UTM(self.utm_from_lon((x0+x1)/2))
# Get the extent of the plotted area in coordinates in metres
x0, x1, y0, y1 = ax.get_extent(utm)
# Turn the specified scalebar location into coordinates in metres
sbcx, sbcy = x0 + (x1 - x0) * location[0], y0 + (y1 - y0) * location[1]
# Generate the x coordinate for the ends of the scalebar
bar_xs = [sbcx - length * m_per_unit/2, sbcx + length * m_per_unit/2]
# buffer for scalebar
buffer = [patheffects.withStroke(linewidth=5, foreground="w")]
# Plot the scalebar with buffer
ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
linewidth=linewidth, path_effects=buffer)
# buffer for text
buffer = [patheffects.withStroke(linewidth=3, foreground="w")]
# Plot the scalebar label
t0 = ax.text(sbcx, sbcy, str(length) + ' ' + units, transform=utm,
horizontalalignment='center', verticalalignment='bottom',
path_effects=buffer, zorder=2)
left = x0+(x1-x0)*0.05
# Plot the N arrow
t1 = ax.text(left, sbcy, u'\u25B2\nN', transform=utm,
horizontalalignment='center', verticalalignment='bottom',
path_effects=buffer, zorder=2)
# Plot the scalebar without buffer, in case covered by text buffer
ax.plot(bar_xs, [sbcy, sbcy], transform=utm, color='k',
linewidth=linewidth, zorder=3)
def _create_plot(self, indicator, scaling):
fig = plt.figure(figsize=(6,8), dpi=100)
projectionPARAM = ccrs.TransverseMercator(central_longitude=-2.0, central_latitude=49.0, false_easting=400000.0, false_northing=-100000.0, scale_factor=0.9996012717, approx=False)
ax = fig.add_subplot(1, 1, 1, projection=projectionPARAM)
ax.set_extent([-8, 0, 54.5, 61]) # Ideal coordinate map range for plotting Scotland
# Process the input from the second dropdown
if scaling == 0:
indicator = indicator
elif scaling == 1:
indicator = indicator + " per 100k"
elif scaling == 2:
indicator = indicator + " per 100 officers"
elif scaling == 3:
indicator = "Daily Average " + indicator
else:
raise ValueError("Bizarre dropdown option achieved, investigation needed!")
police_dict = (self._df[['Division Letter', indicator]].set_index('Division Letter').T.to_dict('records'))[0]
# Downloaded from: https://spatialdata.gov.scot/geonetwork/srv/eng/catalog.search;jsessionid=61F713CF39B3EE2F440F48E9C31BA806#/metadata/4364af71-167a-4236-b5a0-bd4109913231
area_file = os.path.join(os.getcwd(), 'datasets', 'ScottishPoliceDivisions', 'SG_ScottishPoliceDivisions_2019.shp')
police_divisions = shapereader.Reader(area_file)
norm = colors.Normalize(vmin=0., vmax=max(police_dict.values()))
cmap = get_cmap('PuBu')
for record in police_divisions.records():
code = record.attributes['AdminCode']
police_entry = police_dict.get(code, -1)
if police_entry == -1:
police_color = "Silver"
else:
police_color = cmap(police_entry/max(police_dict.values()))
ax.add_geometries(
[record.geometry],
#facecolor=numpy.random.rand(3,),
facecolor=police_color,
linewidth=0,
crs=projectionPARAM,
)
# following https://matplotlib.org/2.0.2/mpl_toolkits/axes_grid/users/overview.html#colorbar-whose-height-or-width-in-sync-with-the-master-axes
# we need to set axes_class=plt.Axes, else it attempts to create
# a GeoAxes as colorbar
divider = make_axes_locatable(ax)
ax_cb = divider.new_horizontal(size="5%", pad=0.1, axes_class=plt.Axes)
fig.add_axes(ax_cb)
sm = plt.cm.ScalarMappable(norm=norm, cmap=cmap)
cb = plt.colorbar(sm, cax=ax_cb)
cb.set_label(indicator)
#self.scale_bar(ax, projectionPARAM, 100, location=(0.85, 0.05)) # 100 km scale bar
plt.plot()
def _on_change(self, _):
self._update_app()
def trimToDateRange(self, df, date_range):
# We want to trim the data, so that it's range is inline with date range
# First we replace _df with our base df, so we can then correctly apply the range
self._df = self._dfBASE.copy(deep=True)
# Then we cut it to only within our date range
df = self._df[self._df['Date'].between(*date_range)]
return df
def _process_data(self, date_range):
numberOfDays = (date_range[1] - date_range[0]).days
self._df = self.trimToDateRange(self._df, date_range)
self._df = self.groupByDivision(self._df)
self._df = self.groupByPopulation(self._df)
self._df = self.groupByOfficerNumber(self._df)
self._df = self.groupByDailyAverage(self._df, numberOfDays)
def _update_app(self):
# Pull in widget attributes for passing to plot function
indicator = self._dropdown1.value
scaling = self._dropdown2.value
date_range = self._date_slider.value
# Process data
self._process_data(date_range)
self._plot_container.clear_output()
# wait=True
with self._plot_container:
#self.loading_bar.layout.visibility = 'visible'
self.loading_bar.layout.display = 'block'
display(self.loading_bar)
self._create_plot(indicator, scaling)
plt.show()
#self.loading_bar.layout.visibility = 'hidden'
self.loading_bar.layout.display = 'none'
app = App.from_url(DATA_URL)
app.container
```
```
%load_ext autoreload
%autoreload 2
```
# Forecast like observations
Use observation files to produce new files that fit the shape of a forecast file.
That makes them easier to use for ML purposes.
At the core of this task is the `forecast_like_observations` function provided by the organizers.
This notebook loads the appropriate forecasts and calls this function to generate corresponding obs from our own set of obs files.
The obs files were modified to make them more consistent with respect to NaNs; see *land-mask-investigate.ipynb*.
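The core idea can be sketched with plain xarray vectorized indexing (a toy illustration, not the library's implementation): for each `(forecast_time, lead_time)` pair, pick the observation valid at `forecast_time + lead_time`.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Ten days of toy observations on a plain "time" axis.
obs = xr.DataArray(
    np.arange(10.0), dims="time",
    coords={"time": pd.date_range("2000-01-01", periods=10)},
)
# Forecast-style coordinates: three start dates, three lead times.
forecast_time = xr.DataArray(
    pd.date_range("2000-01-01", periods=3, freq="2D"), dims="forecast_time"
)
lead_time = xr.DataArray(pd.to_timedelta([0, 1, 2], unit="D"), dims="lead_time")

valid_time = forecast_time + lead_time        # 2-D array of valid datetimes
obs_like_forecast = obs.sel(time=valid_time)  # dims: (forecast_time, lead_time)
print(obs_like_forecast.shape)                # (3, 3)
```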
```
import climetlab as cml
import climetlab_s2s_ai_challenge
import dask
import dask.array as da
import dask.distributed
import dask_jobqueue
import pathlib
import xarray as xr
from crims2s.util import fix_dataset_dims
DATA_PATH = '***BASEDIR***'
data_path = pathlib.Path(DATA_PATH)
```
## Boot dask cluster
```
cluster = dask_jobqueue.SLURMCluster(env_extra=['source ***HOME***.bash_profile','conda activate s2s'])
cluster.scale(jobs=4)
client = dask.distributed.Client(cluster)
client
```
## Temperature
```
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 't2m' in f.stem]
forecast_files[:10]
forecast = xr.open_mfdataset(forecast_files, preprocess=fix_dataset_dims)
obs = xr.open_dataset(data_path / 'obs_t2m_interp_remask.nc')
forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
forecast_shaped_t2m
sample = forecast_shaped_t2m.isel(forecast_dayofyear=0, forecast_year=10, lead_time=40)
sample.valid_time.item()
(sample == obs.sel(time=sample.valid_time)).t2m.plot()
```
Seems legit!
```
forecast_shaped_t2m.isel(forecast_year=0).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_2000.nc')
forecast_shaped_t2m.isel(forecast_year=[0])
forecast_files[:10]
for f in forecast_files:
    print(f)
    forecast = fix_dataset_dims(xr.open_dataset(f))
    forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
    day_of_year = forecast_shaped_t2m.forecast_time.dt.dayofyear[0].item()
    forecast_shaped_t2m = forecast_shaped_t2m.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
    forecast_shaped_t2m.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{day_of_year:03}.nc')

for y in forecast_shaped_t2m.forecast_year:
    print(y.item())

for y in forecast_shaped_t2m.forecast_year:
    print(y.item())
    forecast_shaped_t2m.sel(forecast_year=[y]).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{y.item()}.nc')
forecast_shaped_t2m.to_netcdf(data_path / 'obs_t2m_forecast_shape.nc')
forecast_shaped_t2m.to_netcdf('***BASEDIR***obs_t2m_forecast_shape.nc')
del obs
del forecast
del forecast_shaped_t2m
```
## Precipitation
```
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 'tp' in f.stem]
forecast_files[:10]
obs = xr.open_dataset(data_path / 'obs_pr_interp_remask.nc')
for f in forecast_files:
    forecast = fix_dataset_dims(xr.open_dataset(f))
    forecast_shaped_tp = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
    day_of_year = forecast_shaped_tp.forecast_time.dt.dayofyear[0].item()
    forecast_shaped_tp = forecast_shaped_tp.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
    forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp.forecast_time.dt.day[0].item()
day_of_year = 289
forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp
sample = forecast.isel(forecast_year=10, lead_time=10)
sample
obs
forecast_shaped_tp
sample = forecast_shaped_tp.isel(forecast_year=10, lead_time=15)
sample
obs_of_sample = obs.sel(time=slice(sample.forecast_time, sample.forecast_time + sample.lead_time)).isel(time=slice(None, -1))
obs_of_sample
(obs_of_sample.sum(dim='time').pr == sample.tp).plot()
```
Seems legit! Don't forget to exclude the last day when computing the cumulative sum.
Author: Xi Ming.
## Build a Multilayer Perceptron from Scratch based on PyTorch.
PyTorch's automatic differentiation mechanism can help quickly implement multilayer perceptrons.
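As a minimal illustration of the autograd mechanism the rest of this notebook relies on:

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x   # y = x^2 + 3x
y.backward()         # autograd computes dy/dx = 2x + 3
print(x.grad)        # tensor(7.) at x = 2
```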
### Import Packages.
```
import torch
import torchvision
import torch.nn as nn
from torchvision import datasets,transforms
from torch.utils.data import DataLoader
import numpy as np
print('pytorch version:',torch.__version__,'\ntorchvision version: ',torchvision.__version__,'\nnumpy version:' ,np.__version__)
```
### Settings
```
# model runs on GPU or CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Hyperparameters
learning_rate = 1e-2
momentum = 0.9
num_epochs = 10
batch_size = 128
# Architecture
num_features = 784
num_hidden_1 = 400
num_hidden_2 = 200
num_classes = 10
```
### Dataset: MNIST
```
train_dataset = datasets.MNIST(root='data',
                               train=True,
                               transform=transforms.Compose([
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.1307,), (0.3081,))]),
                               download=True)

test_dataset = datasets.MNIST(root='data',
                              train=False,
                              transform=transforms.Compose([
                                  transforms.ToTensor(),
                                  transforms.Normalize((0.1307,), (0.3081,))]))

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size, shuffle=True)

test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size, shuffle=False)

# Checking the dataset
for images, labels in train_loader:
    print('Image batch dimensions:', images.shape)
    print('Image label dimensions:', labels.shape)
    break
```
### Define model
```
class MultilayerPerceptron(nn.Module):
    def __init__(self, num_features, num_classes):
        super(MultilayerPerceptron, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(num_features, num_hidden_1),
            nn.Sigmoid(),
            nn.Linear(num_hidden_1, num_hidden_2),
            nn.Sigmoid(),
            nn.Linear(num_hidden_2, num_classes)
        )

    def forward(self, x):
        x = self.model(x)
        return x
```
### Init model, define optimizer and loss function
```
model = MultilayerPerceptron(num_features=num_features,
                             num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
criterion = nn.CrossEntropyLoss()
```
### Training model
```
train_loss_list = []
test_acc_list = []
for epoch in range(num_epochs):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
data = data.view(-1, 28*28)
# forward
logits = model(data)
loss = criterion(logits, target)
# backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.data.item()))
train_loss_list.append(loss.data.item())
test_loss = 0
correct = 0
model.eval()
with torch.no_grad():
# test
total_correct = 0
total_num = 0
for data, target in test_loader:
data, target = data.to(device), target.to(device)
data = data.view(-1, 28*28)
logits = model(data)
test_loss += criterion(logits, target).item()
pred = logits.data.max(1)[1]
correct += pred.eq(target.data).sum()
test_loss /= len(test_loader.dataset)
test_acc = 100. * correct / len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset), test_acc))
test_acc_list.append(test_acc)
```
### Plot Training Curves
```
import matplotlib
import matplotlib.pyplot as plt
x = np.arange(0, num_epochs)
plt.title("Training loss curve")
plt.plot(x, train_loss_list, label='train loss')
plt.xlabel('epochs')
plt.ylabel('train loss')
plt.show()
plt.title("Test accuracy curve")
plt.plot(x, test_acc_list, label='test accuracy')
plt.xlabel('epochs')
plt.ylabel('test accuracy')
plt.show()
```
### Visual Inspection
```
for features, targets in test_loader:
    break
fig, ax = plt.subplots(1, 4)
features = features.to('cpu')
for i in range(4):
    ax[i].imshow(features[i].view(28, 28), cmap=matplotlib.cm.binary)
plt.show()
features = features.to(device)
predictions = model(features[:4].view(-1, 28*28))
predictions = torch.argmax(predictions, dim=1)
print('Predicted labels', predictions)
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
os.system('rm -rf tacotron2-female-alignment')
os.system('mkdir tacotron2-female-alignment')
import tensorflow as tf
import numpy as np
from glob import glob
import tensorflow as tf
import malaya_speech
import malaya_speech.train
from malaya_speech.train.model import tacotron2_nvidia as tacotron2
import malaya_speech.config
import numpy as np
import json
import malaya_speech.train as train
def norm_mean_std(x, mean, std):
zero_idxs = np.where(x == 0.0)[0]
x = (x - mean) / std
x[zero_idxs] = 0.0
return x
def average_by_duration(x, durs):
mel_len = durs.sum()
durs_cum = np.cumsum(np.pad(durs, (1, 0)))
x_char = np.zeros((durs.shape[0],), dtype=np.float32)
for idx, start, end in zip(range(mel_len), durs_cum[:-1], durs_cum[1:]):
values = x[start:end][np.where(x[start:end] != 0.0)[0]]
x_char[idx] = np.mean(values) if len(values) > 0 else 0.0
return x_char.astype(np.float32)
f0_stat = np.load('../speech-bahasa/female-stats/stats_f0.npy')
energy_stat = np.load('../speech-bahasa/female-stats/stats_energy.npy')
with open('mels-female.json') as fopen:
files = json.load(fopen)
reduction_factor = 1
maxlen = 904
minlen = 32
pad_to = 8
data_min = 1e-2
_pad = 'pad'
_start = 'start'
_eos = 'eos'
_punctuation = "!'(),.:;? "
_special = '-'
_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
MALAYA_SPEECH_SYMBOLS = (
[_pad, _start, _eos] + list(_special) + list(_punctuation) + list(_letters)
)
def generate(files):
for f in files:
f = f.decode()
mel = np.load(f)
mel_length = len(mel)
if mel_length > maxlen or mel_length < minlen:
continue
stop_token_target = np.zeros([len(mel)], dtype = np.float32)
text_ids = np.load(f.replace('mels', 'text_ids'), allow_pickle = True)[
0
]
text_input = np.array(
[
MALAYA_SPEECH_SYMBOLS.index(c)
for c in text_ids
if c in MALAYA_SPEECH_SYMBOLS
]
)
num_pad = pad_to - ((len(text_input) + 2) % pad_to)
text_input = np.pad(
text_input, ((1, 1)), 'constant', constant_values = ((1, 2))
)
text_input = np.pad(
text_input, ((0, num_pad)), 'constant', constant_values = 0
)
num_pad = pad_to - ((len(mel) + 1) % pad_to) + 1
pad_value_mel = np.log(data_min)
mel = np.pad(
mel,
((0, num_pad), (0, 0)),
'constant',
constant_values = pad_value_mel,
)
stop_token_target = np.pad(
stop_token_target, ((0, num_pad)), 'constant', constant_values = 1
)
len_mel = [len(mel)]
len_text_ids = [len(text_input)]
f0 = np.load(f.replace('mels', 'f0s'))
num_pad = pad_to - ((len(f0) + 1) % pad_to) + 1
f0 = np.pad(
f0,
((0, num_pad)),
'constant',
)
f0 = norm_mean_std(f0, f0_stat[0], f0_stat[1])
len_f0 = [len(f0)]
energy = np.load(f.replace('mels', 'energies'))
num_pad = pad_to - ((len(energy) + 1) % pad_to) + 1
energy = np.pad(
energy,
((0, num_pad)),
'constant',
)
energy = norm_mean_std(energy, energy_stat[0], energy_stat[1])
len_energy = [len(energy)]
yield {
'mel': mel,
'text_ids': text_input,
'len_mel': len_mel,
'len_text_ids': len_text_ids,
'stop_token_target': stop_token_target,
'f0': f0,
'len_f0': len_f0,
'energy': energy,
'len_energy': len_energy,
'f': [f]
}
def parse(example):
mel_len = example['len_mel'][0]
input_len = example['len_text_ids'][0]
g = tacotron2.generate_guided_attention(mel_len, input_len, reduction_factor = reduction_factor)
example['g'] = g
return example
def get_dataset(files, batch_size = 32, shuffle_size = 32, thread_count = 24):
def get():
dataset = tf.data.Dataset.from_generator(
generate,
{
'mel': tf.float32,
'text_ids': tf.int32,
'len_mel': tf.int32,
'len_text_ids': tf.int32,
'stop_token_target': tf.float32,
'f0': tf.float32,
'len_f0': tf.int32,
'energy': tf.float32,
'len_energy': tf.int32,
'f': tf.string
},
output_shapes = {
'mel': tf.TensorShape([None, 80]),
'text_ids': tf.TensorShape([None]),
'len_mel': tf.TensorShape([1]),
'len_text_ids': tf.TensorShape([1]),
'stop_token_target': tf.TensorShape([None]),
'f0': tf.TensorShape([None]),
'len_f0': tf.TensorShape([1]),
'energy': tf.TensorShape([None]),
'len_energy': tf.TensorShape([1]),
'f': tf.TensorShape([1]),
},
args = (files,),
)
dataset = dataset.map(parse, num_parallel_calls = thread_count)
dataset = dataset.padded_batch(
shuffle_size,
padded_shapes = {
'mel': tf.TensorShape([None, 80]),
'text_ids': tf.TensorShape([None]),
'len_mel': tf.TensorShape([1]),
'len_text_ids': tf.TensorShape([1]),
'g': tf.TensorShape([None, None]),
'stop_token_target': tf.TensorShape([None]),
'f0': tf.TensorShape([None]),
'len_f0': tf.TensorShape([1]),
'energy': tf.TensorShape([None]),
'len_energy': tf.TensorShape([1]),
'f': tf.TensorShape([1]),
},
padding_values = {
'mel': tf.constant(0, dtype = tf.float32),
'text_ids': tf.constant(0, dtype = tf.int32),
'len_mel': tf.constant(0, dtype = tf.int32),
'len_text_ids': tf.constant(0, dtype = tf.int32),
'g': tf.constant(-1.0, dtype = tf.float32),
'stop_token_target': tf.constant(0, dtype = tf.float32),
'f0': tf.constant(0, dtype = tf.float32),
'len_f0': tf.constant(0, dtype = tf.int32),
'energy': tf.constant(0, dtype = tf.float32),
'len_energy': tf.constant(0, dtype = tf.int32),
'f': tf.constant('', dtype = tf.string),
},
)
return dataset
return get
features = get_dataset(files['train'])()
features = features.make_one_shot_iterator().get_next()
input_ids = features['text_ids']
input_lengths = features['len_text_ids'][:, 0]
speaker_ids = tf.constant([0], dtype = tf.int32)
mel_outputs = features['mel']
mel_lengths = features['len_mel'][:, 0]
guided = features['g']
stop_token_target = features['stop_token_target']
batch_size = tf.shape(guided)[0]
model = tacotron2.Model(
[input_ids, input_lengths],
[mel_outputs, mel_lengths],
len(MALAYA_SPEECH_SYMBOLS),
)
r = model.decoder_logits['outputs']
decoder_output, post_mel_outputs, alignment_histories, _, _, _ = r
stop_token_predictions = model.decoder_logits['stop_token_prediction']
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, 'tacotron2-female/model.ckpt-54000')
import matplotlib.pyplot as plt
def decode(x):
return ''.join([MALAYA_SPEECH_SYMBOLS[i] for i in x])
def get_duration_from_alignment(alignment):
D = np.array([0 for _ in range(np.shape(alignment)[0])])
for i in range(np.shape(alignment)[1]):
max_index = list(alignment[:, i]).index(alignment[:, i].max())
D[max_index] = D[max_index] + 1
return D
count = 0
while True:
try:
o = sess.run([decoder_output, post_mel_outputs, stop_token_predictions, alignment_histories, features])
f = o[-1]
for i in range(len(f['f'])):
file = f['f'][i,0].decode().split('/')[-1]
file = f'tacotron2-female-alignment/{file}'
len_mel = f['len_mel'][i, 0]
len_text_ids = f['len_text_ids'][i, 0]
d = get_duration_from_alignment(o[3][i, :len_text_ids, :len_mel])
assert d.sum() == len_mel
np.save(file, d)
print('done', count)
count += 1
    except tf.errors.OutOfRangeError:
        # dataset exhausted
        break
# import pickle
# with open('dataset-mel.pkl', 'wb') as fopen:
# pickle.dump([o[-1], d], fopen)
# import pickle
# with open('a.pkl', 'wb') as fopen:
# pickle.dump([np.reshape(o[0][0], [-1, 80]), np.reshape(o[1][0], [-1, 80]), o[-1]['mel'][0]], fopen)
```
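The `get_duration_from_alignment()` helper above simply counts, for each input token, how many decoder frames attend to it most strongly. A small self-contained illustration with toy numbers (not real alignments):

```python
import numpy as np

# Toy attention weights: 3 input tokens x 5 decoder frames
alignment = np.array([
    [0.9, 0.8, 0.1, 0.0, 0.0],   # token 0 dominates frames 0-1
    [0.1, 0.1, 0.8, 0.7, 0.2],   # token 1 dominates frames 2-3
    [0.0, 0.1, 0.1, 0.3, 0.8],   # token 2 dominates frame 4
])
D = np.zeros(alignment.shape[0], dtype=int)
for frame in alignment.T:        # one column per decoder frame
    D[frame.argmax()] += 1
print(D)  # [2 2 1] -- D.sum() equals the number of frames, as asserted above
```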
```
import numpy as np
import cvxpy as cp
import networkx as nx
import matplotlib.pyplot as plt
# Problem data
reservations = np.array([110, 118, 103, 161, 140])
flight_capacities = np.array([100, 100, 100, 150, 150])
cost_per_hour = 50
cost_external_company = 75
# Build transportation graph
G = nx.DiGraph()
# Add nodes
G.add_node(0, supply=reservations[0], label="10am")
G.add_node(1, supply=reservations[1], label="12pm")
G.add_node(2, supply=reservations[2], label="2pm")
G.add_node(3, supply=reservations[3], label="4pm")
G.add_node(4, supply=reservations[4], label="6pm")
G.add_node(5, supply=0, label="9pm")
G.add_node(6, supply=-np.sum(reservations), label="NY")
# Edges
M = 1000
# From 10am
G.add_edge(0, 1, cost=2 * cost_per_hour, capacity=M)
G.add_edge(0, 2, cost=4 * cost_per_hour, capacity=M)
G.add_edge(0, 3, cost=6 * cost_per_hour, capacity=M)
G.add_edge(0, 4, cost=8 * cost_per_hour, capacity=M)
G.add_edge(0, 5, cost=11 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(0, 6, cost=0, capacity=flight_capacities[0])
# From 12pm
G.add_edge(1, 2, cost=2 * cost_per_hour, capacity=M)
G.add_edge(1, 3, cost=4 * cost_per_hour, capacity=M)
G.add_edge(1, 4, cost=6 * cost_per_hour, capacity=M)
G.add_edge(1, 5, cost=9 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(1, 6, cost=0, capacity=flight_capacities[1])
# From 2pm
G.add_edge(2, 3, cost=2 * cost_per_hour, capacity=M)
G.add_edge(2, 4, cost=4 * cost_per_hour, capacity=M)
G.add_edge(2, 5, cost=7 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(2, 6, cost=0, capacity=flight_capacities[2])
# From 4pm
G.add_edge(3, 4, cost=2 * cost_per_hour, capacity=M)
G.add_edge(3, 5, cost=5 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(3, 6, cost=0, capacity=flight_capacities[3])
# From 6pm
G.add_edge(4, 5, cost=3 * cost_per_hour + cost_external_company, capacity=M)
G.add_edge(4, 6, cost=0, capacity=flight_capacities[4])
# From 9pm
G.add_edge(5, 6, cost=0, capacity=M)
# Note minus sign for convention
# In our formulation:
# -> 1 means arc exits node
# -> -1 means arc enters node
A = -nx.linalg.graphmatrix.incidence_matrix(G, oriented=True)
print("A =\n", A.todense())
# Get weights, capacities, and supply vectors
c = np.array([G[u][v]['cost'] for u,v in G.edges])
u = np.array([G[u][v]['capacity'] for u,v in G.edges])
b = np.array([G.nodes[u]['supply'] for u in G.nodes])
# Solve airline problem
# Note: you need to install GLPK. It is part of CVXOPT.
# Just run:
# pip install cvxopt
#
# GLPK runs the simplex method, which, as you know, returns exactly integral
# solutions at vertices (for integral problem data). Other solvers such as ECOS
# use interior-point methods and return slightly imprecise solutions that are
# not exactly integral.
x = cp.Variable(len(G.edges))
objective = cp.Minimize(c @ x)
constraints = [A @ x == b, 0 <= x, x <= u]
problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.GLPK)
print("Optimal cost = $", problem.objective.value)
# Show solution
# Note: with integral supplies and capacities, the GLPK vertex solution is integral
print("x = ", x.value)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
cmap = plt.cm.Blues
# Positions in 2d plot
layout = {0: np.array([0.0, 0.0]),
1: np.array([1.0, 0.5]),
2: np.array([2.0, 1.0]),
3: np.array([3.0, 0.5]),
4: np.array([4.0, 0.0]),
5: np.array([1.6, -0.3]),
6: np.array([2.0, -2.0]),
}
nx.draw_networkx_nodes(G, layout, node_color='w', edgecolors='k', node_size=2000)
nx.draw_networkx_edges(G, layout, edge_cmap=cmap, edge_color=x.value,
width=2, arrowsize=30, min_target_margin=20)
labels = {u: G.nodes[u]['label'] for u in G.nodes}
nx.draw_networkx_labels(G,layout,labels,font_size=14)
# Print colormap
sm = plt.cm.ScalarMappable(cmap=cmap,
norm=plt.Normalize(vmin=0, vmax=200)
)
cbar = plt.colorbar(sm)
plt.show()
```
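To see the incidence-matrix sign convention used above in isolation, here is a hand-rolled check on a three-node toy digraph (no solver or networkx involved; the numbers are illustrative):

```python
# Edges of a toy digraph; +1 means the arc exits the node, -1 that it enters
edges = [(0, 1), (1, 2), (0, 2)]
n_nodes = 3
A = [[0] * len(edges) for _ in range(n_nodes)]
for j, (u, v) in enumerate(edges):
    A[u][j] = 1     # arc j exits u
    A[v][j] = -1    # arc j enters v
# Unit flow on every edge: node 0 supplies 2 units, node 1 is pass-through,
# node 2 absorbs 2 units -- exactly the A @ x == b balance used in the model
x = [1, 1, 1]
b = [sum(A[i][j] * x[j] for j in range(len(edges))) for i in range(n_nodes)]
print(b)  # [2, 0, -2]
```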
```
import tempfile
import urllib.request
train_file = "datasets/thermostat/sample-training-data.csv"
test_file = "datasets/thermostat/test-data.csv"
import pandas as pd
COLUMNS = ["month", "day", "hour", "min", "pirstatus",
"isDay", "extTemp", "extHumidity", "loungeTemp", "loungeHumidity",
"state", "temperature", "label"]
df_train = pd.read_csv(train_file, names=COLUMNS, skipinitialspace=True)
df_test = pd.read_csv(test_file, names=COLUMNS, skipinitialspace=True)
CATEGORICAL_COLUMNS = []
CONTINUOUS_COLUMNS = ["month","day", "hour", "min", "pirstatus",
"isDay", "extTemp", "extHumidity", "loungeTemp", "loungeHumidity"
]
LABEL_COLUMN="label"
df_train[LABEL_COLUMN] = df_train["state"]
df_test[LABEL_COLUMN] = df_test["state"]
print(df_test)
import tensorflow as tf
def input_fn(df):
# Creates a dictionary mapping from each continuous feature column name (k) to
# the values of that column stored in a constant Tensor.
continuous_cols = {k: tf.constant(df[k].values)
for k in CONTINUOUS_COLUMNS}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].values.astype(str),
dense_shape=[df[k].size, 1])
for k in CATEGORICAL_COLUMNS}
# Merges the two dictionaries into one.
feature_cols = dict()
feature_cols.update(continuous_cols.copy())
feature_cols.update(categorical_cols.copy())
#feature_cols = dict(continuous_cols.items() + categorical_cols.items())
# Converts the label column into a constant Tensor.
label = tf.constant(df[LABEL_COLUMN].values)
# Returns the feature columns and the label.
return feature_cols, label
def train_input_fn():
return input_fn(df_train)
def eval_input_fn():
return input_fn(df_test)
month = tf.contrib.layers.real_valued_column("month")
day = tf.contrib.layers.real_valued_column("day")
hour = tf.contrib.layers.real_valued_column("hour")
minute = tf.contrib.layers.real_valued_column("min")
pirstatus = tf.contrib.layers.real_valued_column("pirstatus")
isDay = tf.contrib.layers.real_valued_column("isDay")
extTemp = tf.contrib.layers.real_valued_column("extTemp")
extHumidity = tf.contrib.layers.real_valued_column("extHumidity")
loungeTemp = tf.contrib.layers.real_valued_column("loungeTemp")
loungeHumidity = tf.contrib.layers.real_valued_column("loungeHumidity")
model_dir = tempfile.mkdtemp()
m = tf.contrib.learn.LinearClassifier(feature_columns=[
month, day, hour, minute, pirstatus, isDay,
extTemp, extHumidity, loungeTemp, loungeHumidity],
optimizer=tf.train.FtrlOptimizer(
learning_rate=0.1,
l1_regularization_strength=1.0,
l2_regularization_strength=1.0),
model_dir=model_dir)
m.fit(input_fn=train_input_fn, steps=500)
results = m.evaluate(input_fn=eval_input_fn, steps=1)
print("printing results")
for key in sorted(results):
print("%s: %s" % (key, results[key]))
def predict_input_fn():
test_data = {
"month":[12],
"day":[12],
"hour":[22],
"min":[0],
"pirstatus":[0],
"isDay":[1],
"extTemp":[35],
"extHumidity":[20],
"loungeTemp":[12],
"loungeHumidity":[30],
}
continuous_cols = {k: tf.constant(test_data[k])
for k in test_data}
return continuous_cols
predictions = list(m.predict(input_fn=predict_input_fn, as_iterable=True))
print('Predictions: {}'.format(str(predictions)))
```
# Fuzzing APIs
So far, we have always generated _system input_, i.e. data that the program as a whole obtains via its input channels. However, we can also generate inputs that go directly into individual functions, gaining flexibility and speed in the process. In this chapter, we explore the use of grammars to synthesize code for function calls, which allows you to generate _program code that very efficiently invokes functions directly._
```
from bookutils import YouTubeVideo
YouTubeVideo('U842dC2R3V0')
```
**Prerequisites**
* You have to know how grammar fuzzing works, e.g. from the [chapter on grammars](Grammars.ipynb).
* We make use of _generator functions_, as discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb).
* We make use of probabilities, as discussed in the [chapter on fuzzing with probabilities](ProbabilisticGrammarFuzzer.ipynb).
## Synopsis
<!-- Automatically generated. Do not edit. -->
To [use the code provided in this chapter](Importing.ipynb), write
```python
>>> from fuzzingbook.APIFuzzer import <identifier>
```
and then make use of the following features.
This chapter provides *grammar constructors* that are useful for generating _function calls_.
The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
```python
>>> from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
```
`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:
```python
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['-51', '9', '0', '0', '0', '0', '32', '0', '0', '0']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['0e0',
'-9.43e34',
'-7.3282e0',
'-9.5e-9',
'0',
'-30.840386e-5',
'3',
'-4.1e0',
'-9.7',
'413']
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
>>> [fuzzer.fuzz() for i in range(10)]
['"#vYV*t@I%KNTT[q~}&-v+[zAzj[X-z|RzC$(g$Br]1tC\':5<F-"',
'""',
'"^S/"',
'"y)QDs_9"',
'")dY~?WYqMh,bwn3\\"A!02Pk`gx"',
'"01n|(dd$-d.sx\\"83\\"h/]qx)d9LPNdrk$}$4t3zhC.%3VY@AZZ0wCs2 N"',
'"D\\6\\xgw#TQ}$\'3"',
'"LaM{"',
'"\\"ux\'1H!=%;2T$.=l"',
'"=vkiV~w.Ypt,?JwcEr}Moc>!5<U+DdYAup\\"N 0V?h3x~jFN3"']
```
`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:
```python
>>> int_grammar = int_grammar_with_range(100, 200)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
>>> [fuzzer.fuzz() for i in range(10)]
['154', '149', '185', '117', '182', '154', '131', '194', '147', '192']
```
`float_grammar_with_range(start, end)` produces a floating-number grammar with values `N` such that `start <= N <= end`.
```python
>>> float_grammar = float_grammar_with_range(100, 200)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)
>>> [fuzzer.fuzz() for i in range(10)]
['121.8092479227325',
'187.18037169119634',
'127.9576486784452',
'125.47768739781723',
'151.8091820472274',
'117.864410860742',
'187.50918008379483',
'119.29335112884749',
'149.2637029583114',
'126.61818995939146']
```
All such values can be immediately used for testing function calls:
```python
>>> from math import sqrt
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
>>> call = "sqrt(" + fuzzer.fuzz() + ")"
>>> call
'sqrt(143)'
>>> eval(call)
11.958260743101398
```
These grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
```python
>>> int_list_grammar = list_grammar(int_grammar)
>>> fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
>>> [fuzzer.fuzz() for i in range(5)]
['[118, 111, 188, 137, 129]',
'[170, 172]',
'[171, 161, 117, 191, 175, 183, 164]',
'[189]',
'[129, 110, 178]']
>>> some_list = eval(fuzzer.fuzz())
>>> some_list
[172, 120, 106, 192, 124, 191, 161, 100, 117]
>>> len(some_list)
9
```
In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.
## Fuzzing a Function
Let us start with our first problem: How do we fuzz a given function? For an interpreted language like Python, this is pretty straightforward. All we need to do is to generate _calls_ to the function(s) we want to test. This is something we can easily do with a grammar.
As an example, consider the `urlparse()` function from the Python library. `urlparse()` takes a URL and decomposes it into its individual components.
```
import bookutils
from urllib.parse import urlparse
urlparse('https://www.fuzzingbook.com/html/APIFuzzer.html')
```
You see how the individual elements of the URL – the _scheme_ (`"https"`), the _network location_ (`"www.fuzzingbook.com"`), or the path (`"/html/APIFuzzer.html"`) – are all properly identified. Other elements (like `params`, `query`, or `fragment`) are empty, because they were not part of our input.
To test `urlparse()`, we'd want to feed it a large set of different URLs. We can obtain these from the URL grammar we had defined in the ["Grammars"](Grammars.ipynb) chapter.
```
from Grammars import URL_GRAMMAR, is_valid_grammar, START_SYMBOL
from Grammars import opts, extend_grammar, Grammar
from GrammarFuzzer import GrammarFuzzer
url_fuzzer = GrammarFuzzer(URL_GRAMMAR)
for i in range(10):
url = url_fuzzer.fuzz()
print(urlparse(url))
```
This way, we can easily test any Python function – by setting up a scaffold that runs it. How would we proceed, though, if we wanted to have a test that can be re-run again and again, without having to generate new calls every time?
## Synthesizing Code
The "scaffolding" method, as sketched above, has an important downside: It couples test generation and test execution into a single unit, disallowing running both at different times, or for different languages. To decouple the two, we take another approach: Rather than generating inputs and immediately feeding this input into a function, we _synthesize code_ instead that invokes functions with a given input.
For instance, if we generate the string
```
call = "urlparse('http://www.example.com/')"
```
we can execute this string as a whole (and thus run the test) at any time:
```
eval(call)
```
To systematically generate such calls, we can again use a grammar:
```
URLPARSE_GRAMMAR: Grammar = {
"<call>":
['urlparse("<url>")']
}
# Import definitions from URL_GRAMMAR
URLPARSE_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_GRAMMAR["<start>"] = ["<call>"]
assert is_valid_grammar(URLPARSE_GRAMMAR)
```
This grammar creates calls in the form `urlparse(<url>)`, where `<url>` comes from the "imported" URL grammar. The idea is to create many of these calls and to feed them into the Python interpreter.
```
URLPARSE_GRAMMAR
```
We can now use this grammar for fuzzing and synthesizing calls to `urlparse()`:
```
urlparse_fuzzer = GrammarFuzzer(URLPARSE_GRAMMAR)
urlparse_fuzzer.fuzz()
```
Just as above, we can immediately execute these calls. To better see what is happening, we define a small helper function:
```
# Call function_name(arg[0], arg[1], ...) as a string
def do_call(call_string):
print(call_string)
result = eval(call_string)
print("\t= " + repr(result))
return result
call = urlparse_fuzzer.fuzz()
do_call(call)
```
If `urlparse()` were a C function, for instance, we could embed its call into some (also generated) C function:
```
URLPARSE_C_GRAMMAR: Grammar = {
"<cfile>": ["<cheader><cfunction>"],
"<cheader>": ['#include "urlparse.h"\n\n'],
"<cfunction>": ["void test() {\n<calls>}\n"],
"<calls>": ["<call>", "<calls><call>"],
"<call>": [' urlparse("<url>");\n']
}
URLPARSE_C_GRAMMAR.update(URL_GRAMMAR)
URLPARSE_C_GRAMMAR["<start>"] = ["<cfile>"]
assert is_valid_grammar(URLPARSE_C_GRAMMAR)
urlparse_fuzzer = GrammarFuzzer(URLPARSE_C_GRAMMAR)
print(urlparse_fuzzer.fuzz())
```
## Synthesizing Oracles
In our `urlparse()` example, both the Python as well as the C variant only check for _generic_ errors in `urlparse()`; that is, they only detect fatal errors and exceptions. For a full test, we need to set up a specific *oracle* as well that checks whether the result is valid.
Our plan is to check whether specific parts of the URL reappear in the result โ that is, if the scheme is `http:`, then the `ParseResult` returned should also contain a `http:` scheme. As discussed in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb), equalities of strings such as `http:` across two symbols cannot be expressed in a context-free grammar. We can, however, use a _generator function_ (also introduced in the [chapter on fuzzing with generators](GeneratorGrammarFuzzer.ipynb)) to automatically enforce such equalities.
Here is an example. Invoking `geturl()` on a `urlparse()` result should return the URL as originally passed to `urlparse()`.
```
from GeneratorGrammarFuzzer import GeneratorGrammarFuzzer, ProbabilisticGeneratorGrammarFuzzer
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("assert urlparse('<url>').geturl() == '<url>'",
opts(post=lambda url_1, url_2: [None, url_1]))]
})
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
```
In a similar way, we can also check individual components of the result:
```
URLPARSE_ORACLE_GRAMMAR: Grammar = extend_grammar(URLPARSE_GRAMMAR,
{
"<call>": [("result = urlparse('<scheme>://<host><path>?<params>')\n"
# + "print(result)\n"
+ "assert result.scheme == '<scheme>'\n"
+ "assert result.netloc == '<host>'\n"
+ "assert result.path == '<path>'\n"
+ "assert result.query == '<params>'",
opts(post=lambda scheme_1, authority_1, path_1, params_1,
scheme_2, authority_2, path_2, params_2:
[None, None, None, None,
scheme_1, authority_1, path_1, params_1]))]
})
# Get rid of unused symbols
del URLPARSE_ORACLE_GRAMMAR["<url>"]
del URLPARSE_ORACLE_GRAMMAR["<query>"]
del URLPARSE_ORACLE_GRAMMAR["<authority>"]
del URLPARSE_ORACLE_GRAMMAR["<userinfo>"]
del URLPARSE_ORACLE_GRAMMAR["<port>"]
urlparse_oracle_fuzzer = GeneratorGrammarFuzzer(URLPARSE_ORACLE_GRAMMAR)
test = urlparse_oracle_fuzzer.fuzz()
print(test)
exec(test)
```
The use of generator functions may feel a bit cumbersome. Indeed, if we solely stick to Python, we could also create a _unit test_ that directly invokes the fuzzer to generate individual parts:
```
def fuzzed_url_element(symbol):
return GrammarFuzzer(URLPARSE_GRAMMAR, start_symbol=symbol).fuzz()
scheme = fuzzed_url_element("<scheme>")
authority = fuzzed_url_element("<authority>")
path = fuzzed_url_element("<path>")
query = fuzzed_url_element("<params>")
url = "%s://%s%s?%s" % (scheme, authority, path, query)
result = urlparse(url)
# print(result)
assert result.geturl() == url
assert result.scheme == scheme
assert result.path == path
assert result.query == query
```
Using such a unit test makes it easier to express oracles. However, we lose the ability to systematically cover individual URL elements and alternatives as with [`GrammarCoverageFuzzer`](GrammarCoverageFuzzer.ipynb) as well as the ability to guide generation towards specific elements as with [`ProbabilisticGrammarFuzzer`](ProbabilisticGrammarFuzzer.ipynb). Furthermore, a grammar allows us to generate tests for arbitrary programming languages and APIs.
## Synthesizing Data
For `urlparse()`, we have used a very specific grammar for creating a very specific argument. Many functions take basic data types as (some) arguments, though; we therefore define grammars that generate precisely those arguments. Even better, we can define functions that _generate_ grammars tailored towards our specific needs, returning values in a particular range, for instance.
### Integers
We introduce a simple grammar to produce integers.
```
from Grammars import convert_ebnf_grammar, crange
from ProbabilisticGrammarFuzzer import ProbabilisticGrammarFuzzer
INT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<int>"],
"<int>": ["<_int>"],
"<_int>": ["(-)?<leaddigit><digit>*", "0"],
"<leaddigit>": crange('1', '9'),
"<digit>": crange('0', '9')
}
assert is_valid_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR = convert_ebnf_grammar(INT_EBNF_GRAMMAR)
INT_GRAMMAR
int_fuzzer = GrammarFuzzer(INT_GRAMMAR)
print([int_fuzzer.fuzz() for i in range(10)])
```
If we need integers in a specific range, we can add a generator function that does just that:
```
from Grammars import set_opts
import random
def int_grammar_with_range(start, end):
int_grammar = extend_grammar(INT_GRAMMAR)
set_opts(int_grammar, "<int>", "<_int>",
opts(pre=lambda: random.randint(start, end)))
return int_grammar
int_fuzzer = GeneratorGrammarFuzzer(int_grammar_with_range(900, 1000))
[int_fuzzer.fuzz() for i in range(10)]
```
### Floats
The grammar for floating-point values closely resembles the integer grammar.
```
FLOAT_EBNF_GRAMMAR: Grammar = {
"<start>": ["<float>"],
"<float>": [("<_float>", opts(prob=0.9)), "inf", "NaN"],
"<_float>": ["<int>(.<digit>+)?<exp>?"],
"<exp>": ["e<int>"]
}
FLOAT_EBNF_GRAMMAR.update(INT_EBNF_GRAMMAR)
FLOAT_EBNF_GRAMMAR["<start>"] = ["<float>"]
assert is_valid_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR = convert_ebnf_grammar(FLOAT_EBNF_GRAMMAR)
FLOAT_GRAMMAR
float_fuzzer = ProbabilisticGrammarFuzzer(FLOAT_GRAMMAR)
print([float_fuzzer.fuzz() for i in range(10)])
def float_grammar_with_range(start, end):
float_grammar = extend_grammar(FLOAT_GRAMMAR)
set_opts(float_grammar, "<float>", "<_float>", opts(
pre=lambda: start + random.random() * (end - start)))
return float_grammar
float_fuzzer = ProbabilisticGeneratorGrammarFuzzer(
float_grammar_with_range(900.0, 900.9))
[float_fuzzer.fuzz() for i in range(10)]
```
### Strings
Finally, we introduce a grammar for producing strings.
```
ASCII_STRING_EBNF_GRAMMAR: Grammar = {
"<start>": ["<ascii-string>"],
"<ascii-string>": ['"<ascii-chars>"'],
"<ascii-chars>": [
("", opts(prob=0.05)),
"<ascii-chars><ascii-char>"
],
"<ascii-char>": crange(" ", "!") + [r'\"'] + crange("#", "~")
}
assert is_valid_grammar(ASCII_STRING_EBNF_GRAMMAR)
ASCII_STRING_GRAMMAR = convert_ebnf_grammar(ASCII_STRING_EBNF_GRAMMAR)
string_fuzzer = ProbabilisticGrammarFuzzer(ASCII_STRING_GRAMMAR)
print([string_fuzzer.fuzz() for i in range(10)])
```
## Synthesizing Composite Data
From basic data, as discussed above, we can also produce _composite data_ in data structures such as sets or lists. We illustrate such generation on lists.
### Lists
```
LIST_EBNF_GRAMMAR: Grammar = {
"<start>": ["<list>"],
"<list>": [
("[]", opts(prob=0.05)),
"[<list-objects>]"
],
"<list-objects>": [
("<list-object>", opts(prob=0.2)),
"<list-object>, <list-objects>"
],
"<list-object>": ["0"],
}
assert is_valid_grammar(LIST_EBNF_GRAMMAR)
LIST_GRAMMAR = convert_ebnf_grammar(LIST_EBNF_GRAMMAR)
```
Our list generator takes a grammar that produces objects; it then instantiates a list grammar with the objects from these grammars.
```
def list_grammar(object_grammar, list_object_symbol=None):
obj_list_grammar = extend_grammar(LIST_GRAMMAR)
if list_object_symbol is None:
# Default: Use the first expansion of <start> as list symbol
list_object_symbol = object_grammar[START_SYMBOL][0]
obj_list_grammar.update(object_grammar)
obj_list_grammar[START_SYMBOL] = ["<list>"]
obj_list_grammar["<list-object>"] = [list_object_symbol]
assert is_valid_grammar(obj_list_grammar)
return obj_list_grammar
int_list_fuzzer = ProbabilisticGrammarFuzzer(list_grammar(INT_GRAMMAR))
[int_list_fuzzer.fuzz() for i in range(10)]
string_list_fuzzer = ProbabilisticGrammarFuzzer(
list_grammar(ASCII_STRING_GRAMMAR))
[string_list_fuzzer.fuzz() for i in range(10)]
float_list_fuzzer = ProbabilisticGeneratorGrammarFuzzer(list_grammar(
float_grammar_with_range(900.0, 900.9)))
[float_list_fuzzer.fuzz() for i in range(10)]
```
Generators for dictionaries, sets, etc. can be defined in a similar fashion. By plugging together grammar generators, we can produce data structures with arbitrary elements.
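To make this concrete, here is a sketch of our own (the names `DICT_GRAMMAR` and `expand()` are illustrative, not part of the chapter's API) showing how a dictionary grammar can mirror the list grammar above:

```python
import random

# A sketch (our own, not from the chapter) of a dictionary grammar in the same
# EBNF style as LIST_EBNF_GRAMMAR; expand() is a minimal stand-in for
# ProbabilisticGrammarFuzzer so the example runs on its own.
DICT_GRAMMAR = {
    "<start>": ["<dict>"],
    "<dict>": ["{}", "{<dict-entries>}"],
    "<dict-entries>": ["<dict-entry>", "<dict-entry>, <dict-entries>"],
    "<dict-entry>": ['"k": 0'],  # plug in key/value object grammars here
}

def expand(symbol="<start>", depth=0):
    """Randomly expand a nonterminal, forcing the shortest rule when deep."""
    rule = DICT_GRAMMAR[symbol]
    choice = rule[0] if depth > 3 else random.choice(rule)
    out, i = "", 0
    while i < len(choice):
        if choice[i] == "<":                  # nonterminal: recurse
            j = choice.index(">", i)
            out += expand(choice[i:j + 1], depth + 1)
            i = j + 1
        else:                                 # terminal character
            out += choice[i]
            i += 1
    return out

print([expand() for _ in range(5)])
```

With the chapter's machinery, one would instead express the entry probabilities with `opts(prob=...)` and hand the converted grammar to `ProbabilisticGrammarFuzzer`.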
## Synopsis
This chapter provides *grammar constructors* that are useful for generating _function calls_.
The grammars are [probabilistic](ProbabilisticGrammarFuzzer.ipynb) and make use of [generators](GeneratorGrammarFuzzer.ipynb), so use `ProbabilisticGeneratorGrammarFuzzer` as a producer.
```
from GeneratorGrammarFuzzer import ProbabilisticGeneratorGrammarFuzzer
```
`INT_GRAMMAR`, `FLOAT_GRAMMAR`, `ASCII_STRING_GRAMMAR` produce integers, floats, and strings, respectively:
```
fuzzer = ProbabilisticGeneratorGrammarFuzzer(INT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
fuzzer = ProbabilisticGeneratorGrammarFuzzer(FLOAT_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
fuzzer = ProbabilisticGeneratorGrammarFuzzer(ASCII_STRING_GRAMMAR)
[fuzzer.fuzz() for i in range(10)]
```
`int_grammar_with_range(start, end)` produces an integer grammar with values `N` such that `start <= N <= end`:
```
int_grammar = int_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
[fuzzer.fuzz() for i in range(10)]
```
`float_grammar_with_range(start, end)` produces a floating-point grammar with values `N` such that `start <= N <= end`:
```
float_grammar = float_grammar_with_range(100, 200)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(float_grammar)
[fuzzer.fuzz() for i in range(10)]
```
All such values can be immediately used for testing function calls:
```
from math import sqrt
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_grammar)
call = "sqrt(" + fuzzer.fuzz() + ")"
call
eval(call)
```
These grammars can also be composed to form more complex grammars. `list_grammar(object_grammar)` returns a grammar that produces lists of objects as defined by `object_grammar`.
```
int_list_grammar = list_grammar(int_grammar)
fuzzer = ProbabilisticGeneratorGrammarFuzzer(int_list_grammar)
[fuzzer.fuzz() for i in range(5)]
some_list = eval(fuzzer.fuzz())
some_list
len(some_list)
```
In a similar vein, we can construct arbitrary further data types for testing individual functions programmatically.
## Lessons Learned
* To fuzz individual functions, one can easily set up grammars that produce function calls.
* Fuzzing at the API level can be much faster than fuzzing at the system level, but brings the risk of false alarms by violating implicit preconditions.
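A minimal sketch of such a false alarm (our own example): `math.sqrt()` carries the implicit precondition that its argument be non-negative, so a generated call may raise an exception even though `sqrt` itself is correct:

```python
from math import sqrt

# sqrt's implicit precondition: the argument must be non-negative.
# A fuzzer generating arbitrary integers will trigger ValueError here,
# which looks like a failure but is really a violated precondition.
def checked_sqrt_call(arg):
    try:
        return sqrt(arg)
    except ValueError:
        return "precondition violated"  # not a bug in sqrt itself

print(checked_sqrt_call(4))   # 2.0
print(checked_sqrt_call(-1))  # precondition violated
```

An API-level fuzzer must either encode such preconditions in the grammar (as the range-restricted grammars above do) or filter out these expected exceptions before reporting failures.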
## Next Steps
This chapter was all about manually writing tests and controlling which data gets generated. [In the next chapter](Carver.ipynb), we will introduce a much higher level of automation:
* _Carving_ automatically records function calls and arguments from program executions.
* We can turn these into _grammars_, allowing us to test these functions with various combinations of recorded values.
With these techniques, we automatically obtain grammars that already invoke functions in application contexts, making our work of specifying them much easier.
## Background
The idea of using generator functions to generate input structures was first explored in QuickCheck \cite{Claessen2000}. A very nice implementation for Python is the [hypothesis package](https://hypothesis.readthedocs.io/en/latest/), which allows one to write and combine data structure generators for testing APIs.
## Exercises
The exercises for this chapter combine the above techniques with fuzzing techniques introduced earlier.
### Exercise 1: Deep Arguments
In the example generating oracles for `urlparse()`, important elements such as `authority` or `port` are not checked. Enrich `URLPARSE_ORACLE_GRAMMAR` with post-expansion functions that store the generated elements in a symbol table, such that they can be accessed when generating the assertions.
**Solution.** Left to the reader.
### Exercise 2: Covering Argument Combinations
In the chapter on [configuration testing](ConfigurationFuzzer.ipynb), we also discussed _combinatorial testing_, that is, systematic coverage of _sets_ of configuration elements. Implement a scheme that, by changing the grammar, allows all _pairs_ of argument values to be covered.
**Solution.** Left to the reader.
### Exercise 3: Mutating Arguments
To widen the range of arguments to be used during testing, apply the _mutation schemes_ introduced in [mutation fuzzing](MutationFuzzer.ipynb), for instance, flip individual bytes or delete characters from strings. Apply this either during grammar inference or as a separate step when invoking functions.
**Solution.** Left to the reader.
```
#Copyright 2020 Vraj Shah, Arun Kumar
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn import metrics
import joblib
import numpy as np
np.random.seed(512)
xtrain = pd.read_csv('../data/ml/data_train.csv')
xtest = pd.read_csv('../data/ml/data_test.csv')
xtrain = xtrain.sample(frac=1,random_state=100).reset_index(drop=True)
print(len(xtrain))
y_train = xtrain.loc[:,['y_act']]
y_test = xtest.loc[:,['y_act']]
dict_label = {
'numeric': 0,
'categorical': 1,
'datetime': 2,
'sentence': 3,
'url': 4,
'embedded-number': 5,
'list': 6,
'not-generalizable': 7,
'context-specific': 8
}
y_train['y_act'] = [dict_label[i] for i in y_train['y_act']]
y_test['y_act'] = [dict_label[i] for i in y_test['y_act']]
y_train
useStats = 1
useAttributeName = 1
useSample1 = 0
useSample2 = 0
## Using descriptive stats and attribute name
def ProcessStats(data,y):
data1 = data[['total_vals', 'num_nans', '%_nans', 'num_of_dist_val', '%_dist_val', 'mean', 'std_dev', 'min_val', 'max_val','has_delimiters', 'has_url', 'has_email', 'has_date', 'mean_word_count',
'std_dev_word_count', 'mean_stopword_total', 'stdev_stopword_total',
'mean_char_count', 'stdev_char_count', 'mean_whitespace_count',
'stdev_whitespace_count', 'mean_delim_count', 'stdev_delim_count',
'is_list', 'is_long_sentence']]
data1 = data1.reset_index(drop=True)
data1 = data1.fillna(0)
y.y_act = y.y_act.astype(float)
return data1
vectorizerName = CountVectorizer(ngram_range=(2, 2), analyzer='char')
vectorizerSample = CountVectorizer(ngram_range=(2, 2), analyzer='char')
def FeatureExtraction(data,data1,flag):
arr = data['Attribute_name'].values
arr = [str(x) for x in arr]
arr1 = data['sample_1'].values
arr1 = [str(x) for x in arr1]
arr2 = data['sample_2'].values
arr2 = [str(x) for x in arr2]
arr3 = data['sample_3'].values
arr3 = [str(x) for x in arr3]
print(len(arr1),len(arr2))
if flag:
X = vectorizerName.fit_transform(arr)
X1 = vectorizerSample.fit_transform(arr1)
X2 = vectorizerSample.transform(arr2)
else:
X = vectorizerName.transform(arr)
X1 = vectorizerSample.transform(arr1)
X2 = vectorizerSample.transform(arr2)
# print(f"> Length of vectorized feature_names: {len(vectorizer.get_feature_names())}")
attr_df = pd.DataFrame(X.toarray())
sample1_df = pd.DataFrame(X1.toarray())
sample2_df = pd.DataFrame(X2.toarray())
print(len(data1),len(attr_df),len(sample1_df),len(sample2_df))
    if useSample1: data2 = sample1_df
    if useSample2: data2 = sample2_df
    # Note: the concat below always combines the stats with the attribute-name
    # n-grams, so any sample-based selection above is overwritten.
    data2 = pd.concat([data1, attr_df], axis=1, sort=False)
print(len(data2))
return data2
xtrain1 = ProcessStats(xtrain,y_train)
xtest1 = ProcessStats(xtest,y_test)
X_train = FeatureExtraction(xtrain,xtrain1,1)
X_test = FeatureExtraction(xtest,xtest1,0)
X_train_new = X_train.reset_index(drop=True)
y_train_new = y_train.reset_index(drop=True)
X_train_new = X_train_new.values
y_train_new = y_train_new.values
k = 5
kf = KFold(n_splits=k,random_state = 100,shuffle=True)
avg_train_acc,avg_test_acc = 0,0
n_estimators_grid = [5,25,50,75,100,500]
max_depth_grid = [5,10,25,50,100,250]
# n_estimators_grid = [25,50,75,100]
# max_depth_grid = [50,100]
avgsc_lst,avgsc_train_lst,avgsc_hld_lst = [],[],[]
avgsc,avgsc_train,avgsc_hld = 0,0,0
best_param_count = {'n_estimator': {}, 'max_depth': {}}
i=0
for train_index, test_index in kf.split(X_train_new):
# if i==1: break
i=i+1
X_train_cur, X_test_cur = X_train_new[train_index], X_train_new[test_index]
y_train_cur, y_test_cur = y_train_new[train_index], y_train_new[test_index]
X_train_train, X_val,y_train_train,y_val = train_test_split(X_train_cur,y_train_cur, test_size=0.25,random_state=100)
bestPerformingModel = RandomForestClassifier(n_estimators=10,max_depth=5,random_state=100)
bestscore = 0
print('='*10)
for ne in n_estimators_grid:
for md in max_depth_grid:
clf = RandomForestClassifier(n_estimators=ne,max_depth=md,random_state=100)
clf.fit(X_train_train, y_train_train.ravel())
sc = clf.score(X_val, y_val)
print(f"[n_estimator: {ne}, max_depth: {md}, accuracy: {sc}]")
if bestscore < sc:
bestne = ne
bestmd = md
bestscore = sc
bestPerformingModel = clf
if str(bestne) in best_param_count['n_estimator']:
best_param_count['n_estimator'][str(bestne)] += 1
else:
best_param_count['n_estimator'][str(bestne)] = 1
if str(bestmd) in best_param_count['max_depth']:
best_param_count['max_depth'][str(bestmd)] += 1
else:
best_param_count['max_depth'][str(bestmd)] = 1
bscr_train = bestPerformingModel.score(X_train_cur, y_train_cur)
bscr = bestPerformingModel.score(X_test_cur, y_test_cur)
bscr_hld = bestPerformingModel.score(X_test, y_test)
avgsc_train_lst.append(bscr_train)
avgsc_lst.append(bscr)
avgsc_hld_lst.append(bscr_hld)
avgsc_train = avgsc_train + bscr_train
avgsc = avgsc + bscr
avgsc_hld = avgsc_hld + bscr_hld
print()
print(f"> Best n_estimator: {bestne} || Best max_depth: {bestmd}")
print(f"> Best training score: {bscr_train}")
print(f"> Best test score: {bscr}")
print(f"> Best held-out score: {bscr_hld}")
print('='*10)
print(avgsc_train_lst)
print(avgsc_lst)
print(avgsc_hld_lst)
print(avgsc_train/k)
print(avgsc/k)
print(avgsc_hld/k)
y_pred = bestPerformingModel.predict(X_test)
bscr_hld = bestPerformingModel.score(X_test, y_test)
print(bscr_hld)
bestPerformingModel.score(X_test, y_test)
joblib.dump(bestPerformingModel, 'rf.joblib')
joblib.dump(vectorizerName, 'vectorizerName.joblib')
joblib.dump(vectorizerSample, 'vectorizerSample.joblib')
```
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
This colab notebook demonstrates how to read and visualize the data in the Didi dataset: Digital Ink Diagram data.
More information about this data is available at
* https://github.com/google-research/google-research/tree/master/didi_dataset
* [The Didi dataset: Digital Ink Diagram data](https://arxiv.org/abs/2002.09303). P. Gervais, T. Deselaers, E. Aksan, O. Hilliges, 2020.
The colab demonstrates how to:
1. display the data along with the prompt images.
1. convert the data to a sharded `TFRecord` file of `TFExample`s.
```
from __future__ import division
import collections
import contextlib
import io
import json
import os
import random
import statistics
from googleapiclient.discovery import build
from google.colab import auth
from google.colab import files
from googleapiclient.http import MediaIoBaseDownload
from apiclient import errors
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
from matplotlib import pylab
from IPython.display import Image, display
# Setup and settings.
# Settings
JSON_FILES=["diagrams_wo_text_20200131.ndjson", "diagrams_20200131.ndjson"]
PROJECT_ID = "digital-ink-diagram-data"
BUCKET_NAME = "digital_ink_diagram_data"
LOCAL_DATA_DIR = "/tmp"
NUM_TFRECORD_SHARDS = 1
auth.authenticate_user()
# Creating the service client.
gcs_service = build("storage", "v1")
# Download the data
def download_file_from_gcs(filename):
directory_name = os.path.join(LOCAL_DATA_DIR, os.path.dirname(filename))
if not os.path.exists(directory_name):
os.mkdir(directory_name)
with open(os.path.join(LOCAL_DATA_DIR, filename), "wb") as f:
request = gcs_service.objects().get_media(bucket=BUCKET_NAME, object=filename)
media = MediaIoBaseDownload(f, request)
done = False
while not done:
status, done = media.next_chunk()
if not done:
print("Downloading '%s': %-3.0f%%" % (filename, status.progress() * 100))
def get_label_file(type, labelid):
file_id = os.path.join(type, "%s.%s" % (labelid, type))
fname = os.path.join(LOCAL_DATA_DIR, file_id)
if os.path.exists(fname):
return fname
download_file_from_gcs(file_id)
return fname
for json_file in JSON_FILES:
download_file_from_gcs(json_file)
# Displays prompt images with drawing overlaid.
def PrepareDrawing():
pylab.clf()
pylab.axes().set_aspect("equal")
pylab.gca().yaxis.set_visible(False)
pylab.gca().xaxis.set_visible(False)
def display_image(ink):
im = pylab.imread(os.path.join(LOCAL_DATA_DIR, "png", ink["label_id"] + ".png"))
# Compute scaling of the image.
guide_width = ink["writing_guide"]["width"]
guide_height = ink["writing_guide"]["height"]
im_height, im_width, _ = im.shape
scale=min(guide_width / im_width, guide_height / im_height)
offset_x = (guide_width - scale * im_width) / 2
offset_y = (guide_height - scale * im_height) / 2
pylab.imshow(im, origin="upper",
extent=(offset_x, offset_x + scale * im_width,
offset_y + scale * im_height, offset_y),
aspect="equal")
def display_strokes(ink):
for s in ink["drawing"]:
pylab.plot(s[0], s[1], color="red")
def display_ink(ink):
# Fetch the corresponding PNG image.
get_label_file("png", ink["label_id"])
# Draw image, overlay strokes.
PrepareDrawing()
display_image(ink)
display_strokes(ink)
pylab.show()
for json_file in JSON_FILES:
count = 0
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
display_ink(ink)
count += 1
if count == 10:
break
# This cell converts the data to TFRecord files of tf.Example protos.
# It takes a long time to run.
def get_label_file_contents(type, labelid):
get_label_file(type, labelid)
with open(os.path.join(LOCAL_DATA_DIR, type, "%s.%s" %(labelid, type))) as f:
return f.read()
def ink_to_tfexample(ink, dot=None):
"""Takes a LabeledInk and outputs a TF.Example with stroke information.
Args:
ink: A JSON array containing the drawing information.
dot: (Optional) textual content of the GraphViz dot file that was used to
generate the prompt image.
Returns:
a Tensorflow Example proto with the drawing data.
"""
features = {}
features["key"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[ink["key"].encode("utf-8")]))
features["label_id"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[ink["label_id"].encode("utf-8")]))
if dot:
features["label_dot"] = tf.train.Feature(
bytes_list=tf.train.BytesList(value=[dot.encode("utf-8")]))
max_len = np.array([len(stroke[0]) for stroke in ink["drawing"]]).max()
strokes = []
stroke_lengths = []
for stroke in ink["drawing"]:
stroke_len = len(stroke[0])
padded_stroke_with_pen = np.zeros([1, max_len, 4], dtype=np.float32)
padded_stroke_with_pen[0, 0:stroke_len, 0] = stroke[0]
padded_stroke_with_pen[0, 0:stroke_len, 1] = stroke[1]
padded_stroke_with_pen[0, 0:stroke_len, 2] = stroke[2]
padded_stroke_with_pen[0, stroke_len - 1, 3] = 1
strokes.append(padded_stroke_with_pen)
stroke_lengths.append(stroke_len)
all_strokes = np.concatenate(strokes, axis=0).astype(float) # (num_strokes, max_len, 4)
all_stroke_lengths = np.array(stroke_lengths).astype(int)
features["ink"] = tf.train.Feature(
float_list=tf.train.FloatList(value=all_strokes.flatten()))
features["stroke_length"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=all_stroke_lengths))
features["shape"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=all_strokes.shape))
features["num_strokes"] = tf.train.Feature(
int64_list=tf.train.Int64List(value=[len(ink["drawing"])]))
example = tf.train.Example(features=tf.train.Features(feature=features))
return example
@contextlib.contextmanager
def create_tfrecord_writers(output_file, num_output_shards):
writers = collections.defaultdict(list)
for split in ["train", "valid", "test"]:
for i in range(num_output_shards):
writers[split].append(
tf.io.TFRecordWriter("%s-%s-%05i-of-%05i" %
(output_file, split, i, num_output_shards)))
try:
yield writers
finally:
for split in ["train", "valid", "test"]:
for w in writers[split]:
w.close()
def pick_output_shard(num_shards):
return random.randint(0, num_shards - 1)
def size_normalization(drawing):
def get_bounding_box(drawing):
minx = 99999
miny = 99999
maxx = 0
maxy = 0
for s in drawing:
minx = min(minx, min(s[0]))
maxx = max(maxx, max(s[0]))
miny = min(miny, min(s[1]))
maxy = max(maxy, max(s[1]))
return (minx, miny, maxx, maxy)
bb = get_bounding_box(drawing)
width, height = bb[2] - bb[0], bb[3] - bb[1]
offset_x, offset_y = bb[0], bb[1]
if height < 1e-6:
height = 1
size_normalized_drawing = [[[(x - offset_x) / height for x in stroke[0]],
[(y - offset_y) / height for y in stroke[1]],
[t for t in stroke[2]]]
for stroke in drawing]
return size_normalized_drawing
def resample_ink(drawing, timestep):
def resample_stroke(stroke, timestep):
def interpolate(t, t_prev, t_next, v0, v1):
d0 = abs(t-t_prev)
d1 = abs(t-t_next)
dist_sum = d0 + d1
d0 /= dist_sum
d1 /= dist_sum
return d1 * v0 + d0 * v1
x,y,t = stroke
if len(t) < 3:
return stroke
r_x, r_y, r_t = [x[0]], [y[0]], [t[0]]
final_time = t[-1]
stroke_time = final_time - t[0]
necessary_steps = int(stroke_time / timestep)
i = 1
current_time = t[i]
while current_time < final_time:
current_time += timestep
while i < len(t) - 1 and current_time > t[i]:
i += 1
r_x.append(interpolate(current_time, t[i-1], t[i], x[i-1], x[i]))
r_y.append(interpolate(current_time, t[i-1], t[i], y[i-1], y[i]))
r_t.append(interpolate(current_time, t[i-1], t[i], t[i-1], t[i]))
return [r_x, r_y, r_t]
resampled = [resample_stroke(s, timestep) for s in drawing]
return resampled
for json_file in JSON_FILES:
counts = collections.defaultdict(int)
with create_tfrecord_writers(os.path.join(LOCAL_DATA_DIR, json_file + ".tfrecord"), NUM_TFRECORD_SHARDS) as writers:
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
dot = get_label_file_contents("dot", ink["label_id"])
ink["drawing"] = size_normalization(ink["drawing"])
ink["drawing"] = resample_ink(ink["drawing"], 20)
example = ink_to_tfexample(ink, dot)
counts[ink["split"]] += 1
writers[ink["split"]][pick_output_shard(NUM_TFRECORD_SHARDS)].write(example.SerializeToString())
print ("Finished writing: %s train: %i valid: %i test: %i" %(json_file, counts["train"], counts["valid"], counts["test"]))
# Download the TFRecord files to local machine (or use the filemanager on the left).
for json_file in JSON_FILES:
for split in ["train", "valid", "test"]:
for i in range(NUM_TFRECORD_SHARDS):
filename = os.path.join(LOCAL_DATA_DIR, json_file + ".tfrecord-%s-%05i-of-%05i" % (split, i, NUM_TFRECORD_SHARDS))
print(filename)
files.download(filename)
stats = {}
# Compute some dataset statistics
def count_points_strokes(ink):
return sum([len(stroke[0]) for stroke in ink]), len(ink)
# Collect data to compute statistics
for json_file in JSON_FILES:
stats[json_file] = collections.defaultdict(list)
with open(os.path.join(LOCAL_DATA_DIR, json_file)) as f:
for line in f:
ink = json.loads(line)
points, strokes = count_points_strokes(ink["drawing"])
stats[json_file]["points"].append(points)
stats[json_file]["strokes"].append(strokes)
stats[json_file]["labels"].append(ink["label_id"])
print (json_file)
for i in ["points", "strokes"]:
print (i, min(stats[json_file][i]), max(stats[json_file][i]), statistics.median(stats[json_file][i]))
for i in ["labels"]:
labels, counts = np.unique(stats[json_file][i], return_counts=True)
print (i, len(labels), min(counts), max(counts), statistics.median(counts))
print()
```
# Creating a class
```
class Student: # created a class "Student"
name = "Tom"
grade = "A"
age = 15
def display(self):
print(self.name,self.grade,self.age)
# There is no output here because we never invoke (call) the "display" function
```
## Creating an object
```
class Student:
name = "Tom"
grade = "A"
age = 15
def display(self):
print(self.name,self.grade,self.age)
s1 = Student() # created an object "s1" of class "Student"
s1.display() # displaying the details through the "display" function
```
## Creating a constructor
> If we pass parameters to the constructor (i.e., `__init__` takes arguments besides `self`), it is called a "parameterized constructor".
> If the constructor takes no parameters (besides `self`), it is called a "non-parameterized constructor".

```
# This is a parameterized constructor
class Student:
def __init__(self,name,study,occupation): # initializing all the parameters we need, i.e., name, study, occupation, in the constructor
self.name = name
self.study = study
self.occupation = occupation
def output(self):
print(self.name + " completed " + self.study + " and working as a " + self.occupation)
s1 = Student('Tom', 'Btech', 'software engineer') # creating two objects, passing the
s2 = Student('Jerry', "MBBS", 'doctor') # arguments in the order declared in "__init__"
s1.output()
s2.output()
# This is a non-parameterized constructor
class Student:
def __init__(self):
print("This is a non-parameterized constructor")
s1 = Student()
```
## Python in-built class functions
```
class Student:
def __init__(self,name,grade,age):
self.name = name
self.grade = grade
self.age = age
s1 = Student("Tom","A",15)
print(getattr(s1,'name')) # we get the value of the particular attribute
print(getattr(s1,"age")) # Here, we ask for the attributes "name" and "age"; their values are "Tom" and 15 respectively
setattr(s1,"age",20) # setting the attribute (changing)
print("Age of Tom is changed using 'setattr'")
print(getattr(s1,"age"))
print("Checking whether the particular attribute is there or not")
print(hasattr(s1,"name")) # Returns "True" if the attribute is defined on the object
print(hasattr(s1,"school")) # and "False" otherwise
```
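There is a fourth built-in in the same family, `delattr`, which removes an attribute from an object (a small sketch of our own):

```python
class Student:
    def __init__(self, name, grade, age):
        self.name = name
        self.grade = grade
        self.age = age

s1 = Student("Tom", "A", 15)
delattr(s1, "age")           # removes the instance attribute "age"
print(hasattr(s1, "age"))    # False
print(hasattr(s1, "name"))   # True
```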
## Built-in class attributes
```
class Student:
'''This is a doc string, where we state the purpose of this program'''
def __init__(self,name,grade,age):
self.name = name
self.grade = grade
self.age = age
s1 = Student("Tom","A",15)
print(Student.__doc__) # printing the doc string
print(s1.__dict__) # printing the attributes in a dictionary data type way
```
# Inheritance
```
class Parent:
print("This is the parent class")
def dog(self):
print("Dog barks")
class Child(Parent): # Inheriting the "parent" class using "child" class
def lion(self):
print("Lion roars")
c1 = Child() # "c1" is the object of "Child" class
c1.lion()
c1.dog() # because of inheritance, the "dog" method defined in the "Parent" class is available on "Child" objects, so its print statement also runs
```
## Multi-level inheritance
```
class Parent:
print("This is the parent class")
def dog(self):
print("Dog barks")
class Child(Parent): # Inheriting the "parent" class using "child" class
def lion(self):
print("Lion roars")
class Grandchild(Child): # Inheriting the "Child" class
    def pigeon(self):
        print("pigeon coos")
c1 = Grandchild() # "c1" is the object of "Grandchild" class
c1.lion() # inherited from "Child"
c1.dog() # inherited from "Parent" through "Child", so its print statement still runs
c1.pigeon() # defined directly on "Grandchild"
```
# Multiple inheritance
```
class Calculator1:
def sum(self,a,b):
return a + b
class Calculator2:
def mul(self,a,b):
return a * b
class Derived(Calculator1,Calculator2): # Multiple inheritance: "Derived" inherits from multiple (here, two) base classes.
def div(self,a,b):
return a / b
d = Derived()
print(d.sum(20,30))
print(d.mul(20,30))
print(d.div(20,30))
```
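When the parent classes in a multiple-inheritance declaration define a method with the same name, Python resolves the call via the Method Resolution Order (MRO), searching the base classes left to right as listed. A small sketch of our own:

```python
class A:
    def greet(self):
        return "from A"

class B:
    def greet(self):
        return "from B"

class C(A, B):  # A is listed first, so A.greet is found first
    pass

c = C()
print(c.greet())                            # from A
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']
```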
# Polymorphism
```
class Teacher:
def intro(self):
print("I am a teacher")
def experience(self):
print("3 to 4 years")
class Lecturer:
def intro(self):
print("I am a lecturer")
def experience(self):
print("5 to 6 years")
class Professor:
def intro(self):
print("I am a professor")
def experience(self):
print("8 to 10 years")
# Common Interface for all persons
def category(person):
    person.intro() # only the intros are printed
    # call "person.experience()" instead of "person.intro()" to print only the
    # experience; calling both prints both statements
# instantiate objects
t = Teacher()
l = Lecturer()
p = Professor()
# passing the object
category(t)
category(l)
category(p)
```
# Encapsulation
```
class Computer:
def __init__(self):
self.__maxprice = 900 # "__maxprice" acts as private data because its name starts with two underscores
def sell(self):
print("Selling Price: {}".format(self.__maxprice))
def setMaxPrice(self, price): # This method is used to set the private data
self.__maxprice = price
c = Computer() # c is an object of "Computer" class
c.sell()
# change the price
c.__maxprice = 1000 # Here we try to modify "__maxprice" to 1000 directly. The price printed by sell() does not change, because this assignment creates a new attribute rather than touching the private one
c.sell()
# using setter function
c.setMaxPrice(1000) # To change the private data, we go through the setter method "setMaxPrice"; now the value is modified
c.sell() # Invoking (calling) the "sell" method (function)
```
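The reason `c.__maxprice = 1000` has no effect on `sell()` is *name mangling*: inside the class body, `__maxprice` is stored under the name `_Computer__maxprice`, so the assignment outside the class merely creates a separate, unrelated attribute. A short standalone sketch:

```python
class Computer:
    def __init__(self):
        self.__maxprice = 900  # stored as _Computer__maxprice via name mangling

c = Computer()
c.__maxprice = 1000            # creates a NEW attribute; the mangled name is untouched
print(c._Computer__maxprice)   # 900  (the "private" value is unchanged)
print(c.__maxprice)            # 1000 (the unrelated attribute we just created)
```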
## Data abstraction
```
from abc import ABC, abstractmethod
class Company(ABC): # this is the abstract class; "ABC" stands for "Abstract Base Class" and is imported from the "abc" module
    # "@" introduces a decorator; the "@abstractmethod" decorator is what marks
    # the method as abstract (the older "abstractclassmethod" is deprecated)
    @abstractmethod
    def developer(self):
        pass
class Jr_developer(Company):
def developer(self):
print("I am a jr.developer and develops small applications")
class Sr_developer(Company):
def developer(self):
print("I am a sr.developer and develops large applications")
j = Jr_developer()
s = Sr_developer()
j.developer()
s.developer()
```
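One property the example above does not show: a class with an unimplemented abstract method cannot be instantiated at all. A small standalone sketch (class names are our own):

```python
from abc import ABC, abstractmethod

class AbstractCompany(ABC):
    @abstractmethod
    def developer(self):
        ...

class ConcreteDeveloper(AbstractCompany):
    def developer(self):          # concrete override satisfies the ABC
        return "develops applications"

try:
    AbstractCompany()             # raises TypeError: abstract method missing
except TypeError as err:
    print("TypeError:", err)

print(ConcreteDeveloper().developer())
```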
# Linear Discriminant Analysis (LDA)
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
## Applying LDA
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components = 2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
```
## Training the Logistic Regression model on the Training set
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
```
## Making the Confusion Matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
## Visualising the Training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
```
## Visualising the Test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Inference PyTorch Bert Model with ONNX Runtime on CPU
In this tutorial, you'll learn how to load a Bert model from PyTorch, convert it to ONNX, and run inference on it with high performance using ONNX Runtime. In the following sections, we use the Bert model trained on the Stanford Question Answering Dataset (SQuAD) as an example. The Bert SQuAD model is used in question answering scenarios, where the answer to every question is a segment of text (a span) from the corresponding reading passage, or the question might be unanswerable.
This notebook is for CPU inference. For GPU inference, please look at another notebook [Inference PyTorch Bert Model with ONNX Runtime on GPU](PyTorch_Bert-Squad_OnnxRuntime_GPU.ipynb).
## 0. Prerequisites ##
If you have Jupyter Notebook, you may directly run this notebook. We will use pip to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/) and other required packages.
Otherwise, you can set up a new environment. First, install [Anaconda](https://www.anaconda.com/distribution/). Then open an Anaconda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.6
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook and we can open this notebook in browser to continue.
```
# Install or upgrade PyTorch 1.5.0 and OnnxRuntime 1.3.0 for CPU-only.
import sys
!{sys.executable} -m pip install --upgrade torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.3.0
!{sys.executable} -m pip install --upgrade onnxruntime-tools
# Install other packages used in this notebook.
!{sys.executable} -m pip install transformers==2.11.0
!{sys.executable} -m pip install wget netron
```
## 1. Load Pretrained Bert model ##
We begin by downloading the SQuAD data file and storing it in the specified location.
```
import os
cache_dir = "./squad"
if not os.path.exists(cache_dir):
os.makedirs(cache_dir)
predict_file_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
predict_file = os.path.join(cache_dir, "dev-v1.1.json")
if not os.path.exists(predict_file):
import wget
print("Start downloading predict file.")
wget.download(predict_file_url, predict_file)
print("Predict file downloaded.")
```
Specify some model configuration variables and constants.
```
# For fine tuned large model, the model name is "bert-large-uncased-whole-word-masking-finetuned-squad". Here we use bert-base for demo.
model_name_or_path = "bert-base-cased"
max_seq_length = 128
doc_stride = 128
max_query_length = 64
# Enable overwrite to export the ONNX model each time this notebook runs.
enable_overwrite = True
# Total samples to run inference on; should be large enough for a stable latency measurement.
total_samples = 100
```
Load the pretrained model and tokenizer. This step could take a few minutes.
```
# The following code is adapted from HuggingFace transformers
# https://github.com/huggingface/transformers/blob/master/examples/run_squad.py
from transformers import (BertConfig, BertForQuestionAnswering, BertTokenizer)
# Load pretrained model and tokenizer
config_class, model_class, tokenizer_class = (BertConfig, BertForQuestionAnswering, BertTokenizer)
config = config_class.from_pretrained(model_name_or_path, cache_dir=cache_dir)
tokenizer = tokenizer_class.from_pretrained(model_name_or_path, do_lower_case=False, cache_dir=cache_dir)  # bert-base-cased is a cased model, so do not lowercase
model = model_class.from_pretrained(model_name_or_path,
from_tf=False,
config=config,
cache_dir=cache_dir)
# load some examples
from transformers.data.processors.squad import SquadV1Processor
processor = SquadV1Processor()
examples = processor.get_dev_examples(None, filename=predict_file)
from transformers import squad_convert_examples_to_features
features, dataset = squad_convert_examples_to_features(
examples=examples[:total_samples], # convert just enough examples for this notebook
tokenizer=tokenizer,
max_seq_length=max_seq_length,
doc_stride=doc_stride,
max_query_length=max_query_length,
is_training=False,
return_dataset='pt'
)
```
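The `doc_stride` used above controls how a long passage is split into overlapping windows: each new window starts `doc_stride` tokens after the previous one. A toy sketch of the idea (plain Python; the real tokenizer operates on wordpieces and also accounts for the question length):

```python
def sliding_windows(tokens, max_len, stride):
    """Split a token list into windows of at most max_len tokens,
    each new window starting `stride` tokens after the previous one."""
    windows = []
    start = 0
    while True:
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return windows

tokens = list(range(10))
print(sliding_windows(tokens, max_len=4, stride=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Adjacent windows overlap by `max_len - stride` tokens, so an answer span near a window boundary is fully contained in at least one window.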
## 2. Export the loaded model ##
Once the model is loaded, we can export the loaded PyTorch model to ONNX.
```
output_dir = "./onnx"
if not os.path.exists(output_dir):
os.makedirs(output_dir)
export_model_path = os.path.join(output_dir, 'bert-base-cased-squad.onnx')
import torch
device = torch.device("cpu")
# Get the first example data to run the model and export it to ONNX
data = dataset[0]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
# Set model to inference mode, which is required before exporting the model because some operators behave differently in
# inference and training mode.
model.eval()
model.to(device)
if enable_overwrite or not os.path.exists(export_model_path):
with torch.no_grad():
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model, # model being run
args=tuple(inputs.values()), # model input (or a tuple for multiple inputs)
f=export_model_path, # where to save the model (can be a file or file-like object)
opset_version=11, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names=['input_ids', # the model's input names
'input_mask',
'segment_ids'],
output_names=['start', 'end'], # the model's output names
dynamic_axes={'input_ids': symbolic_names, # variable length axes
'input_mask' : symbolic_names,
'segment_ids' : symbolic_names,
'start' : symbolic_names,
'end' : symbolic_names})
print("Model exported at ", export_model_path)
```
## 3. PyTorch Inference ##
Use PyTorch to run inference on the examples, for comparison purposes.
```
import time
# Measure latency. Timing inside a Jupyter notebook is not accurate; use a standalone Python script for reliable numbers.
latency = []
with torch.no_grad():
for i in range(total_samples):
data = dataset[i]
inputs = {
'input_ids': data[0].to(device).reshape(1, max_seq_length),
'attention_mask': data[1].to(device).reshape(1, max_seq_length),
'token_type_ids': data[2].to(device).reshape(1, max_seq_length)
}
start = time.time()
outputs = model(**inputs)
latency.append(time.time() - start)
print("PyTorch {} Inference time = {} ms".format(device.type, format(sum(latency) * 1000 / len(latency), '.2f')))
```
## 4. Inference ONNX Model with ONNX Runtime ##
### OpenMP Environment Variable
OpenMP environment variables are very important for CPU inference of BERT models. They have a large performance impact, so you might need to set them carefully according to the [Performance Test Tool](#Performance-Test-Tool) results later in this notebook.
Environment variables must be set before importing onnxruntime; otherwise, they might not take effect.
```
import psutil
# You may change the settings in this cell according to Performance Test Tool result.
os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=True))
os.environ["OMP_WAIT_POLICY"] = 'ACTIVE'
```
Now we are ready to run inference with ONNX Runtime. Here we can see that OnnxRuntime performs better than PyTorch.
For accurate performance results, it is better to use a standalone Python script such as the [Performance Test Tool](#Performance-Test-Tool).
```
import onnxruntime
import numpy
# Print warning if user uses onnxruntime-gpu instead of onnxruntime package.
if 'CUDAExecutionProvider' in onnxruntime.get_available_providers():
print("warning: onnxruntime-gpu is not built with OpenMP. You might try onnxruntime package to test CPU inference.")
sess_options = onnxruntime.SessionOptions()
# Optional: store the optimized graph and view it using Netron to verify that model is fully optimized.
# Note that this will increase session creation time, so it is for debugging only.
sess_options.optimized_model_filepath = os.path.join(output_dir, "optimized_model_cpu.onnx")
# intra_op_num_threads is needed for OnnxRuntime 1.2.0.
# For OnnxRuntime 1.3.0 or later, this setting has no effect unless you are using the onnxruntime-gpu package.
sess_options.intra_op_num_threads=1
# Specify providers when you use onnxruntime-gpu for CPU inference.
session = onnxruntime.InferenceSession(export_model_path, sess_options, providers=['CPUExecutionProvider'])
latency = []
for i in range(total_samples):
data = dataset[i]
# TODO: use IO Binding (see https://github.com/microsoft/onnxruntime/pull/4206) to improve performance.
ort_inputs = {
'input_ids': data[0].cpu().reshape(1, max_seq_length).numpy(),
'input_mask': data[1].cpu().reshape(1, max_seq_length).numpy(),
'segment_ids': data[2].cpu().reshape(1, max_seq_length).numpy()
}
start = time.time()
ort_outputs = session.run(None, ort_inputs)
latency.append(time.time() - start)
print("OnnxRuntime cpu Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))
print("***** Verifying correctness *****")
for i in range(2):
print('PyTorch and ONNX Runtime output {} are close:'.format(i), numpy.allclose(ort_outputs[i], outputs[i].cpu(), rtol=1e-05, atol=1e-04))
```
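The `numpy.allclose` check above passes when `|a - b| <= atol + rtol * |b|` holds element-wise, so `atol` dominates for small magnitudes and `rtol` for large ones. A small standalone illustration:

```python
import numpy as np

a = np.array([1.00005, 100.0005])
b = np.array([1.0,     100.0])

# With both tolerances, each difference is within atol + rtol * |b|:
print(np.allclose(a, b, rtol=1e-05, atol=1e-04))  # True
# With atol=0, the small-magnitude pair fails: 5e-5 > 1e-5 * 1.0
print(np.allclose(a, b, rtol=1e-05, atol=0.0))    # False
```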
## 5. Offline Optimization Script and Test Tools
It is recommended to try the [OnnxRuntime Transformer Model Optimization Tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers) on the exported ONNX model. It can help verify whether the model can be fully optimized, and it produces performance test results.
#### Transformer Optimizer
Although OnnxRuntime can optimize a BERT model exported by PyTorch, sometimes the model cannot be fully optimized, for several reasons:
* A new subgraph pattern is generated by a newer version of the export tool, and the pattern is not covered by an older version of OnnxRuntime.
* The exported model uses dynamic axes, which makes shape inference of the graph harder and blocks some optimizations.
* Some optimizations are better done offline, such as changing the input tensor type from int64 to int32 to avoid extra Cast nodes, or converting the model to float16 for better performance on V100 or T4 GPUs.
The Python script **optimizer.py** is more flexible in graph pattern matching and model conversion (like float32 to float16). You can also use it to verify whether a BERT model is fully optimized.
In this example, it introduces optimizations not applied by OnnxRuntime: SkipLayerNormalization and bias fusion, which OnnxRuntime does not fuse due to the shape-inference issue mentioned above.
It will also report whether the model is fully optimized; if not, you might need to change the script to fuse a new subgraph pattern.
Example usage:
```
from onnxruntime_tools import optimizer
# Define the output path before using it below.
optimized_model_path = './onnx/bert-base-cased-squad_opt_cpu.onnx'
optimized_model = optimizer.optimize_model(export_model_path, model_type='bert', num_heads=12, hidden_size=768)
optimized_model.save_model_to_file(optimized_model_path)
```
You can also use optimizer_cli like the following:
```
optimized_model_path = './onnx/bert-base-cased-squad_opt_cpu.onnx'
!{sys.executable} -m onnxruntime_tools.optimizer_cli --input $export_model_path --output $optimized_model_path --model_type bert --num_heads 12 --hidden_size 768
```
#### Optimized Graph
When you open the optimized model in Netron, the graph looks like the following:
<img src='images/optimized_bert_gpu.png'>
For CPU, the optimized graph is slightly different: FastGelu is replaced by BiasGelu.
```
import netron
# Change it to False to skip viewing the optimized model in browser.
enable_netron = True
if enable_netron:
# If you encounter error "access a socket in a way forbidden by its access permissions", install Netron as standalone application instead.
netron.start(optimized_model_path)
```
#### Model Results Comparison Tool
If your BERT model has three inputs, the script compare_bert_results.py can be used for a quick verification. The tool generates fake input data and compares results from the original and optimized models. If the outputs are all close, it is safe to use the optimized model.
Example of verifying models:
```
!{sys.executable} -m onnxruntime_tools.transformers.compare_bert_results --baseline_model $export_model_path --optimized_model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100
```
#### Performance Test Tool
This tool measures the inference performance of BERT models using the OnnxRuntime Python API.
The following command creates 100 samples of batch size 1 and sequence length 128, runs inference, and calculates performance numbers such as average latency and throughput. You can increase the number of samples (1000 recommended) to get a more stable result.
```
!{sys.executable} -m onnxruntime_tools.transformers.bert_perf_test --model $optimized_model_path --batch_size 1 --sequence_length 128 --samples 100 --test_times 1 --intra_op_num_threads 1 --inclusive --all
```
Let's load the summary file and take a look.
```
import os
import glob
import pandas
latest_result_file = max(glob.glob("./onnx/perf_results_*.txt"), key=os.path.getmtime)
result_data = pandas.read_table(latest_result_file, converters={'OMP_NUM_THREADS': str, 'OMP_WAIT_POLICY':str})
print(latest_result_file)
# Remove some columns that have same values for all rows.
columns_to_remove = ['model', 'graph_optimization_level', 'batch_size', 'sequence_length', 'test_cases', 'test_times', 'use_gpu', 'warmup']
# Hide some latency percentile columns to fit screen width.
columns_to_remove.extend(['Latency_P50', 'Latency_P95'])
result_data.drop(columns_to_remove, axis=1, inplace=True)
result_data
```
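The `Latency_P50`/`Latency_P95` columns hidden above are latency percentiles. Given raw per-sample timings they can be computed directly; percentiles are often more informative than the average, which a single outlier can skew (synthetic numbers for illustration):

```python
import numpy as np

latency_ms = np.array([8.0, 9.0, 10.0, 11.0, 50.0])  # synthetic timings with one outlier

print("average:", np.mean(latency_ms))        # 17.6 ms, skewed by the outlier
print("P50:", np.percentile(latency_ms, 50))  # 10.0 ms, robust to the outlier
print("P95:", np.percentile(latency_ms, 95))  # close to the worst case
```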
## 6. Additional Info
Note that running inside Jupyter Notebook has a slight impact on performance results, since Jupyter itself uses system resources such as CPU and memory. It is recommended to close Jupyter Notebook and other applications, then run the performance test tool from a console to get more accurate numbers.
We also have a [benchmark script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/run_benchmark.sh). It is recommended for comparing the inference speed of OnnxRuntime with PyTorch.
The [OnnxRuntime C API](https://github.com/microsoft/onnxruntime/blob/master/docs/C_API.md) can achieve slightly better performance than the Python API. If you use the C API for inference, you can instead measure performance with OnnxRuntime_Perf_Test.exe built from source.
Below is the machine configuration that generated the above results. The machine has a GPU, but it is not used for CPU inference.
You might get slower or faster results depending on your hardware.
```
!{sys.executable} -m onnxruntime_tools.transformers.machine_info --silent
```
```
!pip install chart_studio
import plotly.graph_objects as go
import plotly.offline as offline_py
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import plotly.figure_factory as ff
import numpy as np
%matplotlib inline
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/DSEI21000-S21/project-product-price-prediction/main/data/random_samples/stratified_sampling_data_by_price_whigh_sz50000_1619218354.csv")
# size of dataset
print('The size of the dataset is: {} \n'.format(df.shape))
# different data types in the dataset
print('The types of the dataset: {}'.format(df.dtypes))
df.head()
df.price.describe()
# most popular categories -- Women, electronics and men
x = df['c1'].value_counts().index.values.astype('str')[:15]
y = df['c1'].value_counts().values[:15]
pct = [("%.2f" % (v * 100)) + "%" for v in (y / len(df))][:15]
trace1 = go.Bar(x=x, y=y, text=pct)
layout = dict(title='Number of Items by Main Category',
              yaxis=dict(title='Count'),
              xaxis=dict(title='Category'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
x = df['brand_name'].value_counts().index.values.astype('str')[:15]
y = df['brand_name'].value_counts().values[:15]
pct = [("%.2f" % (v * 100)) + "%" for v in (y / len(df))][:15]
colorscale = [[0, '#FAEE1C'], [0.33, '#F3558E'], [0.66, '#9C1DE7'], [1, '#581B98']]
# most popular brands -- Nike & PINK
trace1 = go.Bar(x=x, y=y, text=pct, marker=dict(color = y, colorscale=colorscale, showscale=True))
layout = dict(title= 'Number of Items by brand name',
yaxis = dict(title='Count'),
xaxis = dict(title='Brand'))
fig=dict(data=[trace1], layout=layout)
offline_py.iplot(fig)
dataframe = df[df.brand_name == 'Nike'][:100]
datawomen = dataframe.loc[:, ['price', 'shipping']]
datawomen["index"] = np.arange(1,len(datawomen)+1)
fig = ff.create_scatterplotmatrix(datawomen, diag='box', index='index',colormap='Portland',
colormap_type='cat',
height=700, width=700)
offline_py.iplot(fig)
# visualize which words have the highest frequencies within the top category
description = df.item_description[df.c1 == 'women']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # display the word cloud image
plt.axis('off')  # hide the x and y axes
plt.title('Top Words -- Women')
plt.show()
description = df.item_description[df.c1 == 'electronics']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # display the word cloud image
plt.axis('off')  # hide the x and y axes
plt.title('Top Words -- Electronics')
plt.show()
description = df.item_description[df.c1 == 'men']
plt.subplots(figsize = (8,8))
wordcloud = WordCloud (
background_color = 'white',
width = 512,
height = 384
).generate(' '.join(description))
plt.imshow(wordcloud)  # display the word cloud image
plt.axis('off')  # hide the x and y axes
plt.title('Top Words -- Men')
plt.show()
```
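A word cloud is essentially a visualization of word frequencies. The underlying counts can be sketched with the standard library (a simplified version: `WordCloud` additionally removes stopwords and scales font sizes by frequency; the sample descriptions below are made up):

```python
from collections import Counter

descriptions = [
    "new with tags nike shoes",
    "nike running shoes size 8",
    "brand new case for iphone",
]
# Join all descriptions, lowercase, and split on whitespace:
words = " ".join(descriptions).lower().split()
print(Counter(words).most_common(3))
# "new", "nike" and "shoes" each appear twice
```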
```
# Load the text classification dataset
from sklearn.datasets import fetch_20newsgroups
import random
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
print("sample several datas: ")
print("X_train: ", X_train[0: 2])
print("Y_train:", y_train[0: 2])
# Extract TF-IDF features from the text
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
def TFIDF(X_train, X_test, MAX_NB_WORDS=75000):
vectorizer_x = TfidfVectorizer(max_features=MAX_NB_WORDS)
X_train = vectorizer_x.fit_transform(X_train).toarray()
X_test = vectorizer_x.transform(X_test).toarray()
print("tf-idf with", str(np.array(X_train).shape[1]),"features")
return X_train, X_test
X_train, X_test = TFIDF(X_train, X_test)
# Reduce the dimensionality of the text features with PCA
from sklearn.decomposition import PCA
pca = PCA(n_components=2000)
X_train_new = pca.fit_transform(X_train)
X_test_new = pca.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce the dimensionality of the data with LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
LDA = LinearDiscriminantAnalysis(n_components=15)
X_train_new = LDA.fit(X_train, y_train)
X_train_new = LDA.transform(X_train)
X_test_new = LDA.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce the dimensionality of the data with NMF
from sklearn.decomposition import NMF
NMF_ = NMF(n_components=2000)
X_train_new = NMF_.fit(X_train)
X_train_new = NMF_.transform(X_train)
X_test_new = NMF_.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# Reduce the dimensionality of the data with random projection
from sklearn import random_projection
RandomProjection = random_projection.GaussianRandomProjection(n_components=2000)
X_train_new = RandomProjection.fit_transform(X_train)
X_test_new = RandomProjection.transform(X_test)
print("train with old features: ", np.array(X_train).shape)
print("train with new features:", np.array(X_train_new).shape)
print("test with old features: ", np.array(X_test).shape)
print("test with new features:", np.array(X_test_new).shape)
# about T-SNE
import numpy as np
from sklearn.manifold import TSNE
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
X_embedded = TSNE(n_components=2).fit_transform(X)
print(X_embedded.shape)
# Rocchio classification
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', NearestCentroid()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# boosting classification
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', GradientBoostingClassifier(n_estimators=100)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# bagging classifier
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', BaggingClassifier(KNeighborsClassifier())),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Naive Bayes Classifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# K-nearest Neighbor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', KNeighborsClassifier()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Support Vector Machine (SVM)
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LinearSVC()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Decision Tree
from sklearn import tree
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', tree.DecisionTreeClassifier()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
# Random Forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
newsgroups_test = fetch_20newsgroups(subset='test')
X_train = newsgroups_train.data
X_test = newsgroups_test.data
y_train = newsgroups_train.target
y_test = newsgroups_test.target
text_clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', RandomForestClassifier(n_estimators=100)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
print(metrics.classification_report(y_test, predicted))
```
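Every pipeline above starts from TF-IDF (or count) features. The textbook TF-IDF weighting can be sketched in plain Python; note that scikit-learn's `TfidfVectorizer` additionally smooths the IDF and L2-normalizes each row:

```python
import math
from collections import Counter

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
n_docs = len(docs)

def tf_idf(term, doc):
    tf = Counter(doc)[term] / len(doc)   # term frequency within this document
    df = sum(term in d for d in docs)    # number of documents containing the term
    idf = math.log(n_docs / df)          # rarer terms get larger weights
    return tf * idf

print(tf_idf("the", docs[0]))  # 0.0 -- "the" appears in every document
print(tf_idf("cat", docs[0]))  # > 0 -- "cat" appears in only 2 of 3 documents
```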
# MDP from multidimensional HJB
See the [pdf](https://github.com/songqsh/foo1/blob/master/doc/191206HJB.pdf) for its mathematical derivation.
See the source code at:
- [py](hjb_mdp_v05_3.py) for the tabular approach and
- [py](hjb_mdp_nn_v05.py) for the deep learning approach
```
import numpy as np
import time
#import ipdb
import itertools
def deep_iter(*shape):
iters = (range(i) for i in shape)
return itertools.product(*iters)
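# Example: deep_iter(2, 2) yields (0, 0), (0, 1), (1, 0), (1, 1) -- a
# row-major sweep over a 2x2 grid. value_iter below uses this helper to
# visit every index of the state mesh and of the action mesh.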
class Pde:
def __init__(
self,
dim=1,
lam=0.0,
drift = lambda s,a: a,
run_cost = lambda s,a: len(s) + np.sum(s**2)*2.+ np.sum(a**2)/2.0,
term_cost = lambda s: -np.sum(s**2),
limit_s = 1.0, #l-infinity limit for state
limit_a = 2.0, #l-infinity limit for action
verbose=True
):
self.dim = dim
self.lam = lam
self.drift = drift
self.run_cost = run_cost
self.term_cost = term_cost
self.limit_s = limit_s
self.limit_a = limit_a
if verbose:
print(str(dim) + '-dim HJB')
#domain is a unit hyper cube
def is_interior(self, s):
        return all((0 < s) & (s < 1))  # element-wise check; a chained comparison raises ValueError on arrays
#cfd2mdp
def mdp(self, n_mesh_s = 8, n_mesh_a = 16, method='cfd'):
out = {}
####domain of mdp
h_s = self.limit_s/n_mesh_s #mesh size in state
h_a = self.limit_a/n_mesh_a #mesh size in action
v_shape = tuple([n_mesh_s + 1]*self.dim)
a_shape = tuple([n_mesh_a + 1]*self.dim)
def is_interior(*ix_s):
return all([0<x<n_mesh_s for x in ix_s])
out.update({
'v_shape': v_shape,
'a_shape': a_shape,
'is_interior': is_interior
})
####domain
# convert index(tuple) to state
def i2s(*ix):
return np.array([x * h_s for x in ix])
out['i2s'] = i2s
#convert index to action
def i2a(*ix):
return np.array([x * h_a for x in ix])
#out['i2a'] = i2a
########running and terminal costs and discount rate
def run_cost(ix_s,ix_a):
return self.run_cost(i2s(*ix_s), i2a(*ix_a))*h_s**2/self.dim
def term_cost(ix_s):
return self.term_cost(i2s(*ix_s))
rate = self.dim/(self.dim+self.lam*(h_s**2))
out.update({
'run_cost': run_cost,
'term_cost': term_cost,
'rate': rate
})
#########
#####transition
#return:
# a list of nbd indices
# a list of prob
def step(ix_s, ix_a):
ix_next_s_up = (np.array(ix_s)+np.eye(self.dim)).astype(int).tolist()
ix_next_s_dn = (np.array(ix_s)-np.eye(self.dim)).astype(int).tolist()
ix_next_s = [tuple(ix) for ix in ix_next_s_up+ix_next_s_dn]
pr=[]
if method == 'cfd':
b = self.drift(i2s(*ix_s), i2a(*ix_a))
pr_up = ((1+2.*h_s*b)/self.dim/2.0).tolist()
pr_dn = ((1-2.*h_s*b)/self.dim/2.0).tolist()
pr = pr_up+pr_dn
return ix_next_s, pr
out.update({'step': step})
return out
def value_iter(v_shape, a_shape, i2s, is_interior,
run_cost, term_cost, rate, step):
dim = len(v_shape)
v0 = np.zeros(v_shape)
# boundary value
for ix_s in deep_iter(*v_shape):
if not is_interior(*ix_s):
v0[ix_s]=term_cost(ix_s)
v1 = v0.copy()
for iter_n in range(100):
for ix_s0 in deep_iter(*v_shape):
if is_interior(*ix_s0):
q1 = []
for ix_a in deep_iter(*a_shape):
rhs = run_cost(ix_s0, ix_a)
ix_s1, pr = step(ix_s0, ix_a);
for k in range(2*dim):
rhs += v0[ix_s1[k]]*pr[k]
q1 += [rhs,]
v1[ix_s0] = rate*min(q1);
if np.max(np.abs(v0 - v1)) < 1e-3:
v0 = v1.copy()
break
v0 = v1.copy();
#iter_n += 1
return iter_n, v0
p = Pde(dim=2); m = p.mdp(n_mesh_s=16)
start_time = time.time()
n, v = value_iter(**m)
end_time = time.time()
print('>>>time elapsed is: ' + str(end_time - start_time))
def true_soln(s):
return -np.sum(s**2)
err = []
for ix_s in deep_iter(*m['v_shape']):
err0 = np.abs(v[ix_s] - true_soln(m['i2s'](*ix_s)))
err += [err0, ]
print('>>> sup norm error is: ' + str(max(err)))
print('>>> number of iterations is: ' + str(n))
```
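The fixed-point idea behind `value_iter` can be seen on a trivial example: a two-state toy problem with a single action, where the same Bellman update and sup-norm stopping rule converge to a known answer (a toy sketch, unrelated to the PDE above):

```python
import numpy as np

# Toy value iteration: two absorbing states, discount 0.9, one action,
# per-step cost 1 in state 0 and cost 0 in state 1.
cost = np.array([1.0, 0.0])
gamma = 0.9
v = np.zeros(2)
for _ in range(300):
    v_new = cost + gamma * v                   # Bellman update
    if np.max(np.abs(v_new - v)) < 1e-8:       # sup-norm stopping rule, as above
        v = v_new
        break
    v = v_new
print(v)  # v[0] -> 1 / (1 - 0.9) = 10, v[1] -> 0
```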
```
!nvidia-smi
import sys
if 'google.colab' in sys.modules:
!pip install -Uqq fastcore onnx onnxruntime sentencepiece seqeval rouge-score
!pip install -Uqq --no-deps fastai ohmeow-blurr
!pip install -Uqq transformers datasets wandb
from fastai.text.all import *
from fastai.callback.wandb import *
from transformers import *
from datasets import load_dataset, concatenate_datasets
from blurr.data.all import *
from blurr.modeling.all import *
```
## Data preprocessing
```
ds_name = 'snli'
train_ds = load_dataset(ds_name, split='train')
valid_ds = load_dataset(ds_name, split='validation')
len(train_ds), len(valid_ds)
train_ds.column_names
train_ds[2]
from collections import Counter
Counter(train_ds['label'])
train_ds = train_ds.filter(lambda sample: sample['label'] in [0,1,2])
valid_ds = valid_ds.filter(lambda sample: sample['label'] in [0,1,2])
```
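SNLI marks examples without annotator agreement with label `-1`, which is why the filters above keep only labels 0-2. The same idea on a made-up mini-dataset:

```python
samples = [
    {"premise": "A man runs.", "hypothesis": "A person moves.", "label": 0},
    {"premise": "A man runs.", "hypothesis": "A man sleeps.",   "label": 2},
    {"premise": "A man runs.", "hypothesis": "Unclear.",        "label": -1},  # no gold label
]
kept = [s for s in samples if s["label"] in [0, 1, 2]]
print(len(samples), "->", len(kept))  # 3 -> 2
```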
## Setup
```
model_name = 'distilbert-base-uncased'
# data
max_len = 512
bs = 32
val_bs = bs*2
# training
lr = 2e-5
```
## Tracking
```
import wandb
WANDB_NAME = f'{ds_name}-{model_name}-alum'
GROUP = f'{ds_name}-{model_name}-alum-{lr:.0e}'
NOTES = f'Simple finetuning {model_name} with RAdam lr={lr:.0e}'
CONFIG = {}
TAGS =[model_name,ds_name,'radam','alum']
wandb.init(reinit=True, project="vat", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
```
## Training
```
def _to_device(e, device):
if hasattr(e, 'to'): return e.to(device)
elif isinstance(e, dict):
for _, v in e.items():
if hasattr(v, 'to'): v.to(device)
return {k:(v.to(device) if hasattr(v, 'to') else v) for k, v in e.items()}
@patch
def one_batch(self:Learner, i, b):
self.iter = i
b_on_device = tuple(_to_device(e, self.dls.device) for e in b) if self.dls.device is not None else b
self._split(b_on_device)
self._with_events(self._do_one_batch, 'batch', CancelBatchException)
hf_arch, hf_config, hf_tokenizer, hf_model = BLURR_MODEL_HELPER.get_hf_objects(model_name, model_cls=AutoModelForSequenceClassification, tokenizer_cls=AutoTokenizer,
config_kwargs={'num_labels':3}, tokenizer_kwargs={'max_len':512})
def get_x(sample):
return sample['premise'], sample['hypothesis']
ds = concatenate_datasets([train_ds, valid_ds])
train_idx = list(range(len(train_ds)))
valid_idx = list(range(len(train_ds), len(train_ds)+len(valid_ds)))
# use the number of chars as a proxy for the number of tokens, for simplicity
lens = ds.map(lambda s: {'len': len(s['premise'])+len(s['hypothesis'])}, remove_columns=ds.column_names, num_proc=4)
train_lens = lens.select(train_idx)['len']
valid_lens = lens.select(valid_idx)['len']
blocks = (HF_TextBlock(hf_arch, hf_config, hf_tokenizer, hf_model),
CategoryBlock(vocab={0:'entailment', 1:'neutral', 2:'contradiction'}))
dblock = DataBlock(blocks=blocks,
get_x = get_x,
get_y=ItemGetter('label'),
                   splitter=IndexSplitter(valid_idx))
# dblock.summary(train_ds)
%%time
dls = dblock.dataloaders(ds, bs=bs, val_bs=val_bs, dl_kwargs=[{'res':train_lens}, {'val_res':valid_lens}], num_workers=4)
# b = dls.one_batch()
model = HF_BaseModelWrapper(hf_model)
learn = Learner(dls,
model,
opt_func=RAdam,
metrics=[accuracy],
cbs=[HF_BaseModelCallback],
splitter=hf_splitter).to_fp16()
# learn.blurr_summary()
```
### ALUM finetuning
```
# !pip install git+git://github.com/aikindergarten/vat.git --no-deps -q
from vat.core import ALUMCallback
learn.add_cb(ALUMCallback(learn.model.hf_model.base_model.embeddings, start_epoch=2, alpha=0.5));
learn.fit_one_cycle(5, lr, cbs=WandbCallback(log_preds=False, log_model=False))
learn.validate()
test_ds = load_dataset('snli', split='test')
test_ds[0]
test_ds = test_ds.filter(lambda s: s['label'] in [0,1,2])
test_dl = dls.test_dl(test_ds, with_labels=True)
learn.validate(dl=test_dl)
wandb.finish()
```
## Validation on adversarial data
```
adv_ds = load_dataset('anli', split='test_r1')
adv_ds[0]
test_dl = dls.test_dl(adv_ds, with_labels=True)
learn.validate(dl=test_dl)
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from pathlib import Path
dir_path = Path().resolve().parent / 'demand_patterns'
low_patterns = "demand_patterns_train_low.csv"
fullrange_patterns = "demand_patterns_train_full_range.csv"
combined_pattern = 'demand_patterns_train_combined.csv'
comb = pd.read_csv(dir_path / combined_pattern)
comb
# Build the mixed demand patterns used for training
low_demand = pd.read_csv(dir_path / low_patterns)
fullrange_demand = pd.read_csv(dir_path / fullrange_patterns)
new = pd.concat([low_demand, fullrange_demand], axis=1, ignore_index=True)
new
output_file = dir_path / 'demand_patterns_train_combined.csv'
new.to_csv(output_file, index=False)
new = pd.read_csv(output_file)
new
import pandas as pd
import numpy as np
from pathlib import Path
dir_path = Path().resolve().parent / 'demand_patterns'
test_low_patterns = 'demand_patterns_test_low.csv'
test_full_range_patterns = 'demand_patterns_test_full_range.csv'
test_high_patterns = "demand_patterns_test.csv"
test_middle_patterns = "demand_patterns_test_middle.csv"
df_low = pd.read_csv(dir_path / test_low_patterns)
df_low
sum_of_columns = [df_low.loc[:, index].sum() for index in df_low.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_low[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_low[str(max_column)].sum()))
df_low['4'].sum()
df_full_range = pd.read_csv(dir_path / test_full_range_patterns)
df_full_range
sum_of_columns = [df_full_range.loc[:, index].sum() for index in df_full_range.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_full_range[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_full_range[str(max_column)].sum()))
df_high = pd.read_csv(dir_path / test_high_patterns)
df_high
sum_of_columns = [df_high.loc[:, index].sum() for index in df_high.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_high[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_high[str(max_column)].sum()))
df_middle = pd.read_csv(dir_path / test_middle_patterns)
df_middle
sum_of_columns = [df_middle.loc[:, index].sum() for index in df_middle.columns.values]
max_column = np.argmax(sum_of_columns)
min_column = np.argmin(sum_of_columns)
print("min: " + str(min_column) + ' --> ' + str(df_middle[str(min_column)].sum()))
print("max: " + str(max_column) + ' --> ' + str(df_middle[str(max_column)].sum()))
# Creation of the test dataframe (we take the lowest demand pattern, a central one, and the highest one)
df_new = pd.DataFrame(df_low['4'].values)
df_new.insert(1, '1', df_middle['54'])
df_new.insert(2, '2', df_full_range['132'])
df_new.insert(3, '3', df_high['6'])
df_new
output_file = dir_path / 'demand_patterns_test_mixed.csv'
df_new.to_csv(output_file, index=False)
df = pd.read_csv(output_file)
df
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15,7))
ax.plot(df['0'].values)
ax.plot(df['1'].values)
ax.plot(df['2'].values)
ax.plot(df['3'].values)
ax.set_title("Test demand patterns trend")
ax.legend(('Low', 'Medium', 'Semi-high', 'High'))
```
| github_jupyter |
<a href="http://cocl.us/pytorch_link_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
</a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Linear Regression Multiple Outputs</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will create a model the PyTorch way. This will help you build more complicated models.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Model_Cost">Create the Model and Cost Function the PyTorch way</a></li>
<li><a href="#BGD">Train the Model: Batch Gradient Descent</a></li>
</ul>
<p>Estimated Time Needed: <strong>20 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
```
# Import the libraries we need for this lab
from torch import nn,optim
import torch
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from torch.utils.data import Dataset, DataLoader
```
Set the random seed:
```
# Set the random seed.
torch.manual_seed(1)
```
Use this function for plotting:
```
# The function for plotting the 2D plane
def Plot_2D_Plane(model, dataset, n=0):
w1 = model.state_dict()['linear.weight'].numpy()[0][0]
w2 = model.state_dict()['linear.weight'].numpy()[0][1]
b = model.state_dict()['linear.bias'].numpy()
# Data
x1 = dataset.x[:, 0].view(-1, 1).numpy()
x2 = dataset.x[:, 1].view(-1, 1).numpy()
y = dataset.y.numpy()
# Make plane
X, Y = np.meshgrid(np.arange(x1.min(), x1.max(), 0.05), np.arange(x2.min(), x2.max(), 0.05))
yhat = w1 * X + w2 * Y + b
# Plotting
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(x1[:, 0], x2[:, 0], y[:, 0],'ro', label='y') # Scatter plot
ax.plot_surface(X, Y, yhat) # Plane plot
ax.set_xlabel('x1 ')
ax.set_ylabel('x2 ')
ax.set_zlabel('y')
plt.title('estimated plane iteration:' + str(n))
ax.legend()
plt.show()
```
<!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Make Some Data</h2>
Create a dataset class with two-dimensional features:
```
# Create a 2D dataset
class Data2D(Dataset):
# Constructor
def __init__(self):
self.x = torch.zeros(20, 2)
self.x[:, 0] = torch.arange(-1, 1, 0.1)
self.x[:, 1] = torch.arange(-1, 1, 0.1)
self.w = torch.tensor([[1.0], [1.0]])
self.b = 1
self.f = torch.mm(self.x, self.w) + self.b
self.y = self.f + 0.1 * torch.randn((self.x.shape[0],1))
self.len = self.x.shape[0]
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
# Get Length
def __len__(self):
return self.len
```
Create a dataset object:
```
# Create the dataset object
data_set = Data2D()
```
<h2 id="Model_Cost">Create the Model, Optimizer, and Total Loss Function (Cost)</h2>
Create a customized linear regression module:
```
# Create a customized linear
class linear_regression(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
super(linear_regression, self).__init__()
self.linear = nn.Linear(input_size, output_size)
# Prediction
def forward(self, x):
yhat = self.linear(x)
return yhat
```
Create a model. Use two features: make the input size 2 and the output size 1:
```
# Create the linear regression model and print the parameters
model = linear_regression(2,1)
print("The parameters: ", list(model.parameters()))
```
Create an optimizer object. Set the learning rate to 0.1. <b>Don't forget to enter the model parameters in the constructor.</b>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter2/2.6.2paramater_hate.png" width = "100" alt="How the optimizer works" />
```
# Create the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.1)
```
Create the criterion function that calculates the total loss or cost:
```
# Create the cost function
criterion = nn.MSELoss()
```
Create a data loader object. Set the batch_size equal to 2:
```
# Create the data loader
train_loader = DataLoader(dataset=data_set, batch_size=2)
```
<!--Empty Space for separating topics-->
<h2 id="BGD">Train the Model via Mini-Batch Gradient Descent</h2>
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost:
```
# Train the model
LOSS = []
print("Before Training: ")
Plot_2D_Plane(model, data_set)
epochs = 100
def train_model(epochs):
for epoch in range(epochs):
for x,y in train_loader:
yhat = model(x)
loss = criterion(yhat, y)
LOSS.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_model(epochs)
print("After Training: ")
Plot_2D_Plane(model, data_set, epochs)
# Plot out the Loss and iteration diagram
plt.plot(LOSS)
plt.xlabel("Iterations ")
plt.ylabel("Cost/total loss ")
```
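To see why the running <code>LOSS</code> list is only an approximation of the true total cost: at *fixed* parameters, the average of equal-sized batch MSEs equals the full-dataset MSE exactly; the approximation error during training comes from the parameters changing between batches. A small NumPy sketch of the same linear model illustrates this (all names here are mine, not from the lab):

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
y = X @ np.array([[1.0], [1.0]]) + 1.0   # same plane as Data2D, noise-free
W = rng.standard_normal((2, 1))
b = 0.0

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# per-batch losses at fixed parameters (batch size 2) vs. the exact cost
batch_losses = [mse(X[i:i+2] @ W + b, y[i:i+2]) for i in range(0, 20, 2)]
full_cost = mse(X @ W + b, y)
print(abs(sum(batch_losses) / len(batch_losses) - full_cost) < 1e-10)   # True
```

During training the parameters move after every `optimizer.step()`, so the stored per-batch losses no longer refer to a single parameter setting, which is why the plotted curve is noisy.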
<h3>Practice</h3>
Create a new <code>model1</code>. Train the model with a batch size 30 and learning rate 0.1, store the loss or total cost in a list <code>LOSS1</code>, and plot the results.
```
# Practice: create model1. Train with batch size 30 and learning rate 0.1, store the loss in LOSS1, and plot the results.
data_set = Data2D()
model1 = linear_regression(2, 1)
trainloader = DataLoader(dataset=data_set, batch_size=30)
optimizer1 = optim.SGD(model1.parameters(), lr=0.1)
LOSS1 = []
for epoch in range(epochs):
for x, y in trainloader:
yhat = model1(x)
loss = criterion(yhat, y)
LOSS1.append(loss.item())
optimizer1.zero_grad()
loss.backward()
optimizer1.step()
print("After Training: ")
Plot_2D_Plane(model1, data_set, epochs)
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
train_loader = DataLoader(dataset = data_set, batch_size = 30)
model1 = linear_regression(2, 1)
optimizer = optim.SGD(model1.parameters(), lr = 0.1)
LOSS1 = []
epochs = 100
def train_model(epochs):
for epoch in range(epochs):
for x,y in train_loader:
yhat = model1(x)
loss = criterion(yhat,y)
LOSS1.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_model(epochs)
Plot_2D_Plane(model1 , data_set)
plt.plot(LOSS1)
plt.xlabel("iterations ")
plt.ylabel("Cost/total loss ")
-->
Use the following validation data to calculate the total loss or cost for both models:
```
torch.manual_seed(2)
validation_data = Data2D()
Y = validation_data.y
X = validation_data.x
print("For model:")
totalloss=criterion(model(X),Y)
print(totalloss)
print("For model1:")
totalloss=criterion(model1(X),Y)
print(totalloss)
```
Double-click <b>here</b> for the solution.
<!-- Your answer is below:
print("total loss or cost for model: ",criterion(model(X),Y))
print("total loss or cost for model1: ",criterion(model1(X),Y))
-->
<!--Empty Space for separating topics-->
<a href="http://cocl.us/pytorch_link_bottom">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
| github_jupyter |
# Configuring pandas
```
# import numpy and pandas
import numpy as np
import pandas as pd
# used for dates
import datetime
from datetime import datetime, date
# Set some pandas options controlling output format
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 8)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 90)
# bring in matplotlib for graphics
import matplotlib.pyplot as plt
%matplotlib inline
# view the first five lines of data/msft.csv
!head -n 5 data/msft.csv # mac or Linux
# type data/msft.csv # on windows, but shows the entire file
```
# Reading a CSV into a DataFrame
```
# read in msft.csv into a DataFrame
msft = pd.read_csv("data/msft.csv")
msft[:5]
```
# Specifying the index column when reading a CSV file
```
# use column 0 as the index
msft = pd.read_csv("data/msft.csv", index_col=0)
msft[:5]
```
# Data type inference and specification
```
# examine the types of the columns in this DataFrame
msft.dtypes
# specify that the Volume column should be a float64
msft = pd.read_csv("data/msft.csv",
dtype = { 'Volume' : np.float64})
msft.dtypes
```
# Specifying column names
```
# specify a new set of names for the columns
# all lower case, remove space in Adj Close
# also, header=0 treats row 0 as the header, to be replaced by these names
df = pd.read_csv("data/msft.csv",
header=0,
names=['date', 'open', 'high', 'low',
'close', 'volume'])
df[:5]
```
# Specifying specific columns to load
```
# read in data only in the Date and Close columns
# and index by the Date column
df2 = pd.read_csv("data/msft.csv",
usecols=['Date', 'Close'],
index_col=['Date'])
df2[:5]
```
# Saving a DataFrame to a CSV
```
# save df2 to a new csv file
# also specify naming the index as date
df2.to_csv("data/msft_modified.csv", index_label='date')
# view the start of the file just saved
!head -n 5 data/msft_modified.csv
#type data/msft_modified.csv # windows
```
# General field-delimited data
```
# use read_table with sep=',' to read a CSV
df = pd.read_table("data/msft.csv", sep=',')
df[:5]
# save as pipe delimited
df.to_csv("data/msft_piped.txt", sep='|')
# check that it worked
!head -n 5 data/msft_piped.txt # osx or Linux
# type data/msft_piped.txt # on windows
```
# Handling variants of formats in field-delimited data
```
# messy file
!head -n 6 data/msft2.csv # osx or Linux
# type data/msft2.csv # windows
# read, but skip rows 0, 2 and 3
df = pd.read_csv("data/msft2.csv", skiprows=[0, 2, 3])
df[:5]
# another messy file, with the mess at the end
!cat data/msft_with_footer.csv # osx or Linux
# type data/msft_with_footer.csv # windows
# skip only two lines at the end
df = pd.read_csv("data/msft_with_footer.csv",
skipfooter=2,
engine = 'python')
df
# only process the first three rows
pd.read_csv("data/msft.csv", nrows=3)
# skip 100 lines, then only process the next five
pd.read_csv("data/msft.csv", skiprows=100, nrows=5,
header=0,
names=['date', 'open', 'high', 'low',
'close', 'vol'])
```
# Reading and writing data in Excel format
```
# read excel file
# only reads first sheet (msft in this case)
df = pd.read_excel("data/stocks.xlsx")
df[:5]
# read from the aapl worksheet
aapl = pd.read_excel("data/stocks.xlsx", sheet_name='aapl')
aapl[:5]
# save to an .XLS file, in worksheet 'Sheet1'
df.to_excel("data/stocks2.xls")
# write making the worksheet name MSFT
df.to_excel("data/stocks_msft.xls", sheet_name='MSFT')
# write multiple sheets
# requires use of the ExcelWriter class
from pandas import ExcelWriter
with ExcelWriter("data/all_stocks.xls") as writer:
aapl.to_excel(writer, sheet_name='AAPL')
df.to_excel(writer, sheet_name='MSFT')
# write to xlsx
df.to_excel("data/msft2.xlsx")
```
# Reading and writing JSON files
```
# write the Excel data to a JSON file
df[:5].to_json("data/stocks.json")
!cat data/stocks.json # osx or Linux
#type data/stocks.json # windows
# read data in from JSON
df_from_json = pd.read_json("data/stocks.json")
df_from_json[:5]
# the URL to read
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
# read it
banks = pd.read_html(url)
# examine a subset of the first table read
banks[0][0:5].iloc[:,0:2]
# read the stock data
df = pd.read_excel("data/stocks.xlsx")
# write the first two rows to HTML
df.head(2).to_html("data/stocks.html")
# check the first 10 lines of the output
!head -n 10 data/stocks.html # mac or Linux
# type data/stocks.html # window, but prints the entire file
```
# Reading and writing HDF5 format files
```
# seed for replication
np.random.seed(123456)
# create a DataFrame of dates and random numbers in three columns
df = pd.DataFrame(np.random.randn(8, 3),
index=pd.date_range('1/1/2000', periods=8),
columns=['A', 'B', 'C'])
# create HDF5 store
store = pd.HDFStore('data/store.h5')
store['df'] = df # persisting happened here
store
# read in data from HDF5
store = pd.HDFStore("data/store.h5")
df = store['df']
df[:5]
# this changes the DataFrame, but did not persist
df.loc[df.index[0], 'A'] = 1
# to persist the change, assign the DataFrame to the
# HDF5 store object
store['df'] = df
# it is now persisted
# the following loads the store and
# shows the first five rows, demonstrating
# that the persisting was done
pd.HDFStore("data/store.h5")['df'][:5] # it's now in there
```
# Accessing data on the web and in the cloud
```
# read csv directly from Google Finance via a URL
msft_hist = pd.read_csv(
"http://www.google.com/finance/historical?" +
"q=NASDAQ:MSFT&startdate=Apr+01%2C+2017&" +
"enddate=Apr+30%2C+2017&output=csv")
msft_hist[:5]
```
# Reading and writing from/to SQL databases
```
# reference SQLite
import sqlite3
# read in the stock data from CSV
msft = pd.read_csv("data/msft.csv")
msft["Symbol"]="MSFT"
aapl = pd.read_csv("data/aapl.csv")
aapl["Symbol"]="AAPL"
# create connection
connection = sqlite3.connect("data/stocks.sqlite")
# .to_sql() will create SQL to store the DataFrame
# in the specified table. if_exists specifies
# what to do if the table already exists
msft.to_sql("STOCK_DATA", connection, if_exists="replace")
aapl.to_sql("STOCK_DATA", connection, if_exists="append")
# commit the SQL and close the connection
connection.commit()
connection.close()
# connect to the database file
connection = sqlite3.connect("data/stocks.sqlite")
# query all records in STOCK_DATA
# returns a DataFrame
# index_col specifies which column to make the DataFrame index
stocks = pd.io.sql.read_sql("SELECT * FROM STOCK_DATA;",
connection, index_col='index')
# close the connection
connection.close()
# report the head of the data retrieved
stocks[:5]
# open the connection
connection = sqlite3.connect("data/stocks.sqlite")
# construct the query string
query = "SELECT * FROM STOCK_DATA WHERE " + \
"Volume>29200100 AND Symbol='MSFT';"
# execute and close connection
items = pd.io.sql.read_sql(query, connection, index_col='index')
connection.close()
# report the query result
items
```
# Reading stock data from Google Finance
```
# import data reader package
import pandas_datareader as pdr
# read from google and display the head of the data
start = datetime(2017, 4, 1)
end = datetime(2017, 4, 30)
goog = pdr.data.DataReader("MSFT", 'google', start, end)
goog[:5]
```
# Retrieving options data from Google Finance
```
# read options for MSFT
options = pdr.data.Options('MSFT', 'google')
options.expiry_dates
data = options.get_options_data(expiry=options.expiry_dates[0])
data.iloc[:5,:3]
# get all puts at strike price of $30 (first four columns only)
data.loc[(30, slice(None), 'put'), :].iloc[0:5, 0:3]
# put options at strike of $30, expiring between 2018-01-19 and 2018-01-30
data.loc[(30, slice('20180119','20180130'), 'put'), :] \
.iloc[:, 0:3]
```
# Reading economic data from the Federal Reserve Bank of St. Louis
```
# read GDP data from FRED
gdp = pdr.data.FredReader("GDP",
date(2012, 1, 1),
date(2014, 1, 27))
gdp.read()[:5]
# Get Compensation of employees: Wages and salaries
pdr.data.FredReader("A576RC1A027NBEA",
date(1929, 1, 1),
date(2013, 1, 1)).read()[:5]
```
# Accessing Kenneth French data
```
# read from Kenneth French fama global factors data set
factors = pdr.data.FamaFrenchReader("Global_Factors").read()
factors[0][:5]
```
# Reading from the World Bank
```
# get all indicators
from pandas_datareader import wb
all_indicators = pdr.wb.get_indicators()
all_indicators.iloc[:5,:2]
# search for life expectancy indicators
le_indicators = pdr.wb.search("life expectancy")
# report first five rows, first two columns
le_indicators.iloc[:5,:2]
# get countries and show the 3 digit code and name
countries = pdr.wb.get_countries()
# show a subset of the country data
countries.loc[0:5,['name', 'capitalCity', 'iso2c']]
# get life expectancy at birth for all countries from 1980 to 2014
le_data_all = pdr.wb.download(indicator="SP.DYN.LE00.IN",
start='1980',
end='2014')
le_data_all
# only US, CAN, and MEX are returned by default
le_data_all.index.levels[0]
# retrieve life expectancy at birth for all countries
# from 1980 to 2012
le_data_all = wb.download(indicator="SP.DYN.LE00.IN",
country = countries['iso2c'],
start='1980',
end='2012')
le_data_all
#le_data_all.pivot(index='country', columns='year')
le_data = le_data_all.reset_index().pivot(index='country',
columns='year')
# examine pivoted data
le_data.iloc[:5,0:3]
# ask what is the name of country for each year
# with the least life expectancy
country_with_least_expectancy = le_data.idxmin(axis=0)
country_with_least_expectancy[:5]
# and what is the minimum life expectancy for each year
expectancy_for_least_country = le_data.min(axis=0)
expectancy_for_least_country[:5]
# this merges the two results together and gives us
# year, country and expectancy where the minimum exists
least = pd.DataFrame(
data = {'Country': country_with_least_expectancy.values,
'Expectancy': expectancy_for_least_country.values},
index = country_with_least_expectancy.index.levels[1])
least[:5]
```
| github_jupyter |
Synergetics<br/>[Oregon Curriculum Network](http://4dsolutions.net/ocn/)
<h3 align="center">Computing Volumes in XYZ and IVM units</h3>
<h4 align="center">by Kirby Urner, July 2016</h4>

A cube is composed of 24 identical not-regular tetrahedrons, each with a corner at the cube's center, an edge from cube's center to a face center, and two more to adjacent cube corners on that face, defining six edges in all (Fig. 1).
If we define the cube's edges to be √2 then the whole cube would have volume √2 * √2 * √2 in XYZ units.
However, in IVM units, the very same cube has a volume of 3, owing to the differently-shaped volume unit, a tetrahedron of edges 2, inscribed in this same cube. [Fig. 986.210](http://www.rwgrayprojects.com/synergetics/findex/fx0900.html) from *Synergetics*:

Those lengths would be in R-units, where R is the radius of a unit sphere. In D-units, twice as long (D = 2R), the tetrahedron has edges 1 and the cube has edges √2/2.
By XYZ we mean the XYZ coordinate system of René Descartes (1596–1650).
By IVM we mean the "octet-truss", a space-frame consisting of tetrahedrons and octahedrons in a space-filling matrix, with twice as many tetrahedrons as octahedrons.

The tetrahedron and octahedron have relative volumes of 1:4. The question then becomes, how to superimpose the two.
The canonical solution is to start with unit-radius balls (spheres) of radius R. R = 1 in other words, whereas D, the diameter, is 2. Alternatively, we may set D = 1 and R = 0.5, keeping the same 2:1 ratio for D:R.
The XYZ cube has edges R, whereas the IVM tetrahedron has edges D. That relative sizing convention brings their respective volumes fairly close together, with the cube's volume exceeding the tetrahedron's by about six percent.
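The 1:4 tetrahedron:octahedron ratio stated above can be checked numerically with the classical Euclidean volume formulas for unit edge length (a sketch of mine, not from the original notebook):

```
from math import sqrt

tetra = sqrt(2) / 12   # Euclidean volume, regular tetrahedron, edge 1
octa  = sqrt(2) / 3    # Euclidean volume, regular octahedron, edge 1
print(octa / tetra)    # -> 4.0, i.e. octahedron 4, tetrahedron 1 in IVM units
```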
```
import math
xyz_volume = math.sqrt(2)**3
ivm_volume = 3
print("XYZ units:", xyz_volume)
print("IVM units:", ivm_volume)
print("Conversion constant:", ivm_volume/xyz_volume)
```
The Python code below encodes a Tetrahedron type based solely on its six edge lengths. The code makes no attempt to determine the consequent angles.
A complicated volume formula, mined from the history books and streamlined by mathematician Gerald de Jong, outputs the volume of said tetrahedron in both IVM and XYZ units.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/45589318711/in/dateposted-public/" title="dejong"><img src="https://farm2.staticflickr.com/1935/45589318711_677d272397.jpg" width="417" height="136" alt="dejong"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The [unittests](http://pythontesting.net/framework/unittest/unittest-introduction/) that follow assure it's producing the expected results. The formula bears great resemblance to the one by [Piero della Francesca](https://mathpages.com/home/kmath424/kmath424.htm).
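As an independent cross-check on the de Jong formula (my sketch, not part of the original notebook), the classical Cayley–Menger determinant gives the Euclidean (XYZ) volume from the same six edges, using the class's (a,b,d)(b,c,e)(c,a,f)(d,e,f) labeling:

```
import numpy as np

def cayley_menger_volume(a, b, c, d, e, f):
    """Euclidean volume from six edges with faces (a,b,d)(b,c,e)(c,a,f)(d,e,f)."""
    A, B, C, D, E, F = (x * x for x in (a, b, c, d, e, f))
    # bordered matrix of squared distances; 288 * V**2 = det(M)
    M = np.array([
        [0, 1, 1, 1, 1],
        [1, 0, A, B, C],
        [1, A, 0, D, F],
        [1, B, D, 0, E],
        [1, C, F, E, 0],
    ], dtype=float)
    return np.sqrt(np.linalg.det(M) / 288)

print(cayley_menger_volume(1, 1, 1, 1, 1, 1))   # -> 0.11785... = 1/(6*sqrt(2))
```

For a unit-edge regular tetrahedron this returns 1/(6√2) ≈ 0.117851, the same number `test_unit_volume2` below expects from edges R = 0.5 under the class's R-based calibration.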
```
from math import sqrt as rt2
from qrays import Qvector, Vector
R =0.5
D =1.0
S3 = pow(9/8, 0.5)
root2 = rt2(2)
root3 = rt2(3)
root5 = rt2(5)
root6 = rt2(6)
PHI = (1 + root5)/2.0
class Tetrahedron:
"""
Takes six edges of tetrahedron with faces
(a,b,d)(b,c,e)(c,a,f)(d,e,f) -- returns volume
in ivm and xyz units
"""
def __init__(self, a,b,c,d,e,f):
self.a, self.a2 = a, a**2
self.b, self.b2 = b, b**2
self.c, self.c2 = c, c**2
self.d, self.d2 = d, d**2
self.e, self.e2 = e, e**2
self.f, self.f2 = f, f**2
def ivm_volume(self):
ivmvol = ((self._addopen() - self._addclosed() - self._addopposite())/2) ** 0.5
return ivmvol
def xyz_volume(self):
xyzvol = rt2(8/9) * self.ivm_volume()
return xyzvol
def _addopen(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = f2*a2*b2
sumval += d2 * a2 * c2
sumval += a2 * b2 * e2
sumval += c2 * b2 * d2
sumval += e2 * c2 * a2
sumval += f2 * c2 * b2
sumval += e2 * d2 * a2
sumval += b2 * d2 * f2
sumval += b2 * e2 * f2
sumval += d2 * e2 * c2
sumval += a2 * f2 * e2
sumval += d2 * f2 * c2
return sumval
def _addclosed(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = a2 * b2 * d2
sumval += d2 * e2 * f2
sumval += b2 * c2 * e2
sumval += a2 * c2 * f2
return sumval
def _addopposite(self):
a2,b2,c2,d2,e2,f2 = self.a2, self.b2, self.c2, self.d2, self.e2, self.f2
sumval = a2 * e2 * (a2 + e2)
sumval += b2 * f2 * (b2 + f2)
sumval += c2 * d2 * (c2 + d2)
return sumval
def make_tet(v0,v1,v2):
"""
three edges from any corner, remaining three edges computed
"""
tet = Tetrahedron(v0.length(), v1.length(), v2.length(),
(v0-v1).length(), (v1-v2).length(), (v2-v0).length())
return tet.ivm_volume(), tet.xyz_volume()
tet = Tetrahedron(D, D, D, D, D, D)
print(tet.ivm_volume())
```
The ```make_tet``` function takes three vectors from a common corner, in terms of vectors with coordinates, and computes the remaining missing lengths, thereby getting the information it needs to use the Tetrahedron class as before.
```
import unittest
from qrays import Vector, Qvector
class Test_Tetrahedron(unittest.TestCase):
def test_unit_volume(self):
tet = Tetrahedron(D, D, D, D, D, D)
self.assertEqual(tet.ivm_volume(), 1, "Volume not 1")
def test_e_module(self):
e0 = D
e1 = root3 * PHI**-1
e2 = rt2((5 - root5)/2)
e3 = (3 - root5)/2
e4 = rt2(5 - 2*root5)
e5 = 1/PHI
tet = Tetrahedron(e0, e1, e2, e3, e4, e5)
self.assertTrue(1/23 > tet.ivm_volume()/8 > 1/24, "Wrong E-mod")
def test_unit_volume2(self):
tet = Tetrahedron(R, R, R, R, R, R)
self.assertAlmostEqual(float(tet.xyz_volume()), 0.117851130)
def test_phi_edge_tetra(self):
tet = Tetrahedron(D, D, D, D, D, PHI)
self.assertAlmostEqual(float(tet.ivm_volume()), 0.70710678)
def test_right_tetra(self):
e = pow((root3/2)**2 + (root3/2)**2, 0.5) # right tetrahedron
tet = Tetrahedron(D, D, D, D, D, e)
self.assertAlmostEqual(tet.xyz_volume(), 1)
def test_quadrant(self):
qA = Qvector((1,0,0,0))
qB = Qvector((0,1,0,0))
qC = Qvector((0,0,1,0))
tet = make_tet(qA, qB, qC)
self.assertAlmostEqual(tet[0], 0.25)
def test_octant(self):
x = Vector((0.5, 0, 0))
y = Vector((0 , 0.5, 0))
z = Vector((0 , 0 , 0.5))
tet = make_tet(x,y,z)
self.assertAlmostEqual(tet[1], 1/6, 5) # good to 5 places
def test_quarter_octahedron(self):
a = Vector((1,0,0))
b = Vector((0,1,0))
c = Vector((0.5,0.5,root2/2))
tet = make_tet(a, b, c)
self.assertAlmostEqual(tet[0], 1, 5) # good to 5 places
def test_xyz_cube(self):
a = Vector((0.5, 0.0, 0.0))
b = Vector((0.0, 0.5, 0.0))
c = Vector((0.0, 0.0, 0.5))
R_octa = make_tet(a,b,c)
self.assertAlmostEqual(6 * R_octa[1], 1, 4) # good to 4 places
def test_s3(self):
D_tet = Tetrahedron(D, D, D, D, D, D)
a = Vector((0.5, 0.0, 0.0))
b = Vector((0.0, 0.5, 0.0))
c = Vector((0.0, 0.0, 0.5))
R_cube = 6 * make_tet(a,b,c)[1]
self.assertAlmostEqual(D_tet.xyz_volume() * S3, R_cube, 4)
def test_martian(self):
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(5*q, 2*p, 2*r)
self.assertAlmostEqual(result[0], 20, 7)
def test_phi_tet(self):
"edges from common vertex: phi, 1/phi, 1"
p = Vector((1, 0, 0))
q = Vector((1, 0, 0)).rotz(60) * PHI
r = Vector((0.5, root3/6, root6/3)) * 1/PHI
result = make_tet(p, q, r)
self.assertAlmostEqual(result[0], 1, 7)
def test_phi_tet_2(self):
p = Qvector((2,1,0,1))
q = Qvector((2,1,1,0))
r = Qvector((2,0,1,1))
result = make_tet(PHI*q, (1/PHI)*p, r)
self.assertAlmostEqual(result[0], 1, 7)
def test_phi_tet_3(self):
T = Tetrahedron(PHI, 1/PHI, 1.0,
root2, root2/PHI, root2)
result = T.ivm_volume()
self.assertAlmostEqual(result, 1, 7)
def test_koski(self):
a = 1
b = PHI ** -1
c = PHI ** -2
d = (root2) * PHI ** -1
e = (root2) * PHI ** -2
f = (root2) * PHI ** -1
T = Tetrahedron(a,b,c,d,e,f)
result = T.ivm_volume()
self.assertAlmostEqual(result, PHI ** -3, 7)
a = Test_Tetrahedron()
R =0.5
D =1.0
suite = unittest.TestLoader().loadTestsFromModule(a)
unittest.TextTestRunner().run(suite)
```
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/41211295565/in/album-72157624750749042/" title="Martian Multiplication"><img src="https://farm1.staticflickr.com/907/41211295565_59145e2f63.jpg" width="500" height="312" alt="Martian Multiplication"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The above tetrahedron has a=2, b=2, c=5, for a volume of 20. The remaining three lengths have not been computed as it's sufficient to know only a, b, c if the angles between them are those of the regular tetrahedron.
That's how IVM volume is computed: multiply a * b * c from a regular tetrahedron corner, then "close the lid" to see the volume.
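As a sanity check (my own sketch, not from the notebook), the a * b * c rule can be verified with ordinary vectors meeting at the regular tetrahedron's 60° corner angles, converting Euclidean volume to IVM tetravolume with the factor 6√2 for D-unit lengths:

```
import numpy as np

def ivm_from_corner(a, b, c):
    # three vectors at mutual 60-degree angles (a regular-tetrahedron corner)
    u = np.array([a, 0.0, 0.0])
    v = b * np.array([0.5, np.sqrt(3) / 2, 0.0])
    w = c * np.array([0.5, 1 / (2 * np.sqrt(3)), np.sqrt(2 / 3)])
    euclid = abs(np.linalg.det(np.stack([u, v, w]))) / 6
    return euclid * 6 * np.sqrt(2)   # D-unit Euclidean volume -> IVM tetravolume

print(round(ivm_from_corner(2, 2, 5), 6))   # -> 20.0, the "close the lid" product
```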
```
a = 2
b = 4
c = 5
d = 3.4641016151377544
e = 4.58257569495584
f = 4.358898943540673
tetra = Tetrahedron(a,b,c,d,e,f)
print("IVM volume of tetra:", round(tetra.ivm_volume(),5))
```
Let's define a MITE, one of these 24 identical space-filling tetrahedrons, with reference to D=1, R=0.5, as this is how our Tetrahedron class is calibrated. The cube's 12 edges will all be √2/2.
Edges 'a' 'b' 'c' fan out from the cube center, with 'b' going up to a face center, with 'a' and 'c' to adjacent ends of the face's edge.
From the cube's center to mid-face is √2/4 (half an edge), our 'b'. 'a' and 'c' are both half the cube's body diagonal of √(3/2)/2 or √(3/8).
Edges 'd', 'e' and 'f' define the facet opposite the cube's center.
'd' and 'e' are both half face diagonals or 0.5, whereas 'f' is a cube edge, √2/2. This gives us our tetrahedron:
```
b = rt2(2)/4
a = c = rt2(3/8)
d = e = 0.5
f = rt2(2)/2
mite = Tetrahedron(a, b, c, d, e, f)
print("IVM volume of Mite:", round(mite.ivm_volume(),5))
print("XYZ volume of Mite:", round(mite.xyz_volume(),5))
```
Allowing for floating point error, this space-filling right tetrahedron has a volume of 0.125 or 1/8. Since 24 of them form a cube, said cube has a volume of 3. The XYZ volume, on the other hand, is what we'd expect from a regular tetrahedron of edges 0.5 in the current calibration system.
```
regular = Tetrahedron(0.5, 0.5, 0.5, 0.5, 0.5, 0.5)
print("MITE volume in XYZ units:", round(regular.xyz_volume(),5))
print("XYZ volume of 24-Mite Cube:", round(24 * regular.xyz_volume(),5))
```
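A quick arithmetic cross-check (mine, reusing the √(8/9) conversion from the Tetrahedron class above): 24 MITEs at 1/8 each give the cube's IVM volume of 3, and 24 copies of the converted XYZ value land back on the √2-edge cube's volume.

```
from math import sqrt, isclose

mite_ivm = 1 / 8
print(24 * mite_ivm)                         # -> 3.0, the cube's IVM volume
mite_xyz = sqrt(8 / 9) * mite_ivm            # de Jong IVM -> XYZ conversion
print(isclose(24 * mite_xyz, sqrt(2) ** 3))  # True: the cube's XYZ volume
```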
The MITE (minimum tetrahedron) further dissects into component modules, a left and right A module, then either a left or right B module. Outwardly, the positive and negative MITEs look the same. Here are some drawings from R. Buckminster Fuller's research, the chief popularizer of the A and B modules.

In a different Jupyter Notebook, we could run these tetrahedra through our volume computer to discover both As and Bs have a volume of 1/24 in IVM units.
Instead, let's take a look at the E-module and compute its volume.
<br />
The black hub is at the center of the RT, as shown here...
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24971714468/in/dateposted-public/" title="E module with origin"><img src="https://farm5.staticflickr.com/4516/24971714468_46e14ce4b5_z.jpg" width="640" height="399" alt="E module with origin"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<b>RT center is the black hub (Koski with vZome)</b>
</div>
```
from math import sqrt as rt2
from tetravolume import make_tet, Vector
ø = (rt2(5)+1)/2
e0 = Black_Yellow = rt2(3)*ø**-1
e1 = Black_Blue = 1
e3 = Yellow_Blue = (3 - rt2(5))/2
e6 = Black_Red = rt2((5 - rt2(5))/2)
e7 = Blue_Red = 1/ø
# E-mod is a right tetrahedron, so xyz is easy
v0 = Vector((Black_Blue, 0, 0))
v1 = Vector((Black_Blue, Yellow_Blue, 0))
v2 = Vector((Black_Blue, 0, Blue_Red))
# assumes R=0.5 so computed result is 8x needed
# volume, ergo divide by 8.
ivm, xyz = make_tet(v0,v1,v2)
print("IVM volume:", round(ivm/8, 5))
print("XYZ volume:", round(xyz/8, 5))
```
This information is being shared around Portland in various contexts. Below is an image from a hands-on workshop organized by the Portland Free School in 2010.

| github_jupyter |
```
!pip install yacs
!pip install gdown
import os, sys, time
import argparse
import importlib
from tqdm.notebook import tqdm
from imageio import imread
import torch
import numpy as np
import matplotlib.pyplot as plt
```
### Download pretrained
- We use HoHoNet w/ ResNet-34 encoder in this demo
- Download other versions [here](https://drive.google.com/drive/folders/1raT3vRXnQXRAQuYq36dE-93xFc_hgkTQ?usp=sharing)
```
PRETRAINED_PTH = 'ckpt/mp3d_layout_HOHO_layout_aug_efficienthc_Transen1_resnet34/ep300.pth'
if not os.path.exists(PRETRAINED_PTH):
os.makedirs(os.path.split(PRETRAINED_PTH)[0], exist_ok=True)
!gdown 'https://drive.google.com/uc?id=1OU9uyuNiswkPovJuvG3sevm3LqHJgazJ' -O $PRETRAINED_PTH
```
### Download image
- We use an out-of-distribution image from PanoContext
```
if not os.path.exists('assets/pano_asmasuxybohhcj.png'):
!gdown 'https://drive.google.com/uc?id=1CXl6RPK6yPRFXxsa5OisHV9KwyRcejHu' -O 'assets/pano_asmasuxybohhcj.png'
rgb = imread('assets/pano_asmasuxybohhcj.png')
plt.imshow(rgb)
plt.show()
```
### Load model config
- We use HoHoNet w/ ResNet-34 encoder in this demo
- Find other versions in `mp3d_depth/` and `s2d3d_depth/`
```
from lib.config import config
config.defrost()
config.merge_from_file('config/mp3d_layout/HOHO_layout_aug_efficienthc_Transen1_resnet34.yaml')
config.freeze()
```
### Load model
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('device:', device)
model_file = importlib.import_module(config.model.file)
model_class = getattr(model_file, config.model.modelclass)
net = model_class(**config.model.kwargs)
net.load_state_dict(torch.load(PRETRAINED_PTH, map_location=device))
net = net.eval().to(device)
```
### Move image into a tensor, normalize to [0, 1], resize to 512x1024
```
x = torch.from_numpy(rgb).permute(2,0,1)[None].float() / 255.
if x.shape[2:] != (512, 1024):
    x = torch.nn.functional.interpolate(x, size=(512, 1024), mode='area')
x = x.to(device)
```
### Model feedforward
```
with torch.no_grad():
ts = time.time()
layout = net.infer(x)
if torch.cuda.is_available():
torch.cuda.synchronize()
print(f'Eps time: {time.time() - ts:.2f} sec.')
cor_id = layout['cor_id']
y_bon_ = layout['y_bon_']
y_cor_ = layout['y_cor_']
```
### Visualize result in 2d
```
from eval_layout import layout_2_depth
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(np.concatenate([
(y_cor_ * 255).reshape(1,-1,1).repeat(30, 0).repeat(3, 2).astype(np.uint8),
rgb[30:]
], 0))
plt.plot(np.arange(y_bon_.shape[1]), y_bon_[0], 'r-')
plt.plot(np.arange(y_bon_.shape[1]), y_bon_[1], 'r-')
plt.scatter(cor_id[:, 0], cor_id[:, 1], marker='x', c='b')
plt.axis('off')
plt.title('y_bon_ (red) / y_cor_ (up-most bar) / cor_id (blue x)')
plt.subplot(122)
plt.imshow(layout_2_depth(cor_id, *rgb.shape[:2]), cmap='inferno_r')
plt.axis('off')
plt.title('rendered depth from the estimated layout (cor_id)')
plt.show()
```
### Visualize result as 3d mesh
```
!pip install open3d
!pip install plotly
import open3d as o3d
import plotly.graph_objects as go
from scipy.signal import correlate2d
from scipy.ndimage import shift
from skimage.transform import resize
from lib.misc.post_proc import np_coor2xy, np_coorx2u, np_coory2v
H, W = 256, 512
ignore_floor = False
ignore_ceiling = True
ignore_wall = False
# Convert corners to layout
depth, floor_mask, ceil_mask, wall_mask = [
resize(v, [H, W], order=0, preserve_range=True).astype(v.dtype)
for v in layout_2_depth(cor_id, *rgb.shape[:2], return_mask=True)]
coorx, coory = np.meshgrid(np.arange(W), np.arange(H))
us = np_coorx2u(coorx, W)
vs = np_coory2v(coory, H)
zs = depth * np.sin(vs)
cs = depth * np.cos(vs)
xs = cs * np.sin(us)
ys = -cs * np.cos(us)
# Aggregate mask
mask = np.ones_like(floor_mask)
if ignore_floor:
mask &= ~floor_mask
if ignore_ceiling:
mask &= ~ceil_mask
if ignore_wall:
mask &= ~wall_mask
# Prepare ply's points and faces
xyzrgb = np.concatenate([
xs[...,None], ys[...,None], zs[...,None],
resize(rgb, [H, W])], -1)
xyzrgb = np.concatenate([xyzrgb, xyzrgb[:,[0]]], 1)
mask = np.concatenate([mask, mask[:,[0]]], 1)
lo_tri_template = np.array([
[0, 0, 0],
[0, 1, 0],
[0, 1, 1]])
up_tri_template = np.array([
[0, 0, 0],
[0, 1, 1],
[0, 0, 1]])
ma_tri_template = np.array([
[0, 0, 0],
[0, 1, 1],
[0, 1, 0]])
lo_mask = (correlate2d(mask, lo_tri_template, mode='same') == 3)
up_mask = (correlate2d(mask, up_tri_template, mode='same') == 3)
ma_mask = (correlate2d(mask, ma_tri_template, mode='same') == 3) & (~lo_mask) & (~up_mask)
ref_mask = (
lo_mask | (correlate2d(lo_mask, np.flip(lo_tri_template, (0,1)), mode='same') > 0) |\
up_mask | (correlate2d(up_mask, np.flip(up_tri_template, (0,1)), mode='same') > 0) |\
ma_mask | (correlate2d(ma_mask, np.flip(ma_tri_template, (0,1)), mode='same') > 0)
)
points = xyzrgb[ref_mask]
ref_id = np.full(ref_mask.shape, -1, np.int32)
ref_id[ref_mask] = np.arange(ref_mask.sum())
faces_lo_tri = np.stack([
ref_id[lo_mask],
ref_id[shift(lo_mask, [1, 0], cval=False, order=0)],
ref_id[shift(lo_mask, [1, 1], cval=False, order=0)],
], 1)
faces_up_tri = np.stack([
ref_id[up_mask],
ref_id[shift(up_mask, [1, 1], cval=False, order=0)],
ref_id[shift(up_mask, [0, 1], cval=False, order=0)],
], 1)
faces_ma_tri = np.stack([
ref_id[ma_mask],
ref_id[shift(ma_mask, [1, 0], cval=False, order=0)],
ref_id[shift(ma_mask, [0, 1], cval=False, order=0)],
], 1)
faces = np.concatenate([faces_lo_tri, faces_up_tri, faces_ma_tri])
fig = go.Figure(
data=[
go.Mesh3d(
x=points[:,0],
y=points[:,1],
z=points[:,2],
i=faces[:,0],
j=faces[:,1],
k=faces[:,2],
facecolor=points[:,3:][faces[:,0]])
],
layout=dict(
scene=dict(
xaxis=dict(visible=False),
yaxis=dict(visible=False),
zaxis=dict(visible=False)
)
)
)
fig.show()
```
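The meshing cell above hinges on the equirectangular unprojection done by `np_coorx2u`/`np_coory2v` and the sin/cos lines that follow them. A minimal pure-Python sketch of that mapping (the exact pixel-center conventions here are an assumption; the library helpers remain authoritative):

```python
from math import pi, sin, cos

def coorx2u(x, w):
    # pixel column -> longitude u in [-pi, pi)
    return ((x + 0.5) / w - 0.5) * 2 * pi

def coory2v(y, h):
    # pixel row -> latitude v in [-pi/2, pi/2], top rows looking up
    return -((y + 0.5) / h - 0.5) * pi

def unproject(x, y, depth, w, h):
    # same decomposition as the notebook: z along up, c the horizontal radius
    u, v = coorx2u(x, w), coory2v(y, h)
    z = depth * sin(v)
    c = depth * cos(v)
    return c * sin(u), -c * cos(u), z

# sanity check: a unit-depth pixel lands on the unit sphere
px, py, pz = unproject(100, 40, 1.0, 512, 256)
print(round((px * px + py * py + pz * pz) ** 0.5, 6))  # 1.0
```

Since x² + y² + z² = depth² by construction, every recovered point sits at exactly its depth from the camera, which is what makes the rendered depth map and the 3D mesh consistent.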
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("test2_result.csv")
df
df2 = pd.read_excel("Test_2.xlsx")
# Complete dataset containing only the feature values
data = df2.drop("TRUE VALUE", axis=1)
# Complete dataset containing only the true class information
labels = df2["TRUE VALUE"]
# data2 is the dataset with the true class information removed (it contains the clustering results)
data2 = df.drop("TRUE VALUE", axis=1)
data2
# Inspect the class labels produced by k-means clustering: two classes
data2['km_clustering_label'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
# Stratified sampling based on the k-means clustering results
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(data2, data2["km_clustering_label"]):
strat_train_set = data2.loc[train_index]
strat_test_set = data2.loc[test_index]
def clustering_result_propotions(data):
    """
    Ratio of each class label in the training or test set after sampling
    :param data: training or test set, from pure random or stratified sampling
    """
return data["km_clustering_label"].value_counts() / len(data)
# Ratio of each class label in the stratified test set
clustering_result_propotions(strat_test_set)
# Ratio of each class label in the stratified training set
clustering_result_propotions(strat_train_set)
# Ratio of each class label in the complete dataset
clustering_result_propotions(data2)
from sklearn.model_selection import train_test_split
# Pure random sampling
random_train_set, random_test_set = train_test_split(data2, test_size=0.2, random_state=42)
# Class-label ratios in the complete dataset, the stratified test set, and the random test set
compare_props = pd.DataFrame({
"Overall": clustering_result_propotions(data2),
"Stratified": clustering_result_propotions(strat_test_set),
"Random": clustering_result_propotions(random_test_set),
}).sort_index()
# Percent error of the class-label ratios in the stratified and random test sets, relative to the complete dataset
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
compare_props
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
def get_classification_marks(model, data, labels, train_index, test_index):
    """
    Get the score (F1 value) of a classification model (a binary or multiclass classifier)
    :param data: dataset containing only the feature values
    :param labels: dataset containing only the label values
    :param train_index: indices of the training-set rows from stratified sampling
    :param test_index: indices of the test-set rows from stratified sampling
    :return: F1 score
    """
m = model(random_state=42)
m.fit(data.loc[train_index], labels.loc[train_index])
test_labels_predict = m.predict(data.loc[test_index])
score = f1_score(labels.loc[test_index], test_labels_predict, average="weighted")
return score
# F1 score of a classifier trained on the stratified-sampled training set
strat_marks = get_classification_marks(LogisticRegression, data, labels, strat_train_set.index, strat_test_set.index)
strat_marks
# F1 score of a classifier trained on the random-sampled training set
random_marks = get_classification_marks(LogisticRegression, data, labels, random_train_set.index, random_test_set.index)
random_marks
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone, BaseEstimator, TransformerMixin
class stratified_cross_val_score(BaseEstimator, TransformerMixin):
    """Implements k-fold cross-validation based on stratified sampling"""
def __init__(self, model, data, labels, random_state=0, cv=5):
        """
        :model: the model to train (regression or classification)
        :data: complete dataset containing only the feature values
        :labels: complete dataset containing only the label values
        :random_state: random seed for the model
        :cv: number of cross-validation folds
        """
self.model = model
self.data = data
self.labels = labels
self.random_state = random_state
self.cv = cv
        self.score = [] # stores the model score on each fold's test set
self.i = 0
def fit(self, X, y):
        """
        :param X: complete dataset containing the feature values and the clustering results
        :param y: complete dataset containing the clustering results
        """
        skfolds = StratifiedKFold(n_splits=self.cv, shuffle=True, random_state=self.random_state)  # random_state requires shuffle=True in recent scikit-learn
for train_index, test_index in skfolds.split(X, y):
            # Clone the model to be trained (classification or regression)
clone_model = clone(self.model)
strat_X_train_folds = self.data.loc[train_index]
strat_y_train_folds = self.labels.loc[train_index]
strat_X_test_fold = self.data.loc[test_index]
strat_y_test_fold = self.labels.loc[test_index]
            # Train the model
clone_model.fit(strat_X_train_folds, strat_y_train_folds)
            # Predictions (here, the class labels output by the classification model)
test_labels_pred = clone_model.predict(strat_X_test_fold)
            # F1 is used here since this is a classification model; for a regression model, swap in an appropriate metric
            score_fold = f1_score(strat_y_test_fold, test_labels_pred, average="weighted")
            # Avoid appending duplicate values to the score list on repeated calls
if self.i < self.cv:
self.score.append(score_fold)
else:
None
self.i += 1
def transform(self, X, y=None):
return self
def mean(self):
        """Return the mean of the cross-validation scores"""
return np.array(self.score).mean()
def std(self):
        """Return the standard deviation of the cross-validation scores"""
return np.array(self.score).std()
from sklearn.linear_model import SGDClassifier
# Classification model
clf_model = SGDClassifier(max_iter=5, tol=-np.inf, random_state=42)
# Cross-validation based on stratified sampling; data holds only the feature values, labels only the label values
clf_cross_val = stratified_cross_val_score(clf_model, data, labels, cv=5, random_state=42)
# data2 is the complete dataset containing the feature values and the clustering results
clf_cross_val.fit(data2, data2["km_clustering_label"])
# Score on each fold of the cross-validation
clf_cross_val.score
# Mean of the cross-validation scores
clf_cross_val.mean()
# Standard deviation of the cross-validation scores
clf_cross_val.std()
```
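The point of the comparison above, that stratified sampling preserves the label proportions a pure random split only approximates, can be illustrated with a small scikit-learn-free sketch (the `stratified_split` helper is hypothetical, written just for this illustration):

```python
import random
from collections import Counter

def stratified_split(labels, test_ratio, seed=42):
    # group row indices by label, then take test_ratio of each group
    rng = random.Random(seed)
    by_label = {}
    for idx, lab in enumerate(labels):
        by_label.setdefault(lab, []).append(idx)
    test_idx = []
    for idxs in by_label.values():
        idxs = idxs[:]
        rng.shuffle(idxs)
        test_idx.extend(idxs[:round(len(idxs) * test_ratio)])
    taken = set(test_idx)
    train_idx = [i for i in range(len(labels)) if i not in taken]
    return train_idx, test_idx

labels = [0] * 80 + [1] * 20  # 80/20 class balance
train, test = stratified_split(labels, 0.2)
print(sorted(Counter(labels[i] for i in test).items()))  # [(0, 16), (1, 4)]
```

With an 80/20 class balance and a 0.2 test ratio, the test split always carries exactly 16 and 4 rows of the two classes, whereas a pure random split would only match those counts on average.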
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Datasets/Water/usgs_watersheds.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Datasets/Water/usgs_watersheds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC02')
styleParams = {
'fillColor': '000070',
'color': '0000be',
'width': 3.0,
}
regions = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 4)
Map.addLayer(regions, {}, 'USGS/WBD/2017/HUC02')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC04')
styleParams = {
'fillColor': '5885E3',
'color': '0000be',
'width': 3.0,
}
subregions = dataset.style(**styleParams)
Map.setCenter(-110.904, 36.677, 7)
Map.addLayer(subregions, {}, 'USGS/WBD/2017/HUC04')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC06')
styleParams = {
'fillColor': '588593',
'color': '587193',
'width': 3.0,
}
basins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 7)
Map.addLayer(basins, {}, 'USGS/WBD/2017/HUC06')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC08')
styleParams = {
'fillColor': '2E8593',
'color': '587193',
'width': 2.0,
}
subbasins = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 8)
Map.addLayer(subbasins, {}, 'USGS/WBD/2017/HUC08')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC10')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 1.0,
}
watersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 9)
Map.addLayer(watersheds, {}, 'USGS/WBD/2017/HUC10')
dataset = ee.FeatureCollection('USGS/WBD/2017/HUC12')
styleParams = {
'fillColor': '2E85BB',
'color': '2E5D7E',
'width': 0.1,
}
subwatersheds = dataset.style(**styleParams)
Map.setCenter(-96.8, 40.43, 10)
Map.addLayer(subwatersheds, {}, 'USGS/WBD/2017/HUC12')
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
<img align="right" src="images/ninologo.png" width="150"/>
<img align="right" src="images/tf-small.png" width="125"/>
<img align="right" src="images/dans.png" width="150"/>
# Start
This notebook gets you started with using
[Text-Fabric](https://github.com/Nino-cunei/uruk/blob/master/docs/textfabric.md) for coding in cuneiform tablet transcriptions.
Familiarity with the underlying
[data model](https://annotation.github.io/text-fabric/tf/about/datamodel.html)
is recommended.
For provenance, see the documentation:
[about](https://github.com/Nino-cunei/uruk/blob/master/docs/about.md).
## Overview
* we tell you how to get Text-Fabric on your system;
* we tell you how to get the Uruk IV-III corpus on your system.
## Installing Text-Fabric
See [here](https://annotation.github.io/text-fabric/tf/about/install.html)
### Get the data
Text-Fabric will get the data for you and store it on your system.
If you have cloned the github repo with the data,
[Nino-cunei/uruk](https://github.com/Nino-cunei/uruk),
your data is already in place, and nothing will be downloaded.
Otherwise, on first run, Text-Fabric will load the data and store it in the folder
`text-fabric-data` in your home directory.
This only happens if the data is not already there.
Not only transcription data will be downloaded, but also lineart and photos.
These images are contained in a zip file of 550 MB,
so make sure you have a good internet connection when the images are downloaded.
## Start the engines
Navigate to this directory in a terminal and say
```
jupyter notebook
```
(just literally).
Your browser opens with a directory view, and you'll see `start.ipynb`.
Click on it. A new browser tab opens, and a Python engine has been allocated to this
notebook.
Now we are ready to compute.
The next cell is a code cell that can be executed if you have downloaded this
notebook and have issued the `jupyter notebook` command.
You execute a code cell by placing the cursor in it and pressing `Shift+Enter`.
### The code
```
%load_ext autoreload
%autoreload 2
import sys, os
from tf.app import use
```
View the next cell as an *incantation*.
You just have to say it to get things underway.
For the very latest version, use `hot`.
For the latest release, use `latest`.
If you have cloned the repos (TF app and data), use `clone`.
If you do not want/need to upgrade, leave out the checkout specifiers.
```
A = use("uruk:clone", checkout="clone", hoist=globals())
# A = use('uruk:hot', checkout="hot", hoist=globals())
# A = use('uruk:latest', checkout="latest", hoist=globals())
# A = use('uruk', hoist=globals())
```
### The output
The output shows some statistics about the images found in the Uruk data.
Then there are links to the documentation.
**Tip:** open them, and have a quick look.
Every notebook that you set up with `Cunei` will have such links.
**GitHub and NBViewer**
If you have made your own notebook, and used this incantation,
and pushed the notebook to GitHub, links to the online version
of *your* notebook on GitHub and NBViewer will be generated and displayed.
By the way, GitHub shows notebooks nicely.
Sometimes NBViewer does it better, although it fetches exactly the same notebook from GitHub.
NBViewer is handy to navigate all the notebooks of a particular organization.
Try the [Nino-cunei starting point](http://nbviewer.jupyter.org/github/Nino-cunei/).
These links you can share with colleagues.
## Test
We perform a quick test to see that everything works.
### Count the signs
We count how many signs there are in the corpus.
In a next notebook we'll explain code like this.
```
len(F.otype.s("sign"))
```
### Show photos and lineart
We show the photo and lineart of a tablet, to whet your appetite.
```
example = T.nodeFromSection(("P005381",))
A.photo(example)
```
Note that you can click on the photo to see a better version on CDLI.
Here comes the lineart:
```
A.lineart(example)
```
A pretty representation of the transcription with embedded lineart for quads and signs:
```
A.pretty(example, withNodes=True)
```
We can suppress the lineart:
```
A.pretty(example, showGraphics=False)
```
The transliteration:
```
A.getSource(example)
```
Now the lines and cases of this tablet in a table:
```
table = []
for sub in L.d(example):
if F.otype.v(sub) in {"line", "case"}:
table.append((sub,))
A.table(table, showGraphics=False)
```
We can include the lineart in plain displays:
```
A.table(table, showGraphics=True)
```
This is just the beginning.
In the next chapters we show you how to
* fine-tune tablet displays,
* step and jump around in the corpus,
* search for patterns,
* drill down to quads and signs,
* and study frequency distributions of signs in subcases.
# Next
[imagery](imagery.ipynb)
*Get the big picture ...*
All chapters:
**start**
[imagery](imagery.ipynb)
[steps](steps.ipynb)
[search](search.ipynb)
[calc](calc.ipynb)
[signs](signs.ipynb)
[quads](quads.ipynb)
[jumps](jumps.ipynb)
[cases](cases.ipynb)
---
CC-BY Dirk Roorda
| github_jupyter |
```
import zmq
import msgpack
import sys
from pprint import pprint
import json
import numpy as np
import ceo
import matplotlib.pyplot as plt
%matplotlib inline
port = "5556"
```
# SETUP
```
context = zmq.Context()
print("Connecting to server...")
socket = context.socket(zmq.REQ)
socket.connect ("tcp://localhost:%s" % port)
print("Sending request ", "ubuntu_cuda70", "...")
socket.send_string("ubuntu_cuda70")
message = socket.recv_json()
pprint(message)
optical_path = {}
for kk, vv in message.items():
    print(kk, ' is ', vv)
    socket.send_string(vv)
message = socket.recv_json()
pprint(message)
if kk=="Source":
optical_path[vv] = ceo.Source(message["band"],
zenith=message["zenith"],
azimuth=message["azimuth"],
height=float(message["height"]),
magnitude = message["magnitude"],
rays_box_size=message["pupil size"],
rays_box_sampling=message["pupil sampling"],
rays_origin=[0.0,0.0,25])
N_SRC = optical_path[vv].N_SRC
elif kk=="GMT_MX":
D_px = message["pupil sampling"]
optical_path[vv] = ceo.GMT_MX(message["pupil size"],
message["pupil sampling"],
M1_radial_order=message["M1"]["Zernike radial order"],
M2_radial_order=message["M2"]["Zernike radial order"])
elif kk=="Imaging":
optical_path[vv] = ceo.Imaging(1, D_px-1,
DFT_osf=2*message["nyquist oversampling"],
N_PX_IMAGE=message["resolution"],
N_SOURCE=N_SRC)
optical_path["star"].reset()
optical_path["GMT"].propagate(optical_path["star"])
optical_path["imager"].propagate(optical_path["star"])
plt.imshow(optical_path["star"].phase.host(),interpolation='None')
plt.imshow(optical_path["imager"].frame.host())
```
# DATA SERVER
```
port = "5557"
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:%s" % port)
port = "5558"
sub_context = zmq.Context()
sub_socket = sub_context.socket(zmq.SUB)
sub_socket.connect ("tcp://localhost:%s" % port)
message = socket.recv()
print("Received request: ", message)
optical_path["star"].reset()
optical_path["GMT"].propagate(optical_path["star"])
optical_path["imager"].propagate(optical_path["star"])
data = optical_path["star"].phase.host()
msg = msgpack.packb(data.tolist())
socket.send(msg)
```
| github_jupyter |
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Challenge Notebook
## Problem: Format license keys.
See the [LeetCode](https://leetcode.com/problems/license-key-formatting/) problem page.
<pre>
Now you are given a string S, which represents a software license key which we would like to format. The string S is composed of alphanumerical characters and dashes. The dashes split the alphanumerical characters within the string into groups. (i.e. if there are M dashes, the string is split into M+1 groups). The dashes in the given string are possibly misplaced.
We want each group of characters to be of length K (except for possibly the first group, which could be shorter, but still must contain at least one character). To satisfy this requirement, we will reinsert dashes. Additionally, all the lower case letters in the string must be converted to upper case.
So, you are given a non-empty string S, representing a license key to format, and an integer K. And you need to return the license key formatted according to the description above.
Example 1:
Input: S = "2-4A0r7-4k", K = 4
Output: "24A0-R74K"
Explanation: The string S has been split into two parts, each part has 4 characters.
Example 2:
Input: S = "2-4A0r7-4k", K = 3
Output: "24-A0R-74K"
Explanation: The string S has been split into three parts, each part has 3 characters except the first part as it could be shorter as said above.
Note:
The length of string S will not exceed 12,000, and K is a positive integer.
String S consists only of alphanumerical characters (a-z and/or A-Z and/or 0-9) and dashes(-).
String S is non-empty.
</pre>
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
* [Solution Notebook](#Solution-Notebook)
## Constraints
* Is the output a string?
* Yes
* Can we change the input string?
* No, you can't modify the input string
* Can we assume the inputs are valid?
* No
* Can we assume this fits memory?
* Yes
## Test Cases
* None -> TypeError
* '---', k=3 -> ''
* '2-4A0r7-4k', k=3 -> '24-A0R-74K'
* '2-4A0r7-4k', k=4 -> '24A0-R74K'
## Algorithm
Refer to the [Solution Notebook](). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
## Code
```
class Solution(object):
def format_license_key(self, license_key, k):
# TODO: Implement me
pass
```
## Unit Test
**The following unit test is expected to fail until you solve the challenge.**
```
# %load test_format_license_key.py
from nose.tools import assert_equal, assert_raises
class TestSolution(object):
def test_format_license_key(self):
solution = Solution()
assert_raises(TypeError, solution.format_license_key, None, None)
license_key = '---'
k = 3
expected = ''
assert_equal(solution.format_license_key(license_key, k), expected)
license_key = '2-4A0r7-4k'
k = 3
expected = '24-A0R-74K'
assert_equal(solution.format_license_key(license_key, k), expected)
license_key = '2-4A0r7-4k'
k = 4
expected = '24A0-R74K'
assert_equal(solution.format_license_key(license_key, k), expected)
print('Success: test_format_license_key')
def main():
test = TestSolution()
test.test_format_license_key()
if __name__ == '__main__':
main()
```
## Solution Notebook
Review the [Solution Notebook]() for a discussion on algorithms and code solutions.
| github_jupyter |
##### Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Fitting Dirichlet Process Mixture Model Using Preconditioned Stochastic Gradient Langevin Dynamics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Fitting_DPMM_Using_pSGLD"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this notebook, we will demonstrate how to cluster a large number of samples and infer the number of clusters simultaneously by fitting a Dirichlet Process Mixture of Gaussian distribution. We use Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) for inference.
## Table of contents
1. Samples
1. Model
1. Optimization
1. Visualize the result
4.1. Clustered result
4.2. Visualize uncertainty
4.3. Mean and scale of selected mixture component
4.4. Mixture weight of each mixture component
4.5. Convergence of $\alpha$
4.6. Inferred number of clusters over iterations
4.7. Fitting the model using RMSProp
1. Conclusion
---
## 1. Samples
First, we set up a toy dataset. We generate 50,000 random samples from three bivariate Gaussian distributions.
```
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp
plt.style.use('ggplot')
tfd = tfp.distributions
def session_options(enable_gpu_ram_resizing=True):
"""Convenience function which sets common `tf.Session` options."""
config = tf.ConfigProto()
config.log_device_placement = True
if enable_gpu_ram_resizing:
# `allow_growth=True` makes it possible to connect multiple colabs to your
# GPU. Otherwise the colab malloc's all GPU ram.
config.gpu_options.allow_growth = True
return config
def reset_sess(config=None):
"""Convenience function to create the TF graph and session, or reset them."""
if config is None:
config = session_options()
tf.reset_default_graph()
global sess
try:
sess.close()
except:
pass
sess = tf.InteractiveSession(config=config)
# For reproducibility
rng = np.random.RandomState(seed=45)
tf.set_random_seed(76)
# Precision
dtype = np.float64
# Number of training samples
num_samples = 50000
# Ground truth loc values which we will infer later on. The scale is 1.
true_loc = np.array([[-4, -4],
[0, 0],
[4, 4]], dtype)
true_components_num, dims = true_loc.shape
# Generate training samples from ground truth loc
true_hidden_component = rng.randint(0, true_components_num, num_samples)
observations = (true_loc[true_hidden_component]
+ rng.randn(num_samples, dims).astype(dtype))
# Visualize samples
plt.scatter(observations[:, 0], observations[:, 1], 1)
plt.axis([-10, 10, -10, 10])
plt.show()
```
## 2. Model
Here, we define a Dirichlet Process Mixture of Gaussian distributions with a symmetric Dirichlet prior. Throughout the notebook, vector quantities are written in bold. Over $i\in\{1,\ldots,N\}$ samples, the model with a mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is formulated as follows:
$$\begin{align*}
p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\
&\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}})\\
\end{align*}$$
where:
$$\begin{align*}
x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\
z_i &= \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\
&\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\
\boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\frac{\alpha}{K},\cdots,\frac{\alpha}{K}\})\\
\alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\
\boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\
\boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1})\\
\end{align*}$$
Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$ which represents the inferred index of a cluster.
For an ideal Dirichlet Mixture Model, $K$ is set to $\infty$. However, it is known that one can approximate a Dirichlet Mixture Model with a sufficiently large $K$. Note that although we arbitrarily set an initial value of $K$, an optimal number of clusters is also inferred through optimization, unlike a simple Gaussian Mixture Model.
In this notebook, we use a bivariate Gaussian distribution as a mixture component and set $K$ to 30.
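Before fitting, it can help to see the finite approximation as a generative recipe. The sketch below draws data from the truncated model with unit-scale components, following the priors in this section (the helper name and sampling shortcuts are our own illustrative assumptions, not part of this notebook):

```python
import numpy as np

def sample_truncated_dpmm(n, K=30, alpha=1.0, dims=2, seed=0):
    """Draw n points from the finite symmetric-Dirichlet approximation.

    pi ~ Dirichlet(alpha/K, ..., alpha/K), mu_j ~ Normal(0, 1),
    z_i ~ Categorical(pi), x_i ~ Normal(mu_{z_i}, 1).
    """
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.full(K, alpha / K))    # mixture weights
    mu = rng.normal(0.0, 1.0, size=(K, dims))    # component locations
    z = rng.choice(K, size=n, p=pi)              # cluster assignments
    x = mu[z] + rng.normal(size=(n, dims))       # observations
    return x, z, pi
```

With a small `alpha`, most of the Dirichlet mass concentrates on a few components, which is exactly why the truncation at a finite `K` works in practice.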
```
reset_sess()
# Upperbound on K
max_cluster_num = 30
# Define trainable variables.
mix_probs = tf.nn.softmax(
tf.Variable(
name='mix_probs',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc = tf.Variable(
name='loc',
initial_value=np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision = tf.nn.softplus(tf.Variable(
name='precision',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha = tf.nn.softplus(tf.Variable(
name='alpha',
initial_value=
np.ones([1], dtype=dtype)))
training_vals = [mix_probs, alpha, loc, precision]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num,
name='rv_sdp')
rv_loc = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc')
rv_precision = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision')
rv_alpha = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha')
# Define mixture model
rv_observations = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc,
scale_diag=precision))
```
## 3. Optimization
We optimize the model with Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which enables us to optimize a model over a large number of samples in a mini-batch gradient descent manner.
To update the parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\, \boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ at the $t\,$th iteration with mini-batch size $M$, the update is sampled as:
$$\begin{align*}
\Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right)
+ \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\
&+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right)\\
\end{align*}$$
In the above equation, $\epsilon _ { t }$ is the learning rate at the $t\,$th iteration and $\log p(\theta_t)$ is the sum of the log prior densities of $\theta$. $G ( \boldsymbol { \theta } _ { t })$ is a preconditioner that adjusts the scale of the gradient of each parameter.
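To make the update rule concrete, here is a minimal NumPy sketch of a single pSGLD step using an RMSProp-style diagonal preconditioner. The helper name, state layout, and the choice to drop the small $\sum\nabla G$ curvature term are our illustrative assumptions; the notebook itself relies on `tfp.optimizer.StochasticGradientLangevinDynamics`:

```python
import numpy as np

def psgld_step(theta, grad_log_post, V, lr, alpha=0.99, eps=1e-5, rng=None):
    """One pSGLD update with an RMSProp-style diagonal preconditioner.

    grad_log_post is the stochastic gradient of the log posterior at theta:
    the prior term plus the (N / M)-rescaled mini-batch likelihood term.
    The exact curvature correction term is omitted here for brevity.
    """
    rng = np.random.default_rng() if rng is None else rng
    V = alpha * V + (1 - alpha) * grad_log_post ** 2  # running 2nd moment
    G = 1.0 / (eps + np.sqrt(V))                      # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(lr * G)
    theta = theta + 0.5 * lr * G * grad_log_post + noise
    return theta, V
```

The injected Gaussian noise is what turns the preconditioned gradient descent into approximate posterior sampling.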
```
# Learning rates and decay
starter_learning_rate = 1e-6
end_learning_rate = 1e-10
decay_steps = 1e4
# Number of training steps
training_steps = 10000
# Mini-batch size
batch_size = 20
# Sample size for parameter posteriors
sample_size = 100
```
We will use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior probabilities $p(\theta_t)$ as the loss function for pSGLD.
Note that as specified in the [API of pSGLD](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/StochasticGradientLangevinDynamics), we need to divide the sum of the prior probabilities by sample size $N$.
```
# Placeholder for mini-batch
observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims])
# Define joint log probabilities
# Notice that each prior probability should be divided by num_samples and
# likelihood is divided by batch_size for pSGLD optimization.
log_prob_parts = [
rv_loc.log_prob(loc) / num_samples,
rv_precision.log_prob(precision) / num_samples,
rv_alpha.log_prob(alpha) / num_samples,
rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis]
/ num_samples,
rv_observations.log_prob(observations_tensor) / batch_size
]
joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1)
# Make mini-batch generator
dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\
.shuffle(500).repeat().batch(batch_size)
iterator = tf.compat.v1.data.make_one_shot_iterator(dx)
next_batch = iterator.get_next()
# Define learning rate scheduling
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate,
global_step, decay_steps,
end_learning_rate, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics(
learning_rate=learning_rate,
preconditioner_decay_rate=0.99,
burnin=1500,
data_size=num_samples)
train_op = optimizer_kernel.minimize(-joint_log_prob)
# Arrays to store samples
mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num])
mean_alpha_mtx = np.zeros([training_steps, 1])
mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims])
mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims])
init = tf.global_variables_initializer()
sess.run(init)
start = time.time()
for it in range(training_steps):
[
mean_mix_probs_mtx[it, :],
mean_alpha_mtx[it, 0],
mean_loc_mtx[it, :, :],
mean_precision_mtx[it, :, :],
_
] = sess.run([
*training_vals,
train_op
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_psgld = time.time() - start
print("Elapsed time: {} seconds".format(elapsed_time_psgld))
# Take mean over the last sample_size iterations
mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0)
mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0)
mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0)
mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0)
```
## 4. Visualize the result
### 4.1. Clustered result
First, we visualize the result of clustering.
For assigning each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:
$$\begin{align*}
j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta})
\end{align*}$$
```
loc_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='loc_for_posterior')
precision_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='precision_for_posterior')
mix_probs_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num], name='mix_probs_for_posterior')
# Posterior of z (unnormalized)
unnormalized_posterior = tfd.MultivariateNormalDiag(
    loc=loc_for_posterior, scale_diag=precision_for_posterior)\
    .log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\
    + tf.log(mix_probs_for_posterior[tf.newaxis, ...])
# Posterior of z (normalized over latent states)
posterior = unnormalized_posterior\
    - tf.reduce_logsumexp(unnormalized_posterior, axis=-1)[..., tf.newaxis]
cluster_asgmt = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]})
idxs, count = np.unique(cluster_asgmt, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
def convert_int_elements_to_consecutive_numbers_in(array):
unique_int_elements = np.unique(array)
for consecutive_number, unique_int_element in enumerate(unique_int_elements):
array[array == unique_int_element] = consecutive_number
return array
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt)))
plt.axis([-10, 10, -10, 10])
plt.show()
```
We can see that roughly equal numbers of samples are assigned to the appropriate clusters, and that the model has also inferred the correct number of clusters.
### 4.2. Visualize uncertainty
Here, we look at the uncertainty of the clustering result by visualizing it for each sample.
We calculate uncertainty by using entropy:
$$\begin{align*}
\text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum^{K}_{z_i=1}\sum^{O}_{l=1}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)
\end{align*}$$
In pSGLD, we treat the value of a training parameter at each iteration as a sample from its posterior distribution. Thus, we calculate entropy over values from $O$ iterations for each parameter. The final entropy value is calculated by averaging entropies of all the cluster assignments.
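As a plain-NumPy illustration of this entropy average (the array layout and function name are our assumptions, not the notebook's):

```python
import numpy as np

def entropy_uncertainty(posterior_samples):
    """Entropy-based uncertainty per data point.

    posterior_samples: p(z_i = k | x_i, theta_l) with shape (O, N, K) --
    O posterior draws, N data points, K clusters. Returns, per point,
    -sum over draws of p * log(p), averaged over the K cluster indices,
    matching the formula above.
    """
    p = np.clip(posterior_samples, 1e-12, 1.0)  # avoid log(0)
    return (-(p * np.log(p)).sum(axis=0)).mean(axis=-1)
```

A near-one-hot posterior yields an uncertainty close to zero, while a uniform posterior yields the maximum value.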
```
# Calculate entropy
posterior_in_exponential = tf.exp(posterior)
uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum(
posterior_in_exponential
* posterior,
axis=1), axis=1)
uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]
})
plt.title('Entropy')
sc = plt.scatter(observations[:, 0],
observations[:, 1],
1,
c=uncertainty_in_entropy_,
cmap=plt.cm.viridis_r)
cbar = plt.colorbar(sc,
fraction=0.046,
pad=0.04,
ticks=[uncertainty_in_entropy_.min(),
uncertainty_in_entropy_.max()])
cbar.ax.set_yticklabels(['low', 'high'])
cbar.set_label('Uncertainty', rotation=270)
plt.show()
```
In the graph above, lower luminance represents higher uncertainty.
Samples near the cluster boundaries show especially high uncertainty, which matches the intuition that such samples are hard to assign to a single cluster.
### 4.3. Mean and scale of selected mixture component
Next, we look at selected clusters' $\mu_j$ and $\sigma_j$.
```
for idx, number_of_samples in zip(idxs, count):
    print(
        'Component id = {}, Number of elements = {}'
        .format(idx, number_of_samples))
    print(
        'Mean loc = {}, Mean scale = {}\n'
        .format(mean_loc_[idx, :], mean_precision_[idx, :]))
```
Again, the inferred $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ are close to the ground truth.
### 4.4 Mixture weight of each mixture component
We also look at inferred mixture weights.
```
plt.ylabel('Mean posterior of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mean_mix_probs_)
plt.show()
```
We see that only a few (three) mixture components have significant weights, while the rest are close to zero. This also shows the model successfully inferred the number of mixture components that constitute the distribution of the samples.
### 4.5. Convergence of $\alpha$
We look at convergence of Dirichlet distribution's concentration parameter $\alpha$.
```
print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0]))
plt.ylabel('Sample value of alpha')
plt.xlabel('Iteration')
plt.plot(mean_alpha_mtx)
plt.show()
```
Considering that a smaller $\alpha$ implies a smaller expected number of clusters in a Dirichlet mixture model, the model appears to be learning the optimal number of clusters over the iterations.
### 4.6. Inferred number of clusters over iterations
We visualize how the inferred number of clusters changes over the iterations by re-computing the cluster assignments from successive windows of posterior samples.
```
step = sample_size
num_of_iterations = 50
estimated_num_of_clusters = []
interval = (training_steps - step) // (num_of_iterations - 1)
iterations = np.asarray(range(step, training_steps+1, interval))
for iteration in iterations:
start_position = iteration-step
end_position = iteration
result = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior:
mean_loc_mtx[start_position:end_position, :],
precision_for_posterior:
mean_precision_mtx[start_position:end_position, :],
mix_probs_for_posterior:
mean_mix_probs_mtx[start_position:end_position, :]})
idxs, count = np.unique(result, return_counts=True)
estimated_num_of_clusters.append(len(count))
plt.ylabel('Number of inferred clusters')
plt.xlabel('Iteration')
plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1))
plt.plot(iterations - 1, estimated_num_of_clusters)
plt.show()
```
Over the iterations, the inferred number of clusters approaches three. Together with the convergence of $\alpha$ to a smaller value, this shows the model is successfully learning the parameters needed to infer an optimal number of clusters.
Interestingly, the inference converges to the correct number of clusters in the early iterations, whereas $\alpha$ converges much later.
### 4.7. Fitting the model using RMSProp
In this section, to see the effectiveness of the Monte Carlo sampling scheme of pSGLD, we fit the same model with RMSProp. We choose RMSProp for the comparison because pSGLD is built on RMSProp, but RMSProp lacks the sampling scheme.
```
# Learning rates and decay
starter_learning_rate_rmsprop = 1e-2
end_learning_rate_rmsprop = 1e-4
decay_steps_rmsprop = 1e4
# Number of training steps
training_steps_rmsprop = 50000
# Mini-batch size
batch_size_rmsprop = 20
# Define trainable variables.
mix_probs_rmsprop = tf.nn.softmax(
tf.Variable(
name='mix_probs_rmsprop',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc_rmsprop = tf.Variable(
name='loc_rmsprop',
initial_value=np.zeros([max_cluster_num, dims], dtype)
+ np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision_rmsprop = tf.nn.softplus(tf.Variable(
name='precision_rmsprop',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha_rmsprop = tf.nn.softplus(tf.Variable(
name='alpha_rmsprop',
initial_value=
np.ones([1], dtype=dtype)))
training_vals_rmsprop =\
[mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype)
* alpha_rmsprop / max_cluster_num,
name='rv_sdp_rmsprop')
rv_loc_rmsprop = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc_rmsprop')
rv_precision_rmsprop = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision_rmsprop')
rv_alpha_rmsprop = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha_rmsprop')
# Define mixture model
rv_observations_rmsprop = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc_rmsprop,
scale_diag=precision_rmsprop))
log_prob_parts_rmsprop = [
rv_loc_rmsprop.log_prob(loc_rmsprop),
rv_precision_rmsprop.log_prob(precision_rmsprop),
rv_alpha_rmsprop.log_prob(alpha_rmsprop),
rv_symmetric_dirichlet_process_rmsprop
.log_prob(mix_probs_rmsprop)[..., tf.newaxis],
rv_observations_rmsprop.log_prob(observations_tensor)
* num_samples / batch_size
]
joint_log_prob_rmsprop = tf.reduce_sum(
tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1)
# Define learning rate scheduling
global_step_rmsprop = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate_rmsprop,
global_step_rmsprop, decay_steps_rmsprop,
end_learning_rate_rmsprop, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.99)
train_op_rmsprop = optimizer_kernel_rmsprop.minimize(-joint_log_prob_rmsprop)
init_rmsprop = tf.global_variables_initializer()
sess.run(init_rmsprop)
start = time.time()
for it in range(training_steps_rmsprop):
[
_
] = sess.run([
train_op_rmsprop
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_rmsprop = time.time() - start
print("RMSProp elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_rmsprop, training_steps_rmsprop))
print("pSGLD elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_psgld, training_steps))
mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\
sess.run(training_vals_rmsprop)
```
Although RMSProp runs for more iterations than pSGLD, its optimization finishes much faster.
Next, we look at the clustering result.
```
cluster_asgmt_rmsprop = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: loc_rmsprop_[tf.newaxis, :],
precision_for_posterior: precision_rmsprop_[tf.newaxis, :],
mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]})
idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(
cluster_asgmt_rmsprop)))
plt.axis([-10, 10, -10, 10])
plt.show()
```
The number of clusters was not correctly inferred by RMSProp optimization in our experiment. We also look at the mixture weight.
```
plt.ylabel('MAP inference of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_)
plt.show()
```
We can see that an incorrect number of components carries significant mixture weight.
Although its optimization takes longer, pSGLD, with its Monte Carlo sampling scheme, performed better in our experiment.
## 5. Conclusion
In this notebook, we have described how to cluster a large number of samples and simultaneously infer the number of clusters by fitting a Dirichlet Process Mixture of Gaussians using pSGLD.
The experiment showed that the model successfully clustered the samples and inferred the correct number of clusters. We also showed that the Monte Carlo sampling scheme of pSGLD lets us visualize the uncertainty of the result. Beyond clustering the samples, the model also recovered the correct parameters of the mixture components. On the relationship between the parameters and the number of inferred clusters, we investigated how the model learns the parameter that controls the number of effective clusters by visualizing the correlation between the convergence of $\alpha$ and the inferred number of clusters. Lastly, we looked at fitting the model with RMSProp. RMSProp, an optimizer without a Monte Carlo sampling scheme, ran considerably faster than pSGLD but clustered less accurately.
Although the toy dataset has only 50,000 two-dimensional samples, the mini-batch optimization used here scales to much larger datasets.
<a href="https://colab.research.google.com/github/ElizaLo/Practice-Python/blob/master/Data%20Compression%20Methods/Huffman%20Code/Huffman_code.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Huffman Coding
## **Solution**
```
import heapq
from collections import Counter, namedtuple
class Node(namedtuple("Node", ["left", "right"])):
    def walk(self, code, acc):  # acc is the code prefix accumulated while descending from the root to a node/leaf
        self.left.walk(code, acc + "0")
        self.right.walk(code, acc + "1")
class Leaf(namedtuple("Leaf", ["char"])):
    def walk(self, code, acc):
        code[self.char] = acc or "0"
```
**Encoding**
```
def huffman_encode(s):
    h = []
    for ch, freq in Counter(s).items():
        h.append((freq, len(h), Leaf(ch)))
    heapq.heapify(h)
    count = len(h)
    while len(h) > 1:  # while more than one element remains in the queue
        freq1, _count1, left = heapq.heappop(h)   # pop the element with the lowest frequency
        freq2, _count2, right = heapq.heappop(h)
        heapq.heappush(h, (freq1 + freq2, count, Node(left, right)))
        count += 1
    code = {}
    if h:
        [(_freq, _count, root)] = h  # root of the tree
        root.walk(code, "")
    return code
```
**Decoding**
```
def huffman_decode(encoded, code):
sx = []
enc_ch = ""
for ch in encoded:
enc_ch += ch
for dec_ch in code:
if code.get(dec_ch) == enc_ch:
sx.append(dec_ch)
enc_ch = ""
break
return "".join(sx)
def main():
    s = input()
    code = huffman_encode(s)
    # Encoded version of the string s:
    # maps every character to its corresponding code.
    encoded = "".join(code[ch] for ch in s)
    # len(code) is the number of distinct characters in s (the dictionary size);
    # len(encoded) is the length of the encoded string.
    print("\nDictionary =", len(code), "\nLength of string =", len(encoded))
    # Show how each character is encoded
    print("\n")
    for ch in sorted(code):
        print("{}: {}".format(ch, code[ch]))
    print("\nEncoded string: ", encoded)
    print("\nDecoded string:", huffman_decode(encoded, code))
if __name__ == "__main__":
    main()
```
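Because Huffman codes are prefix-free, decoding does not need to rescan the whole codebook for every prefix the way `huffman_decode` above does; inverting the codeword dictionary makes each lookup O(1). A sketch (the function name is ours, not from the original solution):

```python
def huffman_decode_fast(encoded, code):
    # Invert the char -> codeword map; prefix-freeness guarantees that the
    # first complete codeword match is the right one.
    inverse = {codeword: ch for ch, codeword in code.items()}
    decoded, buffer = [], ""
    for bit in encoded:
        buffer += bit
        if buffer in inverse:  # a full codeword has been read
            decoded.append(inverse[buffer])
            buffer = ""
    return "".join(decoded)
```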
## Testing
```
import random
import string
def test(n_iter=100):
for i in range(n_iter):
length = random.randint(0, 32)
s = "".join(random.choice(string.ascii_letters) for _ in range(length))
code = huffman_encode(s)
encoded = "".join(code[ch] for ch in s)
assert huffman_decode(encoded, code) == s
```
## Simple code
```
def huffman_encode(s):
    return {ch: ch for ch in s}  # encodes each character as itself (maps every character to its own "code")
def main():
    s = input()
    code = huffman_encode(s)
    # Encoded version of the string s:
    # maps every character to its corresponding code.
    encoded = "".join(code[ch] for ch in s)
    # len(code) is the number of distinct characters in s (the dictionary size);
    # len(encoded) is the length of the encoded string.
    print("\nDictionary =", len(code), "\nLength of string =", len(encoded))
    # Show how each character is encoded
    print("\n")
    for ch in sorted(code):
        print("{}: {}".format(ch, code[ch]))
    print("\n", encoded)  # the encoded string
if __name__ == "__main__":
    main()
```
```
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import datetime
dataset = pd.read_csv(r'C:\Users\ANOVA AJAY PANDEY\Desktop\SEM4\CSE 3021 SIN\proj\stock analysis\Google_Stock_Price_Train.csv',index_col="Date",parse_dates=True)
dataset.tail()
dataset.isna().any()
dataset.info()
dataset['Open'].plot(figsize=(16,6))
# Convert comma-formatted string columns to floats
dataset["Close"] = dataset["Close"].str.replace(',', '').astype(float)
dataset["Volume"] = dataset["Volume"].str.replace(',', '').astype(float)
# 7 day rolling mean
dataset.rolling(7).mean().tail(20)
dataset['Open'].plot(figsize=(16,6))
dataset.rolling(window=30).mean()['Close'].plot()
dataset['Close: 30 Day Mean'] = dataset['Close'].rolling(window=30).mean()
dataset[['Close','Close: 30 Day Mean']].plot(figsize=(16,6))
# Optional specify a minimum number of periods
dataset['Close'].expanding(min_periods=1).mean().plot(figsize=(16,6))
training_set=dataset['Open']
training_set=pd.DataFrame(training_set)
# Feature Scaling
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = sc.fit_transform(training_set)
# Creating a data structure with 60 timesteps and 1 output
X_train = []
y_train = []
for i in range(60, 1258):
X_train.append(training_set_scaled[i-60:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
# Reshaping
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
# Part 2 - Building the RNN
# Importing the Keras libraries and packages
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dropout
# Initialising the RNN
regressor = Sequential()
# Adding the first LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
regressor.add(Dropout(0.2))
# Adding a second LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a third LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50, return_sequences = True))
regressor.add(Dropout(0.2))
# Adding a fourth LSTM layer and some Dropout regularisation
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
# Adding the output layer
regressor.add(Dense(units = 1))
# Compiling the RNN
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
# Fitting the RNN to the Training set
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)
# Part 3 - Making the predictions and visualising the results
# Getting the real stock price of 2017
dataset_test = pd.read_csv(r'C:\Users\ANOVA AJAY PANDEY\Desktop\SEM4\CSE 3021 SIN\proj\stock analysis\Google_Stock_Price_Test.csv',index_col="Date",parse_dates=True)
real_stock_price = dataset_test.iloc[:, 1:2].values
dataset_test.head()
dataset_test.info()
dataset_test["Volume"] = dataset_test["Volume"].str.replace(',', '').astype(float)
test_set=dataset_test['Open']
test_set=pd.DataFrame(test_set)
test_set.info()
# Getting the predicted stock price of 2017
dataset_total = pd.concat((dataset['Open'], dataset_test['Open']), axis = 0)
inputs = dataset_total[len(dataset_total) - len(dataset_test) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 80):
X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = regressor.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
predicted_stock_price=pd.DataFrame(predicted_stock_price)
predicted_stock_price.info()
# Visualising the results
plt.plot(real_stock_price, color = 'red', label = 'Real Google Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Google Stock Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()
```
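The plot gives a qualitative comparison; a single error number is often useful as well. Below is a minimal sketch of root-mean-square error between two series (the helper name is ours; with the arrays above one would call `rmse(real_stock_price, predicted_stock_price)`):

```python
import numpy as np

def rmse(real, predicted):
    """Root-mean-square error between two equally long series."""
    real = np.asarray(real, dtype=float).ravel()
    predicted = np.asarray(predicted, dtype=float).ravel()
    return float(np.sqrt(np.mean((real - predicted) ** 2)))
```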
```
import os
import sys
import itertools
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.regression.linear_model as sm
from scipy import io
from mpl_toolkits.axes_grid1 import make_axes_locatable
path_root = os.environ.get('DECIDENET_PATH')
path_code = os.path.join(path_root, 'code')
if path_code not in sys.path:
sys.path.append(path_code)
from dn_utils.behavioral_models import load_behavioral_data
%matplotlib inline
# Directory for PPI analysis
path_out = os.path.join(path_root, 'data/main_fmri_study/derivatives/ppi')
path_timeseries = os.path.join(path_out, 'timeseries')
# Load behavioral data
path_beh = os.path.join(path_root, 'data/main_fmri_study/sourcedata/behavioral')
beh, meta = load_behavioral_data(path=path_beh, verbose=False)
n_subjects, n_conditions, n_trials, _ = beh.shape
# Load neural & BOLD timeseries
data = io.loadmat(os.path.join(
    path_timeseries,
    'timeseries_pipeline-24HMPCSFWM_atlas-metaROI_neural.mat'))
timeseries_neural_aggregated = data['timeseries_neural_aggregated']
timeseries_denoised_aggregated = np.load(os.path.join(
    path_timeseries,
    'timeseries_pipeline-24HMPCSFWM_atlas-metaROI_bold.npy'))
downsamples = data['k'].flatten()
# Acquisition parameters
_, _, n_volumes, n_rois = timeseries_denoised_aggregated.shape
# Input data shape
print('timeseries_neural_aggregated.shape', timeseries_neural_aggregated.shape)
print('timeseries_denoised_aggregated.shape', timeseries_denoised_aggregated.shape)
mpl.rcParams.update({"font.size": 15})
fc_rest = np.zeros((n_subjects, n_conditions, n_rois, n_rois))
for i in range(n_subjects):
for j in range(n_conditions):
fc_rest[i, j] = np.corrcoef(timeseries_denoised_aggregated[i, j].T)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15, 15))
im = [[None, None], [None, None]]
im[0][0] = ax[0][0].imshow(fc_rest[:, 0, :, :].mean(axis=0), clim=[-1, 1], cmap='RdBu_r')
im[0][1] = ax[0][1].imshow(fc_rest[:, 1, :, :].mean(axis=0), clim=[-1, 1], cmap='RdBu_r')
im[1][0] = ax[1][0].imshow(fc_rest[:, 0, :, :].std(axis=0), clim=[0, .2], cmap='RdBu_r')
im[1][1] = ax[1][1].imshow(fc_rest[:, 1, :, :].std(axis=0), clim=[0, .2], cmap='RdBu_r')
for i, j in itertools.product([0, 1], repeat=2):
divider = make_axes_locatable(ax[i][j])
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(im[i][j], cax=cax)
ax[0][0].set_title("Reward-seeking")
ax[0][1].set_title("Punishment-avoiding")
ax[0][0].set_ylabel("Mean connectivity")
ax[1][0].set_ylabel("Variability of connectivity")
plt.tight_layout()
```
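One common refinement when averaging correlation matrices across subjects, as the `imshow` calls above do, is to average in Fisher-z space rather than on raw correlation coefficients. A hedged sketch of that alternative (not part of the original analysis; the function name is ours):

```python
import numpy as np

def fisher_z_mean(corrs):
    """Average correlation matrices of shape (n_subjects, n_rois, n_rois)
    via the Fisher z (arctanh) transform. The diagonal is reset to 1
    afterwards because arctanh(1) is infinite."""
    z = np.arctanh(np.clip(corrs, -0.999999, 0.999999))
    mean_r = np.tanh(z.mean(axis=0))
    idx = np.arange(mean_r.shape[-1])
    mean_r[..., idx, idx] = 1.0
    return mean_r
```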
# Loading and Checking Data
## Importing Libraries
```
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
use_cuda = torch.cuda.is_available()
```
## Loading Data
```
batch_size = 4
# These are the mean and standard deviation values for all pictures in the training set.
mean = (0.4914 , 0.48216, 0.44653)
std = (0.24703, 0.24349, 0.26159)
# Class to denormalize images to display later.
class DeNormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor):
for t, m, s in zip(tensor, self.mean, self.std):
t.mul_(s).add_(m)
return tensor
# Creating instance of Functor
denorm = DeNormalize(mean, std)
# Load data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(mean, std)])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=4)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# Do NOT shuffle the test set or else the order will be messed up
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=4)
# Classes in order
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
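The hard-coded `mean` and `std` tuples above are per-channel statistics over the CIFAR-10 training images. As a rough sketch of how such values can be computed (using a small synthetic array in place of the real dataset, whose pixel array would be `trainset.data / 255.0`), the statistics reduce over every axis except the channel axis:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a batch of images.

    images: float array of shape (N, H, W, C) with values in [0, 1].
    Averaging over the sample, height and width axes leaves one value
    per colour channel -- the form expected by transforms.Normalize.
    """
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std

# Tiny synthetic stand-in for the CIFAR-10 training set
rng = np.random.default_rng(0)
imgs = rng.random((8, 32, 32, 3))
mean, std = channel_stats(imgs)
print(mean.shape, std.shape)  # (3,) (3,)
```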
## Sample Images and Labels
```
# functions to show an image
def imshow(img):
img = denorm(img) # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
# Defining Model
## Fully-Connected DNN
```
class Net_DNN(nn.Module):
def __init__(self, architecture):
super().__init__()
self.layers = nn.ModuleList([
nn.Linear(architecture[layer], architecture[layer + 1])
for layer in range(len(architecture) - 1)])
def forward(self, data):
# Flatten each 3 x 32 x 32 image into a single vector per sample
data = data.view(data.size(0), -1)
for layer in self.layers:
layer_data = layer(data)
data = F.relu(layer_data)
return F.log_softmax(layer_data, dim=-1)
```
## Fully-CNN
```
class Net_CNN(nn.Module):
# Padding is set to 2 and stride to 2
# Padding ensures all edge pixels are exposed to the filter
# Stride = 2 is common practice
def __init__(self, layers, c, stride=2):
super().__init__()
self.layers = nn.ModuleList([
nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, padding=2, stride=stride)
for i in range(len(layers) - 1)])
self.pool = nn.AdaptiveMaxPool2d(1) # Simply takes the maximum value from the Tensor
self.out = nn.Linear(layers[-1], c)
def forward(self, data):
for layer in self.layers:
data = F.relu(layer(data))
data = self.pool(data)
data = data.view(data.size(0), -1)
return F.log_softmax(self.out(data), dim=-1)
```
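With `kernel_size=3`, `padding=2` and `stride=2` as above, each convolution changes the spatial size according to the standard formula `floor((n + 2p - k)/s) + 1`. A quick sketch of how the 32x32 CIFAR input shrinks through a four-convolution stack such as `[3, 20, 40, 80, 160]` (before the adaptive pool collapses it to 1x1):

```python
def conv_out(n, k=3, p=2, s=2):
    # Standard output-size formula for a 2-D convolution
    return (n + 2 * p - k) // s + 1

size = 32  # CIFAR-10 spatial resolution
sizes = [size]
for _ in range(4):  # four Conv2d layers in [3, 20, 40, 80, 160]
    size = conv_out(size)
    sizes.append(size)
print(sizes)  # [32, 17, 10, 6, 4]
```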
## Chained CNN and NN
```
class Net_CNN_NN(nn.Module):
# Padding is set to 2 and stride to 2
# Padding ensures all edge pixels are exposed to the filter
# Stride = 2 is common practice
def __init__(self, layers, architecture, stride=2):
super().__init__()
# Fully Convolutional Layers
self.layers = nn.ModuleList([
nn.Conv2d(layers[i], layers[i + 1], kernel_size=3, padding=2,stride=stride)
for i in range(len(layers) - 1)])
# Fully Connected Neural Network to map to output
self.layers_NN = nn.ModuleList([
nn.Linear(architecture[layer], architecture[layer + 1])
for layer in range(len(architecture) - 1)])
self.pool = nn.AdaptiveMaxPool2d(1) # Simply takes the maximum value from the Tensor
def forward(self, data):
for layer in self.layers:
data = F.relu(layer(data))
data = self.pool(data)
data = data.view(data.size(0), -1)
for layer in self.layers_NN:
layer_data = layer(data)
data = F.relu(layer_data)
return F.log_softmax(layer_data, dim=-1)
```
## Defining the NN, Loss Function and Optimizer
```
# ---------------------------------------------
# Uncomment the architecture you want to use
# ---------------------------------------------
# # DNN
# architecture = [32*32*3, 100, 100, 100, 100, 10]
# net = Net_DNN(architecture)
# # CNN
# architecture = [3, 20, 40, 80, 160]
# num_outputs = 10
# net = Net_CNN(architecture, num_outputs)
# # CNN with NN
# architecture = [3, 20, 40, 80]
# architecture_NN = [80, 40, 20, 10]
# num_outputs = 10
# net = Net_CNN_NN(architecture, architecture_NN)
if use_cuda:
net = net.cuda() # Training on the GPU
# The models above return log-probabilities (F.log_softmax), so NLLLoss is
# the matching criterion; CrossEntropyLoss would apply log_softmax twice.
criterion = nn.NLLLoss()
```
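The `architecture` lists above fully determine the model size. For instance, for the commented-out DNN `[32*32*3, 100, 100, 100, 100, 10]`, each `nn.Linear(a, b)` contributes `a*b` weights plus `b` biases, so the total parameter count can be sanity-checked by hand:

```python
def linear_param_count(architecture):
    # Each nn.Linear(a, b) holds a*b weights and b biases
    return sum((a + 1) * b for a, b in zip(architecture, architecture[1:]))

dnn = [32 * 32 * 3, 100, 100, 100, 100, 10]
print(linear_param_count(dnn))  # 338610
```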
## Loading Model
```
# ---------------------------------------------
# Uncomment the architecture you want to use
# ---------------------------------------------
# # DNN
# architecture = [32*32*3, 100, 100, 10]
# net = Net_DNN(architecture)
# # CNN
# architecture = [3, 20, 40, 80, 160]
# num_outputs = 10
# net = Net_CNN(architecture, num_outputs)
# criterion = nn.NLLLoss()  # models return log-probabilities
if use_cuda:
net = net.cuda() # Training on the GPU
# ---------------------------------------------
# Determine the path for the saved weights
# ---------------------------------------------
PATH = './checkpoints_CNN_v2/5'
# Load weights
net.load_state_dict(torch.load(PATH))
```
## Recording Loss
```
# Initialize a list of loss_results
loss_results = []
```
# Manual Training
```
# Set the Learning rate and epoch start and end points
start_epoch = 11
end_epoch = 15
lr = 0.0001
# Define the optimizer
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)
for epoch in range(start_epoch, end_epoch+1): # loop over the dataset multiple times
print("Epoch:", epoch)
running_loss = 0.0
for i, (inputs, labels) in enumerate(trainloader, 0):
# get the inputs
if use_cuda:
inputs, labels = inputs.cuda(), labels.cuda()
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels) # Inputs and Target values to GPU
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()  # loss.data[0] is deprecated in newer PyTorch
if i % 2000 == 1999: # print every 2000 mini-batches
print(running_loss / 2000)
loss_results.append(running_loss / 2000)
running_loss = 0.0
PATH = './checkpoints_hybrid/' + str(epoch)
torch.save(net.state_dict(), PATH)
```
## Sample of the Results
```
# load a mini-batch of the images
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
## Sample of Predictions
```
# For the images shown above, show the predictions
# move the mini-batch to the GPU when one is available
if use_cuda:
    images, labels = images.cuda(), labels.cuda()
# Feed forward
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
## Total Test Set Accuracy
```
# Small code snippet to determine test accuracy
correct = 0
total = 0
for data in testloader:
# load images
images, labels = data
if use_cuda:
images, labels = images.cuda(), labels.cuda()
# feed forward
outputs = net(Variable(images))
# perform softmax regression
_, predicted = torch.max(outputs.data, 1)
# update stats
total += labels.size(0)
correct += (predicted == labels).sum().item()  # .item() extracts the Python int
# print the results
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
## Accuracy per Class for Test Set
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
if use_cuda:
images, labels = images.cuda(), labels.cuda()
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
# Print the accuracy per class
for i in range(10):
print(classes[i], 100 * class_correct[i] / class_total[i])
```
# Plot Loss
```
batch_size = 4
loss_samples_per_epoch = 6
num_epochs = 15
epochs_list = [(i/loss_samples_per_epoch) for i in range(1, num_epochs*loss_samples_per_epoch + 1)]
plt.semilogy(epochs_list, loss_results[:-6])
plt.ylabel('Loss')
plt.xlabel('Epoch Number')
plt.savefig('./DNN_v2.png', format='png', pad_inches=1, dpi=1200)
```
## Supervised Learning: Random Forests
Let's now turn to one of the most popular state-of-the-art algorithms. This algorithm is non-parametric and goes by the name of **random forests**.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
```
## At the Root of Random Forests: the Decision Tree
Random forests belong to the family of **ensemble learning** methods and are built from **decision trees**. For this reason, we will first introduce decision trees.
A decision tree is a very intuitive way to solve a classification problem: you simply define a series of questions that narrow down the appropriate class.
```
import fig_code.figures as fig
fig.plot_example_decision_tree()
```
Binary splitting of the data is quick to carry out. The difficulty lies in determining what the "right" question to ask is.
That is the whole point of the training phase of a decision tree: given a dataset, the algorithm determines which question (or split) yields the largest information gain.
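A common way to score a candidate split is the decrease in impurity it produces; a minimal sketch using the Gini criterion (the exact criterion scikit-learn uses is configurable through its `criterion` parameter):

```python
import numpy as np

def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_gain(parent, left, right):
    # Impurity decrease: parent impurity minus the
    # size-weighted impurity of the two children
    n = len(parent)
    return gini(parent) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

parent = np.array([0, 0, 1, 1])
print(gini_gain(parent, parent[:2], parent[2:]))  # 0.5 for a perfect split
```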
### Building a Decision Tree
Here is an example of a decision-tree classifier built with the scikit-learn library.
We start by defining a two-dimensional dataset with associated labels:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
```
We previously defined a function that will make it easier to visualize the process:
```
from fig_code.figures import visualize_tree, plot_tree_interactive
```
We now use the ``interact`` module from IPython to visualize the splits made by the decision tree as a function of the tree depth, i.e. the number of questions the tree is allowed to ask:
```
plot_tree_interactive(X, y);
```
**Note**: each time the depth of the tree increases, every branch is split in two, **except** for branches that contain points of a single class only.
The decision tree is a non-parametric classification method that is easy to put into practice.
**Question: do you notice any problems with this model?**
## Decision Trees and Overfitting
One problem with decision trees is that they tend to **overfit** the training data quickly: they have a strong tendency to capture the noise present in the data rather than the true underlying distribution. For example, if we build two trees from subsets of the data defined above, we obtain the following two classifiers:
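This tendency can also be quantified by comparing training and held-out accuracy of a single unconstrained tree; a sketch on the same kind of blob data:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# An unconstrained tree memorises the training set...
print(clf.score(X_tr, y_tr))  # 1.0
# ...while held-out accuracy is at best as good, usually worse
print(clf.score(X_te, y_te))
```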
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
```
The two classifiers show notable differences if we look at the figures in detail. When we predict the class of a new point, the result risks being driven by the noise in the data more than by the signal we are trying to model.
## Ensemble Predictions: Random Forests
One way to limit this overfitting problem is to use an **ensemble model**: a meta-estimator that aggregates the predictions of multiple estimators (which may individually overfit). Thanks to some rather magical mathematical properties (!), the aggregated prediction of these estimators turns out to be more accurate and robust than any of the estimators taken individually.
One of the best-known ensemble methods is the **random forest**, which aggregates the predictions of multiple decision trees.
There is a large scientific literature on how to randomize these trees but, to give a concrete example, here is an ensemble of models, each of which uses only a subsample of the data:
```
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
def fit_randomized_tree(random_state=0):
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
clf = DecisionTreeClassifier(max_depth=5)
# use only 250 examples chosen at random out of the 300 available
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from ipywidgets import interact
interact(fit_randomized_tree, random_state=(0, 100));
```
We can observe in detail how the model changes as a function of the random draw of the data used for training, even though the underlying data distribution is fixed!
The random forest performs similar computations, but aggregates the whole set of generated random trees to build a single prediction:
```
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(X, y)
visualize_tree(clf,X, y, boundaries=False)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
```
By averaging 100 randomly "perturbed" decision trees, we obtain an aggregated prediction that models our data more accurately.
*(Note: above, our random perturbation is carried out by randomly subsampling the data... Random forests use more sophisticated techniques; for more details, see the [scikit-learn documentation](http://scikit-learn.org/stable/modules/ensemble.html#forest).)*
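The improvement can be checked with cross-validation, comparing a single tree against a forest on the same data; a minimal sketch:

```python
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=2.0)

tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
forest_acc = cross_val_score(
    RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0), X, y, cv=5
).mean()
# The aggregated model is usually at least as accurate as a single tree
print(round(tree_acc, 3), round(forest_acc, 3))
```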
## Example 1: Use in Regression
For this example we consider a case study that differs from the previous classification examples. Random forests can also be used on regression problems (that is, predicting a continuous variable rather than a discrete one).
The estimator we will use is ``sklearn.ensemble.RandomForestRegressor``.
We briefly show how it can be used:
```
from sklearn.ensemble import RandomForestRegressor
# Start by creating a training dataset
x = 10 * np.random.rand(100)
def model(x, sigma=0.):
# sigma controls the noise
# sigma=0 gives a "perfect" (noise-free) distribution
oscillation_rapide = np.sin(5 * x)
oscillation_lente = np.sin(0.5 * x)
bruit = sigma * np.random.randn(len(x))
return oscillation_rapide + oscillation_lente + bruit
y = model(x)
plt.figure(figsize=(10,5))
plt.scatter(x, y);
xfit = np.linspace(0, 10, num=1000)
# yfit contains the random forest's predictions from the noisy data
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
# ytrue contains the values of the model that generated our data, with zero noise
ytrue = model(xfit, sigma=0)
plt.figure(figsize=(10,5))
#plt.scatter(x, y)
plt.plot(xfit, yfit, '-r', label = 'random forest')
plt.plot(xfit, ytrue, '-g', alpha=0.5, label = 'noise-free distribution')
plt.legend();
```
We observe that random forests, in a fully non-parametric way, manage to estimate a distribution with multiple periodicities without any intervention on our part to specify those periodicities!
---
**Hyperparameters**
Let's use the help tool built into IPython to explore the ``RandomForestRegressor`` class. To do so, append a ? to the object:
```
RandomForestRegressor?
```
What options are available for the ``RandomForestRegressor``?
How is the previous plot affected if these values are changed?
These class parameters are called the **hyperparameters** of a model.
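Rather than tuning these hyperparameters by hand, a grid search can be run over them; a minimal sketch on data of the same shape as above (the grid values here are illustrative, not recommendations):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(42)
x = 10 * rng.rand(100)
y = np.sin(5 * x) + np.sin(0.5 * x) + 0.4 * rng.randn(100)

# Cross-validated search over a tiny hyperparameter grid
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [10, 50], "max_depth": [3, None]},
    cv=3,
)
grid.fit(x[:, None], y)
print(grid.best_params_)  # the best-scoring (n_estimators, max_depth) pair
```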
---
```
# Exercise: propose a support vector regression model to fit the phenomenon
from sklearn.svm import SVR
SVMreg = SVR().fit(x[:, None], y)
yfit_SVM = SVMreg.predict(xfit[:, None])
plt.figure(figsize=(10,5))
plt.scatter(x, y)
plt.plot(xfit, yfit_SVM, '-r', label = 'SVM')
plt.plot(xfit, ytrue, '-g', alpha=0.5, label = 'noise-free distribution')
plt.legend();
SVR?
```
[Sascha Spors](https://orcid.org/0000-0001-7225-9992),
Professorship Signal Theory and Digital Signal Processing,
[Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
Faculty of Computer Science and Electrical Engineering (IEF),
[University of Rostock, Germany](https://www.uni-rostock.de/en/)
# Tutorial Signals and Systems (Signal- und Systemtheorie)
Summer Semester 2021 (Bachelor Course #24015)
- lecture: https://github.com/spatialaudio/signals-and-systems-lecture
- tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
WIP...
The project is currently under heavy development while adding new material for the summer semester 2021
Feel free to contact lecturer [frank.schultz@uni-rostock.de](https://orcid.org/0000-0002-3010-0294)
## Fourier Series Right Time Shift <-> Phase Mod
```
import numpy as np
import matplotlib.pyplot as plt
def my_sinc(x): # we rather use definition sinc(x) = sin(x)/x, thus:
return np.sinc(x/np.pi)
Th_des = [1, 0.2]
om = np.linspace(-100, 100, 1000)
plt.figure(figsize=(10, 8))
plt.subplot(2,1,1)
for idx, Th in enumerate(Th_des):
A = 1/Th # such that sinc amplitude is always 1
# Fourier transform for single rect pulse
Xsinc = A*Th * my_sinc(om*Th/2)
Xsinc_phase = Xsinc*np.exp(-1j*om*Th/2)
plt.plot(om, Xsinc, 'C7', lw=1)
plt.plot(om, np.abs(Xsinc_phase), label=r'$T_h$=%1.0e s' % Th, lw=5-idx)
plt.legend()
plt.title(r'Fourier transform of single rectangular impulse with $A=1/T_h$ right-shifted by $\tau=T_h/2$')
plt.ylabel(r'magnitude $|X(\mathrm{j}\omega)|$')
plt.xlim(om[0], om[-1])
plt.grid(True)
plt.subplot(2,1,2)
for idx, Th in enumerate(Th_des):
A = 1/Th  # recompute the amplitude; otherwise the value left over from the loop above is reused
Xsinc = A*Th * my_sinc(om*Th/2)
Xsinc_phase = Xsinc*np.exp(-1j*om*Th/2)
plt.plot(om, np.angle(Xsinc_phase), label=r'$T_h$=%1.0e s' % Th, lw=5-idx)
plt.legend()
plt.xlabel(r'$\omega$ / (rad/s)')
plt.ylabel(r'phase $\angle X(\mathrm{j}\omega)$')
plt.xlim(om[0], om[-1])
plt.ylim(-4, +4)
plt.grid(True)
plt.savefig('A8A2DEE53A.pdf')
```
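The same right-shift <-> phase-modulation property holds for the DFT, where it can be checked numerically in a few lines: circularly shifting a sequence by m samples multiplies its spectrum by exp(-j 2 pi k m / N). A small sketch:

```python
import numpy as np

N, m = 64, 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

X = np.fft.fft(x)
k = np.arange(N)
# Spectrum of the right-shifted sequence x[n - m] (circular shift)
X_shifted = np.fft.fft(np.roll(x, m))

# Shift theorem: X_shifted[k] == X[k] * exp(-2j*pi*k*m/N)
print(np.allclose(X_shifted, X * np.exp(-2j * np.pi * k * m / N)))  # True
```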
## Copyright
This tutorial is provided as Open Educational Resource (OER), to be found at
https://github.com/spatialaudio/signals-and-systems-exercises
accompanying the OER lecture
https://github.com/spatialaudio/signals-and-systems-lecture.
Both are licensed under a) the Creative Commons Attribution 4.0 International
License for text and graphics and b) the MIT License for source code.
Please attribute material from the tutorial as *Frank Schultz,
Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
Computational Examples, University of Rostock* with
``main file, github URL, commit number and/or version tag, year``.
```
import os
import argparse
from keras.preprocessing.image import ImageDataGenerator
from keras import callbacks
import numpy as np
from keras import layers, models, optimizers
from keras import backend as K
from keras.utils import to_categorical
import matplotlib.pyplot as plt
from utils import combine_images
from PIL import Image
from capsulelayers import CapsuleLayer, PrimaryCap, Length, Mask
from keras.utils import multi_gpu_model
K.set_image_data_format('channels_last')
class dotdict(dict):
"""dot.notation access to dictionary attributes"""
__getattr__ = dict.get
__setattr__ = dict.__setitem__
__delattr__ = dict.__delitem__
args={
'epochs':200,
'batch_size':32,
'lr':0.001, #Initial learning rate
'lr_decay':0.9, #The value multiplied by lr at each epoch. Set a larger value for larger epochs
'lam_recon':0.392, #The coefficient for the loss of decoder
'routings':3, #Number of iterations used in the routing algorithm; should be > 0
'shift_fraction':0.2, #Fraction of pixels to shift at most in each direction.
'debug':False, #Save weights by TensorBoard
'save_dir':'./result',
'digit':1,
'gpus':2,
'train_dir':'./data/train/',
'test_dir':'./data/test/'
}
args=dotdict(args)
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
# Load Data
train_datagen = ImageDataGenerator(rescale = 1./255,
horizontal_flip=True,
rotation_range = args.shift_fraction,
zoom_range = args.shift_fraction,
width_shift_range = args.shift_fraction,
height_shift_range = args.shift_fraction)
#generator = train_datagen.flow(x, y, batch_size=batch_size)
train_set = train_datagen.flow_from_directory(args.train_dir,
target_size = (64, 64),
batch_size = args.batch_size,
class_mode = 'categorical')
test_datagen = ImageDataGenerator(rescale = 1./255)
test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
def margin_loss(y_true, y_pred):
"""
Margin loss for Eq.(4). Should also work when y_true[i, :] contains more than one `1` (not tested).
:param y_true: [None, n_classes]
:param y_pred: [None, num_capsule]
:return: a scalar loss value.
"""
L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \
0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))
return K.mean(K.sum(L, 1))
def train(model, args):
"""
Training a CapsuleNet
:param model: the CapsuleNet model
:param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`
:param args: arguments
:return: The trained model
"""
# callbacks
log = callbacks.CSVLogger(args.save_dir + '/log.csv')
tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs',
batch_size=args.batch_size, histogram_freq=int(args.debug))
checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/Model{epoch:02d}_{val_acc:.2f}.h5', monitor='val_capsnet_acc',
save_best_only=True, save_weights_only=False, verbose=1)
lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))
# compile the model
model.compile(optimizer=optimizers.Adam(lr=args.lr),
loss=[margin_loss, 'mse'],
loss_weights=[1., args.lam_recon],
metrics={'capsnet': 'accuracy'})
# Begin: Training with data augmentation ---------------------------------------------------------------------#
def train_generator(batch_size, shift_fraction=0.2):
while 1:
x_batch, y_batch = train_set.next()
yield ([x_batch, y_batch], [y_batch, x_batch])
# Training with data augmentation. If shift_fraction=0., also no augmentation.
x_test, y_test = test_set.next()
model.fit_generator(generator=train_generator(args.batch_size,args.shift_fraction),
steps_per_epoch=int(len(train_set.classes) / args.batch_size),
epochs=args.epochs,
validation_data = [[x_test, y_test], [y_test, x_test]],
callbacks=[log, tb, checkpoint, lr_decay])
# End: Training with data augmentation -----------------------------------------------------------------------#
model.save(args.save_dir + '/trained_model.h5')
print('Trained model saved to \'%s/trained_model.h5\'' % args.save_dir)
from utils import plot_log
plot_log(args.save_dir + '/log.csv', show=True)
return model
#define model
def CapsNet(input_shape, n_class, routings):
"""
A Capsule Network.
:param input_shape: data shape, 3d, [width, height, channels]
:param n_class: number of classes
:param routings: number of routing iterations
:return: Two Keras Models, the first one used for training, and the second one for evaluation.
`eval_model` can also be used for training.
"""
x = layers.Input(shape=input_shape)
# Layer 1: Just a conventional Conv2D layer
conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)
# Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]
primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding='valid')
# Layer 3: Capsule layer. Routing algorithm works here.
digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,
name='digitcaps')(primarycaps)
# Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.
# If using tensorflow, this will not be necessary. :)
out_caps = Length(name='capsnet')(digitcaps)
# Decoder network.
y = layers.Input(shape=(n_class,))
masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. For training
masked = Mask()(digitcaps) # Mask using the capsule with maximal length. For prediction
# Shared Decoder model in training and prediction
decoder = models.Sequential(name='decoder')
decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))
decoder.add(layers.Dense(1024, activation='relu'))
decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))
decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))
# Models for training and evaluation (prediction)
train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])
eval_model = models.Model(x, [out_caps, decoder(masked)])
# manipulate model
noise = layers.Input(shape=(n_class, 16))
noised_digitcaps = layers.Add()([digitcaps, noise])
masked_noised_y = Mask()([noised_digitcaps, y])
manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))
return train_model, eval_model, manipulate_model
model, eval_model, manipulate_model = CapsNet(input_shape=train_set.image_shape,
n_class=train_set.num_classes,
routings=args.routings)
model.summary()
# train the model
train(model=model, args=args)
final_test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
#Reconstruct the image
def manipulate_latent(model):
print('-'*30 + 'Begin: manipulate' + '-'*30)
x_test, y_test = final_test_set.next()
index = np.argmax(y_test, 1) == args.digit
number = np.random.randint(low=0, high=sum(index) - 1)
x, y = x_test[index][number], y_test[index][number]
x, y = np.expand_dims(x, 0), np.expand_dims(y, 0)
noise = np.zeros([1, 5, 16])
x_recons = []
for dim in range(16):
for r in [-0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15]:
tmp = np.copy(noise)
tmp[:,:,dim] = r
x_recon = model.predict([x, y, tmp])
x_recons.append(x_recon)
x_recons = np.concatenate(x_recons)
img = combine_images(x_recons, height=16)
image = img*255
Image.fromarray(image.astype(np.uint8)).save(args.save_dir + '/manipulate-%d.png' % args.digit)
print('manipulated result saved to %s/manipulate-%d.png' % (args.save_dir, args.digit))
print('-' * 30 + 'End: manipulate' + '-' * 30)
#function to test
final_test_set = test_datagen.flow_from_directory(args.test_dir,
target_size = (64, 64),
shuffle=False,
batch_size = 3753, #args.batch_size,
class_mode = 'categorical')
def test(model):
x_test, y_test = final_test_set.next()
y_pred, x_recon = model.predict(x_test, batch_size=100)
print('-'*30 + 'Begin: test' + '-'*30)
print('Test acc:', np.sum(np.argmax(y_pred, 1) == np.argmax(y_test, 1))/y_test.shape[0])
img = combine_images(np.concatenate([x_test[:50],x_recon[:50]]))
image = img * 255
Image.fromarray(image.astype(np.uint8)).save(args.save_dir + "/real_and_recon.png")
print()
print('Reconstructed images are saved to %s/real_and_recon.png' % args.save_dir)
print('-' * 30 + 'End: test' + '-' * 30)
plt.imshow(plt.imread(args.save_dir + "/real_and_recon.png"))
plt.show()
print('-' * 30 + 'Test Metrics' + '-' * 30)
np.savetxt("./result/capsnet_657.csv", y_pred, delimiter=",")
y_pred = np.argmax(y_pred,axis = 1)
y_actual = np.argmax(y_test, axis = 1)
classnames=[]
for classname in final_test_set.class_indices:
classnames.append(classname)
confusion_mtx = confusion_matrix(y_actual, y_pred)
print(confusion_mtx)
target_names = classnames
print(classification_report(y_actual, y_pred, target_names=target_names))
print("accuracy= ",(confusion_mtx.diagonal().sum()/confusion_mtx.sum())*100)
##Evaluation
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, classification_report
from keras.models import load_model
model.load_weights('./result/trained_model.h5')
#manipulate_latent(manipulate_model)
test(model=eval_model)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom Federated Algorithms, Part 2: Implementing Federated Averaging
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
This tutorial is the second part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TFF using the
[Federated Core (FC)](../federated_core.md), which serves as a foundation for
the [Federated Learning (FL)](../federated_learning.md) layer (`tff.learning`).
We encourage you to first read the
[first part of this series](custom_federated_algorithms_1.ipynb), which
introduces some of the key concepts and programming abstractions used here.
This second part of the series uses the mechanisms introduced in the first part
to implement a simple version of federated training and evaluation algorithms.
We encourage you to review the
[image classification](federated_learning_for_image_classification.ipynb) and
[text generation](federated_learning_for_text_generation.ipynb) tutorials for a
higher-level and more gentle introduction to TFF's Federated Learning APIs, as
they will help you put the concepts we describe here in context.
## Before we start
Before we start, try to run the following "Hello World" example to make sure
your environment is correctly setup. If it doesn't work, please refer to the
[Installation](../install.md) guide for instructions.
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# TODO(b/148678573,b/148685415): must use the ReferenceExecutor because it
# supports unbounded references and tff.sequence_* intrinsics.
tff.framework.set_default_context(tff.test.ReferenceExecutor())
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
```
## Implementing Federated Averaging
As in
[Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb),
we are going to use the MNIST example, but since this is intended as a low-level
tutorial, we are going to bypass the Keras API and `tff.simulation`, write raw
model code, and construct a federated data set from scratch.
### Preparing federated data sets
For the sake of a demonstration, we're going to simulate a scenario in which we
have data from 10 users, and each of the users contributes knowledge how to
recognize a different digit. This is about as
non-[i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables)
as it gets.
First, let's load the standard MNIST data:
```
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
[(x.dtype, x.shape) for x in mnist_train]
```
The data comes as Numpy arrays, one with images and another with digit labels, both
with the first dimension going over the individual examples. Let's write a
helper function that formats it in a way compatible with how we feed federated
sequences into TFF computations, i.e., as a list of lists - the outer list
ranging over the users (digits), the inner ones ranging over batches of data in
each client's sequence. As is customary, we will structure each batch as a pair
of tensors named `x` and `y`, each with the leading batch dimension. While at
it, we'll also flatten each image into a 784-element vector and rescale the
pixels in it into the `0..1` range, so that we don't have to clutter the model
logic with data conversions.
```
NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
def get_data_for_digit(source, digit):
  output_sequence = []
  all_samples = [i for i, d in enumerate(source[1]) if d == digit]
  for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):
    batch_samples = all_samples[i:i + BATCH_SIZE]
    output_sequence.append({
        'x':
            np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
                     dtype=np.float32),
        'y':
            np.array([source[1][i] for i in batch_samples], dtype=np.int32)
    })
  return output_sequence
federated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]
federated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]
```
As a quick sanity check, let's look at the `y` tensor in the last batch of data
contributed by the client at index `5` (the one corresponding to the digit `5`).
```
federated_train_data[5][-1]['y']
```
Just to be sure, let's also look at the image corresponding to the last element of that batch.
```
from matplotlib import pyplot as plt
plt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')
plt.grid(False)
plt.show()
```
### On combining TensorFlow and TFF
In this tutorial, for compactness we immediately decorate functions that
introduce TensorFlow logic with `tff.tf_computation`. However, for more complex
logic, this is not the pattern we recommend. Debugging TensorFlow can already be
a challenge, and debugging TensorFlow after it has been fully serialized and
then re-imported necessarily loses some metadata and limits interactivity,
making debugging even more of a challenge.
Therefore, **we strongly recommend writing complex TF logic as stand-alone
Python functions** (that is, without `tff.tf_computation` decoration). This way
the TensorFlow logic can be developed and tested using TF best practices and
tools (like eager mode), before serializing the computation for TFF (e.g., by invoking `tff.tf_computation` with a Python function as the argument).
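The workflow can be sketched in plain Python: develop and test the logic as an ordinary function first, then hand it to the serializer once it works. The `serialize` decorator below is a hypothetical stand-in for `tff.tf_computation`, used only to illustrate the shape of the pattern:

```python
# Hypothetical stand-in for `tff.tf_computation`: the real decorator traces
# and serializes the function; here we only mark it as "wrapped".
def serialize(fn):
    fn.serialized = True
    return fn

# Plain Python logic, developed and tested first (eager, debuggable).
def rescale(pixels):
    return [p / 255.0 for p in pixels]

assert rescale([255, 0]) == [1.0, 0.0]  # test while it is still plain Python

# Only once it works, wrap it for the framework.
rescale_comp = serialize(rescale)
```

The point is the ordering: any bug in `rescale` is found while it is still an ordinary, debuggable function, before serialization strips away interactivity.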
### Defining a loss function
Now that we have the data, let's define a loss function that we can use for
training. First, let's define the type of input as a TFF named tuple. Since the
size of data batches may vary, we set the batch dimension to `None` to indicate
that the size of this dimension is unknown.
```
BATCH_SPEC = collections.OrderedDict(
    x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
    y=tf.TensorSpec(shape=[None], dtype=tf.int32))
BATCH_TYPE = tff.to_type(BATCH_SPEC)
str(BATCH_TYPE)
```
You may be wondering why we can't just define an ordinary Python type. Recall
the discussion in [part 1](custom_federated_algorithms_1.ipynb), where we
explained that while we can express the logic of TFF computations using Python,
under the hood TFF computations *are not* Python. The symbol `BATCH_TYPE`
defined above represents an abstract TFF type specification. It is important to
distinguish this *abstract* TFF type from concrete Python *representation*
types, e.g., containers such as `dict` or `collections.namedtuple` that may be
used to represent the TFF type in the body of a Python function. Unlike Python,
TFF has a single abstract type constructor `tff.StructType` for tuple-like
containers, with elements that can be individually named or left unnamed. This
type is also used to model formal parameters of computations, as TFF
computations can formally only declare one parameter and one result - you will
see examples of this shortly.
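As a rough stdlib analogy (not TFF API), the single-parameter convention resembles packaging multiple logical arguments into one named container:

```python
from collections import namedtuple

# TFF computations formally declare one parameter; multiple logical
# arguments travel inside it as one (possibly named) tuple, much like this:
Args = namedtuple('Args', ['model', 'batch'])

def computation(arg):
    # one formal parameter, with named elements inside
    return f"{arg.model}:{arg.batch}"

result = computation(Args(model='m0', batch='b7'))
```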
Let's now define the TFF type of model parameters, again as a TFF named tuple of
*weights* and *bias*.
```
MODEL_SPEC = collections.OrderedDict(
    weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),
    bias=tf.TensorSpec(shape=[10], dtype=tf.float32))
MODEL_TYPE = tff.to_type(MODEL_SPEC)
print(MODEL_TYPE)
```
With those definitions in place, we can now define the loss for a given model over a single batch. Note the usage of the `@tf.function` decorator inside the `@tff.tf_computation` decorator. This allows us to write TF using Python-like semantics even though we're inside a `tf.Graph` context created by the `tff.tf_computation` decorator.
```
# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can
# be later called from within another tf.function. Necessary because a
# @tf.function decorated method cannot invoke a @tff.tf_computation.
@tf.function
def forward_pass(model, batch):
  predicted_y = tf.nn.softmax(
      tf.matmul(batch['x'], model['weights']) + model['bias'])
  return -tf.reduce_mean(
      tf.reduce_sum(
          tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))

@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
  return forward_pass(model, batch)
```
As expected, computation `batch_loss` returns `float32` loss given the model and
a single data batch. Note how the `MODEL_TYPE` and `BATCH_TYPE` have been lumped
together into a 2-tuple of formal parameters; you can recognize the type of
`batch_loss` as `(<MODEL_TYPE,BATCH_TYPE> -> float32)`.
```
str(batch_loss.type_signature)
```
As a sanity check, let's construct an initial model filled with zeros and
compute the loss over the batch of data we visualized above.
```
initial_model = collections.OrderedDict(
    weights=np.zeros([784, 10], dtype=np.float32),
    bias=np.zeros([10], dtype=np.float32))
sample_batch = federated_train_data[5][-1]
batch_loss(initial_model, sample_batch)
```
Note that we feed the TFF computation with the initial model defined as a
`dict`, even though the body of the Python function that defines it consumes
the model parameters as `model['weights']` and `model['bias']`. The arguments of
the call to `batch_loss` aren't simply passed to the body of that function.
What happens when we invoke `batch_loss`?
The Python body of `batch_loss` has already been traced and serialized in the above cell where it was defined. TFF acts as the caller to `batch_loss`
at the computation definition time, and as the target of invocation at the time
`batch_loss` is invoked. In both roles, TFF serves as the bridge between TFF's
abstract type system and Python representation types. At the invocation time,
TFF will accept most standard Python container types (`dict`, `list`, `tuple`,
`collections.namedtuple`, etc.) as concrete representations of abstract TFF
tuples. Also, although as noted above, TFF computations formally only accept a
single parameter, you can use the familiar Python call syntax with positional
and/or keyword arguments when the type of the parameter is a tuple - it
works as expected.
### Gradient descent on a single batch
Now, let's define a computation that uses this loss function to perform a single
step of gradient descent. Note how in defining this function, we use
`batch_loss` as a subcomponent. You can invoke a computation constructed with
`tff.tf_computation` inside the body of another computation, though typically
this is not necessary - as noted above, because serialization loses some
debugging information, it is often preferable for more complex computations to
write and test all the TensorFlow code without the `tff.tf_computation` decorator.
```
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
  # Define a group of model variables and set them to `initial_model`. Must
  # be defined outside the @tf.function.
  model_vars = collections.OrderedDict([
      (name, tf.Variable(name=name, initial_value=value))
      for name, value in initial_model.items()
  ])
  optimizer = tf.keras.optimizers.SGD(learning_rate)

  @tf.function
  def _train_on_batch(model_vars, batch):
    # Perform one step of gradient descent using loss from `batch_loss`.
    with tf.GradientTape() as tape:
      loss = forward_pass(model_vars, batch)
    grads = tape.gradient(loss, model_vars)
    optimizer.apply_gradients(
        zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))
    return model_vars

  return _train_on_batch(model_vars, batch)

str(batch_train.type_signature)
```
When you invoke a Python function decorated with `tff.tf_computation` within the
body of another such function, the logic of the inner TFF computation is
embedded (essentially, inlined) in the logic of the outer one. As noted above,
if you are writing both computations, it is likely preferable to make the inner
function (`batch_loss` in this case) a regular Python or `tf.function` rather
than a `tff.tf_computation`. However, here we illustrate that calling one
`tff.tf_computation` inside another basically works as expected. This may be
necessary if, for example, you do not have the Python code defining
`batch_loss`, but only its serialized TFF representation.
Now, let's apply this function a few times to the initial model to see whether
the loss decreases.
```
model = initial_model
losses = []
for _ in range(5):
  model = batch_train(model, sample_batch, 0.1)
  losses.append(batch_loss(model, sample_batch))
losses
```
### Gradient descent on a sequence of local data
Now, since `batch_train` appears to work, let's write a similar training
function `local_train` that consumes the entire sequence of all batches from one
user instead of just a single batch. The new computation will now need to
consume `tff.SequenceType(BATCH_TYPE)` instead of `BATCH_TYPE`.
```
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)

@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):

  # Mapping function to apply to each batch.
  @tff.federated_computation(MODEL_TYPE, BATCH_TYPE)
  def batch_fn(model, batch):
    return batch_train(model, batch, learning_rate)

  return tff.sequence_reduce(all_batches, initial_model, batch_fn)

str(local_train.type_signature)
```
There are quite a few details buried in this short section of code; let's go
over them one by one.
First, while we could have implemented this logic entirely in TensorFlow,
relying on `tf.data.Dataset.reduce` to process the sequence similarly to how
we've done it earlier, we've opted this time to express the logic in the glue
language, as a `tff.federated_computation`. We've used the federated operator
`tff.sequence_reduce` to perform the reduction.
The operator `tff.sequence_reduce` is used similarly to
`tf.data.Dataset.reduce`. You can think of it as essentially the same as
`tf.data.Dataset.reduce`, but for use inside federated computations, which as
you may remember, cannot contain TensorFlow code. It is a template operator with
a formal parameter 3-tuple that consists of a *sequence* of `T`-typed elements,
the initial state of the reduction (we'll refer to it abstractly as *zero*) of
some type `U`, and the *reduction operator* of type `(<U,T> -> U)` that alters the
state of the reduction by processing a single element. The result is the final
state of the reduction, after processing all elements in a sequential order. In
our example, the state of the reduction is the model trained on a prefix of the
data, and the elements are data batches.
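The same `(<U,T> -> U)` shape appears in `functools.reduce`, which makes a handy mental model (plain Python here, not TFF):

```python
from functools import reduce

# zero of type U (a running total), elements of type T (batches of numbers),
# and a reduction operator (U, T) -> U applied to each element in order.
batches = [[1, 2], [3, 4], [5]]
total = reduce(lambda state, batch: state + sum(batch), batches, 0)
```

Just as with `tff.sequence_reduce`, the result is the final state after the operator has consumed every element in sequence.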
Second, note that we have again used one computation (`batch_train`) as a
component within another (`local_train`), but not directly. We can't use it as a
reduction operator because it takes an additional parameter - the learning rate.
To resolve this, we define an embedded federated computation `batch_fn` that
binds to the `local_train`'s parameter `learning_rate` in its body. It is
allowed for a child computation defined this way to capture a formal parameter
of its parent as long as the child computation is not invoked outside the body
of its parent. You can think of this pattern as an equivalent of
`functools.partial` in Python.
The practical implication of capturing `learning_rate` this way is, of course,
that the same learning rate value is used across all batches.
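The `functools.partial` analogy can be made concrete with toy numbers (the update rule below is purely illustrative, not the real `batch_train`):

```python
from functools import partial, reduce

def toy_batch_train(model, batch, learning_rate):
    # Illustrative stand-in: nudge a scalar "model" by the batch mean.
    return model - learning_rate * (sum(batch) / len(batch))

# Bind the learning rate once, like `batch_fn` capturing `learning_rate`.
batch_fn = partial(toy_batch_train, learning_rate=0.1)

batches = [[1.0, 3.0], [2.0, 2.0]]
final_model = reduce(batch_fn, batches, 10.0)
```

Because the rate is bound once, every batch in the reduction sees the same `learning_rate=0.1`, mirroring the captured parameter in `local_train`.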
Now, let's try the newly defined local training function on the entire sequence
of data from the same user who contributed the sample batch (digit `5`).
```
locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])
```
Did it work? To answer this question, we need to implement evaluation.
### Local evaluation
Here's one way to implement local evaluation by adding up the losses across all data
batches (we could have just as well computed the average; we'll leave it as an
exercise for the reader).
```
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
  # TODO(b/120157713): Replace with `tff.sequence_average()` once implemented.
  return tff.sequence_sum(
      tff.sequence_map(
          tff.federated_computation(lambda b: batch_loss(model, b), BATCH_TYPE),
          all_batches))

str(local_eval.type_signature)
```
Again, there are a few new elements illustrated by this code; let's go over them
one by one.
First, we have used two new federated operators for processing sequences:
`tff.sequence_map` that takes a *mapping function* `T->U` and a *sequence* of
`T`, and emits a sequence of `U` obtained by applying the mapping function
pointwise, and `tff.sequence_sum` that just adds all the elements. Here, we map
each data batch to a loss value, and then add the resulting loss values to
compute the total loss.
Note that we could have again used `tff.sequence_reduce`, but this wouldn't be
the best choice - the reduction process is, by definition, sequential, whereas
the mapping and sum can be computed in parallel. When given a choice, it's best
to stick with operators that don't constrain implementation choices, so that
when our TFF computation is compiled in the future to be deployed to a specific
environment, one can take full advantage of all potential opportunities for a
faster, more scalable, more resource-efficient execution.
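The difference can be seen even in plain Python: the map step below is order-independent and could be computed in parallel, while a reduce is inherently sequential (toy numbers, not TFF):

```python
batches = [[2.0, 4.0], [6.0]]

# Map each batch to a per-batch value independently (parallelizable)...
per_batch = [sum(b) for b in batches]

# ...then combine with an order-insensitive sum.
total = sum(per_batch)
```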
Second, note that just as in `local_train`, the component function we need
(`batch_loss`) takes more parameters than what the federated operator
(`tff.sequence_map`) expects, so we again define a partial, this time inline by
directly wrapping a `lambda` as a `tff.federated_computation`. Using wrappers
inline with a function as an argument is the recommended way to use
`tff.tf_computation` to embed TensorFlow logic in TFF.
Now, let's see whether our training worked.
```
print('initial_model loss =', local_eval(initial_model,
federated_train_data[5]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[5]))
```
Indeed, the loss decreased. But what happens if we evaluated it on another
user's data?
```
print('initial_model loss =', local_eval(initial_model,
federated_train_data[0]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[0]))
```
As expected, things got worse. The model was trained to recognize `5`, and has
never seen a `0`. This brings the question - how did the local training impact
the quality of the model from the global perspective?
### Federated evaluation
This is the point in our journey where we finally circle back to federated types
and federated computations - the topic that we started with. Here's a pair of
TFF types definitions for the model that originates at the server, and the data
that remains on the clients.
```
SERVER_MODEL_TYPE = tff.FederatedType(MODEL_TYPE, tff.SERVER)
CLIENT_DATA_TYPE = tff.FederatedType(LOCAL_DATA_TYPE, tff.CLIENTS)
```
With all the definitions introduced so far, expressing federated evaluation in
TFF is a one-liner - we distribute the model to clients, let each client invoke
local evaluation on its local portion of data, and then average out the loss.
Here's one way to write this.
```
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
  return tff.federated_mean(
      tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
```
We've already seen examples of `tff.federated_mean` and `tff.federated_map`
in simpler scenarios, and at the intuitive level, they work as expected, but
there's more in this section of code than meets the eye, so let's go over it
carefully.
First, let's break down the *let each client invoke local evaluation on its
local portion of data* part. As you may recall from the preceding sections,
`local_eval` has a type signature of the form `(<MODEL_TYPE, LOCAL_DATA_TYPE> ->
float32)`.
The federated operator `tff.federated_map` is a template that accepts as a
parameter a 2-tuple that consists of the *mapping function* of some type `T->U`
and a federated value of type `{T}@CLIENTS` (i.e., with member constituents of
the same type as the parameter of the mapping function), and returns a result of
type `{U}@CLIENTS`.
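In plain-Python terms (again just an analogy, not the TFF runtime), `tff.federated_map` behaves like mapping a function pointwise across per-client values:

```python
# {T}@CLIENTS modeled as a list with one member constituent per client.
client_values = [1.0, 2.0, 3.0]

# Applying a T -> U mapping function pointwise yields {U}@CLIENTS.
mapped = [v * 10 for v in client_values]
```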
Since we're feeding `local_eval` as a mapping function to apply on a per-client
basis, the second argument should be of a federated type `{<MODEL_TYPE,
LOCAL_DATA_TYPE>}@CLIENTS`, i.e., in the nomenclature of the preceding sections,
it should be a federated tuple. Each client should hold a full set of arguments
for `local_eval` as a member constituent. Instead, we're feeding it a 2-element
Python `list`. What's happening here?
Indeed, this is an example of an *implicit type cast* in TFF, similar to
implicit type casts you may have encountered elsewhere, e.g., when you feed an
`int` to a function that accepts a `float`. Implicit casting is used sparingly at
this point, but we plan to make it more pervasive in TFF as a way to minimize
boilerplate.
The implicit cast that's applied in this case is the equivalence between
federated tuples of the form `{<X,Y>}@Z`, and tuples of federated values
`<{X}@Z,{Y}@Z>`. While formally, these two are different type signatures,
looking at it from the programmer's perspective, each device in `Z` holds two
units of data `X` and `Y`. What happens here is not unlike `zip` in Python, and
indeed, we offer an operator `tff.federated_zip` that allows you to perform such
conversions explicitly. When `tff.federated_map` encounters a tuple as its
second argument, it simply invokes `tff.federated_zip` for you.
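Python's built-in `zip` performs the analogous conversion from a tuple of sequences to a sequence of tuples, which is exactly the shape change `tff.federated_zip` makes:

```python
# A broadcast model (one copy per client) and per-client datasets:
models = ['model'] * 3                  # like {MODEL_TYPE}@CLIENTS
datasets = ['data0', 'data1', 'data2']  # like {LOCAL_DATA_TYPE}@CLIENTS

# <{X}@Z, {Y}@Z> -> {<X, Y>}@Z: each client now holds its full argument pair.
per_client_args = list(zip(models, datasets))
```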
Given the above, you should now be able to recognize the expression
`tff.federated_broadcast(model)` as representing a value of TFF type
`{MODEL_TYPE}@CLIENTS`, and `data` as a value of TFF type
`{LOCAL_DATA_TYPE}@CLIENTS` (or simply `CLIENT_DATA_TYPE`), the two getting
fused together through an implicit `tff.federated_zip` to form the second
argument to `tff.federated_map`.
The operator `tff.federated_broadcast`, as you'd expect, simply transfers data
from the server to the clients.
Now, let's see how our local training affected the average loss in the system.
```
print('initial_model loss =', federated_eval(initial_model,
federated_train_data))
print('locally_trained_model loss =',
federated_eval(locally_trained_model, federated_train_data))
```
Indeed, as expected, the loss has increased. In order to improve the model for
all users, we'll need to train it on everyone's data.
### Federated training
The simplest way to implement federated training is to locally train, and then
average the models. This uses the same building blocks and patterns we've already
discussed, as you can see below.
```
SERVER_FLOAT_TYPE = tff.FederatedType(tf.float32, tff.SERVER)
@tff.federated_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,
                           CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
  return tff.federated_mean(
      tff.federated_map(local_train, [
          tff.federated_broadcast(model),
          tff.federated_broadcast(learning_rate), data
      ]))
```
Note that in the full-featured implementation of Federated Averaging provided by
`tff.learning`, rather than averaging the models, we prefer to average model
deltas, for a number of reasons, e.g., the ability to clip the update norms,
to apply compression, etc.
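A hedged NumPy sketch of the idea (names and the clip threshold are illustrative, not the `tff.learning` implementation): average the client *deltas*, optionally clipping their norms, then apply the mean delta to the server model.

```python
import numpy as np

server_model = np.zeros(4)
client_models = [np.array([0.5, 0.0, 0.0, 0.0]),
                 np.array([0.0, 3.0, 0.0, 0.0])]

clip_norm = 1.0  # illustrative threshold
deltas = [m - server_model for m in client_models]
clipped = [d * min(1.0, clip_norm / np.linalg.norm(d)) for d in deltas]

# Apply the averaged (clipped) delta instead of averaging raw models.
new_model = server_model + np.mean(clipped, axis=0)
```

Working in delta space is what makes per-client clipping (and update compression) straightforward to express.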
Let's see whether the training works by running a few rounds of training and
comparing the average loss before and after.
```
model = initial_model
learning_rate = 0.1
for round_num in range(5):
  model = federated_train(model, learning_rate, federated_train_data)
  learning_rate = learning_rate * 0.9
  loss = federated_eval(model, federated_train_data)
  print('round {}, loss={}'.format(round_num, loss))
```
For completeness, let's now also run on the test data to confirm that our model
generalizes well.
```
print('initial_model test loss =',
federated_eval(initial_model, federated_test_data))
print('trained_model test loss =', federated_eval(model, federated_test_data))
```
This concludes our tutorial.
Of course, our simplified example doesn't reflect a number of things you'd need
to do in a more realistic scenario - for example, we haven't computed metrics
other than loss. We encourage you to study
[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)
of federated averaging in `tff.learning` as a more complete example, and as a
way to demonstrate some of the coding practices we'd like to encourage.
```
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
from functools import lru_cache
from numba import jit
import community
import warnings; warnings.simplefilter('ignore')
@jit(nopython=True)
def generator(A):
    # Pad the adjacency matrix with a zero border so neighbor lookups
    # never run off the edge.
    B = np.zeros((len(A)+2, len(A)+2), np.int_)
    B[1:-1, 1:-1] = A
    for i in range(len(B)):
        for j in range(len(B)):
            count = 0
            count += B[i][j]
            if i-1 > 0:
                count += B[i-1][j]
            if i+1 < len(B):
                count += B[i+1][j]
            if j-1 > 0:
                count += B[i][j-1]
            if j+1 < len(B):
                count += B[i][j+1]
            if count == 0:
                B[i][j] = 1
            if count > 4:
                B[i][j] = 1
            if count <= 4 and count > 0:
                B[i][j] = 0
    # Symmetrize: keep the strict upper triangle and mirror it below.
    Bnext = np.triu(B, 1) + B.T - np.diag(np.diag(B))
    for i in range(len(Bnext)):
        for j in range(len(Bnext)):
            if Bnext[i][j] > 1:
                Bnext[i][j] = 1
    return Bnext

def generator2(A_, number):
    # Apply the update rule `number` times.
    time = 0
    while time < number:
        A_ = generator(A_)
        time += 1
    return A_
g1 = nx.erdos_renyi_graph(3, 0.8)
A1 = nx.to_numpy_matrix(g1)
print(A1)
nx.draw(g1, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')
#plt.savefig('g1_0.png')
plt.show()
gen_A1 = generator2(A1, 100)
gen_g1 = nx.from_numpy_matrix(gen_A1)
nx.draw(gen_g1, node_size=10, alpha=0.5)
#plt.savefig('g1_100.png')
plt.show()
partition = community.best_partition(gen_g1)
pos = nx.spring_layout(gen_g1)
plt.figure(figsize=(8, 8))
plt.axis('off')
nx.draw_networkx_nodes(gen_g1, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))
nx.draw_networkx_edges(gen_g1, pos, alpha=0.3)
#plt.savefig('g1_100_community.png')
plt.show()
g2 = nx.erdos_renyi_graph(4, 0.8)
A2 = nx.to_numpy_matrix(g2)
print(A2)
nx.draw(g2, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')
#plt.savefig('g2_0.png')
plt.show()
gen_A2 = generator2(A2, 100)
gen_g2 = nx.from_numpy_matrix(gen_A2)
nx.draw(gen_g2, node_size=10, alpha=0.5)
#plt.savefig('g2_100.png')
plt.show()
partition = community.best_partition(gen_g2)
pos = nx.spring_layout(gen_g2)
plt.figure(figsize=(8, 8))
plt.axis('off')
nx.draw_networkx_nodes(gen_g2, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))
nx.draw_networkx_edges(gen_g2, pos, alpha=0.3)
#plt.savefig('g2_100_community.png')
plt.show()
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import sys
os.chdir(sys.path[0]+"/../data")
import urllib.request
from bs4 import BeautifulSoup
import re
from tqdm import tqdm
categories = [
"100 metres, Men",
"200 metres, Men",
"400 metres, Men",
"800 metres, Men",
"1,500 metres, Men",
"5,000 metres, Men",
"10,000 metres, Men",
"Marathon, Men",
"110 metres Hurdles, Men",
"400 metres Hurdles, Men",
"3,000 metres Steeplechase, Men",
"4 x 100 metres Relay, Men",
"4 x 400 metres Relay, Men",
"20 kilometres Walk, Men",
"50 kilometres Walk, Men",
"100 metres, Women",
"200 metres, Women",
"400 metres, Women",
"800 metres, Women",
"1,500 metres, Women",
"5,000 metres, Women",
"10,000 metres, Women",
"Marathon, Women",
"100 metres Hurdles, Women",
"400 metres Hurdles, Women",
"3,000 metres Steeplechase, Women",
"4 x 100 metres Relay, Women",
"4 x 400 metres Relay, Women",
"20 kilometres Walk, Women",
]
data = []
for edition in tqdm(range(1, 62)):  # Data from 1896 to 2020
    edition_url = f"http://www.olympedia.org/editions/{edition}/sports/ATH"
    response = urllib.request.urlopen(edition_url)
    edition_soup = BeautifulSoup(response, 'html.parser')
    title = edition_soup.find_all("h1")[0]
    if "Winter" in title.text:
        continue  # Skip winter olympics
    try:
        edition_year = int(re.findall(r"\d+", title.text)[0])
    except IndexError:
        continue  # Sometimes the page seems to not exist?
    for category in categories:
        try:
            elem = edition_soup.find_all("a", string=category)[0]
        except IndexError:
            continue
        href = elem.get('href')
        event_url = "http://www.olympedia.org" + href
        response = urllib.request.urlopen(event_url)
        soup = BeautifulSoup(response, 'html.parser')
        table = soup.find_all("table", {"class": "table table-striped"})[0]
        df = pd.read_html(str(table))[0]
        try:
            # gold_medal_time = float(re.findall(r"[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)",
            #                                    df['Final'][0].split()[0])[0][0])
            final_time_raw = df['Final'][0].split()[0]
            h, m, s = re.findall(r"^(?:(\d{0,2})-)?(?:(\d{0,2}):)?(\d{0,2}\.?\d*)",
                                 final_time_raw)[0]
            h, m, s = (int(h) if len(h) > 0 else 0,
                       int(m) if len(m) > 0 else 0,
                       float(s))
            gold_medal_time = h*60*60 + m*60 + s
        except KeyError:
            continue
        data.append({
            "category": category,
            "year": edition_year,
            "time": gold_medal_time,
            "reference": event_url,
        })

df = pd.DataFrame(data)
df.to_csv('olympics_athletic_gold_medal_times.csv')
df
```
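The time-parsing regex in the loop above handles the three result formats that appear on the event pages: plain seconds (`9.63`), minutes:seconds (`3:35.0`), and hours-minutes:seconds (`2-24:52.3`). A self-contained check (the sample strings are illustrative):

```python
import re

PATTERN = r"^(?:(\d{0,2})-)?(?:(\d{0,2}):)?(\d{0,2}\.?\d*)"

def to_seconds(raw):
    # Missing hour/minute groups come back as '', so default them to 0.
    h, m, s = re.findall(PATTERN, raw)[0]
    return (int(h) if h else 0) * 3600 + (int(m) if m else 0) * 60 + float(s)

sprint = to_seconds("9.63")         # seconds only
middle = to_seconds("3:35.0")       # minutes:seconds
marathon = to_seconds("2-24:52.3")  # hours-minutes:seconds
```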
```
# Jovian Commit Essentials
# Please retain and execute this cell without modifying the contents for `jovian.commit` to work
!pip install jovian --upgrade -q
import jovian
jovian.set_project('pandas-practice-assignment')
jovian.set_colab_id('1EMzM1GAuekn6b3mjbgjC83UH-2XgQHAe')
```
# Assignment 3 - Pandas Data Analysis Practice
*This assignment is a part of the course ["Data Analysis with Python: Zero to Pandas"](https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas)*
In this assignment, you'll get to practice some of the concepts and skills covered this tutorial: https://jovian.ml/aakashns/python-pandas-data-analysis
As you go through this notebook, you will find a **???** in certain places. To complete this assignment, you must replace all the **???** with appropriate values, expressions or statements to ensure that the notebook runs properly end-to-end.
Some things to keep in mind:
* Make sure to run all the code cells, otherwise you may get errors like `NameError` for undefined variables.
* Do not change variable names, delete cells or disturb other existing code. It may cause problems during evaluation.
* In some cases, you may need to add some code cells or new statements before or after the line of code containing the **???**.
* Since you'll be using a temporary online service for code execution, save your work by running `jovian.commit` at regular intervals.
* Questions marked **(Optional)** will not be considered for evaluation, and can be skipped. They are for your learning.
You can make submissions on this page: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice
If you are stuck, you can ask for help on the community forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/3 . You can get help with errors or ask for hints, describe your approach in simple words, link to documentation, but **please don't ask for or share the full working answer code** on the forum.
## How to run the code and save your work
The recommended way to run this notebook is to click the "Run" button at the top of this page, and select "Run on Binder". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks.
Before starting the assignment, let's save a snapshot of the assignment to your Jovian.ml profile, so that you can access it later, and continue your work.
```
import jovian
jovian.commit(project='pandas-practice-assignment', environment=None)
# Run the next line to install Pandas
!pip install pandas --upgrade
import pandas as pd
```
In this assignment, we're going to analyze and operate on data from a CSV file. Let's begin by downloading the CSV file.
```
from urllib.request import urlretrieve
urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/countries.csv',
'countries.csv')
```
Let's load the data from the CSV file into a Pandas data frame.
```
countries_df = pd.read_csv('countries.csv')
countries_df
```
**Q: How many countries does the dataframe contain?**
Hint: Use the `.shape` method.
```
num_countries = countries_df.shape[0]
print('There are {} countries in the dataset'.format(num_countries))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Retrieve a list of continents from the dataframe.**
*Hint: Use the `.unique` method of a series.*
```
continents = countries_df["continent"].unique()
continents
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: What is the total population of all the countries listed in this dataset?**
```
total_population = countries_df["population"].sum()
print('The total population is {}.'.format(int(total_population)))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: (Optional) What is the overall life expectancy across the world?**
*Hint: You'll need to take a weighted average of life expectancy using populations as weights.*
```
jovian.commit(project='pandas-practice-assignment', environment=None)
```
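One way to approach it, shown on a tiny hand-made frame so the arithmetic is checkable (the real answer would use `countries_df` and its life-expectancy column, assuming `life_expectancy` is the column name in `countries.csv`):

```python
import pandas as pd

toy = pd.DataFrame({
    'population':      [100, 300],
    'life_expectancy': [70.0, 80.0],
})

# Weight each country's life expectancy by its population.
overall = (toy['life_expectancy'] * toy['population']).sum() / toy['population'].sum()
```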
**Q: Create a dataframe containing 10 countries with the highest population.**
*Hint: Chain the `sort_values` and `head` methods.*
```
most_populous_df = countries_df.sort_values("population", ascending=False).head(10)
most_populous_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Add a new column in `countries_df` to record the overall GDP per country (product of population & per capita GDP).**
```
countries_df['gdp'] = countries_df["population"]*countries_df["gdp_per_capita"]
countries_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: (Optional) Create a dataframe containing 10 countries with the lowest GDP per capita, among the countries with population greater than 100 million.**
```
lowest_gdp_df = countries_df[countries_df["population"] > 100000000].sort_values("gdp_per_capita").head(10).reset_index()
lowest_gdp_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a data frame that counts the number of countries in each continent.**
*Hint: Use `groupby`, select the `location` column and aggregate using `count`.*
```
country_counts_df = countries_df.groupby("continent")["location"].count()
country_counts_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a data frame showing the total population of each continent.**
*Hint: Use `groupby`, select the population column and aggregate using `sum`.*
```
continent_populations_df = countries_df.groupby("continent")["population"].sum()
continent_populations_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
Let's download another CSV file containing overall Covid-19 stats for various countries, and read the data into another Pandas data frame.
```
urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/covid-countries-data.csv',
'covid-countries-data.csv')
covid_data_df = pd.read_csv('covid-countries-data.csv')
covid_data_df
```
**Q: Count the number of countries for which the `total_tests` data is missing.**
*Hint: Use the `.isna` method.*
```
total_tests_missing = covid_data_df[covid_data_df["total_tests"].isna()]["location"].count()
print("The data for total tests is missing for {} countries.".format(int(total_tests_missing)))
jovian.commit(project='pandas-practice-assignment', environment=None)
```
Let's merge the two data frames, and compute some more metrics.
**Q: Merge `countries_df` with `covid_data_df` on the `location` column.**
*Hint: Use the `.merge` method on `countries_df`.*
```
combined_df = countries_df.merge(covid_data_df, on="location")
combined_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Add columns `tests_per_million`, `cases_per_million` and `deaths_per_million` into `combined_df`.**
```
combined_df['tests_per_million'] = combined_df['total_tests'] * 1e6 / combined_df['population']
combined_df['cases_per_million'] = combined_df['total_cases'] * 1e6 / combined_df['population']
combined_df['deaths_per_million'] = combined_df['total_deaths'] * 1e6 / combined_df['population']
combined_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a dataframe with the 10 countries that have the highest number of tests per million people.**
```
highest_tests_df = combined_df.sort_values("tests_per_million",ascending=False).head(10).reset_index()
highest_tests_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a dataframe with the 10 countries that have the highest number of positive cases per million people.**
```
highest_cases_df = combined_df.sort_values("cases_per_million",ascending=False).head(10).reset_index()
highest_cases_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**Q: Create a dataframe with the 10 countries that have the highest number of deaths per million people.**
```
highest_deaths_df = combined_df.sort_values("deaths_per_million",ascending=False).head(10).reset_index()
highest_deaths_df
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**(Optional) Q: Count the number of countries that feature in both the lists of "highest number of tests per million" and "highest number of cases per million".**
```
highest_test_and_cases = highest_cases_df["location"].isin(highest_tests_df["location"]).sum()
print(f"Number of countries that appear in both the highest-cases and highest-tests lists: {highest_test_and_cases}")
jovian.commit(project='pandas-practice-assignment', environment=None)
```
**(Optional) Q: Count the number of countries that feature in both the lists "20 countries with lowest GDP per capita" and "20 countries with the lowest number of hospital beds per thousand population". Only consider countries with a population higher than 10 million while creating the lists.**
```
lowest_gdp_df = countries_df[countries_df["population"] > 10000000].sort_values("gdp_per_capita").head(20).reset_index()
lowest_gdp_df
lowest_hospital_df = countries_df[countries_df["population"] > 10000000].sort_values("hospital_beds_per_thousand").head(20).reset_index()
lowest_hospital_df
# set(lowest_gdp_df['location']).intersection(set(lowest_hospital_df['location']))
total_countries_having_low_gdp_and_bed = lowest_gdp_df['location'].isin(lowest_hospital_df['location']).sum()
print(f"Number of countries appearing in both the lowest GDP per capita and lowest hospital beds lists: {total_countries_having_low_gdp_and_bed}")
import jovian
jovian.commit(project='pandas-practice-assignment', environment=None)
```
## Submission
Congratulations on making it this far! You've reached the end of this assignment, and you just completed your first real-world data analysis problem. It's time to record one final version of your notebook for submission.
Make a submission here by filling the submission form: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice
Also make sure to help others on the forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/2
```
jovian.submit(assignment="zero-to-pandas-a3")
```
```
import numpy as np
import S_Dbw as sdbw
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs  # sklearn.datasets.samples_generator was removed in scikit-learn 0.24
from sklearn.metrics.pairwise import pairwise_distances_argmin
np.random.seed(0)
S_Dbw_result = []
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
cluster_std=[0.7,0.3,1.2]
n_clusters = len(centers)
X1, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[0])
X2, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[1])
X3, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[2])
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.08, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06']
for item, X in enumerate(list([X1, X2, X3])):
k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
k_means.fit(X)
k_means_cluster_centers = k_means.cluster_centers_
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)
S_Dbw_result.append(KS.S_Dbw_result())
ax = fig.add_subplot(1,3,item+1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], 'w',
markerfacecolor=col, marker='.')
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=6)
ax.set_title('S_Dbw: %.3f' %(S_Dbw_result[item]))
ax.set_ylim((-4,4))
ax.set_xlim((-4,4))
plt.text(-3.5, 1.8, 'cluster_std: %f' %(cluster_std[item]))
plt.savefig('./pic1.png', dpi=150)
np.random.seed(0)
S_Dbw_result = []
batch_size = 45
centers = [[[1, 1], [-1, -1], [1, -1]],
[[0.8, 0.8], [-0.8, -0.8], [0.8, -0.8]],
[[1.2, 1.2], [-1.2, -1.2], [1.2, -1.2]]]
n_clusters = len(centers)
X1, _ = make_blobs(n_samples=3000, centers=centers[0], cluster_std=0.7)
X2, _ = make_blobs(n_samples=3000, centers=centers[1], cluster_std=0.7)
X3, _ = make_blobs(n_samples=3000, centers=centers[2], cluster_std=0.7)
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.2, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06']
for item, X in enumerate(list([X1, X2, X3])):
k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
k_means.fit(X)
k_means_cluster_centers = k_means.cluster_centers_
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)
S_Dbw_result.append(KS.S_Dbw_result())
ax = fig.add_subplot(1,3,item+1)
for k, col in zip(range(n_clusters), colors):
my_members = k_means_labels == k
cluster_center = k_means_cluster_centers[k]
ax.plot(X[my_members, 0], X[my_members, 1], 'w',
markerfacecolor=col, marker='.')
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
markeredgecolor='k', markersize=6)
ax.set_title('S_Dbw: %.3f ' %(S_Dbw_result[item]))
# ax.set_xticks(())
# ax.set_yticks(())
ax.set_ylim((-4,4))
ax.set_xlim((-4,4))
ax.set_xlabel('centers: \n%s' %(centers[item]))
plt.savefig('./pic2.png', dpi=150)
```
```
from xml.etree import ElementTree
from xml.dom import minidom
from xml.etree.ElementTree import Element, SubElement, Comment, indent
def prettify(elem):
"""Return a pretty-printed XML string for the Element.
"""
rough_string = ElementTree.tostring(elem, encoding="ISO-8859-1")
reparsed = minidom.parseString(rough_string)
return reparsed.toprettyxml(indent="\t")
import numpy as np
import os
valve_ids = np.arange(2,4+1)
hyb_ids = np.arange(34,36+1)
reg_names = [f'GM{_i}' for _i in np.arange(1,3+1)]
source_folder = r'D:\Shiwei\20210706-P_Forebrain_CTP09_only'
target_drive = r'\\KOLMOGOROV\Chromatin_NAS_5'
imaging_protocol = r'Zscan_750_647_488_s60_n200'
bleach_protocol = r'Bleach_740_647_s5'
cmd_seq = Element('command_sequence')
for _vid, _hid, _rname in zip(valve_ids, hyb_ids, reg_names):
# comments
comment = Comment(f"Hyb {_hid} for {_rname}")
cmd_seq.append(comment)
# TCEP
tcep = SubElement(cmd_seq, 'valve_protocol')
tcep.text = "Flow TCEP"
# flow adaptor
adt = SubElement(cmd_seq, 'valve_protocol')
adt.text = f"Hybridize {_vid}"
# delay time
adt_incubation = SubElement(cmd_seq, 'delay')
adt_incubation.text = "60000"
# change bleach directory
change_dir = SubElement(cmd_seq, 'change_directory')
change_dir.text = os.path.join(source_folder, f"Bleach")
# wakeup
wakeup = SubElement(cmd_seq, 'wakeup')
wakeup.text = "5000"
# bleach loop
loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment="name")
loop_item = SubElement(loop, 'item', name=bleach_protocol)
loop_item.text = " "
# wash
wash = SubElement(cmd_seq, 'valve_protocol')
wash.text = "Short Wash"
# readouts
readouts = SubElement(cmd_seq, 'valve_protocol')
readouts.text = "Flow Readouts"
# delay time
adt_incubation = SubElement(cmd_seq, 'delay')
adt_incubation.text = "60000"
# change directory
change_dir = SubElement(cmd_seq, 'change_directory')
change_dir.text = os.path.join(source_folder, f"H{_hid}{_rname.upper()}")
# wakeup
wakeup = SubElement(cmd_seq, 'wakeup')
wakeup.text = "5000"
# hybridization loop
loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment="name")
loop_item = SubElement(loop, 'item', name=imaging_protocol)
loop_item.text = " "
# delay time
delay = SubElement(cmd_seq, 'delay')
delay.text = "2000"
# copy folder
copy_dir = SubElement(cmd_seq, 'copy_directory')
source_dir = SubElement(copy_dir, 'source_path')
source_dir.text = change_dir.text#cmd_seq.findall('change_directory')[-1].text
target_dir = SubElement(copy_dir, 'target_path')
target_dir.text = os.path.join(target_drive,
os.path.basename(os.path.dirname(source_dir.text)),
os.path.basename(source_dir.text))
del_source = SubElement(copy_dir, 'delete_source')
del_source.text = "True"
# empty line
indent(target_dir)
print( prettify(cmd_seq))
```
# Multi-variate Regression Metamodel with DOE based on random sampling
* The input variable space should be constructed using random sampling, not a classical factorial DOE
* A linear fit is often inadequate, but higher-order polynomial fits often lead to overfitting, i.e. the model learns spurious, flawed relationships between input and output
* R-square can be a misleading measure of fit in high-dimensional regression
* A metamodel can be constructed by selectively discovering the features (or their combinations) that matter and shrinking the other high-order terms towards zero
**[LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)) is an effective regularization technique for this purpose**
#### LASSO: Least Absolute Shrinkage and Selection Operator
$$ {\displaystyle \min _{\beta _{0},\beta }\left\{{\frac {1}{N}}\sum _{i=1}^{N}(y_{i}-\beta _{0}-x_{i}^{T}\beta )^{2}\right\}{\text{ subject to }}\sum _{j=1}^{p}|\beta _{j}|\leq t.} $$
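As a quick standalone illustration of this shrinkage (a sketch independent of the dataset built below): with a couple of informative features buried among pure-noise columns, ordinary least squares spreads weight over every column, while LASSO drives most of the noise coefficients exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)

# 30 samples: 2 informative features followed by 20 pure-noise features
X = rng.normal(size=(30, 22))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=30)

ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# OLS assigns some weight to every noise column; LASSO zeroes most of them out
print("OLS zero coefficients:  ", int(np.sum(np.abs(ols.coef_) < 1e-8)))
print("LASSO zero coefficients:", int(np.sum(np.abs(lasso.coef_) < 1e-8)))
print("LASSO weights on the two informative features:", lasso.coef_[:2].round(2))
```

The `alpha=0.1` penalty here is an arbitrary illustrative choice; the notebook below uses `LassoCV` to pick it by cross-validation instead.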
### Import libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```
### Global variables
```
N_points = 20 # Number of sample points
# start with small < 40 points and see how the regularized model makes a difference.
# Then increase the number is see the difference
noise_mult = 50 # Multiplier for the noise term
noise_mean = 10 # Mean for the Gaussian noise adder
noise_sd = 10 # Std. Dev. for the Gaussian noise adder
```
### Generate feature vectors based on random sampling
```
X=np.array(10*np.random.randn(N_points,5))
df=pd.DataFrame(X,columns=['Feature'+str(l) for l in range(1,6)])
df.head()
```
### Plot the random distributions of input features
```
for i in df.columns:
df.hist(i,bins=5,xlabelsize=15,ylabelsize=15,figsize=(8,6))
```
### Generate the output variable by analytic function + Gaussian noise (our goal will be to *'learn'* this function)
#### Let's construct the ground truth or originating function as follows:
$ y=f(x_1,x_2,x_3,x_4,x_5)= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+\psi(x)\ :\ \psi(x) = {\displaystyle f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}}$
```
df['y']=5*df['Feature1']**2+13*df['Feature2']+0.1*df['Feature3']**2*df['Feature1'] \
+2*df['Feature4']*df['Feature5']+0.1*df['Feature5']**3+0.8*df['Feature1']*df['Feature4']*df['Feature5'] \
+noise_mult*np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points)  # size added so each row gets its own noise draw
df.head()
```
### Plot single-variable scatterplots
**It is clear that no pattern can be guessed from these single-variable plots**
```
for i in df.columns:
df.plot.scatter(i,'y', edgecolors=(0,0,0),s=50,c='g',grid=True)
```
### Standard linear regression
```
from sklearn.linear_model import LinearRegression
linear_model = LinearRegression(normalize=True)
X_linear=df.drop('y',axis=1)
y_linear=df['y']
linear_model.fit(X_linear,y_linear)
y_pred_linear = linear_model.predict(X_linear)
```
### R-square of the simple linear fit is very bad and the coefficients have no meaning, i.e. we did not 'learn' the function
```
RMSE_linear = np.sqrt(np.sum(np.square(y_pred_linear-y_linear)))
print("Root-mean-square error of linear model:",RMSE_linear)
coeff_linear = pd.DataFrame(linear_model.coef_,index=df.drop('y',axis=1).columns, columns=['Linear model coefficients'])
coeff_linear
print ("R2 value of linear model:",linear_model.score(X_linear,y_linear))
plt.figure(figsize=(12,8))
plt.xlabel("Predicted value with linear fit",fontsize=20)
plt.ylabel("Actual y-values",fontsize=20)
plt.grid(1)
plt.scatter(y_pred_linear,y_linear,edgecolors=(0,0,0),lw=2,s=80)
plt.plot(y_pred_linear,y_pred_linear, 'k--', lw=2)
```
### Create polynomial features
```
from sklearn.preprocessing import PolynomialFeatures
poly1 = PolynomialFeatures(3,include_bias=False)
X_poly = poly1.fit_transform(X)
X_poly_feature_name = poly1.get_feature_names(['Feature'+str(l) for l in range(1,6)])
print("The feature vector list:\n",X_poly_feature_name)
print("\nLength of the feature vector:",len(X_poly_feature_name))
df_poly = pd.DataFrame(X_poly, columns=X_poly_feature_name)
df_poly.head()
df_poly['y']=df['y']
df_poly.head()
X_train=df_poly.drop('y',axis=1)
y_train=df_poly['y']
```
### Polynomial model without regularization and cross-validation
```
poly2 = LinearRegression(normalize=True)
model_poly=poly2.fit(X_train,y_train)
y_poly = poly2.predict(X_train)
RMSE_poly=np.sqrt(np.sum(np.square(y_poly-y_train)))
print("Root-mean-square error of simple polynomial model:",RMSE_poly)
```
### The non-regularized polynomial model (notice the coefficients are not learned properly)
**Recall that the originating function is:**
$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $
```
coeff_poly = pd.DataFrame(model_poly.coef_,index=df_poly.drop('y',axis=1).columns,
columns=['Coefficients polynomial model'])
coeff_poly
```
#### The R-square value of the simple polynomial model is perfect, but the model is flawed as shown above, i.e. it learned the wrong coefficients and overfitted to the data
```
print ("R2 value of simple polynomial model:",model_poly.score(X_train,y_train))
```
### Polynomial model with cross-validation and LASSO regularization
**This is an advanced machine learning method which prevents over-fitting by penalizing high-valued coefficients, i.e. keeping them bounded**
```
from sklearn.linear_model import LassoCV
model1 = LassoCV(cv=10,verbose=0,normalize=True,eps=0.001,n_alphas=100, tol=0.0001,max_iter=5000)
model1.fit(X_train,y_train)
y_pred1 = np.array(model1.predict(X_train))
RMSE_1=np.sqrt(np.sum(np.square(y_pred1-y_train)))
print("Root-mean-square error of Metamodel:",RMSE_1)
coeff1 = pd.DataFrame(model1.coef_,index=df_poly.drop('y',axis=1).columns, columns=['Coefficients Metamodel'])
coeff1
model1.score(X_train,y_train)
model1.alpha_
```
### Printing only the non-zero coefficients of the regularized model (notice the coefficients are learned well enough)
**Recall that the originating function is:**
$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $
```
coeff1[coeff1['Coefficients Metamodel']!=0]
plt.figure(figsize=(12,8))
plt.xlabel("Predicted value with Regularized Metamodel",fontsize=20)
plt.ylabel("Actual y-values",fontsize=20)
plt.grid(1)
plt.scatter(y_pred1,y_train,edgecolors=(0,0,0),lw=2,s=80)
plt.plot(y_pred1,y_pred1, 'k--', lw=2)
```
# Histograms of time-mean surface temperature
## Import the libraries
```
# Data analysis and viz libraries
import aeolus.plot as aplt
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
# Local modules
from calc import sfc_temp
import mypaths
from names import names
from commons import MODELS
import const_ben1_hab1 as const
from plot_func import (
KW_MAIN_TTL,
KW_SBPLT_LABEL,
figsave,
)
plt.style.use("paper.mplstyle")
```
## Load the data
Load the time-averaged data previously preprocessed.
```
THAI_cases = ["Hab1", "Hab2"]
# Load data
datasets = {} # Create an empty dictionary to store all data
# for each of the THAI cases, create a nested directory for models
for THAI_case in THAI_cases:
datasets[THAI_case] = {}
for model_key in MODELS.keys():
datasets[THAI_case][model_key] = xr.open_dataset(
mypaths.datadir / model_key / f"{THAI_case}_time_mean_{model_key}.nc"
)
bin_step = 10
bins = np.arange(170, 321, bin_step)
bin_mid = (bins[:-1] + bins[1:]) * 0.5
t_sfc_step = abs(bins - const.t_melt).max()
ncols = 1
nrows = 2
width = 0.75 * bin_step / len(MODELS)
fig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols * 8, nrows * 4.5))
iletters = aplt.subplot_label_generator()
for THAI_case, ax in zip(THAI_cases, axs.flat):
ax.set_title(f"{next(iletters)}", **KW_SBPLT_LABEL)
ax.set_xlim(bins[0], bins[-1])
ax.set_xticks(bins)
ax.grid(axis="x")
if ax.get_subplotspec().is_last_row():
ax.set_xlabel("Surface temperature [$K$]")
ax.set_title(THAI_case, **KW_MAIN_TTL)
# ax2 = ax.twiny()
# ax2.set_xlim(bins[0], bins[-1])
# ax2.axvline(const.t_melt, color="k", linestyle="--")
# ax2.set_xticks([const.t_melt])
# ax2.set_xticklabels([const.t_melt])
# ax.vlines(const.t_melt, ymin=0, ymax=38.75, color="k", linestyle="--")
# ax.vlines(const.t_melt, ymin=41.5, ymax=45, color="k", linestyle="--")
# ax.text(const.t_melt, 40, f"{const.t_melt:.2f}", ha="center", va="center", fontsize="small")
ax.imshow(
np.linspace(0, 1, 100).reshape(1, -1),
extent=[const.t_melt - t_sfc_step, const.t_melt + t_sfc_step, 0, 45],
aspect="auto",
cmap="seismic",
alpha=0.25,
)
ax.set_ylim([0, 45])
if ax.get_subplotspec().is_first_col():
ax.set_ylabel("Area fraction [%]")
for i, (model_key, model_dict) in zip([-3, -1, 1, 3], MODELS.items()):
model_names = names[model_key]
ds = datasets[THAI_case][model_key]
arr = sfc_temp(ds, model_key, const)
weights = xr.broadcast(np.cos(np.deg2rad(arr.latitude)), arr)[0].values.ravel()
# tot_pnts = arr.size
hist, _ = np.histogram(
arr.values.ravel(), bins=bins, weights=weights, density=True
)
hist *= 100 * bin_step
# hist = hist / tot_pnts * 100
# hist[hist==0] = np.nan
ax.bar(
bin_mid + (i * width / 2),
hist,
width=width,
facecolor=model_dict["color"],
edgecolor="none",
alpha=0.8,
label=model_dict["title"],
)
ax.legend(loc="upper left")
fig.tight_layout()
fig.align_labels()
figsave(
fig,
mypaths.plotdir / f"{'_'.join(THAI_cases)}__hist__t_sfc_weighted",
)
```
# Near to far field transformation
See on [github](https://github.com/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), run on [colab](https://colab.research.google.com/github/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), or just follow along with the output below.
This tutorial will show you how to solve for electromagnetic fields far away from your structure using field information stored on a nearby surface.
This technique is called a 'near field to far field transformation' and is very useful for reducing the simulation size needed for structures involving lots of empty space.
As an example, we will simulate a simple zone plate lens with a very thin domain size to get the transmitted fields measured just above the structure. Then, we'll show how to use the `Near2Far` feature from `tidy3D` to extrapolate to the fields at the focal plane above the lens.
```
# get the most recent version of tidy3d
!pip install -q --upgrade tidy3d
# make sure notebook plots inline
%matplotlib inline
# standard python imports
import numpy as np
import matplotlib.pyplot as plt
import sys
# import client side tidy3d
import tidy3d as td
from tidy3d import web
```
## Problem Setup
Below is a rough sketch of the setup of a near field to far field transformation.
The transmitted near fields are measured just above the metalens on the blue line, and the near field to far field transformation is then used to project the fields to the focal plane above at the red line.
<img src="img/n2f_diagram.png" width=800>
## Define Simulation Parameters
As always, we first need to define our simulation parameters. As a reminder, all length units in `tidy3D` are specified in microns.
```
# 1 nanometer in units of microns (for conversion)
nm = 1e-3
# free space central wavelength
wavelength = 1.0
# numerical aperture
NA = 0.8
# thickness of lens features
H = 200 * nm
# space between bottom PML and substrate (-z)
# and the space between lens structure and top pml (+z)
space_below_sub = 1.5 * wavelength
# thickness of substrate (um)
thickness_sub = wavelength / 2
# side length (xy plane) of entire metalens (um)
length_xy = 40 * wavelength
# Lens and substrate refractive index
n_TiO2 = 2.40
n_SiO2 = 1.46
# define material properties
air = td.Medium(epsilon=1.0)
SiO2 = td.Medium(epsilon=n_SiO2**2)
TiO2 = td.Medium(epsilon=n_TiO2**2)
# resolution of simulation (15 or more grids per wavelength is adequate)
grids_per_wavelength = 20
# Number of PML layers to use around edges of simulation, choose thickness of one wavelength to be safe
npml = grids_per_wavelength
```
## Process Geometry
Next we perform some conversions based on these parameters to define the simulation.
```
# grid size (um)
dl = wavelength / grids_per_wavelength
# because the wavelength is in microns, use builtin td.C_0 (um/s) to get frequency in Hz
f0 = td.C_0 / wavelength
# Define PML layers, for this application we surround the whole structure in PML to isolate the fields
pml_layers = [npml, npml, npml]
# domain size in z, note, we're just simulating a thin slice: (space -> substrate -> lens thickness -> space)
length_z = space_below_sub + thickness_sub + H + space_below_sub
# construct simulation size array
sim_size = np.array([length_xy, length_xy, length_z])
```
## Create Geometry
Now we create the ring metalens programmatically.
```
# define substrate
substrate = td.Box(
center=[0, 0, -length_z/2 + space_below_sub + thickness_sub / 2.0],
size=[td.inf, td.inf, thickness_sub],
material=SiO2)
# create a running list of structures
geometry = [substrate]
# focal length
focal_length = length_xy / 2 / NA * np.sqrt(1 - NA**2)
# location from center for edge of the n-th inner ring, see https://en.wikipedia.org/wiki/Zone_plate
def edge(n):
return np.sqrt(n * wavelength * focal_length + n**2 * wavelength**2 / 4)
# loop through the ring indices until the radius is too big, adding each ring to the geometry list
n = 1
r = edge(n)
while r < 2 * length_xy:
# progressively wider cylinders, material alternating between air and TiO2
cyl = td.Cylinder(
center = [0,0,-length_z/2 + space_below_sub + thickness_sub + H / 2],
axis='z',
radius=r,
height=H,
material=TiO2 if n % 2 == 0 else air,
name=f'cylinder_n={n}'
)
geometry.append(cyl)
n += 1
r = edge(n)
# reverse geometry list so that inner, smaller rings are added last and therefore override larger rings.
geometry.reverse()
```
## Create Source
Create a plane wave incident from below the metalens
```
# Bandwidth in Hz
fwidth = f0 / 10.0
# Gaussian source offset; the source peak is at time t = offset/fwidth
offset = 4.
# time dependence of source
gaussian = td.GaussianPulse(f0, fwidth, offset=offset, phase=0)
source = td.PlaneWave(
source_time=gaussian,
injection_axis='+z',
position=-length_z/2 + space_below_sub / 2, # halfway between PML and substrate
polarization='x')
# Simulation run time
run_time = 40 / fwidth
```
## Create Monitor
Create a near field monitor to measure the fields just above the metalens
```
# place it halfway between top of lens and PML
monitor_near = td.FreqMonitor(
center=[0., 0., -length_z/2 + space_below_sub + thickness_sub + H + space_below_sub / 2],
size=[length_xy, length_xy, 0],
freqs=[f0],
name='near_field')
```
## Create Simulation
Put everything together and define a simulation object
```
sim = td.Simulation(size=sim_size,
mesh_step=[dl, dl, dl],
structures=geometry,
sources=[source],
monitors=[monitor_near],
run_time=run_time,
pml_layers=pml_layers)
```
## Visualize Geometry
Lets take a look and make sure everything is defined properly
```
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 8))
# Time the visualization of the 2D plane
sim.viz_eps_2D(normal='x', position=0.1, ax=ax1);
sim.viz_eps_2D(normal='z', position=-length_z/2 + space_below_sub + thickness_sub + H / 2, ax=ax2);
```
## Run Simulation
Now we can run the simulation and download the results
```
# Run simulation
project = web.new_project(sim.export(), task_name='near2far_docs')
web.monitor_project(project['taskId'])
# download and load the results
print('Downloading results')
web.download_results(project['taskId'], target_folder='output')
sim.load_results('output/monitor_data.hdf5')
# print stats from the logs
with open("output/tidy3d.log") as f:
print(f.read())
```
## Visualization
Let's inspect the near field using the Tidy3D builtin field visualization methods.
For more details see the documentation of [viz_field_2D](https://simulation.cloud/docs/html/generated/tidy3d.Simulation.viz_field_2D.html#tidy3d.Simulation.viz_field_2D).
```
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, val in zip(axes, ('re', 'abs', 'int')):
im = sim.viz_field_2D(monitor_near, eps_alpha=0, comp='x', val=val, cbar=True, ax=ax)
plt.show()
```
## Setting Up Near 2 Far
To set up near to far, we first need to grab the data from the nearfield monitor.
```
# near field monitor data dictionary
monitor_data = sim.data(monitor_near)
# grab the raw data for plotting later
xs = monitor_data['xmesh']
ys = monitor_data['ymesh']
E_near = np.squeeze(monitor_data['E'])
```
Then, we create a `td.Near2Far` object using the monitor data dictionary as follows.
This object just stores near field data and provides [various methods](https://simulation.cloud/docs/html/generated/tidy3d.Near2Far.html#tidy3d.Near2Far) for looking at various far field quantities.
```
# from near2far_tidy3d import Near2Far
n2f = td.Near2Far(monitor_data)
```
## Getting Far Field Data
With the `Near2Far` object initialized, we just need to call one of its methods to get a far field quantity.
For this example, we use `Near2Far.get_fields_cartesian(x,y,z)` to get the fields at an `x,y,z` point relative to the monitor center.
Below, we scan through x and y points in a plane located at `z=z0` and record the far fields.
```
# points to project to
num_far = 40
xs_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)
ys_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)
# get a mesh in cartesian, convert to spherical
Nx, Ny = len(xs), len(ys)
# initialize the far field values
E_far = np.zeros((3, num_far, num_far), dtype=complex)
H_far = np.zeros((3, num_far, num_far), dtype=complex)
# loop through points in the output plane
for i in range(num_far):
sys.stdout.write(" \rGetting far fields, %2d%% done"%(100*i/(num_far + 1)))
sys.stdout.flush()
x = xs_far[i]
for j in range(num_far):
y = ys_far[j]
# compute and store the outputs from projection function at the focal plane
E, H = n2f.get_fields_cartesian(x, y, focal_length)
E_far[:, i, j] = E
H_far[:, i, j] = H
sys.stdout.write("\nDone!")
```
## Plot Results
Now we can plot the near and far fields together
```
# plot everything
f, ((ax1, ax2, ax3),
(ax4, ax5, ax6)) = plt.subplots(2, 3, tight_layout=True, figsize=(10, 5))
def pmesh(xs, ys, array, ax, cmap):
im = ax.pcolormesh(xs, ys, array.T, cmap=cmap, shading='auto')
return im
im1 = pmesh(xs, ys, np.real(E_near[0]), ax=ax1, cmap='RdBu')
im2 = pmesh(xs, ys, np.real(E_near[1]), ax=ax2, cmap='RdBu')
im3 = pmesh(xs, ys, np.real(E_near[2]), ax=ax3, cmap='RdBu')
im4 = pmesh(xs_far, ys_far, np.real(E_far[0]), ax=ax4, cmap='RdBu')
im5 = pmesh(xs_far, ys_far, np.real(E_far[1]), ax=ax5, cmap='RdBu')
im6 = pmesh(xs_far, ys_far, np.real(E_far[2]), ax=ax6, cmap='RdBu')
ax1.set_title('near field $E_x(x,y)$')
ax2.set_title('near field $E_y(x,y)$')
ax3.set_title('near field $E_z(x,y)$')
ax4.set_title('far field $E_x(x,y)$')
ax5.set_title('far field $E_y(x,y)$')
ax6.set_title('far field $E_z(x,y)$')
plt.colorbar(im1, ax=ax1)
plt.colorbar(im2, ax=ax2)
plt.colorbar(im3, ax=ax3)
plt.colorbar(im4, ax=ax4)
plt.colorbar(im5, ax=ax5)
plt.colorbar(im6, ax=ax6)
plt.show()
# we can also use the far field data and plot the field intensity to see the focusing effect
intensity_far = np.sum(np.square(np.abs(E_far)), axis=0)
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
im1 = pmesh(xs_far, ys_far, intensity_far, ax=ax1, cmap='magma')
im2 = pmesh(xs_far, ys_far, np.sqrt(intensity_far), ax=ax2, cmap='magma')
ax1.set_title('$|E(x,y)|^2$')
ax2.set_title('$|E(x,y)|$')
plt.colorbar(im1, ax=ax1)
plt.colorbar(im2, ax=ax2)
plt.show()
```
# Data Visualization
The RAPIDS AI ecosystem and `cudf.DataFrame` are built on a series of standards that simplify interoperability with established and emerging data science tools.
With a growing number of libraries adding GPU support, and a `cudf.DataFrame`'s ability to convert `.to_pandas()`, a large portion of the Python Visualization ([PyViz](pyviz.org/tools.html)) stack is immediately available to display your data.
In this Notebook, we'll walk through some of the data visualization possibilities with BlazingSQL.
Blog post: [Data Visualization with BlazingSQL](https://blog.blazingdb.com/data-visualization-with-blazingsql-12095862eb73?source=friends_link&sk=94fc5ee25f2a3356b4a9b9a49fd0f3a1)
#### Overview
- [Matplotlib](#Matplotlib)
- [Datashader](#Datashader)
- [HoloViews](#HoloViews)
- [cuxfilter](#cuxfilter)
```
from blazingsql import BlazingContext
bc = BlazingContext()
```
### Dataset
The data we'll be using for this demo comes from the [NYC Taxi dataset](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) and is stored in a public AWS S3 bucket.
```
bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')
bc.create_table('taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')
```
Let's give the data a quick look to get a sense of what we're working with.
```
bc.sql('select * from taxi').tail()
```
## Matplotlib
[GitHub](https://github.com/matplotlib/matplotlib)
> _Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python._
By calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and instantly access Matplotlib with `.plot()`.
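The mechanics of that bridge can be sketched in isolation with a plain pandas frame standing in for the converted result (on a GPU, the frame would come from `bc.sql(...).to_pandas()` instead of being built by hand; the plotting call is identical either way):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display

# Stand-in for `bc.sql('SELECT ...').to_pandas()`: after conversion, the result
# is an ordinary pandas DataFrame, so the standard .plot() accessor applies.
df = pd.DataFrame({"passenger_count": [1, 1, 2, 3],
                   "tip_amount": [2.0, 1.5, 3.0, 0.5]})
ax = df.plot(kind="scatter", x="passenger_count", y="tip_amount")
print(ax.get_xlabel())  # pandas labels the axis with the column name
```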
For example, **does the `passenger_count` influence the `tip_amount`?**
```
bc.sql('SELECT * FROM taxi').to_pandas().plot(kind='scatter', x='passenger_count', y='tip_amount')
```
Other than the jump from 0 to 1 or outliers at 5 and 6, having more passengers might not be a good deal for the driver's `tip_amount`.
Let's see what demand is like. Based on dropoff time, **how many riders were transported by hour?** i.e. column `7` will be the total number of passengers dropped off from 7:00 AM through 7:59 AM for all days in this time period.
```
riders_by_hour = '''
select
sum(passenger_count) as sum_riders,
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP)) as hour_of_the_day
from
taxi
group by
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))
order by
hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))
'''
bc.sql(riders_by_hour).to_pandas().plot(kind='bar', x='hour_of_the_day', y='sum_riders', title='Sum Riders by Hour', figsize=(12, 6))
```
Looks like the morning gets started around 6:00 AM and builds up to a sustained lunchtime double peak from 12:00 PM - 3:00 PM. After a quick 3:00 PM - 5:00 PM siesta, we're right back for prime time from 6:00 PM to 8:00 PM. It's downhill from there, but tomorrow is a new day!
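For comparison, the same hour-of-day bucketing can be sketched in plain pandas (using a tiny hypothetical frame in place of the `taxi` table): parse the dropoff timestamp, extract the hour, and group-sum the passenger counts.

```python
import pandas as pd

# Hypothetical mini-frame standing in for the `taxi` table
trips = pd.DataFrame({
    "tpep_dropoff_datetime": ["2016-01-01 07:15:00",
                              "2016-01-01 07:59:00",
                              "2016-01-01 19:30:00"],
    "passenger_count": [2, 1, 3],
})

# Equivalent of: hour(cast(tpep_dropoff_datetime as TIMESTAMP)) ... GROUP BY ... SUM
hour = pd.to_datetime(trips["tpep_dropoff_datetime"]).dt.hour
sum_riders = trips.groupby(hour)["passenger_count"].sum()
print(sum_riders)
```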
```
solo_rate = len(bc.sql('select * from taxi where passenger_count = 1')) / len(bc.sql('select * from taxi')) * 100
print(f'{solo_rate}% of rides have only 1 passenger.')
```
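The same ratio can also be computed with a single conditional aggregate instead of two full table scans. A minimal pure-Python sketch of the idea on a hypothetical mini-sample of passenger counts (in SQL this would be an `AVG(CASE WHEN ...)` expression):

```python
# Hypothetical mini-sample standing in for the passenger_count column
passenger_counts = [1, 1, 2, 1, 4, 1, 1, 2]

# Equivalent to AVG(CASE WHEN passenger_count = 1 THEN 1.0 ELSE 0.0 END) * 100
solo_rate = sum(1 for p in passenger_counts if p == 1) / len(passenger_counts) * 100
print(f'{solo_rate:.1f}% of rides have only 1 passenger.')
```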
The overwhelming majority of rides have just 1 passenger. How consistent is this solo rider rate? **What's the average `passenger_count` per trip by hour?**
And maybe time of day plays a role in `tip_amount` as well, **what's the average `tip_amount` per trip by hour?**
We can run both queries in the same cell and the results will display inline.
```
xticks = [n for n in range(24)]
avg_riders_by_hour = '''
select
avg(passenger_count) as avg_passenger_count,
hour(dropoff_ts) as hour_of_the_day
from (
select
passenger_count,
cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts
from
taxi
)
group by
hour(dropoff_ts)
order by
hour(dropoff_ts)
'''
bc.sql(avg_riders_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_passenger_count', title='Avg. # Riders per Trip by Hour', xticks=xticks, figsize=(12, 6))
avg_tip_by_hour = '''
select
avg(tip_amount) as avg_tip_amount,
hour(dropoff_ts) as hour_of_the_day
from (
select
tip_amount,
cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts
from
taxi
)
group by
hour(dropoff_ts)
order by
hour(dropoff_ts)
'''
bc.sql(avg_tip_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_tip_amount', title='Avg. Tip ($) per Trip by Hour', xticks=xticks, figsize=(12, 6))
```
Interestingly, they almost resemble each other from 8:00 PM to 9:00 AM, but where average `passenger_count` continues to rise until 3:00 PM, average `tip_amount` takes a dip until 3:00 PM.
From 3:00 PM - 8:00 PM average `tip_amount` starts rising and average `passenger_count` waits patiently for it to catch up.
Average `tip_amount` peaks at midnight, and bottoms out at 5:00 AM. Average `passenger_count` is highest around 3:00 AM, and lowest at 6:00 AM.
## Datashader
[GitHub](https://github.com/holoviz/datashader)
> Datashader is a data rasterization pipeline for automating the process of creating meaningful representations of large amounts of data.
As of [holoviz/datashader#793](https://github.com/holoviz/datashader/pull/793), the following Datashader features accept `cudf.DataFrame` and `dask_cudf.DataFrame` input:
- `Canvas.points`, `Canvas.line` and `Canvas.area` rasterization
- All reduction operations except `var` and `std`.
- `transfer_functions.shade` (both 2D and 3D) inputs
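Under the hood, `Canvas.points` rasterization bins each point into a cell of a fixed-size grid and applies a reduction (count by default). A minimal pure-Python sketch of that binning step on a hypothetical handful of points (Datashader performs the same aggregation on the GPU for `cudf` input):

```python
# Bin (x, y) points into a width x height grid of counts --
# the same aggregation Canvas.points performs at scale.
def rasterize(points, width, height, x_range, y_range):
    grid = [[0] * width for _ in range(height)]
    (x0, x1), (y0, y1) = x_range, y_range
    for x, y in points:
        if x0 <= x < x1 and y0 <= y < y1:
            col = int((x - x0) / (x1 - x0) * width)
            row = int((y - y0) / (y1 - y0) * height)
            grid[row][col] += 1
    return grid

points = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.9)]  # hypothetical dropoff coords
grid = rasterize(points, width=2, height=2, x_range=(0, 1), y_range=(0, 1))
print(grid)  # two points fall in the lower-left cell, one in the upper-right
```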
#### Colorcet
[GitHub](https://github.com/holoviz/colorcet)
> Colorcet is a collection of perceptually uniform colormaps for use with Python plotting programs like bokeh, matplotlib, holoviews, and datashader based on the set of perceptually uniform colormaps created by Peter Kovesi at the Center for Exploration Targeting.
```
from datashader import Canvas, transfer_functions as tf
from colorcet import fire
```
**Do dropoff locations change based on the time of day?** Let's say 6AM-4PM vs 6PM-4AM.
Dropoffs from 6:00 AM to 4:00 PM
```
query = '''
select
dropoff_x, dropoff_y
from
taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 6 AND 15
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
Dropoffs from 6:00 PM to 4:00 AM
```
query = '''
select
dropoff_x, dropoff_y
from
taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23
OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
While Manhattan makes up the majority of the dropoff geography from 6:00 AM to 4:00 PM, Midtown's spark grows and spreads deeper into Brooklyn and Queens in the 6:00 PM to 4:00 AM window.
Consistent with the more decentralized look across the map, dropoffs near LaGuardia Airport (upper-middle right side) also die down relative to surrounding areas as the night rolls in.
## HoloViews
[GitHub](https://github.com/holoviz/holoviews)
> HoloViews is an open-source Python library designed to make data analysis and visualization seamless and simple. With HoloViews, you can usually express what you want to do in very few lines of code, letting you focus on what you are trying to explore and convey, not on the process of plotting.
By calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and hand off to HoloViews or other CPU visualization packages.
```
from holoviews import extension, opts
from holoviews import Scatter, Dimension
import holoviews.operation.datashader as hd
extension('bokeh')
opts.defaults(opts.Scatter(height=425, width=425), opts.RGB(height=425, width=425))
cmap = [(49,130,189), (107,174,214), (123,142,216), (226,103,152), (255,0,104), (50,50,50)]
```
With HoloViews, we can easily explore the relationship of multiple scatter plots by saving them as variables and displaying them side-by-side with the same code cell.
For example, let's reexamine `passenger_count` vs `tip_amount` next to a new `holoviews.Scatter` of `fare_amount` vs `tip_amount`.
**Does `passenger_count` affect `tip_amount`?**
```
s = Scatter(bc.sql('select passenger_count, tip_amount from taxi').to_pandas(), 'passenger_count', 'tip_amount')
# 0-6 passengers, $0-$100 tip
ranged = s.redim.range(passenger_count=(-0.5, 6.5), tip_amount=(0, 100))
shaded = hd.spread(hd.datashade(ranged, x_sampling=0.25, cmap=cmap))
riders_v_tip = shaded.redim.label(passenger_count="Passenger Count", tip_amount="Tip ($)")
```
**How do `fare_amount` and `tip_amount` relate?**
```
s = Scatter(bc.sql('select fare_amount, tip_amount from taxi').to_pandas(), 'fare_amount', 'tip_amount')
# $0-$100 fare, $0-$100 tip
ranged = s.redim.range(fare_amount=(0, 100), tip_amount=(0, 100))
shaded = hd.spread(hd.datashade(ranged, cmap=cmap))
fare_v_tip = shaded.redim.label(fare_amount="Fare Amount ($)", tip_amount="Tip ($)")
```
Display the answers to both side by side.
```
riders_v_tip + fare_v_tip
```
## cuxfilter
[GitHub](https://github.com/rapidsai/cuxfilter)
> cuxfilter (ku-cross-filter) is a RAPIDS framework to connect web visualizations to GPU accelerated crossfiltering. Inspired by the javascript version of the original, it enables interactive and super fast multi-dimensional filtering of 100 million+ row tabular datasets via cuDF.
cuxfilter allows us to combine these charts into an interactive dashboard.
```
import cuxfilter
```
Create `cuxfilter.DataFrame` from a `cudf.DataFrame`.
```
cux_df = cuxfilter.DataFrame.from_dataframe(bc.sql('SELECT passenger_count, tip_amount, dropoff_x, dropoff_y FROM taxi'))
```
Create some charts & define a dashboard object.
```
chart_0 = cuxfilter.charts.datashader.scatter_geo(x='dropoff_x', y='dropoff_y')
chart_1 = cuxfilter.charts.bokeh.bar('passenger_count', add_interaction=False)
chart_2 = cuxfilter.charts.datashader.heatmap(x='passenger_count', y='tip_amount', x_range=[-0.5, 6.5], y_range=[0, 100],
color_palette=cmap, title='Passenger Count vs Tip Amount ($)')
dashboard = cux_df.dashboard([chart_0, chart_1, chart_2], title='NYC Yellow Cab')
```
Display charts in Notebook with `.view()`.
```
chart_0.view()
chart_2.view()
```
## Multi-GPU Data Visualization
Packages like Datashader and cuxfilter support dask_cudf distributed objects (Series, DataFrame).
```
from blazingsql import BlazingContext
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
cluster = LocalCUDACluster()
client = Client(cluster)
bc = BlazingContext(dask_client=client, network_interface='lo')
bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')
bc.create_table('distributed_taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')
```
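The distributed setup follows the usual partition-and-aggregate pattern: each worker computes partial aggregates over its own partitions, and the client combines them into a global result. A stdlib sketch of the idea with `concurrent.futures` on hypothetical partitions (dask_cudf schedules the real work across GPUs):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical row partitions, as a dask_cudf DataFrame would split them
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]

def partial_sum(part):
    # Per-partition partial aggregates: (sum, row count)
    return sum(part), len(part)

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_sum, partitions))

total = sum(s for s, _ in partials)
count = sum(n for _, n in partials)
print(total / count)  # global mean assembled from partial results: 3.5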
Dropoffs from 6:00 PM to 4:00 AM
```
query = '''
select
dropoff_x, dropoff_y
from
distributed_taxi
where
hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23
OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3
'''
nyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')
tf.set_background(tf.shade(nyc, cmap=fire), "black")
```
## That's the Data Visualization Tour!
You've seen the basics of data visualization in BlazingSQL Notebooks and how to put it to work. Now is a good time to experiment with your own data and see how to parse, clean, and extract meaningful insights from it.
We'll now get into how to run Machine Learning with popular Python and GPU-accelerated Python packages.
Continue to the [Machine Learning introductory Notebook](machine_learning.ipynb)
<a href="https://colab.research.google.com/github/AWH-GlobalPotential-X/AWH-Geo/blob/master/notebooks/AWH-Geo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Welcome to AWH-Geo
This tool requires a [Google Drive](https://drive.google.com/drive/my-drive) and [Earth Engine](https://developers.google.com/earth-engine/) Account.
[Start here](https://drive.google.com/drive/u/1/folders/1EzuqsbADrtdXChcpHqygTh7SuUw0U_QB) to create a new Output Table from the template:
1. Right-click on "OutputTable_TEMPLATE" file > Make a Copy to your own Drive folder
2. Rename the new file "OutputTable_CODENAME" with CODENAME (max 83 characters!) as a unique output table code. If including a date in the code, use the YYYYMMDD date format.
3. Enter the output values in L/hr into each cell of each 10%-interval rH bin... interpolate in Sheets as necessary.
Then, click "Connect" at the top right of this notebook.
Then run each of the code blocks below, following instructions. For "OutputTableCode" inputs, use the CODENAME you created in Sheets.
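The interpolation in step 3 is ordinary linear interpolation between measured rH bins. A minimal sketch, assuming hypothetical device outputs measured at the 40% and 60% bins and filling the 50% bin:

```python
def lerp(x, x0, y0, x1, y1):
    """Linear interpolation between two measured points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical device output [L/hr] measured at rH = 40% and 60%
out_rh40, out_rh60 = 0.8, 1.4
out_rh50 = lerp(50, 40, out_rh40, 60, out_rh60)
print(out_rh50)  # ~1.1 -- the value to enter in the rH50 sheet
```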
```
#@title Basic setup and earthengine access.
print('Welcome to AWH-Geo')
# import, authenticate, then initialize EarthEngine module ee
# https://developers.google.com/earth-engine/python_install#package-import
import ee
print('Make sure the EE version is v0.1.215 or greater...')
print('Current EE version = v' + ee.__version__)
print('')
ee.Authenticate()
ee.Initialize()
worldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs
coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],
[180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],
geodesic=False,
proj='EPSG:4326'
)
#@title Test Earth Engine connection (see Mt Everest elev and a green map)
# Print the elevation of Mount Everest.
dem = ee.Image('USGS/SRTMGL1_003')
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = dem.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
# Access study assets
from IPython.display import Image
jmpGeofabric_image = ee.Image('users/awhgeoglobal/jmpGeofabric_image') # access to study folder in EE
Image(url=jmpGeofabric_image.getThumbUrl({'min': 0, 'max': 1, 'dimensions': 512,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
#@title Set up access to Google Sheets (follow instructions)
from google.colab import auth
auth.authenticate_user()
# gspread is module to access Google Sheets through python
# https://gspread.readthedocs.io/en/latest/index.html
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default()) # get credentials
#@title STEP 1: Export timeseries for given OutputTable: enter CODENAME (without "OutputTable_" prefix) below
OutputTableCode = "" #@param {type:"string"}
StartYear = 2010 #@param {type:"integer"}
EndYear = 2020 #@param {type:"integer"}
ExportWeekly_1_or_0 = 0#@param {type:"integer"}
ee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))
ee_username = ee_username.getInfo()
years = list(range(StartYear,EndYear))
print('Time Period: ', years)
def timeseriesExport(outputTable_code):
"""
This script runs the output table value over the climate variables using the
nearest lookup values, worldwide, every three hours during a user-determined
period. It then resamples the temporal interval by averaging the hourly output
over semi-week periods. It then converts the resulting image collection into a
single image with several bands, each of which representing one (hourly or
semi-week) interval. Finally, it exports this image over 3-month tranches and
saves each as an EE Image Assets with appropriate names corresponding to the
tranche's time period.
"""
# print the output table code from user input for confirmation
print('outputTable code:', outputTable_code)
# CLIMATE DATA PRE-PROCESSING
# ERA5-Land climate dataset used for worldwide (derived) climate metrics
# https://www.ecmwf.int/en/era5-land
# era5-land HOURLY images in EE catalog
era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY')
# print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)
era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export
era5Land_scale = era5Land_proj.nominalScale()
print('era5Land_scale (should be ~11132):',era5Land_scale.getInfo())
era5Land_filtered = era5Land.filterDate( # ERA5-Land climate data
str(StartYear-1) + '-12-31', str(EndYear) + '-01-01').select( # filter by date
# filter by ERA5-Land image collection bands
[
'dewpoint_temperature_2m', # K (https://apps.ecmwf.int/codes/grib/param-db?id=168)
'surface_solar_radiation_downwards', # J/m^2 (Accumulated value. Divide by 3600 to get W/m^2 over hourly interval https://apps.ecmwf.int/codes/grib/param-db?id=176)
'temperature_2m' # K
])
# print('era5Land_filtered',era5Land_filtered.limit(50))
print('Wait... retrieving data from sheets takes a couple minutes')
# COLLECT OUTPUT TABLE DATA FROM SHEETS INTO PYTHON ARRAYS
# gspread function which will look in list of gSheets accessible to user
# in Earth Engine, an array is a list of lists.
# loop through worksheet tabs and build a list of lists of lists (3 dimensional)
# to organize output values [L/hr] by the 3 physical variables in the following
# order: by temperature (first nesting leve), ghi (second nesting level), then
# rH (third nesting level).
spreadsheet = gc.open('OutputTable_' + outputTable_code)
outputArray = list() # create empty array
rH_labels = ['rH0','rH10','rH20','rH30','rH40','rH50', # worksheet tab names
'rH60','rH70','rH80','rH90','rH100']
for rH in rH_labels: # loop to create 3-D array (list of lists of lists)
rH_interval_array = list()
worksheet = spreadsheet.worksheet(rH)
for x in list(range(7,26)): # relevant ranges in output table sheet
rH_interval_array.append([float(y) for y in worksheet.row_values(x)])
outputArray.append(rH_interval_array)
# print('Output Table values:', outputArray) # for debugging
# create an array image in EE (each pixel is a multi-dimensional matrix)
outputImage_arrays = ee.Image(ee.Array(outputArray)) # values are in [L/hr]
def processTimeseries(i): # core processing algorithm with lookups to outputTable
"""
This is the core AWH-Geo algorithm to convert image-based input climate data
into an image of AWG device output [L/time] based on a given output lookup table.
It runs across the ERA5-Land image collection timeseries and runs the lookup table
on each pixel of each image representing each hourly climate timestep.
"""
i = ee.Image(i) # cast as image
i = i.updateMask(i.select('temperature_2m').mask()) # ensure mask is applied to all bands
timestamp_millis = ee.Date(i.get('system:time_start'))
i_previous = ee.Image(era5Land_filtered.filterDate(
timestamp_millis.advance(-1,'hour')).first())
rh = ee.Image().expression( # relative humidity calculation [%]
# from http://bmcnoldy.rsmas.miami.edu/Humidity.html
'100 * (e**((17.625 * Td) / (To + Td)) / e**((17.625 * T) / (To + T)))', {
'e': 2.718281828459045, # Euler's constant
'T': i.select('temperature_2m').subtract(273.15), # temperature K converted to Celsius [°C]
'Td': i.select('dewpoint_temperature_2m').subtract(273.15), # dewpoint temperature K converted to Celsius [°C]
'To': 243.04 # Magnus-formula reference constant [°C]
}).rename('rh')
ghi = ee.Image(ee.Algorithms.If( # because this parameter in ERA5 is cumulative in J/m^2...
condition=ee.Number(timestamp_millis.get('hour')).eq(1), # ...from last observation...
trueCase=i.select('surface_solar_radiation_downwards'), # ...current value must be...
falseCase=i.select('surface_solar_radiation_downwards').subtract( # ...subtracted from last...
i_previous.select('surface_solar_radiation_downwards')) # ... then divided by seconds
)).divide(3600).rename('ghi') # solar global horizontal irradiance [W/m^2]
temp = i.select('temperature_2m'
).subtract(273.15).rename('temp') # temperature K converted to Celsius [°C]
rhClamp = rh.clamp(0.1,100) # relative humidity clamped to output table range [%]
ghiClamp = ghi.clamp(0.1,1300) # global horizontal irradiance clamped to range [W/m^2]
tempClamp = temp.clamp(0.1,45) # temperature clamped to output table range [°C]
# convert climate variables to lookup integers
rhLookup = rhClamp.divide(10
).round().int().rename('rhLookup') # rH lookup interval
tempLookup = tempClamp.divide(2.5
).round().int().rename('tempLookup') # temp lookup interval
ghiLookup = ghiClamp.divide(100
).add(1).round().int().rename('ghiLookup') # ghi lookup interval
# combine lookup values in a 3-band image
xyzLookup = ee.Image(rhLookup).addBands(tempLookup).addBands(ghiLookup)
# lookup values in 3D array for each pixel to return AWG output from table [L/hr]
# set output to 0 if temperature is less than 0 deg C
output = outputImage_arrays.arrayGet(xyzLookup).multiply(temp.gt(0))
nightMask = ghi.gt(0.5) # mask pixels which have no incident sunlight
return ee.Image(output.rename('O').addBands( # return image of output labeled "O" [L/hr]
rh.updateMask(nightMask)).addBands(
ghi.updateMask(nightMask)).addBands(
temp.updateMask(nightMask)).setMulti({ # add physical variables as bands
'system:time_start': timestamp_millis # set time as property
})).updateMask(1) # close partial masks at continental edges
def outputHourly_export(timeStart, timeEnd, year):
"""
Run the lookup processing function (from above) across the entire climate
timeseries at the finest temporal interval (1 hr for ERA5-Land). Convert the
resulting image collection as a single image with a band for each timestep
to allow for export as an Earth Engine asset (you cannot export/save image
collections as assets).
"""
# filter ERA5-Land climate data by time
era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd)
# print('era5Land_filtered_section',era5Land_filtered_section.limit(1).getInfo())
outputHourly = era5Land_filtered_section.map(processTimeseries)
# outputHourly_toBands_pre = outputHourly.select(['ghi']).toBands()
outputHourly_toBands_pre = outputHourly.select(['O']).toBands()
outputHourly_toBands = outputHourly_toBands_pre.select(
# input climate variables as multiband image with each band representing timestep
outputHourly_toBands_pre.bandNames(),
# rename bands by timestamp
outputHourly_toBands_pre.bandNames().map(
lambda name: ee.String('H').cat( # "H" for hourly
ee.String(name).replace('T','')
)
)
)
# notify user of export
print('Exporting outputHourly year:', year)
task = ee.batch.Export.image.toAsset(
image=ee.Image(outputHourly_toBands),
region=worldGeo,
description='O_hourly_' + outputTable_code + '_' + year,
assetId=ee_username + '/O_hourly_' + outputTable_code + '_' + year,
scale=era5Land_scale.getInfo(),
crs='EPSG:4326',
crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
maxPixels=1e10,
maxWorkers=2000
)
task.start()
# run timeseries export on entire hourly ERA5-Land for each yearly tranche
for y in years:
y = str(y)
outputHourly_export(y + '-01-01', y + '-04-01', y + 'a')
outputHourly_export(y + '-04-01', y + '-07-01', y + 'b')
outputHourly_export(y + '-07-01', y + '-10-01', y + 'c')
outputHourly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')
def outputWeekly_export(timeStart, timeEnd, year):
era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd) # filter ERA5-Land climate data by time
outputHourly = era5Land_filtered_section.map(processTimeseries)
# resample values over time by 2-week aggregations
# Define a time interval
start = ee.Date(timeStart)
end = ee.Date(timeEnd)
# Length of each aggregation window, in days.
DAYS_PER_RANGE = 14
# DateRangeCollection, which contains the ranges we're interested in.
drc = ee.call("BetterDateRangeCollection",
start,
end,
DAYS_PER_RANGE,
"day",
True)
# This filter will join images with the date range that contains their start time.
filter = ee.Filter.dateRangeContains("date_range", None, "system:time_start")
# Save all of the matching values under "matches".
join = ee.Join.saveAll("matches")
# Do the join.
joinedResult = join.apply(drc, outputHourly, filter)
# print('joinedResult',joinedResult)
# Map over the functions, and add the mean of the matches as "meanForRange".
joinedResult = joinedResult.map(
lambda e: e.set("meanForRange", ee.ImageCollection.fromImages(e.get("matches")).mean())
)
# print('joinedResult',joinedResult)
# roll resampled images into new image collection
outputWeekly = ee.ImageCollection(joinedResult.map(
lambda f: ee.Image(f.get('meanForRange'))
))
# print('outputWeekly',outputWeekly.getInfo())
# convert image collection into image with many bands which can be saved as EE asset
outputWeekly_toBands_pre = outputWeekly.toBands()
outputWeekly_toBands = outputWeekly_toBands_pre.select(
outputWeekly_toBands_pre.bandNames(), # input climate variables as multiband image with each band representing timestep
outputWeekly_toBands_pre.bandNames().map(
lambda name: ee.String('W').cat(name)
)
)
task = ee.batch.Export.image.toAsset(
image=ee.Image(outputWeekly_toBands),
region=worldGeo,
description='O_weekly_' + outputTable_code + '_' + year,
assetId=ee_username + '/O_weekly_' + outputTable_code + '_' + year,
scale=era5Land_scale.getInfo(),
crs='EPSG:4326',
crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
maxPixels=1e10,
maxWorkers=2000
)
if ExportWeekly_1_or_0 == 1:
task.start() # start only when weekly exports are requested
print('Exporting outputWeekly year:', year)
# run semi-weekly timeseries export on ERA5-Land by year
for y in years:
y = str(y)
outputWeekly_export(y + '-01-01', y + '-04-01', y + 'a')
outputWeekly_export(y + '-04-01', y + '-07-01', y + 'b')
outputWeekly_export(y + '-07-01', y + '-10-01', y + 'c')
outputWeekly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')
timeseriesExport(OutputTableCode)
print('Complete! Read instructions below')
```
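The relative-humidity expression inside `processTimeseries` is the Magnus approximation from the calculator linked in the code (constants 17.625 and 243.04). A standalone sketch of the same formula for a single pixel's values:

```python
import math

def relative_humidity(temp_c, dewpoint_c):
    """Magnus-formula relative humidity [%] from temperature and dewpoint [deg C]."""
    a, b = 17.625, 243.04
    return 100 * (math.exp(a * dewpoint_c / (b + dewpoint_c))
                  / math.exp(a * temp_c / (b + temp_c)))

print(round(relative_humidity(25.0, 25.0)))  # saturated air (T == Td): 100
print(round(relative_humidity(25.0, 15.0)))  # drier air: ~54
```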
# *Before moving on to the next step... Wait until above tasks are complete in the task manager: https://code.earthengine.google.com/*
(right pane, tab "tasks", click "refresh"; they should show up once the script prints "Exporting...")
```
#@title Re-instate earthengine access (follow instructions)
print('Welcome Back to AWH-Geo')
print('')
# import, authenticate, then initialize EarthEngine module ee
# https://developers.google.com/earth-engine/python_install#package-import
import ee
print('Make sure the EE version is v0.1.215 or greater...')
print('Current EE version = v' + ee.__version__)
print('')
ee.Authenticate()
ee.Initialize()
worldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs
coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],
[180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],
geodesic=False,
proj='EPSG:4326'
)
#@title STEP 2: Export statistical results for given OutputTable: enter CODENAME (without "OutputTable_" prefix) below
ee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))
ee_username = ee_username.getInfo()
OutputTableCode = "" #@param {type:"string"}
StartYear = 2010 #@param {type:"integer"}
EndYear = 2020 #@param {type:"integer"}
SuffixName_optional = "" #@param {type:"string"}
ExportMADP90s_1_or_0 = 0#@param {type:"integer"}
years = list(range(StartYear,EndYear))
print('Time Period: ', years)
def generateStats(outputTable_code):
"""
This function generates single images which contain time-aggregated output
statistics including overall mean and shortfall metrics such as MADP90s.
"""
# CLIMATE DATA PRE-PROCESSING
# ERA5-Land climate dataset used for worldwide (derived) climate metrics
# https://www.ecmwf.int/en/era5-land
# era5-land HOURLY images in EE catalog
era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY')
# print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)
era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export
era5Land_scale = era5Land_proj.nominalScale()
# setup the image collection timeseries to chart
# unravel and concatenate all the image stages into a single image collection
def unravel(i): # function to "unravel" image bands into an image collection
def setDate(bandName): # loop over band names in image and return a LIST of ...
dateCode = ee.Date.parse( # ... images, one for each band
format='yyyyMMddHH',
date=ee.String(ee.String(bandName).split('_').get(0)).slice(1) # get date periods from band name
)
return i.select([bandName]).rename('O').set('system:time_start',dateCode)
i = ee.Image(i)
return i.bandNames().map(setDate) # returns a LIST of images
yearCode_list = ee.List(sum([[ # each image units in [L/hr]
unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'a')),
unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'b')),
unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'c')),
unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'d'))
] for y in years], [])).flatten()
outputTimeseries = ee.ImageCollection(yearCode_list)
Od_overallMean = outputTimeseries.mean().multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]
# export overall daily mean
task = ee.batch.Export.image.toAsset(
image=Od_overallMean,
region=worldGeo,
description='Od_overallMean_' + outputTable_code + SuffixName_optional,
assetId=ee_username + '/Od_overallMean_' + outputTable_code + SuffixName_optional,
scale=era5Land_scale.getInfo(),
crs='EPSG:4326',
crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
maxPixels=1e10,
maxWorkers=2000
)
task.start()
print('Exporting Od_overallMean_' + outputTable_code + SuffixName_optional)
## run the moving average function over the timeseries using DAILY averages
# start and end dates over which to calculate aggregate statistics
startDate = ee.Date(str(StartYear) + '-01-01')
endDate = ee.Date(str(EndYear) + '-01-01')
# resample values over time by daily aggregations
# Length of each aggregation window, in days (daily here).
DAYS_PER_RANGE = 1
# DateRangeCollection, which contains the ranges we're interested in.
drc = ee.call('BetterDateRangeCollection',
startDate,
endDate,
DAYS_PER_RANGE,
'day',
True)
# This filter will join images with the date range that contains their start time.
filter = ee.Filter.dateRangeContains('date_range', None, 'system:time_start')
# Save all of the matching values under "matches".
join = ee.Join.saveAll('matches')
# Do the join.
joinedResult = join.apply(drc, outputTimeseries, filter)
# print('joinedResult',joinedResult)
# Map over the functions, and add the mean of the matches as "meanForRange".
joinedResult = joinedResult.map(
lambda e: e.set('meanForRange', ee.ImageCollection.fromImages(e.get('matches')).mean())
)
# print('joinedResult',joinedResult)
# roll resampled images into new image collection
outputDaily = ee.ImageCollection(joinedResult.map(
lambda f: ee.Image(f.get('meanForRange')).set(
'system:time_start',
ee.Date.parse('YYYYMMdd',f.get('system:index')).millis()
)
))
# print('outputDaily',outputDaily.getInfo())
outputDaily_p90 = ee.ImageCollection( # collate rolling periods into new image collection of rolling average values
outputDaily.toList(outputDaily.size())).reduce(
ee.Reducer.percentile( # reduce image collection by percentile
[10] # 100% - 90% = 10%
)).multiply(24).rename('Od')
task = ee.batch.Export.image.toAsset(
image=outputDaily_p90,
region=worldGeo,
description='Od_DailyP90_' + outputTable_code + SuffixName_optional,
assetId=ee_username + '/Od_DailyP90_' + outputTable_code + SuffixName_optional,
scale=era5Land_scale.getInfo(),
crs='EPSG:4326',
crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
maxPixels=1e10,
maxWorkers=2000
)
if ExportMADP90s_1_or_0 == 1:
task.start()
print('Exporting Od_DailyP90_' + outputTable_code + SuffixName_optional)
def rollingStats(period): # run rolling stat function for each rolling period scenario
# collect neighboring time periods into a join
timeFilter = ee.Filter.maxDifference(
difference=float(period)/2 * 24 * 60 * 60 * 1000, # mid-centered window
leftField='system:time_start',
rightField='system:time_start'
)
rollingPeriod_join = ee.ImageCollection(ee.Join.saveAll('images').apply(
primary=outputDaily, # apply the join on itself to collect images
secondary=outputDaily,
condition=timeFilter
))
def rollingPeriod_mean(i): # get the mean across each collected periods
i = ee.Image(i) # collected images stored in "images" property of each timestep image
return ee.ImageCollection.fromImages(i.get('images')).mean()
outputDaily_rollingMean = rollingPeriod_join.filterDate(
startDate.advance(float(period)/2,'days'),
endDate.advance(float(period)/-2,'days')
).map(rollingPeriod_mean,True)
Od_p90_rolling = ee.ImageCollection( # collate rolling periods into new image collection of rolling average values
outputDaily_rollingMean.toList(outputDaily_rollingMean.size())).reduce(
ee.Reducer.percentile( # reduce image collection by percentile
[10] # 100% - 90% = 10%
)).multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]
task = ee.batch.Export.image.toAsset(
image=Od_p90_rolling,
region=worldGeo,
description='Od_MADP90_'+ period + 'day_' + outputTable_code + SuffixName_optional,
assetId=ee_username + '/Od_MADP90_'+ period + 'day_' + outputTable_code + SuffixName_optional,
scale=era5Land_scale.getInfo(),
crs='EPSG:4326',
crsTransform=[0.1,0,-180.05,0,-0.1,90.05],
maxPixels=1e10,
maxWorkers=2000
)
if ExportMADP90s_1_or_0 == 1:
task.start()
print('Exporting Od_MADP90_' + period + 'day_' + outputTable_code + SuffixName_optional)
rollingPeriods = [
'007',
'030',
# '060',
'090',
# '180',
] # define custom rolling periods over which to calc MADP90 [days]
for period in rollingPeriods: # execute the calculations & export
# print(period)
rollingStats(period)
generateStats(OutputTableCode) # run stats function
print('Complete! Go to next step.')
```
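Conceptually, `rollingStats` reduces to two steps: a rolling mean over the daily output series, then the 10th percentile of those means (the MADP90, i.e. the level met or exceeded 90% of the time). A minimal pure-Python sketch on a hypothetical daily series, using a simple nearest-rank percentile rather than `ee.Reducer.percentile`'s interpolation:

```python
def rolling_means(series, window):
    """Mean over each full window of consecutive values."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def percentile_10(values):
    """10th percentile via nearest rank (simplified vs. ee.Reducer.percentile)."""
    ranked = sorted(values)
    return ranked[int(0.1 * (len(ranked) - 1))]

daily_output = [2.0, 0.0, 4.0, 6.0, 2.0, 0.0, 8.0, 4.0]  # hypothetical [L/day]
means = rolling_means(daily_output, window=3)
print(means)
print(percentile_10(means))  # the MADP90 analogue for this toy series
```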
Wait until these statistics have finished processing. Track them in the task manager: https://code.earthengine.google.com/
When they are finished, [go here to see maps](https://code.earthengine.google.com/fac0cc72b2ac2e431424cbf45b2852cf)
# Chapter 7
```
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import statsmodels.api as sm
import statsmodels.formula.api as smf
from patsy import dmatrix
from scipy import stats
from scipy.special import logsumexp
%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")
az.rcParams["stats.hdi_prob"] = 0.89 # set credible interval for entire notebook
np.random.seed(0)
```
#### Code 7.1
```
brains = pd.DataFrame.from_dict(
{
"species": [
"afarensis",
"africanus",
"habilis",
"boisei",
"rudolfensis",
"ergaster",
"sapiens",
],
"brain": [438, 452, 612, 521, 752, 871, 1350], # volume in cc
"mass": [37.0, 35.5, 34.5, 41.5, 55.5, 61.0, 53.5], # mass in kg
}
)
brains
# Figure 7.2
plt.scatter(brains.mass, brains.brain)
# point labels
for i, r in brains.iterrows():
if r.species == "afarensis":
plt.text(r.mass + 0.5, r.brain, r.species, ha="left", va="center")
elif r.species == "sapiens":
plt.text(r.mass, r.brain - 25, r.species, ha="center", va="top")
else:
plt.text(r.mass, r.brain + 25, r.species, ha="center")
plt.xlabel("body mass (kg)")
plt.ylabel("brain volume (cc)");
```
#### Code 7.2
```
brains.loc[:, "mass_std"] = (brains.loc[:, "mass"] - brains.loc[:, "mass"].mean()) / brains.loc[
:, "mass"
].std()
brains.loc[:, "brain_std"] = brains.loc[:, "brain"] / brains.loc[:, "brain"].max()
```
#### Code 7.3
This is modified from [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) (6.2 - 6.6).
```
m_7_1 = smf.ols("brain_std ~ mass_std", data=brains).fit()
m_7_1.summary()
```
#### Code 7.4
```
p, cov = np.polyfit(brains.loc[:, "mass_std"], brains.loc[:, "brain_std"], 1, cov=True)
post = stats.multivariate_normal(p, cov).rvs(1000)
az.summary({k: v for k, v in zip("ba", post.T)}, kind="stats")
```
#### Code 7.5
```
1 - m_7_1.resid.var() / brains.brain_std.var()
```
#### Code 7.6
```
def R2_is_bad(model):
    return 1 - model.resid.var() / brains.brain_std.var()

R2_is_bad(m_7_1)
```
#### Code 7.7
```
m_7_2 = smf.ols("brain_std ~ mass_std + I(mass_std**2)", data=brains).fit()
m_7_2.summary()
```
#### Code 7.8
```
m_7_3 = smf.ols("brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3)", data=brains).fit()
m_7_4 = smf.ols(
"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4)",
data=brains,
).fit()
m_7_5 = smf.ols(
"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5)",
data=brains,
).fit()
```
#### Code 7.9
```
m_7_6 = smf.ols(
"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5) + I(mass_std**6)",
data=brains,
).fit()
```
#### Code 7.10
The chapter gives code to produce only the first panel of Figure 7.3. Here, we produce the entire figure by looping over models 7.1-7.6.
Because these are statsmodels OLS fits, we can predict on a new independent variable directly with `get_prediction`. (For PyMC3 models, the equivalent would use theano SharedVariable objects, as outlined [here](https://docs.pymc.io/notebooks/data_container.html).)
```
models = [m_7_1, m_7_2, m_7_3, m_7_4, m_7_5, m_7_6]
names = ["m_7_1", "m_7_2", "m_7_3", "m_7_4", "m_7_5", "m_7_6"]
mass_plot = np.linspace(33, 62, 100)
mass_new = (mass_plot - brains.mass.mean()) / brains.mass.std()
fig, axs = plt.subplots(3, 2, figsize=[6, 8.5], sharex=True, sharey="row")
for model, name, ax in zip(models, names, axs.flat):
    prediction = model.get_prediction({"mass_std": mass_new})
    pred = prediction.summary_frame(alpha=0.11) * brains.brain.max()
    ax.plot(mass_plot, pred["mean"])
    ax.fill_between(mass_plot, pred["mean_ci_lower"], pred["mean_ci_upper"], alpha=0.3)
    ax.scatter(brains.mass, brains.brain, color="C0", s=15)
    ax.set_title(f"{name}: R^2: {model.rsquared:.2f}", loc="left", fontsize=11)
    if ax.is_first_col():
        ax.set_ylabel("brain volume (cc)")
    if ax.is_last_row():
        ax.set_xlabel("body mass (kg)")
        ax.set_ylim(-500, 2100)
        ax.axhline(0, ls="dashed", c="k", lw=1)
        ax.set_yticks([0, 450, 1300])
    else:
        ax.set_ylim(300, 1600)
        ax.set_yticks([450, 900, 1300])
fig.tight_layout()
```
#### Code 7.11 - this is R specific notation for dropping rows
```
brains_new = brains.drop(brains.index[-1])
# Figure 7.4
# this code taken from PyMC3 port of Rethinking/Chp_06.ipynb
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
ax1.scatter(brains.mass, brains.brain, alpha=0.8)
ax2.scatter(brains.mass, brains.brain, alpha=0.8)
for i in range(len(brains)):
    d_new = brains.drop(brains.index[-i])  # drop each data point in turn
    # first order model
    m0 = smf.ols("brain ~ mass", d_new).fit()
    # need to calculate the regression line; add the intercept term explicitly
    x = sm.add_constant(d_new.mass)  # add constant to new data frame with mass
    x_pred = pd.DataFrame(
        {"mass": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 50)}
    )  # create linspace dataframe
    x_pred2 = sm.add_constant(x_pred)  # add constant to newly created linspace dataframe
    y_pred = m0.predict(x_pred2)  # calculate predicted values
    ax1.plot(x_pred, y_pred, "gray", alpha=0.5)
    ax1.set_xlabel("body mass (kg)", fontsize=12)
    ax1.set_ylabel("brain volume (cc)", fontsize=12)
    ax1.set_title("Underfit model")
    # fifth order model
    m1 = smf.ols(
        "brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)", data=d_new
    ).fit()
    x = sm.add_constant(d_new.mass)
    x_pred = pd.DataFrame(
        {"mass": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 200)}
    )
    x_pred2 = sm.add_constant(x_pred)
    y_pred = m1.predict(x_pred2)  # calculate predicted values from fitted model
    ax2.plot(x_pred, y_pred, "gray", alpha=0.5)
    ax2.set_xlim(32, 62)
    ax2.set_ylim(-250, 2200)
    ax2.set_xlabel("body mass (kg)", fontsize=12)
    ax2.set_ylabel("brain volume (cc)", fontsize=12)
    ax2.set_title("Overfit model")
```
#### Code 7.12
```
p = np.array([0.3, 0.7])
-np.sum(p * np.log(p))
# Figure 7.5
p = np.array([0.3, 0.7])
q = np.arange(0.01, 1, 0.01)
DKL = np.sum(p * np.log(p / np.array([q, 1 - q]).T), 1)
plt.plot(q, DKL)
plt.xlabel("q[1]")
plt.ylabel("Divergence of q from p")
plt.axvline(0.3, ls="dashed", color="k")
plt.text(0.315, 1.22, "q = p");
```
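As a cross-check on the hand-rolled divergence above, `scipy.stats.entropy` computes the same KL divergence when given two distributions; a minimal sketch:

```python
import numpy as np
from scipy.stats import entropy

p = np.array([0.3, 0.7])
q = np.array([0.25, 0.75])

# entropy(p, q) returns sum(p * log(p / q)), i.e. D_KL(p || q)
dkl = np.sum(p * np.log(p / q))
assert np.isclose(entropy(p, q), dkl)

# the divergence is zero exactly when q matches p
assert np.isclose(entropy(p, p), 0.0)
```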
#### Code 7.13 & 7.14
```
n_samples = 3000
intercept, slope = stats.multivariate_normal(m_7_1.params, m_7_1.cov_params()).rvs(n_samples).T
pred = intercept + slope * brains.mass_std.values.reshape(-1, 1)
n, ns = pred.shape
# PyMC3 does not have a way to calculate LPPD directly, so we use the approach from 7.14
sigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5
ll = np.zeros((n, ns))
for s in range(ns):
    logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])
    ll[:, s] = logprob
lppd = np.zeros(n)
for i in range(n):
    lppd[i] = logsumexp(ll[i]) - np.log(ns)
lppd
```
#### Code 7.15
```
# make an lppd function that can be applied to all models (from code above)
def lppd(model, n_samples=1e4):
    n_samples = int(n_samples)
    pars = stats.multivariate_normal(model.params, model.cov_params()).rvs(n_samples).T
    dmat = dmatrix(
        model.model.data.design_info, brains, return_type="dataframe"
    ).values  # get model design matrix
    pred = dmat.dot(pars)
    n, ns = pred.shape
    # this approach for calculating lppd is from Code 7.14
    sigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5
    ll = np.zeros((n, ns))
    for s in range(ns):
        logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])
        ll[:, s] = logprob
    lppd = np.zeros(n)
    for i in range(n):
        lppd[i] = logsumexp(ll[i]) - np.log(ns)
    return lppd

# model 7_6 does not work with OLS because its covariance matrix is not finite.
lppds = np.array(list(map(lppd, models[:-1], [1000] * len(models[:-1]))))
lppds.sum(1)
```
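The `logsumexp` in the function above is not just a convenience — averaging raw densities underflows for very negative log-likelihoods. A minimal illustration:

```python
import numpy as np
from scipy.special import logsumexp

ll = np.array([-1000.0, -1001.0, -1002.0])  # log-likelihoods far below float range

# naive log of the mean density underflows to -inf ...
with np.errstate(divide="ignore"):
    naive = np.log(np.exp(ll).mean())
assert naive == -np.inf

# ... while the logsumexp route recovers a finite, correct value
stable = logsumexp(ll) - np.log(len(ll))
print(stable)  # approximately -1000.69
```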
#### Code 7.16
This relies on the `sim.train.test` function in the `rethinking` package. [This](https://github.com/rmcelreath/rethinking/blob/master/R/sim_train_test.R) is the original function.
The python port of this function below is from [Rethinking/Chp_06](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) Code 6.12.
```
def sim_train_test(N=20, k=3, rho=[0.15, -0.4], b_sigma=100):
    n_dim = 1 + len(rho)
    if n_dim < k:
        n_dim = k
    Rho = np.diag(np.ones(n_dim))
    Rho[0, 1 : len(rho) + 1] = rho
    i_lower = np.tril_indices(n_dim, -1)
    Rho[i_lower] = Rho.T[i_lower]
    x_train = stats.multivariate_normal.rvs(cov=Rho, size=N)
    x_test = stats.multivariate_normal.rvs(cov=Rho, size=N)
    # design matrices: an intercept column plus the first k-1 predictors
    mm_train = np.concatenate([np.ones((N, 1)), x_train[:, 1:k]], axis=1)
    mm_test = np.concatenate([np.ones((N, 1)), x_test[:, 1:k]], axis=1)
    # Using pymc3
    with pm.Model() as m_sim:
        vec_V = pm.MvNormal(
            "vec_V",
            mu=0,
            cov=b_sigma * np.eye(k),
            shape=(1, k),
            testval=np.random.randn(1, k) * 0.01,
        )
        mu = pm.Deterministic("mu", pm.math.dot(mm_train, vec_V.T).flatten())
        y = pm.Normal("y", mu=mu, sd=1, observed=x_train[:, 0])
        trace_m_sim = pm.sample(return_inferencedata=True)
    vec = az.summary(trace_m_sim)["mean"][:k]
    vec = np.array(vec).reshape(k, 1)
    dev_train = -2 * np.sum(stats.norm.logpdf(x_train[:, 0], loc=np.matmul(mm_train, vec).flatten(), scale=1))
    dev_test = -2 * np.sum(stats.norm.logpdf(x_test[:, 0], loc=np.matmul(mm_test, vec).flatten(), scale=1))
    return dev_train, dev_test
n = 20
tries = 10
param = 6
r = np.zeros(shape=(param - 1, 4))
for j in range(2, param + 1):
    print(j)
    train, test = [], []  # deviances for this number of parameters
    for i in range(1, tries + 1):
        tr, te = sim_train_test(N=n, k=j)  # fit a model with j parameters
        train.append(tr)
        test.append(te)
    r[j - 2, :] = (
        np.mean(train),
        np.std(train, ddof=1),
        np.mean(test),
        np.std(test, ddof=1),
    )
```
#### Code 7.17
Does not apply because multi-threading is automatic in PyMC3.
#### Code 7.18
```
num_param = np.arange(2, param + 1)
plt.figure(figsize=(10, 6))
plt.scatter(num_param, r[:, 0], color="C0")
plt.xticks(num_param)
for j in range(param - 1):
    plt.vlines(
        num_param[j],
        r[j, 0] - r[j, 1],
        r[j, 0] + r[j, 1],
        color="mediumblue",
        zorder=-1,
        alpha=0.80,
    )
plt.scatter(num_param + 0.1, r[:, 2], facecolors="none", edgecolors="k")
for j in range(param - 1):
    plt.vlines(
        num_param[j] + 0.1,
        r[j, 2] - r[j, 3],
        r[j, 2] + r[j, 3],
        color="k",
        zorder=-2,
        alpha=0.70,
    )
dist = 0.20
plt.text(num_param[1] - dist, r[1, 0] - dist, "in", color="C0", fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] - dist, "out", color="k", fontsize=13)
plt.text(num_param[1] + dist, r[1, 2] + r[1, 3] - dist, "+1 SD", color="k", fontsize=10)
plt.text(num_param[1] + dist, r[1, 2] - r[1, 3] - dist, "-1 SD", color="k", fontsize=10)
plt.xlabel("Number of parameters", fontsize=14)
plt.ylabel("Deviance", fontsize=14)
plt.title(f"N = {n}", fontsize=14)
plt.show()
```
These uncertainties are a *lot* larger than in the book... MCMC vs OLS again?
#### Code 7.19
7.19 to 7.25 transcribed directly from 6.15-6.20 in [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb).
```
data = pd.read_csv("Data/cars.csv", sep=",", index_col=0)
with pm.Model() as m:
    a = pm.Normal("a", mu=0, sd=100)
    b = pm.Normal("b", mu=0, sd=10)
    sigma = pm.Uniform("sigma", 0, 30)
    mu = pm.Deterministic("mu", a + b * data["speed"])
    dist = pm.Normal("dist", mu=mu, sd=sigma, observed=data["dist"])
    m = pm.sample(5000, tune=10000)  # note: the trace is assigned to m, shadowing the model; later cells index it as m["a"]
```
#### Code 7.20
```
n_samples = 1000
n_cases = data.shape[0]
logprob = np.zeros((n_cases, n_samples))
for s in range(0, n_samples):
    mu = m["a"][s] + m["b"][s] * data["speed"]
    p_ = stats.norm.logpdf(data["dist"], loc=mu, scale=m["sigma"][s])
    logprob[:, s] = p_
```
#### Code 7.21
```
n_cases = data.shape[0]
lppd = np.zeros(n_cases)
for a in range(n_cases):  # start at 0 so the first case is not skipped
    lppd[a] = logsumexp(logprob[a]) - np.log(n_samples)
```
#### Code 7.22
```
pWAIC = np.zeros(n_cases)
for i in range(n_cases):  # start at 0 so the first case is not skipped
    pWAIC[i] = np.var(logprob[i])
```
#### Code 7.23
```
-2 * (sum(lppd) - sum(pWAIC))
```
#### Code 7.24
```
waic_vec = -2 * (lppd - pWAIC)
(n_cases * np.var(waic_vec)) ** 0.5
```
#### Setup for Code 7.25+
Have to reproduce m6.6-m6.8 from Code 6.13-6.17 in Chapter 6
```
# number of plants
N = 100
# simulate initial heights
h0 = np.random.normal(10, 2, N)
# assign treatments and simulate fungus and growth
treatment = np.repeat([0, 1], N // 2)  # integer division: np.repeat needs integer repeats
fungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)
h1 = h0 + np.random.normal(5 - 3 * fungus, size=N)
# compose a clean data frame
d = pd.DataFrame.from_dict({"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus})
with pm.Model() as m_6_6:
    p = pm.Lognormal("p", 0, 0.25)
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_6_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_6_7:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    bf = pm.Normal("bf", 0, 0.5)
    p = a + bt * d.treatment + bf * d.fungus
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_7_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_6_8:
    a = pm.Normal("a", 0, 0.2)
    bt = pm.Normal("bt", 0, 0.5)
    p = a + bt * d.treatment
    mu = pm.Deterministic("mu", p * d.h0)
    sigma = pm.Exponential("sigma", 1)
    h1 = pm.Normal("h1", mu=mu, sigma=sigma, observed=d.h1)
    m_6_8_trace = pm.sample(return_inferencedata=True)
```
#### Code 7.25
```
az.waic(m_6_7_trace, m_6_7, scale="deviance")
```
#### Code 7.26
```
compare_df = az.compare(
{
"m_6_6": m_6_6_trace,
"m_6_7": m_6_7_trace,
"m_6_8": m_6_8_trace,
},
method="pseudo-BMA",
ic="waic",
scale="deviance",
)
compare_df
```
#### Code 7.27
```
waic_m_6_7 = az.waic(m_6_7_trace, pointwise=True, scale="deviance")
waic_m_6_8 = az.waic(m_6_8_trace, pointwise=True, scale="deviance")
# pointwise values are stored in the waic_i attribute.
diff_m_6_7_m_6_8 = waic_m_6_7.waic_i - waic_m_6_8.waic_i
n = len(diff_m_6_7_m_6_8)
np.sqrt(n * np.var(diff_m_6_7_m_6_8)).values
```
#### Code 7.28
```
40.0 + np.array([-1, 1]) * 10.4 * 2.6
```
#### Code 7.29
```
az.plot_compare(compare_df);
```
#### Code 7.30
```
waic_m_6_6 = az.waic(m_6_6_trace, pointwise=True, scale="deviance")
diff_m6_6_m6_8 = waic_m_6_6.waic_i - waic_m_6_8.waic_i
n = len(diff_m6_6_m6_8)
np.sqrt(n * np.var(diff_m6_6_m6_8)).values
```
#### Code 7.31
dSE is calculated by compare above, but `rethinking` produces a pairwise comparison. This is not implemented in `arviz`, but we can hack it together:
```
dataset_dict = {"m_6_6": m_6_6_trace, "m_6_7": m_6_7_trace, "m_6_8": m_6_8_trace}
# compare all models
s0 = az.compare(dataset_dict, ic="waic", scale="deviance")["dse"]
# the output compares each model to the 'best' model - i.e. two models are compared to one.
# to complete a pair-wise comparison we need to compare the remaining two models.
# to do this, remove the 'best' model from the input data
del dataset_dict[s0.index[0]]
# re-run compare with the remaining two models
s1 = az.compare(dataset_dict, ic="waic", scale="deviance")["dse"]
# s0 compares two models to one model, and s1 compares the remaining two models to each other
# now we just need to wrangle them together!
# convert them both to dataframes, setting the name to the 'best' model in each `compare` output.
# (i.e. the name is the model that others are compared to)
df_0 = s0.to_frame(name=s0.index[0])
df_1 = s1.to_frame(name=s1.index[0])
# merge these dataframes to create a pairwise comparison
pd.merge(df_0, df_1, left_index=True, right_index=True)
```
**Note:** this works for three models, but will get increasingly hack-y with additional models. The function below can be applied to *n* models:
```
def pairwise_compare(dataset_dict, metric="dse", **kwargs):
    """
    Calculate pairwise comparison of models in dataset_dict.

    Parameters
    ----------
    dataset_dict : dict
        A dict containing two or more {'name': pymc3.backends.base.MultiTrace}
        items.
    metric : str
        The name of the metric to be calculated. Can be any valid column output
        by `arviz.compare`. Note that this may change depending on the **kwargs
        that are specified.
    kwargs
        Arguments passed to `arviz.compare`
    """
    data_dict = dataset_dict.copy()
    dicts = []
    while len(data_dict) > 1:
        c = az.compare(data_dict, **kwargs)[metric]
        dicts.append(c.to_frame(name=c.index[0]))
        del data_dict[c.index[0]]
    return pd.concat(dicts, axis=1)
dataset_dict = {"m_6_6": m_6_6_trace, "m_6_7": m_6_7_trace, "m_6_8": m_6_8_trace}
pairwise_compare(dataset_dict, metric="dse", ic="waic", scale="deviance")
```
#### Code 7.32
```
d = pd.read_csv("Data/WaffleDivorce.csv", delimiter=";")
d["A"] = stats.zscore(d["MedianAgeMarriage"])
d["D"] = stats.zscore(d["Divorce"])
d["M"] = stats.zscore(d["Marriage"])
with pm.Model() as m_5_1:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    mu = a + bA * d["A"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_1_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_5_2:
    a = pm.Normal("a", 0, 0.2)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_2_trace = pm.sample(return_inferencedata=True)

with pm.Model() as m_5_3:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bA * d["A"] + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.Normal("D", mu, sigma, observed=d["D"])
    m_5_3_trace = pm.sample(return_inferencedata=True)
```
#### Code 7.33
```
az.compare(
{"m_5_1": m_5_1_trace, "m_5_2": m_5_2_trace, "m_5_3": m_5_3_trace},
scale="deviance",
)
```
#### Code 7.34
```
psis_m_5_3 = az.loo(m_5_3_trace, pointwise=True, scale="deviance")
waic_m_5_3 = az.waic(m_5_3_trace, pointwise=True, scale="deviance")
# Figure 7.10
plt.scatter(psis_m_5_3.pareto_k, waic_m_5_3.waic_i)
plt.xlabel("PSIS Pareto k")
plt.ylabel("WAIC");
# Figure 7.11
v = np.linspace(-4, 4, 100)
g = stats.norm(loc=0, scale=1)
t = stats.t(df=2, loc=0, scale=1)
fig, (ax, lax) = plt.subplots(1, 2, figsize=[8, 3.5])
ax.plot(v, g.pdf(v), color="b")
ax.plot(v, t.pdf(v), color="k")
lax.plot(v, -g.logpdf(v), color="b")
lax.plot(v, -t.logpdf(v), color="k");
```
#### Code 7.35
```
with pm.Model() as m_5_3t:
    a = pm.Normal("a", 0, 0.2)
    bA = pm.Normal("bA", 0, 0.5)
    bM = pm.Normal("bM", 0, 0.5)
    mu = a + bA * d["A"] + bM * d["M"]
    sigma = pm.Exponential("sigma", 1)
    D = pm.StudentT("D", 2, mu, sigma, observed=d["D"])
    m_5_3t_trace = pm.sample(return_inferencedata=True)
az.loo(m_5_3t_trace, pointwise=True, scale="deviance")
az.plot_forest([m_5_3_trace, m_5_3t_trace], model_names=["m_5_3", "m_5_3t"], figsize=[6, 3.5]);
%load_ext watermark
%watermark -n -u -v -iv -w
```
MIT License
Copyright (c) 2017 Erik Linder-Norén
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# Please make sure you have at least TensorFlow version 1.12 installed; if not, uncomment and run the
# pip command below to upgrade. When in a Jupyter environment (especially IBM Watson Studio),
# don't forget to restart the kernel afterwards.
import tensorflow as tf
tf.__version__
# !pip install --upgrade tensorflow
from __future__ import print_function, division
from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
import matplotlib.pyplot as plt
import sys
import numpy as np
img_rows = 28
img_cols = 28
channels = 1
latent_dim = 100
img_shape = (img_rows, img_cols, channels)
def build_generator():
    model = Sequential()
    model.add(Dense(128 * 7 * 7, activation="relu", input_dim=latent_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Activation("relu"))
    model.add(Conv2D(channels, kernel_size=3, padding="same"))
    model.add(Activation("tanh"))
    model.summary()
    noise = Input(shape=(latent_dim,))
    img = model(noise)
    return Model(noise, img)
def build_discriminator():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=img_shape, padding="same"))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
    model.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(128, kernel_size=3, strides=2, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(256, kernel_size=3, strides=1, padding="same"))
    model.add(BatchNormalization(momentum=0.8))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    img = Input(shape=img_shape)
    validity = model(img)
    return Model(img, validity)
optimizer = Adam(0.0002, 0.5)
# Build and compile the discriminator
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
# Build the generator
generator = build_generator()
# The generator takes noise as input and generates imgs
z = Input(shape=(latent_dim,))
img = generator(z)
# For the combined model we will only train the generator
discriminator.trainable = False
# The discriminator takes generated images as input and determines validity
valid = discriminator(img)
# The combined model (stacked generator and discriminator)
# Trains the generator to fool the discriminator
combined = Model(z, valid)
combined.compile(loss='binary_crossentropy', optimizer=optimizer)
def save_imgs(epoch):
    r, c = 5, 5
    noise = np.random.normal(0, 1, (r * c, latent_dim))
    gen_imgs = generator.predict(noise)
    # Rescale images 0 - 1
    gen_imgs = 0.5 * gen_imgs + 0.5
    fig, axs = plt.subplots(r, c)
    cnt = 0
    for i in range(r):
        for j in range(c):
            axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
            axs[i, j].axis('off')
            cnt += 1
    fig.savefig("images/mnist_%d.png" % epoch)
    plt.close()
def train(epochs, batch_size=128, save_interval=50):
    # Load the dataset
    (X_train, _), (_, _) = mnist.load_data()
    # Rescale -1 to 1
    X_train = X_train / 127.5 - 1.
    X_train = np.expand_dims(X_train, axis=3)
    # Adversarial ground truths
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))
    for epoch in range(epochs):
        # ---------------------
        #  Train Discriminator
        # ---------------------
        # Select a random half of images
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        imgs = X_train[idx]
        # Sample noise and generate a batch of new images
        noise = np.random.normal(0, 1, (batch_size, latent_dim))
        gen_imgs = generator.predict(noise)
        # Train the discriminator (real classified as ones and generated as zeros)
        d_loss_real = discriminator.train_on_batch(imgs, valid)
        d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)
        # ---------------------
        #  Train Generator
        # ---------------------
        # Train the generator (wants discriminator to mistake images as real)
        g_loss = combined.train_on_batch(noise, valid)
        # Plot the progress
        print("%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" % (epoch, d_loss[0], 100 * d_loss[1], g_loss))
        # If at save interval => save generated image samples
        if epoch % save_interval == 0:
            save_imgs(epoch)
!mkdir -p images
train(epochs=4000, batch_size=32, save_interval=50)
ls images
from IPython.display import display
from PIL import Image
path="images/mnist_0.png"
display(Image.open(path))
from IPython.display import display
from PIL import Image
path="images/mnist_3950.png"
display(Image.open(path))
```
# Training and hosting SageMaker Models using the Apache MXNet Gluon API
When a person stands in front of you, your eyes immediately recognize which direction that person is looking (e.g. facing straight toward you or looking somewhere else). This direction is called the head-pose. We are going to develop a deep learning model that estimates the head-pose from an input image of a human head. The **SageMaker Python SDK** makes it easy to train and deploy MXNet models. In this example, we train a ResNet-50 model using the Apache MXNet [Gluon API](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html) and the head-pose dataset.
The task at hand is to train a model using the head-pose dataset so that the trained model is able to classify head-pose into 9 different categories (the combinations of 3 tilt and 3 pan angles).
```
import sys
print(sys.version)
```
### Setup
First we need to define a few variables that will be needed later in the example.
```
from sagemaker import get_execution_role
s3_bucket = '<your S3 bucket>'
headpose_folder = 'headpose'
#Bucket location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://{}/{}/customMXNetcodes'.format(s3_bucket, headpose_folder)
#Bucket location where results of model training are saved.
model_artifacts_location = 's3://{}/{}/artifacts'.format(s3_bucket, headpose_folder)
#IAM execution role that gives SageMaker access to resources in your AWS account.
#We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
```
### The training script
The ``EntryPt-headpose.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted and modified from the Apache MXNet [MNIST tutorial](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/mxnet_mnist) and [HeadPose tutorial](https://)
```
!cat EntryPt-headpose-Gluon.py
```
You may find a similarity between this ``EntryPt-headpose-Gluon.py`` and [Head Pose Gluon Tutorial](https://)
### SageMaker's MXNet estimator class
The SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.
When we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``EntryPt-headpose.py`` script above.
For this example, we will choose one ``ml.p2.xlarge`` instance.
```
from sagemaker.mxnet import MXNet
headpose_estimator = MXNet(entry_point='EntryPt-headpose-Gluon.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
hyperparameters={'learning_rate': 0.0005},
train_max_run = 432000,
train_volume_size=100)
```
The default volume size in a training instance is 30GB. However, the actual free space is much less. Make sure that you have enough free space in the training instance to download your training data (e.g. 100 GB).
```
print(headpose_estimator.train_volume_size)
```
We name this job as **deeplens-sagemaker-headpose**. The ``base_job_name`` will be the prefix of output folders we are going to create.
```
headpose_estimator.base_job_name = 'deeplens-sagemaker-headpose'
```
### Running the Training Job
After we've constructed our MXNet object, we can fit it using data stored in S3.
During training, SageMaker makes this data stored in S3 available in the local filesystem where the headpose script is running. The ```EntryPt-headpose-Gluon.py``` script simply loads the train and test data from disk.
```
%%time
'''
# Load preprocessed data and run the training#
'''
# Head-pose dataset "HeadPoseData_trn_test_x15_py2.pkl" is in the following S3 folder.
dataset_location = 's3://{}/{}/datasets'.format(s3_bucket, headpose_folder)
# You can specify multiple input file directories (i.e. channel_input_dirs) in the dictionary.
# e.g. {'dataset1': dataset1_location, 'dataset2': dataset2_location, 'dataset3': dataset3_location}
# Start training !
headpose_estimator.fit({'dataset': dataset_location})
```
The latest training job name is...
```
print(headpose_estimator.latest_training_job.name)
```
The training is done.
### Creating an inference Endpoint
After training, we use the ``MXNet estimator`` object to build and deploy an ``MXNetPredictor``. This creates a Sagemaker **Endpoint** -- a hosted prediction service that we can use to perform inference.
The arguments to the ``deploy`` function allow us to set the number and type of instances that will be used for the Endpoint. These do not need to be the same as the values we used for the training job. For example, you can train a model on a set of GPU-based instances, and then deploy the Endpoint to a fleet of CPU-based instances. Here we will deploy the model to a single ``ml.c4.xlarge`` instance.
```
from sagemaker.mxnet.model import MXNetModel
'''
You will find the name of training job on the top of the training log.
e.g.
INFO:sagemaker:Creating training-job with name: HeadPose-Gluon-YYYY-MM-DD-HH-MM-SS-XXX
'''
training_job_name = headpose_estimator.latest_training_job.name
sagemaker_model = MXNetModel(model_data= model_artifacts_location + '/{}/output/model.tar.gz'.format(training_job_name),
role=role,
entry_point='EntryPt-headpose-Gluon-wo-cv2.py')
predictor = sagemaker_model.deploy(initial_instance_count=1,
instance_type='ml.c4.xlarge')
```
The request handling behavior of the Endpoint is determined by the ``EntryPt-headpose-Gluon-wo-cv2.py`` script. The difference between ``EntryPt-headpose-Gluon-wo-cv2.py`` and ``EntryPt-headpose-Gluon.py`` is just the OpenCV module (``cv2``). We found that the inference instance does not support ``cv2``. If you use ``EntryPt-headpose-Gluon.py``, the inference instance will return the error ``AllTraffic did not pass the ping health check``.
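One workaround is to replace the ``cv2`` calls in the inference script with Pillow, which is generally available in hosting containers. A hedged sketch (the helper name ``imread_resize`` is ours, for illustration only):

```python
import numpy as np
from PIL import Image

def imread_resize(path, size=(84, 84)):
    """PIL-based stand-in for cv2.imread + cv2.resize.

    Returns a float32 array in BGR channel order to match cv2's convention.
    """
    im = Image.open(path).convert("RGB").resize(size)
    arr = np.asarray(im, dtype=np.float32)
    return arr[:, :, ::-1]  # RGB -> BGR
```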
### Making an inference request
Now that our Endpoint is deployed and we have a ``predictor`` object, we can use it to classify the head-pose of our own head-torso image.
```
import cv2
import numpy as np
import boto3
import os
import matplotlib.pyplot as plt
%matplotlib inline
role = get_execution_role()
import urllib.request
sample_ims_location = 'https://s3.amazonaws.com/{}/{}/testIMs/IMG_1242.JPG'.format(s3_bucket,headpose_folder)
print(sample_ims_location)
def download(url):
    filename = url.split("/")[-1]
    if not os.path.exists(filename):
        urllib.request.urlretrieve(url, filename)
    return cv2.imread(filename)
im_true = download(sample_ims_location)
im = im_true.astype(np.float32)/255 # Normalized
crop_uly = 62
crop_height = 360
crop_ulx = 100
crop_width = 360
im = im[crop_uly:crop_uly + crop_height, crop_ulx:crop_ulx + crop_width]
im_crop = im
plt.imshow(im_crop[:,:,::-1])
plt.show()
im = cv2.resize(im, (84, 84))
plt.imshow(im[:,:,::-1])
plt.show()
im = np.swapaxes(im, 0, 2)
im = np.swapaxes(im, 1, 2)
im = im[np.newaxis, :]
im = (im -0.5) * 2
print(im.shape)
```
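The transpose-and-rescale steps above map an ``(H, W, C)`` image in ``[0, 1]`` to a ``(1, C, H, W)`` tensor in ``[-1, 1]`` — the NCHW layout the model expects. A quick numpy check of the same operations:

```python
import numpy as np

im = np.random.rand(84, 84, 3).astype(np.float32)  # stand-in for the resized crop
im = np.swapaxes(im, 0, 2)      # (H, W, C) -> (C, W, H)
im = np.swapaxes(im, 1, 2)      # (C, W, H) -> (C, H, W)
im = im[np.newaxis, :]          # add batch dimension -> (1, C, H, W)
im = (im - 0.5) * 2             # [0, 1] -> [-1, 1]

assert im.shape == (1, 3, 84, 84)
assert im.min() >= -1 and im.max() <= 1
print(im.shape)
```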
Now we can use the ``predictor`` object to classify the head pose:
```
data = im
prob = predictor.predict(data)
print('Raw prediction result:')
print(prob)
labeled_predictions = list(zip(range(9), prob[0]))  # 9 head-pose classes (3 tilt x 3 pan)
print('Labeled predictions: ')
print(labeled_predictions)
labeled_predictions.sort(key=lambda label_and_prob: 1.0 - label_and_prob[1])
print('Most likely answer: {}'.format(labeled_predictions[0]))
n_grid_cls = 9
n_tilt_cls = 3
pred = labeled_predictions[0][0]
### Tilt Prediction
pred_tilt_pic = pred % n_tilt_cls
### Pan Prediction
pred_pan_pic = pred // n_tilt_cls
extent = 0, im_true.shape[1]-1, im_true.shape[0]-1, 0
Panel_Pred = np.zeros((n_tilt_cls, n_tilt_cls))
Panel_Pred[pred_tilt_pic, pred_pan_pic] = 1
Panel_Pred = np.fliplr(Panel_Pred)
Panel_Pred = np.flipud(Panel_Pred)
plt.imshow(im_true[:,:,[2,1,0]], extent=extent)
plt.imshow(Panel_Pred, cmap=plt.cm.Blues, alpha=.2, interpolation='nearest', extent=extent)
plt.axis('off')
arrw_mg = 100
arrw_x_rad = 1 * (prob[0][0] + prob[0][1] + prob[0][2] - prob[0][6] -prob[0][7] - prob[0][8]) * 90 * np.pi / 180.
arrw_y_rad = 1 * (prob[0][0] + prob[0][3] + prob[0][6] - prob[0][2] -prob[0][5] - prob[0][8]) * 90 * np.pi / 180.
plt.arrow(im_true.shape[1]//2, im_true.shape[0]//2,
np.sin(arrw_x_rad) * arrw_mg, np.sin(arrw_y_rad) * arrw_mg,
head_width=10, head_length=10, fc='b', ec='b')
plt.show()
```
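The modulo/floor-divide decoding above maps a flat class index onto the 3x3 (tilt, pan) grid; a quick check that the mapping is a bijection:

```python
# decode each of the 9 class indices into its (tilt, pan) grid cell
n_tilt_cls = 3
cells = [(p % n_tilt_cls, p // n_tilt_cls) for p in range(9)]

# the 9 classes cover the full 3x3 (tilt, pan) grid exactly once
assert sorted(set(cells)) == [(t, pan) for t in range(3) for pan in range(3)]
print(cells)
```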
# (Optional) Delete the Endpoint
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
print("Endpoint name: " + predictor.endpoint)
import sagemaker
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
# End
# Kaggle San Francisco Crime Classification
## Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
### Environment and Data
```
# Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
```
### Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
```
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', axis=1)
y = df.category.to_numpy()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.to_numpy()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
####
X = np.around(X, decimals=2)
####
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]
mini_train_data, mini_train_labels = X[:200000], y[:200000]
mini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]
crime_labels = list(set(y))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))
print(len(train_data),len(train_labels))
print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
print(len(test_data),len(test_labels))
```
### Logistic Regression
###### Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following parameters: penalty (l1 or l2), C (inverse of regularization strength), and solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag').
###### Model calibration:
See above
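The tuning described above can be sketched with scikit-learn's `GridSearchCV`. The grid below is illustrative only — a toy dataset stands in for `mini_train_data`/`mini_train_labels`, and the values searched are assumptions, not the grid we actually ran:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy data standing in for mini_train_data / mini_train_labels
X_demo, y_demo = make_classification(n_samples=300, n_features=8, random_state=0)

# Illustrative grid over the parameters listed above; penalty is left at the
# default l2 because l1 is not supported by the lbfgs/newton-cg solvers
param_grid = {
    'C': [0.1, 1.0, 10.0],
    'solver': ['lbfgs', 'newton-cg'],
}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                      scoring='neg_log_loss', cv=3)
search.fit(X_demo, y_demo)
print(search.best_params_)
```

Scoring on `neg_log_loss` matches the multi-class log loss metric used throughout this notebook.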
## Fit the Best LR Parameters
```
bestLR = LogisticRegression(penalty='l2', solver='newton-cg', tol=0.01, C=500)
bestLR.fit(mini_train_data, mini_train_labels)
bestLRPredictions = bestLR.predict(mini_dev_data)
bestLRPredictionProbabilities = bestLR.predict_proba(mini_dev_data)
print("Multi-class Log Loss:", log_loss(y_true = mini_dev_labels, y_pred = bestLRPredictionProbabilities, \
labels = crime_labels_mini_dev), "\n\n")
pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()
```
## Error Analysis: Calibration
```
#clf_probabilities, clf_predictions, labels
def error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):
"""inputs:
clf_probabilities = clf.predict_proba(dev_data)
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
#buckets = [0.05, 0.15, 0.3, 0.5, 0.8]
#buckets = [0.15, 0.25, 0.3, 1.0]
correct = [0 for i in buckets]
total = [0 for i in buckets]
lLimit = 0
uLimit = 0
for i in range(len(buckets)):
uLimit = buckets[i]
for j in range(clf_probabilities.shape[0]):
if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):
if clf_predictions[j] == labels[j]:
correct[i] += 1
total[i] += 1
lLimit = uLimit
print(sum(correct))
print(sum(total))
print(correct)
print(total)
#here we report the classifier accuracy for each posterior probability bucket
accuracies = []
for k in range(len(buckets)):
print(1.0*correct[k]/total[k])
accuracies.append(1.0*correct[k]/total[k])
print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \
%(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))
plt.plot(buckets,accuracies)
plt.title("Calibration Analysis")
plt.xlabel("Posterior Probability")
plt.ylabel("Classifier Accuracy")
return buckets, accuracies
#i think you'll need to look at how the posteriors are distributed in order to set the best bins in 'buckets'
pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()
buckets = [0.15, 0.25, 0.3, 1.0]
calibration_buckets, calibration_accuracies = error_analysis_calibration(buckets, clf_probabilities=bestLRPredictionProbabilities, \
clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
```
## Error Analysis: Classification Report
```
def error_analysis_classification_report(clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
print('Classification Report:')
report = classification_report(labels, clf_predictions)
print(report)
return report
lr_classification_report = error_analysis_classification_report(clf_predictions=bestLRPredictions, \
                                                                labels=mini_dev_labels)
```
## Error Analysis: Confusion Matrix
```
crime_labels_mini_dev
def error_analysis_confusion_matrix(label_names, clf_predictions, labels):
"""inputs:
clf_predictions = clf.predict(dev_data)
labels = dev_labels"""
cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))
cm.columns=label_names
cm.index=label_names
cm.to_csv(path_or_buf="./confusion_matrix.csv")
#print(cm)
return cm
error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \
labels=mini_dev_labels)
```
# Data Cleaning
For each IMU file, clean the IMU data, adjust the labels, and output these as CSV files.
```
%load_ext autoreload
%autoreload 2
%matplotlib notebook
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.ensemble import GradientBoostingClassifier
from matplotlib.lines import Line2D
import joblib
from src.data.labels_util import load_labels, LabelCol, get_labels_file, load_clean_labels, get_workouts
from src.data.imu_util import (
get_sensor_file, ImuCol, load_imu_data, Sensor, fix_epoch, resample_uniformly, time_to_row_range, get_data_chunk,
normalize_with_bounds, data_to_features, list_imu_abspaths, clean_imu_data
)
from src.data.util import find_nearest, find_nearest_index, shift, low_pass_filter, add_col
from src.data.workout import Activity, Workout
from src.data.data import DataState
from src.data.clean_dataset import main as clean_dataset
from src.data.clean_labels import main as clean_labels
from src.visualization.visualize import multiplot
# import data types
from pandas import DataFrame
from numpy import ndarray
from typing import List, Tuple, Optional
```
## Clean IMU data
```
# Clean data (UNCOMMENT when needed)
# clean_dataset()
# Test
cleaned_files = list_imu_abspaths(sensor_type=Sensor.Accelerometer, data_state=DataState.Clean)
def plot_helper(idx, plot):
imu_data = np.load(cleaned_files[idx])
plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
multiplot(len(cleaned_files), plot_helper)
```
## Adjust Labels
A few raw IMU files seem to have corrupted timestamps, causing some labels to not map properly to their data points. We note these labels in the cleaned/adjusted labels; they'll be handled during model fitting.
```
# Adjust labels (UNCOMMENT when needed)
# clean_labels()
# Test
raw_boot_labels: ndarray = load_labels(get_labels_file(Activity.Boot, DataState.Raw), Activity.Boot)
raw_pole_labels: ndarray = load_labels(get_labels_file(Activity.Pole, DataState.Raw), Activity.Pole)
clean_boot_labels: ndarray = load_clean_labels(Activity.Boot)
clean_pole_labels: ndarray = load_clean_labels(Activity.Pole)
# Check cleaned data content
# print('Raw Boot')
# print(raw_boot_labels[:50,])
# print('Clean Boot')
# print(clean_boot_labels[:50,])
# print('Raw Pole')
# print(raw_pole_labels[:50,])
# print('Clean Pole')
# print(clean_pole_labels[:50,])
```
## Examine Data Integrity
Make sure that labels for steps are still reasonable after data cleaning.
**Something to consider**: one area of concern is the end-of-step labels for the pole data. Pole lift-off (the end of a step) occurs at a minimum peak, and resampling, interpolation, and the adjustment of labels may cause the end labels to deviate slightly from that peak. (The graph looks reasonable: a few points sit slightly off the peak, but it's not common.) We make the reasonable assumption that data points are sampled approximately uniformly; this may affect the accuracy of the low-pass filter and (for workout detection) the FFT.
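To illustrate why the uniform-sampling assumption matters, here is a minimal moving-average low-pass filter. This is only a stand-in for the project's `low_pass_filter` helper (whose actual implementation may differ): a fixed-width kernel only corresponds to a fixed cutoff frequency when samples are evenly spaced.

```python
import numpy as np

def moving_average_lowpass(signal: np.ndarray, window: int) -> np.ndarray:
    """Smooth a uniformly sampled signal with a centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode='same')

# A step with high-frequency noise, sampled uniformly
t = np.linspace(0.0, 1.0, 200)
x = (t > 0.5).astype(float) + 0.1 * np.sin(80 * np.pi * t)
smoothed = moving_average_lowpass(x, window=9)
print(smoothed.shape)  # same length as the input
```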
```
# CHOOSE a workout and test type (pole or boot) to examine
workout_idx = 5
selected_labels = clean_boot_labels
workouts: List[Workout] = get_workouts(selected_labels)
print('Number of workouts: %d' % len(workouts))
workout = workouts[workout_idx]
print('Sensor %s' % workout.sensor)
def plot_helper(idx, plot):
# Plot IMU data
imu_data: ndarray = np.load(
get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Clean))
plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
plot.set_xlabel('Epoch Time')
# Plot step labels
for i in range(workout.labels.shape[0]):
start_row, end_row = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]
plot.axvline(x=imu_data[start_row, ImuCol.TIME], color='green', linestyle='dashed')
plot.axvline(x=imu_data[end_row, ImuCol.TIME], color='red', linestyle='dotted')
legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'),
Line2D([], [], color='red', linestyle='dotted', label='Step end')]
plot.legend(handles=legend_items)
# Zoom (REMOVE to see the entire graph)
# plot.set_xlim([1597340600000, 1597340615000])
multiplot(1, plot_helper)
```
Let's compare the cleaned labels to the original labels.
```
# CHOOSE a workout and test type (pole or boot) to examine
workout_idx = 5
selected_labels = raw_pole_labels
workouts: List[Workout] = get_workouts(selected_labels)
print('Number of workouts: %d' % len(workouts))
workout = workouts[workout_idx]
print('Sensor %s' % workout.sensor)
def plot_helper(idx, plot):
# Plot IMU data
imu_data: ndarray = load_imu_data(
get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Raw))
plot.plot(imu_data[:, ImuCol.XACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])
# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])
plot.set_xlabel('Row Index')
# Plot step labels
for i in range(workout.labels.shape[0]):
# find labels rows
start_epoch, end_epoch = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]
start_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(start_epoch))[0]
end_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(end_epoch))[0]
if len(start_row) != 1 or len(end_row) != 1:
print('Bad workout')
return
start_row, end_row = start_row[0], end_row[0]
plot.axvline(x=start_row, color='green', linestyle='dashed')
plot.axvline(x=end_row, color='red', linestyle='dotted')
legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'),
Line2D([], [], color='red', linestyle='dotted', label='Step end')]
plot.legend(handles=legend_items)
# Zoom (REMOVE to see the entire graph)
plot.set_xlim([124500, 125000])
multiplot(1, plot_helper)
```
Make sure NaN labels were persisted during the label data's save/load process.
```
def count_errors(labels: ndarray):
for workout in get_workouts(labels):
boot: ndarray = workout.labels
num_errors = np.count_nonzero(
np.isnan(boot[:, LabelCol.START].astype(np.float64)) | np.isnan(boot[:, LabelCol.END].astype(np.float64)))
if num_errors != 0:
print('Number of labels that could not be mapped for sensor %s: %d' % (workout.sensor, num_errors))
clean_boot_labels: ndarray = load_clean_labels(Activity.Boot)
clean_pole_labels: ndarray = load_clean_labels(Activity.Pole)
print('Boot labels')
count_errors(clean_boot_labels)
print('Pole labels')
count_errors(clean_pole_labels)
```
```
import csv
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from google.colab import files
```
The data for this exercise is available at: https://www.kaggle.com/datamunge/sign-language-mnist/home
Sign up and download to find two CSV files: sign_mnist_train.csv and sign_mnist_test.csv. You will need to upload both of them using the button below before you can continue.
```
uploaded=files.upload()
def get_data(filename):
# You will need to write code that will read the file passed
# into this function. The first line contains the column headers
# so you should ignore it
# Each successive line contains 785 comma separated values between 0 and 255
# The first value is the label
# The rest are the pixel values for that picture
# The function will return 2 np.array types. One with all the labels
# One with all the images
#
# Tips:
# If you read a full line (as 'row') then row[0] has the label
# and row[1:785] has the 784 pixel values
# Take a look at np.array_split to turn the 784 pixels into 28x28
# You are reading in strings, but need the values to be floats
# Check out np.array().astype for a conversion
with open(filename) as training_file:
# Your code starts here
# Your code ends here
return images, labels
training_images, training_labels = get_data('sign_mnist_train.csv')
testing_images, testing_labels = get_data('sign_mnist_test.csv')
# Keep these
print(training_images.shape)
print(training_labels.shape)
print(testing_images.shape)
print(testing_labels.shape)
# Their output should be:
# (27455, 28, 28)
# (27455,)
# (7172, 28, 28)
# (7172,)
# In this section you will have to add another dimension to the data
# So, for example, if your array is (10000, 28, 28)
# You will need to make it (10000, 28, 28, 1)
# Hint: np.expand_dims
training_images = # Your Code Here
testing_images = # Your Code Here
# Create an ImageDataGenerator and do Image Augmentation
train_datagen = ImageDataGenerator(
# Your Code Here
)
validation_datagen = ImageDataGenerator(
    # Your Code Here
)
# Keep These
print(training_images.shape)
print(testing_images.shape)
# Their output should be:
# (27455, 28, 28, 1)
# (7172, 28, 28, 1)
# Define the model
# Use no more than 2 Conv2D and 2 MaxPooling2D
model = tf.keras.models.Sequential([
# Your Code Here
])
# Compile Model.
model.compile(
    # Your Code Here
)
# Train the Model
history = model.fit_generator(
    # Your Code Here
)
model.evaluate(testing_images, testing_labels)
# The output from model.evaluate should be close to:
# [6.92426086682151, 0.56609035]
# Plot the chart for accuracy and loss on both training and validation
import matplotlib.pyplot as plt
acc = # Your Code Here
val_acc = # Your Code Here
loss = # Your Code Here
val_loss = # Your Code Here
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
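For reference, one possible way to fill in the `get_data` skeleton above (a sketch, not the official solution) follows the hints directly: `csv.reader`, `np.array_split` for the 28x28 reshape, and `astype` for the string-to-float conversion.

```python
import csv
import numpy as np

def get_data_sketch(filename):
    """Read a sign-MNIST style CSV into (images, labels) arrays."""
    labels, images = [], []
    with open(filename) as training_file:
        reader = csv.reader(training_file)
        next(reader)  # skip the column headers
        for row in reader:
            labels.append(int(row[0]))                 # first value is the label
            pixels = np.array(row[1:785]).astype(float)
            images.append(np.array_split(pixels, 28))  # 784 values -> 28 rows of 28
    return np.array(images), np.array(labels)
```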
# Getting Started with *pyFTracks* v 1.0
**Romain Beucher, Roderick Brown, Louis Moresi and Fabian Kohlmann**
The Australian National University
The University of Glasgow
Lithodat
*pyFTracks* is a Python package that can be used to predict Fission Track ages and Track lengths distributions for some given thermal-histories and kinetic parameters.
*pyFTracks* is an open-source project licensed under the MIT license. See LICENSE.md for details.
The functionalities provided are similar to Richard Ketcham's HeFTy software.
The main advantage comes from its Python interface which allows users to easily integrate *pyFTracks* with other Python libraries and existing scientific applications.
*pyFTracks* is available on all major operating systems.
For now, *pyFTracks* only provides forward-modelling functionality. Integration with inverse-problem schemes is planned for version 2.0.
# Installation
*pyFTracks* is available on PyPI. The code should work on all major operating systems (Linux, macOS and Windows).
`pip install pyFTracks`
# Importing *pyFTracks*
The recommended way to import pyFTracks is to run:
```
import pyFTracks as FT
```
# Input
## Specifying a Thermal history
```
thermal_history = FT.ThermalHistory(name="My Thermal History",
time=[0., 43., 44., 100.],
temperature=[283., 283., 403., 403.])
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 5))
plt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker="o")
plt.xlim(100., 0.)
plt.ylim(150. + 273.15, 0.+273.15)
plt.ylabel("Temperature (K)")
plt.xlabel("Time (Ma)")
plt.legend()
```
## Predefined thermal histories
We provide predefined thermal histories for convenience.
```
from pyFTracks.thermal_history import WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ
thermal_histories = [WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ]
plt.figure(figsize=(15, 5))
for thermal_history in thermal_histories:
plt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker="o")
plt.xlim(100., 0.)
plt.ylim(150. + 273.15, 0.+273.15)
plt.ylabel("Temperature (K)")
plt.xlabel("Time (Ma)")
plt.legend()
```
## Annealing Models
```
annealing_model = FT.Ketcham1999(kinetic_parameters={"ETCH_PIT_LENGTH": 1.65})
annealing_model.history = WOLF1
annealing_model.calculate_age()
annealing_model = FT.Ketcham2007(kinetic_parameters={"ETCH_PIT_LENGTH": 1.65})
annealing_model.history = WOLF1
annealing_model.calculate_age()
FT.Viewer(history=WOLF1, annealing_model=annealing_model)
```
# Simple Fission-Track data Predictions
```
Ns = [31, 19, 56, 67, 88, 6, 18, 40, 36, 54, 35, 52, 51, 47, 27, 36, 64, 68, 61, 30]
Ni = [41, 22, 63, 71, 90, 7, 14, 41, 49, 79, 52, 76, 74, 66, 39, 44, 86, 90, 91, 41]
zeta = 350.
zeta_err = 10. / 350.
rhod = 1.304
rhod_err = 0.
Nd = 2936
FT.central_age(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.pooled_age(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.single_grain_ages(Ns, Ni, zeta, zeta_err, rhod, Nd)
FT.chi2_test(Ns, Ni)
```
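For context, these functions are built on the standard zeta-calibration age equation, t = (1/λ)·ln(1 + λ·ζ·g·ρd·ΣNs/ΣNi). The sketch below is a simplified stand-in, not pyFTracks' implementation (which also propagates errors): the decay constant, the geometry factor, and the assumed ρd units of 10^6 tracks/cm² are all assumptions — check your data's conventions before relying on the numbers.

```python
import numpy as np

LAMBDA_D = 1.55125e-10  # total decay constant of 238U, per year (assumed value)
G = 0.5                 # geometry factor, external-detector method (assumed)

def pooled_age_sketch(Ns, Ni, zeta, rhod):
    """Zeta-calibration pooled fission-track age, in Ma (simplified, no errors)."""
    ratio = np.sum(Ns) / np.sum(Ni)
    t_years = np.log(1.0 + LAMBDA_D * zeta * G * rhod * 1e6 * ratio) / LAMBDA_D
    return t_years / 1e6  # years -> Ma

Ns = [31, 19, 56, 67, 88, 6, 18, 40, 36, 54, 35, 52, 51, 47, 27, 36, 64, 68, 61, 30]
Ni = [41, 22, 63, 71, 90, 7, 14, 41, 49, 79, 52, 76, 74, 66, 39, 44, 86, 90, 91, 41]
print(round(pooled_age_sketch(Ns, Ni, zeta=350., rhod=1.304), 1))
```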
# Included datasets
*pyFTracks* comes with some sample datasets that can be used for testing and designing general code.
```
from pyFTracks.ressources import Gleadow
from pyFTracks.ressources import Miller
Gleadow
FT.central_age(Gleadow.Ns,
Gleadow.Ni,
Gleadow.zeta,
Gleadow.zeta_error,
Gleadow.rhod,
Gleadow.nd)
Miller
FT.central_age(Miller.Ns,
Miller.Ni,
Miller.zeta,
Miller.zeta_error,
Miller.rhod,
Miller.nd)
Miller.calculate_central_age()
Miller.calculate_pooled_age()
Miller.calculate_ages()
```
```
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as Data
import torchvision
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
path = 'data/mnist/'
raw_train = pd.read_csv(path + 'train.csv')
raw_test = pd.read_csv(path + 'test.csv')
raw_train_array = raw_train.values
raw_test_array = raw_test.values
raw_train_array = np.random.permutation(raw_train_array)
len(raw_train_array)
raw_train = raw_train_array[:40000, :]
raw_valid = raw_train_array[40000:, :]
# train_label = np.eye(10)[raw_train[:,0]]
train_label = raw_train[:,0]
train_data = raw_train[:,1:]
# valid_label = np.eye(10)[raw_valid[:,0]]
valid_label = raw_valid[:,0]
valid_data = raw_valid[:,1:]
train_data.shape
def reshape(data, target_size): return np.reshape(data, target_size)
train_data = reshape(train_data, [40000, 1, 28, 28])
valid_data = reshape(valid_data, [2000, 1, 28, 28])
train_data.shape, train_label.shape, valid_label.shape, valid_data.shape
BATCH_SIZE = 64
LEARNING_RATE = 0.1
EPOCH = 2
#convert to pytorch tensor
train_data = torch.from_numpy(train_data).type(torch.FloatTensor)
train_label = torch.from_numpy(train_label).type(torch.LongTensor)
val_data = torch.from_numpy(valid_data).type(torch.FloatTensor)
val_label = torch.from_numpy(valid_label).type(torch.LongTensor)
train_data.size(),train_label.size(),val_data.size(),val_label.size()
train_dataset = Data.TensorDataset(train_data, train_label)
val_dataset = Data.TensorDataset(val_data, val_label)
train_loader = Data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
val_loader = Data.DataLoader(dataset=val_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
# Python OOP
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
#in_chanel out_chanel kernel stride padding
self.conv1 = nn.Conv2d(1, 32, 3)
self.conv2 = nn.Conv2d(32, 32, 3)
self.conv3 = nn.Conv2d(32, 64, 3)
self.conv4 = nn.Conv2d(64, 64, 3)
self.fc1 = nn.Linear(64*4*4, 512)
self.fc2 = nn.Linear(512, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = F.relu(self.conv3(x))
x = F.max_pool2d(F.relu(self.conv4(x)), 2)
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
cnn = CNN()
print(cnn)
list(cnn.parameters())[2].size() #conv2 weights
#Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=LEARNING_RATE)
#train the model
for epoch in range(EPOCH):
for i, (images, labels) in enumerate(train_loader):
# print(type(images))
# print(type(labels))
images = Variable(images)
labels = Variable(labels)
#print(type(images))
# Forward + Backward + Optimize
optimizer.zero_grad()
outputs = cnn(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
%(epoch+1, EPOCH, i+1, len(train_dataset)//BATCH_SIZE, loss.item()))
# save and load
# save
def save():
pass
#torch.save(net_name, 'net.pkl')
#torch.save(net_name.state_dict(), 'net_params.pkl')
# load
def restore_net():
pass
#net_new = torch.load('net.pkl')
def restore_params():
pass
#net_new_old_params = NET()
#net_new_old_params.load_state_dict(torch.load('net_params.pkl'))
# optimizer variants
# optimizer = torch.optim.SGD()
# torch.optim.Adam
# momentum (m)
# alpha (RMSprop)
# Adam (betas)
```
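The training loop above never measures validation accuracy, even though `val_loader` was built for it. A minimal evaluation pass might look like the sketch below, written without the deprecated `Variable` wrapper; the tiny model and loader here are stand-ins for `cnn` and `val_loader`:

```python
import torch
import torch.nn as nn
import torch.utils.data as Data

def evaluate(model: nn.Module, loader: Data.DataLoader) -> float:
    """Return classification accuracy of `model` over `loader`."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():  # gradients are not needed for evaluation
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total

# Tiny stand-ins for cnn / val_loader so the sketch is self-contained
demo_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
demo_data = torch.randn(32, 1, 28, 28)
demo_labels = torch.randint(0, 10, (32,))
demo_loader = Data.DataLoader(Data.TensorDataset(demo_data, demo_labels), batch_size=8)
print(evaluate(demo_model, demo_loader))
```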
```
import sys
sys.path.append(r'C:\Users\dell-pc\Desktop\ๅคงๅไธ\Computer_Vision\CNN')
from data import *
from network import three_layer_cnn
# data
train_data, test_data = loaddata()
import numpy as np
print(train_data.keys())
print("Number of train items: %d" % len(train_data['images']))
print("Number of test items: %d" % len(test_data['labels']))
print("Edge length of picture : %f" % np.sqrt(len(train_data['images'][0])))
Class = set(train_data['labels'])
print("Total classes: ", Class)
# reshape
def imageC(data_list):
data = np.array(data_list).reshape(len(data_list), 1, 28, 28)
return data
data = imageC(train_data['images'][0:3])
print(np.shape(data))
# test
def test(cnn, test_batchSize):
test_pred = []
for i in range(int(len(test_data['images']) / test_batchSize)):
out = cnn.inference(imageC(test_data['images'][i*test_batchSize:(i+1)*test_batchSize]))
y = np.array(test_data['labels'][i*test_batchSize:(i+1)*test_batchSize])
loss, pred = cnn.svm_loss(out, y, mode='test')
test_pred.extend(pred)
# accuracy
count = 0
for i in range(len(test_pred)):
if test_pred[i] == test_data['labels'][i]:
count += 1
acc = count / len(test_pred)
return acc, loss
# train
print('Begin training ...')
cnn = three_layer_cnn()
cnn.initial()
epoch = 3
batchSize = 30
train_loss = []
train_acc = []
test_loss = []
test_acc = []
for i in range(epoch):
for j in range(int(len(train_data['images']) / batchSize)):
# for j in range(30):
data = imageC(train_data['images'][j*batchSize:(j+1)*batchSize])
label = np.array(train_data['labels'][j*batchSize:(j+1)*batchSize])
output = cnn.forward(data)
loss1, pred = cnn.svm_loss(output, label)
train_loss.append(loss1)
if j % 200 == 0:
# train
count = 0
for k in range(batchSize):
if pred[k] == label[k]:
count += 1
acc1 = count / batchSize
train_acc.append(acc1)
cnn.backward()
if j % 200 == 0:
# test
acc2, loss2 = test(cnn, 10)
test_loss.append(loss2)
test_acc.append(acc2)
print('Epoch: %d; Item: %d; Train loss: %f; Test loss: %f; Train acc: %f; Test acc: %f ' % (i, (j + 1) * batchSize, loss1, loss2, acc1, acc2))
print('End training!')
# test
acc, loss = test(cnn, 10)
print('Accuracy for 3-layers convolutional neural networks: %f' % acc)
# plot
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, FormatStrFormatter
ax = plt.subplot(2, 1, 1)
plt.title('Training loss (Batch Size: 30)')
plt.xlabel('Iteration')
plt.plot(train_loss, 'o')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.xlabel('Iteration(x100)')
plt.plot(train_acc, '-o', label='train')
plt.plot(test_acc, '-o', label='test')
plt.legend(loc='upper right', ncol=1)
plt.gcf().set_size_inches(15, 15)
plt.show()
```
# Preliminary instruction
To follow the code in this chapter, the `yfinance` package must be installed in your environment. If you do not have this installed yet, review Chapter 4 for instructions on how to do so.
# Chapter 9: Risk is a Number
```
# Chapter 9: Risk is a Number
import pandas as pd
import numpy as np
import yfinance as yf
%matplotlib inline
import matplotlib.pyplot as plt
```
#### Mock Strategy: Turtle for dummies
```
# Chapter 9: Risk is a Number
def regime_breakout(df,_h,_l,window):
hl = np.where(df[_h] == df[_h].rolling(window).max(),1,
np.where(df[_l] == df[_l].rolling(window).min(), -1,np.nan))
roll_hl = pd.Series(index= df.index, data= hl).fillna(method= 'ffill')
return roll_hl
def turtle_trader(df, _h, _l, slow, fast):
'''
_slow: Long/Short direction
_fast: trailing stop loss
'''
_slow = regime_breakout(df,_h,_l,window = slow)
_fast = regime_breakout(df,_h,_l,window = fast)
turtle = pd.Series(index=df.index,
                   data=np.where(_slow == 1, np.where(_fast == 1, 1, 0),
                                 np.where(_slow == -1, np.where(_fast == -1, -1, 0), 0)))
return turtle
```
#### Run the strategy with Softbank in absolute
Plot: Softbank turtle for dummies, positions, and returns
Plot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative
```
# Chapter 9: Risk is a Number
ticker = '9984.T' # Softbank
start = '2017-12-31'
end = None
df = round(yf.download(tickers= ticker,start= start, end = end,
interval = "1d",group_by = 'column',auto_adjust = True,
prepost = True, threads = True, proxy = None),0)
slow = 50
fast = 20
df['tt'] = turtle_trader(df, _h= 'High', _l= 'Low', slow= slow,fast= fast)
df['stop_loss'] = np.where(df['tt'] == 1, df['Low'].rolling(fast).min(),
np.where(df['tt'] == -1, df['High'].rolling(fast).max(),np.nan))
df['tt_chg1D'] = df['Close'].diff() * df['tt'].shift()
df['tt_PL_cum'] = df['tt_chg1D'].cumsum()
df['tt_returns'] = df['Close'].pct_change() * df['tt'].shift()
tt_log_returns = np.log(df['Close']/df['Close'].shift()) * df['tt'].shift()
df['tt_cumul'] = tt_log_returns.cumsum().apply(np.exp) - 1
df[['Close','stop_loss','tt','tt_cumul']].plot(secondary_y=['tt','tt_cumul'],
figsize=(20,8),style= ['k','r--','b:','b'],
title= str(ticker)+' Close Price, Turtle L/S entries, cumulative returns')
df[['tt_PL_cum','tt_chg1D']].plot(secondary_y=['tt_chg1D'],
figsize=(20,8),style= ['b','c:'],
title= str(ticker) +' Daily P&L & Cumulative P&L')
```
#### Sharpe ratio: the right mathematical answer to the wrong question
Plot: Softbank cumulative returns and Sharpe ratios: rolling and cumulative
```
# Chapter 9: Risk is a Number
r_f = 0.00001 # risk free returns
def rolling_sharpe(returns, r_f, window):
avg_returns = returns.rolling(window).mean()
std_returns = returns.rolling(window).std(ddof=0)
return (avg_returns - r_f) / std_returns
def expanding_sharpe(returns, r_f):
avg_returns = returns.expanding().mean()
std_returns = returns.expanding().std(ddof=0)
return (avg_returns - r_f) / std_returns
window= 252
df['sharpe_roll'] = rolling_sharpe(returns= tt_log_returns, r_f= r_f, window= window) * 252**0.5
df['sharpe']= expanding_sharpe(returns=tt_log_returns,r_f= r_f) * 252**0.5
df[window:][['tt_cumul','sharpe_roll','sharpe'] ].plot(figsize = (20,8),style = ['b','c-.','c'],grid=True,
title = str(ticker)+' cumulative returns, Sharpe ratios: rolling & cumulative')
```
### Grit Index
This formula was originally invented by Peter G. Martin in 1987 and published as the Ulcer Index in his book The Investor's Guide to Fidelity Funds. Legendary trader Ed Seykota recycled it into the Seykota Lake ratio.
Investors react to drawdowns in three ways:
1. Magnitude: never test the stomach of your investors
2. Frequency: never test the nerves of your investors
3. Duration: never test the patience of your investors
The Grit calculation sequence is as follows:
1. Calculate the peak cumulative returns using rolling().max() or expanding().max()
2. Calculate the drawdowns from that peak and square them
3. Take the square root of the sum of squared drawdowns (the Ulcer Index)
4. Divide the cumulative returns by this surface of losses
Plot: Softbank cumulative returns and Grit ratios: rolling and cumulative
```
# Chapter 9: Risk is a Number
def rolling_grit(cumul_returns, window):
tt_rolling_peak = cumul_returns.rolling(window).max()
drawdown_squared = (cumul_returns - tt_rolling_peak) ** 2
ulcer = drawdown_squared.rolling(window).sum() ** 0.5
return cumul_returns / ulcer
def expanding_grit(cumul_returns):
tt_peak = cumul_returns.expanding().max()
drawdown_squared = (cumul_returns - tt_peak) ** 2
ulcer = drawdown_squared.expanding().sum() ** 0.5
return cumul_returns / ulcer
window = 252
df['grit_roll'] = rolling_grit(cumul_returns= df['tt_cumul'] , window = window)
df['grit'] = expanding_grit(cumul_returns= df['tt_cumul'])
df[window:][['tt_cumul','grit_roll', 'grit'] ].plot(figsize = (20,8),
secondary_y = 'tt_cumul',style = ['b','g-.','g'],grid=True,
title = str(ticker) + ' cumulative returns & Grit Ratios: rolling & cumulative '+ str(window) + ' days')
```
### Common Sense Ratio
1. Risk metric for trend following strategies: profit ratio, gain-to-pain ratio
2. Risk metric for mean-reversion strategies: tail ratio
3. Combined risk metric: profit ratio * tail ratio
Plot: Cumulative returns and common sense ratios: cumulative and rolling
```
# Chapter 9: Risk is a Number
def rolling_profits(returns, window):
    profit_roll = returns.copy()
    profit_roll[profit_roll < 0] = 0
    profit_roll_sum = profit_roll.rolling(window).sum().ffill()
    return profit_roll_sum

def rolling_losses(returns, window):
    loss_roll = returns.copy()
    loss_roll[loss_roll > 0] = 0
    loss_roll_sum = loss_roll.rolling(window).sum().ffill()
    return loss_roll_sum

def expanding_profits(returns):
    profit_roll = returns.copy()
    profit_roll[profit_roll < 0] = 0
    profit_roll_sum = profit_roll.expanding().sum().ffill()
    return profit_roll_sum

def expanding_losses(returns):
    loss_roll = returns.copy()
    loss_roll[loss_roll > 0] = 0
    loss_roll_sum = loss_roll.expanding().sum().ffill()
    return loss_roll_sum

def profit_ratio(profits, losses):
    pr = profits.ffill() / abs(losses.ffill())
    return pr

def rolling_tail_ratio(cumul_returns, window, percentile, limit):
    left_tail = np.abs(cumul_returns.rolling(window).quantile(percentile))
    right_tail = cumul_returns.rolling(window).quantile(1 - percentile)
    np.seterr(all='ignore')
    tail = np.maximum(np.minimum(right_tail / left_tail, limit), -limit)
    return tail

def expanding_tail_ratio(cumul_returns, percentile, limit):
    left_tail = np.abs(cumul_returns.expanding().quantile(percentile))
    right_tail = cumul_returns.expanding().quantile(1 - percentile)
    np.seterr(all='ignore')
    tail = np.maximum(np.minimum(right_tail / left_tail, limit), -limit)
    return tail

def common_sense_ratio(pr, tr):
    return pr * tr
```
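Before applying them to real returns, the profit/loss helpers can be checked on a handful of toy log returns. This is a self-contained sketch (values are illustrative, not from the Softbank data used above):

```
import pandas as pd

def expanding_profits(returns):
    profits = returns.copy()
    profits[profits < 0] = 0
    return profits.expanding().sum()

def expanding_losses(returns):
    losses = returns.copy()
    losses[losses > 0] = 0
    return losses.expanding().sum()

def profit_ratio(profits, losses):
    # gain-to-pain: cumulative gains over absolute cumulative losses
    return profits / abs(losses)

returns = pd.Series([0.02, -0.01, 0.03, -0.02, 0.01])
pr = profit_ratio(expanding_profits(returns), expanding_losses(returns))

# total gains 0.06 vs total losses 0.03 -> final profit ratio of about 2
print(pr.iloc[-1])
```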
#### Plot: Cumulative returns and profit ratios: cumulative and rolling
```
# Chapter 9: Risk is a Number
window = 252
df['pr_roll'] = profit_ratio(profits=rolling_profits(returns=tt_log_returns, window=window),
                             losses=rolling_losses(returns=tt_log_returns, window=window))
df['pr'] = profit_ratio(profits=expanding_profits(returns=tt_log_returns),
                        losses=expanding_losses(returns=tt_log_returns))
df[window:][['tt_cumul', 'pr_roll', 'pr']].plot(
    figsize=(20, 8), secondary_y=['tt_cumul'], style=['r', 'y', 'y:'], grid=True)
```
#### Plot: Cumulative returns and common sense ratios: cumulative and rolling
```
# Chapter 9: Risk is a Number
window = 252
df['tr_roll'] = rolling_tail_ratio(cumul_returns=df['tt_cumul'],
                                   window=window, percentile=0.05, limit=5)
df['tr'] = expanding_tail_ratio(cumul_returns=df['tt_cumul'], percentile=0.05, limit=5)
df['csr_roll'] = common_sense_ratio(pr=df['pr_roll'], tr=df['tr_roll'])
df['csr'] = common_sense_ratio(pr=df['pr'], tr=df['tr'])
df[window:][['tt_cumul', 'csr_roll', 'csr']].plot(
    secondary_y=['tt_cumul'], style=['b', 'r-.', 'r'], figsize=(20, 8),
    title=str(ticker) + ' cumulative returns, Common Sense Ratios: cumulative & rolling ' + str(window) + ' days')
```
### T-stat of gain expectancy, Van Tharp's System Quality Number (SQN)
Plot: Softbank cumulative returns and t-stat (Van Tharp's SQN): cumulative and rolling
```
# Chapter 9: Risk is a Number
def expectancy(win_rate, avg_win, avg_loss):
    # avg_loss is negative, so this equals win% * avg_win - loss% * abs(avg_loss)
    return win_rate * avg_win + (1 - win_rate) * avg_loss

def t_stat(signal_count, trading_edge):
    sqn = (signal_count ** 0.5) * trading_edge / trading_edge.std(ddof=0)
    return sqn

window = 252
# Trade count
df['trades'] = df.loc[(df['tt'].diff() != 0) & (pd.notnull(df['tt'])), 'tt'].abs().cumsum()
signal_count = df['trades'].ffill()
signal_roll = signal_count.diff(window)
# Rolling t-stat
win_roll = tt_log_returns.copy()
win_roll[win_roll < 0] = np.nan
win_rate_roll = win_roll.rolling(window, min_periods=0).count() / window
avg_win_roll = rolling_profits(returns=tt_log_returns, window=window) / window
avg_loss_roll = rolling_losses(returns=tt_log_returns, window=window) / window
edge_roll = expectancy(win_rate=win_rate_roll, avg_win=avg_win_roll, avg_loss=avg_loss_roll)
df['sqn_roll'] = t_stat(signal_count=signal_roll, trading_edge=edge_roll)
# Cumulative t-stat
tt_win_count = tt_log_returns[tt_log_returns > 0].expanding().count().ffill()
tt_count = tt_log_returns[tt_log_returns != 0].expanding().count().ffill()
win_rate = (tt_win_count / tt_count).ffill()
avg_win = expanding_profits(returns=tt_log_returns) / tt_count
avg_loss = expanding_losses(returns=tt_log_returns) / tt_count
trading_edge = expectancy(win_rate, avg_win, avg_loss).ffill()
df['sqn'] = t_stat(signal_count, trading_edge)
df[window:][['tt_cumul', 'sqn', 'sqn_roll']].plot(
    figsize=(20, 8), secondary_y=['tt_cumul'], grid=True, style=['b', 'y', 'y-.'],
    title=str(ticker) + ' Cumulative Returns and SQN: cumulative & rolling ' + str(window) + ' days')
```
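The expectancy formula itself is easy to verify with round numbers (the values below are purely illustrative):

```
def expectancy(win_rate, avg_win, avg_loss):
    # avg_loss is negative, so this equals win% * avg_win - loss% * abs(avg_loss)
    return win_rate * avg_win + (1 - win_rate) * avg_loss

# 50% win rate, +2% average win, -1% average loss: a +0.5% edge per trade
edge = expectancy(0.5, 0.02, -0.01)
print(edge)
```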
### Robustness score
Combined risk metric:
1. The Grit Index integrates losses throughout the period
2. The CSR combines risks endemic to the two types of strategies in a single measure
3. The t-stat SQN incorporates trading frequency into the trading edge formula to show the most efficient use of capital.
```
# Chapter 9: Risk is a Number
def robustness_score(grit, csr, sqn):
    start_date = max(grit[pd.notnull(grit)].index[0],
                     csr[pd.notnull(csr)].index[0],
                     sqn[pd.notnull(sqn)].index[0])
    score = grit * csr * sqn / (grit[start_date] * csr[start_date] * sqn[start_date])
    return score

df['score_roll'] = robustness_score(grit=df['grit_roll'], csr=df['csr_roll'], sqn=df['sqn_roll'])
df['score'] = robustness_score(grit=df['grit'], csr=df['csr'], sqn=df['sqn'])
df[window:][['tt_cumul', 'score', 'score_roll']].plot(
    secondary_y=['score'], figsize=(20, 6), style=['b', 'k', 'k-.'],
    title=str(ticker) + ' Cumulative Returns and Robustness Score: cumulative & rolling ' + str(window) + ' days')
```
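On a tiny example the normalization in `robustness_score` is easy to follow: the combined score is rebased to 1 at the first date where all three inputs are available (the values below are made up):

```
import numpy as np
import pandas as pd

def robustness_score(grit, csr, sqn):
    # first index at which all three series are non-null
    start_date = max(grit[pd.notnull(grit)].index[0],
                     csr[pd.notnull(csr)].index[0],
                     sqn[pd.notnull(sqn)].index[0])
    return grit * csr * sqn / (grit[start_date] * csr[start_date] * sqn[start_date])

grit = pd.Series([np.nan, 2.0, 4.0])
csr = pd.Series([1.0, 1.0, 2.0])
sqn = pd.Series([np.nan, 1.0, 3.0])
score = robustness_score(grit, csr, sqn)

# score is 1.0 at index 1 (the rebasing point), then 4*2*3 / (2*1*1) = 12 at index 2
print(score.tolist())
```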
# Loops
https://python.sdv.univ-paris-diderot.fr/05_boucles_comparaisons/
Repeating actions
## Iterating over the elements of a list
```
placard = ["farine", "oeufs", "lait", "sucre"]
for ingredient in placard:
    print(ingredient)
```
Remarks:
- The variable *ingredient* is called the *loop variable* and takes a new value at each iteration of the loop.
- The line starting with `for` always ends with `:`
- The instruction block `print(ingredient)` is indented: its contents are shifted to the right.
```
placard = ["farine", "oeufs", "lait", "sucre"]
for ingredient in placard:
    print("J'ajoute un ingrédient :")
    print(ingredient)
print("Les crêpes sont prêtes !")
```
Here, the instruction block of the `for` loop consists of 2 instructions:
```
print("J'ajoute un ingrédient :")
print(ingredient)
```
The instruction `print("Les crêpes sont prêtes !")` is outside the instruction block.
## Iterating over the characters of a string
```
sequence = "ATCG"
for base in sequence:
    print(base)

sequence = "ATCG"
for base in sequence:
    print("La base est : {}".format(base))
```
# Tests
https://python.sdv.univ-paris-diderot.fr/06_tests/
Making decisions
```
nombre = 2
if nombre == 2:
    print("Gagné !")
```
Remarks:
- `:` after `if`
- An instruction block after `if`
## Two-branch tests
```
nombre = 2
if nombre == 2:
    print("Gagné !")
else:
    print("Perdu !")
```
Remarks:
- `:` after `if` and `else`
- An instruction block after `if`
- An instruction block after `else`
## Multi-branch tests
```
base = "T"
if base == "A":
    print("Choix d'une adénine")
elif base == "T":
    print("Choix d'une thymine")
elif base == "C":
    print("Choix d'une cytosine")
elif base == "G":
    print("Choix d'une guanine")
```
A "default" case can also be defined with `else`:
```
base = "P"
if base == "A":
    print("Choix d'une adénine")
elif base == "T":
    print("Choix d'une thymine")
elif base == "C":
    print("Choix d'une cytosine")
elif base == "G":
    print("Choix d'une guanine")
else:
    print("Révise ta biologie !")
```
## Random draw
```
import random
random.choice(["Sandra", "Julie", "Magali", "Benoist", "Hubert"])

base = random.choice(["A", "T", "C", "G"])
if base == "A":
    print("Choix d'une adénine")
elif base == "T":
    print("Choix d'une thymine")
elif base == "C":
    print("Choix d'une cytosine")
elif base == "G":
    print("Choix d'une guanine")
```
Remarks:
- `:` after `if` and `elif`
- An instruction block after `if`
- An instruction block after `elif`
## Watch out for indentation!
```
nombres = [4, 5, 6]
for nb in nombres:
    if nb == 5:
        print("Le test est vrai")
        print("car la variable nb vaut {}".format(nb))

nombres = [4, 5, 6]
for nb in nombres:
    if nb == 5:
        print("Le test est vrai")
    print("car la variable nb vaut {}".format(nb))
```
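The difference between the two indentation variants can be made explicit by collecting what each one would print (a small self-contained sketch):

```
nombres = [4, 5, 6]

# version 1: both prints are inside the `if`, so they run only when nb == 5
out1 = []
for nb in nombres:
    if nb == 5:
        out1.append("Le test est vrai")
        out1.append("car la variable nb vaut {}".format(nb))

# version 2: the second print is inside the `for` but outside the `if`,
# so it runs once for every element of the list
out2 = []
for nb in nombres:
    if nb == 5:
        out2.append("Le test est vrai")
    out2.append("car la variable nb vaut {}".format(nb))

print(len(out1), len(out2))
```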
# Exercises
## A student's grades
Here are a student's grades:
```
notes = [14, 9, 6, 8, 12]
```
Compute the average of these grades.
Use formatted printing to display the average with two decimal places.
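One possible solution sketch (variable names kept in French to match the exercise):

```
notes = [14, 9, 6, 8, 12]

# average = sum of the grades divided by their count
moyenne = sum(notes) / len(notes)

# formatted printing with two decimal places
print("La moyenne est de {:.2f}".format(moyenne))
```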
## Complementary sequence
The list below represents the sequence of one strand of DNA:
```
sequence = ["A","C","G","T","T","A","G","C","T","A","A","C","G"]
```
Write code that transforms this sequence into its complementary sequence.
Reminder: the complementary sequence is obtained by replacing A with T, T with A, C with G, and G with C.
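One possible solution sketch, using a dictionary to map each base to its complement:

```
sequence = ["A", "C", "G", "T", "T", "A", "G", "C", "T", "A", "A", "C", "G"]

# mapping of each base to its complement
complement = {"A": "T", "T": "A", "C": "G", "G": "C"}

seq_comp = [complement[base] for base in sequence]
print(seq_comp)
```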
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
print(os.listdir("../input"))
import time
# import pytorch
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import SGD,Adam,lr_scheduler
from torch.utils.data import random_split
import torchvision
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
# define transformations for train
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(p=.40),
transforms.RandomRotation(30),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
# define transformations for test
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
# define training dataloader
def get_training_dataloader(train_transform, batch_size=128, num_workers=0, shuffle=True):
""" return training dataloader
Args:
train_transform: transforms for the train dataset
batch_size: dataloader batch size
num_workers: dataloader num_workers
shuffle: whether to shuffle the data
Returns: cifar10_training_loader: torch dataloader object
"""
transform_train = train_transform
cifar10_training = torchvision.datasets.CIFAR10(root='.', train=True, download=True, transform=transform_train)
cifar10_training_loader = DataLoader(
cifar10_training, shuffle=shuffle, num_workers=num_workers, batch_size=batch_size)
return cifar10_training_loader
# define test dataloader
def get_testing_dataloader(test_transform, batch_size=128, num_workers=0, shuffle=True):
""" return testing dataloader
Args:
test_transform: transforms for the test dataset
batch_size: dataloader batch size
num_workers: dataloader num_workers
shuffle: whether to shuffle the data
Returns: cifar10_test_loader: torch dataloader object
"""
transform_test = test_transform
cifar10_test = torchvision.datasets.CIFAR10(root='.', train=False, download=True, transform=transform_test)
cifar10_test_loader = DataLoader(
cifar10_test, shuffle=shuffle, num_workers=num_workers, batch_size=batch_size)
return cifar10_test_loader
# implement mish activation function
def f_mish(input):
'''
Applies the mish function element-wise:
mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
'''
return input * torch.tanh(F.softplus(input))
# implement class wrapper for mish activation function
class mish(nn.Module):
'''
Applies the mish function element-wise:
mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
Shape:
- Input: (N, *) where * means, any number of additional
dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = mish()
>>> input = torch.randn(2)
>>> output = m(input)
'''
def __init__(self):
'''
Init method.
'''
super().__init__()
def forward(self, input):
'''
Forward pass of the function.
'''
return f_mish(input)
# implement swish activation function
def f_swish(input):
'''
Applies the swish function element-wise:
swish(x) = x * sigmoid(x)
'''
return input * torch.sigmoid(input)
# implement class wrapper for swish activation function
class swish(nn.Module):
'''
Applies the swish function element-wise:
swish(x) = x * sigmoid(x)
Shape:
- Input: (N, *) where * means, any number of additional
dimensions
- Output: (N, *), same shape as the input
Examples:
>>> m = swish()
>>> input = torch.randn(2)
>>> output = m(input)
'''
def __init__(self):
'''
Init method.
'''
super().__init__()
def forward(self, input):
'''
Forward pass of the function.
'''
return f_swish(input)
class BasicResidualSEBlock(nn.Module):
expansion = 1
def __init__(self, in_channels, out_channels, stride, r=16, activation = 'relu'):
super().__init__()
if activation == 'relu':
f_activation = nn.ReLU(inplace=True)
self.activation = F.relu
if activation == 'swish':
f_activation = swish()
self.activation = f_swish
if activation == 'mish':
f_activation = mish()
self.activation = f_mish
self.residual = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1),
nn.BatchNorm2d(out_channels),
f_activation,
nn.Conv2d(out_channels, out_channels * self.expansion, 3, padding=1),
nn.BatchNorm2d(out_channels * self.expansion),
f_activation
)
self.shortcut = nn.Sequential()
if stride != 1 or in_channels != out_channels * self.expansion:
self.shortcut = nn.Sequential(
nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride),
nn.BatchNorm2d(out_channels * self.expansion)
)
self.squeeze = nn.AdaptiveAvgPool2d(1)
self.excitation = nn.Sequential(
nn.Linear(out_channels * self.expansion, out_channels * self.expansion // r),
f_activation,
nn.Linear(out_channels * self.expansion // r, out_channels * self.expansion),
nn.Sigmoid()
)
def forward(self, x):
shortcut = self.shortcut(x)
residual = self.residual(x)
squeeze = self.squeeze(residual)
squeeze = squeeze.view(squeeze.size(0), -1)
excitation = self.excitation(squeeze)
excitation = excitation.view(residual.size(0), residual.size(1), 1, 1)
x = residual * excitation.expand_as(residual) + shortcut
return self.activation(x)
class BottleneckResidualSEBlock(nn.Module):
expansion = 4
def __init__(self, in_channels, out_channels, stride, r=16, activation = 'relu'):
super().__init__()
if activation == 'relu':
f_activation = nn.ReLU(inplace=True)
self.activation = F.relu
if activation == 'swish':
f_activation = swish()
self.activation = f_swish
if activation == 'mish':
f_activation = mish()
self.activation = f_mish
self.residual = nn.Sequential(
nn.Conv2d(in_channels, out_channels, 1),
nn.BatchNorm2d(out_channels),
f_activation,
nn.Conv2d(out_channels, out_channels, 3, stride=stride, padding=1),
nn.BatchNorm2d(out_channels),
f_activation,
nn.Conv2d(out_channels, out_channels * self.expansion, 1),
nn.BatchNorm2d(out_channels * self.expansion),
f_activation
)
self.squeeze = nn.AdaptiveAvgPool2d(1)
self.excitation = nn.Sequential(
nn.Linear(out_channels * self.expansion, out_channels * self.expansion // r),
f_activation,
nn.Linear(out_channels * self.expansion // r, out_channels * self.expansion),
nn.Sigmoid()
)
self.shortcut = nn.Sequential()
if stride != 1 or in_channels != out_channels * self.expansion:
self.shortcut = nn.Sequential(
nn.Conv2d(in_channels, out_channels * self.expansion, 1, stride=stride),
nn.BatchNorm2d(out_channels * self.expansion)
)
def forward(self, x):
shortcut = self.shortcut(x)
residual = self.residual(x)
squeeze = self.squeeze(residual)
squeeze = squeeze.view(squeeze.size(0), -1)
excitation = self.excitation(squeeze)
excitation = excitation.view(residual.size(0), residual.size(1), 1, 1)
x = residual * excitation.expand_as(residual) + shortcut
return self.activation(x)
class SEResNet(nn.Module):
def __init__(self, block, block_num, class_num=10, activation = 'relu'):
super().__init__()
self.in_channels = 64
if activation == 'relu':
f_activation = nn.ReLU(inplace=True)
self.activation = F.relu
if activation == 'swish':
f_activation = swish()
self.activation = f_swish
if activation == 'mish':
f_activation = mish()
self.activation = f_mish
self.pre = nn.Sequential(
nn.Conv2d(3, 64, 3, padding=1),
nn.BatchNorm2d(64),
f_activation
)
self.stage1 = self._make_stage(block, block_num[0], 64, 1, activation = activation)
self.stage2 = self._make_stage(block, block_num[1], 128, 2, activation = activation)
self.stage3 = self._make_stage(block, block_num[2], 256, 2, activation = activation)
self.stage4 = self._make_stage(block, block_num[3], 512, 2, activation = activation)
self.linear = nn.Linear(self.in_channels, class_num)
def forward(self, x):
x = self.pre(x)
x = self.stage1(x)
x = self.stage2(x)
x = self.stage3(x)
x = self.stage4(x)
x = F.adaptive_avg_pool2d(x, 1)
x = x.view(x.size(0), -1)
x = self.linear(x)
return x
def _make_stage(self, block, num, out_channels, stride, activation = 'relu'):
layers = []
layers.append(block(self.in_channels, out_channels, stride, activation = activation))
self.in_channels = out_channels * block.expansion
while num - 1:
layers.append(block(self.in_channels, out_channels, 1, activation = activation))
num -= 1
return nn.Sequential(*layers)
def seresnet18(activation = 'relu'):
return SEResNet(BasicResidualSEBlock, [2, 2, 2, 2], activation = activation)
def seresnet34(activation = 'relu'):
return SEResNet(BasicResidualSEBlock, [3, 4, 6, 3], activation = activation)
def seresnet50(activation = 'relu'):
return SEResNet(BottleneckResidualSEBlock, [3, 4, 6, 3], activation = activation)
def seresnet101(activation = 'relu'):
return SEResNet(BottleneckResidualSEBlock, [3, 4, 23, 3], activation = activation)
def seresnet152(activation = 'relu'):
return SEResNet(BottleneckResidualSEBlock, [3, 8, 36, 3], activation = activation)
trainloader = get_training_dataloader(train_transform)
testloader = get_testing_dataloader(test_transform)
epochs = 100
batch_size = 128
learning_rate = 0.001
device = torch.device('cuda:0' if torch.cuda.is_available() else "cpu")
device
model = seresnet18(activation = 'mish')
# set loss function
criterion = nn.CrossEntropyLoss()
# set optimizer over all model parameters
optimizer = Adam(model.parameters(), lr=learning_rate)
train_stats = pd.DataFrame(columns = ['Epoch', 'Time per epoch', 'Avg time per step', 'Train loss', 'Train accuracy', 'Train top-3 accuracy','Test loss', 'Test accuracy', 'Test top-3 accuracy'])
#train the model
model.to(device)
steps = 0
running_loss = 0
for epoch in range(epochs):
since = time.time()
train_accuracy = 0
top3_train_accuracy = 0
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
# calculate train top-1 accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
train_accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
# Calculate train top-3 accuracy
np_top3_class = ps.topk(3, dim=1)[1].cpu().numpy()
target_numpy = labels.cpu().numpy()
top3_train_accuracy += np.mean([1 if target_numpy[i] in np_top3_class[i] else 0 for i in range(0, len(target_numpy))])
time_elapsed = time.time() - since
test_loss = 0
test_accuracy = 0
top3_test_accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate test top-1 accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
test_accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
# Calculate test top-3 accuracy
np_top3_class = ps.topk(3, dim=1)[1].cpu().numpy()
target_numpy = labels.cpu().numpy()
top3_test_accuracy += np.mean([1 if target_numpy[i] in np_top3_class[i] else 0 for i in range(0, len(target_numpy))])
print(f"Epoch {epoch+1}/{epochs}.. "
f"Time per epoch: {time_elapsed:.4f}.. "
f"Average time per step: {time_elapsed/len(trainloader):.4f}.. "
f"Train loss: {running_loss/len(trainloader):.4f}.. "
f"Train accuracy: {train_accuracy/len(trainloader):.4f}.. "
f"Top-3 train accuracy: {top3_train_accuracy/len(trainloader):.4f}.. "
f"Test loss: {test_loss/len(testloader):.4f}.. "
f"Test accuracy: {test_accuracy/len(testloader):.4f}.. "
f"Top-3 test accuracy: {top3_test_accuracy/len(testloader):.4f}")
# record epoch statistics (pd.concat replaces the deprecated DataFrame.append)
epoch_stats = {'Epoch': epoch, 'Time per epoch': time_elapsed, 'Avg time per step': time_elapsed/len(trainloader), 'Train loss': running_loss/len(trainloader), 'Train accuracy': train_accuracy/len(trainloader), 'Train top-3 accuracy': top3_train_accuracy/len(trainloader), 'Test loss': test_loss/len(testloader), 'Test accuracy': test_accuracy/len(testloader), 'Test top-3 accuracy': top3_test_accuracy/len(testloader)}
train_stats = pd.concat([train_stats, pd.DataFrame([epoch_stats])], ignore_index=True)
running_loss = 0
model.train()
train_stats.to_csv('train_log_SENet18_Mish.csv')
```
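Independently of PyTorch, the closed forms used above for the two activations can be sanity-checked with the standard library only (the tolerance below is arbitrary):

```
import math

def mish(x):
    # mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
    return x * math.tanh(math.log1p(math.exp(x)))

def swish(x):
    # swish(x) = x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

# both activations pass through the origin and approach the identity for large x
print(mish(0.0), swish(0.0))
print(mish(10.0), swish(10.0))
```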
```
from os import path
# Third-party
import astropy
import astropy.coordinates as coord
from astropy.table import Table, vstack
from astropy.io import fits
import astropy.units as u
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from pyvo.dal import TAPService
from pyia import GaiaData
import gala.coordinates as gc
import scipy.stats
plt.style.use('notebook')
t = Table.read('../data/gd1-all-ps1-red.fits')
# deredden
bands = ['g', 'r', 'i', 'z', 'y']
for band in bands:
t[band] = t[band] - t['A_{}'.format(band)]
g = GaiaData(t)
c = coord.SkyCoord(ra=g.ra, dec=g.dec, pm_ra_cosdec=g.pmra, pm_dec=g.pmdec)
def gd1_dist(phi1):
    # linear distance model along the stream: 10 kpc at phi1 = 0 deg, 7 kpc at phi1 = -60 deg
    m = (10 - 7) / 60
    return (m * phi1.wrap_at(180*u.deg).value + 10) * u.kpc
gd1_c = c.transform_to(gc.GD1)
gd1_c_dist = gc.GD1(phi1=gd1_c.phi1, phi2=gd1_c.phi2,
distance=gd1_dist(gd1_c.phi1),
pm_phi1_cosphi2=gd1_c.pm_phi1_cosphi2,
pm_phi2=gd1_c.pm_phi2,
radial_velocity=[0]*len(gd1_c)*u.km/u.s)
# Correct for reflex motion
v_sun = coord.Galactocentric.galcen_v_sun
observed = gd1_c_dist.transform_to(coord.Galactic)
rep = observed.cartesian.without_differentials()
rep = rep.with_differentials(observed.cartesian.differentials['s'] + v_sun)
gd1_c = coord.Galactic(rep).transform_to(gc.GD1)
wangle = 180*u.deg
pm_mask = ((gd1_c.pm_phi1_cosphi2 < -5*u.mas/u.yr) & (gd1_c.pm_phi1_cosphi2 > -10*u.mas/u.yr) &
(gd1_c.pm_phi2 < 1*u.mas/u.yr) & (gd1_c.pm_phi2 > -2*u.mas/u.yr) &
(g.bp_rp < 1.5*u.mag) & (g.bp_rp > 0*u.mag))
phi_mask_stream = ((np.abs(gd1_c.phi2)<1*u.deg) & (gd1_c.phi1.wrap_at(wangle)>-50*u.deg) &
(gd1_c.phi1.wrap_at(wangle)<-10*u.deg))
phi_mask_off = ((gd1_c.phi2<-2*u.deg) & (gd1_c.phi2>-3*u.deg)) | ((gd1_c.phi2<3*u.deg) & (gd1_c.phi2>2*u.deg))
iso = Table.read('../data/mist_12.0_-1.35.cmd', format='ascii.commented_header', header_start=12)
phasecut = (iso['phase']>=0) & (iso['phase']<3)
iso = iso[phasecut]
# distance modulus
distance_app = 7.8*u.kpc
dm = 5*np.log10((distance_app.to(u.pc)).value)-5
# main sequence + rgb
i_gi = iso['PS_g']-iso['PS_i']
i_g = iso['PS_g']+dm
i_left = i_gi - 0.4*(i_g/28)**5
i_right = i_gi + 0.5*(i_g/28)**5
poly = np.hstack([np.array([i_left, i_g]), np.array([i_right[::-1], i_g[::-1]])]).T
ind = (poly[:,1]<21.3) & (poly[:,1]>17.8)
poly_main = poly[ind]
points = np.array([g.g - g.i, g.g]).T
path_main = mpl.path.Path(poly_main)
cmd_mask = path_main.contains_points(points)
pm1_min = -9*u.mas/u.yr
pm1_max = -4.5*u.mas/u.yr
pm2_min = -1.7*u.mas/u.yr
pm2_max = 1.*u.mas/u.yr
pm_mask = ((gd1_c.pm_phi1_cosphi2 < pm1_max) & (gd1_c.pm_phi1_cosphi2 > pm1_min) &
(gd1_c.pm_phi2 < pm2_max) & (gd1_c.pm_phi2 > pm2_min))
```
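The distance modulus used for the isochrone shift above follows directly from the assumed 7.8 kpc distance; a quick standalone check with plain math (no astropy needed):

```
import math

distance_pc = 7.8e3  # 7.8 kpc expressed in parsecs
dm = 5 * math.log10(distance_pc) - 5
print(round(dm, 2))  # -> 14.46
```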
## Define target fields
```
targets = {}
targets['phi1'] = np.array([-36.35, -39.5, -32.4, -29.8, -29.8])*u.deg
targets['phi2'] = np.array([0.2, 0.2, 1.1, 0, 1])*u.deg
Nf = len(targets['phi1'])
plt.figure(figsize=(10,8))
plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask],
'ko', ms=4)
for i in range(Nf):
c = mpl.patches.Circle((targets['phi1'][i].value, targets['phi2'][i].value),
radius=0.5, fc='none', ec='r', lw=2, zorder=2)
plt.gca().add_patch(c)
plt.gca().set_aspect('equal')
plt.xlim(-45,-25)
plt.ylim(-5,5)
plt.xlabel('$\phi_1$ [deg]')
plt.ylabel('$\phi_2$ [deg]')
plt.tight_layout()
```
### Show overall stream
```
plt.figure(figsize=(13,10))
plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask],
'ko', ms=0.7, alpha=0.7, rasterized=True)
for i in range(Nf):
c = mpl.patches.Circle((targets['phi1'][i].value, targets['phi2'][i].value),
radius=0.5, fc='none', ec='r', lw=1, zorder=2)
plt.gca().add_patch(c)
plt.gca().set_aspect('equal')
plt.xlabel('$\phi_1$ [deg]')
plt.ylabel('$\phi_2$ [deg]')
plt.xlim(-90,10)
plt.ylim(-12,12)
plt.tight_layout()
targets_c = coord.SkyCoord(phi1=targets['phi1'], phi2=targets['phi2'], frame=gc.GD1)
ra_field = targets_c.icrs.ra.to_string(unit=u.hour, sep=':')
dec_field = targets_c.icrs.dec.to_string(unit=u.degree, sep=':')
tfield = Table(np.array([ra_field, dec_field]).T, names=('ra', 'dec'))
tfield.write('../data/GD1_fields_2018B.txt', format='ascii.commented_header', overwrite=True)
tfield
```
## Target priorities
```
iso = Table.read('/home/ana/data/isochrones/panstarrs/mist_12.6_-1.50.cmd',
format='ascii.commented_header', header_start=12)
phasecut = (iso['phase']>=0) & (iso['phase']<3)
iso = iso[phasecut]
# distance modulus
distance_app = 7.8*u.kpc
dm = 5*np.log10((distance_app.to(u.pc)).value)-5
# main sequence + rgb
i_gi = iso['PS_g']-iso['PS_i']
i_g = iso['PS_g']+dm
i_left_narrow = i_gi - 0.4*(i_g/28)**5
i_right_narrow = i_gi + 0.5*(i_g/28)**5
poly_narrow = np.hstack([np.array([i_left_narrow, i_g]), np.array([i_right_narrow[::-1], i_g[::-1]])]).T
i_left_wide = i_gi - 0.6*(i_g/28)**3
i_right_wide = i_gi + 0.7*(i_g/28)**3
poly_wide = np.hstack([np.array([i_left_wide, i_g]), np.array([i_right_wide[::-1], i_g[::-1]])]).T
ind = (poly_wide[:,1]<18.3) & (poly_wide[:,1]>14)
poly_low = poly_wide[ind]
ind = (poly_narrow[:,1]<20.5) & (poly_narrow[:,1]>14)
poly_med = poly_narrow[ind]
ind = (poly_narrow[:,1]<20.5) & (poly_narrow[:,1]>17.5)
poly_high = poly_narrow[ind]
plt.figure(figsize=(5,10))
plt.plot(g.g[phi_mask_stream & pm_mask] - g.i[phi_mask_stream & pm_mask], g.g[phi_mask_stream & pm_mask],
'ko', ms=2, alpha=1, rasterized=True, label='')
plt.plot(i_gi, i_g, 'r-')
pml = mpl.patches.Polygon(poly_low, color='moccasin', alpha=0.4, zorder=2)
plt.gca().add_artist(pml)
pmm = mpl.patches.Polygon(poly_med, color='orange', alpha=0.3, zorder=2)
plt.gca().add_artist(pmm)
pmh = mpl.patches.Polygon(poly_high, color='green', alpha=0.3, zorder=2)
plt.gca().add_artist(pmh)
plt.xlim(-0.2, 1.8)
plt.ylim(21, 13)
plt.xlabel('g - i')
plt.ylabel('g')
plt.tight_layout()
pm1_bmin = -12*u.mas/u.yr
pm1_bmax = 2*u.mas/u.yr
pm2_bmin = -5*u.mas/u.yr
pm2_bmax = 5*u.mas/u.yr
pm_broad_mask = ((gd1_c.pm_phi1_cosphi2 < pm1_bmax) & (gd1_c.pm_phi1_cosphi2 > pm1_bmin) &
(gd1_c.pm_phi2 < pm2_bmax) & (gd1_c.pm_phi2 > pm2_bmin))
plt.plot(gd1_c.pm_phi1_cosphi2[phi_mask_stream].to(u.mas/u.yr),
gd1_c.pm_phi2[phi_mask_stream].to(u.mas/u.yr),
'ko', ms=0.5, alpha=0.5, rasterized=True)
rect_xy = [pm1_bmin.to(u.mas/u.yr).value, pm2_bmin.to(u.mas/u.yr).value]
rect_w = pm1_bmax.to(u.mas/u.yr).value - pm1_bmin.to(u.mas/u.yr).value
rect_h = pm2_bmax.to(u.mas/u.yr).value - pm2_bmin.to(u.mas/u.yr).value
pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='orange', alpha=0.3)
plt.gca().add_artist(pr)
rect_xy = [pm1_min.to(u.mas/u.yr).value, pm2_min.to(u.mas/u.yr).value]
rect_w = pm1_max.to(u.mas/u.yr).value - pm1_min.to(u.mas/u.yr).value
rect_h = pm2_max.to(u.mas/u.yr).value - pm2_min.to(u.mas/u.yr).value
pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='green', alpha=0.3)
plt.gca().add_artist(pr)
plt.xlim(-12,12)
plt.ylim(-12,12)
plt.xlabel('$\mu_{\phi_1}$ [mas yr$^{-1}$]')
plt.ylabel('$\mu_{\phi_2}$ [mas yr$^{-1}$]')
plt.tight_layout()
```
## 2018C proposal
```
path_high = mpl.path.Path(poly_high)
ms_mask = path_high.contains_points(points)
plt.figure(figsize=(13,10))
plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask],
'ko', ms=0.7, alpha=0.7, rasterized=True)
# plt.annotate('Progenitor?', xy=(-13, 0.5), xytext=(-10, 7),
# arrowprops=dict(color='0.3', shrink=0.05, width=1.5, headwidth=6, headlength=8, alpha=0.4),
# fontsize='small')
# plt.annotate('Blob', xy=(-14, -2), xytext=(-14, -10),
# arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4),
# fontsize='small')
plt.annotate('Spur', xy=(-33, 2), xytext=(-42, 7),
arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4),
fontsize='small')
plt.annotate('Gaps', xy=(-40, -2), xytext=(-35, -10),
arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4),
fontsize='small')
plt.annotate('Gaps', xy=(-21, -1), xytext=(-35, -10),
arrowprops=dict(color='0.3', shrink=0.08, width=1.5, headwidth=6, headlength=8, alpha=0.4),
fontsize='small')
# plt.axvline(-55, ls='--', color='0.3', alpha=0.4, dashes=(6,4), lw=2)
# plt.text(-60, 9.5, 'Previously\nundetected', fontsize='small', ha='right', va='top')
pr = mpl.patches.Rectangle([-50, -5], 25, 10, color='none', ec='darkorange', lw=2)
plt.gca().add_artist(pr)
plt.gca().set_aspect('equal')
plt.xlabel('$\phi_1$ [deg]')
plt.ylabel('$\phi_2$ [deg]')
plt.xlim(-90,10)
plt.ylim(-12,12)
plt.tight_layout()
ax_inset = plt.axes([0.2,0.62,0.6,0.2])
plt.sca(ax_inset)
plt.plot(gd1_c.phi1[pm_mask & cmd_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask],
'ko', ms=4, alpha=0.2, rasterized=True, label='All likely GD-1 members')
plt.plot(gd1_c.phi1[pm_mask & cmd_mask & ms_mask].wrap_at(wangle), gd1_c.phi2[pm_mask & cmd_mask & ms_mask],
'ko', ms=4, alpha=1, rasterized=True, label='High priority targets')
plt.text(-0.07, 0.5, 'GD-1 region for\nHectochelle follow-up', transform=plt.gca().transAxes, ha='right')
plt.legend(bbox_to_anchor=(1, 0.85), frameon=False, loc='upper left', handlelength=0.3, markerscale=1.5)
for pos in ['top', 'bottom', 'right', 'left']:
plt.gca().spines[pos].set_edgecolor('orange')
plt.gca().set_aspect('equal')
plt.xlim(-50,-25)
plt.ylim(-5,5)
plt.setp(plt.gca().get_xticklabels(), visible=False)
plt.setp(plt.gca().get_yticklabels(), visible=False)
plt.gca().tick_params(bottom=False, left=False, right=False, top=False);
plt.savefig('../plots/prop_fig1.pdf')
ts = Table.read('../data/gd1_4_vels.tab', format='ascii.commented_header', delimiter='\t')
# ts = Table.read('../data/gd1_both.tab', format='ascii.commented_header', delimiter='\t')
vbins = np.arange(-200,200,10)
fig, ax = plt.subplots(1,3,figsize=(15,5))
plt.sca(ax[0])
plt.plot(gd1_c.pm_phi1_cosphi2[phi_mask_stream].to(u.mas/u.yr),
gd1_c.pm_phi2[phi_mask_stream].to(u.mas/u.yr),
'ko', ms=0.5, alpha=0.1, rasterized=True)
rect_xy = [pm1_bmin.to(u.mas/u.yr).value, pm2_bmin.to(u.mas/u.yr).value]
rect_w = pm1_bmax.to(u.mas/u.yr).value - pm1_bmin.to(u.mas/u.yr).value
rect_h = pm2_bmax.to(u.mas/u.yr).value - pm2_bmin.to(u.mas/u.yr).value
pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='k', alpha=0.1)
plt.gca().add_artist(pr)
rect_xy = [pm1_min.to(u.mas/u.yr).value, pm2_min.to(u.mas/u.yr).value]
rect_w = pm1_max.to(u.mas/u.yr).value - pm1_min.to(u.mas/u.yr).value
rect_h = pm2_max.to(u.mas/u.yr).value - pm2_min.to(u.mas/u.yr).value
pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='w', alpha=1)
plt.gca().add_artist(pr)
pr = mpl.patches.Rectangle(rect_xy, rect_w, rect_h, color='tab:blue', alpha=0.5)
plt.gca().add_artist(pr)
plt.xlim(-12,12)
plt.ylim(-12,12)
plt.xlabel('$\mu_{\phi_1}$ [mas yr$^{-1}$]')
plt.ylabel('$\mu_{\phi_2}$ [mas yr$^{-1}$]')
plt.sca(ax[1])
plt.plot(g.g[phi_mask_stream & pm_mask] - g.i[phi_mask_stream & pm_mask], g.g[phi_mask_stream & pm_mask],
'ko', ms=2, alpha=0.5, rasterized=True, label='')
# plt.plot(i_gi, i_g, 'r-')
# pml = mpl.patches.Polygon(poly_low, color='moccasin', alpha=0.4, zorder=2)
# plt.gca().add_artist(pml)
# pmm = mpl.patches.Polygon(poly_med, color='orange', alpha=0.3, zorder=2)
# plt.gca().add_artist(pmm)
pmh = mpl.patches.Polygon(poly_high, color='tab:blue', alpha=0.5, zorder=2)
plt.gca().add_artist(pmh)
plt.gca().set_facecolor('0.95')
plt.xlim(-0.2, 1.8)
plt.ylim(21, 13)
plt.xlabel('g - i [mag]')
plt.ylabel('g [mag]')
plt.sca(ax[2])
plt.hist(ts['VELOCITY'][ts['rank']==1], bins=vbins, alpha=0.5, color='tab:blue', label='Priority 1')
plt.hist(ts['VELOCITY'][ts['rank']==5], bins=vbins, alpha=0.1, histtype='stepfilled', color='k', label='Priority 5')
plt.legend(fontsize='small')
plt.xlabel('Radial velocity [km s$^{-1}$]')
plt.ylabel('Number')
plt.tight_layout()
plt.savefig('../plots/prop_fig3.pdf')
```
## Target list
```
# check total number of stars per field
r_fov = 0.5*u.deg
mag_mask = g.g<20.5*u.mag
guide = (g.g>13*u.mag) & (g.g<15*u.mag)
for i in range(Nf):
infield = (gd1_c.phi1.wrap_at(wangle) - targets['phi1'][i])**2 + (gd1_c.phi2 - targets['phi2'][i])**2 < r_fov**2
print(i, np.sum(infield & pm_broad_mask & mag_mask),
np.sum(infield & pm_mask & mag_mask), np.sum(infield & guide))
# plt.plot(g.g[infield]-g.i[infield],g.g[infield], 'k.')
plt.plot(g.pmra[infield],g.pmdec[infield], 'k.')
# plt.xlim(-1,3)
# plt.ylim(22,12)
# find ra, dec corners for querying for guide stars
cornersgd1 = astropy.coordinates.SkyCoord(phi1=np.array([-45,-45,-25,-25])*u.deg,
phi2=np.array([-3,3,3,-3])*u.deg, frame=gc.GD1)
corners = cornersgd1.icrs
query ='''SELECT * FROM gaiadr2.gaia_source
WHERE phot_g_mean_mag < 16 AND phot_g_mean_mag > 13 AND
CONTAINS(POINT('ICRS', ra, dec),
POLYGON('ICRS',
{0.ra.degree}, {0.dec.degree},
{1.ra.degree}, {1.dec.degree},
{2.ra.degree}, {2.dec.degree},
{3.ra.degree}, {3.dec.degree})) = 1
'''.format(corners[0], corners[1], corners[2], corners[3])
print(query)
spatial_mask = ((gd1_c.phi1.wrap_at(wangle)<-25*u.deg) & (gd1_c.phi1.wrap_at(wangle)>-45*u.deg) &
(gd1_c.phi2<3*u.deg) & (gd1_c.phi2>-2*u.deg))
shape_mask = spatial_mask & mag_mask & pm_broad_mask
Nout = np.sum(shape_mask)
points = np.array([g.g[shape_mask] - g.i[shape_mask], g.g[shape_mask]]).T
pm_mask = ((gd1_c.pm_phi1_cosphi2[shape_mask] < pm1_max) & (gd1_c.pm_phi1_cosphi2[shape_mask] > pm1_min) &
(gd1_c.pm_phi2[shape_mask] < pm2_max) & (gd1_c.pm_phi2[shape_mask] > pm2_min))
path_med = mpl.path.Path(poly_med)
path_low = mpl.path.Path(poly_low)
path_high = mpl.path.Path(poly_high)
# guide = (g.g[shape_mask]>13*u.mag) & (g.g[shape_mask]<15*u.mag)
priority4 = pm_mask
priority3 = path_low.contains_points(points) & pm_mask
priority2 = path_med.contains_points(points) & pm_mask
priority1 = path_high.contains_points(points) & pm_mask
# set up output priorities
priority = np.zeros(Nout, dtype=np.int64) + 5
# priority[guide] = -1
priority[priority4] = 4
priority[priority3] = 3
priority[priority2] = 2
priority[priority1] = 1
ttype = np.empty(Nout, dtype='S10')
nontarget = priority>-1
ttype[~nontarget] = 'guide'
ttype[nontarget] = 'target'
name = np.arange(Nout)
ara = coord.Angle(t['ra'][shape_mask]*u.deg)
adec = coord.Angle(t['dec'][shape_mask]*u.deg)
ra = ara.to_string(unit=u.hour, sep=':', precision=2)
dec = adec.to_string(unit=u.degree, sep=':', precision=2)
tcatalog = Table(np.array([ra, dec, name, priority, ttype, g.g[shape_mask]]).T,
names=('ra', 'dec', 'object', 'rank', 'type', 'mag'), masked=True)
tcatalog['rank'].mask = ~nontarget
tguide = Table.read('../data/guides.fits.gz')
plt.plot(tguide['ra'], tguide['dec'],'k.')
# add guides
Nguide = len(tguide)
name_guides = np.arange(Nout, Nout+Nguide)
priority_guides = np.zeros(Nguide, dtype='int') - 1
nontarget_guides = priority_guides==-1
ttype_guides = np.empty(Nguide, dtype='S10')
ttype_guides[nontarget_guides] = 'guide'
ara_guides = coord.Angle(tguide['ra'])
adec_guides = coord.Angle(tguide['dec'])
ra_guides = ara_guides.to_string(unit=u.hour, sep=':', precision=2)
dec_guides = adec_guides.to_string(unit=u.degree, sep=':', precision=2)
tguides_out = Table(np.array([ra_guides, dec_guides, name_guides, priority_guides,
ttype_guides, tguide['phot_g_mean_mag']]).T,
names=('ra', 'dec', 'object', 'rank', 'type', 'mag'), masked=True)
tguides_out['rank'].mask = ~nontarget_guides
tguides_out
tcatalog = astropy.table.vstack([tcatalog, tguides_out])
tcatalog
tcatalog.write('../data/gd1_catalog.cat', format='ascii.fixed_width_two_line',
fill_values=[(astropy.io.ascii.masked, '')], delimiter='\t', overwrite=True)
# output cutout of the whole input catalog
shape_mask_arr = np.array(shape_mask)
tcat_input = t[shape_mask_arr]
tcat_input['name'] = name
tcat_input['priority'] = priority
tcat_input['type'] = ttype
tcat_input.write('../data/gd1_input_catalog.fits', overwrite=True)
```
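The priority assignment above starts every star at priority 5 and then overwrites with progressively more selective masks, so ordering matters: the priority-1 assignment must come last to win. A minimal pure-Python sketch of that overwrite pattern (the toy masks below are illustrative, not the real selection):

```python
def assign_priorities(masks):
    """masks: list of (priority_value, boolean_mask) pairs, least selective first.

    Later assignments overwrite earlier ones, so the most selective
    (highest-priority) mask must come last in the list.
    """
    n = len(masks[0][1])
    priority = [5] * n            # default: lowest priority
    for value, mask in masks:
        for i, selected in enumerate(mask):
            if selected:
                priority[i] = value
    return priority

pm_mask   = [True, True, True, False]   # broad proper-motion cut
high_mask = [True, False, False, False] # narrow CMD cut, subset of pm_mask
print(assign_priorities([(4, pm_mask), (1, high_mask)]))  # -> [1, 4, 4, 5]
```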
```
import csv
import matplotlib
import matplotlib.pyplot as plt
auth_csv_path = "./auth_endpoint_values.csv"
service_csv_path = "./service_endpoint_values.csv"
def convert_cpu_to_dict(file_path):
    data = []
    with open(file_path) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for idx, row in enumerate(csv_reader):
            if idx == 0:
                continue  # skip the header row
            data.append({'workers': row[0], 'cpu_utils': row[1]})
    return data
def convert_resp_to_dict(file_path):
    data = []
    with open(file_path) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for idx, row in enumerate(csv_reader):
            if idx == 0:
                continue  # skip the header row
            data.append({'workers': row[0], 'response_time': row[3]})
    return data
def convert_resp_95_to_dict(file_path):
    data = []
    with open(file_path) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        for idx, row in enumerate(csv_reader):
            if idx == 0:
                continue  # skip the header row
            data.append({'workers': row[0], 'p95_response_time': row[4]})
    return data
auth_service_values = convert_cpu_to_dict(auth_csv_path)
service_endpoint_values = convert_cpu_to_dict(service_csv_path)
workers = [int(x['workers']) for x in auth_service_values]
auth_cpu_utils = [float(x['cpu_utils']) / 4 for x in auth_service_values]
service_cpu_utils = [float(x['cpu_utils']) / 4 for x in service_endpoint_values]
total_cpu = [x + y for x, y in zip(auth_cpu_utils, service_cpu_utils)]
plt.rc('font', size=14)
fig, axs = plt.subplots()
axs.set_ylim([0, 10])
axs.set_facecolor('#fcfcfc')
axs.set_xlabel('Parallel virtual users')
axs.set_ylabel(r'CPU utilization as % $\it{(4~cores)}$')
axs.plot(workers, total_cpu, 'r', label='total CPU usage', marker='d', markersize=7)
axs.plot(workers, service_cpu_utils, linestyle='dotted', label='service-endpoint', marker='o', markersize=5)
axs.plot(workers, auth_cpu_utils, 'g--', label='auth-service', marker='x', mec='k', markersize=5)
axs.legend()
axs.grid(axis='both', color='#7D7D7D', linestyle='-', linewidth=0.5)
plt.savefig("auth_token_cpu_util.pdf")
plt.show()
service_endpoint_resp_values = convert_resp_to_dict(service_csv_path)
service_endpoint_resp_values = [float(x['response_time']) for x in service_endpoint_resp_values]
service_endpoint_95_resp_values = convert_resp_95_to_dict(service_csv_path)
service_endpoint_95_resp_values = [float(x['p95_response_time']) for x in service_endpoint_95_resp_values]
#plt.rc('font', size=20) # controls default text sizes
fig, axs = plt.subplots()
axs.set_ylim([60, 180])
axs.set_facecolor('#fcfcfc')
axs.grid(axis='both', color='#7D7D7D', linestyle='-', linewidth=0.5, zorder=0)
axs.set_xlabel('Parallel virtual users')
axs.set_ylabel('Response time (ms)')
#p1 = axs.bar(workers, service_endpoint_resp_values, 3, zorder=3, alpha=0.9)
axs.plot(workers, service_endpoint_resp_values, label='avg response time', marker='x', markersize=7)
axs.plot(workers, service_endpoint_95_resp_values, 'r', linestyle='dotted',label='p(95) response time', marker='o', markersize=5)
axs.legend(loc='upper left')
plt.savefig("auth_token_response_time.pdf")
plt.show()
```
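The three converter functions above differ only in which column they extract; a single stdlib-only helper could replace all of them (the `'workers'` key and the column indices mirror the code above):

```python
import csv

def read_metric(file_path, col_idx, key):
    """Read one CSV column into [{'workers': ..., key: ...}, ...], skipping the header."""
    data = []
    with open(file_path, newline='') as f:
        reader = csv.reader(f, delimiter=',')
        next(reader)  # skip the header row
        for row in reader:
            data.append({'workers': row[0], key: row[col_idx]})
    return data
```

For example, `read_metric(service_csv_path, 4, 'p95_response_time')` would do the work of `convert_resp_95_to_dict`.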
<a href="https://bmi.readthedocs.io"><img src="https://raw.githubusercontent.com/csdms/espin/main/media/bmi-logo-header-text.png"></a>
# Run the `Heat` model through its BMI
`Heat` models the diffusion of temperature on a uniform rectangular plate with Dirichlet boundary conditions. This is the canonical example used in the [bmi-example-python](https://github.com/csdms/bmi-example-python) repository. View the source code for the [model](https://github.com/csdms/bmi-example-python/blob/master/heat/heat.py) and its [BMI](https://github.com/csdms/bmi-example-python/blob/master/heat/bmi_heat.py) on GitHub.
Start by importing `os`, `numpy` and the `Heat` BMI:
```
import os
import numpy as np
from heat import BmiHeat
```
Create an instance of the model's BMI.
```
x = BmiHeat()
```
What's the name of this model?
```
print(x.get_component_name())
```
Start the `Heat` model through its BMI using a configuration file:
```
cat heat.yaml
x.initialize('heat.yaml')
```
Check the time information for the model.
```
print('Start time:', x.get_start_time())
print('End time:', x.get_end_time())
print('Current time:', x.get_current_time())
print('Time step:', x.get_time_step())
print('Time units:', x.get_time_units())
```
Show the input and output variables for the component (aside on [Standard Names](https://csdms.colorado.edu/wiki/CSDMS_Standard_Names)):
```
print(x.get_input_var_names())
print(x.get_output_var_names())
```
Next, get the identifier for the grid on which the temperature variable is defined:
```
grid_id = x.get_var_grid('plate_surface__temperature')
print('Grid id:', grid_id)
```
Then get the grid attributes:
```
print('Grid type:', x.get_grid_type(grid_id))
rank = x.get_grid_rank(grid_id)
print('Grid rank:', rank)
shape = np.ndarray(rank, dtype=int)
x.get_grid_shape(grid_id, shape)
print('Grid shape:', shape)
spacing = np.ndarray(rank, dtype=float)
x.get_grid_spacing(grid_id, spacing)
print('Grid spacing:', spacing)
```
These commands are made somewhat un-Pythonic by the generic design of the BMI.
Through the model's BMI, zero out the initial temperature field, except for an impulse near the middle.
Note that *set_value* expects a one-dimensional array for input.
```
temperature = np.zeros(shape)
temperature[3, 4] = 100.0
x.set_value('plate_surface__temperature', temperature)
```
Check that the temperature field has been updated. Note that *get_value* expects a one-dimensional array to receive output.
```
temperature_flat = np.empty_like(temperature).flatten()
x.get_value('plate_surface__temperature', temperature_flat)
print(temperature_flat.reshape(shape))
```
Now advance the model by a single time step:
```
x.update()
```
View the new state of the temperature field:
```
x.get_value('plate_surface__temperature', temperature_flat)
print(temperature_flat.reshape(shape))
```
There's diffusion!
Advance the model to some distant time:
```
distant_time = 2.0
while x.get_current_time() < distant_time:
x.update()
```
View the final state of the temperature field:
```
np.set_printoptions(formatter={'float': '{: 5.1f}'.format})
x.get_value('plate_surface__temperature', temperature_flat)
print(temperature_flat.reshape(shape))
```
Note that temperature isn't conserved on the plate:
```
print(temperature_flat.sum())
```
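The loss happens because the Dirichlet boundary cells are held fixed at zero while interior cells diffuse into them. A minimal pure-Python sketch of one explicit (FTCS) diffusion step, not the Heat model's actual implementation, makes the leak visible once heat sits next to the boundary:

```python
def diffuse_step(T, alpha=0.25):
    """One explicit diffusion step; boundary cells stay fixed at their values (Dirichlet)."""
    rows, cols = len(T), len(T[0])
    new = [row[:] for row in T]       # boundary rows/columns are copied, never updated
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            new[i][j] = T[i][j] + alpha * (
                T[i - 1][j] + T[i + 1][j] + T[i][j - 1] + T[i][j + 1] - 4 * T[i][j]
            )
    return new

T = [[0.0] * 7 for _ in range(7)]
T[1][3] = 100.0          # impulse right next to the boundary row
T = diffuse_step(T)
print(sum(map(sum, T)))  # 75.0: a quarter of the heat flowed into the fixed zero boundary
```

An impulse in the deep interior is conserved for the first few steps; conservation only breaks once heat reaches the fixed boundary cells.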
End the model:
```
x.finalize()
```
<a href="https://colab.research.google.com/github/iamsoroush/DeepEEGAbstractor/blob/master/cv_rnr_8s_proposed_gap.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title # Clone the repository and upgrade Keras {display-mode: "form"}
!git clone https://github.com/iamsoroush/DeepEEGAbstractor.git
!pip install --upgrade keras
#@title # Imports {display-mode: "form"}
import os
import pickle
import sys
sys.path.append('DeepEEGAbstractor')
import numpy as np
from src.helpers import CrossValidator
from src.models import SpatioTemporalWFB, TemporalWFB, TemporalDFB, SpatioTemporalDFB
from src.dataset import DataLoader, Splitter, FixedLenGenerator
from google.colab import drive
drive.mount('/content/gdrive')
#@title # Set data path {display-mode: "form"}
#@markdown ---
#@markdown Type in the folder in your google drive that contains numpy _data_ folder:
parent_dir = 'soroush'#@param {type:"string"}
gdrive_path = os.path.abspath(os.path.join('gdrive/My Drive', parent_dir))
data_dir = os.path.join(gdrive_path, 'data')
cv_results_dir = os.path.join(gdrive_path, 'cross_validation')
if not os.path.exists(cv_results_dir):
os.mkdir(cv_results_dir)
print('Data directory: ', data_dir)
print('Cross validation results dir: ', cv_results_dir)
#@title ## Set Parameters
batch_size = 80
epochs = 50
k = 10
t = 10
instance_duration = 8 #@param {type:"slider", min:3, max:10, step:0.5}
instance_overlap = 2 #@param {type:"slider", min:0, max:3, step:0.5}
sampling_rate = 256 #@param {type:"number"}
n_channels = 20 #@param {type:"number"}
task = 'rnr'
data_mode = 'cross_subject'
#@title ## Spatio-Temporal WFB
model_name = 'ST-WFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalWFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal WFB
model_name = 'T-WFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalWFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Spatio-Temporal DFB
model_name = 'ST-DFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalDFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Spatio-Temporal DFB (Normalized Kernels)
model_name = 'ST-DFB-NK-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = SpatioTemporalDFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4,
normalize_kernels=True)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal DFB
model_name = 'T-DFB-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalDFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4)
scores = validator.do_cv(model_obj,
data,
labels)
#@title ## Temporal DFB (Normalized Kernels)
model_name = 'T-DFB-NK-GAP'
train_generator = FixedLenGenerator(batch_size=batch_size,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=True)
test_generator = FixedLenGenerator(batch_size=8,
duration=instance_duration,
overlap=instance_overlap,
sampling_rate=sampling_rate,
is_train=False)
params = {'task': task,
'data_mode': data_mode,
'main_res_dir': cv_results_dir,
'model_name': model_name,
'epochs': epochs,
'train_generator': train_generator,
'test_generator': test_generator,
't': t,
'k': k,
'channel_drop': True}
validator = CrossValidator(**params)
dataloader = DataLoader(data_dir,
task,
data_mode,
sampling_rate,
instance_duration,
instance_overlap)
data, labels = dataloader.load_data()
input_shape = (sampling_rate * instance_duration,
n_channels)
model_obj = TemporalDFB(input_shape,
model_name=model_name,
spatial_dropout_rate=0.2,
dropout_rate=0.4,
normalize_kernels=True)
scores = validator.do_cv(model_obj,
data,
labels)
```
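Each of the six cells above repeats the same generator, validator, and dataloader setup, changing only the model class, name, and keyword arguments. A generic runner like the following sketch would remove that duplication (`run_one` is a hypothetical callable standing in for the shared cell body: build the generators, `CrossValidator`, `DataLoader`, and model, then call `do_cv`):

```python
def run_experiments(experiments, run_one):
    """Run the shared cross-validation pipeline once per model variant.

    experiments: iterable of (model_name, model_factory, extra_kwargs) tuples.
    run_one: callable implementing the per-model body repeated in the cells above.
    """
    all_scores = {}
    for model_name, model_factory, extra_kwargs in experiments:
        all_scores[model_name] = run_one(model_name, model_factory, extra_kwargs)
    return all_scores
```

With the imports above, the experiment list would look like `[('ST-WFB-GAP', SpatioTemporalWFB, {}), ('T-DFB-NK-GAP', TemporalDFB, {'normalize_kernels': True}), ...]`.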
<a href="https://colab.research.google.com/github/Yoshibansal/ML-practical/blob/main/Cat_vs_Dog_Part-1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Cat vs Dog (Binary class classification)
Using `ImageDataGenerator`, and understanding overfitting.
Download the dataset:
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
#importing libraries
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
#unzip
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
INPUT_SHAPE = (150, 150)
MODEL_INPUT_SHAPE = INPUT_SHAPE + (3,)
#HYPERPARAMETERS
LEARNING_RATE = 1e-4
BATCH_SIZE = 20
EPOCHS = 50
#model architecture
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape = MODEL_INPUT_SHAPE),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=LEARNING_RATE),
metrics=['accuracy'])
#summary of model (including type of layer, Ouput shape and number of parameters)
model.summary()
#plotting model and saving it architecture picture
dot_img_file = '/tmp/model_1.png'
tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=INPUT_SHAPE, # All images will be resized to 150x150
batch_size=BATCH_SIZE,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=INPUT_SHAPE,
batch_size=BATCH_SIZE,
class_mode='binary')
#Fitting data into model -> training model
history = model.fit(
train_generator,
steps_per_epoch=100, # steps = 2000 images / batch_size
epochs=EPOCHS,
validation_data=validation_generator,
validation_steps=50, # steps = 1000 images / batch_size
verbose=1)
#PLOTTING model performance
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'ro', label='Training Loss')
plt.plot(epochs, val_loss, 'r', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
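The `steps_per_epoch=100` and `validation_steps=50` values above follow from the dataset sizes noted in the comments (2,000 training and 1,000 validation images) divided by the batch size of 20; as a quick sketch:

```python
import math

def steps_for(num_images, batch_size):
    # one epoch should cover every image once; round up for a partial final batch
    return math.ceil(num_images / batch_size)

print(steps_for(2000, 20), steps_for(1000, 20))  # -> 100 50
```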
The training accuracy is close to 100%, while the validation accuracy sits in the 70%-80% range. This is a classic example of overfitting: the model does very well on images it has seen before, but not so well on images it hasn't.
Next, we will see how to do better and avoid overfitting; one simple method is to **augment** the images a bit.
# Part I. ETL Pipeline for Pre-Processing the Files
## PLEASE RUN THE FOLLOWING CODE FOR PRE-PROCESSING THE FILES
#### Import Python packages
```
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
```
#### Creating list of filepaths to process original event csv data files
```
# checking your current working directory
print(os.getcwd())
# Get current folder and subfolder event data
filepath = os.getcwd() + '/event_data'
# Create a list of files and collect each full filepath
file_path_list = []
for root, dirs, files in os.walk(filepath):
    for f in files:
        # join the root directory with the filename so the path is correct
        # (os.path.abspath(f) alone would resolve against the current working directory)
        file_path_list.append(os.path.join(root, f))
# get total number of files found
num_files = len(file_path_list)
print('{} files found in {}\n'.format(num_files, filepath))
print(file_path_list)
```
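An equivalent, arguably cleaner recursive listing can be written with the stdlib `pathlib` module:

```python
from pathlib import Path

def list_event_files(root):
    """Recursively collect every file under root as an absolute path string."""
    return sorted(str(p.resolve()) for p in Path(root).rglob('*') if p.is_file())
```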
#### Processing the files to create the data file csv that will be used for Apache Casssandra tables
```
# initiating an empty list of rows that will be generated from each file
full_data_rows_list = []
# for every filepath in the file path list
for f in file_path_list:
# reading csv file
with open(f, 'r', encoding = 'utf8', newline='') as csvfile:
# creating a csv reader object
csvreader = csv.reader(csvfile)
next(csvreader)
# extracting each data row one by one and append it
for line in csvreader:
#print(line)
full_data_rows_list.append(line)
# uncomment the code below if you would like to get total number of rows
#print(len(full_data_rows_list))
# uncomment the code below if you would like to check to see what the list of event data rows will look like
#print(full_data_rows_list)
# creating a smaller event data csv file called event_datafile_full csv that will be used to insert data into the \
# Apache Cassandra tables
csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True)
with open('event_datafile_new.csv', 'w', encoding = 'utf8', newline='') as f:
writer = csv.writer(f, dialect='myDialect')
writer.writerow(['artist','firstName','gender','itemInSession','lastName','length',\
'level','location','sessionId','song','userId'])
for row in full_data_rows_list:
if (row[0] == ''):
continue
writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16]))
# check the number of rows in your csv file
with open('event_datafile_new.csv', 'r', encoding = 'utf8') as f:
print(sum(1 for line in f))
```
# Part II. Complete the Apache Cassandra coding portion of your project.
## Now you are ready to work with the CSV file titled <font color=red>event_datafile_new.csv</font>, located within the Workspace directory. The event_datafile_new.csv contains the following columns:
- artist
- firstName of user
- gender of user
- item number in session
- last name of user
- length of the song
- level (paid or free song)
- location of the user
- sessionId
- song title
- userId
The image below is a screenshot of what the denormalized data should appear like in the <font color=red>**event_datafile_new.csv**</font> after the code above is run:<br>
<img src="images/image_event_datafile_new.jpg">
## Begin writing your Apache Cassandra code in the cells below
#### Creating a Cluster
```
# This should make a connection to a Cassandra instance your local machine
# (127.0.0.1)
from cassandra.cluster import Cluster
try:
# Connect to local Apache Cassandra instance
cluster = Cluster(['127.0.0.1'])
    # Set session to connect and execute queries.
session = cluster.connect()
except Exception as e:
print(e)
```
#### Create Keyspace
```
# Create a Keyspace
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS sparkifydb
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
```
#### Set Keyspace
```
# Set KEYSPACE
try:
session.set_keyspace('sparkifydb')
except Exception as e:
print(e)
```
### Now we need to create tables to run the following queries. Remember, with Apache Cassandra you model the database tables on the queries you want to run.
## Create queries to ask the following three questions of the data
### 1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4
### 2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182
### 3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own'
### Query-1
```
## Query 1: Give me the artist, song title and song's length in the music app history that was heard during \
## sessionId = 338, and itemInSession = 4
# CREATE TABLE:
# This CQL query creates song_in_session table which contains the following columns (with data type):
# * session_id INT,
# * item_in_session INT,
# * artist TEXT,
# * song TEXT,
# * length FLOAT
#
# To uniquely identify each row and allow efficient distribution in the Cassandra cluster,
# * session_id column: is used as the Partition Key,
# * item_in_session column: is used as a Clustering Column.
query = "CREATE TABLE IF NOT EXISTS song_in_session "
query = query + "(session_id int, item_in_session int, artist text, song text, length float, \
PRIMARY KEY(session_id, item_in_session))"
try:
session.execute(query)
except Exception as e:
print(e)
# INSERT data
# Set new file name.
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader) # skip header
for line in csvreader:
# Assign the INSERT statements into the `query` variable
query = "INSERT INTO song_in_session (session_id, item_in_session, artist, song, length)"
query = query + " VALUES (%s, %s, %s, %s, %s)"
## Assign column elements in the INSERT statement.
session.execute(query, (int(line[8]), int(line[3]), line[0], line[9], float(line[5])))
```
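Rebuilding the INSERT string on every loop iteration works but is wasteful. The DataStax Python driver also supports prepared statements, which are parsed by the cluster once and bound per row. A sketch (reusing the same `session` object and CSV column layout as above; the function name is illustrative):

```python
import csv

def insert_song_sessions(session, csv_path='event_datafile_new.csv'):
    # prepare once; '?' placeholders are bound per execution
    prepared = session.prepare(
        "INSERT INTO song_in_session (session_id, item_in_session, artist, song, length) "
        "VALUES (?, ?, ?, ?, ?)"
    )
    with open(csv_path, encoding='utf8') as f:
        reader = csv.reader(f)
        next(reader)  # skip header
        for line in reader:
            session.execute(prepared, (int(line[8]), int(line[3]), line[0], line[9], float(line[5])))
```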
#### Do a SELECT to verify that the data have been inserted into each table
```
# SELECT statement:
# To answer Query-1, this CQL query
# * matches session_id (=338) and item_in_session (=4) to
# * return artist, song and length from song_in_session table.
query = "SELECT artist, song, length \
FROM song_in_session \
WHERE session_id = 338 AND \
item_in_session = 4"
try:
songs = session.execute(query)
except Exception as e:
print(e)
for row in songs:
print (row.artist, row.song, row.length)
```
### COPY AND REPEAT THE ABOVE THREE CELLS FOR EACH OF THE THREE QUESTIONS
### Query-2
```
## Query 2: Give me only the following: name of artist, song (sorted by itemInSession) and
# user (first and last name) for userid = 10, sessionid = 182
# CREATE TABLE
# This CQL query creates artist_in_session table which contains the following columns (with data type):
# * user_id INT,
# * session_id INT,
# * artist TEXT,
# * song TEXT,
# * item_in_session INT,
# * first_name TEXT,
# * last_name TEXT,
#
# To uniquely identify each row and allow efficient distribution in Cassandra cluster,
# * user_id and session_id columns: are used as Composite Partition Key in table's Primary Key.
# * item_in_session column: is used as Clustering Key in table's Primary Key and allows sorting order of the data.
query = "CREATE TABLE IF NOT EXISTS artist_in_session "
query = query + "( user_id int, \
session_id int, \
artist text, \
song text, \
item_in_session int, \
first_name text, \
last_name text, \
PRIMARY KEY((user_id, session_id), item_in_session))"
try:
session.execute(query)
except Exception as e:
print(e)
# INSERT data
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader)
for line in csvreader:
query = "INSERT INTO artist_in_session (user_id, \
session_id, \
artist, \
song, \
item_in_session, \
first_name, \
last_name)"
query = query + " VALUES (%s, %s, %s, %s, %s, %s, %s)"
session.execute(query, (int(line[10]), int(line[8]), line[0], line[9], int(line[3]), line[1], line[4]))
# SELECT statement:
# To answer Query-2, this CQL query
# * matches user_id (=10) and session_id (=182) to
# * return artist, song, first_name, and last_name (of user) from artist_in_session table.
query = "SELECT artist, song, first_name, last_name \
FROM artist_in_session \
WHERE user_id = 10 AND \
session_id = 182"
try:
artists = session.execute(query)
except Exception as e:
print(e)
for row in artists:
print (row.artist, row.song, row.first_name, row.last_name)
```
### Query-3
```
## Query 3: Give me every user name (first and last) in my music app history who listened
# to the song 'All Hands Against His Own'
# CREATE TABLE
# This CQL query creates the user_and_song table which contains the following columns (with data type):
# * song TEXT,
# * user_id INT,
# * first_name TEXT,
# * last_name TEXT
#
# To uniquely identify each row and allow efficient distribution in the Cassandra cluster,
# * song column: is used as the Partition Key,
# * user_id column: is used as a Clustering Column.
query = "CREATE TABLE IF NOT EXISTS user_and_song "
query = query + "( song text, \
user_id int, \
first_name text, \
last_name text, \
PRIMARY KEY(song, user_id))"
try:
session.execute(query)
except Exception as e:
print(e)
# INSERT data
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader)
for line in csvreader:
query = "INSERT INTO user_and_song (song, \
user_id, \
first_name, \
last_name)"
query = query + " VALUES (%s, %s, %s, %s)"
session.execute(query, (line[9], int(line[10]), line[1], line[4]))
# SELECT statement:
# To answer Query-3, this CQL query
# * matches song (=All Hands Against His Own) to
# * return first_name and last_name (of users) from user_and_song table.
query = "SELECT first_name, last_name \
FROM user_and_song \
WHERE song = 'All Hands Against His Own'"
try:
users = session.execute(query)
except Exception as e:
print(e)
for row in users:
print (row.first_name, row.last_name)
```
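The comments in the cells above describe how the partition key drives data distribution. As a rough, driver-free sketch of that idea (the node count and hash function here are made up for illustration; Cassandra actually uses Murmur3 hashing over a token ring), you can picture the routing like this:

```python
# Hypothetical sketch of partition-key routing -- not Cassandra's real
# Murmur3/token-ring logic, just an illustration of the idea.

NUM_NODES = 4  # made-up cluster size

def route(partition_key):
    """Map a (composite) partition key to one of NUM_NODES nodes."""
    return hash(partition_key) % NUM_NODES

# All rows sharing one (user_id, session_id) partition key land on the same
# node, so a WHERE user_id = ... AND session_id = ... query touches a single
# partition instead of scanning the cluster.
rows = [(10, 182, 0), (10, 182, 1), (10, 182, 2), (42, 7, 0)]
nodes = {route((u, s)) for (u, s, _) in rows[:3]}
print(len(nodes))  # the three rows of session (10, 182) share one node
```

This is also why the clustering key (`item_in_session` above) only orders rows *within* a partition: it never participates in the routing hash.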
### Drop the tables before closing out the sessions
```
## Drop the table before closing out the sessions
query = "DROP TABLE song_in_session"
try:
rows = session.execute(query)
except Exception as e:
print(e)
query2 = "DROP TABLE artist_in_session"
try:
rows = session.execute(query2)
except Exception as e:
print(e)
query3 = "DROP TABLE user_and_song"
try:
rows = session.execute(query3)
except Exception as e:
print(e)
```
### Close the session and cluster connection
```
session.shutdown()
cluster.shutdown()
```
| github_jupyter |
# MPLPPT
`mplppt` is a simple library made from some hacky scripts I used to use to convert matplotlib figures to powerpoint figures. Which makes this a hacky library, I guess.
## Goal
`mplppt` seeks to implement an alternative `savefig` function for `matplotlib` figures. This `savefig` function saves a `matplotlib` figure with a single axis to a powerpoint presentation with a single slide containing the figure.
## Installation
```bash
pip install mplppt
```
## Imports
```
import mplppt
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
```
## Supported Conversions
`mplppt` supports (partial) conversion of the following matplotlib objects:
* Lines [`matplotlib.lines.Line2D`]
* Rectangles [`matplotlib.patches.Rectangle`]
* Polygons [`matplotlib.patches.Polygon`]
* pcolormesh [`matplotlib.collections.QuadMesh`]
* text [`matplotlib.text.Text`]
So far `mplppt` does not (yet) support (among many other things):
* markers (including tick marks)
* linestyle
## Simple Example
An example of all the different conversions available in `mplppt`. Below we combine these objects into a single plot, which can then be exported to powerpoint:
```
# plot [Line2D]
x = np.linspace(-1,5)
y = np.sin(x)
plt.plot(x,y,color='C1')
# rectangle
plt.gca().add_patch(mpl.patches.Rectangle((0, 0), 3, 0.5))
# polygon
plt.gca().add_patch(mpl.patches.Polygon(np.array([[5.0,1.0],[4.0,-0.2],[2.0,0.6]]), color="red"))
# pcolormesh
x = np.linspace(0,1, 100)
y = np.linspace(0,1, 100)
X, Y = np.meshgrid(x,y)
Z = X**2 + Y**2
plt.pcolormesh(X,Y,Z)
# text
text = plt.text(0,0,'hello')
# set limits
plt.ylim(-0.5,1)
# Save figure to pptx
mplppt.savefig('first_example.pptx')
# show figure
plt.show()
```
Which results in a powerpoint slide which looks as follows:

## Cool! What else can I do with this?
You are not bound to using matplotlib! The `mplppt` repository contains some standard powerpoint shapes that you can use. Try something like:
```
ppt = mplppt.Group() # Create a new group of objects
ppt += mplppt.Rectangle(name='rect', x=0, y=0, cx=100, cy=100, slidesize=(10,5)) # add an object to the group
ppt.save('second_example.pptx') # export the group as a ppt slide
```
## Is any of this documented?
No.
## How does this work?
The repository contains a template folder, which is nothing more than an unzipped empty powerpoint presentation. After making a copy of the template folder and adding some `xml` code for the shapes, the modified folder is zipped into a `.pptx` file.
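That zip-the-template trick is easy to reproduce with the standard library alone. The sketch below builds a stand-in "template" folder and zips it under a `.pptx` name; the folder contents here are placeholders of my own, not a valid OOXML package:

```python
import os
import tempfile
import zipfile

# Build a stand-in template folder (a real one would hold the unzipped
# contents of an empty .pptx: [Content_Types].xml, ppt/slides/..., etc.)
tmp = tempfile.mkdtemp()
template = os.path.join(tmp, "template")
os.makedirs(os.path.join(template, "ppt"))
with open(os.path.join(template, "ppt", "slide1.xml"), "w") as f:
    f.write("<p:sld/>")  # placeholder shape XML

# Zip the folder back up under a .pptx name -- this is the whole trick.
pptx_path = os.path.join(tmp, "out.pptx")
with zipfile.ZipFile(pptx_path, "w", zipfile.ZIP_DEFLATED) as z:
    for root, _, files in os.walk(template):
        for name in files:
            full = os.path.join(root, name)
            z.write(full, os.path.relpath(full, template).replace(os.sep, "/"))

print(zipfile.is_zipfile(pptx_path))
```

PowerPoint is happy as long as the archive layout matches what it unzipped from; the archive names must use forward slashes, which is why the `relpath` result is normalized above.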
## Copyright
© Floris Laporte - MIT License
| github_jupyter |
```
import sys
import torch
import torch.nn as nn
import torch.nn.functional as F
# Releasing the GPU memory
torch.cuda.empty_cache()
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=dilation, groups=groups, bias=False, dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
base_width=64, dilation=1, norm_layer=None):
super(Bottleneck, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
width = int(planes * (base_width / 64.)) * groups
# Both self.conv2 and self.downsample layers downsample the input when stride != 1
self.conv1 = conv1x1(inplanes, width)
self.bn1 = norm_layer(width)
self.conv2 = conv3x3(width, width, stride, groups, dilation)
self.bn2 = norm_layer(width)
self.conv3 = conv1x1(width, planes * self.expansion)
self.bn3 = norm_layer(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
identity = self.downsample(x)
out += identity
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000, zero_init_residual=False,
groups=1, width_per_group=64, replace_stride_with_dilation=None,
norm_layer=None):
super(ResNet, self).__init__()
if norm_layer is None:
norm_layer = nn.BatchNorm2d
self._norm_layer = norm_layer
self.inplanes = 64
self.dilation = 1
if replace_stride_with_dilation is None:
# each element in the tuple indicates if we should replace
# the 2x2 stride with a dilated convolution instead
replace_stride_with_dilation = [False, False, False]
if len(replace_stride_with_dilation) != 3:
raise ValueError("replace_stride_with_dilation should be None "
"or a 3-element tuple, got {}".format(replace_stride_with_dilation))
self.groups = groups
self.base_width = width_per_group
self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
bias=False)
self.bn1 = norm_layer(self.inplanes)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
dilate=replace_stride_with_dilation[0])
self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
dilate=replace_stride_with_dilation[1])
self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
dilate=replace_stride_with_dilation[2])
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
norm_layer = self._norm_layer
downsample = None
previous_dilation = self.dilation
if dilate:
self.dilation *= stride
stride = 1
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
conv1x1(self.inplanes, planes * block.expansion, stride),
norm_layer(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
self.base_width, previous_dilation, norm_layer))
self.inplanes = planes * block.expansion
for _ in range(1, blocks):
layers.append(block(self.inplanes, planes, groups=self.groups,
base_width=self.base_width, dilation=self.dilation,
norm_layer=norm_layer))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.reshape(x.size(0), -1)
x = self.fc(x)
return x
class Net (nn.Module):
def __init__(self, num_class, freeze_conv=False, n_extra_info=0, p_dropout=0.5, neurons_class=256,
feat_reducer=None, classifier=None):
super(Net, self).__init__()
resnet = ResNet(Bottleneck, [3, 4, 6, 3])
self.features = nn.Sequential(*list(resnet.children())[:-1])
# freezing the convolution layers
if freeze_conv:
for param in self.features.parameters():
param.requires_grad = False
# Feature reducer
if feat_reducer is None:
self.feat_reducer = nn.Sequential(
nn.Linear(2048, neurons_class),
nn.BatchNorm1d(neurons_class),
nn.ReLU(),
nn.Dropout(p=p_dropout)
)
else:
self.feat_reducer = feat_reducer
# Here comes the extra information (if applicable)
if classifier is None:
self.classifier = nn.Linear(neurons_class + n_extra_info, num_class)
else:
self.classifier = classifier
self.collecting = False
def forward(self, img, extra_info=None):
x = self.features(img)
# Flatting
x = x.view(x.size(0), -1)
x = self.feat_reducer(x)
res = self.classifier(x)
return res
torch_model = Net(8)
ckpt = torch.load("checkpoints/resnet-50_checkpoint.pth")
torch_model.load_state_dict(ckpt['model_state_dict'])
torch_model.eval()
torch_model.cuda()
print("Done!")
```
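The channel bookkeeping in `Bottleneck` and `_make_layer` above is easy to get wrong, so here is a small torch-free sketch of the same arithmetic (the helper names are mine, not part of the model code):

```python
EXPANSION = 4  # Bottleneck.expansion

def bottleneck_out_channels(planes):
    # conv3 expands the width back out: planes * expansion
    return planes * EXPANSION

def needs_downsample(inplanes, planes, stride):
    # Mirrors the condition in _make_layer: the identity branch must be
    # projected whenever the spatial stride or channel count changes.
    return stride != 1 or inplanes != bottleneck_out_channels(planes)

# Walk the four ResNet-50 stages exactly as __init__/_make_layer do.
inplanes = 64
for planes, stride in [(64, 1), (128, 2), (256, 2), (512, 2)]:
    print(planes, needs_downsample(inplanes, planes, stride))
    inplanes = bottleneck_out_channels(planes)

print(inplanes)  # 2048, matching the Linear(2048, ...) in the feat_reducer
```

Note that even the first stage (stride 1) needs a downsample projection, because 64 input channels meet a 256-channel residual output.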
| github_jupyter |
<a href="https://colab.research.google.com/github/BachiLi/A-Tour-of-Computer-Animation/blob/main/A_Tour_of_Computer_Animation_Table_of_Contents.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**A Tour of Computer Animation** -- [Tzu-Mao Li](https://cseweb.ucsd.edu/~tzli/)
This is a note that records my journey into computer animation. The structure of this tour is inspired by the books ["Physically Based Rendering: From Theory To Implementation"](https://www.pbr-book.org/), ["Ray Tracing in One Weekend"](https://raytracing.github.io/books/RayTracingInOneWeekend.html), and ["Numerical Tours"](https://www.numerical-tours.com/). Most books and articles about computer animation and physics simulation are mathematics-centric and do not contain much code or many experiments. This note is an attempt to bridge that gap.
This is the hub to the chapters of this tour. I do not assume background in computer animation or graphics, but I do assume basic familiarity with calculus and linear algebra. There will be quite a bit of math -- sorry. The code is going to be written entirely in numpy and visualized with matplotlib, and it will be unoptimized. We will focus slightly more on the foundations than on immediately practical implementations, so it might take a while before we can render fancy animations. Don't be afraid to play with the code.
**Table of Contents**
1. [Newtonian Mechanics and Forward Euler Method](https://colab.research.google.com/drive/1K-Ly9vqZbymrAYe6Krg1ZfSMPY6CnAcY)
2. [Lagrangian Mechanics and Pendulums](https://colab.research.google.com/drive/1L4QJyq8hSlgllSYytYW5UHTPvd6w7Vz9)
3. [Time Integration and Stability](https://colab.research.google.com/drive/1mXTlYt2nRnXLrXpnP26BgjHKghjGPTCL?usp=sharing)
4. [Elastic Simulation and Mass Spring Systems](https://colab.research.google.com/drive/1erjL0a_KCVx8p3lDcE747k8wqbEaxYPY?usp=sharing)
5. Physics as Constraints Solving and Position-based Dynamics
Some useful textbooks and lectures (they are not prerequisite, but instead this note should be used as an accompanying material to these):
- [David Levin: CSC417 - Physics-based Animation](https://www.youtube.com/playlist?list=PLTkE7n2CwG_PH09_q0Q7ttjqE2F9yGeM3) (the structure of this tour takes huge inspirations from this course)
- [The Feynman Lectures on Physics](https://www.feynmanlectures.caltech.edu/)
- [Doug James: CS348C - Computer Graphics: Animation and Simulation](https://graphics.stanford.edu/courses/cs348c/)
- [Arnold: Mathematical Methods of Classical Mechanics](https://www.amazon.com/Mathematical-Classical-Mechanics-Graduate-Mathematics/dp/0387968903)
- [Witkin and Baraff: Physics-based Modeling](https://graphics.pixar.com/pbm2001/)
- [Bargteil and Shinar: An Introduction to Physics-based Animation](https://cal.cs.umbc.edu/Courses/PhysicsBasedAnimation/)
- [Åström and Akenine-Möller: Immersive Linear Algebra](http://immersivemath.com/ila/index.html)
The notes are still work in progress and probably contain a lot of errors. Please send email to me (tzli@ucsd.edu) if you have any suggestions and comments.
| github_jupyter |
```
import urllib2
from bs4 import BeautifulSoup
url = 'https://www.baidu.com/'
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content, 'html.parser')
soup
print(soup.prettify())
for tag in soup.find_all(True):
print(tag.name)
soup('head')# or soup.head
soup.body
soup.body.name
soup.meta.string
soup.find_all('noscript', content='0;url=http://www.baidu.com/')
soup.find_all('noscript')[0]
soup.find_all(["head","script"])
soup.get_text()
print(soup.get_text())
from IPython.display import display_html, HTML
HTML('<iframe src=http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX width=1000 height=500></iframe>')
# the webpage we would like to crawl
page_num = 0
url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num
content = urllib2.urlopen(url).read() # fetch the page's HTML text
soup = BeautifulSoup(content, "lxml")
articles = soup.find_all('tr')
print articles[0]
print articles[1]
len(articles[1:])
for t in articles[1].find_all('td'): print t
td = articles[1].find_all('td')
print td[0]
print(td[0].text)
print td[0].a['href']
print td[1]
print td[2]
print td[3]
print td[4]
records = []
for i in articles[1:]:
td = i.find_all('td')
title = td[0].text.strip()
title_url = td[0].a['href']
author = td[1].text
author_url = td[1].a['href']
views = td[2].text
replies = td[3].text
date = td[4]['title']
record = title + '\t' + title_url+ '\t' + author + '\t'+ author_url + '\t' + views+ '\t' + replies+ '\t'+ date
records.append(record)
print records[2]
def crawler(page_num, file_name):
try:
# open the browser
url = "http://bbs.tianya.cn/list.jsp?item=free&nextid=%d&order=8&k=PX" % page_num
content = urllib2.urlopen(url).read() # fetch the page's HTML text
soup = BeautifulSoup(content, "lxml")
articles = soup.find_all('tr')
# write down info
for i in articles[1:]:
td = i.find_all('td')
title = td[0].text.strip()
title_url = td[0].a['href']
author = td[1].text
author_url = td[1].a['href']
views = td[2].text
replies = td[3].text
date = td[4]['title']
record = title + '\t' + title_url+ '\t' + author + '\t'+ \
author_url + '\t' + views+ '\t' + replies+ '\t'+ date
with open(file_name,'a') as p: # Note: append mode, run only once!
p.write(record.encode('utf-8')+"\n") # encode to utf-8 here to avoid encoding errors
except Exception, e:
print e
pass
# crawl all pages
for page_num in range(10):
print (page_num)
crawler(page_num, 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_list.txt')
import pandas as pd
df = pd.read_csv('D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df[: 2]
len(df)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
len(df.link)
df.author_page[:5]
def author_crawler(url, file_name):
try:
content = urllib2.urlopen(url).read() # fetch the page's HTML text
soup = BeautifulSoup(content, "lxml")
link_info = soup.find_all('div', {'class', 'link-box'})
followed_num, fans_num = [i.a.text for i in link_info]
try:
activity = soup.find_all('span', {'class', 'subtitle'})
post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')]
except:
post_num, reply_num = 1, 0
record = '\t'.join([url, followed_num, fans_num, post_num, reply_num])
with open(file_name,'a') as p: # Note: append mode, run only once!
p.write(record.encode('utf-8')+"\n") # encode to utf-8 here to avoid encoding errors
except Exception, e:
print e, url
record = '\t'.join([url, 'na', 'na', 'na', 'na'])
with open(file_name,'a') as p: # Note: append mode, run only once!
p.write(record.encode('utf-8')+"\n") # encode to utf-8 here to avoid encoding errors
pass
for k, url in enumerate(df.author_page):
if k % 10==0:
print k
author_crawler(url, 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_author_info.txt')
url = df.author_page[1]
content = urllib2.urlopen(url).read() # fetch the page's HTML text
soup1 = BeautifulSoup(content, "lxml")
activity = soup1.find_all('span', {'class', 'subtitle'})
post_num, reply_num = [j.text[2:] for i in activity[:1] for j in i('a')]
print post_num, reply_num
print activity[0]
df.link[2]
url = 'http://bbs.tianya.cn' + df.link[2]
url
from IPython.display import display_html, HTML
HTML('<iframe src=http://bbs.tianya.cn/post-free-2848797-1.shtml width=1000 height=500></iframe>')
# the webpage we would like to crawl
post = urllib2.urlopen(url).read() # fetch the page's HTML text
post_soup = BeautifulSoup(post, "lxml")
#articles = soup.find_all('tr')
print (post_soup.prettify())[:1000]
pa = post_soup.find_all('div', {'class', 'atl-item'})
len(pa)
print pa[0]
print pa[1]
print pa[0].find('div', {'class', 'bbs-content'}).text.strip()
print pa[87].find('div', {'class', 'bbs-content'}).text.strip()
pa[1].a
print pa[0].find('a', class_ = 'reportme a-link')
print pa[0].find('a', class_ = 'reportme a-link')['replytime']
print pa[0].find('a', class_ = 'reportme a-link')['author']
for i in pa[:10]:
p_info = i.find('a', class_ = 'reportme a-link')
p_time = p_info['replytime']
p_author_id = p_info['authorid']
p_author_name = p_info['author']
p_content = i.find('div', {'class', 'bbs-content'}).text.strip()
p_content = p_content.replace('\t', '')
print p_time, '--->', p_author_id, '--->', p_author_name,'--->', p_content, '\n'
post_soup.find('div', {'class', 'atl-pages'})#['onsubmit']
post_pages = post_soup.find('div', {'class', 'atl-pages'})
post_pages = post_pages.form['onsubmit'].split(',')[-1].split(')')[0]
post_pages
url = 'http://bbs.tianya.cn' + df.link[2]
url_base = ''.join(url.split('-')[:-1]) + '-%d.shtml'
url_base
def parsePage(pa):
records = []
for i in pa:
p_info = i.find('a', class_ = 'reportme a-link')
p_time = p_info['replytime']
p_author_id = p_info['authorid']
p_author_name = p_info['author']
p_content = i.find('div', {'class', 'bbs-content'}).text.strip()
p_content = p_content.replace('\t', '').replace('\n', '')#.replace(' ', '')
record = p_time + '\t' + p_author_id+ '\t' + p_author_name + '\t'+ p_content
records.append(record)
return records
import sys
def flushPrint(s):
sys.stdout.write('\r')
sys.stdout.write('%s' % s)
sys.stdout.flush()
url_1 = 'http://bbs.tianya.cn' + df.link[10]
content = urllib2.urlopen(url_1).read() # fetch the page's HTML text
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
b = post_soup.find('div', class_= 'atl-pages')
b
url_1 = 'http://bbs.tianya.cn' + df.link[0]
content = urllib2.urlopen(url_1).read() # fetch the page's HTML text
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
a = post_soup.find('div', {'class', 'atl-pages'})
a
a.form
if b.form:
print 'true'
else:
print 'false'
import random
import time
def crawler(url, file_name):
try:
# open the browser
url_1 = 'http://bbs.tianya.cn' + url
content = urllib2.urlopen(url_1).read() # fetch the page's HTML text
post_soup = BeautifulSoup(content, "lxml")
# how many pages in a post
post_form = post_soup.find('div', {'class', 'atl-pages'})
if post_form.form:
post_pages = post_form.form['onsubmit'].split(',')[-1].split(')')[0]
post_pages = int(post_pages)
url_base = '-'.join(url_1.split('-')[:-1]) + '-%d.shtml'
else:
post_pages = 1
# for the first page
pa = post_soup.find_all('div', {'class', 'atl-item'})
records = parsePage(pa)
with open(file_name,'a') as p: # Note: append mode, run only once!
for record in records:
p.write('1'+ '\t' + url + '\t' + record.encode('utf-8')+"\n")
# for the 2nd+ pages
if post_pages > 1:
for page_num in range(2, post_pages+1):
time.sleep(random.random())
flushPrint(page_num)
url2 =url_base % page_num
content = urllib2.urlopen(url2).read() # fetch the page's HTML text
post_soup = BeautifulSoup(content, "lxml")
pa = post_soup.find_all('div', {'class', 'atl-item'})
records = parsePage(pa)
with open(file_name,'a') as p: # Note: append mode, run only once!
for record in records:
p.write(str(page_num) + '\t' +url + '\t' + record.encode('utf-8')+"\n")
else:
pass
except Exception, e:
print e
pass
url = 'http://bbs.tianya.cn' + df.link[2]
file_name = 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_test.txt'
crawler(url, file_name)
for k, link in enumerate(df.link):
flushPrint(link)
if k % 10== 0:
print 'This it the post of : ' + str(k)
file_name = 'D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_network.txt'
crawler(link, file_name)
dtt = []
with open('D:/GitHub/computational-communication-2016/shenliting/homework4/tianya_bbs_threads_network.txt', 'r') as f:
for line in f:
pnum, link, time, author_id, author, content = line.replace('\n', '').split('\t')
dtt.append([pnum, link, time, author_id, author, content])
len(dtt)
dt = pd.DataFrame(dtt)
dt[:5]
dt=dt.rename(columns = {0:'page_num', 1:'link', 2:'time', 3:'author',4:'author_name', 5:'reply'})
dt[:5]
dt.reply[:100]
18459/50
```
| github_jupyter |
# Approximate q-learning
In this notebook you will teach a __tensorflow__ neural network to do Q-learning.
__Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
```
import sys, os
if 'google.colab' in sys.modules:
%tensorflow_version 1.x
if not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
# Approximate (deep) Q-learning: building the network
To train a neural network policy one must have a neural network policy. Let's build it.
Since we're working with pre-extracted features (cart positions, angles and velocities), we don't need a complicated network yet. In fact, let's build something like this for starters:

For your first run, please only use linear layers (`L.Dense`) and activations. Stuff like batch normalization or dropout may ruin everything if used haphazardly.
Also please avoid using nonlinearities like sigmoid & tanh: since agent's observations are not normalized, sigmoids might be saturated at initialization. Instead, use non-saturating nonlinearities like ReLU.
Ideally you should start small with maybe 1-2 hidden layers with < 200 neurons and then increase network size if agent doesn't beat the target score.
```
import tensorflow as tf
import keras
import keras.layers as L
tf.reset_default_graph()
sess = tf.InteractiveSession()
keras.backend.set_session(sess)
assert not tf.test.is_gpu_available(), \
"Please complete this assignment without a GPU. If you use a GPU, the code " \
"will run a lot slower due to a lot of copying to and from GPU memory. " \
"To disable the GPU in Colab, go to Runtime → Change runtime type → None."
network = keras.models.Sequential()
network.add(L.InputLayer(state_dim))
<YOUR CODE: stack layers!!!1>
def get_action(state, epsilon=0):
"""
sample actions with epsilon-greedy policy
recap: with p = epsilon pick random action, else pick action with highest Q(s,a)
"""
q_values = network.predict(state[None])[0]
<YOUR CODE>
return <YOUR CODE: epsilon-greedily selected action>
assert network.output_shape == (None, n_actions), "please make sure your model maps state s -> [Q(s,a0), ..., Q(s, a_last)]"
assert network.layers[-1].activation == keras.activations.linear, "please make sure you predict q-values without nonlinearity"
# test epsilon-greedy exploration
s = env.reset()
assert np.shape(get_action(s)) == (), "please return just one action (integer)"
for eps in [0., 0.1, 0.5, 1.0]:
state_frequencies = np.bincount([get_action(s, epsilon=eps) for i in range(10000)], minlength=n_actions)
best_action = state_frequencies.argmax()
assert abs(state_frequencies[best_action] - 10000 * (1 - eps + eps / n_actions)) < 200
for other_action in range(n_actions):
if other_action != best_action:
assert abs(state_frequencies[other_action] - 10000 * (eps / n_actions)) < 200
print('e=%.1f tests passed'%eps)
```
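If you want a reference point before filling in the blank above, here is one way ε-greedy selection looks in plain numpy. This is my own standalone helper, independent of the network code; `q_values` is just an array of action values:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=np.random):
    """With probability epsilon pick a uniformly random action,
    otherwise pick the greedy one (argmax of Q)."""
    n = len(q_values)
    if rng.rand() < epsilon:
        return rng.randint(n)
    return int(np.argmax(q_values))

q = np.array([0.1, 0.5, -0.2])
greedy = epsilon_greedy(q, epsilon=0.0)  # epsilon=0: deterministic argmax
print(greedy)  # 1
```

The frequency test in the cell above checks exactly this behaviour: the greedy action should be chosen with probability `1 - eps + eps / n_actions`, and every other action with probability `eps / n_actions`.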
### Q-learning via gradient descent
We shall now train our agent's Q-function by minimizing the TD loss:
$$ L = { 1 \over N} \sum_i (Q_{\theta}(s,a) - [r(s,a) + \gamma \cdot max_{a'} Q_{-}(s', a')]) ^2 $$
Where
* $s, a, r, s'$ are current state, action, reward and next state respectively
* $\gamma$ is a discount factor defined two cells above.
The tricky part is with $Q_{-}(s',a')$. From an engineering standpoint, it's the same as $Q_{\theta}$ - the output of your neural network policy. However, when doing gradient descent, __we won't propagate gradients through it__ to make training more stable (see lectures).
To do so, we shall use the `tf.stop_gradient` function, which basically says "consider this thing constant when doing backprop".
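Before wiring this into tensorflow, the target computation itself is worth seeing in isolation. A numpy sketch of the bracketed term (with toy numbers of my own), including the terminal-state special case that the tensorflow code handles with `tf.where`:

```python
import numpy as np

gamma = 0.99

def td_targets(rewards, next_qvalues, is_done):
    """r + gamma * max_a' Q(s', a'), with the Q(s', .) term dropped on terminal steps."""
    next_state_values = next_qvalues.max(axis=1)   # V*(s') = max over actions
    targets = rewards + gamma * next_state_values  # the bracketed term in L
    return np.where(is_done, rewards, targets)     # terminal state: Q(s,a) = r(s,a)

rewards = np.array([1.0, 1.0])
next_q = np.array([[0.0, 2.0],
                   [5.0, 3.0]])
done = np.array([False, True])
print(td_targets(rewards, next_q, done))  # [1 + 0.99*2, 1] = [2.98, 1.0]
```

In the tensorflow version the same target is additionally wrapped in `tf.stop_gradient`, so the network only learns through the `Q(s, a)` term of the loss.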
```
# Create placeholders for the <s, a, r, s'> tuple and a special indicator for game end (is_done = True)
states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
actions_ph = keras.backend.placeholder(dtype='int32', shape=[None])
rewards_ph = keras.backend.placeholder(dtype='float32', shape=[None])
next_states_ph = keras.backend.placeholder(dtype='float32', shape=(None,) + state_dim)
is_done_ph = keras.backend.placeholder(dtype='bool', shape=[None])
#get q-values for all actions in current states
predicted_qvalues = network(states_ph)
#select q-values for chosen actions
predicted_qvalues_for_actions = tf.reduce_sum(predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)
gamma = 0.99
# compute q-values for all actions in next states
predicted_next_qvalues = <YOUR CODE: apply network to get q-values for next_states_ph>
# compute V*(next_states) using predicted next q-values
next_state_values = <YOUR CODE>
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
target_qvalues_for_actions = <YOUR CODE>
# at the last state we shall use simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
target_qvalues_for_actions = tf.where(is_done_ph, rewards_ph, target_qvalues_for_actions)
#mean squared error loss to minimize
loss = (predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues_for_actions)) ** 2
loss = tf.reduce_mean(loss)
# training function that resembles agent.update(state, action, reward, next_state) from tabular agent
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)
assert tf.gradients(loss, [predicted_qvalues_for_actions])[0] is not None, "make sure you update q-values for chosen actions and not just all actions"
assert tf.gradients(loss, [predicted_next_qvalues])[0] is None, "make sure you don't propagate gradient w.r.t. Q_(s',a')"
assert predicted_next_qvalues.shape.ndims == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.shape.ndims == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.shape.ndims == 1, "there's something wrong with target q-values, they must be a vector"
```
### Playing the game
```
sess.run(tf.global_variables_initializer())
def generate_session(env, t_max=1000, epsilon=0, train=False):
"""play env with approximate q-learning agent and train it at the same time"""
total_reward = 0
s = env.reset()
for t in range(t_max):
a = get_action(s, epsilon=epsilon)
next_s, r, done, _ = env.step(a)
if train:
sess.run(train_step,{
states_ph: [s], actions_ph: [a], rewards_ph: [r],
next_states_ph: [next_s], is_done_ph: [done]
})
total_reward += r
s = next_s
if done:
break
return total_reward
epsilon = 0.5
for i in range(1000):
session_rewards = [generate_session(env, epsilon=epsilon, train=True) for _ in range(100)]
print("epoch #{}\tmean reward = {:.3f}\tepsilon = {:.3f}".format(i, np.mean(session_rewards), epsilon))
epsilon *= 0.99
assert epsilon >= 1e-4, "Make sure epsilon is always nonzero during training"
if np.mean(session_rewards) > 300:
print("You Win!")
break
```
### How to interpret results
Welcome to the f.. world of deep f...n reinforcement learning. Don't expect the agent's reward to go up smoothly. Hope for it to increase eventually. If it deems you worthy.
Seriously though,
* __mean reward__ is the average reward per game. For a correct implementation it may stay low for some 10 epochs, then start growing while oscillating insanely, and converge by ~50-100 steps depending on the network architecture.
* If it never reaches target score by the end of for loop, try increasing the number of hidden neurons or look at the epsilon.
* __epsilon__ - the agent's willingness to explore. If you see that the agent's epsilon is already < 0.01 before the mean reward is at least 200, just reset it back to 0.1 - 0.5.
### Record videos
As usual, we now use `gym.wrappers.Monitor` to record a video of our agent playing the game. Unlike our previous attempts with state binarization, this time we expect our agent to act ~~(or fail)~~ more smoothly since there's no more binarization error at play.
As you already did with tabular q-learning, we set epsilon=0 for the final evaluation to prevent the agent from exploring itself to death.
```
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor, epsilon=0, train=False) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from IPython.display import HTML
video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(video_names[-1])) # You can also try other indices
```
| github_jupyter |
# Developing Advanced User Interfaces
*Using Jupyter Widgets, Pandas Dataframes and Matplotlib*
While BPTK-Py offers a number of high-level functions to quickly plot equations (such as `bptk.plot_scenarios`) or create a dashboard (e.g. `bptk.dashboard`), you may sometimes be in a situation when you want to create more sophisticated plots (e.g. plots with two axes) or a more sophisticated interface dashboard for your simulation.
This is actually quite easy, because BPTK-Py's high-level functions already utilize some very powerful open source libraries for data management, plotting and dashboards: Pandas, Matplotlib and Jupyter Widgets.
In order to harness the full power of these libraries, you only need to understand how to make the data generated by BPTK-Py available to them. This _How To_ illustrates this using a neat little simulation of customer acquisition strategies. You don't need to understand the simulation to follow this document, but if you are interested you can read more about it on our [blog](https://www.transentis.com/an-example-to-illustrate-the-business-prototyping-methodology/).
## Advanced Plotting
We'll start with some advanced plotting of simulation results.
```
## Load the BPTK Package
from BPTK_Py.bptk import bptk
bptk = bptk()
```
BPTK-Py's workhorse for creating plots is the `bptk.plot_scenarios` function. The function generates all the data you would like to plot using the simulation defined by the scenario manager and the settings defined by the scenarios. The data are stored in a Pandas dataframe. When it comes to plotting the results, the framework uses Matplotlib. To illustrate this, we will recreate the plot below directly from the underlying data:
```
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["base"],
equations=['customers'],
title="Base",
freq="M",
x_label="Time",
y_label="No. of Customers"
)
```
You can access the data generated by a scenario by saving it into a dataframe. You can do this by adding the `return_df` flag to `bptk.plot_scenarios`:
```
df=bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["base"],
equations=['customers'],
title="Base",
freq="M",
x_label="Time",
y_label="No. of Customers",
return_df=True
)
```
The dataframe is indexed by time and stores the equations (in SD models) or agent properties (in agent-based models) in the columns.
```
df[0:10] # just show the first ten items
```
The framework's `bptk.plot_scenarios` method first runs the simulation using the settings defined in the scenario and stores the data in a dataframe. It then plots the dataframe using Pandas' `df.plot` method.
We can do the same:
```
subplot=df.plot(None,"customers")
```
The plot above doesn't look quite as neat as the plots created by `bptk.plot_scenarios`; this is because the framework applies some styling information. The styling information is stored in `BPTK_Py.config`, and you can access (and modify) it there.
Now let's apply the config to `df.plot`:
```
import BPTK_Py.config as config
subplot=df.plot(kind=config.configuration["kind"],
alpha=config.configuration["alpha"], stacked=config.configuration["stacked"],
figsize=config.configuration["figsize"],
title="Base",
color=config.configuration["colors"],
lw=config.configuration["linewidth"])
```
Yes! We've recreated the plot from the high-level `bptk.plot_scenarios` method using basic plotting functions.
Now let's do something that currently isn't possible using the high-level BPTK-Py methods - let's create a graph that has two axes.
This is useful when you want to show the results of two equations at the same time, but they have different orders of magnitude. For instance, in the plot below, the number of customers is much smaller than the profit made, so the customer graph looks like a straight line. But it would still be interesting to be able to compare the two graphs.
```
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["base"],
equations=['customers','profit'],
title="Base",
freq="M",
x_label="Time",
y_label="No. of Customers"
)
```
As before, we collect the data in a dataframe.
```
df=bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["base"],
equations=['customers','profit'],
title="Base",
freq="M",
x_label="Time",
y_label="No. of Customers",
return_df = True
)
df[0:10]
```
Plotting two axes is easy in Pandas (which itself uses the Matplotlib library):
```
ax = df.plot(None,'customers', kind=config.configuration["kind"],
alpha=config.configuration["alpha"], stacked=config.configuration["stacked"],
figsize=config.configuration["figsize"],
title="Profit vs. Customers",
color=config.configuration["colors"],
lw=config.configuration["linewidth"])
# ax is a Matplotlib Axes object
ax1 = ax.twinx()
# Matplotlib.axes.Axes.twinx creates a twin y-axis.
plot =df.plot(None,'profit',ax=ax1)
```
Voila! This is actually quite easy once you understand how to access the data (and of course a little knowledge of Pandas and Matplotlib is also useful). If you were writing a document that needed a lot of plots of this kind, you could create your own high-level function to avoid copying and pasting the code above multiple times.
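For example, a reusable two-axis helper might look like this (a sketch using plain Pandas and Matplotlib, independent of BPTK-Py; the function name `plot_two_axes` and the dummy data are our own):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line inside Jupyter
import matplotlib.pyplot as plt

def plot_two_axes(df, left_col, right_col, title=""):
    """Plot two dataframe columns against the index on twin y-axes."""
    ax_left = df[left_col].plot(figsize=(10, 5), title=title, lw=2)
    ax_right = ax_left.twinx()  # second y-axis sharing the same x-axis
    df[right_col].plot(ax=ax_right, lw=2, color="tab:red")
    ax_left.set_ylabel(left_col)
    ax_right.set_ylabel(right_col)
    return ax_left, ax_right

# Dummy data shaped like a simulation result, indexed by time
df = pd.DataFrame({"customers": range(100), "profit": [x * 50 for x in range(100)]})
ax1, ax2 = plot_two_axes(df, "customers", "profit", title="Profit vs. Customers")
```

You would call it with the dataframe returned by `bptk.plot_scenarios(..., return_df=True)` instead of the dummy data.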
## Advanced interactive user interfaces
Now let's try something a little more challenging: let's build a dashboard for our simulation that lets you manipulate some of the scenario settings interactively and plots the results in tabs.
> Note: You need to have widgets enabled in Jupyter for the following to work. Please check the [BPTK-Py installation instructions](https://bptk.transentis-labs.com/en/latest/docs/usage/installation.html) or refer to the [Jupyter Widgets](https://ipywidgets.readthedocs.io/en/latest/user_install.html) documentation
First, we need to understand how to create tabs. For this we need to import the `ipywidgets` library, and we also need access to Matplotlib's `pyplot`:
```
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets
```
Then we can create some tabs that display scenario results as follows:
```
out1 = widgets.Output()
out2 = widgets.Output()
tab = widgets.Tab(children = [out1, out2])
tab.set_title(0, 'Customers')
tab.set_title(1, 'Profit')
display(tab)
with out1:
# turn off pyplot's interactive mode to ensure the plot is not created directly
plt.ioff()
# create the plot, but don't show it yet
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["hereWeGo"],
equations=['customers'],
title="Here We Go",
freq="M",
x_label="Time",
y_label="No. of Customers"
)
# show the plot
plt.show()
# turn interactive mode on again
plt.ion()
with out2:
plt.ioff()
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["hereWeGo"],
equations=['profit'],
title="Here We Go",
freq="M",
x_label="Time",
y_label="Euro"
)
plt.show()
plt.ion()
```
That was easy! The only thing you really need to remember is to turn interactive plotting off in `pyplot` before creating the tabs and to turn it on again after creating the plots. If you forget to do that, the plots appear above the tabs (try it and see!).
In the next step, we need to add some sliders to manipulate the following scenario settings:
* Referrals
* Referral Free Months
* Referral Program Adoption %
* Advertising Success %
Creating a slider for the referrals is easy using the integer slider from the `ipywidgets` widget library:
```
widgets.IntSlider(
value=7,
min=0,
max=15,
step=1,
description='Referrals:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
```
When manipulating a simulation model, we mostly want to start with a particular scenario and then manipulate some of the scenario settings using interactive widgets. Let's set up a new scenario for this purpose and call it `interactiveScenario`:
```
bptk.register_scenarios(scenario_manager="smCustomerAcquisition", scenarios=
{
"interactiveScenario":{
"constants":{
"referrals":0,
"advertisingSuccessPct":0.1,
"referralFreeMonths":3,
"referralProgamAdoptionPct":10
}
}
}
)
```
We can then access the scenario using `bptk.get_scenarios`:
```
scenario = bptk.get_scenario("smCustomerAcquisition","interactiveScenario")
scenario.constants
bptk.plot_scenarios(scenario_managers=["smCustomerAcquisition"],
scenarios=["interactiveScenario"],
equations=['profit'],
title="Interactive Scenario",
freq="M",
x_label="Time",
y_label="Euro"
)
```
The scenario constants can be accessed via the `constants` variable. Now that we have all the right pieces, we can put them together using the `interact` function.
```
@interact(advertising_success_pct=widgets.FloatSlider(
value=0.1,
min=0,
max=1,
step=0.01,
continuous_update=False,
description='Advertising Success Pct'
))
def dashboard(advertising_success_pct):
scenario= bptk.get_scenario("smCustomerAcquisition",
"interactiveScenario")
scenario.constants["advertisingSuccessPct"]=advertising_success_pct
bptk.reset_scenario_cache(scenario_manager="smCustomerAcquisition",
scenario="interactiveScenario")
bptk.plot_scenarios(scenario_managers=["smCustomerAcquisition"],
scenarios=["interactiveScenario"],
equations=['profit'],
title="Interactive Scenario",
freq="M",
x_label="Time",
y_label="Euro"
)
```
Now let's combine this with the tabs from above.
```
out1 = widgets.Output()
out2 = widgets.Output()
tab = widgets.Tab(children = [out1, out2])
tab.set_title(0, 'Customers')
tab.set_title(1, 'Profit')
display(tab)
@interact(advertising_success_pct=widgets.FloatSlider(
value=0.1,
min=0,
max=1,
step=0.01,
continuous_update=False,
description='Advertising Success Pct'
))
def dashboardWithTabs(advertising_success_pct):
scenario= bptk.get_scenario("smCustomerAcquisition","interactiveScenario")
scenario.constants["advertisingSuccessPct"]=advertising_success_pct
bptk.reset_scenario_cache(scenario_manager="smCustomerAcquisition",
scenario="interactiveScenario")
with out1:
# turn off pyplot's interactive mode to ensure the plot is not created directly
plt.ioff()
# clear the widgets output ... otherwise we will end up with a long list of plots, one for each change of settings
# create the plot, but don't show it yet
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["interactiveScenario"],
equations=['customers'],
title="Interactive Scenario",
freq="M",
x_label="Time",
y_label="No. of Customers"
)
# show the plot
out1.clear_output()
plt.show()
# turn interactive mode on again
plt.ion()
with out2:
plt.ioff()
out2.clear_output()
bptk.plot_scenarios(
scenario_managers=["smCustomerAcquisition"],
scenarios=["interactiveScenario"],
equations=['profit'],
title="Interactive Scenario",
freq="M",
x_label="Time",
y_label="Euro"
)
plt.show()
plt.ion()
```
<a href="https://colab.research.google.com/github/kartikgill/The-GAN-Book/blob/main/Skill-08/Cycle-GAN-No-Outputs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Import Useful Libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
%matplotlib inline
import tensorflow
print (tensorflow.__version__)
```
# Download and Unzip Data
```
!wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip
!unzip horse2zebra.zip
!ls horse2zebra
import glob
path = ""
horses_train = glob.glob(path + 'horse2zebra/trainA/*.jpg')
zebras_train = glob.glob(path + 'horse2zebra/trainB/*.jpg')
horses_test = glob.glob(path + 'horse2zebra/testA/*.jpg')
zebras_test = glob.glob(path + 'horse2zebra/testB/*.jpg')
len(horses_train), len(zebras_train), len(horses_test), len(zebras_test)
import cv2
for file in horses_train[:10]:
img = cv2.imread(file)
print (img.shape)
```
# Display a Few Samples
```
print ("Horses")
for k in range(2):
plt.figure(figsize=(15, 15))
for j in range(6):
file = np.random.choice(horses_train)
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.subplot(660 + 1 + j)
plt.imshow(img)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
print ("-"*80)
print ("Zebras")
for k in range(2):
plt.figure(figsize=(15, 15))
for j in range(6):
file = np.random.choice(zebras_train)
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.subplot(660 + 1 + j)
plt.imshow(img)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
```
# Define Generator Model (Res-Net Like)
```
# The following class is taken from: https://keras.io/examples/generative/cyclegan/
class ReflectionPadding2D(tensorflow.keras.layers.Layer):
"""Implements Reflection Padding as a layer.
Args:
padding(tuple): Amount of padding for the
spatial dimensions.
Returns:
A padded tensor with the same type as the input tensor.
"""
def __init__(self, padding=(1, 1), **kwargs):
self.padding = tuple(padding)
super(ReflectionPadding2D, self).__init__(**kwargs)
def call(self, input_tensor, mask=None):
padding_width, padding_height = self.padding
padding_tensor = [
[0, 0],
[padding_height, padding_height],
[padding_width, padding_width],
[0, 0],
]
return tensorflow.pad(input_tensor, padding_tensor, mode="REFLECT")
import tensorflow_addons as tfa
# Weights initializer for the layers.
kernel_init = tensorflow.keras.initializers.RandomNormal(mean=0.0, stddev=0.02)
# Gamma initializer for instance normalization.
gamma_init = tensorflow.keras.initializers.RandomNormal(mean=0.0, stddev=0.02)
def custom_resnet_block(input_data, filters):
x = ReflectionPadding2D()(input_data)
x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(3,3), padding='valid', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = ReflectionPadding2D()(x)
x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(3,3), padding='valid', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Add()([x, input_data])
return x
def make_generator():
source_image = tensorflow.keras.layers.Input(shape=(256, 256, 3))
x = ReflectionPadding2D(padding=(3, 3))(source_image)
x = tensorflow.keras.layers.Conv2D(64, kernel_size=(7,7), kernel_initializer=kernel_init, use_bias=False)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2D(128, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2D(256, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = custom_resnet_block(x, 256)
x = tensorflow.keras.layers.Conv2DTranspose(128, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = tensorflow.keras.layers.Conv2DTranspose(64, kernel_size=(3,3), strides=(2,2), padding='same', kernel_initializer=kernel_init)(x)
x = tfa.layers.InstanceNormalization()(x)
x = tensorflow.keras.layers.Activation('relu')(x)
x = ReflectionPadding2D(padding=(3, 3))(x)
x = tensorflow.keras.layers.Conv2D(3, kernel_size=(7,7), padding='valid')(x)
x = tfa.layers.InstanceNormalization()(x)
translated_image = tensorflow.keras.layers.Activation('tanh')(x)
return source_image, translated_image
source_image, translated_image = make_generator()
generator_network_AB = tensorflow.keras.models.Model(inputs=source_image, outputs=translated_image)
source_image, translated_image = make_generator()
generator_network_BA = tensorflow.keras.models.Model(inputs=source_image, outputs=translated_image)
print (generator_network_AB.summary())
```
# Define Discriminator Network
```
def my_conv_layer(input_layer, filters, strides, bn=True):
x = tensorflow.keras.layers.Conv2D(filters, kernel_size=(4,4), strides=strides, padding='same', kernel_initializer=kernel_init)(input_layer)
x = tensorflow.keras.layers.LeakyReLU(alpha=0.2)(x)
if bn:
x = tfa.layers.InstanceNormalization()(x)
return x
def make_discriminator():
target_image_input = tensorflow.keras.layers.Input(shape=(256, 256, 3))
x = my_conv_layer(target_image_input, 64, (2,2), bn=False)
x = my_conv_layer(x, 128, (2,2))
x = my_conv_layer(x, 256, (2,2))
x = my_conv_layer(x, 512, (1,1))
patch_features = tensorflow.keras.layers.Conv2D(1, kernel_size=(4,4), padding='same')(x)
return target_image_input, patch_features
target_image_input, patch_features = make_discriminator()
discriminator_network_A = tensorflow.keras.models.Model(inputs=target_image_input, outputs=patch_features)
target_image_input, patch_features = make_discriminator()
discriminator_network_B = tensorflow.keras.models.Model(inputs=target_image_input, outputs=patch_features)
print (discriminator_network_A.summary())
adam_optimizer = tensorflow.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
discriminator_network_A.compile(loss='mse', optimizer=adam_optimizer, metrics=['accuracy'])
discriminator_network_B.compile(loss='mse', optimizer=adam_optimizer, metrics=['accuracy'])
```
# Define Cycle-GAN
```
source_image_A = tensorflow.keras.layers.Input(shape=(256, 256, 3))
source_image_B = tensorflow.keras.layers.Input(shape=(256, 256, 3))
# Domain Transfer
fake_B = generator_network_AB(source_image_A)
fake_A = generator_network_BA(source_image_B)
# Restoring original Domain
get_back_A = generator_network_BA(fake_B)
get_back_B = generator_network_AB(fake_A)
# Get back Identical/Same Image
get_same_A = generator_network_BA(source_image_A)
get_same_B = generator_network_AB(source_image_B)
discriminator_network_A.trainable=False
discriminator_network_B.trainable=False
# Tell Real vs Fake, for a given domain
verify_A = discriminator_network_A(fake_A)
verify_B = discriminator_network_B(fake_B)
cycle_gan = tensorflow.keras.models.Model(inputs = [source_image_A, source_image_B], \
outputs = [verify_A, verify_B, get_back_A, get_back_B, get_same_A, get_same_B])
cycle_gan.summary()
```
# Compiling Model
```
cycle_gan.compile(loss=['mse', 'mse', 'mae', 'mae', 'mae', 'mae'], loss_weights=[1, 1, 10, 10, 5, 5],\
optimizer=adam_optimizer)
```
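The `loss_weights` follow the CycleGAN recipe: the two adversarial MSE terms get weight 1, the two cycle-consistency MAE terms get weight 10, and the two identity MAE terms get weight 5. Keras forms a weighted sum of the per-output losses, which we can sketch in plain Python (the numbers below are made-up example losses, not real training output):

```python
def combined_loss(losses, weights):
    """Weighted sum of per-output losses, as used for the total model loss."""
    return sum(w * l for w, l in zip(weights, losses))

# Per-output losses in the model's output order:
# [adv_A, adv_B, cycle_A, cycle_B, identity_A, identity_B]
losses = [0.3, 0.25, 0.1, 0.12, 0.05, 0.04]
weights = [1, 1, 10, 10, 5, 5]
total = combined_loss(losses, weights)  # 0.55 + 2.2 + 0.45 = 3.2
```

The heavy weight on the cycle terms is what forces the generators to preserve content rather than just fool the discriminators.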
# Define Data Generators
```
def horses_to_zebras(horses, generator_network):
generated_samples = generator_network.predict_on_batch(horses)
return generated_samples
def zebras_to_horses(zebras, generator_network):
generated_samples = generator_network.predict_on_batch(zebras)
return generated_samples
def get_horse_samples(batch_size):
random_files = np.random.choice(horses_train, size=batch_size)
images = []
for file in random_files:
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
images.append((img-127.5)/127.5)
horse_images = np.array(images)
return horse_images
def get_zebra_samples(batch_size):
random_files = np.random.choice(zebras_train, size=batch_size)
images = []
for file in random_files:
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
images.append((img-127.5)/127.5)
zebra_images = np.array(images)
return zebra_images
def show_generator_results_horses_to_zebras(generator_network_AB, generator_network_BA):
images = []
for j in range(5):
file = np.random.choice(horses_test)
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
images.append(img)
print ('Input Horse Images')
plt.figure(figsize=(13, 13))
for j, img in enumerate(images):
plt.subplot(550 + 1 + j)
plt.imshow(img)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
print ('Translated (Horse -> Zebra) Images')
translated = []
plt.figure(figsize=(13, 13))
for j, img in enumerate(images):
img = (img-127.5)/127.5
output = horses_to_zebras(np.array([img]), generator_network_AB)[0]
translated.append(output)
output = (output+1.0)/2.0
plt.subplot(550 + 1 + j)
plt.imshow(output)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
print ('Translated reverse ( Fake Zebras -> Fake Horses)')
plt.figure(figsize=(13, 13))
for j, img in enumerate(translated):
output = zebras_to_horses(np.array([img]), generator_network_BA)[0]
output = (output+1.0)/2.0
plt.subplot(550 + 1 + j)
plt.imshow(output)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
def show_generator_results_zebras_to_horses(generator_network_AB, generator_network_BA):
images = []
for j in range(5):
file = np.random.choice(zebras_test)
img = cv2.imread(file)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
images.append(img)
print ('Input Zebra Images')
plt.figure(figsize=(13, 13))
for j, img in enumerate(images):
plt.subplot(550 + 1 + j)
plt.imshow(img)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
print ('Translated (Zebra -> Horse) Images')
translated = []
plt.figure(figsize=(13, 13))
for j, img in enumerate(images):
img = (img-127.5)/127.5
output = zebras_to_horses(np.array([img]), generator_network_BA)[0]
translated.append(output)
output = (output+1.0)/2.0
plt.subplot(550 + 1 + j)
plt.imshow(output)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
print ('Translated reverse (Fake Horse -> Fake Zebra)')
plt.figure(figsize=(13, 13))
for j, img in enumerate(translated):
output = horses_to_zebras(np.array([img]), generator_network_AB)[0]
output = (output+1.0)/2.0
plt.subplot(550 + 1 + j)
plt.imshow(output)
plt.axis('off')
#plt.title(trainY[i])
plt.show()
```
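Note the pixel scaling used throughout the sampling and display code: inputs are mapped from [0, 255] to [-1, 1] with `(img - 127.5)/127.5` to match the generators' `tanh` output range, and outputs are mapped back to [0, 1] with `(output + 1)/2` for `plt.imshow`. A quick sanity check of that round trip:

```python
import numpy as np

pixels = np.array([0.0, 127.5, 255.0])
scaled = (pixels - 127.5) / 127.5   # uint8 range -> [-1, 1], matching tanh
restored = (scaled + 1.0) / 2.0     # [-1, 1] -> [0, 1] for plt.imshow
```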
# Training Cycle-GAN
```
len(horses_train), len(zebras_train)
epochs = 500
batch_size = 1
steps = 1067
for i in range(0, epochs):
if i%5 == 0:
show_generator_results_horses_to_zebras(generator_network_AB, generator_network_BA)
print ("-"*100)
show_generator_results_zebras_to_horses(generator_network_AB, generator_network_BA)
for j in range(steps):
# A == Horses
# B == Zebras
domain_A_images = get_horse_samples(batch_size)
domain_B_images = get_zebra_samples(batch_size)
fake_patch = np.zeros((batch_size, 32, 32, 1))
real_patch = np.ones((batch_size, 32, 32, 1))
fake_B_images = generator_network_AB(domain_A_images)
fake_A_images = generator_network_BA(domain_B_images)
# Updating Discriminator A weights
discriminator_network_A.trainable=True
discriminator_network_B.trainable=False
loss_d_real_A = discriminator_network_A.train_on_batch(domain_A_images, real_patch)
loss_d_fake_A = discriminator_network_A.train_on_batch(fake_A_images, fake_patch)
loss_d_A = np.add(loss_d_real_A, loss_d_fake_A)/2.0
# Updating Discriminator B weights
discriminator_network_B.trainable=True
discriminator_network_A.trainable=False
loss_d_real_B = discriminator_network_B.train_on_batch(domain_B_images, real_patch)
loss_d_fake_B = discriminator_network_B.train_on_batch(fake_B_images, fake_patch)
loss_d_B = np.add(loss_d_real_B, loss_d_fake_B)/2.0
# Make the Discriminators believe that these are real samples and use that loss to train the generators
discriminator_network_A.trainable=False
discriminator_network_B.trainable=False
# Updating Generator weights
loss_g = cycle_gan.train_on_batch([domain_A_images, domain_B_images],\
[real_patch, real_patch, domain_A_images, domain_B_images, domain_A_images, domain_B_images])
if j%100 == 0:
print ("Epoch:%.0f, Step:%.0f, DA-Loss:%.3f, DA-Acc:%.3f, DB-Loss:%.3f, DB-Acc:%.3f, G-Loss:%.3f"\
%(i,j,loss_d_A[0],loss_d_A[1]*100,loss_d_B[0],loss_d_B[1]*100,loss_g[0]))
```
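The `(batch_size, 32, 32, 1)` label patches match the discriminator's output shape: with `'same'` padding, each stride-2 convolution halves the 256×256 spatial size, and the three stride-2 layers bring it down to 32×32 (the stride-1 layers leave it unchanged). A small sketch to verify the arithmetic:

```python
import math

def conv_output_size(size, stride):
    """Spatial output size of a 'same'-padded convolution."""
    return math.ceil(size / stride)

size = 256
for stride in (2, 2, 2, 1, 1):  # strides of the discriminator's conv layers
    size = conv_output_size(size, stride)
# size is now 32, matching the 32x32 real/fake label patches
```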
# 5. Algorithmic Question
You consult for a personal trainer who has a back-to-back sequence of requests for appointments. A sequence of requests is of the form 30, 40, 25, 50, 30, 20, where each number is the time that the person who makes the appointment wants to spend. You need to accept some requests; however, you need a break between them, so you cannot accept two consecutive requests. For example, [30, 50, 20] is an acceptable solution (of duration 100), but [30, 40, 50, 20] is not, because 30 and 40 are two consecutive appointments. Your goal is to provide the personal trainer with a schedule that maximizes the total length of the accepted appointments. For example, in the previous instance, the optimal solution is [40, 50, 20], of total duration 110.
-----------------------------------------
1. Write an algorithm that computes the acceptable solution with the longest possible duration.
2. Implement a program that given in input an instance in the form given above, gives the optimal solution.
The following algorithm is actually a merge of two algorithms:

- a simple comparison of the two possible sub-lists (taking every other number)
- a greedy heuristic

The `app_setter` function checks the input array with these two very simple algorithms and then compares the results. We chose this approach in order to shield the function from the vulnerabilities of the single algorithms, since there are specific cases in which it is possible to demonstrate the ineffectiveness of each of the two. Nevertheless, this cross-check solves many problems from this point of view.
```
def app_setter(A):
l1,l2,l3,B,t = [],[],[],A.copy(),0
try:
for i in range(0,len(A),2): #simple comparison of the two everyother lists
l1.append(A[i])
for i in range(1,len(A),2):
l2.append(A[i])
except IndexError:
pass
while t < len(B)/2: #greedy
m = max(B)
try:
l3.append(m)
except:
pass
try :
B[B.index(m)+1] = 0
B[B.index(m)-1] = 0
B[B.index(m)] = 0
B.remove(0)
except IndexError:
pass
t+=1
if sum(l1)>= sum(l2) and sum(l1)>=sum(l3):
return l1
if sum(l2)>= sum(l1) and sum(l2)>=sum(l3):
return l2
if sum(l3)>= sum(l1) and sum(l3)>=sum(l2):
return l3
app_setter([10, 50, 10, 50, 10, 50, 150, 120])
# Sample run on [10, 50, 10, 50, 10, 50, 150, 120]:
# [150, 50, 50] = 250 ---> greedy algorithm
# [50, 50, 50, 120] = 270 ---> simple every other (i+1)
# [10, 10, 10, 150] = 180 ---> simple every other
```
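Both heuristics above can be beaten on adversarial inputs; the standard exact solution is an O(n) dynamic program (the "house robber" recurrence, keeping the best total with and without taking the previous request). A sketch (the function name `max_appointments` is our own):

```python
def max_appointments(A):
    """Maximum total duration of non-consecutive requests (O(n) DP)."""
    take, skip = 0, 0  # best totals ending with / without taking the previous item
    for x in A:
        # taking x requires skipping the previous item; skipping keeps the best so far
        take, skip = skip + x, max(take, skip)
    return max(take, skip)

max_appointments([30, 40, 25, 50, 30, 20])            # -> 110 (40 + 50 + 20)
max_appointments([10, 50, 10, 50, 10, 50, 150, 120])  # -> 270 (50 + 50 + 50 + 120)
```

Unlike the heuristics, this is provably optimal on every input.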
```
import pickle
import pandas as pd
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import numpy as np
import bcolz
import unicodedata
import torch
import torch.nn as nn
import torch.nn.functional as F
import time
import torch.optim as optim
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import matplotlib.ticker as ticker
import numpy as np
import random
```
# Preprocessing the text data
```
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = s.replace("'","")
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
def preprocess(df):
nrows = len(df)
real_preprocess = []
df['Content_Parsed_1'] = df['transcription']
for row in range(0, nrows):
# Create an empty list containing preprocessed words
real_preprocess = []
# Save the text and its words into an object
text = df.loc[row]['transcription']
text = normalizeString(text)
df.loc[row, 'Content_Parsed_1'] = text  # use .loc[row, col] so the assignment writes back to df
df['action'] = df['action'].str.lower()
df['object'] = df['object'].str.lower()
df['location'] = df['location'].str.lower()
nltk.download('wordnet')
def lemmatize(df):
wordnet_lemmatizer = WordNetLemmatizer()
# Lemmatizing the content
nrows = len(df)
lemmatized_text_list = []
for row in range(0, nrows):
# Create an empty list containing lemmatized words
lemmatized_list = []
# Save the text and its words into an object
text = df.loc[row]['Content_Parsed_1']
text_words = text.split(" ")
# Iterate through every word to lemmatize
for word in text_words:
lemmatized_list.append(wordnet_lemmatizer.lemmatize(word, pos="v"))
# Join the list
lemmatized_text = " ".join(lemmatized_list)
# Append to the list containing the texts
lemmatized_text_list.append(lemmatized_text)
df['Content_Parsed_2'] = lemmatized_text_list
path_df = "E:/saarthi/task_data/train_data.csv"
with open(path_df, 'rb') as data:
df = pd.read_csv(data)
path_df_val = "E:/saarthi/task_data/valid_data.csv"
with open(path_df_val, 'rb') as data:
df_val = pd.read_csv(data)
preprocess(df_val)
lemmatize(df_val)
preprocess(df)
lemmatize(df)
```
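To see what the cleaning functions actually do, here is a standalone round trip (the functions are reproduced from above so the snippet runs on its own):

```python
import re
import unicodedata

def unicodeToAscii(s):
    # Drop combining accent marks after NFD decomposition
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

def normalizeString(s):
    s = unicodeToAscii(s.lower().strip())
    s = s.replace("'", "")
    s = re.sub(r"([.!?])", r" \1", s)      # pad sentence punctuation with a space
    s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)  # collapse everything else to a space
    return s

cleaned = normalizeString("  Café, s'il vous plaît!  ")  # -> "cafe sil vous plait !"
```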
# Getting Glove Word embeddings
```
glove_path = "E:"
vectors = bcolz.open(f'{glove_path}/6B.50.dat')[:]
words = pickle.load(open(f'{glove_path}/6B.50_words.pkl', 'rb'))
word2idx = pickle.load(open(f'{glove_path}/6B.50_idx.pkl', 'rb'))
glove = {w: vectors[word2idx[w]] for w in words}
target_vocab = []
nrows = len(df)
for row in range(0, nrows):
text = df.loc[row]['Content_Parsed_2']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df_val)
for row in range(0, nrows):
text = df_val.loc[row]['Content_Parsed_2']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df)
for row in range(0, nrows):
text = df.loc[row]['action']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df_val)
for row in range(0, nrows):
text = df_val.loc[row]['action']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df)
for row in range(0, nrows):
text = df.loc[row]['object']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df_val)
for row in range(0, nrows):
text = df_val.loc[row]['object']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df)
for row in range(0, nrows):
text = df.loc[row]['location']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
nrows = len(df_val)
for row in range(0, nrows):
text = df_val.loc[row]['location']
text_words = text.split(" ")
for word in text_words:
if word not in target_vocab:
target_vocab.append(word)
```
# Creating an embedding matrix
```
vocab_size = len(target_vocab)
input_size = 50
embedding_matrix = torch.zeros((vocab_size, input_size))
for w in target_vocab:
i = word_to_idx(w)
embedding_matrix[i, :] = torch.from_numpy(glove[w]).float()
```
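One caveat in the loop above: `glove[w]` raises a `KeyError` for any vocabulary word that is missing from the GloVe vocabulary. A common workaround is to initialize such words randomly; a sketch in NumPy (the helper name, toy `glove` dict and scale are our own choices):

```python
import numpy as np

def build_embedding_matrix(vocab, glove, dim, seed=0):
    """Stack pretrained vectors; randomly initialize words missing from glove."""
    rng = np.random.default_rng(seed)
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for i, w in enumerate(vocab):
        if w in glove:
            matrix[i] = glove[w]
        else:
            matrix[i] = rng.normal(scale=0.6, size=dim)  # OOV fallback
    return matrix

# Toy example: one known word, one out-of-vocabulary word
toy_glove = {"lights": np.ones(4, dtype=np.float32)}
m = build_embedding_matrix(["lights", "zorp"], toy_glove, dim=4)
```

The same idea carries over to the `torch.zeros` matrix above via `torch.from_numpy`.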
# Defining utility functions
```
def word_to_idx(word):
for i, w in enumerate(target_vocab):
if w == word:
return i
return -1
def sentence_to_matrix(sentence):
words = sentence.split(" ")
n = len(words)
m = torch.zeros((n, input_size))
for i, w in enumerate(words):
m[i] = embedding_matrix[word_to_idx(w)]
return m
def sentence_to_index(sentence):
w = sentence.split(" ")
l = []
for word in w:
l.append(word_to_idx(word))
t = torch.tensor(l, dtype=torch.float32)
return t
output_size = len(target_vocab)
input_size = 50
hidden_size = 50
def showPlot(points):
plt.figure()
fig, ax = plt.subplots()
# this locator puts ticks at regular intervals
loc = ticker.MultipleLocator(base=0.2)
ax.yaxis.set_major_locator(loc)
plt.plot(points)
import time
import math
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (- %s)' % (asMinutes(s), asMinutes(rs))
```
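A note on `word_to_idx` above: it is an O(V) linear scan and gets called for every word of every sentence during training. A dictionary lookup makes it O(1) (a sketch; the toy vocabulary and the name `word2index` are ours):

```python
target_vocab = ["turn", "down", "the", "bathroom", "temperature"]  # toy vocab

# Build the index once...
word2index = {w: i for i, w in enumerate(target_vocab)}

def word_to_idx_fast(word):
    """O(1) replacement for the linear-scan word_to_idx."""
    return word2index.get(word, -1)  # -1 for unknown words, as before
```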
# Creating the Networks
```
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.hidden_size = hidden_size
self.gru = nn.GRU(input_size, hidden_size)
def forward(self, x, hidden):
x = x.unsqueeze(0)
output, hidden = self.gru(x, hidden)
return output, hidden
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
s = "turn down the bathroom temperature"
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
matrix = sentence_to_matrix(s)
print(matrix[0].unsqueeze(0).shape)
encoder = EncoderRNN(input_size, hidden_size)
hidden = encoder.initHidden()
for i in range(matrix.shape[0]):
out, hidden = encoder(matrix[i].unsqueeze(0), hidden)
print(out.shape)
print(hidden.shape)
class DecoderRNN(nn.Module):
def __init__(self, hidden_size, output_size):
super(DecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.gru = nn.GRU(hidden_size, hidden_size)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x, hidden):
output = F.relu(x)
output, hidden = self.gru(output, hidden)
output_softmax = self.softmax(self.out(output[0]))
return output, hidden, output_softmax
def initHidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
decoder_hidden = hidden
decoder_input = torch.ones((1,1,50))
decoder = DecoderRNN(hidden_size, output_size)
output_sentence = df.loc[3]["action"] + " "+ df.loc[3]["object"] + " " + df.loc[3]["location"]
print(output_sentence)
target_tensor = sentence_to_index(output_sentence)
criterion = nn.NLLLoss()
loss = 0
for i in range(target_tensor.shape[0]):
decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
loss += criterion(decoder_output_softmax, target_tensor[i].unsqueeze(0).long())
print(torch.argmax(decoder_output_softmax, dim=1))
```
# Training the networks
```
def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion):
encoder_hidden = encoder.initHidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
loss = 0
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden)
decoder_input = torch.ones((1,1,50))
decoder_hidden = encoder_hidden
for i in range(target_tensor.shape[0]):
decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
loss += criterion(decoder_output_softmax, target_tensor[i].unsqueeze(0).long())
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
return loss.item() / target_length
def trainIters(encoder, decoder, n_iters, df, print_every=1000, plot_every=100, learning_rate=0.01):
start = time.time()
plot_losses = []
print_loss_total = 0 # Reset every print_every
plot_loss_total = 0 # Reset every plot_every
encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate)
decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
nrows = len(df)
for iter in range(1, n_iters + 1):
i = random.randint(0, n_iters)
i = (i % nrows)
s = df.loc[i]["Content_Parsed_2"]
input_tensor = sentence_to_matrix(s)
output_sentence = df.loc[i]["action"] + " "+ df.loc[i]["object"] + " " + df.loc[i]["location"]
target_tensor = sentence_to_index(output_sentence)
loss = train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
print_loss_total += loss
plot_loss_total += loss
if iter % print_every == 0:
print_loss_avg = print_loss_total / print_every
print_loss_total = 0
print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters),
iter, iter / n_iters * 100, print_loss_avg))
if iter % plot_every == 0:
plot_loss_avg = plot_loss_total / plot_every
plot_losses.append(plot_loss_avg)
plot_loss_total = 0
showPlot(plot_losses)
def predict(encoder, decoder, input_sentence):
encoder_hidden = encoder.initHidden()
input_tensor = sentence_to_matrix(input_sentence)
decoder_input = torch.ones((1,1,50))
input_length = input_tensor.size(0)
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden)
decoder_hidden = encoder_hidden
for i in range(3):
decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
idx = torch.argmax(decoder_output_softmax)
print(target_vocab[idx])
def evaluate(encoder, decoder, input_sentence, target_tensor):
encoder_hidden = encoder.initHidden()
input_tensor = sentence_to_matrix(input_sentence)
decoder_input = torch.ones((1,1,50))
input_length = input_tensor.size(0)
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(input_tensor[ei].unsqueeze(0), encoder_hidden)
decoder_hidden = encoder_hidden
correct = 0
for i in range(3):
decoder_input, decoder_hidden, decoder_output_softmax = decoder(decoder_input, decoder_hidden)
idx = torch.argmax(decoder_output_softmax)
if(idx == target_tensor[i]):
correct += 1
if(correct == 3):
return 1
else:
return 0
encoder = EncoderRNN(input_size, hidden_size).to(device)
decoder = DecoderRNN(hidden_size, output_size)
trainIters(encoder, decoder, 150000, df)
```
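`trainIters` calls `timeSince` and `showPlot`, which are defined elsewhere in the notebook and follow the conventions of the PyTorch seq2seq tutorial. A minimal sketch of what they might look like:

```python
import math
import time

def asMinutes(s):
    # Format a duration in seconds as "Xm Ys"
    m = math.floor(s / 60)
    return '%dm %ds' % (m, s - m * 60)

def timeSince(since, percent):
    # Elapsed time plus a simple linear estimate of the remaining time,
    # given the fraction of iterations completed so far.
    now = time.time()
    s = now - since
    es = s / percent   # estimated total time
    rs = es - s        # estimated remaining time
    return '%s (- %s)' % (asMinutes(s), asMinutes(rs))

def showPlot(points):
    # Plot the running average losses collected during training
    import matplotlib.pyplot as plt
    plt.figure()
    plt.plot(points)
    plt.show()
```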
# Evaluating the model
```
n = len(df_val)
total = 0
correct = 0
for i in range(n):
output_sentence = df_val.loc[i]["action"] + " "+ df_val.loc[i]["object"] + " " + df_val.loc[i]["location"]
target_tensor = sentence_to_index(output_sentence)
input_sentence = df_val.loc[i]["Content_Parsed_2"]
correct += evaluate(encoder, decoder, input_sentence, target_tensor)
total += 1
print(correct)
print(total)
print(f"Accuracy on Val test : {(float(correct)/total)*100}")
```
<a href="https://colab.research.google.com/github/agemagician/Prot-Transformers/blob/master/Embedding/Advanced/Electra.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<h3>Extracting protein sequences' features using the ProtElectra pretrained model</h3>
<b>1. Load the necessary libraries, including the Hugging Face transformers package</b>
```
!pip install -q transformers
import torch
from transformers import ElectraTokenizer, ElectraForPreTraining, ElectraForMaskedLM, ElectraModel
import re
import os
import requests
from tqdm.auto import tqdm
```
<b>2. Set the URL locations of the ProtElectra models and the vocabulary file</b>
```
generatorModelUrl = 'https://www.dropbox.com/s/5x5et5q84y3r01m/pytorch_model.bin?dl=1'
discriminatorModelUrl = 'https://www.dropbox.com/s/9ptrgtc8ranf0pa/pytorch_model.bin?dl=1'
generatorConfigUrl = 'https://www.dropbox.com/s/9059fvix18i6why/config.json?dl=1'
discriminatorConfigUrl = 'https://www.dropbox.com/s/jq568evzexyla0p/config.json?dl=1'
vocabUrl = 'https://www.dropbox.com/s/wck3w1q15bc53s0/vocab.txt?dl=1'
```
<b>3. Download the ProtElectra models and vocabulary file</b>
```
downloadFolderPath = 'models/electra/'
discriminatorFolderPath = os.path.join(downloadFolderPath, 'discriminator')
generatorFolderPath = os.path.join(downloadFolderPath, 'generator')
discriminatorModelFilePath = os.path.join(discriminatorFolderPath, 'pytorch_model.bin')
generatorModelFilePath = os.path.join(generatorFolderPath, 'pytorch_model.bin')
discriminatorConfigFilePath = os.path.join(discriminatorFolderPath, 'config.json')
generatorConfigFilePath = os.path.join(generatorFolderPath, 'config.json')
vocabFilePath = os.path.join(downloadFolderPath, 'vocab.txt')
if not os.path.exists(discriminatorFolderPath):
os.makedirs(discriminatorFolderPath)
if not os.path.exists(generatorFolderPath):
os.makedirs(generatorFolderPath)
def download_file(url, filename):
response = requests.get(url, stream=True)
with tqdm.wrapattr(open(filename, "wb"), "write", miniters=1,
total=int(response.headers.get('content-length', 0)),
desc=filename) as fout:
for chunk in response.iter_content(chunk_size=4096):
fout.write(chunk)
if not os.path.exists(generatorModelFilePath):
download_file(generatorModelUrl, generatorModelFilePath)
if not os.path.exists(discriminatorModelFilePath):
download_file(discriminatorModelUrl, discriminatorModelFilePath)
if not os.path.exists(generatorConfigFilePath):
download_file(generatorConfigUrl, generatorConfigFilePath)
if not os.path.exists(discriminatorConfigFilePath):
download_file(discriminatorConfigUrl, discriminatorConfigFilePath)
if not os.path.exists(vocabFilePath):
download_file(vocabUrl, vocabFilePath)
```
<b>4. Load the vocabulary and the ProtElectra discriminator and generator models</b>
```
tokenizer = ElectraTokenizer(vocabFilePath, do_lower_case=False )
discriminator = ElectraForPreTraining.from_pretrained(discriminatorFolderPath)
generator = ElectraForMaskedLM.from_pretrained(generatorFolderPath)
electra = ElectraModel.from_pretrained(discriminatorFolderPath)
```
<b>5. Load the models onto the GPU if available and switch to inference mode</b>
```
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
discriminator = discriminator.to(device)
discriminator = discriminator.eval()
generator = generator.to(device)
generator = generator.eval()
electra = electra.to(device)
electra = electra.eval()
```
<b>6. Create or load sequences and map rarely occurring amino acids (U, Z, O, B) to X</b>
```
sequences_Example = ["A E T C Z A O","S K T Z P"]
sequences_Example = [re.sub(r"[UZOB]", "X", sequence) for sequence in sequences_Example]
```
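The character-class substitution maps every occurrence of the four rare residues to `X` in a single pass; for example:

```python
import re

sequences = ["A E T C Z A O", "S K T Z P"]
# [UZOB] matches any one of the four rare amino-acid codes
cleaned = [re.sub(r"[UZOB]", "X", s) for s in sequences]
print(cleaned)  # ['A E T C X A X', 'S K T X P']
```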
<b>7. Tokenize and encode the sequences and load them onto the GPU if possible</b>
```
ids = tokenizer.batch_encode_plus(sequences_Example, add_special_tokens=True, pad_to_max_length=True)
input_ids = torch.tensor(ids['input_ids']).to(device)
attention_mask = torch.tensor(ids['attention_mask']).to(device)
```
<b>8. Extract the sequences' features and move them to the CPU if needed</b>
```
with torch.no_grad():
discriminator_embedding = discriminator(input_ids=input_ids,attention_mask=attention_mask)[0]
discriminator_embedding = discriminator_embedding.cpu().numpy()
with torch.no_grad():
generator_embedding = generator(input_ids=input_ids,attention_mask=attention_mask)[0]
generator_embedding = generator_embedding.cpu().numpy()
with torch.no_grad():
electra_embedding = electra(input_ids=input_ids,attention_mask=attention_mask)[0]
electra_embedding = electra_embedding.cpu().numpy()
```
<b>9. Remove the padding ([PAD]) and special tokens ([CLS], [SEP]) added by the Electra model</b>
```
features = []
for seq_num in range(len(electra_embedding)):
seq_len = (attention_mask[seq_num] == 1).sum()
seq_emd = electra_embedding[seq_num][1:seq_len-1]
features.append(seq_emd)
print(features)
```
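The trimming logic can be checked without the model: with `add_special_tokens=True`, each encoded sequence is `[CLS] residues [SEP] [PAD]...`, so slicing `[1:seq_len-1]` keeps exactly the residue positions. A self-contained sketch with synthetic embeddings (shapes only, no real model output):

```python
import numpy as np

# Two sequences padded to 9 positions:
# 1 [CLS] + residues + 1 [SEP] + trailing [PAD]
attention_mask = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1],   # 7 residues + CLS + SEP
    [1, 1, 1, 1, 1, 1, 1, 0, 0],   # 5 residues + CLS + SEP + 2 PAD
])
embedding = np.random.rand(2, 9, 4)  # (batch, positions, hidden)

features = []
for seq_num in range(len(embedding)):
    seq_len = (attention_mask[seq_num] == 1).sum()
    seq_emd = embedding[seq_num][1:seq_len - 1]  # drop [CLS] and [SEP]
    features.append(seq_emd)

print([f.shape for f in features])  # [(7, 4), (5, 4)]
```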
# Decision Point Price Momentum Oscillator (PMO)
https://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:dppmo
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance (since renamed to yfinance) is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2017-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
df.tail()
df['ROC'] = ((df['Adj Close'] - df['Adj Close'].shift(1))/df['Adj Close'].shift(1)) * 100
df = df.dropna()
df.head()
df['35_Custom_EMA_ROC'] = df['ROC'].ewm(ignore_na=False,span=35,min_periods=0,adjust=True).mean()
df.head()
df['35_Custom_EMA_ROC_10'] = df['35_Custom_EMA_ROC']*10
df.head()
df = df.dropna()
df.head(20)
df['PMO_Line'] = df['35_Custom_EMA_ROC_10'].ewm(ignore_na=False,span=20,min_periods=0,adjust=True).mean()
df.head()
df['PMO_Signal_Line'] = df['PMO_Line'].ewm(ignore_na=False,span=10,min_periods=0,adjust=True).mean()
df = df.dropna()
df.head()
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
ax1.plot(df['Adj Close'])
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend(loc='best')
ax2 = plt.subplot(2, 1, 2)
ax2.plot(df['PMO_Line'], label='PMO Line')
ax2.plot(df['PMO_Signal_Line'], label='PMO Signal Line')
ax2.axhline(y=0, color='red')
ax2.grid()
ax2.legend(loc='best')
ax2.set_ylabel('PMO')
ax2.set_xlabel('Date')
```
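The cells above build the PMO step by step on the DataFrame. The same computation can be packaged as a single function — a sketch using pandas' standard `ewm` smoothing; note that DecisionPoint's own PMO uses a custom smoothing constant of 2/period rather than the standard 2/(period + 1), so values will differ slightly from StockCharts:

```python
import numpy as np
import pandas as pd

def pmo(close, ema1_span=35, ema2_span=20, signal_span=10):
    """Price Momentum Oscillator: a double-smoothed 1-period rate of change."""
    roc = close.pct_change() * 100                           # 1-period ROC in %
    smoothed = roc.ewm(span=ema1_span, adjust=True).mean() * 10
    pmo_line = smoothed.ewm(span=ema2_span, adjust=True).mean()
    signal = pmo_line.ewm(span=signal_span, adjust=True).mean()
    return pmo_line, signal

# Synthetic price series just to exercise the function
close = pd.Series(100 + np.cumsum(np.random.randn(300)))
pmo_line, signal = pmo(close)
```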
## Candlestick with PMO
```
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = mdates.date2num(dfc['Date'].astype(dt.date))
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax2 = plt.subplot(2, 1, 2)
ax2.plot(df['PMO_Line'], label='PMO_Line')
ax2.plot(df['PMO_Signal_Line'], label='PMO_Signal_Line')
ax2.axhline(y=0, color='red')
ax2.grid()
ax2.set_ylabel('PMO')
ax2.set_xlabel('Date')
ax2.legend(loc='best')
```
```
import re
import requests
import time
from requests_html import HTML
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
categories = [
"https://www.amazon.com/Best-Sellers-Toys-Games/zgbs/toys-and-games/",
"https://www.amazon.com/Best-Sellers-Electronics/zgbs/electronics/",
"https://www.amazon.com/Best-Sellers/zgbs/fashion/"
]
# categories
first_url = categories[0]
driver.get(first_url)
body_el = driver.find_element_by_css_selector("body")
html_str = body_el.get_attribute("innerHTML")
html_obj = HTML(html=html_str)
page_links = [f"https://www.amazon.com{x}" for x in html_obj.links if x.startswith("/")]
# new_links = [x for x in new_links if "product-reviews/" not in x]
# page_links
def scrape_product_page(url, title_lookup = "#productTitle", price_lookup = "#priceblock_ourprice"):
driver.get(url)
time.sleep(0.5)
body_el = driver.find_element_by_css_selector("body")
html_str = body_el.get_attribute("innerHTML")
html_obj = HTML(html=html_str)
product_title = html_obj.find(title_lookup, first=True).text
product_price = html_obj.find(price_lookup, first=True).text
return product_title, product_price
# https://www.amazon.com/LEGO-Classic-Medium-Creative-Brick/dp/B00NHQFA1I/
# https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/
# <base-url>/<slug>/dp/<product_id>/
# my_regex_pattern = r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/"
# my_url = 'https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/'
# regex = re.compile(my_regex_pattern)
# my_match = regex.match(my_url)
# print(my_match)
# my_match['product_id']
# my_match['slug']
regex_options = [
r"https://www.amazon.com/gp/product/(?P<product_id>[\w-]+)/",
r"https://www.amazon.com/dp/(?P<product_id>[\w-]+)/",
r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/",
]
def extract_product_id_from_url(url):
product_id = None
for regex_str in regex_options:
regex = re.compile(regex_str)
match = regex.match(url)
if match != None:
try:
product_id = match['product_id']
except:
pass
return product_id
# page_links = [x for x in page_links if extract_product_id_from_url(x) != None]
def clean_page_links(page_links=[]):
final_page_links = []
for url in page_links:
product_id = extract_product_id_from_url(url)
if product_id != None:
final_page_links.append({"url": url, "product_id": product_id})
return final_page_links
cleaned_links = clean_page_links(page_links)
len(page_links) # == len(cleaned_links)
len(cleaned_links)
def perform_scrape(cleaned_items=[]):
data_extracted = []
for obj in cleaned_items:
link = obj['url']
product_id = obj['product_id']
title, price = (None, None)
try:
title, price = scrape_product_page(link)
except:
pass
if title != None and price != None:
print(link, title, price)
product_data = {
"url": link,
"product_id": product_id,
"title": title,
"price": price
}
data_extracted.append(product_data)
return data_extracted
extracted_data = perform_scrape(cleaned_items=cleaned_links)
print(extracted_data)
```
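The product-ID extraction is easy to verify offline against the three URL shapes the patterns target (the sample URLs below are illustrative only):

```python
import re

regex_options = [
    r"https://www.amazon.com/gp/product/(?P<product_id>[\w-]+)/",
    r"https://www.amazon.com/dp/(?P<product_id>[\w-]+)/",
    r"https://www.amazon.com/(?P<slug>[\w-]+)/dp/(?P<product_id>[\w-]+)/",
]

def extract_product_id_from_url(url):
    # Try each pattern in turn; return the first captured product_id
    for regex_str in regex_options:
        match = re.compile(regex_str).match(url)
        if match is not None and "product_id" in match.groupdict():
            return match["product_id"]
    return None

print(extract_product_id_from_url(
    "https://www.amazon.com/Crayola-Washable-Watercolors-8-ea/dp/B000HHKAE2/"))
# B000HHKAE2
print(extract_product_id_from_url(
    "https://www.amazon.com/Best-Sellers/zgbs/toys/"))
# None
```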
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
# The encoder below takes as input a 2D array containing one or more
# categorical features; 1D input must be reshaped into that form.
# from sklearn.preprocessing import CategoricalEncoder  # defined manually below
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from sklearn.preprocessing import LabelEncoder
from scipy import sparse
# Implementation details to study in more depth later
class CategoricalEncoder(BaseEstimator, TransformerMixin):
"""Encode categorical features as a numeric array.
The input to this transformer should be a matrix of integers or strings,
denoting the values taken on by categorical (discrete) features.
The features can be encoded using a one-hot aka one-of-K scheme
(``encoding='onehot'``, the default) or converted to ordinal integers
(``encoding='ordinal'``).
This encoding is needed for feeding categorical data to many scikit-learn
estimators, notably linear models and SVMs with the standard kernels.
Read more in the :ref:`User Guide <preprocessing_categorical_features>`.
Parameters
----------
encoding : str, 'onehot', 'onehot-dense' or 'ordinal'
The type of encoding to use (default is 'onehot'):
- 'onehot': encode the features using a one-hot aka one-of-K scheme
(or also called 'dummy' encoding). This creates a binary column for
each category and returns a sparse matrix.
- 'onehot-dense': the same as 'onehot' but returns a dense array
instead of a sparse matrix.
- 'ordinal': encode the features as ordinal integers. This results in
a single column of integers (0 to n_categories - 1) per feature.
categories : 'auto' or a list of lists/arrays of values.
Categories (unique values) per feature:
- 'auto' : Determine categories automatically from the training data.
- list : ``categories[i]`` holds the categories expected in the ith
column. The passed categories are sorted before encoding the data
(used categories can be found in the ``categories_`` attribute).
dtype : number type, default np.float64
Desired dtype of output.
handle_unknown : 'error' (default) or 'ignore'
Whether to raise an error or ignore if a unknown categorical feature is
present during transform (default is to raise). When this is parameter
is set to 'ignore' and an unknown category is encountered during
transform, the resulting one-hot encoded columns for this feature
will be all zeros.
Ignoring unknown categories is not supported for
``encoding='ordinal'``.
Attributes
----------
categories_ : list of arrays
The categories of each feature determined during fitting. When
categories were specified manually, this holds the sorted categories
(in order corresponding with output of `transform`).
Examples
--------
Given a dataset with three features and two samples, we let the encoder
find the maximum value per feature and transform the data to a binary
one-hot encoding.
>>> from sklearn.preprocessing import CategoricalEncoder
>>> enc = CategoricalEncoder(handle_unknown='ignore')
>>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
... # doctest: +ELLIPSIS
CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>,
encoding='onehot', handle_unknown='ignore')
>>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()
array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.],
[ 0., 1., 1., 0., 0., 0., 0., 0., 0.]])
See also
--------
sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of
integer ordinal features. The ``OneHotEncoder assumes`` that input
features take on values in the range ``[0, max(feature)]`` instead of
using the unique values.
sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of
dictionary items (also handles string-valued features).
sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot
encoding of dictionary items or strings.
"""
def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,
handle_unknown='error'):
self.encoding = encoding
self.categories = categories
self.dtype = dtype
self.handle_unknown = handle_unknown
def fit(self, X, y=None):
"""Fit the CategoricalEncoder to X.
Parameters
----------
X : array-like, shape [n_samples, n_feature]
The data to determine the categories of each feature.
Returns
-------
self
"""
if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:
template = ("encoding should be either 'onehot', 'onehot-dense' "
"or 'ordinal', got %s")
raise ValueError(template % self.handle_unknown)
if self.handle_unknown not in ['error', 'ignore']:
template = ("handle_unknown should be either 'error' or "
"'ignore', got %s")
raise ValueError(template % self.handle_unknown)
if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':
raise ValueError("handle_unknown='ignore' is not supported for"
" encoding='ordinal'")
X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)
n_samples, n_features = X.shape
self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]
for i in range(n_features):
le = self._label_encoders_[i]
Xi = X[:, i]
if self.categories == 'auto':
le.fit(Xi)
else:
valid_mask = np.in1d(Xi, self.categories[i])
if not np.all(valid_mask):
if self.handle_unknown == 'error':
diff = np.unique(Xi[~valid_mask])
msg = ("Found unknown categories {0} in column {1}"
" during fit".format(diff, i))
raise ValueError(msg)
le.classes_ = np.array(np.sort(self.categories[i]))
self.categories_ = [le.classes_ for le in self._label_encoders_]
return self
def transform(self, X):
"""Transform X using one-hot encoding.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data to encode.
Returns
-------
X_out : sparse matrix or a 2-d array
Transformed input.
"""
X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)
n_samples, n_features = X.shape
X_int = np.zeros_like(X, dtype=np.int)
X_mask = np.ones_like(X, dtype=np.bool)
for i in range(n_features):
valid_mask = np.in1d(X[:, i], self.categories_[i])
if not np.all(valid_mask):
if self.handle_unknown == 'error':
diff = np.unique(X[~valid_mask, i])
msg = ("Found unknown categories {0} in column {1}"
" during transform".format(diff, i))
raise ValueError(msg)
else:
# Set the problematic rows to an acceptable value and
# continue. The rows are marked in `X_mask` and will be
# removed later.
X_mask[:, i] = valid_mask
X[:, i][~valid_mask] = self.categories_[i][0]
X_int[:, i] = self._label_encoders_[i].transform(X[:, i])
if self.encoding == 'ordinal':
return X_int.astype(self.dtype, copy=False)
mask = X_mask.ravel()
n_values = [cats.shape[0] for cats in self.categories_]
n_values = np.array([0] + n_values)
indices = np.cumsum(n_values)
column_indices = (X_int + indices[:-1]).ravel()[mask]
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)[mask]
data = np.ones(n_samples * n_features)[mask]
out = sparse.csc_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if self.encoding == 'onehot-dense':
return out.toarray()
else:
return out
# Another transformer, used to select a subset of columns
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
class DataFrameFillCat(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X):
return self
def transform(self, X):
print(type(X))
for attributename in self.attribute_names:
# print(X[attributename])
freq_cat = X[attributename].dropna().mode()[0]
# print(freq_cat)
X[attributename] = X[attributename].fillna(freq_cat)
return X.values
# Load the data
train_df = pd.read_csv("./datasets/train.csv")
test_df = pd.read_csv("./datasets/test.csv")
combine = [train_df, test_df]
train_df.head()
train_df.info()
train_df.describe()
train_df.describe(include=np.object)
num_attribute = ['MSSubClass', 'LotArea', 'OverallQual',
'OverallCond', 'YearBuilt', 'YearRemodAdd', 'MasVnrArea', 'BsmtFinSF1',
'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF',
'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath',
'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd',
'Fireplaces', 'GarageYrBlt', 'GarageCars', 'GarageArea', 'WoodDeckSF',
'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea',
'MiscVal', 'MoSold', 'YrSold',]
cat_attribute = ['MSZoning', 'Street', 'LotShape', 'LandContour', 'Utilities',
'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2',
'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st',
'Exterior2nd', 'MasVnrType', 'ExterQual', 'ExterCond', 'Foundation',
'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2',
'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual',
'Functional', 'GarageType', 'GarageFinish', 'GarageQual',
'GarageCond', 'PavedDrive',
'SaleType', 'SaleCondition']
from sklearn.preprocessing import StandardScaler
num_pipeline = Pipeline([
("selector", DataFrameSelector(num_attribute)),
("imputer", Imputer(strategy="median")),
("std_scaler", StandardScaler())
])
cat_pipeline = Pipeline([
("selector", DataFrameSelector(cat_attribute)),
("fillna", DataFrameFillCat(cat_attribute)),
("cat_encoder", CategoricalEncoder(encoding="onehot-dense"))
])
X_train = train_df
X_train_cat_pipeline = num_pipeline.fit_transform(X_train)
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
from sklearn.model_selection import train_test_split
X_train = train_df.drop(["Id", "SalePrice"], axis = 1)
y_train = train_df["SalePrice"]
# X_train.info()
X_train_pipeline = full_pipeline.fit_transform(X_train)
X_train, X_test, y_train, y_test = train_test_split(X_train_pipeline, y_train, test_size=0.1)
X_train.shape, X_test.shape, y_train.shape
# X_test_pipeline = full_pipeline.transform(X_test)
from sklearn.ensemble import RandomForestRegressor
rdf_reg = RandomForestRegressor()
rdf_reg.fit(X_train, y_train)
y_pred = rdf_reg.predict(X_test)
# y_pred = rdf_reg.predict(X_test_pipeline)
from sklearn.metrics import mean_squared_error
scores_mse = mean_squared_error(y_pred, y_test)
scores_mse
from sklearn.ensemble import GradientBoostingRegressor
gbr_reg = GradientBoostingRegressor(n_estimators=1000, max_depth=2)
gbr_reg.fit(X_train, y_train)
y_pred = gbr_reg.predict(X_test)
scores_mse = mean_squared_error(y_pred, y_test)
scores_mse
test_df_data = test_df.drop(["Id"], axis=1)
X_test_pipeline = full_pipeline.transform(test_df_data)
# test_df_data.info()
# test_df_data.info()
y_pred = gbr_reg.predict(X_test_pipeline)
result =pd.DataFrame({
"Id": test_df["Id"],
"SalePrice": y_pred
})
result.to_csv("result.csv", index=False)
```
```
library(keras)
```
**Loading MNIST dataset from the library datasets**
```
mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y
```
**Data Preprocessing**
```
# reshape
x_train <- array_reshape(x_train, c(nrow(x_train), 784))
x_test <- array_reshape(x_test, c(nrow(x_test), 784))
# rescale
x_train <- x_train / 255
x_test <- x_test / 255
```
The y data is an integer vector with values ranging from 0 to 9.
To prepare this data for training we one-hot encode the vectors into binary class matrices using the Keras to_categorical() function:
```
y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)
```
**Building model**
```
model <- keras_model_sequential()
model %>%
layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>%
layer_dropout(rate = 0.4) %>%
layer_dense(units = 128, activation = 'relu') %>%
layer_dropout(rate = 0.3) %>%
layer_dense(units = 10, activation = 'softmax')
# Use the summary() function to print the details of the model:
summary(model)
```
**Compiling the model**
```
model %>% compile(
loss = 'categorical_crossentropy',
optimizer = optimizer_rmsprop(),
metrics = c('accuracy')
)
```
**Training and Evaluation**
```
history <- model %>% fit(
x_train, y_train,
epochs = 30, batch_size = 128,
validation_split = 0.2
)
plot(history)
# Plot the accuracy of the training data
plot(history$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l")
# Plot the accuracy of the validation data
lines(history$metrics$val_acc, col="green")
# Add Legend
legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1))
# Plot the model loss of the training data
plot(history$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l")
# Plot the model loss of the test data
lines(history$metrics$val_loss, col="green")
# Add legend
legend("topright", c("train","test"), col=c("blue", "green"), lty=c(1,1))
```
**Predicting for the test data**
```
model %>% predict_classes(x_test)
# Evaluate on test data and labels
score <- model %>% evaluate(x_test, y_test, batch_size = 128)
# Print the score
print(score)
```
## Hyperparameter tuning
```
# install.packages("tfruns")
library(tfruns)
runs <- tuning_run(file = "hyperparameter_tuning_model.r", flags = list(
dense_units1 = c(8,16),
dropout1 = c(0.2, 0.3, 0.4),
dense_units2 = c(8,16),
dropout2 = c(0.2, 0.3, 0.4)
))
runs
```
## This notebook contains sample code for the COMPAS data experiment in Section 5.2.
Before running the code, please check README.md and install LEMON.
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import feature_extraction
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import stealth_sampling
```
### Functions
```
# split data to bins (s, y) = (1, 1), (1, 0), (0, 1), (0, 0)
def split_to_four(X, S, Y):
Z = np.c_[X, S, Y]
Z_pos_pos = Z[np.logical_and(S, Y), :]
Z_pos_neg = Z[np.logical_and(S, np.logical_not(Y)), :]
Z_neg_pos = Z[np.logical_and(np.logical_not(S), Y), :]
Z_neg_neg = Z[np.logical_and(np.logical_not(S), np.logical_not(Y)), :]
Z = [Z_pos_pos, Z_pos_neg, Z_neg_pos, Z_neg_neg]
return Z
# compute demographic parity
def demographic_parity(W):
p_pos = np.mean(np.concatenate(W[:2]))
p_neg = np.mean(np.concatenate(W[2:]))
return np.abs(p_pos - p_neg)
# compute the sampling size from each bin
def computeK(Z, Nsample, sampled_spos, sampled_ypos):
Kpp = Nsample*sampled_spos*sampled_ypos[0]
Kpn = Nsample*sampled_spos*(1-sampled_ypos[0])
Knp = Nsample*(1-sampled_spos)*sampled_ypos[1]
Knn = Nsample*(1-sampled_spos)*(1-sampled_ypos[1])
K = [Kpp, Kpn, Knp, Knn]
kratio = min([min(1, z.shape[0]/k) for (z, k) in zip(Z, K)])
Kpp = int(np.floor(Nsample*kratio*sampled_spos*sampled_ypos[0]))
Kpn = int(np.floor(Nsample*kratio*sampled_spos*(1-sampled_ypos[0])))
Knp = int(np.floor(Nsample*kratio*(1-sampled_spos)*sampled_ypos[1]))
Knn = int(np.floor(Nsample*kratio*(1-sampled_spos)*(1-sampled_ypos[1])))
K = [max([k, 1]) for k in [Kpp, Kpn, Knp, Knn]]
return K
# case-control sampling
def case_control_sampling(X, K):
q = [(K[i]/sum(K)) * np.ones(x.shape[0]) / x.shape[0] for i, x in enumerate(X)]
return q
# compute wasserstein distance
def compute_wasserstein(X1, S1, X2, S2, timeout=10.0):
dx = stealth_sampling.compute_wasserstein(X1, X2, path='./', prefix='compas', timeout=timeout)
dx_s1 = stealth_sampling.compute_wasserstein(X1[S1>0.5, :], X2[S2>0.5, :], path='./', prefix='compas', timeout=timeout)
dx_s0 = stealth_sampling.compute_wasserstein(X1[S1<0.5, :], X2[S2<0.5, :], path='./', prefix='compas', timeout=timeout)
return dx, dx_s1, dx_s0
```
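`split_to_four` and `demographic_parity` can be exercised on a tiny synthetic example: with decisions Y split by sensitive group S, the parity gap is |Pr[Y=1 | S=1] − Pr[Y=1 | S=0]|. A quick check, with the two functions redefined here so the snippet is self-contained:

```python
import numpy as np

def split_to_four(X, S, Y):
    # Bin rows by (s, y) = (1,1), (1,0), (0,1), (0,0)
    Z = np.c_[X, S, Y]
    return [Z[np.logical_and(S, Y), :],
            Z[np.logical_and(S, np.logical_not(Y)), :],
            Z[np.logical_and(np.logical_not(S), Y), :],
            Z[np.logical_and(np.logical_not(S), np.logical_not(Y)), :]]

def demographic_parity(W):
    # W holds the decision column of each bin, in the order above
    p_pos = np.mean(np.concatenate(W[:2]))   # Pr[Y=1 | S=1]
    p_neg = np.mean(np.concatenate(W[2:]))   # Pr[Y=1 | S=0]
    return np.abs(p_pos - p_neg)

X = np.arange(8).reshape(8, 1)            # dummy features
S = np.array([1, 1, 1, 1, 0, 0, 0, 0])    # sensitive attribute
Y = np.array([1, 1, 1, 0, 1, 0, 0, 0])    # decisions
Z = split_to_four(X, S, Y)
gap = demographic_parity([z[:, -1] for z in Z])
print(gap)  # |0.75 - 0.25| = 0.5
```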
### Fetch data and preprocess
We modified the preprocessing from <https://github.com/mbilalzafar/fair-classification/blob/master/disparate_mistreatment/propublica_compas_data_demo/load_compas_data.py>.
```
url = 'https://raw.githubusercontent.com/propublica/compas-analysis/master/compas-scores-two-years.csv'
feature_list = ['age_cat', 'race', 'sex', 'priors_count', 'c_charge_degree', 'two_year_recid']
sensitive = 'race'
label = 'score_text'
# fetch data
df = pd.read_table(url, sep=',')
df = df.dropna(subset=['days_b_screening_arrest'])
# convert to np array
data = df.to_dict('list')
for k in data.keys():
data[k] = np.array(data[k])
# filtering records
idx = np.logical_and(data['days_b_screening_arrest']<=30, data['days_b_screening_arrest']>=-30)
idx = np.logical_and(idx, data['is_recid'] != -1)
idx = np.logical_and(idx, data['c_charge_degree'] != 'O')
idx = np.logical_and(idx, data['score_text'] != 'NA')
idx = np.logical_and(idx, np.logical_or(data['race'] == 'African-American', data['race'] == 'Caucasian'))
for k in data.keys():
data[k] = data[k][idx]
# label Y
Y = 1 - np.logical_not(data[label]=='Low').astype(np.int32)
# feature X, sensitive feature S
X = []
for feature in feature_list:
vals = data[feature]
if feature == 'priors_count':
vals = [float(v) for v in vals]
vals = preprocessing.scale(vals)
vals = np.reshape(vals, (Y.size, -1))
else:
lb = preprocessing.LabelBinarizer()
lb.fit(vals)
vals = lb.transform(vals)
if feature == sensitive:
S = vals[:, 0]
X.append(vals)
X = np.concatenate(X, axis=1)
```
### Experiment
```
# parameter settings
seed = 0 # random seed
# parameter settings for sampling
Nsample = 2000 # number of data to sample
sampled_ypos = [0.5, 0.5] # the ratio of positive decisions '\alpha' in sampling
# parameter settings for complainer
Nref = 1278 # number of referential data
def sample_and_evaluate(X, S, Y, Nref=1278, Nsample=2000, sampled_ypos=[0.5, 0.5], seed=0):
    # load data
    Xbase, Xref, Sbase, Sref, Ybase, Yref = train_test_split(X, S, Y, test_size=Nref, random_state=seed)
    N = Xbase.shape[0]
    scaler = StandardScaler()
    scaler.fit(Xbase)
    Xbase = scaler.transform(Xbase)
    Xref = scaler.transform(Xref)
    # Wasserstein distance between base and ref
    np.random.seed(seed)
    idx = np.random.permutation(Xbase.shape[0])[:Nsample]
    dx, dx_s1, dx_s0 = compute_wasserstein(Xbase[idx, :], Sbase[idx], Xref, Sref, timeout=10.0)
    # demographic parity
    Z = split_to_four(Xbase, Sbase, Ybase)
    parity = demographic_parity([z[:, -1] for z in Z])
    # sampling
    results = [[parity, dx, dx_s1, dx_s0]]
    sampled_spos = np.mean(Sbase)
    K = computeK(Z, Nsample, sampled_spos, sampled_ypos)
    for i, sampling in enumerate(['case-control', 'stealth']):
        np.random.seed(seed + i)
        if sampling == 'case-control':
            p = case_control_sampling([z[:, :-1] for z in Z], K)
        elif sampling == 'stealth':
            p = stealth_sampling.stealth_sampling([z[:, :-1] for z in Z], K, path='./', prefix='compas', timeout=30.0)
        idx = np.random.choice(N, sum(K), p=np.concatenate(p), replace=False)
        Xs = np.concatenate([z[:, :-2] for z in Z], axis=0)[idx, :]
        Ss = np.concatenate([z[:, -2] for z in Z], axis=0)[idx]
        Ts = np.concatenate([z[:, -1] for z in Z], axis=0)[idx]
        # demographic parity of the sampled data
        Zs = split_to_four(Xs, Ss, Ts)
        parity = demographic_parity([z[:, -1] for z in Zs])
        # Wasserstein distance
        dx, dx_s1, dx_s0 = compute_wasserstein(Xs, Ss, Xref, Sref, timeout=10.0)
        results.append([parity, dx, dx_s1, dx_s0])
    return results
```
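`split_to_four` and `demographic_parity` come from the accompanying repository and are not reproduced here. A plausible, purely illustrative sketch of the parity computation — the absolute gap in positive-decision rates between the two groups — is:

```python
import numpy as np

def demographic_parity_sketch(y_by_group):
    # Hypothetical reconstruction: y_by_group holds binary decisions for the
    # four (s, y) strata; parity is the absolute difference in positive rates
    # between the s=1 and s=0 groups.
    y_s1 = np.concatenate([y_by_group[0], y_by_group[1]])  # group s=1
    y_s0 = np.concatenate([y_by_group[2], y_by_group[3]])  # group s=0
    return abs(y_s1.mean() - y_s0.mean())

# toy example: group 1 has a 75% positive rate, group 0 a 25% positive rate
parity = demographic_parity_sketch([np.array([1, 1]), np.array([1, 0]),
                                    np.array([0, 0]), np.array([1, 0])])
print(parity)  # 0.5
```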
#### Experiment (One Run)
```
result = sample_and_evaluate(X, S, Y, Nref=Nref, Nsample=Nsample, sampled_ypos=sampled_ypos, seed=seed)
df = pd.DataFrame(result)
df.index = ['Baseline', 'Case-control', 'Stealth']
df.columns = ['DP', 'WD on Pr[x]', 'WD on Pr[x|s=1]', 'WD on Pr[x|s=0]']
print('Result (alpha = %.2f, seed=%d)' % (sampled_ypos[0], seed))
df
```
#### Experiment (10 Runs)
```
num_itr = 10
result_all = []
for i in range(num_itr):
    result_i = sample_and_evaluate(X, S, Y, Nref=Nref, Nsample=Nsample, sampled_ypos=sampled_ypos, seed=i)
    result_all.append(result_i)
result_all = np.array(result_all)
df = pd.DataFrame(np.mean(result_all, axis=0))
df.index = ['Baseline', 'Case-control', 'Stealth']
df.columns = ['DP', 'WD on Pr[x]', 'WD on Pr[x|s=1]', 'WD on Pr[x|s=0]']
print('Average Result of %d runs (alpha = %.2f)' % (num_itr, sampled_ypos[0]))
df
```
```
import pandas as pd
import numpy as np
import re
from scipy.integrate import odeint
# Read the data in, then select the relevant columns, and adjust the week so it is easier to realize
# as a time series.
virii = ["A (H1)", "A (H3)", "A (2009 H1N1)", "A (Subtyping not Performed)", "B"]
virus = "B"
file = "data/2007-2008_Region-5_WHO-NREVSS.csv"
fluData = pd.read_csv(file)[["YEAR", "WEEK", "TOTAL SPECIMENS"] + virii]
firstWeek = fluData["WEEK"][0]
fluData["T"] = fluData["WEEK"] + 52 * (fluData["WEEK"] < firstWeek)
fluData = fluData.drop(["YEAR", "WEEK"], axis=1)
match = re.match(r"^data/(\d+-\d+)_Region-(\d+)_.*", file)
title = "Flu Season " + match.groups()[0] + " for HHS Region " + match.groups()[1]
region = "HHS " + match.groups()[1]
match = re.match(r"^(\d+)-\d+.*", match.groups()[0])
popYear = match.groups()[0]
import matplotlib.pyplot as plt
%matplotlib inline
#plt.xkcd()
plt.style.use('ggplot')
tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120),
(44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150),
(148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148),
(227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199),
(188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)]
# Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts.
for i in range(len(tableau20)):
    r, g, b = tableau20[i]
    tableau20[i] = (r / 255., g / 255., b / 255.)
plt.figure(figsize=(12,6))
for idx in [0, 1, 2, 3]:
    plt.plot(fluData['T'], fluData[virii[idx]], ls="--", lw=2.5, color=tableau20[idx*2], alpha=1)
    plt.scatter(fluData['T'], fluData[virii[idx]], color=tableau20[idx*2])
    y_pos = 200 + idx*50
    plt.text(40, y_pos, "Virus Strain:" + virii[idx], fontsize=8, color=tableau20[idx*2])
plt.title(title, fontsize=12)
plt.xlabel("Week of Flu Season", fontsize=10)
plt.ylabel("Infected Individuals", fontsize=10)
# Initial values of our states
popData = pd.read_csv('data/population_data.csv', index_col=0)
# N - total population of the region
# I0 - initial infected -- we assume 1.
# R0 - initial recovered -- we assume none.
# S0 - initial susceptible -- S0 = N - I0 - R0
N = 52000000  # or: int(popData[popData['Year'] == int(popYear)]['HHS 5'])
I0 = 1
R0 = 0
S0 = N - R0 - I0
print("S0, ", S0)
gamma = 1/3
rho = 1.24
beta = rho*gamma
def deriv(y, t, N, beta, gamma):
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return dSdt, dIdt, dRdt
y0 = S0, I0, R0
week_min = 40
week_max = fluData['T'].max()
t = list(range(week_min*7, week_max*7))
w = [x/7 for x in t]
ret = odeint(deriv, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
incidence_predicted = -np.diff(S[0:len(S)-1:7])
incidence_observed = fluData['B']
fraction_confirmed = incidence_observed.sum()/incidence_predicted.sum()
# Correct for the week of missed incidence
plotT = fluData['T'] - 7
plt.figure(figsize=(6,3))
plt.plot(plotT[2:], incidence_predicted*fraction_confirmed, color=tableau20[2])
plt.text(40, 100, "CDC Data for Influenza B", fontsize=12, color=tableau20[0])
plt.text(40, 150, "SIR Model Result", fontsize=12, color=tableau20[2])
plt.title(title, fontsize=12)
plt.xlabel("Week of Flu Season", fontsize=10)
plt.ylabel("Infected Individuals", fontsize=10)
```
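A quick sanity check on the SIR integration above is conservation: the derivatives sum to zero, so S + I + R should stay at N for every time step. A self-contained version of the check:

```python
import numpy as np
from scipy.integrate import odeint

def deriv(y, t, N, beta, gamma):
    # same SIR right-hand side as above
    S, I, R = y
    dSdt = -beta * S * I / N
    dIdt = beta * S * I / N - gamma * I
    dRdt = gamma * I
    return dSdt, dIdt, dRdt

N, I0, R0 = 52_000_000, 1, 0
gamma = 1 / 3
beta = 1.24 * gamma
t = np.linspace(0, 200, 201)
S, I, R = odeint(deriv, (N - I0 - R0, I0, R0), t, args=(N, beta, gamma)).T
# total population is conserved at every step
print(np.allclose(S + I + R, N))  # True
```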
# Scene Classification-Test
## 1. Preprocess-KerasFolderClasses
- Import pkg
- Extract zip file
- Preview "scene_classes.csv"
- Preview "scene_{0}_annotations_20170922.json"
- Test the image and pickle function
- Split data into several pickle files
This part requires starting Jupyter Notebook with "jupyter notebook --NotebookApp.iopub_data_rate_limit=1000000000" (https://github.com/jupyter/notebook/issues/2287)
Reference:
- https://challenger.ai/competitions
- https://github.com/jupyter/notebook/issues/2287
### Import pkg
```
import numpy as np
import pandas as pd
# import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
# import zipfile
import os
import zipfile
import math
from time import time
from IPython.display import display
import pdb
import json
from PIL import Image
import glob
import pickle
```
### Extract zip file
```
input_path = 'input'
datasetName = 'test_a'
date = '20170922'
datasetFolder = input_path + '\\data_{0}'.format(datasetName)
zip_path = input_path + '\\ai_challenger_scene_{0}_{1}.zip'.format(datasetName, date)
extract_path = input_path + '\\ai_challenger_scene_{0}_{1}'.format(datasetName, date)
image_path = extract_path + '\\scene_{0}_images_{1}'.format(datasetName, date)
scene_classes_path = extract_path + '\\scene_classes.csv'
scene_annotations_path = extract_path + '\\scene_{0}_annotations_{1}.json'.format(datasetName, date)
print(input_path)
print(datasetFolder)
print(zip_path)
print(extract_path)
print(image_path)
print(scene_classes_path)
print(scene_annotations_path)
if not os.path.isdir(extract_path):
    with zipfile.ZipFile(zip_path) as file:
        for name in file.namelist():
            file.extract(name, input_path)
```
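The paths above hard-code the Windows `\\` separator; `os.path.join` builds the same paths portably on any OS. A sketch using the same directory layout:

```python
import os

input_path = 'input'
datasetName = 'test_a'
date = '20170922'

extract_path = os.path.join(input_path,
                            'ai_challenger_scene_{0}_{1}'.format(datasetName, date))
image_path = os.path.join(extract_path,
                          'scene_{0}_images_{1}'.format(datasetName, date))
print(image_path)
```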
### Preview "scene_classes.csv"
```
scene_classes = pd.read_csv(scene_classes_path, header=None)
display(scene_classes.head())
def get_scene_name(label_number, scene_classes_path):
    scene_classes = pd.read_csv(scene_classes_path, header=None)
    return scene_classes.loc[label_number, 2]
print(get_scene_name(0, scene_classes_path))
```
### Copy images to ./input/data_test_a/test
```
from shutil import copy2
cwd = os.getcwd()
test_folder = os.path.join(cwd, datasetFolder)
test_sub_folder = os.path.join(test_folder, 'test')
if not os.path.isdir(test_folder):
    os.mkdir(test_folder)
    os.mkdir(test_sub_folder)
print(test_folder)
print(test_sub_folder)
trainDir = test_sub_folder
for image_id in os.listdir(os.path.join(cwd, image_path)):
    fileName = image_path + '/' + image_id
    # print(fileName)
    # print(trainDir)
    copy2(fileName, trainDir)
print('Done!')
```
```
from collections import OrderedDict
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm  # the API used below (testval, sample_ppc, theano shared) is PyMC3
import scipy as sp
from theano import shared
%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
```
#### Code 11.1
```
trolley_df = pd.read_csv('Data/Trolley.csv', sep=';')
trolley_df.head()
```
#### Code 11.2
```
ax = (trolley_df.response
.value_counts()
.sort_index()
.plot(kind='bar'))
ax.set_xlabel("response", fontsize=14);
ax.set_ylabel("Frequency", fontsize=14);
```
#### Code 11.3
```
ax = (trolley_df.response
.value_counts()
.sort_index()
.cumsum()
.div(trolley_df.shape[0])
.plot(marker='o'))
ax.set_xlim(0.9, 7.1);
ax.set_xlabel("response", fontsize=14)
ax.set_ylabel("cumulative proportion", fontsize=14);
```
#### Code 11.4
```
resp_lco = (trolley_df.response
.value_counts()
.sort_index()
.cumsum()
.iloc[:-1]
.div(trolley_df.shape[0])
.apply(lambda p: np.log(p / (1. - p))))
ax = resp_lco.plot(marker='o')
ax.set_xlim(0.9, 7);
ax.set_xlabel("response", fontsize=14)
ax.set_ylabel("log-cumulative-odds", fontsize=14);
```
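The log-cumulative-odds transform above is the logit; `scipy.special.expit` inverts it, so mapping cumulative proportions to log-odds and back is lossless. For example (illustrative values):

```python
import numpy as np
from scipy.special import expit, logit

p_cum = np.array([0.13, 0.42, 0.51, 0.61, 0.74, 0.87])  # cumulative proportions
log_odds = logit(p_cum)      # same as log(p / (1 - p))
recovered = expit(log_odds)  # inverse transform
print(np.allclose(recovered, p_cum))  # True
```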
#### Code 11.5
```
with pm.Model() as m11_1:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6, testval=np.arange(6) - 2.5)
    resp_obs = pm.OrderedLogistic(
        'resp_obs', 0., a,
        observed=trolley_df.response.values - 1
    )
with m11_1:
    map_11_1 = pm.find_MAP()
```
#### Code 11.6
```
map_11_1['a']
```
#### Code 11.7
```
sp.special.expit(map_11_1['a'])
```
#### Code 11.8
```
with m11_1:
    trace_11_1 = pm.sample(1000, tune=1000)
az.summary(trace_11_1, var_names=['a'], credible_interval=.89, round_to=2)
```
#### Code 11.9
```
def ordered_logistic_proba(a):
    pa = sp.special.expit(a)
    p_cum = np.concatenate(([0.], pa, [1.]))
    return p_cum[1:] - p_cum[:-1]
ordered_logistic_proba(trace_11_1['a'].mean(axis=0))
```
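Because `ordered_logistic_proba` differences a cumulative vector padded with 0 and 1, the returned category probabilities always sum to one. A self-contained check with illustrative cutpoints:

```python
import numpy as np
from scipy.special import expit

def ordered_logistic_proba(a):
    # same helper as above: cutpoints -> category probabilities
    pa = expit(a)
    p_cum = np.concatenate(([0.], pa, [1.]))
    return p_cum[1:] - p_cum[:-1]

probs = ordered_logistic_proba(np.array([-1.9, -1.2, -0.7, 0.2, 0.9, 1.8]))
print(probs.sum())  # 1.0 (up to floating point)
```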
#### Code 11.10
```
(ordered_logistic_proba(trace_11_1['a'].mean(axis=0)) \
* (1 + np.arange(7))).sum()
```
#### Code 11.11
```
ordered_logistic_proba(trace_11_1['a'].mean(axis=0) - 0.5)
```
#### Code 11.12
```
(ordered_logistic_proba(trace_11_1['a'].mean(axis=0) - 0.5) \
* (1 + np.arange(7))).sum()
```
#### Code 11.13
```
action = shared(trolley_df.action.values)
intention = shared(trolley_df.intention.values)
contact = shared(trolley_df.contact.values)
with pm.Model() as m11_2:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6,
        testval=trace_11_1['a'].mean(axis=0)
    )
    bA = pm.Normal('bA', 0., 10.)
    bI = pm.Normal('bI', 0., 10.)
    bC = pm.Normal('bC', 0., 10.)
    phi = bA * action + bI * intention + bC * contact
    resp_obs = pm.OrderedLogistic(
        'resp_obs', phi, a,
        observed=trolley_df.response.values - 1
    )
with m11_2:
    map_11_2 = pm.find_MAP()
```
#### Code 11.14
```
with pm.Model() as m11_3:
    a = pm.Normal(
        'a', 0., 10.,
        transform=pm.distributions.transforms.ordered,
        shape=6,
        testval=trace_11_1['a'].mean(axis=0)
    )
    bA = pm.Normal('bA', 0., 10.)
    bI = pm.Normal('bI', 0., 10.)
    bC = pm.Normal('bC', 0., 10.)
    bAI = pm.Normal('bAI', 0., 10.)
    bCI = pm.Normal('bCI', 0., 10.)
    phi = bA * action + bI * intention + bC * contact \
        + bAI * action * intention \
        + bCI * contact * intention
    resp_obs = pm.OrderedLogistic(
        'resp_obs', phi, a,
        observed=trolley_df.response - 1
    )
with m11_3:
    map_11_3 = pm.find_MAP()
```
#### Code 11.15
```
def get_coefs(map_est):
    coefs = OrderedDict()
    for i, ai in enumerate(map_est['a']):
        coefs[f'a_{i}'] = ai
    coefs['bA'] = map_est.get('bA', np.nan)
    coefs['bI'] = map_est.get('bI', np.nan)
    coefs['bC'] = map_est.get('bC', np.nan)
    coefs['bAI'] = map_est.get('bAI', np.nan)
    coefs['bCI'] = map_est.get('bCI', np.nan)
    return coefs
(pd.DataFrame.from_dict(
OrderedDict([
('m11_1', get_coefs(map_11_1)),
('m11_2', get_coefs(map_11_2)),
('m11_3', get_coefs(map_11_3))
]))
.astype(np.float64)
.round(2))
```
#### Code 11.16
```
with m11_2:
    trace_11_2 = pm.sample(1000, tune=1000)
with m11_3:
    trace_11_3 = pm.sample(1000, tune=1000)
comp_df = pm.compare({m11_1: trace_11_1,
                      m11_2: trace_11_2,
                      m11_3: trace_11_3})
comp_df.loc[:,'model'] = pd.Series(['m11.1', 'm11.2', 'm11.3'])
comp_df = comp_df.set_index('model')
comp_df
```
#### Code 11.17-19
```
pp_df = pd.DataFrame(np.array([[0, 0, 0],
[0, 0, 1],
[1, 0, 0],
[1, 0, 1],
[0, 1, 0],
[0, 1, 1]]),
columns=['action', 'contact', 'intention'])
pp_df
action.set_value(pp_df.action.values)
contact.set_value(pp_df.contact.values)
intention.set_value(pp_df.intention.values)
with m11_3:
    pp_trace_11_3 = pm.sample_ppc(trace_11_3, samples=1500)
PP_COLS = [f'pp_{i}' for i, _ in enumerate(pp_trace_11_3['resp_obs'])]
pp_df = pd.concat((pp_df,
pd.DataFrame(pp_trace_11_3['resp_obs'].T, columns=PP_COLS)),
axis=1)
pp_cum_df = (pd.melt(
pp_df,
id_vars=['action', 'contact', 'intention'],
value_vars=PP_COLS, value_name='resp'
)
.groupby(['action', 'contact', 'intention', 'resp'])
.size()
.div(1500)
.rename('proba')
.reset_index()
.pivot_table(
index=['action', 'contact', 'intention'],
values='proba',
columns='resp'
)
.cumsum(axis=1)
.iloc[:, :-1])
pp_cum_df
for (plot_action, plot_contact), plot_df in pp_cum_df.groupby(level=['action', 'contact']):
    fig, ax = plt.subplots(figsize=(8, 6))
    ax.plot([0, 1], plot_df, c='C0');
    ax.plot([0, 1], [0, 0], '--', c='C0');
    ax.plot([0, 1], [1, 1], '--', c='C0');
    ax.set_xlim(0, 1);
    ax.set_xlabel("intention");
    ax.set_ylim(-0.05, 1.05);
    ax.set_ylabel("probability");
    ax.set_title(
        "action = {action}, contact = {contact}".format(
            action=plot_action, contact=plot_contact
        )
    );
```
#### Code 11.20
```
# define parameters
PROB_DRINK = 0.2 # 20% of days
RATE_WORK = 1. # average 1 manuscript per day
# sample one year of production
N = 365
drink = np.random.binomial(1, PROB_DRINK, size=N)
y = (1 - drink) * np.random.poisson(RATE_WORK, size=N)
```
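Under this zero-inflated process the expected fraction of zero days is P(0) = p + (1 − p)e^(−λ): a zero comes either from a drinking day or from a working day that happens to produce nothing. With p = 0.2 and λ = 1 that is about 0.49, which the simulation should roughly reproduce:

```python
import numpy as np

PROB_DRINK = 0.2
RATE_WORK = 1.
N = 365

rng = np.random.default_rng(0)
drink = rng.binomial(1, PROB_DRINK, size=N)
y = (1 - drink) * rng.poisson(RATE_WORK, size=N)

# analytic zero probability vs. the simulated zero fraction
expected_zero = PROB_DRINK + (1 - PROB_DRINK) * np.exp(-RATE_WORK)
observed_zero = (y == 0).mean()
print(round(expected_zero, 3))  # 0.494
```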
#### Code 11.21
```
drink_zeros = drink.sum()
work_zeros = (y == 0).sum() - drink_zeros
bins = np.arange(y.max() + 1) - 0.5
plt.hist(y, bins=bins);
plt.bar(0., drink_zeros, width=1., bottom=work_zeros, color='C1', alpha=.5);
plt.xticks(bins + 0.5);
plt.xlabel("manuscripts completed");
plt.ylabel("Frequency");
```
#### Code 11.22
```
with pm.Model() as m11_4:
    ap = pm.Normal('ap', 0., 1.)
    p = pm.math.sigmoid(ap)
    al = pm.Normal('al', 0., 10.)
    lambda_ = pm.math.exp(al)
    y_obs = pm.ZeroInflatedPoisson('y_obs', 1. - p, lambda_, observed=y)
with m11_4:
    map_11_4 = pm.find_MAP()
map_11_4
```
#### Code 11.23
```
sp.special.expit(map_11_4['ap']) # probability drink
np.exp(map_11_4['al']) # rate finish manuscripts, when not drinking
```
#### Code 11.24
```
def dzip(x, p, lambda_, log=True):
    # mixture: structural zero with probability p, else Poisson(lambda_)
    like = p * (x == 0) + (1 - p) * sp.stats.poisson.pmf(x, lambda_)
    return np.log(like) if log else like
```
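A quick way to validate a zero-inflated pmf like `dzip` is to check that it sums to one over the support. A self-contained version, with the mixture written out explicitly:

```python
import numpy as np
from scipy import stats

def zip_pmf(x, p, lambda_):
    # with probability p a structural zero, else Poisson(lambda_)
    return p * (x == 0) + (1 - p) * stats.poisson.pmf(x, lambda_)

x = np.arange(0, 200)  # effectively the whole support for lambda_ = 1
total = zip_pmf(x, 0.2, 1.).sum()
print(round(total, 6))  # 1.0
```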
#### Code 11.25
```
PBAR = 0.5
THETA = 5.
a = PBAR * THETA
b = (1 - PBAR) * THETA
p = np.linspace(0, 1, 100)
plt.plot(p, sp.stats.beta.pdf(p, a, b));
plt.xlim(0, 1);
plt.xlabel("probability");
plt.ylabel("Density");
```
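This pbar/θ parameterization maps to the standard Beta(a, b) via a = pbar·θ and b = (1 − pbar)·θ, so pbar is the mean and θ controls how concentrated the distribution is around it. A quick check:

```python
import scipy.stats as stats

PBAR, THETA = 0.5, 5.
a = PBAR * THETA
b = (1 - PBAR) * THETA
mean = stats.beta.mean(a, b)  # a / (a + b) == PBAR
print(mean)  # 0.5
```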
#### Code 11.26
```
admit_df = pd.read_csv('Data/UCBadmit.csv', sep=';')
admit_df.head()
with pm.Model() as m11_5:
    a = pm.Normal('a', 0., 2.)
    pbar = pm.Deterministic('pbar', pm.math.sigmoid(a))
    theta = pm.Exponential('theta', 1.)
    admit_obs = pm.BetaBinomial(
        'admit_obs',
        pbar * theta, (1. - pbar) * theta,
        admit_df.applications.values,
        observed=admit_df.admit.values
    )
with m11_5:
    trace_11_5 = pm.sample(1000, tune=1000)
```
#### Code 11.27
```
pm.summary(trace_11_5, alpha=.11).round(2)
```
#### Code 11.28
```
np.percentile(trace_11_5['pbar'], [2.5, 50., 97.5])
```
#### Code 11.29
```
pbar_hat = trace_11_5['pbar'].mean()
theta_hat = trace_11_5['theta'].mean()
p_plot = np.linspace(0, 1, 100)
plt.plot(
p_plot,
sp.stats.beta.pdf(p_plot, pbar_hat * theta_hat, (1. - pbar_hat) * theta_hat)
);
plt.plot(
p_plot,
sp.stats.beta.pdf(
p_plot[:, np.newaxis],
trace_11_5['pbar'][:100] * trace_11_5['theta'][:100],
(1. - trace_11_5['pbar'][:100]) * trace_11_5['theta'][:100]
),
c='C0', alpha=0.1
);
plt.xlim(0., 1.);
plt.xlabel("probability admit");
plt.ylim(0., 3.);
plt.ylabel("Density");
```
#### Code 11.30
```
with m11_5:
    pp_trace_11_5 = pm.sample_ppc(trace_11_5)
x_case = np.arange(admit_df.shape[0])
plt.scatter(
x_case,
pp_trace_11_5['admit_obs'].mean(axis=0) \
/ admit_df.applications.values
);
plt.scatter(x_case, admit_df.admit / admit_df.applications);
high = np.percentile(pp_trace_11_5['admit_obs'], 95, axis=0) \
/ admit_df.applications.values
plt.scatter(x_case, high, marker='x', c='k');
low = np.percentile(pp_trace_11_5['admit_obs'], 5, axis=0) \
/ admit_df.applications.values
plt.scatter(x_case, low, marker='x', c='k');
```
#### Code 11.31
```
mu = 3.
theta = 1.
x = np.linspace(0, 10, 100)
plt.plot(x, sp.stats.gamma.pdf(x, mu / theta, scale=theta));
import platform
import sys
import IPython
import matplotlib
import scipy
print("This notebook was createad on a computer {} running {} and using:\nPython {}\nIPython {}\nPyMC {}\nNumPy {}\nPandas {}\nSciPy {}\nMatplotlib {}\n".format(platform.machine(), ' '.join(platform.linux_distribution()[:2]), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, pd.__version__, scipy.__version__, matplotlib.__version__))
```
<a href="https://colab.research.google.com/github/iesous-kurios/DS-Unit-2-Applied-Modeling/blob/master/module4/BuildWeekProject.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
    !pip install category_encoders==2.*
    !pip install eli5
# If you're working locally:
else:
    DATA_PATH = '../data/'
# all imports needed for this sheet
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeRegressor
import xgboost as xgb
%matplotlib inline
import seaborn as sns
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
df = pd.read_excel('/content/pipeline_pickle.xlsx')
```
I chose "exit to permanent housing" as my target due to my belief that accurately predicting this feature would have the largest impact on actual people experiencing homelessness in my county. Developing and fine-tuning an accurate model with our data could also lead to major improvements in our county's efforts at addressing homelessness among singles as well (our shelter only serves families).
```
exit_reasons = ['Rental by client with RRH or equivalent subsidy',
'Rental by client, no ongoing housing subsidy',
'Staying or living with family, permanent tenure',
'Rental by client, other ongoing housing subsidy',
'Permanent housing (other than RRH) for formerly homeless persons',
'Staying or living with friends, permanent tenure',
'Owned by client, with ongoing housing subsidy',
'Rental by client, VASH housing Subsidy'
]
# pull all exit destinations from main data file and sum up the totals of each destination,
# placing them into new df for calculations
exits = df['3.12 Exit Destination'].value_counts()
# create target column (multiple types of exits to perm)
df['perm_leaver'] = df['3.12 Exit Destination'].isin(exit_reasons)
# replace spaces with underscore
df.columns = df.columns.str.replace(' ', '_')
df = df.rename(columns = {'Length_of_Time_Homeless_(3.917_Approximate_Start)':'length_homeless', '4.2_Income_Total_at_Entry':'entry_income'
})
```
If a person were to guess "did not exit to permanent" housing every single time, they would be correct approximately 63 percent of the time. I am hoping that through this project, we will be able to provide more focused case management services to guests that displayed features which my model predicted as contributing negatively toward their chances of having an exit to permanent housing. It is my hope that a year from now, the base case will be flipped, and you would need to guess "did exit to permanent housing" to be correct approximately 63 percent of the time.
```
# base case
df['perm_leaver'].value_counts(normalize=True)
# see size of df prior to dropping empties
df.shape
# drop rows with no exit destination (current guests at time of report)
df = df.dropna(subset=['3.12_Exit_Destination'])
# shape of df after dropping current guests
df.shape
df.to_csv('/content/n_alltime.csv')
# verify no NaN in exit destination feature
df['3.12_Exit_Destination'].isna().value_counts()
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
train = df
# Split train into train & val
#train, val = train_test_split(train, train_size=0.80, test_size=0.20,
# stratify=train['perm_leaver'], random_state=42)
# Do train/test split
# Use data from Jan -March 2019 to train
# Use data from April 2019 to test
df['enroll_date'] = pd.to_datetime(df['3.10_Enroll_Date'], infer_datetime_format=True)
cutoff = pd.to_datetime('2019-01-01')
train = df[df.enroll_date < cutoff]
test = df[df.enroll_date >= cutoff]
def wrangle(X):
    """Wrangle train, validate, and test sets in the same way"""
    # Prevent SettingWithCopyWarning
    X = X.copy()
    # drop any private information
    X = X.drop(columns=['3.1_FirstName', '3.1_LastName', '3.2_SocSecNo',
                        '3.3_Birthdate', 'V5_Prior_Address'])
    # drop unusable columns
    X = X.drop(columns=['2.1_Organization_Name', '2.4_ProjectType',
                        'WorkSource_Referral_Most_Recent', 'YAHP_Referral_Most_Recent',
                        'SOAR_Enrollment_Determination_(Most_Recent)',
                        'R7_General_Health_Status', 'R8_Dental_Health_Status',
                        'R9_Mental_Health_Status', 'RRH_Date_Of_Move-In',
                        'RRH_In_Permanent_Housing', 'R10_Pregnancy_Due_Date',
                        'R10_Pregnancy_Status', 'R1_Referral_Source',
                        'R2_Date_Status_Determined', 'R2_Enroll_Status',
                        'R2_Reason_Why_No_Services_Funded', 'R2_Runaway_Youth',
                        'R3_Sexual_Orientation', '2.5_Utilization_Tracking_Method_(Invalid)',
                        '2.2_Project_Name', '2.6_Federal_Grant_Programs', '3.16_Client_Location',
                        '3.917_Stayed_Less_Than_90_Days',
                        '3.917b_Stayed_in_Streets,_ES_or_SH_Night_Before',
                        '3.917b_Stayed_Less_Than_7_Nights', '4.24_In_School_(Retired_Data_Element)',
                        'CaseChildren', 'ClientID', 'HEN-HP_Referral_Most_Recent',
                        'HEN-RRH_Referral_Most_Recent', 'Emergency_Shelter_|_Most_Recent_Enrollment',
                        'ProgramType', 'Days_Enrolled_Until_RRH_Date_of_Move-in',
                        'CurrentDate', 'Current_Age', 'Count_of_Bed_Nights_-_Entire_Episode',
                        'Bed_Nights_During_Report_Period'])
    # drop rows with no exit destination (current guests at time of report)
    X = X.dropna(subset=['3.12_Exit_Destination'])
    # remove columns to avoid data leakage
    X = X.drop(columns=['3.12_Exit_Destination', '5.9_Household_ID', '5.8_Personal_ID',
                        '4.2_Income_Total_at_Exit', '4.3_Non-Cash_Benefit_Count_at_Exit'])
    # Drop needless features
    unusable_variance = ['Enrollment_Created_By', '4.24_Current_Status_(Retired_Data_Element)']
    X = X.drop(columns=unusable_variance)
    # Drop columns with timestamps
    timestamp_columns = ['3.10_Enroll_Date', '3.11_Exit_Date',
                         'Date_of_Last_ES_Stay_(Beta)', 'Date_of_First_ES_Stay_(Beta)',
                         'Prevention_|_Most_Recent_Enrollment', 'PSH_|_Most_Recent_Enrollment',
                         'Transitional_Housing_|_Most_Recent_Enrollment', 'Coordinated_Entry_|_Most_Recent_Enrollment',
                         'Street_Outreach_|_Most_Recent_Enrollment', 'RRH_|_Most_Recent_Enrollment',
                         'SOAR_Eligibility_Determination_(Most_Recent)', 'Date_of_First_Contact_(Beta)',
                         'Date_of_Last_Contact_(Beta)', '4.13_Engagement_Date', '4.11_Domestic_Violence_-_When_it_Occurred',
                         '3.917_Homeless_Start_Date']
    X = X.drop(columns=timestamp_columns)
    # return the wrangled dataframe
    return X
train.shape
test.shape
train = wrangle(train)
test = wrangle(test)
# Hand pick features only known at entry to avoid data leakage
features = ['CaseMembers',
'3.2_Social_Security_Quality', '3.3_Birthdate_Quality',
'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender',
'3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry',
'3.917_Living_Situation', 'length_homeless',
'3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years',
'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)',
'4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence',
'4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type',
'R4_Last_Grade_Completed', 'R5_School_Status',
'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment',
'R6_Looking_for_Work', 'entry_income',
'4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry',
'Chronic_Homeless_Status', 'Under_25_Years_Old',
'4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition',
'4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)',
'4.08_HIV/AIDS', '4.09_Mental_Health_Problem',
'4.05_Physical_Disability'
]
target = 'perm_leaver'
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
# base case
df['perm_leaver'].value_counts(normalize=True)
# fit linear model to get a 3 on Sprint
from sklearn.linear_model import LogisticRegression
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_test_scaled = scaler.transform(X_test_imputed)
model = LogisticRegression(random_state=42, max_iter=5000)
model.fit(X_train_scaled, y_train)
print ('Validation Accuracy', model.score(X_test_scaled,y_test))
```
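The encode → impute → scale → fit sequence above can also be wrapped in a single scikit-learn pipeline, so every transformer is fit only on the training data in one step. A minimal sketch, with synthetic data standing in for the real columns (the column names here are just placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for the shelter data: two numeric features, binary target
rng = np.random.default_rng(42)
X = pd.DataFrame({'length_homeless': rng.normal(100, 30, 200),
                  'entry_income': rng.normal(500, 200, 200)})
X.iloc[:10, 0] = np.nan  # simulate missing values
y = (X['entry_income'] > 500).astype(int)

pipe = make_pipeline(SimpleImputer(), StandardScaler(),
                     LogisticRegression(random_state=42, max_iter=5000))
pipe.fit(X, y)
print(round(pipe.score(X, y), 2))
```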
The linear model above beat the baseline; now let's see if a tree-based model is even more accurate.
```
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import GradientBoostingClassifier
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_estimators=100, n_jobs=-1,
random_state=42,
)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print('Validation Accuracy', accuracy_score(y_test, y_pred))
from joblib import dump
dump(pipeline, 'pipeline.joblib', compress=True)
# get and plot feature importances
# Linear models have coefficients whereas decision trees have "Feature Importances"
import matplotlib.pyplot as plt
model = pipeline.named_steps['randomforestclassifier']
encoder = pipeline.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_test).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
# cross validation
k = 3
scores = cross_val_score(pipeline, X_train, y_train, cv=k,
scoring='accuracy')
print(f'Accuracy for {k} folds:', scores)
scores.mean()
```
Now that we have beaten the linear model with a tree-based model, let's see if XGBoost does a better job at predicting exit destination.
```
from xgboost import XGBClassifier
pipeline = make_pipeline(
ce.OrdinalEncoder(),
XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', pipeline.score(X_test, y_test))
```
XGBoost failed to beat my tree-based model, so the tree-based model is what I will use for predictions in my web app.
```
# get and plot feature importances
# Linear models have coefficients whereas decision trees have "Feature Importances"
import matplotlib.pyplot as plt
model = pipeline.named_steps['xgbclassifier']
encoder = pipeline.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_test).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
history = pd.read_csv('/content/n_alltime.csv')
from plotly.tools import mpl_to_plotly
import seaborn as sns
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
# Assign to X, y to avoid data leakage
features = ['CaseMembers',
'3.2_Social_Security_Quality', '3.3_Birthdate_Quality',
'Age_at_Enrollment', '3.4_Race', '3.5_Ethnicity', '3.6_Gender',
'3.7_Veteran_Status', '3.8_Disabling_Condition_at_Entry',
'3.917_Living_Situation', 'length_homeless',
'3.917_Times_Homeless_Last_3_Years', '3.917_Total_Months_Homeless_Last_3_Years',
'V5_Last_Permanent_Address', 'V5_State', 'V5_Zip', 'Municipality_(City_or_County)',
'4.1_Housing_Status', '4.4_Covered_by_Health_Insurance', '4.11_Domestic_Violence',
'4.11_Domestic_Violence_-_Currently_Fleeing_DV?', 'Household_Type',
'R4_Last_Grade_Completed', 'R5_School_Status',
'R6_Employed_Status', 'R6_Why_Not_Employed', 'R6_Type_of_Employment',
'R6_Looking_for_Work', 'entry_income',
'4.3_Non-Cash_Benefit_Count', 'Barrier_Count_at_Entry',
'Chronic_Homeless_Status', 'Under_25_Years_Old',
'4.10_Alcohol_Abuse_(Substance_Abuse)', '4.07_Chronic_Health_Condition',
'4.06_Developmental_Disability', '4.10_Drug_Abuse_(Substance_Abuse)',
'4.08_HIV/AIDS', '4.09_Mental_Health_Problem',
'4.05_Physical_Disability', 'perm_leaver'
]
X = history[features]
X = X.drop(columns='perm_leaver')
y_pred = pipeline.predict(X)
fig, ax = plt.subplots()
sns.distplot(test['perm_leaver'], hist=False, kde=True, ax=ax, label='Actual')
sns.distplot(y_pred, hist=False, kde=True, ax=ax, label='Predicted')
ax.set_title('Distribution of Actual Exit compared to prediction')
ax.legend().set_visible(True)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
RandomForestClassifier(random_state=42)
)
param_distributions = {
'simpleimputer__strategy': ['most_frequent', 'mean', 'median'],
'randomforestclassifier__bootstrap': [True, False],
'randomforestclassifier__max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None],
'randomforestclassifier__max_features': ['auto', 'sqrt'],
'randomforestclassifier__min_samples_leaf': [1, 2, 4],
'randomforestclassifier__min_samples_split': [2, 5, 10],
'randomforestclassifier__n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]}
# If you're on Colab, decrease n_iter & cv parameters
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=1,
cv=3,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
# Fit on train, score on val
search.fit(X_train, y_train)
print('Best hyperparameters', search.best_params_)
print('Cross-validation accuracy score', search.best_score_)
print('Validation Accuracy', search.score(X_test, y_test))
y_pred.shape
history['perm_leaver'].value_counts()
1282+478
from joblib import dump
dump(pipeline, 'pipeline2.joblib', compress=True)
```
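The persisted pipeline can later be restored with `joblib.load`. A minimal round-trip sketch, using a plain dict as a stand-in for the fitted pipeline and a hypothetical file name:

```python
from joblib import dump, load

# persist an object to disk (a plain dict stands in for the fitted pipeline here)
dump({"model": "random_forest"}, "pipeline_demo.joblib", compress=True)

# in a later session, load it back and reuse it
restored = load("pipeline_demo.joblib")
```

The same `load` call works for the `pipeline2.joblib` file written above, returning a pipeline with `predict` ready to call.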
| github_jupyter |
# Chapter 3
**See how to create a table of contents (TOC)**
## Strings
```
a = "My dog's name is"
b = "Bingo"
c = a + " " + b
c
#trying to add string and integer
d = "927"
e = 927
d + e  # raises a TypeError: cannot concatenate str and int
```
## Lists
```
a = [0, 1, 1, 2, 3, 5, 8, 13]
b = [5., "girl", 2+0j, "horse", 21]
b[0]
b[1]
```
<div class="alert alert-block alert-warning">
<big><center>Lists are <span style="color:red"> *zero-indexed*</span> </center></big>
</div>
<div class="alert alert-block alert-success">
$$
\begin{align}
list = &[a, b, c, d, e]\\
&\color{red}\Downarrow
\hspace{2.2pc}\color{red}\Downarrow\\
&\color{purple}{list[0]}
\hspace{1.2pc}
\color{purple}{list[4]}\\
&\color{brown}{list[-5]}
\hspace{0.7pc}
\color{brown}{list[-1]}
\end{align}
$$
</div>
```
b[-1]
b[-5]
b[4]
b = [5., "girl", 2+0j, "horse", 21]
b[0] = b[0]+2
import numpy as np
b[3] = np.pi
b
a
```
<div class="alert alert-block alert-warning">
<big><center>Adding lists <span style="color:red"> *concatenates*</span> them, just as the **+** operator concatenates strings. </center></big>
</div>
```
a+a
```
### Slicing lists
<div class="alert alert-block alert-warning">
<big>Note that the last element is <span style="color:red"> *not*</span> included.</big>
</div>
```
b
b[1:4]
b[3:5]
b[2:]
b[:3]
b[:]
b[1:-1]
len(b) #len --> length
? range
```
### Creating and modifying lists
<div class="alert alert-block alert-info">
range(stop) -> range object <br>
range(start, stop[, step]) -> range object
</div>
Useful for creating *arithmetic progressions*
```
range(10) # starts at zero by default; stores only start, stop, and step, which saves memory
print(range(10))
list(range(10)) # to list all the elements explicitly
list(range(3,10))
list(range(0,10,2))
a = range(1,10,3)
a
list(a)
a += [16, 31, 64, 127]  # raises a TypeError: a is still a range object
a = a + [16, 31, 64,127]  # also raises a TypeError
a = list(a) + [16, 31, 64,127]  # works: convert the range to a list first
a
a = [0, 0] + a
a
b = a[:5] + [101, 102] + a[5:]
b
```
### Tuples
<div class="alert alert-block alert-warning">
<big><center>**Tuples** are lists that are <span style="color:red"> *immutable*</span></center></big>
</div>
Thus, tuples can be used to store constants, for example.
```
c = (1, 1, 2, 3, 5, 8, 13)
c[4]
c[4] = 7
```
### Multidimensional lists and tuples
Useful in making tables and other structures.
```
a = [[3,9], [8,5], [11,1]] #list
a
a[0]
a[1][0]
a[1][0] = 10
a
a = [[3,9], [8,5], [11,1]]  # wait: this does not form a proper tuple... everything must use parentheses. See below
a[1][0]
a[1][0] = 10
a
a = ((3,9), (8,5), (11,1)) #tuple
a
a[1][0]
a[1][0] = 10
```
## NumPy arrays
- all the elements are of the same type.
```
import numpy as np
a = [0, 0, 1, 4, 7, 16, 31, 64,127]
a
b = np.array(a) #converts a list to an array
b
```
- the `array` function promotes all of the numbers to the type of the most general entry in the list.
```
c = np.array([1, 4., -2,7]) # all elements will become floats
c
? np.linspace
```
<div class="alert alert-block alert-info">
np.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)
</div>
Return evenly spaced numbers over a specified interval.
Returns `num` evenly spaced samples, calculated over the interval [`start`, `stop`].
The endpoint of the interval can optionally be excluded.
```
np.linspace(0, 10, 5)
np.linspace(0, 10, 5, endpoint=False)
np.linspace(0, 10, 5, retstep=True)
? np.logspace
```
<div class="alert alert-block alert-info">
np.logspace(start, stop, num=50, endpoint=True, base=10.0, dtype=None)
</div>
Return numbers spaced evenly on a log scale.
In linear space, the sequence starts at ``base**start`` (`base` to the power of `start`) and ends with ``base**stop``.
```
np.logspace(1,3,5)
%precision 1
np.logspace(1,3,5)
? np.arange
```
<div class="alert alert-block alert-info">
arange([start,] stop[, step,], dtype=None)
</div>
Return evenly spaced values within a given interval.
Values are generated within the half-open interval ``[start, stop)`` (in other words, the interval including `start` but excluding `stop`). For integer arguments the function is equivalent to the Python built-in
`range <http://docs.python.org/lib/built-in-funcs.html>`_ function, but returns an ndarray rather than a list.
```
np.arange(0, 10, 2)
np.arange(0., 10, 2) # all will be floats
np.arange(0, 10, 1.5)
```
### Creating arrays of zeros and ones
```
np.zeros(6)
np.ones(8)
np.ones(8, dtype=int)
```
### Mathematical operations with arrays
```
import numpy as np
a = np.linspace(-1, 5, 7)
a
a*6
np.sin(a)
x = np.linspace(-3.14, 3.14, 21)
y = np.cos(x)
x
y # plot this later
a
np.log(a)
a = np.array([34., -12, 5.])
b = np.array([68., 5., 20.])
a+b #vectorized operations
```
### Slicing and addressing arrays
Formula for the average velocity over the time interval *i*:
$$
v_i = \frac{y_i - y_{i-1}}{t_i - t_{i-1}}
$$
```
y = np.array([0., 1.3, 5., 10.9, 18.9, 28.7, 40.])
t = np.array([0., 0.49, 1., 1.5, 2.08, 2.55, 3.2])
y[:-1]
y[1:]
v = (y[1:]-y[:-1])/(t[1:]-t[:-1])
v
```
### Multi-dimensional arrays and matrices
```
b = np.array([[1., 4, 5], [9, 7, 4]])
b
#all elements of a NumPy array must be of the same data type: floats, integers, complex numbers, etc.
a = np.ones((3,4), dtype=float)
a
np.eye(4)
c = np.arange(6)
c
c = np.reshape(c, (2,3))
c
b
b[0][2]
b[0,2] #0 indexed
b[1,2]
2*b
```
<div class="alert alert-block alert-warning">
*Beware*: array multiplication, done on an element-by-element basis, <span style="color:red">*is not the same as **matrix** multiplication*</span> as defined in linear algebra. Therefore, we distinguish between *array* multiplication and *matrix* multiplication in Python.
</div>
```
b*c
d = c.T #creates the transposed matrix
d
np.dot(b,d) #performs matrix multiplication
```
## Dictionaries
\* Also called *hashmaps* or *associative arrays* in other programming languages.
<div class="alert alert-block alert-success">
$$
\begin{align}
room =&\text{{"Emma":309, "Jacob":582, "Olivia":764}}\\
&\hspace{1.0pc}\color{red}\Downarrow
\hspace{1.5pc}\color{red}\Downarrow\\
&\hspace{0.7pc}\color{purple}{key} \hspace{1.5pc}\color{purple}{value}
\end{align}
$$
</div>
```
room = {"Emma":309, "Jacob":582, "Olivia":764}
room["Olivia"]
weird = {"tank":52, 846:"horse", "bones":[23, "fox", "grass"], "phrase":"I am here"}
weird["tank"]
weird[846]
weird["bones"]
weird["phrase"]
d = {}
d["last name"] = "Alberts"
d["first name"] = "Marie"
d["birthday"] = "January 27"
d
d.keys()
d.values()
```
## Random numbers
`np.random.rand(num)` creates an array of `num` floats **uniformly** distributed on the interval from 0 to 1.
`np.random.randn(num)` produces a **normal (Gaussian)** distribution of `num` random numbers with a mean of 0 and a standard deviation of 1. They are distributed according to
$$
P(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}
$$
`np.random.randint(low, high, num)` produces a **uniform** random distribution of `num` integers between `low` (inclusive) and `high` (exclusive).
```
np.random.rand()
np.random.rand(5)
a, b = 10, 20
(b-a)*np.random.rand(20) + a #setting interval
x0, sigma = 15, 10
sigma*np.random.randn(20) + x0 #setting width and center of normal distribution
np.random.randint(1, 7, 12) #simulating a dozen rolls of a single die
```
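A side note on reproducibility: the calls above use the legacy global RNG. NumPy's newer `Generator` API supports the same sampling with an explicit seed; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)             # seeded generator for reproducible results
uniform = rng.random(5)                     # like np.random.rand(5)
normal = 10 * rng.standard_normal(5) + 15   # width 10, centered at 15
rolls = rng.integers(1, 7, size=12)         # a dozen die rolls, low inclusive, high exclusive
```

Re-running with the same seed reproduces the same draws, which is handy when debugging simulations.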
| github_jupyter |
```
%matplotlib inline
```
Model interpretation with Captum
=================================
**Translated by**: `jjeamin <https://github.com/jjeamin>`_
Captum helps you understand how data features affect your model's predictions
or neuron activations, shedding light on how your model operates.
It lets you apply state-of-the-art feature attribution algorithms such as
\ ``Integrated Gradients``\ and \ ``Guided GradCam``\ .
In this recipe, you will learn how to use Captum to:
\* attribute an image classifier's predictions to their corresponding image features
\* visualize the attribution results
Before you begin
----------------
Make sure Captum is installed in your active Python environment.
Captum is available on GitHub as a ``pip`` package or a ``conda`` package.
For detailed instructions, consult the installation guide at https://captum.ai/.
For the model, we use a built-in image classifier in PyTorch.
Captum will show which parts of a sample image support certain
predictions made by the model.
```
import torchvision
from torchvision import transforms
from PIL import Image
import requests
from io import BytesIO
model = torchvision.models.resnet18(pretrained=True).eval()
response = requests.get("https://image.freepik.com/free-photo/two-beautiful-puppies-cat-dog_58409-6024.jpg")
img = Image.open(BytesIO(response.content))
center_crop = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
])
normalize = transforms.Compose([
transforms.ToTensor(),  # convert the image to a Tensor with values between 0 and 1
transforms.Normalize(   # normalize to the zero-centered ImageNet RGB pixel distribution
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
])
input_img = normalize(center_crop(img)).unsqueeze(0)
```
Computing attributions
----------------------
Among the model's top-3 predictions are classes 208 and 283, which correspond to a dog and a cat.
Let's attribute each of these predictions to the corresponding part of the
input, using Captum's \ ``Occlusion``\ algorithm.
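The top predictions mentioned here can be listed with `torch.topk` on the softmaxed model output. The sketch below uses a stand-in logits tensor rather than the ResNet output (which would come from `model(input_img)`), so the indices are illustrative only:

```python
import torch

# stand-in logits for a 5-class problem; in the tutorial these come from model(input_img)
logits = torch.tensor([[0.1, 2.0, 0.3, 5.0, 1.0]])
probs = torch.softmax(logits, dim=1)
top_probs, top_classes = probs.topk(3)  # three most probable classes, descending
print(top_classes)  # tensor([[3, 1, 4]])
```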
```
from captum.attr import Occlusion
occlusion = Occlusion(model)
strides = (3, 9, 9)                 # smaller = more fine-grained attribution, but slower
target = 208                        # Labrador index in ImageNet
sliding_window_shapes = (3, 45, 45) # choose a size large enough to change the object's appearance
baselines = 0                       # values used to occlude the image; 0 corresponds to gray
attribution_dog = occlusion.attribute(input_img,
strides = strides,
target=target,
sliding_window_shapes=sliding_window_shapes,
baselines=baselines)
target = 283                        # Persian cat index in ImageNet
attribution_cat = occlusion.attribute(input_img,
strides = strides,
target=target,
sliding_window_shapes=sliding_window_shapes,
baselines=0)
```
Besides ``Occlusion``, Captum provides many other algorithms such as
\ ``Integrated Gradients``\ , \ ``Deconvolution``\ , \ ``GuidedBackprop``\ ,
\ ``Guided GradCam``\ , \ ``DeepLift``\ , and \ ``GradientShap``\ .
All of these algorithms are subclasses of ``Attribution``, which expects your
model as a callable \ ``forward_func``\ upon initialization and has an
``attribute(...)`` method that returns the attribution result in a unified format.
Let's visualize the computed attribution results in our image case.
Visualizing the results
-----------------------
Captum's \ ``visualization``\ utility provides out-of-the-box methods to
visualize the attribution results for both pictorial and textual inputs.
```
import numpy as np
from captum.attr import visualization as viz
# Convert the computed attribution tensor into an image-like numpy array
attribution_dog = np.transpose(attribution_dog.squeeze().cpu().detach().numpy(), (1,2,0))
vis_types = ["heat_map", "original_image"]
vis_signs = ["all", "all"] # "positive", "negative", or "all" to show both
# positive attribution indicates that the presence of the area increases the prediction score
# negative attribution indicates a distractor area whose absence increases the score
_ = viz.visualize_image_attr_multiple(attribution_dog,
center_crop(img),
vis_types,
vis_signs,
["attribution for dog", "image"],
show_colorbar = True
)
attribution_cat = np.transpose(attribution_cat.squeeze().cpu().detach().numpy(), (1,2,0))
_ = viz.visualize_image_attr_multiple(attribution_cat,
center_crop(img),
["heat_map", "original_image"],
["all", "all"], # positive/negative attribution, or all
["attribution for cat", "image"],
show_colorbar = True
)
```
If your data is textual, ``visualization.visualize_text()`` offers a dedicated
view to explore attribution on top of the input text.
Find out more at http://captum.ai/tutorials/IMDB_TorchText_Interpret.
Final notes
-----------
Captum can handle most model types in PyTorch across modalities, including vision and text.
With Captum you can:
\* attribute a specific output to the model input, as illustrated above
\* attribute a specific output to a hidden-layer neuron (see the Captum API reference)
\* attribute a hidden-layer neuron's response to the model input (see the Captum API reference)
For the complete API of supported methods and a list of tutorials, consult http://captum.ai.
Another useful post by Gilbert Tanner:
https://gilberttanner.com/blog/interpreting-pytorch-models-with-captum
| github_jupyter |
```
import sys
sys.path.append('../src')
from numpy import *
import matplotlib.pyplot as plt
from Like import *
from PlotFuncs import *
import WIMPFuncs
pek = line_background(6,'k')
fig,ax = MakeLimitPlot_SDn()
alph = 0.25
cols = cm.bone(linspace(0.3,0.7,4))
nucs = ['Xe','Ge','NaI']
zos = [0,-50,-100,-50]
C_Si = WIMPFuncs.C_SDp(Si29)/WIMPFuncs.C_SDn(Si29)
C_Ge = WIMPFuncs.C_SDp(Ge73)/WIMPFuncs.C_SDn(Ge73)
Cs = [1.0,C_Ge,1.0]
froots = ['SDn','SDp','SDn']
for nuc,zo,col,C,froot in zip(nucs,zos,cols,Cs,froots):
data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloor'+nuc+'_detailed_'+froot+'.txt')
m,sig,NUFLOOR,DY = Floor_2D(data)
plt.plot(m,NUFLOOR*C,'-',color=col,lw=3,path_effects=pek,zorder=zo)
plt.fill_between(m,NUFLOOR*C,y2=1e-99,color=col,zorder=zo,alpha=alph)
#plt.text(0.12,0.2e-35,r'{\bf Silicon}',rotation=45,color='k')
plt.text(0.23,1.5e-38,r'{\bf Ge}',rotation=25,color='k')
plt.text(0.18,5e-38,r'{\bf NaI}',rotation=26,color='k')
plt.text(0.175,5e-40,r'{\bf Xenon}',rotation=31,color='k')
MySaveFig(fig,'NuFloor_Targets_SDn')
pek = line_background(6,'k')
cmap = cm.terrain_r
fig,ax = MakeLimitPlot_SDn(Collected=True,alph=1,edgecolor=col_alpha('gray',0.75),facecolor=col_alpha('gray',0.5))
data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloorXe_detailed_SDn.txt')
m,sig,NUFLOOR,DY = Floor_2D(data,filt=True,filt_width=2,Ex_crit=1e10)
cnt = plt.contourf(m,sig,DY,levels=linspace(2,15,100),vmax=8,vmin=2.2,cmap=cmap)
for c in cnt.collections:
c.set_edgecolor("face")
plt.plot(m,NUFLOOR,'-',color='brown',lw=3,path_effects=pek,zorder=100)
im = plt.pcolormesh(-m,sig,DY,vmax=6,vmin=2.2,cmap=cmap,rasterized=True)
cbar(im,extend='min')
plt.gcf().text(0.82,0.9,r'$\left(\frac{{\rm d}\ln\sigma}{{\rm d}\ln N}\right)^{-1}$',fontsize=35)
plt.gcf().text(0.15*(1-0.01),0.16*(1+0.01),r'{\bf Xenon}',color='k',fontsize=50,alpha=0.2)
plt.gcf().text(0.15,0.16,r'{\bf Xenon}',color='brown',fontsize=50)
MySaveFig(fig,'NuFloorDetailed_Xe_SDn')
fig,ax = MakeLimitPlot_SDn(Collected=True,alph=1,edgecolor=col_alpha('gray',0.75),facecolor=col_alpha('gray',0.5))
data = loadtxt('../data/WIMPLimits/mylimits/DLNuFloorNaI_detailed_SDn.txt')
m,sig,NUFLOOR,DY = Floor_2D(data,filt=True,filt_width=2,Ex_crit=1e11)
cnt = plt.contourf(m,sig,DY,levels=linspace(2,15,100),vmax=6,vmin=2.2,cmap=cmap)
for c in cnt.collections:
c.set_edgecolor("face")
plt.plot(m,NUFLOOR,'-',color='brown',lw=3,path_effects=pek,zorder=100)
im = plt.pcolormesh(-m,sig,DY,vmax=6,vmin=2.2,cmap=cmap,rasterized=True)
cbar(im,extend='min')
plt.gcf().text(0.82,0.9,r'$\left(\frac{{\rm d}\ln\sigma}{{\rm d}\ln N}\right)^{-1}$',fontsize=35)
plt.gcf().text(0.15*(1-0.01),0.16*(1+0.01),r'{\bf NaI}',color='k',fontsize=50,alpha=0.2)
plt.gcf().text(0.15,0.16,r'{\bf NaI}',color='brown',fontsize=50)
MySaveFig(fig,'NuFloorDetailed_NaI_SDn')
dat1 = loadtxt("../data/WIMPLimits/SDn/XENON1T.txt")
dat2 = loadtxt("../data/WIMPLimits/SDn/PandaX.txt")
dat3 = loadtxt("../data/WIMPLimits/SDn/CDMSlite.txt")
dat4 = loadtxt("../data/WIMPLimits/SDn/CRESST.txt")
dats = [dat1,dat2,dat3,dat4]
mmin = amin(dat4[:,0])
mmax = 1e4
mvals = logspace(log10(mmin),log10(mmax),1000)
sig = zeros(shape=1000)
for dat in dats:
sig1 = 10**interp(log10(mvals),log10(dat[:,0]),log10(dat[:,1]))
sig1[mvals<amin(dat[:,0])] = inf
sig1[mvals>amax(dat[:,0])] = inf
sig = column_stack((sig,sig1))
sig = sig[:,1:]
sig = amin(sig,1)
plt.loglog(mvals,sig,color='r',alpha=1,zorder=0.5,lw=2)
savetxt('../data/WIMPLimits/SDn/AllLimits-2021.txt',column_stack((mvals,sig)))
```
| github_jupyter |
# <center>HW 01: Geomviz: Visualizing Differential Geometry</center>
## <center>Special Euclidean Group SE(n)</center>
<center>$\color{#003660}{\text{Swetha Pillai, Ryan Guajardo}}$</center>
# <center> 1.) Mathematical Definition of Special Euclidean SE(n)</center>
### <center> This group is defined as the set of direct isometries, or rigid-body transformations, of $R^n$.</center>
<center>i.e. the linear transformations of the affine space $R^n$ that preserve its canonical inner product, or the Euclidean distance between points.</center>
***
$$
\rho(x) = Rx + u
$$
***
<center>$\rho$ is comprised of a rotational part $R$ and a translational part $u$.</center>
$$
\newline
$$
$$
\newline
SE(n) = \{(R,u)\ \vert\ R \in SO(n), u \in R^n\}
\newline
$$
<center>Where SO(n) is the special orthogonal group.</center>
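The pair $(R, u)$ acts on a point as $\rho(x) = Rx + u$, which can be packed into a single $(n+1)\times(n+1)$ homogeneous matrix. A minimal NumPy sketch for SE(2):

```python
import numpy as np

theta = np.pi / 2                       # rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = np.array([1.0, 2.0])                # translation

g = np.eye(3)                           # homogeneous matrix representing (R, u)
g[:2, :2] = R
g[:2, 2] = u

x = np.array([1.0, 0.0])
y = g @ np.append(x, 1.0)               # rho(x) = R x + u in homogeneous coordinates
print(y[:2])                            # [1. 3.]
```

Composing two elements of SE(n) is then just matrix multiplication of their homogeneous matrices, which is what makes this representation convenient in the applications below.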
# <center> 2.) Uses of Special Euclidean SE(n) in real-world applications</center>
## <center>Rigid Body Kinematics</center>
SE(n) can represent linear and angular displacements of rigid bodies, most commonly via SE(3).
<center><img src="rigid.png" width="500"/></center>
## <center> Autonomous Quadcopter Path Planning!</center>
To make a quadcopter autonomous, a predefined path must be computed by
finding collision-free paths through a space whose topological structure is SE(3).
<center><img src="quadcopterpic.jpeg" width="500"/></center>
## <center> Optimal Paths for Polygonal Robots SE(2)</center>
Similar to the autonomous quadcopter, but we are now in a two-dimensional plane, hence SE(2).
<center><img src="polygonal.png" width="500"/></center>
## <center>Projective Model of Ideal Pinhole Camera</center>
Transform from the camera coordinate system to the world coordinate system.
<center><img src="pinhole.png" width="500"/></center>
## <center>Pose Estimation</center>

# 3.) Visualization of Elementary Operation on SE(3)
We showcase how the visualization can be used by plotting the inputs and outputs of operations such as exp, log, and geodesics.
```
%pip install geomstats
import warnings
warnings.filterwarnings("ignore")
from Special_Euclidean import *
manifold = Special_Euclidean()
point = manifold.random_point()
# point = np.array([1,1,1,1,1,1])
manifold.plot(point)
manifold.scatter(5)
random_points = manifold.random_point(2)
manifold.plot_exp(random_points[0], random_points[1])
manifold.plot_log(random_points[0], random_points[1])
# point = np.eye(6)
point = np.array([0,0,0,0,0,0])
# all rotations and vectors equal
vector = np.array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
#rotation in one dimension, no translation
# vector = np.array([0,0,.9,0,0,0])
#rotation in one dimension, translation in one direction
# vector = np.array([0,0,.9,0,0,.5])
N_STEPS = 10
manifold.plot_geodesic(point, vector, N_STEPS)
```
# 4.) Conclusion
## SE(n) is very useful.
Geomstats: https://github.com/geomstats/geomstats
http://ingmec.ual.es/~jlblanco/papers/jlblanco2010geometry3D_techrep.pdf
https://ieeexplore.ieee.org/document/7425231
https://arm.stanford.edu/publications/optimal-paths-polygonal-robots-se2
https://mappingignorance.org/2015/10/14/shortcuts-for-efficiently-moving-a-quadrotor-throughout-the-special-euclidean-group-se3-and-2/
https://arxiv.org/abs/2111.00190
https://www.seas.upenn.edu/~meam620/slides/kinematicsI.pdf
| github_jupyter |
## A track example
The file `times.dat` contains made-up data for 100-m races between Florence Griffith-Joyner and Shelly-Ann Fraser-Pryce.
We want to understand how often Shelly-Ann beats Flo-Jo.
```
%pylab inline --no-import-all
```
<!-- Secret comment:
How the data were generated
w = np.random.normal(0,.07,10000)
x = np.random.normal(10.65,.02,10000)+w
y = np.random.normal(10.7,.02,10000)+w
np.savetxt('times.dat', (x,y), delimiter=',')
-->
```
florence, shelly = np.loadtxt('times.dat', delimiter=',')
counts, bins, patches = plt.hist(florence,bins=50,alpha=0.2, label='Flo-Jo')
counts, bins, patches = plt.hist(shelly,bins=bins,alpha=0.2, label='Shelly-Ann')
plt.legend()
plt.xlabel('times (s)')
np.mean(florence), np.mean(shelly)
np.std(florence),np.std(shelly)
```
## let's make a prediction
Based on the mean and std. of their times, let's make a little simulation to predict how often Shelly-Ann beats Flo-Jo.
We can use propagation of errors to predict mean and standard deviation for $q=T_{shelly}-T_{Florence}$
```
mean_q = np.mean(shelly)-np.mean(florence)
sigma_q = np.sqrt(np.std(florence)**2+np.std(shelly)**2)
f_guess = np.random.normal(np.mean(florence),np.std(florence),10000)
s_guess = np.random.normal(np.mean(shelly),np.std(shelly),10000)
toy_difference = s_guess-f_guess
```
Make Toy data
```
#toy_difference = np.random.normal(mean_q, sigma_q, 10000)
counts, bins, patches = plt.hist(toy_difference,bins=50, alpha=0.2, label='toy data')
counts, bins, patches = plt.hist(toy_difference[toy_difference<0],bins=bins, alpha=0.2)
norm = (bins[1]-bins[0])*10000
import scipy.stats  # mlab.normpdf was removed from matplotlib; use scipy.stats.norm.pdf instead
plt.plot(bins,norm*scipy.stats.norm.pdf(bins,mean_q,sigma_q), label='prediction')
plt.legend()
plt.xlabel('Shelly - Florence')
# predict fraction of wins
np.sum(toy_difference<0)/10000.
#check toy data looks like real data
counts, bins, patches = plt.hist(f_guess,bins=50,alpha=0.2)
counts, bins, patches = plt.hist(s_guess,bins=bins,alpha=0.2)
```
## How often does she actually win?
```
counts, bins, patches = plt.hist(shelly-florence,bins=50,alpha=0.2)
counts, bins, patches = plt.hist((shelly-florence)[florence-shelly>0],bins=bins,alpha=0.2)
plt.xlabel('Shelly - Florence')
1.*np.sum(florence-shelly>0)/florence.size
```
## What's going on?
```
plt.scatter(f_guess,s_guess, alpha=0.01)
plt.scatter(florence,shelly, alpha=0.01)
plt.hexbin(shelly,florence, alpha=1)
```
Previously we learned propagation of errors formula neglecting correlation:
$\sigma_q^2 = \left( \frac{\partial q}{ \partial x} \sigma_x \right)^2 + \left( \frac{\partial q}{ \partial y}\, \sigma_y \right)^2 = \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial x} C_{xx} + \frac{\partial q}{ \partial y} \frac{\partial q}{ \partial y} C_{yy}$
Now we need to extend the formula to take into account correlation
$\sigma_q^2 = \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial x} C_{xx} + \frac{\partial q}{ \partial y} \frac{\partial q}{ \partial y} C_{yy} + 2 \frac{\partial q}{ \partial x} \frac{\partial q}{ \partial y} C_{xy} $
```
# covariance matrix
cov_matrix = np.cov(shelly,florence)
cov_matrix
# normalized correlation matrix
np.corrcoef(shelly,florence)
# q = T_shelly - T_florence
# x = T_shelly
# y = T_florence
# propagation of errors
cov_matrix[0,0]+cov_matrix[1,1]-2*cov_matrix[0,1]
mean_q = np.mean(shelly)-np.mean(florence)
sigma_q_with_corr = np.sqrt(cov_matrix[0,0]+cov_matrix[1,1]-2*cov_matrix[0,1])
sigma_q_no_corr = np.sqrt(cov_matrix[0,0]+cov_matrix[1,1])
counts, bins, patches = plt.hist(shelly-florence,bins=50,alpha=0.2)
counts, bins, patches = plt.hist((shelly-florence)[florence-shelly>0],bins=bins,alpha=0.2)
norm = (bins[1]-bins[0])*10000
import scipy.stats  # mlab.normpdf was removed from matplotlib; use scipy.stats.norm.pdf instead
plt.plot(bins,norm*scipy.stats.norm.pdf(bins,mean_q,sigma_q_with_corr), label='prediction with correlation')
plt.plot(bins,norm*scipy.stats.norm.pdf(bins,mean_q, sigma_q_no_corr), label='prediction without correlation')
plt.legend()
plt.xlabel('Shelly - Florence')
1.*np.sum(florence-shelly>0)/florence.size
np.std(florence-shelly)
np.sqrt(2.)*0.073
((np.sqrt(2.)*0.073)**2-0.028**2)/2.
.073**2
np.std(florence+shelly)
np.sqrt(2*(np.sqrt(2.)*0.073)**2 -0.028**2)
```
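The lesson above can be closed with a toy simulation that respects the correlation: drawing the two times jointly with `np.random.multivariate_normal` reproduces the narrow difference distribution that the uncorrelated toy missed. The means and covariance below are illustrative stand-ins for the values estimated from `times.dat`:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = [10.70, 10.65]                 # [shelly, florence]; stand-in values
cov = np.array([[0.0054, 0.0049],     # stand-in covariance with the strong positive
                [0.0049, 0.0054]])    # correlation induced by the shared wind
samples = rng.multivariate_normal(mean, cov, size=10000)
q = samples[:, 0] - samples[:, 1]     # q = T_shelly - T_florence
frac_wins = np.mean(q < 0)            # fraction of races Shelly-Ann wins
print(q.std(), frac_wins)
```

Because the shared fluctuation cancels in the difference, `q.std()` is much smaller than the uncorrelated propagation-of-errors estimate, and the win fraction matches the data far better.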
| github_jupyter |
## Load Python Packages
```
# --- load packages
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.nn.modules.distance import PairwiseDistance
from torch.utils.data import Dataset
from torchvision import transforms
from torchsummary import summary
from torch.cuda.amp import GradScaler, autocast
from torch.nn import functional as F
import time
from collections import OrderedDict
import numpy as np
import os
from skimage import io
from PIL import Image
import cv2
import matplotlib.pyplot as plt
```
## Set parameters
```
# --- Set all Parameters
DatasetFolder = "./CASIA-WebFace" # path to Dataset folder
ResNet_sel = "18" # select ResNet type
NumberID = 10575 # Number of ID in dataset
batch_size = 256 # size of batch size
Triplet_size = 10000 * batch_size # size of total Triplets
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
loss_margin = 0.6 # Margin for Triplet loss
learning_rate = 0.075 # choose Learning Rate(note that this value will be change during training)
epochs = 200 # number of iteration over total dataset
```
## Download Datasets
#### In this section we download the CASIA-WebFace and LFW datasets
#### We use CASIA-WebFace for training and LFW for evaluation
```
# --- Download CASIA-WebFace Dataset
print(40*"=" + " Download CASIA WebFace " + 40*'=')
! gdown --id 1Of_EVz-yHV7QVWQGihYfvtny9Ne8qXVz
! unzip CASIA-WebFace.zip
! rm CASIA-WebFace.zip
# --- Download LFW Dataset
print(40*"=" + " Download LFW " + 40*'=')
! wget http://vis-www.cs.umass.edu/lfw/lfw-deepfunneled.tgz
! tar -xvzf lfw-deepfunneled.tgz
! rm lfw-deepfunneled.tgz
```
# Define ResNet Parts
#### 1. Residual block
#### 2. Build the ResNet from the previous block
```
# --- Residual block
class ResidualBlock(nn.Module):
def __init__(self, in_channels, out_channels, downsample=1):
super().__init__()
# --- Variables
self.in_channels = in_channels
self.out_channels = out_channels
self.downsample = downsample
# --- Residual parts
# --- Conv part
self.blocks = nn.Sequential(OrderedDict(
{
# --- First Conv
'conv1' : nn.Conv2d(self.in_channels, self.out_channels, kernel_size=3, stride=self.downsample, padding=1, bias=False),
'bn1' : nn.BatchNorm2d(self.out_channels),
'Relu1' : nn.ReLU(),
# --- Second Conv
'conv2' : nn.Conv2d(self.out_channels, self.out_channels, kernel_size=3, stride=1, padding=1, bias=False),
'bn2' : nn.BatchNorm2d(self.out_channels)
}
))
# --- shortcut part
self.shortcut = nn.Sequential(OrderedDict(
{
'conv' : nn.Conv2d(self.in_channels, self.out_channels, kernel_size=1, stride=self.downsample, bias=False),
'bn' : nn.BatchNorm2d(self.out_channels)
}
))
def forward(self, x):
residual = x
if (self.in_channels != self.out_channels) : residual = self.shortcut(x)
x = self.blocks(x)
x += residual
return x
# # --- Test Residual block
# dummy = torch.ones((1, 32, 140, 140))
# block = ResidualBlock(32, 64)
# block(dummy).shape
# print(block)
# --- Make ResNet18
class ResNet18(nn.Module):
def __init__(self):
super().__init__()
# --- Pre layers with 7*7 conv with stride2 and a max-pooling
self.PreBlocks = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=7, padding=3, stride=2, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
# --- Define all Residual Blocks here
self.CoreBlocka = nn.Sequential(
ResidualBlock(64,64 ,downsample=1),
ResidualBlock(64,64 ,downsample=1),
# ResidualBlock(64,64 ,downsample=1),
ResidualBlock(64,128 ,downsample=2),
ResidualBlock(128,128 ,downsample=1),
# ResidualBlock(128,128 ,downsample=1),
# ResidualBlock(128,128 ,downsample=1),
ResidualBlock(128,256 ,downsample=2),
ResidualBlock(256,256 ,downsample=1),
# ResidualBlock(256,256 ,downsample=1),
# ResidualBlock(256,256 ,downsample=1),
# ResidualBlock(256,256 ,downsample=1),
# ResidualBlock(256,256 ,downsample=1),
ResidualBlock(256,512 ,downsample=2),
ResidualBlock(512,512 ,downsample=1),
# ResidualBlock(512,512 ,downsample=1)
)
# --- Make Average pooling
self.avg = nn.AdaptiveAvgPool2d((1,1))
# --- FC layer for output
self.fc = nn.Linear(512, 512, bias=False)
def forward(self, x):
x = self.PreBlocks(x)
x = self.CoreBlocka(x)
x = self.avg(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
x = F.normalize(x, p=2, dim=1)
return x
# dummy = torch.ones((1, 3, 114, 144))
model = ResNet18()
# model
# res = model(dummy)
model.to(device)
summary(model, (3, 114, 114))
del model
```
# Make TripletLoss Class
```
# --- Triplet loss
"""
This code was imported from tbmoon's 'facenet' repository:
https://github.com/tbmoon/facenet/blob/master/utils.py
"""
import torch
from torch.autograd import Function
from torch.nn.modules.distance import PairwiseDistance
class TripletLoss(Function):
def __init__(self, margin):
super(TripletLoss, self).__init__()
self.margin = margin
self.pdist = PairwiseDistance(p=2)
def forward(self, anchor, positive, negative):
pos_dist = self.pdist.forward(anchor, positive)
neg_dist = self.pdist.forward(anchor, negative)
hinge_dist = torch.clamp(self.margin + pos_dist - neg_dist, min=0.0)
loss = torch.mean(hinge_dist)
# print(torch.mean(pos_dist).item(), torch.mean(neg_dist).item(), loss.item())
# print("pos_dist", pos_dist)
# print("neg_dist", neg_dist)
# print(self.margin + pos_dist - neg_dist)
return loss
```
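Subclassing `torch.autograd.Function` with an `__init__` and an instance `forward` like this is deprecated in recent PyTorch. The built-in `nn.TripletMarginLoss` computes the same hinge on pairwise L2 distances and could serve as a drop-in sketch:

```python
import torch
import torch.nn as nn

loss_fn = nn.TripletMarginLoss(margin=0.6, p=2)   # same margin as loss_margin above

anchor = torch.randn(8, 512)
positive = anchor + 0.01 * torch.randn(8, 512)    # embeddings close to the anchor
negative = torch.randn(8, 512)                    # unrelated embeddings

loss = loss_fn(anchor, positive, negative)        # mean of clamp(margin + d_pos - d_neg, 0)
print(loss.item())                                # non-negative scalar
```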
# Make Triplet Dataset from CASIA-WebFace
##### 1. Make triplet pairs
##### 2. Zip them
##### 3. Make the Dataset class
##### 4. Define transforms
```
# --- Create Triplet Datasets ---
# --- make a list of ids and folders
selected_ids = np.uint32(np.round((np.random.rand(int(Triplet_size))) * (NumberID-1)))
folders = os.listdir("./CASIA-WebFace/")
# --- Itrate on each id and make Triplets list
TripletList = []
for index,id in enumerate(selected_ids):
# --- find name of id faces folder
id_str = str(folders[id])
# --- find list of faces in this folder
number_faces = os.listdir("./CASIA-WebFace/"+id_str)
# --- Get two Random number for Anchor and Positive
while(True):
two_random = np.uint32(np.round(np.random.rand(2) * (len(number_faces)-1)))
if (two_random[0] != two_random[1]):
break
# --- Make Anchor and Positive image
Anchor = str(number_faces[two_random[0]])
Positive = str(number_faces[two_random[1]])
# --- Make Negative image
while(True):
neg_id = np.uint32(np.round(np.random.rand(1) * (NumberID-1)))
if (neg_id != id):
break
# --- number of images in negative Folder
neg_id_str = str(folders[neg_id[0]])
number_faces = os.listdir("./CASIA-WebFace/"+neg_id_str)
one_random = np.uint32(np.round(np.random.rand(1) * (len(number_faces)-1)))
Negative = str(number_faces[one_random[0]])
# --- insert Anchor, Positive and Negative image path to TripletList
TempList = ["","",""]
TempList[0] = id_str + "/" + Anchor
TempList[1] = id_str + "/" + Positive
TempList[2] = neg_id_str + "/" + Negative
TripletList.append(TempList)
# # --- Make dataset Triplets File
# f = open("CASIA-WebFace-Triplets.txt", "w")
# for index, triplet in enumerate(TripletList):
# f.write(triplet[0] + " " + triplet[1] + " " + triplet[2])
# if (index != len(TripletList)-1):
# f.write("\n")
# f.close()
# # --- Make zipFile if you need
# !zip -r CASIA-WebFace-Triplets.zip CASIA-WebFace-Triplets.txt
# # --- Read zip File and extract TripletList
# TripletList = []
# # !unzip CASIA-WebFace-Triplets.zip
# # --- Read text file
# with open('CASIA-WebFace-Triplets.txt') as f:
# lines = f.readlines()
# for line in lines:
# TripletList.append(line.split(' '))
# TripletList[-1][2] = TripletList[-1][2][0:-1]
# # --- Print some data
# print(TripletList[0:5])
# --- Make Pytorch Dataset Class for Triplets
class TripletFaceDatset(Dataset):
def __init__(self, list_of_triplets, transform=None):
# --- initializing values
print("Start Creating Triplets Dataset from CASIA-WebFace")
self.list_of_triplets = list_of_triplets
self.transform = transform
# --- getitem function
def __getitem__(self, index):
# --- get images path and read faces
anc_img_path, pos_img_path, neg_img_path = self.list_of_triplets[index]
anc_img = cv2.imread('./CASIA-WebFace/'+anc_img_path)
pos_img = cv2.imread('./CASIA-WebFace/'+pos_img_path)
neg_img = cv2.imread('./CASIA-WebFace/'+neg_img_path)
# anc_img = cv2.resize(anc_img, (114,114))
# pos_img = cv2.resize(pos_img, (114,114))
# neg_img = cv2.resize(neg_img, (114,114))
# --- set transform
if self.transform:
anc_img = self.transform(anc_img)
pos_img = self.transform(pos_img)
neg_img = self.transform(neg_img)
return {'anc_img' : anc_img,
'pos_img' : pos_img,
'neg_img' : neg_img}
# --- return len of triplets
def __len__(self):
return len(self.list_of_triplets)
# --- Define Transforms
transform_list =transforms.Compose([
transforms.ToPILImage(),
transforms.Resize((140,140)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std =[0.229, 0.224, 0.225])
])
# --- Test Dataset
triplet_dataset = TripletFaceDatset(TripletList, transform_list)
triplet_dataset[0]['anc_img'].shape
```
# LFW Evaluation
##### 1. Face detection function
##### 2. Load LFW Pairs .npy file
##### 3. Define Function for evaluation
```
# -------------------------- UTILS CELL -------------------------------
trained_face_data = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_frontalface_default.xml')
# --- define Functions
def face_detect(file_name):
flag = True
# Choose an image to detect faces in
img = cv2.imread(file_name)
# Must convert to greyscale
# grayscaled_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect Faces
# face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
# img_crop = []
# Draw rectangles around the faces
# for (x, y, w, h) in face_coordinates:
# img_crop.append(img[y-20:y+h+20, x-20:x+w+20])
# --- select only Biggest
# big_id = 0
# if len(img_crop) > 1:
# temp = 0
# for idx, img in enumerate(img_crop):
# if img.shape[0] > temp:
# temp = img.shape[0]
# big_id = idx
# elif len(img_crop) == 0:
# flag = False
# img_crop = [0]
    # --- face detection is disabled above; return the whole image instead
    # return [img_crop[big_id]], flag
    return [img], flag
# --- LFW Dataset loading for test part
l2_dist = PairwiseDistance(2)
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
# --- 1. Load .npy pairs path
lfw_pairs_path = np.load('lfw_pairs_path.npy', allow_pickle=True)
pairs_dist_list_mat = []
pairs_dist_list_unmat = []
valid_thresh = 0.96
def lfw_validation(model):
global valid_thresh
tot_len = len(lfw_pairs_path)
model.eval() # use model in evaluation mode
with torch.no_grad():
true_match = 0
for path in lfw_pairs_path:
# --- extracting
pair_one_path = path['pair_one']
# print(pair_one_path)
pair_two_path = path['pair_two']
# print(pair_two_path)
matched = int(path['matched'])
# --- detect face and resize it
pair_one_img, flag_one = face_detect(pair_one_path)
pair_two_img, flag_two = face_detect(pair_two_path)
if (flag_one==False) or (flag_two==False):
tot_len = tot_len-1
continue
# --- Model Predict
pair_one_img = transform_list(pair_one_img[0])
pair_two_img = transform_list(pair_two_img[0])
pair_one_embed = model(torch.unsqueeze(pair_one_img, 0).to(device))
pair_two_embed = model(torch.unsqueeze(pair_two_img, 0).to(device))
# print(pair_one_embed.shape)
# break
# print(pair_one_img)
# break
# --- find Distance
pairs_dist = l2_dist.forward(pair_one_embed, pair_two_embed)
if matched == 1: pairs_dist_list_mat.append(pairs_dist.item())
if matched == 0: pairs_dist_list_unmat.append(pairs_dist.item())
            # --- thresholding
if (matched==1 and pairs_dist.item() <= valid_thresh) or (matched==0 and pairs_dist.item() > valid_thresh):
true_match += 1
valid_thresh = (np.percentile(pairs_dist_list_unmat,25) + np.percentile(pairs_dist_list_mat,75)) /2
print("Thresh :", valid_thresh)
return (true_match/tot_len)*100
# img, _ = face_detect("./lfw-deepfunneled/Steve_Lavin/Steve_Lavin_0002.jpg")
# plt.imshow(img[0])
# plt.show()
temp = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
for i in temp:
valid_thresh = i
print(lfw_validation(model))
(np.mean(pairs_dist_list_mat) + np.mean(pairs_dist_list_unmat) )/2
pairs_dist_list_unmat
# --- find best thresh
round_unmat = pairs_dist_list_unmat
round_mat = pairs_dist_list_mat
print("----- Unmatched statistical information -----")
print("len : ",len(round_unmat))
print("min : ", np.min(round_unmat))
print("Q1 : ", np.percentile(round_unmat, 25))
print("mean : ", np.mean(round_unmat))
print("Q3 : ", np.percentile(round_unmat, 75))
print("max : ", np.max(round_unmat))
print("\n")
print("----- matched statistical information -----")
print("len : ",len(round_mat))
print("min : ", np.min(round_mat))
print("Q1 : ", np.percentile(round_mat, 25))
print("mean : ", np.mean(round_mat))
print("Q3 : ", np.percentile(round_mat, 75))
print("max : ", np.max(round_mat))
```
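The threshold used above is the midpoint between the first quartile of the unmatched distances and the third quartile of the matched distances, as in the update inside `lfw_validation`. A minimal sketch with small made-up distance lists standing in for `pairs_dist_list_mat` / `pairs_dist_list_unmat`:

```python
import numpy as np

# Hypothetical distances: matched pairs should be close, unmatched pairs far apart.
mat_dists = [0.3, 0.4, 0.5, 0.6]
unmat_dists = [0.9, 1.0, 1.1, 1.2]

# Midpoint between Q1 of the unmatched and Q3 of the matched distances.
thresh = (np.percentile(unmat_dists, 25) + np.percentile(mat_dists, 75)) / 2
print(thresh)
```

With well-separated distributions this midpoint lands between the two clusters — here 0.75.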
## How to make Training Faster
```
# Make training faster in PyTorch (CUDA):
# 1. use multiple DataLoader workers (num_workers)
# 2. set pin_memory=True
# 3. enable cuDNN autotuning for conv layers (cudnn.benchmark)
# 4. use automatic mixed precision (AMP)
# 5. set bias=False in conv layers that are followed by batch normalization
# source: https://betterprogramming.pub/how-to-make-your-pytorch-code-run-faster-93079f3c1f7b
```
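Tips 1–3 are applied in the DataLoader cell below; tip 4 (AMP) is not shown elsewhere, so here is a minimal, CPU-safe sketch. It uses a toy linear model (not the ResNet used in this notebook) with `torch.autocast` and `GradScaler`, both of which become no-ops when CUDA is unavailable:

```python
import torch
from torch import nn, optim

# Toy model and optimizer — placeholders, not the notebook's ResNet18 setup.
use_cuda = torch.cuda.is_available()
model = nn.Linear(8, 2)
opt = optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x, y = torch.randn(4, 8), torch.randn(4, 2)
# Forward pass runs in mixed precision only when CUDA is present.
with torch.autocast(device_type="cuda" if use_cuda else "cpu", enabled=use_cuda):
    loss = nn.functional.mse_loss(model(x), y)

# GradScaler rescales the loss to avoid fp16 underflow; with enabled=False
# these calls simply pass through to the ordinary backward/step.
scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
print(loss.item())
```

On GPU the same three `scaler` calls replace the plain `loss.backward(); optimizer.step()` pair in the training loop.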
# DataLoader
```
# --- DataLoader
face_data = torch.utils.data.DataLoader(triplet_dataset,
batch_size= batch_size,
shuffle=True,
num_workers=4,
pin_memory= True)
# --- Enable cuDNN
torch.backends.cudnn.benchmark = True
```
# Save Model (best acc. and last acc.)
```
# --- saving model for best and last model
# --- Connect to google Drive for saving models
from google.colab import drive
drive.mount('/content/gdrive')
# --- some variable for saving models
BEST_MODEL_PATH = "./gdrive/MyDrive/best_trained.pth"
LAST_MODEL_PATH = "./gdrive/MyDrive/last_trained.pth"
def save_model(model_sv, loss_sv, epoch_sv, optimizer_state_sv, accuracy, accu_sv_list, loss_sv_list):
    # --- Inputs:
    # 1. model_sv : the model being trained
    # 2. loss_sv : current loss
    # 3. epoch_sv : current epoch
    # 4. optimizer_state_sv : current optimizer (its state dict is saved)
    # 5. accuracy : current accuracy
    # 6. accu_sv_list / loss_sv_list : accuracy and loss history
    # --- save the best model so far
    if accuracy >= max(accu_sv_list):
        torch.save(model_sv.state_dict(), BEST_MODEL_PATH)
    # --- save this model as the last checkpoint
    torch.save({
        'epoch': epoch_sv,
        'model_state_dict': model_sv.state_dict(),
        'optimizer_state_dict': optimizer_state_sv.state_dict(),
        'loss': loss_sv,
        'accu_sv_list': accu_sv_list,
        'loss_sv_list': loss_sv_list
    }, LAST_MODEL_PATH)
```
# Load previous model to continue training
```
torch.cuda.empty_cache()
# --- training initialize and start
model = ResNet18().to(device) # load model
tiplet_loss = TripletLoss(loss_margin) # load Tripletloss
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()),
lr=learning_rate) # load optimizer
l2_dist = PairwiseDistance(2) # L2 distance loading # save loss values
epoch_check = 0
valid_arr = []
loss_arr = []
load_last_epoch = True
if (load_last_epoch == True):
# --- load last model
# define model objects before this
checkpoint = torch.load(LAST_MODEL_PATH, map_location=device) # load model path
model.load_state_dict(checkpoint['model_state_dict']) # load state dict
optimizer.load_state_dict(checkpoint['optimizer_state_dict']) # load optimizer
epoch_check = checkpoint['epoch'] # load epoch
loss = checkpoint['loss'] # load loss value
valid_arr = checkpoint['accu_sv_list'] # load Acc. values
loss_arr = checkpoint['loss_sv_list'] # load loss values
model.train()
epoch_check
loss
```
# Training Loop
```
model.train()
# --- Training loop based on number of epoch
temp = 0.075
for epoch in range(epoch_check,200):
    print(80 * '=')
    # --- for logging information
    triplet_loss_sum = 0.0
    len_face_data = len(face_data)
    # --- set starting time
    time0 = time.time()
    # --- learning-rate schedule: drop to 0.001 after 50 epochs
    if len(loss_arr) > 50:
        for g in optimizer.param_groups:
            g['lr'] = 0.001
        temp = 0.001
# --- loop on batches
for batch_idx, batch_faces in enumerate(face_data):
# --- Extract face triplets and send them to CPU or GPU
anc_img = batch_faces['anc_img'].to(device)
pos_img = batch_faces['pos_img'].to(device)
neg_img = batch_faces['neg_img'].to(device)
# --- Get embedded values for each triplet
anc_embed = model(anc_img)
pos_embed = model(pos_img)
neg_embed = model(neg_img)
# --- Find Distance
pos_dist = l2_dist.forward(anc_embed, pos_embed)
neg_dist = l2_dist.forward(anc_embed, neg_embed)
        # --- Select hard triplets (those violating the 0.8 margin)
        hard_mask = (neg_dist - pos_dist < 0.8).cpu().numpy().flatten()
        hard_triplets = np.where(hard_mask == 1)
if len(hard_triplets[0]) == 0: # --- Check number of hard triplets
continue
# --- select hard embeds
anc_hard_embed = anc_embed[hard_triplets]
pos_hard_embed = pos_embed[hard_triplets]
neg_hard_embed = neg_embed[hard_triplets]
# --- Loss
loss_value = tiplet_loss.forward(anc_hard_embed, pos_hard_embed, neg_hard_embed)
# --- backward path
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
if (batch_idx % 200 == 0) : print("Epoch: [{}/{}] ,Batch index: [{}/{}], Loss Value:[{:.8f}]".format(epoch+1, epochs, batch_idx+1, len_face_data,loss_value))
# --- save information
triplet_loss_sum += loss_value.item()
print("Learning Rate: ", temp)
# --- Find Avg. loss value
avg_triplet_loss = triplet_loss_sum / len_face_data
loss_arr.append(avg_triplet_loss)
    # --- Validation based on the LFW dataset
validation_acc = lfw_validation(model)
valid_arr.append(validation_acc)
model.train()
# --- Save model with checkpoints
save_model(model, avg_triplet_loss, epoch+1, optimizer, validation_acc, valid_arr, loss_arr)
# --- Print information for each epoch
print(" Train set - Triplet Loss = {:.8f}".format(avg_triplet_loss))
print(' Train set - Accuracy = {:.8f}'.format(validation_acc))
print(f' Execution time = {time.time() - time0}')
```
# plot and print some information
```
plt.plot(valid_arr, 'b-', label='Validation Accuracy')
plt.legend()
plt.show()
plt.plot(loss_arr, 'b-',
label='loss values',
)
plt.show()
for param_group in optimizer.param_groups:
print(param_group['lr'])
valid_arr
print(40*"=" + " Download CASIA WebFace " + 40*'=')
! gdown --id 1Of_EVz-yHV7QVWQGihYfvtny9Ne8qXVz
! unzip CASIA-WebFace.zip
! rm CASIA-WebFace.zip
# --- LFW Dataset loading for test part
l2_dist = PairwiseDistance(2)
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
valid_thresh = 0.96
model.eval()
with torch.no_grad():
# --- extracting
pair_one_path = "./3.jpg"
# print(pair_one_path)
pair_two_path = "./2.jpg"
# --- detect face and resize it
pair_one_img, flag_one = face_detect(pair_one_path)
pair_two_img, flag_two = face_detect(pair_two_path)
# --- Model Predict
pair_one_img = transform_list(pair_one_img[0])
pair_two_img = transform_list(pair_two_img[0])
pair_one_embed = model(torch.unsqueeze(pair_one_img, 0).to(device))
pair_two_embed = model(torch.unsqueeze(pair_two_img, 0).to(device))
# --- find Distance
pairs_dist = l2_dist.forward(pair_one_embed, pair_two_embed)
print(pairs_dist)
# --- Create Triplet Datasets ---
# --- make a list of ids and folders
selected_ids = np.uint32(np.round((np.random.rand(int(Triplet_size))) * (NumberID-1)))
folders = os.listdir("./CASIA-WebFace/")
# --- Iterate over each id and build the triplet list
TripletList = []
for index,id in enumerate(selected_ids):
# --- print info
# print(40*"=" + str(index) + 40*"=")
# print(index)
# --- find name of id faces folder
id_str = str(folders[id])
# --- find list of faces in this folder
number_faces = os.listdir("./CASIA-WebFace/"+id_str)
# --- Get two Random number for Anchor and Positive
while(True):
two_random = np.uint32(np.round(np.random.rand(2) * (len(number_faces)-1)))
if (two_random[0] != two_random[1]):
break
# --- Make Anchor and Positive image
Anchor = str(number_faces[two_random[0]])
Positive = str(number_faces[two_random[1]])
# --- Make Negative image
while(True):
neg_id = np.uint32(np.round(np.random.rand(1) * (NumberID-1)))
if (neg_id != id):
break
# --- number of images in negative Folder
neg_id_str = str(folders[neg_id[0]])
number_faces = os.listdir("./CASIA-WebFace/"+neg_id_str)
one_random = np.uint32(np.round(np.random.rand(1) * (len(number_faces)-1)))
Negative = str(number_faces[one_random[0]])
# --- insert Anchor, Positive and Negative image path to TripletList
TempList = ["","",""]
TempList[0] = id_str + "/" + Anchor
TempList[1] = id_str + "/" + Positive
TempList[2] = neg_id_str + "/" + Negative
TripletList.append(TempList)
# print(TripletList[-1])
```
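A side note on the design above: the `while(True)` retry loops draw random indices until they differ, while `random.sample` returns distinct picks in a single call. A small sketch with a hypothetical file list:

```python
import random

# Hypothetical face files for one identity folder.
faces = ["001.jpg", "002.jpg", "003.jpg", "004.jpg"]

# Two *distinct* picks in one call — no retry loop needed.
anchor, positive = random.sample(faces, 2)
print(anchor, positive)
```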
# Data
Data and operations on data
## Computer science
learning a language $\sim$ **syntax** (necessary, but not the point)
... studying computer science $\sim$ **semantics** (learning how machines think!)
Learning a programming language like Python is all about the syntax in which you write the instructions a machine must carry out. But before that you need other knowledge: knowledge about what the machine (for example, your laptop) actually does.

We will discover step by step what happens inside the machine, looking at data and at operations on (the processing of) data.
## Operations and data
```
x = 41
y = x + 1
```
Let's begin by assigning each of the two variables `x` and `y` a value. These values (`41` and `42`) are stored in memory.
## Behind the scenes

Think of a variable as a box: the contents of the box are the value (for example `41` or `42` in our case), together with extra information about the *type* of the value (an `int`, short for *integer*, a whole number) and a memory location (LOC).
### Memory

Memory is one very long list of such boxes, each with a name, value, type, and memory location.

Random Access Memory ([RAM](https://nl.wikipedia.org/wiki/Dynamic_random-access_memory)) is where variables are stored; a card like the one shown here is inside your computer too! If you were to carefully remove the black material, a (microscopically small) grid would become visible.

Horizontally you see the *bit lines*, or address lines (the memory location), and vertically the *word lines* (or data lines). Each intersection is a [capacitor](https://nl.wikipedia.org/wiki/Condensator) that can be electrically charged or uncharged.
### Bits

Such a point (a capacitor) that can be charged (1 or `True`) or uncharged (0 or `False`) is called a *bit*. It is the smallest possible unit of information!
### Bytes

You will also often hear about *bytes*: a byte is a group of 8 consecutive *bits* on an address line. Why 8, and not 5, 10, 12, or more (or fewer), you might ask? This is historically determined and has everything to do with the minimum number of bits once needed to represent a particular set of characters (letters and other symbols) ([ASCII](https://nl.wikipedia.org/wiki/ASCII_(tekenset)) to be precise). Don't worry about what this means exactly; we will come back to it!
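A small preview of what this means in Python: a byte groups 8 bits (values 0 to 255), and an ASCII character such as `'A'` fits comfortably inside one:

```python
code = ord('A')             # the ASCII code point of 'A'
print(code)                 # 65
print(format(code, '08b'))  # 01000001: the 8-bit pattern stored in one byte
```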
### Word?

*Word* in word line is not a word as in a sentence (language) but a term for the [natural unit](https://en.wikipedia.org/wiki/Word_(computer_architecture)) of information of a given computer architecture. Today this is 64 bits on most systems; it is also called the *address space* of an architecture.
This unit matters because it determines, for example, the largest whole number that can be stored natively. But how do we get from bits to bytes, and then to numbers and other data, you may wonder? You will see later; first we will look at the different types of data we can distinguish.
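You can peek at the word size from Python itself. Note that Python's own `int` is arbitrary precision, so it is not limited by the 64-bit word; only native quantities such as `sys.maxsize` are:

```python
import sys

print(sys.maxsize)        # 2**63 - 1 on a typical 64-bit system
print(2**64 + 1 > 2**64)  # True: Python ints simply keep growing, no overflow
```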
## Data types
*All* languages have data types!
| Type | Example | What is it? |
|---------|-------------------|----------------------------------------------------------------------------------|
| `float` | `3.14` or `3.0` | decimal numbers |
| `int` | `42` or `10**100` | whole numbers |
| `bool` | `True` or `False` | the result of a test or comparison with: `==`, `!=`, `<`, `>`, `<=`, `>=` |
```
type(42.0)
```
These are the first data types we will get to know, and they correspond fairly well to distinctions we (humans!) make ourselves, for example between whole and decimal numbers.
A `bool`(ean) is ultimately also a number: if we type `False`, Python reads it as 0. `True` and `False` are *syntax* (!) to make things easier for us, but *semantically* they stand for 1 and 0 (at least as far as Python is concerned!).
With the *function* `type(x)` you can ask which type Python thinks a value has.
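You can check the claim about booleans directly:

```python
print(type(42.0))   # <class 'float'>
print(type(True))   # <class 'bool'>
print(True == 1)    # True: semantically, True is 1 ...
print(False == 0)   # True: ... and False is 0
print(True + True)  # 2: booleans even take part in arithmetic
```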
## Operators
Special symbols for operations on data.
### Python operators
| Meaning | |
|-----------------------------------|---------------------------------|
| grouping | `(` `)` |
| exponentiation | `**` |
| multiplication, modulo, division | `*` `%` `/` `//` |
| addition, subtraction | `+` `-` |
| comparison | `==` `!=` `<` `>` `<=` `>=` |
| assignment | `=` |
Just as in arithmetic, you have to take the order of operations into account; they are listed here from highest to lowest precedence. You do *not* need to memorize this order: our tip is to group values explicitly rather than worry about precedence.
Two operators deserve a moment of attention because it is not immediately obvious what they do: the modulo operator `%` and *integer* division `//` (as opposed to ordinary division `/`).
### The modulo operator `%`
- `7 % 3`
- `9 % 3`
`x % y` is the **remainder** when `x` is divided by `y`
```
11 % 3
```
Syntax check! It makes no difference whether you write `x%2` or `x % 2` (with spaces); Python knows what you mean :)
#### Examples
| | Test | Possible values of `x` | |
|---|---------------|---------------------------|----------------------------------------------|
| A | `x % 2 == 0` | | |
| B | `x % 2 == 1` | | |
| C | `x % 4 == 0` | | What happens here if `x` is a year? |
| D | `x % 24 == 0` | | What happens here if `x` is a number of hours? |
```
3 % 2 == 0
```
A and B are all about even and odd numbers, example C about leap years, and example D perhaps about the digital display of your alarm clock?
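The four tests worked out (with the leap-year test C in its simplified form — the full rule also treats century years specially):

```python
print(10 % 2 == 0)    # True: 10 is even (A)
print(7 % 2 == 1)     # True: 7 is odd (B)
print(2024 % 4 == 0)  # True: 2024 is a leap year (C, simplified rule)
print(48 % 24 == 0)   # True: 48 hours is a whole number of days (D)
```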
### Integer division
- `7 // 3`
- `9 // 3`
- `30 // 7`
`x // y` is like `x / y` but **rounded down** to a whole number
```
30 // 7
```
The `//` operator rounds down, and all the way down! In English, `//` is known not only as "integer division" but also as "floor division": floor (the lowest) as opposed to ceiling (the highest). But there is more going on, because you will see that `//` is closely related to the `%` operator!
The division of 30 into multiples of 7:
```python
30 == (4) * 7 + (2)
```
Could we generalize this into a general rule using the operators `//` and `%` we have just met?
The division of `x` into multiples of `y`:
```python
x == (x // y) * y + (x % y)
```
and filled in for our example:
```python
30 == (30 // 7) * 7 + (30 % 7)
```
And there is the `%` operator again :) You will see later that `%` and `//` come in particularly handy when we start calculating with ... bits!
In short: the `//` operator always rounds down, to the nearest whole number below.
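The general rule is easy to verify. One detail beyond the text above: Python's `//` floors (rounds toward minus infinity) rather than truncates, which shows up with negative numbers:

```python
x, y = 30, 7
print(x // y, x % y)                      # 4 2
print(x == (x // y) * y + (x % y))        # True

print(-30 // 7)                           # -5: floored, not truncated
print(int(-30 / 7))                       # -4: ordinary division, then truncation toward zero
print(-30 == (-30 // 7) * 7 + (-30 % 7))  # True: the rule still holds
```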
### What is equal?
| ASSIGNING a value | is NOT the same as | TESTING a value |
|----------------------|--------------------|-------------------|
| `=` | `!=` | `==` |
You know the single `=` from mathematics, where you read $a = 1$ as "a equals 1". In programming languages this is different: it means "assign the value 1 to a". To test whether a value is equal to another value, `==` is used (and `!=` for is *not* equal to).
### Identity
Is `==` a test of *value* or of *identity* (the memory location where the value *lives*)?
Some languages have `===`!
There is a difference between testing for *value* and testing for *identity* (whether it is the same "box", i.e. the same memory location). Python has no `===` (unlike JavaScript, a programming language used in browsers) but provides `is` especially for this case, for example `a is b` to compare by identity.
Whether `==` compares by value or by identity differs greatly between languages. In Java (a widely used programming language) `==` is a test of *identity*. Python chose to make `==` a test of equality of *value*. This is probably closest to how people think, certainly when comparing things such as numbers or text.
An example to make the difference clear.
```
a = 3141592
b = 3141592
```
Given two variables `a` and `b` with the same value,
```
a == b
```
it does indeed turn out that `a` and `b` have an equal *value*.
```
a is b
```
but a comparison by *identity* will not succeed...
```
print(id(a))
print(id(b))
```
`id(x)` returns the memory address of a value. You can see that those of `a` and `b` differ, even though they hold the same value! (Note: these memory locations may be different on your computer!)
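A sketch of the same experiment. The ints are built at run time with `int(...)` because a compiler is otherwise free to share one constant object; the small-int behavior at the end is a CPython implementation detail:

```python
a = int("3141592")
b = int("3141592")
print(a == b)          # True: equal value
print(a is b)          # False: two objects, two memory locations
print(id(a) == id(b))  # False: the same comparison, spelled with id()

# CPython caches the small ints -5..256, so identity can hold for them:
print(int("7") is int("7"))  # True (in CPython)
```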
## Quiz
Run the following lines:
```python
x = 41
y = x + 1
z = x + y
```
What values do `x`, `y` and `z` have?
```
x = 41
y = x + 1
z = x + y
print(x, y, z)
```
Next, run the following line:
```python
x = x + y
```
What values do `x`, `y` and `z` have now?
```
x = x + y
print(x, y, z)
```
### Behind the scenes
```python
x = 41
y = x + 1
z = x + y
```
In memory:

The three variables `x`, `y` and `z` are now stored in memory at three different locations.
The last step:
```python
x = x + y
```
In memory:

With the last step we change the value of `x`; this means the original location is cleared and the new value is placed in memory, at a new location!
You can ask Python for the identity (the memory location) of a value with `id(x)`. Try this with `x` before and after the last operation and you will see that they differ. Erasing or deleting a value is done with `del x` (so without parentheses `()`).
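Trying exactly that:

```python
x = 41
before = id(x)
x = x + 1               # x now refers to a new object holding 42
after = id(x)
print(before != after)  # True: a different memory location

del x                   # the name x is gone
try:
    x
except NameError:
    print("x no longer exists")
```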
### Extra
```python
a = 11 // 2
b = a % 3
c = b ** a+b * a
```
What values do `a`, `b` and `c` have?
```
a = 11 // 2
b = a % 3
c = b ** a+b * a
print(a, b, c)
```
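The answer hinges on precedence: `**` binds tighter than `*`, which binds tighter than `+`, so the unparenthesized expression groups itself:

```python
a = 11 // 2                     # 5
b = a % 3                       # 2
c = b ** a + b * a              # reads as (b ** a) + (b * a) = 32 + 10
print(a, b, c)                  # 5 2 42
print(c == (b ** a) + (b * a))  # True
```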
## Culture

The book [The Hitchhiker's Guide to the Galaxy](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy) by [Douglas Adams](https://en.wikipedia.org/wiki/Douglas_Adams) has left its traces in computer science, among other places: chances are you will come across the number 42 in examples or solutions. And in everyday life too, when you see people walking around with a towel on [May 25](https://en.wikipedia.org/wiki/Towel_Day) ...