Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 12
Step1: Like last week, we're going to use pyspark, a Python package that wraps Apache Spark and makes its functionality available in Python. We'll also use a few of the standard Python libraries - json, socket, threading and time - as well as the sseclient package you just installed to connect to the event stream.
Note
Step2: Streaming Wikipedia events
Currently, Spark supports three kinds of streaming connection out of the box
Step3: Streaming analysis
Now that we have our stream relay set up, we can start to analyse its contents. First, let's initialise a SparkContext object, which will represent our connection to the Spark cluster. To do this, we must first specify the URL of the master node to connect to. As we're only running this notebook for demonstration purposes, we can just run the cluster locally, as follows
Step4: Next, we create a StreamingContext object, which represents the streaming functionality of our Spark cluster. When we create the context, we must specify a batch duration time (in seconds), to tell Spark how often it should process data from the stream. Let's process the Wikipedia data in batches of one second
Step5: Using our StreamingContext object, we can create a data stream from our local TCP relay socket with the socketTextStream method
Step6: Even though we've created a data stream, nothing happens! Before Spark starts to consume the stream, we must first define one or more operations to perform on it. Let's count the number of edits made by different users in the last minute
Step7: Again, nothing happens! This is because the StreamingContext must be started before the stream is processed by Spark. We can start data streaming using the start method of the StreamingContext and stop it using the stop method. Let's run the stream for two minutes (120 seconds) and then stop
Python Code:
!pip install sseclient
Explanation: Lab 12: Spark Streaming
Introduction
In this lab, we're going to look at data streaming with Apache Spark. At the end of the lab, you should be able to:
Create a local StreamingContext object.
Use Spark to analyse the recent Wikipedia edits stream.
Getting started
Let's start by importing the packages we'll need. This week, we'll need to install the sseclient package so we can connect to the Wikipedia stream. This package is not installed on student vDesktop environments, but you can install it if you're running at home or using Docker by executing the code in the box below:
End of explanation
import json
import pyspark
import socket
import threading
import time
from pyspark.streaming import StreamingContext
from sseclient import SSEClient
Explanation: Like last week, we're going to use pyspark, a Python package that wraps Apache Spark and makes its functionality available in Python. We'll also use a few of the standard Python libraries - json, socket, threading and time - as well as the sseclient package you just installed to connect to the event stream.
Note: You don't need to understand how these packages are used to connect to the event stream, but the code is below if you're curious.
End of explanation
def relay():
    events = SSEClient('https://stream.wikimedia.org/v2/stream/recentchange')
    s = socket.socket()
    s.bind(('localhost', 50000))
    s.listen(1)
    while True:
        client = None
        try:
            client, address = s.accept()
            for event in events:
                if event.event == 'message':
                    # Encode as UTF-8 bytes (sendall requires bytes in Python 3) and
                    # newline-delimit so Spark's socketTextStream can split records.
                    client.sendall((event.data + '\n').encode('utf-8'))
                    break
        except Exception:
            pass
        finally:
            if client is not None:
                client.close()
threading.Thread(target=relay).start()
Explanation: Streaming Wikipedia events
Currently, Spark supports three kinds of streaming connection out of the box:
Apache Kafka
Amazon Kinesis
Apache Flume
While it's possible to connect to other kinds of streams too, we must write our own code to do it and, at present, this is unsupported in Python (although it is possible in Java and Scala). However, Spark also supports streaming data from arbitrary TCP socket endpoints and so we can instead relay the remote data stream to a local socket port to enable Spark to consume it.
The code in the box below connects to the Wikipedia event stream and publishes its content to a local port. While you don't need to understand it to complete the lab, the basic logic is as follows:
Connect to the Wikipedia RecentChanges stream using SSEClient.
Create a local socket connection on port 50000.
When a client (e.g. Spark) connects to the local socket, relay the next available event to it from the event stream.
End of explanation
sc = pyspark.SparkContext(master='local[*]')
Explanation: Streaming analysis
Now that we have our stream relay set up, we can start to analyse its contents. First, let's initialise a SparkContext object, which will represent our connection to the Spark cluster. To do this, we must first specify the URL of the master node to connect to. As we're only running this notebook for demonstration purposes, we can just run the cluster locally, as follows:
End of explanation
ssc = StreamingContext(sc, 1)
Explanation: Next, we create a StreamingContext object, which represents the streaming functionality of our Spark cluster. When we create the context, we must specify a batch duration time (in seconds), to tell Spark how often it should process data from the stream. Let's process the Wikipedia data in batches of one second:
End of explanation
stream = ssc.socketTextStream('localhost', 50000)
Explanation: Using our StreamingContext object, we can create a data stream from our local TCP relay socket with the socketTextStream method:
End of explanation
users = (
stream.map(json.loads) # Parse the stream data as JSON
.map(lambda obj: obj['user']) # Extract the values corresponding to the 'user' key
.map(lambda user: (user, 1)) # Give each user a count of one
.window(60) # Create a sliding window, sixty seconds in length
.reduceByKey(lambda a, b: a + b) # Reduce all key-value pairs in the window by adding values
.transform( # Sort by the largest count
lambda rdd: rdd.sortBy(lambda kv: kv[1], ascending=False))
.pprint() # Print the results
)
Explanation: Even though we've created a data stream, nothing happens! Before Spark starts to consume the stream, we must first define one or more operations to perform on it. Let's count the number of edits made by different users in the last minute:
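To make the pipeline above concrete, here is a pure-Python sketch of what one window's worth of processing computes: parse each JSON line, extract the 'user' field, count edits per user, and rank by count. The event strings are hypothetical stand-ins for real stream data; Spark performs the same logic, distributed across the cluster.

```python
import json
from collections import Counter

# Hypothetical event lines standing in for one window of stream data.
lines = [
    '{"user": "alice", "title": "Page A"}',
    '{"user": "bob", "title": "Page B"}',
    '{"user": "alice", "title": "Page C"}',
]

users = (json.loads(line)['user'] for line in lines)  # map(json.loads) + extract 'user'
counts = Counter(users)                               # (user, 1) pairs + reduceByKey
ranked = counts.most_common()                         # sortBy count, descending
print(ranked)  # -> [('alice', 2), ('bob', 1)]
```

The Spark version simply spreads these same map/reduce steps over the cluster and recomputes them once per sliding window.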
End of explanation
ssc.start()
time.sleep(120)
ssc.stop()
Explanation: Again, nothing happens! This is because the StreamingContext must be started before the stream is processed by Spark. We can start data streaming using the start method of the StreamingContext and stop it using the stop method. Let's run the stream for two minutes (120 seconds) and then stop:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
x = np.linspace(-1., 1., size)
n = sigma * np.random.randn(size)  # scale by sigma so the noise is N(0, sigma**2)
y = m*x + b + n
return x, y
print (random_line(2, 3, 4, size=10))
#raise NotImplementedError()
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
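As a quick, framework-free sanity check of the $N(0,\sigma^2)$ term: scaling standard-normal draws by $\sigma$ (not $\sigma^2$) yields a sample standard deviation close to $\sigma$. This sketch uses only the standard library; the seed, sample size, and tolerance are illustrative.

```python
import random
import statistics

rng = random.Random(0)  # fixed seed for repeatability
sigma = 2.0
noise = [sigma * rng.gauss(0.0, 1.0) for _ in range(10_000)]

sample_std = statistics.pstdev(noise)
print(sample_std)  # close to sigma = 2.0
```

The same check with `np.std(sigma * np.random.randn(size))` is what the grading asserts below rely on.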
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
x, y = random_line(m, b, sigma, size)
plt.scatter(x, y, c=color)
#raise NotImplementedError()
plot_random_line(5.0, -1.0, 2.0, 50)
plt.xlim(-1.1,1.1)
plt.ylim(-10.,10.)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.,10.,.1), b=(-5.,5.,.1), sigma=(0.,5.,.01), size=(10,100,10), color={'red':'r','green':'g', 'blue':'b'});
#raise NotImplementedError()
assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Federated learning algorithms
This tutorial introduces algorithms for federated learning in FedJAX. By completing this tutorial, we'll learn how to write clear and efficient algorithms that follow best practices. This tutorial assumes that we have finished the tutorials on datasets and models.
In order to keep the code pseudo-code-like, we avoid using jax primitives directly while writing algorithms, with the notable exceptions of the jax.random and jax.tree_util libraries. Since lower-level functions that are described in the model tutorial, such as fedjax.optimizers and model.grad, are all JIT compiled already, the algorithms will still be efficient.
Step1: Introduction
A federated algorithm trains a machine learning model over decentralized data distributed over several clients. At a high level, the server first randomly initializes the model parameters and other learning components. Then at each round the following happens
Step2: Note that similar to the rest of the library, we only pass on the necessary functions and parameters to the federated algorithm. Hence, to initialize the federated algorithm, we only passed the grad_fn and did not pass the entire model. With this, we now initialize the server state.
Step3: To run the federated algorithm, we pass the server state and client data to the apply function. To this end, we pass the client data as tuples of client id, client dataset, and random key. Adding client ids and random keys has multiple advantages. First, client ids allow us to track per-client diagnostics and are helpful in debugging. Second, passing random keys ensures deterministic execution and repeatability. Furthermore, as we discuss later, it helps us with fast implementations. We first format the data in this necessary format and then run one round of federated learning.
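A sketch of the data formatting just described (the client ids and datasets below are hypothetical placeholders; in FedJAX each entry pairs a client id and its dataset with a PRNG key, for which plain integer seeds stand in here):

```python
# Hypothetical client ids and per-client datasets; in FedJAX these come from
# a federated dataset, and PRNG keys replace the integer seeds used here.
client_ids = ['client_a', 'client_b']
client_datasets = {'client_a': [1, 2, 3], 'client_b': [4, 5]}

# One (client_id, client_dataset, client_rng) tuple per participating client.
clients = [(cid, client_datasets[cid], seed)
           for seed, cid in enumerate(client_ids)]
print(clients[0])  # -> ('client_a', [1, 2, 3], 0)
```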
Step4: As we see above, the client statistics provide the delta_l2_norm of the gradients for each client, which can be potentially used for debugging purposes.
Writing federated algorithms
With this background on how to use existing implementations, we are now going to describe how to write your own federated algorithms in FedJAX. As discussed above, this involves three steps
Step5: However, the above code is not desirable due to the following reasons
Step6: Client update
After selecting the clients, the next step would be running a model update step in the clients. Typically this is done by running a few epochs of SGD. We only pass parts of the algorithm that are necessary for the client update.
The client update typically requires a set of parameters from the server (init_params in this example), the client dataset, and a source of randomness (rng). The randomness can be used for dropout or other model update steps. Finally, instead of passing the entire model to the client update, since our code only depends on the gradient function, we pass grad_fn to client_update.
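The shape of this client update can be sketched framework-free. Below, grad_fn, the batches, and the learning rate are illustrative stand-ins (a one-parameter quadratic loss), not FedJAX's actual API; the point is the routine's structure: start from the server's init_params, take SGD steps over local batches, and return the parameter delta plus the example count.

```python
def client_update(init_params, batches, grad_fn, learning_rate=0.5):
    """One client's local training pass: SGD over its local batches."""
    params = init_params
    num_examples = 0
    for batch in batches:
        params = params - learning_rate * grad_fn(params, batch)
        num_examples += len(batch)
    # Report the delta (init - final) and example count back to the server.
    return init_params - params, num_examples

def grad_fn(params, batch):
    # Illustrative gradient of a mean-squared loss around the batch values.
    return 2.0 * sum(params - x for x in batch) / len(batch)

delta, num_examples = client_update(0.0, [[1.0, 3.0], [2.0]], grad_fn)
print(num_examples)  # -> 3
```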
Step7: Server aggregation
The outputs of the clients are typically aggregated by computing the weighted mean of the updates, where the weight is the number of client examples. This can be easily done by using the fedjax.tree_util.tree_mean function.
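The weighted mean the server computes can be written out directly. The sketch below aggregates scalar client deltas weighted by example counts; as described above, fedjax.tree_util.tree_mean applies the same idea leaf-by-leaf to parameter pytrees (the names here are illustrative).

```python
def weighted_mean(updates):
    """updates: iterable of (delta, weight) pairs, e.g. weight = num client examples."""
    total = sum(weight for _, weight in updates)
    return sum(delta * weight for delta, weight in updates) / total

# Two hypothetical client updates: scalar deltas with 3 and 1 local examples.
mean_delta = weighted_mean([(-2.0, 3.0), (4.0, 1.0)])
print(mean_delta)  # -> -0.5
```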
Step8: Combining the above steps gives the FedAvg algorithm, which can be found in the example FedJAX implementation of FedAvg.
Efficient implementation
The above implementation would be efficient enough for running on single machines. However, JAX provides primitives such as jax.pmap and jax.vmap for efficient parallelization across multiple accelerators. FedJAX provides support for them in federated learning by distributing client computation across several accelerators.
To take advantage of the faster implementation, we need to implement client_update in a specific format. It has three functions: client_init, client_step, and client_final.
Step9: client_step
client_step takes the current client_step_state and a batch of examples and updates the client_step_state. In this example, we run one step of SGD using the batch of examples and update client_step_state to reflect the new parameters, optimization state, and randomness.
Step10: client_final
client_final modifies the final client_step_state and returns the desired parameters. In this example, we compute the difference between the initial parameters and the final updated parameters in the client_final function.
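A pure-Python emulation of this three-function pattern (illustrative, not the fedjax API): client_init builds per-client state from shared inputs, client_step folds in one batch, and client_final extracts the result, here the delta between the initial and final parameters.

```python
def client_init(shared_input, client_rng):
    # Build per-client state; remember the starting params for the final delta.
    return {'params': shared_input['init_params'],
            'init_params': shared_input['init_params'],
            'rng': client_rng}

def client_step(state, batch):
    # One SGD step on one batch (illustrative mean-squared-loss gradient).
    grad = 2.0 * sum(state['params'] - x for x in batch) / len(batch)
    return {**state, 'params': state['params'] - 0.5 * grad}

def client_final(shared_input, state):
    # Return only what the server needs: the parameter delta.
    return state['init_params'] - state['params']

def for_each_client_update(shared_input, clients):
    # clients: iterable of (client_id, batches, client_rng) tuples.
    for client_id, batches, client_rng in clients:
        state = client_init(shared_input, client_rng)
        for batch in batches:
            state = client_step(state, batch)
        yield client_id, client_final(shared_input, state)

shared_input = {'init_params': 0.0}
out = dict(for_each_client_update(shared_input, [('c0', [[1.0, 3.0], [2.0]], 42)]))
print(out)  # -> {'c0': -2.0}
```

The real fedjax.for_each_client composes the same three functions, but can run the per-client loops under jit or pmap instead of plain Python.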
Step11: fedjax.for_each_client
Once we have these three functions, we can combine them to create a client_update function using the fedjax.for_each_client function. fedjax.for_each_client returns a function that can be used to run client updates. The sample usage is below.
Step12: Note that for_each_client_update requires the client data to be already batched. This is necessary for performance gains while using multiple accelerators. Furthermore, the batch size needs to be the same across all clients.
By default fedjax.for_each_client selects the standard JIT backend. To enable parallelism with TPUs or for debugging, we can set it using fedjax.set_for_each_client_backend(backend), where backend is either 'pmap' or 'debug', respectively.
The for each client function can also be used to add some additional step-wise results, which can be used for debugging. This requires changing the client_step function.
Python Code:
# Uncomment these to install fedjax.
# !pip install fedjax
# !pip install --upgrade git+https://github.com/google/fedjax.git
import jax
import jax.numpy as jnp
import numpy as np
import fedjax
# We only use TensorFlow for datasets, so we restrict it to CPU only to avoid
# issues with certain ops not being available on GPU/TPU.
fedjax.training.set_tf_cpu_only()
Explanation: Federated learning algorithms
This tutorial introduces algorithms for federated learning in FedJAX. By completing this tutorial, we'll learn how to write clear and efficient algorithms that follow best practices. This tutorial assumes that we have finished the tutorials on datasets and models.
In order to keep the code pseudo-code-like, we avoid using jax primitives directly while writing algorithms, with the notable exceptions of the jax.random and jax.tree_util libraries. Since lower-level functions that are described in the model tutorial, such as fedjax.optimizers and model.grad, are all JIT compiled already, the algorithms will still be efficient.
End of explanation
train, test = fedjax.datasets.emnist.load_data(only_digits=False)
model = fedjax.models.emnist.create_conv_model(only_digits=False)
rng = jax.random.PRNGKey(0)
init_params = model.init(rng)
# Federated algorithm requires a gradient function, client optimizer,
# server optimizers, and hyperparameters for batching at the client level.
grad_fn = fedjax.model_grad(model)
client_optimizer = fedjax.optimizers.sgd(0.1)
server_optimizer = fedjax.optimizers.sgd(1.0)
batch_hparams = fedjax.ShuffleRepeatBatchHParams(batch_size=10)
fed_alg = fedjax.algorithms.fed_avg.federated_averaging(grad_fn,
client_optimizer,
server_optimizer,
batch_hparams)
Explanation: Introduction
A federated algorithm trains a machine learning model over decentralized data distributed over several clients. At a high level, the server first randomly initializes the model parameters and other learning components. Then at each round the following happens:
1. Client selection: The server selects a few clients at each round, typically at random.
2. The server transmits the model parameters and other necessary components to the selected clients.
3. Client update: The clients update the model parameters using a subroutine, which typically involves a few epochs of SGD on their local examples.
4. The clients transmit the updates to the server.
5. Server aggregation: The server combines the clients' updates to produce new model parameters.
A pseudo-code for a common federated learning algorithm can be found in Algorithm 1 in Kairouz et al. (2020).
Since FedJAX focuses on federated simulation and there is no actual transmission between clients and the server, we only focus on steps 1, 3, and 5, and ignore steps 2 and 4.
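As a toy illustration of steps 1, 3, and 5, here is a pure-Python sketch (not the FedJAX API; all names are illustrative) that learns a scalar model y = w·x by federated averaging over synthetic clients:

```python
import random

def local_sgd(w, data, lr=0.1, epochs=5):
    # Client update: a few epochs of SGD on the client's local (x, y) pairs.
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of squared error
            w -= lr * grad
    return w

def federated_round(w_server, clients, num_sampled=2, seed=0):
    rng = random.Random(seed)
    sampled = rng.sample(clients, num_sampled)        # 1. client selection
    updates, weights = [], []
    for data in sampled:
        w_client = local_sgd(w_server, data)          # 3. client update
        updates.append(w_client - w_server)
        weights.append(len(data))
    mean_update = sum(u * n for u, n in zip(updates, weights)) / sum(weights)
    return w_server + mean_update                     # 5. server aggregation

# Synthetic clients whose data all follow y = 3x.
clients = [[(x, 3.0 * x) for x in (1.0, 2.0)] for _ in range(4)]
w = 0.0
for r in range(20):
    w = federated_round(w, clients, seed=r)
print(round(w, 2))  # → 3.0
```

The weighted server aggregation here mirrors the weighted mean used later in this tutorial.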
Before we describe each of the modules, we will first describe how to use algorithms that are implemented in FedJAX.
Federated algorithm overview
We implement federated learning algorithms using the fedjax.FederatedAlgorithm interface. The fedjax.FederatedAlgorithm interface has two functions init and apply. Broadly, our implementation has three parts.
1. ServerState: This contains all the information available at the server at any given round. It includes model parameters and can also include other parameters that are used during optimization. At every round, a subset of ServerState is passed to the clients for federated learning. ServerState is also used in checkpointing and evaluation. Hence it is crucial that all the parameters that are modified during the course of federated learning are stored as part of the ServerState. Do not store mutable parameters as part of fedjax.FederatedAlgorithm.
2. init: Initializes the server state.
3. apply: Takes the ServerState and a set of client_ids, corresponding datasets, and random keys and returns a new ServerState along with any information we need from the clients in the form of client_diagnostics.
We demonstrate fedjax.FederatedAlgorithm using Federated Averaging (FedAvg) and the emnist dataset. We first initialize the model, datasets and the federated algorithm.
End of explanation
init_server_state = fed_alg.init(init_params)
Explanation: Note that similar to the rest of the library, we only pass on the necessary functions and parameters to the federated algorithm. Hence, to initialize the federated algorithm, we only passed the grad_fn and did not pass the entire model. With this, we now initialize the server state.
End of explanation
# Select 5 client_ids and their data
client_ids = list(train.client_ids())[:5]
clients_ids_and_data = list(train.get_clients(client_ids))
client_inputs = []
for i in range(5):
rng, use_rng = jax.random.split(rng)
client_id, client_data = clients_ids_and_data[i]
client_inputs.append((client_id, client_data, use_rng))
updated_server_state, client_diagnostics = fed_alg.apply(init_server_state,
client_inputs)
# Prints the l2 norm of gradients as part of client_diagnostics.
print(client_diagnostics)
Explanation: To run the federated algorithm, we pass the server state and client data to the apply function. To this end, we pass client data as tuples of client id, client data, and random key. Including client ids and random keys has several advantages. Firstly, client ids let us track per-client diagnostics and are helpful in debugging. Secondly, passing random keys ensures deterministic execution and repeatability. Furthermore, as we discuss later, it helps us with fast implementations. We first format the data accordingly and then run one round of federated learning.
End of explanation
all_client_ids = list(train.client_ids())
print("Total number of client ids: ", len(all_client_ids))
sampled_client_ids = np.random.choice(all_client_ids, size=2, replace=False)
print("Sampled client ids: ", sampled_client_ids)
Explanation: As we see above, the client statistics provide the delta_l2_norm of the gradients for each client, which can be potentially used for debugging purposes.
Writing federated algorithms
With this background on how to use existing implementations, we are now going to describe how to write your own federated algorithms in FedJAX. As discussed above, this involves three steps:
1. Client selection
2. Client update
3. Server aggregation
Client selection
At each round of federated learning, typically clients are sampled uniformly at random. This can be done using numpy as follows.
End of explanation
efficient_sampler = fedjax.client_samplers.UniformShuffledClientSampler(
train.shuffled_clients(buffer_size=100), num_clients=2)
print("Sampling from the efficient sampler.")
for round in range(3):
sampled_clients_with_data = efficient_sampler.sample()
for client_id, client_data, client_rng in sampled_clients_with_data:
print(round, client_id)
perfect_uniform_sampler = fedjax.client_samplers.UniformGetClientSampler(
train, num_clients=2, seed=1)
print("Sampling from the perfect uniform sampler.")
for round in range(3):
sampled_clients_with_data = perfect_uniform_sampler.sample()
for client_id, client_data, client_rng in sampled_clients_with_data:
print(round, client_id)
Explanation: However, the above code is not desirable for the following reasons:
1. For reproducibility, it is desirable to have a fixed seed just for sampling clients.
2. Across rounds, different clients need to be sampled.
3. For I/O efficiency reasons, it might be better to do an approximately uniform sampling, where clients whose data is stored together are sampled together.
4. Federated algorithms typically require additional randomness for batching, or dropout that needs to be sent to clients.
To incorporate these features, FedJAX provides a few client samplers.
1. fedjax.client_samplers.UniformShuffledClientSampler
2. fedjax.client_samplers.UniformGetClientSampler
fedjax.client_samplers.UniformShuffledClientSampler is preferred for efficiency reasons, but if we need to sample clients truly randomly, fedjax.client_samplers.UniformGetClientSampler can be used.
Both of them have a sample function that returns a list of (client_id, client_data, client_rng) tuples.
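The buffered-shuffle idea behind the approximately uniform sampler can be sketched in pure Python (illustrative, not the FedJAX implementation): clients stream through a small buffer in storage order, and each sample is drawn randomly from the buffer, trading exact uniformity for I/O locality.

```python
import random

def shuffled_client_stream(client_ids, buffer_size, seed=0):
    # Approximately uniform shuffle: keep a small buffer of upcoming clients
    # and emit a random element from it as each new client streams in.
    rng = random.Random(seed)
    buffer = []
    for cid in client_ids:
        buffer.append(cid)
        if len(buffer) >= buffer_size:
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:
        yield buffer.pop(rng.randrange(len(buffer)))

def sample_rounds(client_ids, num_clients, num_rounds, buffer_size=4):
    stream = shuffled_client_stream(client_ids, buffer_size)
    return [[next(stream) for _ in range(num_clients)] for _ in range(num_rounds)]

rounds = sample_rounds([f"c{i}" for i in range(16)], num_clients=2, num_rounds=3)
print(rounds)  # three rounds of two distinct client ids each
```

Because the buffer draws from nearby positions in the stream, clients stored together tend to be read together, which addresses reason 3 above.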
End of explanation
def client_update(init_params, client_dataset, client_rng, grad_fn):
opt_state = client_optimizer.init(init_params)
params = init_params
for batch in client_dataset.shuffle_repeat_batch(batch_size=10):
client_rng, use_rng = jax.random.split(client_rng)
grads = grad_fn(params, batch, use_rng)
opt_state, params = client_optimizer.apply(grads, opt_state, params)
delta_params = jax.tree_util.tree_multimap(lambda a, b: a - b,
init_params, params)
return delta_params, len(client_dataset)
client_sampler = fedjax.client_samplers.UniformGetClientSampler(
train, num_clients=2, seed=1)
sampled_clients_with_data = client_sampler.sample()
for client_id, client_data, client_rng in sampled_clients_with_data:
delta_params, num_samples = client_update(init_params,client_data,
client_rng, grad_fn)
print(client_id, num_samples, delta_params.keys())
Explanation: Client update
After selecting the clients, the next step would be running a model update step in the clients. Typically this is done by running a few epochs of SGD. We only pass parts of the algorithm that are necessary for the client update.
The client update typically requires a set of parameters from the server (init_params in this example), the client dataset, and a source of randomness (rng). The randomness can be used for dropout or other model update steps. Finally, instead of passing the entire model to the client update, since our code only depends on the gradient function, we pass grad_fn to client_update.
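The client_update above ends by subtracting the updated parameters from the initial ones, leaf by leaf, across the whole parameter tree. A pure-Python sketch of such a leaf-wise tree operation, for parameters stored as nested dicts of floats (illustrative; jax.tree_util handles arbitrary pytrees of arrays):

```python
def tree_map2(f, a, b):
    # Apply f leaf-wise over two nested dicts with the same structure,
    # mirroring what jax.tree_util does for arbitrary pytrees.
    if isinstance(a, dict):
        return {k: tree_map2(f, a[k], b[k]) for k in a}
    return f(a, b)

init_params = {"dense": {"w": 1.0, "b": 0.5}}
params      = {"dense": {"w": 0.8, "b": 0.7}}
delta = tree_map2(lambda x, y: x - y, init_params, params)
print(delta)  # approximately {'dense': {'w': 0.2, 'b': -0.2}}
```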
End of explanation
sampled_clients_with_data = client_sampler.sample()
client_updates = []
for client_id, client_data, client_rng in sampled_clients_with_data:
delta_params, num_samples = client_update(init_params, client_data,
client_rng, grad_fn)
client_updates.append((delta_params, num_samples))
updated_output = fedjax.tree_util.tree_mean(client_updates)
print(updated_output.keys())
Explanation: Server aggregation
The outputs of the clients are typically aggregated by computing the weighted mean of the updates, where the weight is the number of client examples. This can be easily done by using the fedjax.tree_util.tree_mean function.
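A pure-Python sketch of the weighted mean that fedjax.tree_util.tree_mean computes, here for flat dicts of scalars (illustrative only; the real function handles pytrees of arrays):

```python
def tree_mean(updates_and_weights):
    # Weighted mean over (params, weight) pairs; params are flat dicts here.
    total = sum(w for _, w in updates_and_weights)
    keys = updates_and_weights[0][0].keys()
    return {k: sum(p[k] * w for p, w in updates_and_weights) / total
            for k in keys}

client_updates = [({"w": 1.0}, 10), ({"w": 4.0}, 30)]
print(tree_mean(client_updates))  # {'w': 3.25}
```

The client with more examples (weight 30) pulls the mean toward its update, which is exactly the FedAvg aggregation rule.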
End of explanation
def client_init(server_params, client_rng):
opt_state = client_optimizer.init(server_params)
client_step_state = {
'params': server_params,
'opt_state': opt_state,
'rng': client_rng,
}
return client_step_state
Explanation: Combining the above steps gives the FedAvg algorithm, which can be found in the example FedJAX implementation of FedAvg.
Efficient implementation
The above implementation would be efficient enough for running on single machines. However, JAX provides primitives such as jax.pmap and jax.vmap for efficient parallelization across multiple accelerators. FedJAX provides support for them in federated learning by distributing client computation across several accelerators.
To take advantage of the faster implementation, we need to implement client_update in a specific format. It has three functions:
1. client_init
2. client_step
3. client_final
client_init
This function takes the inputs from the server and outputs a client_step_state which will be passed in between client steps. It is desirable for the client_step_state to be a dictionary. In this example, it just copies the parameters, optimizer_state and the current state of client randomness.
We can think of the inputs from the server as "shared inputs" that are shared across all clients and the client_step_state as client-specific inputs that are separate per client.
End of explanation
def client_step(client_step_state, batch):
rng, use_rng = jax.random.split(client_step_state['rng'])
grads = grad_fn(client_step_state['params'], batch, use_rng)
opt_state, params = client_optimizer.apply(grads,
client_step_state['opt_state'],
client_step_state['params'])
next_client_step_state = {
'params': params,
'opt_state': opt_state,
'rng': rng,
}
return next_client_step_state
Explanation: client_step
client_step takes the current client_step_state and a batch of examples and updates the client_step_state. In this example, we run one step of SGD using the batch of examples and update client_step_state to reflect the new parameters, optimization state, and randomness.
End of explanation
def client_final(server_params, client_step_state):
delta_params = jax.tree_util.tree_multimap(lambda a, b: a - b,
server_params,
client_step_state['params'])
return delta_params
Explanation: client_final
client_final modifies the final client_step_state and returns the desired parameters. In this example, we compute the difference between the initial parameters and the final updated parameters in the client_final function.
End of explanation
for_each_client_update = fedjax.for_each_client(client_init,
client_step,
client_final)
client_sampler = fedjax.client_samplers.UniformGetClientSampler(
train, num_clients=2, seed=1)
sampled_clients_with_data = client_sampler.sample()
batched_clients_data = [
(cid, cds.shuffle_repeat_batch(batch_size=10), crng)
for cid, cds, crng in sampled_clients_with_data
]
for client_id, delta_params in for_each_client_update(init_params,
batched_clients_data):
print(client_id, delta_params.keys())
Explanation: fedjax.for_each_client
Once we have these three functions, we can combine them to create a client_update function using the fedjax.for_each_client function. fedjax.for_each_client returns a function that can be used to run client updates. The sample usage is below.
End of explanation
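Conceptually, a for-each-client combinator just threads each client's state through the step function. A sequential pure-Python sketch (illustrative; the real fedjax.for_each_client JIT-compiles and can parallelize across accelerators):

```python
def make_for_each_client(client_init, client_step, client_final):
    def run(shared_input, clients):
        # clients: iterable of (client_id, batches, client_rng) tuples.
        for client_id, batches, client_rng in clients:
            state = client_init(shared_input, client_rng)
            for batch in batches:
                state = client_step(state, batch)
            yield client_id, client_final(shared_input, state)
    return run

# Toy use: count examples per client. The per-client "state" is a counter.
count_clients = make_for_each_client(
    client_init=lambda shared, rng: 0,
    client_step=lambda state, batch: state + len(batch),
    client_final=lambda shared, state: state,
)
data = [("a", [[1, 2], [3]], None), ("b", [[4, 5, 6]], None)]
print(dict(count_clients(None, data)))  # {'a': 3, 'b': 3}
```

The shared input plays the role of the server parameters, while the threaded state plays the role of client_step_state.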
def client_step_with_log(client_step_state, batch):
rng, use_rng = jax.random.split(client_step_state['rng'])
grads = grad_fn(client_step_state['params'], batch, use_rng)
opt_state, params = client_optimizer.apply(grads,
client_step_state['opt_state'],
client_step_state['params'])
next_client_step_state = {
'params': params,
'opt_state': opt_state,
'rng': rng,
}
grad_norm = fedjax.tree_util.tree_l2_norm(grads)
return next_client_step_state, grad_norm
for_each_client_update = fedjax.for_each_client(
client_init, client_step_with_log, client_final, with_step_result=True)
for client_id, delta_params, grad_norms in for_each_client_update(
init_params, batched_clients_data):
print(client_id, list(delta_params.keys()))
print(client_id, np.array(grad_norms))
Explanation: Note that for_each_client_update requires the client data to be already batched. This is necessary for performance gains while using multiple accelerators. Furthermore, the batch size needs to be the same across all clients.
By default fedjax.for_each_client selects the standard JIT backend. To enable parallelism with TPUs or for debugging, we can set it using fedjax.set_for_each_client_backend(backend), where backend is either 'pmap' or 'debug', respectively.
The for_each_client function can also be used to return additional step-wise results, which can be useful for debugging. This requires changing the client_step function.
End of explanation |
8,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating epochs of equal length
This tutorial shows how to create equal length epochs and briefly demonstrates
an example of their use in connectivity analysis.
First, we import necessary modules and read in a sample raw
data set. This data set contains brain activity that is event-related, i.e.
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
Step1: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
Step2: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
Step3: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
Step4: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs
Step5: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is done by passing combine=None to the
envelope correlations function.
Step6: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.preprocessing import compute_proj_ecg
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Creating epochs of equal length
This tutorial shows how to create equal length epochs and briefly demonstrates
an example of their use in connectivity analysis.
First, we import necessary modules and read in a sample raw
data set. This data set contains brain activity that is event-related, i.e.
synchronized to the onset of auditory stimuli. However, rather than creating
epochs by segmenting the data around the onset of each stimulus, we will
create 30 second epochs that allow us to perform non-event-related analyses of
the signal.
End of explanation
raw.crop(tmax=150).resample(100).pick('meg')
ecg_proj, _ = compute_proj_ecg(raw, ch_name='MEG 0511') # No ECG chan
raw.add_proj(ecg_proj)
raw.apply_proj()
Explanation: For this tutorial we'll crop and resample the raw data to a manageable size
for our web server to handle, ignore EEG channels, and remove the heartbeat
artifact so we don't get spurious correlations just because of that.
End of explanation
epochs = mne.make_fixed_length_epochs(raw, duration=30, preload=False)
Explanation: To create fixed length epochs, we simply call the function and provide it
with the appropriate parameters indicating the desired duration of epochs in
seconds, whether or not to preload data, whether or not to reject epochs that
overlap with raw data segments annotated as bad, whether or not to include
projectors, and finally whether or not to be verbose. Here, we choose a long
epoch duration (30 seconds). To conserve memory, we set preload to
False.
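The core of fixed-length epoching is plain windowing. A pure-Python sketch (illustrative only; mne.make_fixed_length_epochs additionally handles annotations, projectors, and metadata):

```python
def fixed_length_epochs(signal, sfreq, duration):
    # Split a 1-D signal into consecutive non-overlapping windows of
    # `duration` seconds; a trailing partial window is dropped.
    n = int(duration * sfreq)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

sfreq = 100.0                       # samples per second
signal = list(range(750))           # 7.5 seconds of fake data
epochs = fixed_length_epochs(signal, sfreq, duration=3.0)
print(len(epochs), len(epochs[0]))  # 2 300
```

Note that the final 1.5 seconds do not fill a whole window and are discarded, just as a trailing partial epoch would be.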
End of explanation
event_related_plot = epochs.plot_image(picks=['MEG 1142'])
Explanation: Characteristics of Fixed Length Epochs
Fixed length epochs are generally unsuitable for event-related analyses. This
can be seen in an image map of our fixed length
epochs. When the epochs are averaged, as seen at the bottom of the plot,
misalignment between onsets of event-related activity results in noise.
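This averaging effect can be demonstrated with a toy simulation (pure Python, not MNE): identical bursts placed at random latencies largely cancel in the average, while aligned bursts survive.

```python
import math
import random

rng = random.Random(0)
n, trials = 200, 50

def burst(onset):
    # One cycle of a sine wave starting at `onset`, zeros elsewhere.
    return [math.sin(2 * math.pi * (i - onset) / 40)
            if onset <= i < onset + 40 else 0.0
            for i in range(n)]

aligned    = [burst(50) for _ in range(trials)]
misaligned = [burst(rng.randrange(n - 40)) for _ in range(trials)]

def average(all_trials):
    return [sum(col) / len(col) for col in zip(*all_trials)]

peak = lambda x: max(abs(v) for v in x)
print(peak(average(aligned)) > 2 * peak(average(misaligned)))  # True
```

The aligned average retains the full burst amplitude, while the misaligned average is mostly low-amplitude residue, i.e. noise.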
End of explanation
epochs.load_data().filter(l_freq=8, h_freq=12)
alpha_data = epochs.get_data()
Explanation: For information about creating epochs for event-related analyses, please see
tut-epochs-class.
Example Use Case for Fixed Length Epochs: Connectivity Analysis
Fixed lengths epochs are suitable for many types of analysis, including
frequency or time-frequency analyses, connectivity analyses, or
classification analyses. Here we briefly illustrate their utility in a sensor
space connectivity analysis.
The data from our epochs object has shape (n_epochs, n_sensors, n_times)
and is therefore an appropriate basis for using MNE-Python's envelope
correlation function to compute power-based connectivity in sensor space. The
long duration of our fixed length epochs, 30 seconds, helps us reduce edge
artifacts and achieve better frequency resolution when filtering must
be applied after epoching.
Let's examine the alpha band. We allow default values for filter parameters
(for more information on filtering, please see tut-filter-resample).
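To build intuition for envelope correlation, here is a crude pure-Python sketch: extract each channel's amplitude envelope (a moving RMS here, rather than the Hilbert-based envelope MNE uses) and correlate the envelopes. Illustrative only.

```python
import math

def moving_rms(x, win=20):
    # Crude amplitude envelope: RMS over a trailing window.
    return [math.sqrt(sum(v * v for v in x[max(0, i - win):i + 1])
                      / min(i + 1, win + 1))
            for i in range(len(x))]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Two 10 Hz channels whose amplitudes are co-modulated at 0.5 Hz:
# their phases differ, but their power envelopes move together.
t = [i / 100 for i in range(1000)]
amp = [1 + 0.5 * math.sin(2 * math.pi * 0.5 * ti) for ti in t]
ch1 = [a * math.sin(2 * math.pi * 10 * ti) for a, ti in zip(amp, t)]
ch2 = [a * math.cos(2 * math.pi * 10 * ti) for a, ti in zip(amp, t)]
r = pearson(moving_rms(ch1), moving_rms(ch2))
print(r > 0.9)  # True
```

This is why envelope correlation is described as power-based connectivity: it tracks shared amplitude dynamics, not phase alignment.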
End of explanation
corr_matrix = mne.connectivity.envelope_correlation(alpha_data, combine=None)
Explanation: If desired, separate correlation matrices for each epoch can be obtained.
For envelope correlations, this is done by passing combine=None to the
envelope correlations function.
End of explanation
first_30 = corr_matrix[0]
last_30 = corr_matrix[-1]
corr_matrices = [first_30, last_30]
color_lims = np.percentile(np.array(corr_matrices), [5, 95])
titles = ['First 30 Seconds', 'Last 30 Seconds']
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.suptitle('Correlation Matrices from First 30 Seconds and Last 30 Seconds')
for ci, corr_matrix in enumerate(corr_matrices):
ax = axes[ci]
mpbl = ax.imshow(corr_matrix, clim=color_lims)
ax.set_xlabel(titles[ci])
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.85, 0.2, 0.025, 0.6])
cbar = fig.colorbar(ax.images[0], cax=cax)
cbar.set_label('Correlation Coefficient')
Explanation: Now we can plot correlation matrices. We'll compare the first and last
30-second epochs of the recording:
End of explanation |
8,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Joint Tour Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
Step1: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
Step2: Load data and prep model for estimation
Step3: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
Step4: Utility specification
Step5: Chooser data
Step6: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods, including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not respect constraints on parameters.
Step7: Estimated coefficients
Step8: Output Estimation Results
Step9: Write the model estimation report, including coefficient t-statistic and log likelihood
Step10: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode. | Python Code:
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
Explanation: Estimating Joint Tour Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
os.chdir('test')
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
modelname = "joint_tour_frequency"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
Explanation: Load data and prep model for estimation
End of explanation
data.coefficients
Explanation: Review data loaded from the EDB
The next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.
Coefficients
End of explanation
data.spec
Explanation: Utility specification
End of explanation
data.chooser_data
Explanation: Chooser data
End of explanation
model.estimate(method='SLSQP')
Explanation: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods, including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not respect constraints on parameters.
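For intuition about what the estimator is doing, here is a toy single-coefficient binary logit fitted by maximum likelihood with gradient ascent in pure Python (illustrative; larch's BHHH/SLSQP estimate full multinomial specifications and honor constraints):

```python
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0, 0, 1, 1, 1]  # observed binary choices

def log_like(beta):
    # Log-likelihood of the data under p(y=1 | x) = sigmoid(beta * x).
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p if y == 1 else 1.0 - p)
    return ll

beta, lr = 0.0, 0.1
for _ in range(500):
    # Score (gradient of the log-likelihood) for the logit model.
    grad = sum((y - 1.0 / (1.0 + math.exp(-beta * x))) * x
               for x, y in zip(xs, ys))
    beta += lr * grad
print(round(beta, 2))  # ≈ 0.83
```

The fitted coefficient maximizes the log-likelihood, which is exactly the quantity reported (along with t-statistics) in the estimation report written below.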
End of explanation
model.parameter_summary()
Explanation: Estimated coefficients
End of explanation
from activitysim.estimation.larch import update_coefficients
result_dir = data.edb_directory/"estimated"
update_coefficients(
model, data, result_dir,
output_file=f"{modelname}_coefficients_revised.csv",
);
Explanation: Output Estimation Results
End of explanation
model.to_xlsx(
result_dir/f"{modelname}_model_estimation.xlsx",
data_statistics=False,
)
Explanation: Write the model estimation report, including coefficient t-statistic and log likelihood
End of explanation
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation |
8,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
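Both transformations are straightforward to write by hand; a pure-Python sketch (the MNIST loader above already does this for us):

```python
def one_hot(label, num_classes=10):
    # Vector of zeros with a single 1 at the label's index.
    v = [0] * num_classes
    v[label] = 1
    return v

def flatten(image):
    # 28x28 nested list -> length-784 flat list, row by row.
    return [pixel for row in image for pixel in row]

print(one_hot(4))                  # [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
image = [[0] * 28 for _ in range(28)]
print(len(flatten(image)))         # 784
```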
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
testX.shape
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 128, activation="ReLU")
net = tflearn.fully_connected(net, 32, activation="ReLU")
net = tflearn.fully_connected(net, 10, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
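Since a stack of fully connected layers is just repeated "matrix multiply, add bias, apply activation", it can help to see that computation written out framework-free before training. This is a minimal plain-Python sketch (the 784 → 128 → 32 → 10 sizes mirror the code above; the weights here are random, not trained, so it is only an illustration of the shapes and data flow, not TFLearn's implementation):

```python
import random

def dense(x, w, b, relu=True):
    # One fully connected layer: out[j] = sum_i x[i] * w[i][j] + b[j]
    out = [sum(xi * wij for xi, wij in zip(x, col)) + bj
           for col, bj in zip(zip(*w), b)]
    if relu:
        out = [max(0.0, o) for o in out]
    return out

def init(n_in, n_out, seed=0):
    rnd = random.Random(seed)
    w = [[rnd.gauss(0, 0.1) for _ in range(n_out)] for _ in range(n_in)]
    return w, [0.0] * n_out

w1, b1 = init(784, 128)
w2, b2 = init(128, 32)
w3, b3 = init(32, 10)

x = [0.5] * 784                        # one fake flattened "image"
h = dense(x, w1, b1)                   # 784 -> 128, ReLU
h = dense(h, w2, b2)                   # 128 -> 32, ReLU
logits = dense(h, w3, b3, relu=False)  # 32 -> 10; softmax would follow
print(len(logits))                     # 10
```

TFLearn's fully_connected layers perform exactly this computation, just expressed as trainable graph operations with the softmax/regression machinery on top.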
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
8,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Table of Contents
Step1: A simple neural network in Numpy
So what is a neural network anyway? Let's start by looking at a picture.
A neural network consists of several layers of interconnected nodes. Each node represents a weighted sum of all the nodes in the previous layer plus a bias.
$$\sum_j w_j x_j + b$$
Additionally, each hidden layer is modified by a non-linear function $g(z)$. One very simple and popular activation function is called ReLU
Step2: Now this is of course just some random garbage. The goal of the optimization algorithm is to change the weights and biases to get the output closer to the target. Mathematically, we are minimizing a loss function, for example the mean squared error.
$$L = \frac{1}{N_\mathrm{samples}} \sum_{i=1}^{N_\mathrm{samples}} (\mathrm{output} - \mathrm{target})^2$$
To minimize this loss we are using stochastic gradient descent. For this we need to compute the gradient of the loss with respect to all weights and biases. Basically, this means using the chain rule of calculus. The algorithm to do this efficiently is called backpropagation.
Step3: Define backpropagation in numpy
Step4: Train the network
Step5: Building a neural network in Keras
Step6: Post-processing
Step7: The raw ensemble contains 50 ensemble members. We take the mean and standard deviations of these 50 values which is a good approximation since temperature is normally distributed.
Step8: In total we have around 500 stations for every day, with some missing observation data. Altogether that makes around 180k samples.
Step9: The goal of post-processing is to produce a sharp but reliable distribution.
Step10: To measure the skill of the forecast, we use the CRPS
Step11: For a normal distribution we can easily compute the CRPS from the mean and standard deviation for the raw ensemble, which is the score we want to improve.
Step12: Simple postprocessing
The most common post-processing technique for this sort of problem is called Ensemble Model Output Statistics (Gneiting et al. 2005). In this technique, the goal is to find a distribution
$$ \mathcal{N}(a + bX, c + dS^2), $$
where $X$ is the raw ensemble mean and $S$ is the raw ensemble standard deviation, so that
$$ \min_{a, b, c, d} \frac{1}{N_{\mathrm{sample}}} \sum_{i = 1}^{N_{\mathrm{sample}}} crps(\mathcal{N}(a + bX, c + dS^2), y_i)$$
The minimum over all samples is found using some optimization algorithm. We can also view this as a network graph
There are two commonly used variants of EMOS.
Step13: So we basically get the same score as global EMOS, which is what we would expect.
Add station information with embeddings
The stations probably differ a lot in their post-processing characteristics, so we want to include this somehow. In local EMOS, we fit a separate model for each station, but this takes a long time and doesn't optimally use all the training data.
Embeddings are a neural network technique which provide a natural way to include station information. An embedding is a mapping from a discrete object, in our case the station ID, to a vector. The elements of the vector are learned by the network just like the other weights and biases and represent some extra information about each station.
Step14: This score is about 1% better than local EMOS and is much faster.
Step15: What about a neural network?
So far the networks we used were simple linear networks, nothing neural about them. Let's try adding a hidden layer.
Step16: For the simple input that we have, adding non-linearity doesn't help to improve the fit.
Adding more variables
So far we have only used the temperature forecast as input but really we have a lot more variables from each forecast which might give us more information about the weather situation.
In traditional post-processing there are techniques to utilize these auxiliary variables, called boosting techniques.
Here are the benchmark scores from Sebastian's boosting experiments
Step17: As with temperature, we took the ensemble mean and standard deviation of all auxiliary variables (except for the constants). Now we can build the same network as earlier but with 40 inputs.
Simple linear net with auxiliary variables
Step18: So we get a big improvement from 1.01 for only temperature. We are also doing better than global boosting. Next let's include our station embeddings.
Auxiliary variables with station embeddings
Step19: This is slightly worse than the local boosting algorithm.
Neural network
So far we have only used linear networks. Now let's add some non-linearity with one hidden layer.
Step20: The added non-linearity gives us another few percent improvement compared to the local boosting algorithm. Why not try increasing the number of hidden layers and nodes?
A more complex neural network
Step21: Hmmm, weird...
This is what is called overfitting and is a serious problem in machine learning. The model basically memorizes the training examples and does not generalize to unseen samples.
The model complexity is limited by the amount of training data!
A longer training period
Finally, let's see if our score gets better if we train with a longer training period. | Python Code:
# Imports
import numpy as np
import sys
sys.path.append('../') # This is where all the python files are!
from importlib import reload
import utils; reload(utils)
from utils import *
import keras_models; reload(keras_models)
from keras_models import *
from losses import crps_cost_function
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import animation
import seaborn as sns
sns.set_style('dark')
sns.set_context('poster')
from tqdm import tqdm_notebook as tqdm
from collections import OrderedDict
from IPython.display import HTML
import time
from keras.utils.generic_utils import get_custom_objects
metrics_dict = dict([(f.__name__, f) for f in [crps_cost_function]])
get_custom_objects().update(metrics_dict)
# Basic setup
DATA_DIR = '/Volumes/STICK/data/ppnn_data/' # Mac
# DATA_DIR = '/project/meteo/w2w/C7/ppnn_data/' # LMU
Explanation: # Table of Contents
<div class="toc" style="margin-top: 1em;"><ul class="toc-item" id="toc-level0"><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#A-simple-neural-network-in-Numpy" data-toc-modified-id="A-simple-neural-network-in-Numpy-1"><span class="toc-item-num">1 </span>A simple neural network in Numpy</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Define-backpropagation-in-numpy" data-toc-modified-id="Define-backpropagation-in-numpy-1.1"><span class="toc-item-num">1.1 </span>Define backpropagation in numpy</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Train-the-network" data-toc-modified-id="Train-the-network-1.2"><span class="toc-item-num">1.2 </span>Train the network</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Building-a-neural-network-in-Keras" data-toc-modified-id="Building-a-neural-network-in-Keras-1.3"><span class="toc-item-num">1.3 </span>Building a neural network in Keras</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Post-processing:-The-data" data-toc-modified-id="Post-processing:-The-data-2"><span class="toc-item-num">2 </span>Post-processing: The data</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Simple-postprocessing" data-toc-modified-id="Simple-postprocessing-3"><span class="toc-item-num">3 </span>Simple postprocessing</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Add-station-information-with-embeddings" data-toc-modified-id="Add-station-information-with-embeddings-3.1"><span class="toc-item-num">3.1 </span>Add station information with embeddings</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#What-about-a-neural-network?" 
data-toc-modified-id="What-about-a-neural-network?-3.2"><span class="toc-item-num">3.2 </span>What about a neural network?</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Adding-more-variables" data-toc-modified-id="Adding-more-variables-4"><span class="toc-item-num">4 </span>Adding more variables</a></span><ul class="toc-item"><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Simple-linear-net-with-auxiliary-variables" data-toc-modified-id="Simple-linear-net-with-auxiliary-variables-4.1"><span class="toc-item-num">4.1 </span>Simple linear net with auxiliary variables</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Auxiliary-variables-with-station-embeddings" data-toc-modified-id="Auxiliary-variables-with-station-embeddings-4.2"><span class="toc-item-num">4.2 </span>Auxiliary variables with station embeddings</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Neural-network" data-toc-modified-id="Neural-network-4.3"><span class="toc-item-num">4.3 </span>Neural network</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#A-more-complex-neural-network" data-toc-modified-id="A-more-complex-neural-network-4.4"><span class="toc-item-num">4.4 </span>A more complex neural network</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Hmmm,-weird..." 
data-toc-modified-id="Hmmm,-weird...-4.5"><span class="toc-item-num">4.5 </span>Hmmm, weird...</a></span></li></ul></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#A-longer-training-period" data-toc-modified-id="A-longer-training-period-5"><span class="toc-item-num">5 </span>A longer training period</a></span></li><li><span><a href="http://localhost:8888/notebooks/presentation_notebook.ipynb#Conclusion" data-toc-modified-id="Conclusion-6"><span class="toc-item-num">6 </span>Conclusion</a></span></li></ul></div>
Neural networks for post-processing NWP forecasts
End of explanation
# Create the data
n_samples = 50
x = np.expand_dims(np.random.uniform(0, 1, n_samples), -1)
y = np.sin(2 * x) + 0.5 * np.sin(15 * x)
plt.scatter(x, y);
x.shape
# Initialize the weights and biases for the input --> hidden layer step
n_hidden = 200 # Number of nodes in hidden layer
w1 = np.random.normal(size=(1, n_hidden)) # a matrix
b1 = np.random.normal(size=n_hidden) # a vector
w1.shape, b1.shape
# Do the first step
hidden = np.dot(x, w1) + b1
hidden.shape
# Here comes the non-linearity
def relu(z):
return np.maximum(0, z)
hidden = relu(hidden)
# Now the weights and biases for the hidden --> output step
w2 = np.random.normal(size=(n_hidden, 1))
b2 = np.random.normal(size=1)
w2.shape, b2.shape
# Now the second step
preds = np.dot(hidden, w2) + b2
preds.shape
plt.scatter(x, y);
plt.scatter(x, preds);
Explanation: A simple neural network in Numpy
So what is a neural network anyway? Let's start by looking at a picture.
A neural network consists of several layers of interconnected nodes. Each node represents a weighted sum of all the nodes in the previous layer plus a bias.
$$\sum_j w_j x_j + b$$
Additionally, each hidden layer is modified by a non-linear function $g(z)$. One very simple and popular activation function is called ReLU:
$$\mathrm{relu}(z) = \mathrm{max}(0, z)$$
Let's build a simplified network with one input and one output in pure Numpy.
End of explanation
def mse(predictions, targets):
return np.mean((predictions - targets) ** 2)
mse(preds, y)
Explanation: Now this is of course just some random garbage. The goal of the optimization algorithm is to change the weights and biases to get the output closer to the target. Mathematically, we are minimizing a loss function, for example the mean squared error.
$$L = \frac{1}{N_\mathrm{samples}} \sum_{i=1}^{N_\mathrm{samples}} (\mathrm{output} - \mathrm{target})^2$$
To minimize this loss we are using stochastic gradient descent. For this we need to compute the gradient of the loss with respect to all weights and biases. Basically, this means using the chain rule of calculus. The algorithm to do this efficiently is called backpropagation.
End of explanation
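A useful sanity check for any hand-written backpropagation code is a finite-difference test: nudge one parameter by a small ε and compare the resulting change in the loss to the analytic gradient. Here is a minimal sketch for a single linear unit with made-up numbers (the same idea applies to every weight and bias in the network below):

```python
def loss(w, b, xs, ys):
    # Mean squared error of the linear model w * x + b
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [0.0, 0.5, 1.0]
ys = [1.0, 2.0, 3.0]
w, b, eps = 0.3, 0.1, 1e-6

# Analytic gradient: dL/dw = (2/N) * sum((w*x + b - y) * x)
analytic = 2 * sum((w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)

# Central finite-difference approximation of the same derivative
numeric = (loss(w + eps, b, xs, ys) - loss(w - eps, b, xs, ys)) / (2 * eps)
print(abs(analytic - numeric))  # should be tiny
```

If the two numbers disagree by more than a few orders of magnitude above machine precision, the backprop derivation has a bug.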
# Some helper function to reset the weights
def init_weights(n_hidden):
w1 = np.random.normal(size=(1, n_hidden)) # a matrix
b1 = np.random.normal(size=n_hidden) # a vector
w2 = np.random.normal(size=(n_hidden, 1))
b2 = np.random.normal(size=1)
return [w1, w2], [b1, b2]
# First define the forward pass.
def forward_pass(x, weights, biases):
hidden = relu(np.dot(x, weights[0]) + biases[0])
return np.dot(hidden, weights[1]) + biases[1]
# Define the derivative of the loss function and the activation function
def dmse(predictions, targets):
return predictions - targets
def drelu(z):
return 1. * (z > 0)
def backprop_and_update(x, y, weights, biases, lr=1e-5):
# Compute the predictions
hidden = relu(np.dot(x, weights[0]) + biases[0])
preds = np.dot(hidden, weights[1]) + biases[1]
# Compute the loss
loss = mse(preds, y)
# Compute Ds
delta2 = dmse(preds, y)
dw2 = np.dot(hidden.T, delta2)
db2 = np.sum(delta2, axis=0)
delta1 = np.dot(delta2, weights[1].T) * drelu(hidden)
dw1 = np.dot(x.T, delta1)
db1 = np.sum(delta1, axis=0)
# Update parameters
weights[0] -= lr * dw1
biases[0] -= lr * db1
weights[1] -= lr * dw2
biases[1] -= lr * db2
return loss
Explanation: Define backpropagation in numpy
End of explanation
weights, biases = init_weights(n_hidden)
n_steps = 50000
saved_preds = []
pbar = tqdm(total=n_steps)
for i in range(n_steps):
loss = backprop_and_update(x, y, weights, biases, lr=1e-4)
pbar.update(1)
pbar.set_postfix(OrderedDict({'loss': loss}))
if i % 500 == 0:
saved_preds.append(forward_pass(x, weights, biases))
pbar.close()
saved_preds = np.array(saved_preds)
saved_preds.shape
fig, ax = plt.subplots()
ax.scatter(x, y)
s = ax.scatter(x[:, 0], saved_preds[0, :, 0])
ax.set_ylim(0, 2)
def animate(i):
y_i = saved_preds[i, :, 0]
s.set_offsets(np.array([x[:, 0], y_i]).T)
plt.close();
ani = animation.FuncAnimation(fig, animate, np.arange(saved_preds.shape[0]),
interval=100)
HTML(ani.to_html5_video())
Explanation: Train the network
End of explanation
network = Sequential([
Dense(n_hidden, input_dim=1, activation='relu'),
Dense(1)
])
network.summary()
network.compile(SGD(), 'mse')
network.fit(x, y, batch_size=x.shape[0], epochs=10)
Explanation: Building a neural network in Keras
End of explanation
train_dates = ['2015-01-01', '2016-01-01']
test_dates = ['2016-01-01', '2017-01-01']
train_set, test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates)
Explanation: Post-processing: The data
In post-processing we want to correct model biases by looking at past forecast/observation pairs. Specifically, if we are looking at probabilistic/ensemble forecasts, we want the forecast to be calibrated. For example, for all cases where the forecasts say that the chance of rain is 40%, it should actually rain in 40% of these cases.
For this study we are looking at 48h ensemble forecasts of temperature at around 500 DWD surface stations in Germany. Our forecasts are ECMWF 50-member ensemble forecasts taken from the TIGGE dataset, which contains forecasts from 2008 to now, upscaled to 40km grid spacing. The forecast data was bilinearly interpolated to the station locations.
We will use all of 2015 for training the model and all of 2016 to test how well the model performs.
End of explanation
train_set.feature_names
Explanation: The raw ensemble contains 50 ensemble members. We take the mean and standard deviations of these 50 values which is a good approximation since temperature is normally distributed.
End of explanation
len(np.unique(train_set.station_ids))
train_set.features.shape, train_set.targets.shape
Explanation: In total we have around 500 stations for every day, with some missing observation data. Altogether that makes around 180k samples.
End of explanation
plot_fc(train_set, 1001)
Explanation: The goal of post-processing is to produce a sharp but reliable distribution.
End of explanation
plot_fc(train_set, 1001, 'cdf')
Explanation: To measure the skill of the forecast, we use the CRPS:
$$ \mathrm{crps}(F, y) = \int_{-\infty}^{\infty} [F(t) - H(t-y)]^2\mathrm{d}t, $$
where $F(t)$ is the forecast CDF and $H(t-y)$ is the Heaviside function.
End of explanation
np.mean(crps_normal(
test_set.features[:, 0] * test_set.scale_factors[0],
test_set.features[:, 1] * test_set.scale_factors[1],
test_set.targets
))
Explanation: For a normal distribution we can easily compute the CRPS from the mean and standard deviation for the raw ensemble, which is the score we want to improve.
End of explanation
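The crps_normal helper used above presumably relies on the well-known closed form of the CRPS for a Gaussian, $\mathrm{crps}(\mathcal{N}(\mu, \sigma^2), y) = \sigma\left[z(2\Phi(z) - 1) + 2\varphi(z) - 1/\sqrt{\pi}\right]$ with $z = (y - \mu)/\sigma$. Here is a standalone sketch of that formula using only the standard library — an illustration, not the project's actual implementation:

```python
import math

def crps_gaussian(mu, sigma, y):
    # Closed-form CRPS of a normal forecast N(mu, sigma^2) against observation y
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)           # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))                    # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

print(round(crps_gaussian(0.0, 1.0, 0.0), 4))  # 0.2337
```

Note how the score rewards both calibration and sharpness: at a fixed observation, a wider forecast distribution (larger sigma) scores worse even when the mean is perfect.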
# Build the network using Keras
fc_model = Sequential([
Dense(2, input_dim=2)
])
fc_model.summary()
fc_model.compile(Adam(0.1), crps_cost_function)
fc_model.fit(train_set.features, train_set.targets, epochs=20, batch_size=4096)
# Now display the score for 2016
fc_model.evaluate(test_set.features, test_set.targets, 4096, verbose=0)
Explanation: Simple postprocessing
The most common post-processing technique for this sort of problem is called Ensemble Model Output Statistics (Gneiting et al. 2005). In this technique, the goal is to find a distribution
$$ \mathcal{N}(a + bX, c + dS^2), $$
where $X$ is the raw ensemble mean and $S$ is the raw ensemble standard deviation, so that
$$ \min_{a, b, c, d} \frac{1}{N_{\mathrm{sample}}} \sum_{i = 1}^{N_{\mathrm{sample}}} crps(\mathcal{N}(a + bX, c + dS^2), y_i)$$
The minimum over all samples is found using some optimization algorithm. We can also view this as a network graph
There are two commonly used variant of EMOS: Global EMOS where all stations share the same coefficients and training happens over a rolling window of e.g. 25 days and local EMOS where each station is fit separately with a longer training window (e.g. 1 year).
The CRPS scores for 2016 are:
- Global EMOS: 1.01
- Local EMOS: 0.92
This is the benchmark for our networks.
Let's start by fitting a very simple fully connected network like this:
End of explanation
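To make the EMOS optimization concrete, here is a deliberately stripped-down sketch that fits only the mean part a + bX by gradient descent on squared error (the real model also fits c and d for the spread and minimizes the CRPS rather than the squared error; the data here are synthetic with a known bias):

```python
# Synthetic "raw ensemble mean" forecasts with a known bias:
# the observation is always 2.0 + 0.5 * forecast
X = [float(i) for i in range(20)]
y = [2.0 + 0.5 * x for x in X]

a, b, lr = 0.0, 0.0, 0.005
for _ in range(20000):
    # Gradients of mean((a + b*x - y)^2) with respect to a and b
    ga = 2 * sum(a + b * x - t for x, t in zip(X, y)) / len(X)
    gb = 2 * sum((a + b * x - t) * x for x, t in zip(X, y)) / len(X)
    a -= lr * ga
    b -= lr * gb

print(round(a, 2), round(b, 2))  # 2.0 0.5
```

The network built in the next cell is the same idea expressed as a Keras graph: a Dense(2) layer maps the two inputs to the two distribution parameters, and the CRPS cost function plays the role of the loss.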
emb_size = 2
max_id = int(np.max([train_set.cont_ids.max(), test_set.cont_ids.max()]))
max_id
features_inp = Input(shape=(2,))
id_inp = Input(shape=(1,))
emb = Embedding(max_id+1, emb_size)(id_inp)
emb = Flatten()(emb)
x = Concatenate()([features_inp, emb])
outp = Dense(2)(x)
emb_model = Model([features_inp, id_inp], outp)
emb_model.summary()
emb_model.compile(Adam(0.1), crps_cost_function)
emb_model.fit([train_set.features, train_set.cont_ids], train_set.targets,
epochs=20, batch_size=4096);
emb_model.evaluate([test_set.features, test_set.cont_ids], test_set.targets, 4096, 0)
Explanation: So we basically get the same score as global EMOS, which is what we would expect.
Add station information with embeddings
The stations probably differ a lot in their post-processing characteristics, so we want to include this somehow. In local EMOS, we fit a separate model for each station, but this takes a long time and doesn't optimally use all the training data.
Embeddings are a neural network technique which provide a natural way to include station information. An embedding is a mapping from a discrete object, in our case the station ID, to a vector. The elements of the vector are learned by the network just like the other weights and biases and represent some extra information about each station.
End of explanation
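Conceptually, an embedding is just a trainable lookup table: row i of a matrix holds the vector for ID i, and only the rows whose IDs appear in a batch receive gradient updates. A minimal sketch (the table here is randomly initialized rather than trained, and the sizes are made up; emb_size=2 matches the setting above):

```python
import random

n_stations, emb_size = 5, 2
rnd = random.Random(42)
emb_table = [[rnd.gauss(0, 0.1) for _ in range(emb_size)]
             for _ in range(n_stations)]

def embed(station_id):
    # The "layer" is just an index into the table
    return emb_table[station_id]

batch_ids = [0, 3, 3, 1]
batch_vectors = [embed(i) for i in batch_ids]
print(len(batch_vectors), len(batch_vectors[0]))  # 4 2
```

In the Keras model below, Embedding(max_id+1, emb_size) builds exactly such a table, and Flatten plus Concatenate splice the looked-up vector onto the two forecast features.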
preds = emb_model.predict([test_set.features, test_set.cont_ids], 4096)
plot_fc(test_set, 5, preds=preds)
Explanation: This score is about 1% better than local EMOS and is much faster.
End of explanation
def create_emb_hidden_model(hidden_nodes, n_features=2, activation='relu'):
features_inp = Input(shape=(n_features,))
id_inp = Input(shape=(1,))
emb = Embedding(max_id+1, emb_size)(id_inp)
emb = Flatten()(emb)
x = Concatenate()([features_inp, emb])
for h in hidden_nodes:
x = Dense(h, activation=activation)(x)
outp = Dense(2)(x)
return Model([features_inp, id_inp], outp)
neural_net = create_emb_hidden_model([1024])
neural_net.summary()
neural_net.compile(Adam(0.1), crps_cost_function)
neural_net.fit([train_set.features, train_set.cont_ids], train_set.targets,
epochs=20, batch_size=4096)
neural_net.evaluate([test_set.features, test_set.cont_ids], test_set.targets, 4096, 0)
Explanation: What about a neural network?
So far the networks we used were simple linear networks, nothing neural about them. Let's try adding a hidden layer.
End of explanation
#aux_train_set, aux_test_set = get_train_test_sets(DATA_DIR, train_dates, test_dates,
# aux_dict=aux_dict)
with open(DATA_DIR + 'pickled/aux_15_16.pkl', 'rb') as f:
aux_train_set, aux_test_set = pickle.load(f)
print(aux_train_set.feature_names)
len(aux_train_set.feature_names)
Explanation: For the simple input that we have, adding non-linearity doesn't help to improve the fit.
Adding more variables
So far we have only used the temperature forecast as input but really we have a lot more variables from each forecast which might give us more information about the weather situation.
In traditional post-processing there are techniques to utilize these auxiliary variables, called boosting techniques.
Here are the benchmark scores from Sebastian's boosting experiments:
- global boosting: 0.97
- local boosting: 0.87
As a first attempt we can simply throw in these extra variables to our standard network and see what happens.
End of explanation
aux_fc_model = Sequential([
Dense(2, input_dim=40)
])
aux_fc_model.compile(Adam(0.1), crps_cost_function)
aux_fc_model.summary()
aux_fc_model.fit(aux_train_set.features, aux_train_set.targets, epochs=20, batch_size=4096)
aux_fc_model.evaluate(aux_test_set.features, aux_test_set.targets, 4096, 0)
Explanation: As with temperature, we took the ensemble mean and standard deviation of all auxiliary variables (except for the constants). Now we can build the same network as earlier but with 40 inputs.
Simple linear net with auxiliary variables
End of explanation
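The reduction from the raw 50-member ensemble to (mean, standard deviation) features per variable can be sketched as follows — a simplified stand-in with a made-up 5-member ensemble, not the project's actual preprocessing code:

```python
import math

def ensemble_stats(members):
    # Reduce one variable's ensemble to a (mean, standard deviation) pair
    m = sum(members) / len(members)
    var = sum((x - m) ** 2 for x in members) / len(members)  # population variance
    return m, math.sqrt(var)

t2m_ensemble = [12.1, 11.8, 12.5, 12.0, 11.6]  # hypothetical 5-member forecast
mean, std = ensemble_stats(t2m_ensemble)
print(round(mean, 2), round(std, 3))  # 12.0 0.303
```

Doing this for each of the 20 variables yields the 40 input features used by the networks in this section.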
def create_emb_hidden_model(hidden_nodes, n_features=2, activation='relu'):
features_inp = Input(shape=(n_features,))
id_inp = Input(shape=(1,))
emb = Embedding(max_id+1, emb_size)(id_inp)
emb = Flatten()(emb)
x = Concatenate()([features_inp, emb])
for h in hidden_nodes:
x = Dense(h, activation=activation)(x)
outp = Dense(2)(x)
return Model([features_inp, id_inp], outp)
aux_emb_model = create_emb_hidden_model([], n_features=40)
aux_emb_model.compile(Adam(0.01), crps_cost_function)
aux_emb_model.summary()
aux_emb_model.fit([aux_train_set.features, aux_train_set.cont_ids], aux_train_set.targets,
epochs=50, batch_size=1024);
aux_emb_model.evaluate([aux_test_set.features, aux_test_set.cont_ids],
aux_test_set.targets, 4096, 0)
Explanation: So we get a big improvement from 1.01 for only temperature. We are also doing better than global boosting. Next let's include our station embeddings.
Auxiliary variables with station embeddings
End of explanation
nn_model = create_emb_hidden_model([100], n_features=40)
nn_model.compile(Adam(0.005), crps_cost_function)
nn_model.summary()
#nn_model.fit([aux_train_set.features, aux_train_set.cont_ids], aux_train_set.targets,
# epochs=50, batch_size=4096);
#nn_model.save(DATA_DIR + 'saved_models/nn_model.h5')
nn_model = keras.models.load_model(DATA_DIR + 'saved_models/nn_model.h5')
nn_model.evaluate([aux_test_set.features, aux_test_set.cont_ids],
aux_test_set.targets, 4096, 0)
Explanation: This is slightly worse than the local boosting algorithm.
Neural network
So far we have only used linear networks. Now let's add some non-linearity with one hidden layer.
End of explanation
better_nn = create_emb_hidden_model([512, 512], 40)
better_nn.compile(Adam(0.01), crps_cost_function)
better_nn.summary()
#better_nn.fit([aux_train_set.features, aux_train_set.cont_ids], aux_train_set.targets,
# epochs=50, batch_size=1024);
#better_nn.save(DATA_DIR + 'saved_models/better_nn.h5')
better_nn = keras.models.load_model(DATA_DIR + 'saved_models/better_nn.h5')
# Training score
better_nn.evaluate([aux_train_set.features, aux_train_set.cont_ids],
aux_train_set.targets, 4096, 0)
# Test score
better_nn.evaluate([aux_test_set.features, aux_test_set.cont_ids],
aux_test_set.targets, 4096, 0)
Explanation: The added non-linearity gives us another few percent improvement compared to the local boosting algorithm. Why not try increasing the number of hidden layers and nodes?
A more complex neural network
End of explanation
long_train_dates = ['2008-01-01', '2016-01-01']
#long_train_set, long_test_set = get_train_test_sets(DATA_DIR, long_train_dates, test_dates,
# aux_dict=aux_dict)
with open(DATA_DIR + 'pickled/aux_08-15_16.pkl', 'rb') as f:
long_train_set, long_test_set = pickle.load(f)
long_train_set.features.shape
nn_model = create_emb_hidden_model([500], n_features=40)
nn_model.compile(Adam(0.002), crps_cost_function)
nn_model.summary()
#nn_model.fit([long_train_set.features, long_train_set.cont_ids], long_train_set.targets,
# epochs=100, batch_size=4096, validation_split=0.2,
# callbacks=[EarlyStopping(patience=2)]);
#nn_model.save(DATA_DIR + 'saved_models/nn_model_long.h5')
nn_model = keras.models.load_model(DATA_DIR + 'saved_models/nn_model_long.h5')
nn_model.evaluate([long_test_set.features, long_test_set.cont_ids],
long_test_set.targets, 4096, 0)
Explanation: Hmmm, weird...
This is what is called overfitting and is a serious problem in machine learning. The model basically memorizes the training examples and does not generalize to unseen samples.
The model complexity is limited by the amount of training data!
A longer training period
Finally, let's see if our score gets better if we train with a longer training period.
End of explanation |
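The memorization failure mode behind overfitting is easy to demonstrate outside of neural networks: a "model" that simply stores its training pairs scores perfectly on data it has seen and no better than guessing on data it has not. A toy sketch with random, unlearnable labels:

```python
import random

rnd = random.Random(0)
train = [(i, rnd.randint(0, 1)) for i in range(100)]   # random labels: nothing to learn
test = [(i, rnd.randint(0, 1)) for i in range(100, 200)]

memory = dict(train)          # a "model" that memorizes every training example

def predict(x):
    return memory.get(x, 0)   # unseen inputs fall back to a constant guess

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test_acc = sum(predict(x) == y for x, y in test) / len(test)
print(train_acc, test_acc)    # 1.0 vs. roughly 0.5
```

The large network above behaves analogously: with enough parameters relative to the training data, it can drive the training CRPS down by memorizing samples while the test CRPS gets worse.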
8,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translation Matrix Tutorial
What is it?
Suppose we are given a set of word pairs and their associated vector representations $\{x_{i}, z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language, and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by the following optimization problem
Step1: For this tutorial, we'll train our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs. Each word pair is an English word with its corresponding Italian word.
Dataset download
Step2: This tutorial uses 300-dimensional vectors of English words as source and vectors of Italian words as target. (Those vectors were trained with the word2vec toolkit using CBOW. The context window was set to 5 words on either side of the target,
the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with the negative sampling method, drawing 10 samples from the noise distribution.)
Download dataset
Step3: Train the translation matrix
Step4: Prediction Time
Step5: Part two
Step6: Part three
Step7: The Creation Time for the Translation Matrix
Testing the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtained about 20K word pairs and their corresponding word vectors, or you can download them from this: word_dict.pkl
Step8: You will see a two-dimensional plot whose horizontal axis is the size of the corpus and whose vertical axis is the time to train a translation matrix (in seconds). As the size of the corpus increases, the training time increases linearly.
Linear Relationship Between Languages
To have a better understanding of the underlying principles, we visualized the word vectors using PCA and noticed that the vector representations of similar words in different languages were related by a linear transformation.
Step9: The figure shows that the word vectors for English number one to five and the corresponding Italian words uno to cinque have similar geometric arrangements. So the relationship between vector spaces that represent these two languages can be captured by linear mapping.
If we know the translation of one to four from English to Italian, we can learn the transformation matrix that can help us to translate five or other numbers to the Italian word.
Step10: You will probably see two kinds of nodes in different colors, one for English and the other for Italian. For the translation of the word five, we return the top 3 similar words [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Let's look at some animal words; the figure shows that most of these words also share similar geometric arrangements.
Step11: You will probably see two kinds of nodes in different colors, one for English and the other for Italian. For the translation of the word birds, we return the top 3 similar words [u'uccelli', u'garzette', u'iguane']. We can easily see that the translation of animal words is as convincing as that of the numbers.
Translation Matrix Revisited
Warning
Step12: Here, we train two Doc2vec models; you can choose the parameters yourself. We trained on 15k documents for model1 and 50k documents for model2, but you should mix some of the 15k documents from model1 into model2, as discussed before.
Step13: For the IMDB training dataset, we train a classifier on the training data, which has 25k documents with positive and negative labels, and then use this classifier to predict the test data, to see what accuracy the document vectors learned by the different methods can achieve.
Step14: For experiment one, we use the vectors learned by the Doc2vec method. To evaluate those document vectors, we split the 50k documents into two parts, one for training and the other for testing.
Step15: For the experiment two, the document vectors are learned by the back-mapping method, which has a linear mapping for the model1 and model2. Using this method like translation matrix for the word translation, If we provide the vector for the addtional 35k document vector in model2, we can infer this vector for the model1.
Step16: As we can see that, the vectors learned by back-mapping method performed not bad but still need improved.
Visualization
We pick some documents and extract their vectors from both model1 and model2; we can see that they also share a similar geometric arrangement.
import os
from gensim import utils
from gensim.models import translation_matrix
from gensim.models import KeyedVectors
Explanation: Translation Matrix Tutorial
What is it ?
Suppose we are given a set of word pairs and their associated vector representations $\{x_{i}, z_{i}\}_{i=1}^{n}$, where $x_{i} \in R^{d_{1}}$ is the distributed representation of word $i$ in the source language, and $z_{i} \in R^{d_{2}}$ is the vector representation of its translation. Our goal is to find a transformation matrix $W$ such that $Wx_{i}$ approximates $z_{i}$. In practice, $W$ can be learned by solving the following optimization problem:
<center>$\min \limits_{W} \sum \limits_{i=1}^{n} ||Wx_{i}-z_{i}||^{2}$</center>
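This minimization is an ordinary least-squares problem with a closed-form solution. A minimal NumPy sketch on toy data (not the tutorial's embeddings; dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))      # rows are source vectors x_i (d1 = 4)
W_true = rng.normal(size=(3, 4))  # hidden mapping to recover (d2 = 3)
Z = X @ W_true.T                  # rows are target vectors z_i

# min_W sum_i ||W x_i - z_i||^2  <=>  ordinary least squares on X B = Z
B, *_ = np.linalg.lstsq(X, Z, rcond=None)
W = B.T                           # shape (d2, d1), so z ~= W @ x

print(np.allclose(W, W_true))     # exact recovery on noise-free data
```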
Resources
Tomas Mikolov, Quoc V. Le, Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation
Georgiana Dinu, Angeliki Lazaridou and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem
End of explanation
!rm 1nuIuQoT
train_file = "OPUS_en_it_europarl_train_5K.txt"
with utils.smart_open(train_file, "r") as f:
word_pair = [tuple(utils.to_unicode(line).strip().split()) for line in f]
print (word_pair[:10])
Explanation: For this tutorial, we'll train our model using the English -> Italian word pairs from the OPUS collection. This corpus contains 5000 word pairs. Each pair is an English word with its corresponding Italian word.
Dataset download:
OPUS_en_it_europarl_train_5K.txt
End of explanation
# Load the source language word vector
source_word_vec_file = "EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
source_word_vec = KeyedVectors.load_word2vec_format(source_word_vec_file, binary=False)
# Load the target language word vector
target_word_vec_file = "IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt"
target_word_vec = KeyedVectors.load_word2vec_format(target_word_vec_file, binary=False)
Explanation: This tutorial uses 300-dimensional vectors of English words as source and vectors of Italian words as target. (These vectors were trained with the word2vec toolkit using CBOW. The context window was set to 5 words on either side of the target, the sub-sampling option was set to 1e-05, and the probability of a target word was estimated with negative sampling, drawing 10 samples from the noise distribution.)
Download dataset:
EN.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt
IT.200K.cbow1_wind5_hs0_neg10_size300_smpl1e-05.txt
End of explanation
transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, word_pair)
transmat.train(word_pair)
print ("the shape of translation matrix is: ", transmat.translation_matrix.shape)
Explanation: Train the translation matrix
End of explanation
# The pair is in the form of (English, Italian), we can see whether the translated word is correct
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.items():
print ("word ", k, " and translated word", v)
Explanation: Prediction Time: For any given new word, we can map it to the other language space by computing $z = Wx$; then we find the word whose representation is closest to $z$ in the target language space, using cosine similarity as the distance metric.
Part one:
Let's look at translating some number vocabulary. We use the English words (one, two, three, four and five) as a test.
End of explanation
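The lookup step described above can be sketched independently of gensim: map the source-space vector with $W$, then rank target words by cosine similarity. The toy vocabulary and identity mapping below are illustrative only:

```python
import numpy as np

def nearest_by_cosine(x, W, target_vecs, target_words, topn=3):
    # Map the source-space vector into the target space.
    z = W @ x
    z = z / np.linalg.norm(z)
    # Cosine similarity against every (row-normalised) target vector.
    M = target_vecs / np.linalg.norm(target_vecs, axis=1, keepdims=True)
    sims = M @ z
    return [target_words[i] for i in np.argsort(-sims)[:topn]]

# Toy example where the learned mapping happens to be the identity.
words = ["uno", "due", "tre"]
print(nearest_by_cosine(np.array([0.0, 1.0, 0.0]), np.eye(3), np.eye(3), words, topn=1))
# → ['due']
```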
words = [("apple", "mela"), ("orange", "arancione"), ("grape", "acino"), ("banana", "banana"), ("mango", "mango")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.items():
print ("word ", k, " and translated word", v)
Explanation: Part two:
Let's look at translating some fruit vocabulary. We use the English words (apple, orange, grape, banana and mango) as a test.
End of explanation
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
source_word, target_word = zip(*words)
translated_word = transmat.translate(source_word, 5)
for k, v in translated_word.items():
print ("word ", k, " and translated word", v)
Explanation: Part three:
Let's look at translating some animal vocabulary. We use the English words (dog, pig, cat, horse and bird) as a test.
End of explanation
import pickle
word_dict = "word_dict.pkl"
with utils.smart_open(word_dict, "rb") as f:
word_pair = pickle.load(f)
print ("the length of word pair ", len(word_pair))
import time
test_case = 10
word_pair_length = len(word_pair)
step = word_pair_length // test_case  # integer step so the slices below are valid
duration = []
sizeofword = []
for idx in range(0, test_case):
sub_pair = word_pair[: (idx + 1) * step]
startTime = time.time()
transmat = translation_matrix.TranslationMatrix(source_word_vec, target_word_vec, sub_pair)
transmat.train(sub_pair)
endTime = time.time()
sizeofword.append(len(sub_pair))
duration.append(endTime - startTime)
import plotly
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [Scatter(x=sizeofword, y=duration)],
"layout": Layout(title="time for creation"),
}, filename="tm_creation_time.html")
Explanation: The Creation Time for the Translation Matrix
To test the creation time, we extracted more word pairs from a dictionary built from Europarl (en-it). We obtain about 20K word pairs and their corresponding word vectors, or you can download them from here: word_dict.pkl
End of explanation
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
# you can also using plotly lib to plot in one figure
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_number.html')
Explanation: You will see a two-dimensional plot whose horizontal axis is the size of the corpus and whose vertical axis is the time needed to train a translation matrix (in seconds). As the size of the corpus increases, the training time increases linearly.
Linear Relationship Between Languages
To better understand the principle behind this, we visualized the word vectors using PCA and noticed that the vector representations of similar words in different languages are related by a linear transformation.
End of explanation
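The cells below use sklearn's PCA for the 2-D projections. For reference, a minimal NumPy sketch of the same projection (it agrees with sklearn up to the sign of each component):

```python
import numpy as np

def pca_2d(X):
    Xc = X - X.mean(axis=0)                      # centre the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                         # project onto the top-2 components

pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.1]])
proj = pca_2d(pts)
print(proj.shape)  # → (4, 2)
```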
words = [("one", "uno"), ("two", "due"), ("three", "tre"), ("four", "quattro"), ("five", "cinque")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# Translate the English word five to Italian word
translated_word = transmat.translate([en_words[4]], 3)
print("translation of five: ", translated_word)
# the translated words of five
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_numbers.html')
Explanation: The figure shows that the word vectors for the English numbers one to five and the corresponding Italian words uno to cinque have similar geometric arrangements, so the relationship between the vector spaces of the two languages can be captured by a linear mapping.
If we know the translation of one to four from English to Italian, we can learn a transformation matrix that helps us translate five or other numbers into Italian.
End of explanation
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
en_words, it_words = zip(*words)
# the matplotlib version below is commented out; we use plotly for plotting instead
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:, 0],
y = new_it_words_vec[:, 1],
mode = 'markers+text',
text = it_words,
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
words = [("dog", "cane"), ("pig", "maiale"), ("cat", "gatto"), ("horse", "cavallo"), ("birds", "uccelli")]
en_words, it_words = zip(*words)
en_words_vec = [source_word_vec[item[0]] for item in words]
it_words_vec = [target_word_vec[item[1]] for item in words]
# Translate the English word birds to Italian word
translated_word = transmat.translate([en_words[4]], 3)
print("translation of birds: ", translated_word)
# the translated words of birds
for item in translated_word[en_words[4]]:
it_words_vec.append(target_word_vec[item])
pca = PCA(n_components=2)
new_en_words_vec = pca.fit_transform(en_words_vec)
new_it_words_vec = pca.fit_transform(it_words_vec)
# # remove the code, use the plotly for ploting instead
# fig = plt.figure()
# fig.add_subplot(121)
# plt.scatter(new_en_words_vec[:, 0], new_en_words_vec[:, 1])
# for idx, item in enumerate(en_words):
# plt.annotate(item, xy=(new_en_words_vec[idx][0], new_en_words_vec[idx][1]))
# fig.add_subplot(122)
# plt.scatter(new_it_words_vec[:, 0], new_it_words_vec[:, 1])
# for idx, item in enumerate(it_words):
# plt.annotate(item, xy=(new_it_words_vec[idx][0], new_it_words_vec[idx][1]))
# # annote for the translation of five, the red text annotation is the translation of five
# for idx, item in enumerate(translated_word[en_words[4]]):
# plt.annotate(item, xy=(new_it_words_vec[idx + 5][0], new_it_words_vec[idx + 5][1]),
# xytext=(new_it_words_vec[idx + 5][0] + 0.1, new_it_words_vec[idx + 5][1] + 0.1),
# color="red",
# arrowprops=dict(facecolor='red', shrink=0.1, width=1, headwidth=2),)
# plt.show()
trace1 = Scatter(
x = new_en_words_vec[:, 0],
y = new_en_words_vec[:, 1],
mode = 'markers+text',
text = en_words,
textposition = 'top'
)
trace2 = Scatter(
x = new_it_words_vec[:5, 0],
y = new_it_words_vec[:5, 1],
mode = 'markers+text',
text = it_words[:5],
textposition = 'top'
)
layout = Layout(
showlegend = False,
annotations = [dict(
x = new_it_words_vec[5][0],
y = new_it_words_vec[5][1],
text = translated_word[en_words[4]][0],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[6][0],
y = new_it_words_vec[6][1],
text = translated_word[en_words[4]][1],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
), dict(
x = new_it_words_vec[7][0],
y = new_it_words_vec[7][1],
text = translated_word[en_words[4]][2],
arrowcolor = "black",
arrowsize = 1.5,
arrowwidth = 1,
arrowhead = 0.5
)]
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='relatie_position_for_animal.html')
Explanation: You will see two kinds of differently colored nodes, one for English and the other for Italian. For the translation of the word five, we return the top 3 similar words [u'cinque', u'quattro', u'tre']. We can easily see that the translation is convincing.
Let's look at some animal words; the figure shows that most of the words also share similar geometric arrangements.
End of explanation
import gensim
from gensim.models.doc2vec import TaggedDocument
from gensim.models import Doc2Vec
from collections import namedtuple
from gensim import utils
def read_sentimentDocs():
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with utils.smart_open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no // 25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no // 12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
return train_docs, test_docs, doc_list
train_docs, test_docs, doc_list = read_sentimentDocs()
small_corpus = train_docs[:15000]
large_corpus = train_docs + test_docs
print(len(train_docs), len(test_docs), len(doc_list), len(small_corpus), len(large_corpus))
Explanation: You will see two kinds of differently colored nodes, one for English and the other for Italian. For the translation of the word birds, we return the top 3 similar words [u'uccelli', u'garzette', u'iguane']. The animal-word translations are as convincing as the number translations.
Translation Matrix Revisited
Warning: this part is unstable/experimental, it requires more experimentation and will change soon!
As discussed in this PR, the Translation Matrix can not only translate words from a source language to a target language, but also translate new document vectors back into an old model's space.
For example, suppose we have trained Doc2vec on 15k documents (call this model1) and we are going to train on 35k new documents (call this model2). We include the 15k documents as reference documents alongside the new 35k documents, so we obtain 15k document vectors from model1 and 50k from model2, and both models have vectors for the shared 15k documents. We can use those shared vectors to build a mapping between model1 and model2; with this relation we can back-map model2's vectors into model1's space. In this way, vectors for the 35k new documents are obtained in model1's space.
In this notebook, we use the IMDB dataset as an example. For more information about this dataset, please refer to this. Some of the code is borrowed from this notebook
End of explanation
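The back-mapping relation described above can be sketched with toy vectors: fit a linear map on the documents shared by both models, then apply it to documents that exist only in model2. Dimensions and data here are illustrative, not the notebook's actual embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
V2_shared = rng.normal(size=(100, 8))  # model2 vectors of the shared docs
A_true = rng.normal(size=(8, 8))
V1_shared = V2_shared @ A_true         # model1 vectors (toy linear relation)

# Learn the model2 -> model1 map from the shared documents (least squares).
A, *_ = np.linalg.lstsq(V2_shared, V1_shared, rcond=None)

# Infer a model1-space vector for a document that only exists in model2.
v2_new = rng.normal(size=8)
v1_inferred = v2_new @ A
print(np.allclose(v1_inferred, v2_new @ A_true))  # exact on noise-free data
```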
# Due to limited computing resources, this cell was not run in the notebook.
# You can train on a server and save the models to disk.
import multiprocessing
from random import shuffle
cores = multiprocessing.cpu_count()
model1 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
model2 = Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores)
small_train_docs = train_docs[:15000]
# train for small corpus
model1.build_vocab(small_train_docs)
for epoch in range(50):
shuffle(small_train_docs)
model1.train(small_train_docs, total_examples=len(small_train_docs), epochs=1)
model1.save("small_doc_15000_iter50.bin")
large_train_docs = train_docs + test_docs
# train for large corpus
model2.build_vocab(large_train_docs)
for epoch in range(50):
shuffle(large_train_docs)
model2.train(large_train_docs, total_examples=len(large_train_docs), epochs=1)
# save the model
model2.save("large_doc_50000_iter50.bin")
Explanation: Here we train two Doc2vec models; the parameters can be chosen by yourself. We trained model1 on 15k documents and model2 on 50k documents. As discussed before, the corpus for model2 must include the 15k documents used for model1.
End of explanation
import os
import numpy as np
from sklearn.linear_model import LogisticRegression
def test_classifier_error(train, train_label, test, test_label):
classifier = LogisticRegression()
classifier.fit(train, train_label)
score = classifier.score(test, test_label)
print("the classifier score :", score)
return score
Explanation: For the IMDB dataset, we train a classifier on the training data, which has 25k documents with positive and negative labels, and then use this classifier to predict on the test data, to see what accuracy the document vectors learned by each method achieve.
End of explanation
#you can change the data folder
basedir = "/home/robotcator/doc2vec"
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
m2 = []
for i in range(len(large_corpus)):
m2.append(model2.docvecs[large_corpus[i].tags])
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
for i in range(12500):
train_array[i] = m2[i]
train_label[i] = 1
train_array[i + 12500] = m2[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m2[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m2[i + 37500]
test_label[i + 12500] = 0
print("The vectors are learned by doc2vec method")
test_classifier_error(train_array, train_label, test_array, test_label)
Explanation: In the first experiment, we use the vectors learned directly by Doc2vec. To evaluate these document vectors, we split the 50k documents into two parts, one for training and the other for testing.
End of explanation
from gensim.models import translation_matrix
# you can change the data folder
basedir = "/home/robotcator/doc2vec"
model1 = Doc2Vec.load(os.path.join(basedir, "small_doc_15000_iter50.bin"))
model2 = Doc2Vec.load(os.path.join(basedir, "large_doc_50000_iter50.bin"))
l = model1.docvecs.count
l2 = model2.docvecs.count
m1 = np.array([model1.docvecs[large_corpus[i].tags].flatten() for i in range(l)])
# learn the mapping bettween two model
model = translation_matrix.BackMappingTranslationMatrix(large_corpus[:15000], model1, model2)
model.train(large_corpus[:15000])
for i in range(l, l2):
infered_vec = model.infer_vector(model2.docvecs[large_corpus[i].tags])
m1 = np.vstack((m1, infered_vec.flatten()))
train_array = np.zeros((25000, 100))
train_label = np.zeros((25000, 1))
test_array = np.zeros((25000, 100))
test_label = np.zeros((25000, 1))
# because those document, 25k documents are postive label, 25k documents are negative label
for i in range(12500):
train_array[i] = m1[i]
train_label[i] = 1
train_array[i + 12500] = m1[i + 12500]
train_label[i + 12500] = 0
test_array[i] = m1[i + 25000]
test_label[i] = 1
test_array[i + 12500] = m1[i + 37500]
test_label[i + 12500] = 0
print("The vectors are learned by back-mapping method")
test_classifier_error(train_array, train_label, test_array, test_label)
Explanation: In the second experiment, the document vectors are learned by the back-mapping method, which fits a linear mapping between model1 and model2. Just as the translation matrix maps words between languages, given the vectors of the additional 35k documents in model2, we can infer their vectors in model1.
End of explanation
from sklearn.decomposition import PCA
import plotly
from plotly.graph_objs import Scatter, Layout, Figure
plotly.offline.init_notebook_mode(connected=True)
m1_part = m1[14995: 15000]
m2_part = m2[14995: 15000]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['doc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
m1_part = m1[14995: 15002]
m2_part = m2[14995: 15002]
m1_part = np.array(m1_part).reshape(len(m1_part), 100)
m2_part = np.array(m2_part).reshape(len(m2_part), 100)
pca = PCA(n_components=2)
reduced_vec1 = pca.fit_transform(m1_part)
reduced_vec2 = pca.fit_transform(m2_part)
trace1 = Scatter(
x = reduced_vec1[:, 0],
y = reduced_vec1[:, 1],
mode = 'markers+text',
text = ['sdoc' + str(i) for i in range(len(reduced_vec1))],
textposition = 'top'
)
trace2 = Scatter(
x = reduced_vec2[:, 0],
y = reduced_vec2[:, 1],
mode = 'markers+text',
text = ['tdoc' + str(i) for i in range(len(reduced_vec1))],
textposition ='top'
)
layout = Layout(
showlegend = False
)
data = [trace1, trace2]
fig = Figure(data=data, layout=layout)
plot_url = plotly.offline.iplot(fig, filename='doc_vec_vis')
Explanation: As we can see, the vectors learned by the back-mapping method perform reasonably well, but there is still room for improvement.
Visualization
We pick some documents and extract their vectors from both model1 and model2; we can see that they also share a similar geometric arrangement.
End of explanation |
8,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First we import the table of tag-article mappings from our SQL database
(but read in as a .csv).
Step1: We only care about the content type "Article"
Step2: But we need to get the tag name out of the url string for the tag
Step3: Import the table of URLs and total pageviews
Step4: Now we will explore how a KNN classifier does with our dataset
Step5: Now we will explore how a RandomForest classifier does with our dataset
Step6: Prediction of a given URL
Step7: Refining the model
We will reproduce our results looking only at articles published in 2015 and 2016
Step8: Now we will rebuild our model to have it predict if an article will receive over 500 Facebook shares.
Step9: Logistic Regression with FB Shares > 500 as target
Step10: KNN with FB Shares > 500 as target
Step11: RandomForest with FB Shares > 500 as target | Python Code:
df = pd.read_csv('atlas-taggings.csv')
df[2:5]
Explanation: First we import the table of tag-article mappings from our SQL database
(but read in as a .csv).
End of explanation
articles = df[df.tagged_type == 'Article']
Explanation: We only care about the content type "Article"
End of explanation
articles.tag_url = articles.tag_url.apply(get_tag_name)
articles = get_dummies_and_join(articles,'tag_url')
articles = articles.drop(['tag_id','tag_url','tagged_type','tagged_id'],axis=1)
articles = unique_article_set(articles,'tagged_url')
articles = articles.reset_index().set_index('tagged_url')
Explanation: But we need to get the tag name out of the url string for the tag
End of explanation
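The helper `get_tag_name` used above is defined elsewhere in the notebook. A plausible sketch, assuming the tag name is simply the final path segment of the tag URL (the example URL is hypothetical):

```python
def get_tag_name(url):
    # Keep only the final path segment, e.g. ".../categories/maps" -> "maps"
    return url.rstrip('/').split('/')[-1]

print(get_tag_name('www.atlasobscura.com/categories/maps'))  # → maps
```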
pageviews = pd.read_csv('output_articles_performance.csv',header=None,names=[
'url','published','pageviews'
])
pageviews.url = ['www.atlasobscura.com/articles/' + x for x in pageviews.url]
pageviews.describe()
pageviews.set_index('url',inplace=True)
article_set = articles.join(pageviews)
article_set['ten_thousand'] = target_pageview_cutoff(10000,article_set.pageviews)
article_set['published'] = pd.to_datetime(article_set['published'])
article_set['year'] = get_year(article_set,'published')
article_set.pageviews.plot(kind='density',title='Page View Distribution, All Articles')
ax = article_set.boxplot(column='pageviews',by='year',figsize=(6,6),showfliers=False)
ax.set(title='PV distribution by year of publication, no outliers',ylabel='pageviews')
sns.factorplot(
x='year',
y='ten_thousand',
data = article_set
)
total_tagged = get_total_tagged(article_set,'num_tagged')
article_set.fillna(value=0,inplace=True)
y = article_set.ten_thousand
X = article_set.drop(['pageviews','published','ten_thousand'],axis=1)
cross_val_score = get_cross_validation_score(X,y,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr = linear_model.LogisticRegression(penalty = 'l1').fit(X,y)
lr_scores = lr.predict_proba(X)[:,1]
roc_score = get_roc_scores(y,lr_scores)
print(roc_score)
single_tag_probabilities = get_probabilities(lr,X)
Explanation: Import the table of URLs and total pageviews
End of explanation
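Helpers such as `get_roc_scores` are defined elsewhere in the notebook. Assuming it computes ROC AUC, here is a pure-Python sketch based on the Mann-Whitney U statistic (tied scores are not handled):

```python
def get_roc_scores(y_true, scores):
    order = sorted(zip(scores, y_true))                       # rank by score
    pos_ranks = [i + 1 for i, (_, y) in enumerate(order) if y == 1]
    n_pos = len(pos_ranks)
    n_neg = len(order) - n_pos
    u = sum(pos_ranks) - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(get_roc_scores([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```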
params = {'n_neighbors' : [x for x in range(2,100,4)],
'weights' : ['distance','uniform']}
gs = GridSearchCV(estimator = KNeighborsClassifier(),param_grid=params,
n_jobs=-1,cv=10,verbose=1)
gs.fit(X,y)
print(gs.best_params_)
print(gs.best_score_)
knn = gs.best_estimator_.fit(X,y)
knn_probs = get_probabilities(knn,X)
knn_cross_val_score = get_cross_validation_score(X,y,knn,5)
knn_scores = knn.predict_proba(X)[:,1]
knn_roc_score = get_roc_scores(y,knn_scores)
params_rfc = {'max_depth': np.arange(20,100,5),
'min_samples_leaf': np.arange(90,200,5),
'n_estimators': [20],
'criterion' : ['gini','entropy']
}
gs1 = GridSearchCV(RandomForestClassifier(),param_grid=params_rfc, cv=10, scoring='roc_auc',n_jobs=-1,verbose=1)
gs1.fit(X,y)
print(gs1.best_params_)
print(gs1.best_score_)
Explanation: Now we will explore how a KNN classifier does with our dataset
End of explanation
rf = gs1.best_estimator_
rf.fit(X,y)
rf_cross_val_score = get_cross_validation_score(X,y,rf,5)
rf_scores = rf.predict_proba(X)[:,1]
rf_roc_score = get_roc_scores(y,rf_scores)
print("Logistic Regression Cross-validation Score: ", cross_val_score)
print("K Nearest Neighbors Cross-validation Score: ", knn_cross_val_score)
print("RandomForest Cross-validation Score: ", rf_cross_val_score)
print("Logistic Regression ROC AUC score: ", roc_score)
print("K Nearest Neighbors ROC AUC score: ", knn_roc_score)
print("RandomForest ROC AUC score: ", rf_roc_score)
Explanation: Now we will explore how a RandomForest classifier does with our dataset
End of explanation
url, taglist = get_article_tags('http://www.atlasobscura.com/articles/the-ao-exit-interview-12-years-in-the-blue-man-group')
transformed_article = transform_article_for_prediction(url,article_set)
Explanation: Prediction of a given URL
End of explanation
article_set.head(1)
y1 = article_set[article_set.year >= 2016].ten_thousand
X1 = article_set[article_set.year >= 2016].drop(['pageviews','published','ten_thousand'],axis=1)
cross_val_score1 = get_cross_validation_score(X1,y1,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr1 = linear_model.LogisticRegression(penalty = 'l1').fit(X1,y1)
lr_scores1 = lr1.predict_proba(X1)[:,1]
roc_score1 = get_roc_scores(y1,lr_scores1)
print(roc_score1)
Explanation: Refining the model
We will reproduce our results looking only at articles published in 2015 and 2016
End of explanation
simplereach = pd.read_csv('~/Downloads/all-content-simplereach.csv')
simplereach.Url = simplereach.Url.apply(get_simplereach_url)
simplereach = simplereach.set_index('Url')
simplereach = simplereach[['Avg Engaged Time','Social Actions','Facebook Shares','FaceBook Referrals']]
article_set2 = article_set.join(simplereach['Facebook Shares'])
article_set2['five_hundred_shares'] = target_pageview_cutoff(500,article_set2['Facebook Shares'])
Explanation: Now we will rebuild our model to have it predict if an article will receive over 500 Facebook shares.
End of explanation
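The helper `target_pageview_cutoff` used above is defined elsewhere; a list-based sketch of the idea (the notebook applies the same thresholding to a pandas Series, returning 1 when the metric exceeds the cutoff):

```python
def target_pageview_cutoff(cutoff, values):
    # Binary target: 1 when the metric exceeds the cutoff, else 0.
    return [1 if v > cutoff else 0 for v in values]

print(target_pageview_cutoff(500, [120, 800, 501, 499]))  # → [0, 1, 1, 0]
```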
y2 = article_set2.five_hundred_shares
X2 = article_set2.drop(['pageviews',
'published',
'ten_thousand',
'Facebook Shares',
'five_hundred_shares'
],axis=1)
cross_val_score_social = get_cross_validation_score(X2,y2,linear_model.LogisticRegression(penalty = 'l1'),
n_folds=5)
lr_social = linear_model.LogisticRegression(penalty = 'l1').fit(X2,y2)
lr_scores_social = lr_social.predict_proba(X2)[:,1]
roc_score_social = get_roc_scores(y2,lr_scores_social)
print("Cross-val score when predicting Facebook shares > 500: ", cross_val_score_social)
print("ROC AUC score when predicting Facebook shares > 500: ", roc_score_social)
url = 'http://www.atlasobscura.com/articles/winters-effigies-the-deviant-history-of-the-snowman'
lr_social.predict(transform_article_for_prediction(url, X2))
Explanation: Logistic Regression with FB Shares > 500 as target
End of explanation
params_social = {'n_neighbors' : [x for x in range(2,100,4)],
'weights' : ['distance','uniform']}
gs_social = GridSearchCV(estimator = KNeighborsClassifier(),param_grid=params,
n_jobs=-1,cv=10,verbose=1)
gs_social.fit(X2,y2)
print gs_social.best_params_
print gs_social.best_score_
knn_social = gs_social.best_estimator_.fit(X2,y2)
knn_probs_social = get_probabilities(knn_social,X2)
knn_cross_val_score_social = get_cross_validation_score(X2,y2,knn_social,5)
knn_scores_social = knn_social.predict_proba(X2)[:,1]
knn_roc_score_social = get_roc_scores(y2,knn_scores_social)
Explanation: KNN with FB Shares > 500 as target
End of explanation
params_rfc = {'max_depth': np.arange(20,100,5),
'min_samples_leaf': np.arange(90,200,5),
'n_estimators': [20]}
gs1_social = GridSearchCV(RandomForestClassifier(),param_grid=params_rfc, cv=10, scoring='roc_auc',n_jobs=-1,verbose=1)
gs1_social.fit(X2,y2)
rf_social = gs1_social.best_estimator_
rf_social.fit(X2,y2)
rf_cross_val_score_social = get_cross_validation_score(X2,y2,rf_social,5)
rf_scores_social = rf_social.predict_proba(X2)[:,1]
rf_roc_score_social = get_roc_scores(y2,rf_scores_social)
print gs1_social.best_params_
print gs1_social.best_score_
print "Logistic Regression Cross-validation Score: ", cross_val_score_social
print "K Nearest Neighbors Cross-validation Score: ", knn_cross_val_score_social
print "RandomForest Cross-validation Score: ", rf_cross_val_score_social
print "Logistic Regressions ROC AUC score: ", roc_score_social
print "K Nearest Neighbors ROC AUC score: ", knn_roc_score_social
print "RandomForest ROC AUC score: ", rf_roc_score_social
np.mean(y)
simplereach.describe()
Explanation: RandomForest with FB Shares > 500 as target
End of explanation |
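As a sketch of what the ROC AUC scores printed above measure (assuming get_roc_scores wraps a standard AUC computation — the helper below and its toy data are illustrative, not from this notebook):

```python
def roc_auc(y_true, scores):
    # AUC = probability that a randomly chosen positive example
    # is scored above a randomly chosen negative one (ties count 1/2)
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / float(len(pos) * len(neg))

# A perfect ranking gives an AUC of 1.0, a random one about 0.5
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))
```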
8,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multidimensional data with SQL - exercises
This notebook shows how to use SQL with SQLite to manipulate data from a notebook (with the sqlite3 module).
Step1: Representation
The pandas module manipulates tables, which is the most common way to represent data. When the data is multidimensional, we distinguish coordinates from values
Step2: In this example, there are
Step3: It is fairly simple. Let's take an example
Step4: The indicators for two different ages
Step5: Exercise 1
Step6: Data too large to fit in memory
Step7: The data is too large to fit in an Excel sheet. To inspect it, there is no other way than to look at extracts. What happens when even that is not possible? A few solutions
Step8: We can now retrieve a chunk with the read_sql function.
Step9: The whole dataset stays on disk; only the result of the query is loaded into memory. If the data cannot fit in memory, we must either get a partial view of it (a random sample, a filtered view) or an aggregated view.
To finish, the connection must be closed to let other applications or notebooks modify the database, or simply to delete the file.
Step10: On Windows, the database can be browsed with the SQLiteSpy software.
Step11: On Linux or Mac, one can use the Firefox extension SQLite Manager. In this notebook, we will use the %%SQL magic command from the pyensae module
Step12: Exercise 2
Step13: I do not know whether this can be done without loading the data into memory. If the data weighs 20 GB, this method will not succeed. Yet we just want a sample to start looking at the data. We use the second option with create_function and the following function
Step14: What should be written here to retrieve 1% of the table?
Step15: Pseudo Map/Reduce with SQLite
The list of SQL keywords supported by SQLite is not as rich as in other SQL server solutions. The median does not seem to be among them. However, for a given year, gender and age, we would like to compute the median life expectancy over all countries.
Step17: The number of records differs across countries; the number of countries for which data exists probably varies with age and year.
Step19: So the number of countries is not constant. The fact that there are 100 countries also suggests an error.
Step21: These are missing values. The problem when computing the median for each observation is that the rows of the table must first be grouped by indicator, and then the median picked within each of these small groups. For that we take inspiration from the Map/Reduce logic and the create_aggregate function.
Exercise 3
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pyensae
from pyquickhelper.helpgen import NbImage
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Multidimensional data with SQL - exercises
This notebook shows how to use SQL with SQLite to manipulate data from a notebook (with the sqlite3 module).
End of explanation
NbImage("cube1.png")
Explanation: Representation
The pandas module manipulates tables, which is the most common way to represent data. When the data is multidimensional, we distinguish coordinates from values:
End of explanation
NbImage("cube2.png")
Explanation: In this example, there are:
3 coordinates: Age, Profession, Year
2 values: Life expectancy, Population
The data can also be represented like this:
End of explanation
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import os
os.stat("mortalite.txt")
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
Explanation: It is fairly simple. Let's take an example: the mortality table from 1960 to 2010, retrieved with the table_mortalite_euro_stat function. This is fairly long (4-5 minutes) on the whole dataset because the data must be preprocessed (see the function's documentation). To shorten it, use the stop_at parameter.
End of explanation
df [ ((df.age=="Y60") | (df.age=="Y61")) & (df.annee == 2000) & (df.pays=="FR") & (df.genre=="F")]
Explanation: The indicators for two different ages:
End of explanation
#
Explanation: Exercise 1: filter
We want to compare life expectancies for two countries and two years.
End of explanation
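One possible sketch for this exercise, on a tiny made-up table whose column names (pays, annee, genre, valeur) mirror the mortality dataset — the numbers are invented for illustration:

```python
import pandas

# Small synthetic extract mirroring the mortality table (values are made up)
df = pandas.DataFrame({
    "pays":   ["FR", "FR", "UK", "UK", "FR"],
    "annee":  [2000, 2010, 2000, 2010, 2000],
    "genre":  ["F", "F", "F", "F", "M"],
    "age":    ["Y60"] * 5,
    "valeur": [25.6, 27.1, 24.8, 26.0, 21.3],
})

# Combine boolean masks to compare two countries over two years
subset = df[df.pays.isin(["FR", "UK"]) & df.annee.isin([2000, 2010]) & (df.genre == "F")]
print(subset)
```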
df.shape
Explanation: Data too large to fit in memory: SQLite
End of explanation
import sqlite3
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
try:
df.to_sql(name='mortalite', con=cnx)
except ValueError as e:
if "Table 'mortalite' already exists" not in str(e):
# only if the error does not come from the fact
# that this was already done
raise e
# other dataframes can be appended to the table as if it were created piece by piece
# see the if_exists parameter of the to_sql function
Explanation: The data is too large to fit in an Excel sheet. To inspect it, there is no other way than to look at extracts. What happens when even that is not possible? A few solutions:
increase the computer's memory; with 20 GB one can do a lot
store the data in a SQL server
store the data on a distributed system (cloud, Hadoop, ...)
The second option is not always simple: a SQL server must be installed. To go faster, one can simply use SQLite, which is a way of doing SQL without a server (it takes a few minutes). We use the to_sql method.
End of explanation
import pandas
example = pandas.read_sql('select * from mortalite where age_num==50 limit 5', cnx)
example
Explanation: We can now retrieve a chunk with the read_sql function.
End of explanation
cnx.close()
Explanation: The whole dataset stays on disk; only the result of the query is loaded into memory. If the data cannot fit in memory, we must either get a partial view of it (a random sample, a filtered view) or an aggregated view.
To finish, the connection must be closed to let other applications or notebooks modify the database, or simply to delete the file.
End of explanation
NbImage("sqlite.png")
Explanation: On Windows, the database can be browsed with the SQLiteSpy software.
End of explanation
%load_ext pyensae
%SQL_connect mortalite.db3
%SQL_tables
%SQL_schema mortalite
%%SQL
SELECT COUNT(*) FROM mortalite
%SQL_close
Explanation: On Linux or Mac, one can use the Firefox extension SQLite Manager. In this notebook, we will use the %%SQL magic command from the pyensae module:
End of explanation
sample = df.sample(frac=0.1)
sample.shape, df.shape
Explanation: Exercise 2: random sample
If the data cannot fit in memory, one can either look at the first rows or take a random sample. Two options:
Dataframe.sample
create_function
The first function is simple:
End of explanation
import random  # uniform distribution
def echantillon(proportion):
return 1 if random.random() < proportion else 0
import sqlite3
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
cnx.create_function('echantillon', 1, echantillon)
Explanation: I do not know whether this can be done without loading the data into memory. If the data weighs 20 GB, this method will not succeed. Yet we just want a sample to start looking at the data. We use the second option with create_function and the following function:
End of explanation
import pandas
#example = pandas.read_sql(' ??? ', cnx)
#example
cnx.close()
Explanation: What should be written here to retrieve 1% of the table?
End of explanation
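One possible answer, sketched on an in-memory database rather than the real mortalite.db3 file: calling echantillon(0.01) in the WHERE clause draws one Bernoulli trial per row, so roughly 1% of the rows are kept.

```python
import random
import sqlite3

def echantillon(proportion):
    # Return 1 with the given probability, 0 otherwise (one uniform draw per row)
    return 1 if random.random() < proportion else 0

cnx = sqlite3.connect(":memory:")
cnx.execute("CREATE TABLE mortalite (id INTEGER)")
cnx.executemany("INSERT INTO mortalite VALUES (?)", [(i,) for i in range(10000)])
cnx.create_function("echantillon", 1, echantillon)

# Keep roughly 1% of the table: the WHERE clause samples each row independently
sample = cnx.execute("SELECT * FROM mortalite WHERE echantillon(0.01)").fetchall()
print(len(sample))
cnx.close()
```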
import sqlite3, pandas
from pandas.io import sql
cnx = sqlite3.connect('mortalite.db3')
pandas.read_sql('select pays,count(*) from mortalite group by pays', cnx)
Explanation: Pseudo Map/Reduce with SQLite
The list of SQL keywords supported by SQLite is not as rich as in other SQL server solutions. The median does not seem to be among them. However, for a given year, gender and age, we would like to compute the median life expectancy over all countries.
End of explanation
query = """SELECT nb_country, COUNT(*) AS nb_rows FROM (
    SELECT annee,age,age_num, count(*) AS nb_country FROM mortalite
    WHERE indicateur=="LIFEXP" AND genre=="F"
    GROUP BY annee,age,age_num
) GROUP BY nb_country"""
df = pandas.read_sql(query, cnx)
df.sort_values("nb_country", ascending=False).head(n=2)
df.plot(x="nb_country", y="nb_rows")
Explanation: The number of records differs across countries; the number of countries for which data exists probably varies with age and year.
End of explanation
query = """SELECT annee,age,age_num, count(*) AS nb_country FROM mortalite
WHERE indicateur=="LIFEXP" AND genre=="F"
GROUP BY annee,age,age_num
HAVING nb_country >= 100"""
df = pandas.read_sql(query, cnx)
df.head()
Explanation: So the number of countries is not constant. The fact that there are 100 countries also suggests an error.
End of explanation
class ReducerMediane:
def __init__(self):
# ???
pass
def step(self, value):
# ???
#
pass
def finalize(self):
# ???
# return ... //2 ]
pass
cnx.create_aggregate("ReducerMediane", 1, ReducerMediane)
#query = SELECT annee,age,age_num, ...... AS mediane FROM mortalite
# WHERE indicateur=="LIFEXP" AND genre=="F"
# GROUP BY annee,age,age_num
#df = pandas.read_sql(query, cnx)
cnx.close()
Explanation: These are missing values. The problem when computing the median for each observation is that the rows of the table must first be grouped by indicator, and then the median picked within each of these small groups. For that we take inspiration from the Map/Reduce logic and the create_aggregate function.
Exercise 3: SQL reducer
The following program must be completed.
End of explanation |
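One possible completion of the ReducerMediane skeleton above, demonstrated on a small in-memory table (the //2 hint in the skeleton suggests taking the middle element of the sorted values; the real query would group by annee, age, age_num):

```python
import sqlite3

class ReducerMediane:
    # SQLite aggregate: collect the values of each group, then return the median
    def __init__(self):
        self.values = []

    def step(self, value):
        if value is not None:          # skip missing values
            self.values.append(value)

    def finalize(self):
        if not self.values:
            return None
        self.values.sort()
        return self.values[len(self.values) // 2]

cnx = sqlite3.connect(":memory:")
cnx.create_aggregate("ReducerMediane", 1, ReducerMediane)
cnx.execute("CREATE TABLE t (grp TEXT, v REAL)")
cnx.executemany("INSERT INTO t VALUES (?, ?)",
                [("a", 1.0), ("a", 5.0), ("a", 3.0), ("b", 10.0), ("b", 20.0)])
rows = sorted(cnx.execute("SELECT grp, ReducerMediane(v) FROM t GROUP BY grp").fetchall())
print(rows)
cnx.close()
```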
8,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-2
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
8,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Functions
Functions are blocks of code identified by a name, which can receive "predetermined" parameters or not ;).
In Python, functions
Step3: In the above example, we have caps as function, which takes val as argument and returns val * 2.
Step4: Functions can return any data type, next example returns a boolean value.
Step5: Example (factorial without recursion)
Step7: Example (factorial with recursion)
Step9: Example (Fibonacci series with recursion)
Step10: Example (Fibonacci series without recursion)
Step11: NOTE
Step12: Functions can also not return anything like in the below example
Step13: Functions can also return multiple values, usually in form of tuple.
Step16: Example (RGB conversion)
Step17: Note
Step18: Observations
Step19: In the example, kargs will receive the named arguments and args will receive the others.
The interpreter has some builtin functions defined, including sorted(), which orders sequences, and cmp(), which makes comparisons between two arguments and returns -1 if the first element is greater, 0 (zero) if they are equal, or 1 if the latter is higher. This function is used by the routine of ordering, a behavior that can be modified.
Example
Step20: Python also has a builtin function eval(), which evaluates code (source or object) and returns the value.
Example | Python Code:
def caps(val):
    """caps returns double the value of the provided value"""
    return val * 2
a = caps("TEST ")
print(a)
print(caps.__doc__)
Explanation: Functions
Functions are blocks of code identified by a name, which can receive ""predetermined"" parameters or not ;).
In Python, functions:
return objects or not.
can provide documentation using Doc Strings.
Can have their properties changed (usually by decorators).
Have their own namespace (local scope), and therefore may obscure definitions of global scope.
Allows parameters to be passed by name. In this case, the parameters can be passed in any order.
Allows optional parameters (with pre-defined defaults ), thus if no parameter are provided then, pre-defined default will be used.
Syntax:
python
def func(parameter1, parameter2=default_value):
Doc String
<code block>
return value
NOTE: The parameters with default value must be declared after the ones without default value.
End of explanation
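The list above mentions decorators and keyword arguments with defaults, but the surrounding examples never show a decorator in action. Here is a small sketch of both, added for illustration (the names shout and greet are invented for this example):

```python
import functools

def shout(func):
    """A decorator: it changes the wrapped function's behaviour."""
    @functools.wraps(func)          # keeps the original name and Doc String
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name, greeting="hello"):
    """Return a greeting for name."""
    return f"{greeting}, {name}"

print(greet("ana"))                     # positional argument
print(greet(greeting="hi", name="bo"))  # named arguments, in any order
print(greet.__doc__)                    # Doc String preserved by functools.wraps
```

The decorated greet still accepts its parameters by position or by name, which is exactly the behaviour described in the list above.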
a = caps(1234)
print(a)
Explanation: In the above example, we have caps as a function, which takes val as an argument and returns val * 2.
End of explanation
def isValid(data):
if 10 in data:
return True
return False
a = isValid([10, 200, 33, "asf"])
print(a)
a = isValid((10,))
print(a)
isValid((10,))
a = isValid((110,))
print(a)
def isValid_new(data):
return 10 in data
print(isValid_new([10, 200, 33, "asf"]))
a = isValid_new((110,))
print(a)
Explanation: Functions can return any data type; the next example returns a boolean value.
End of explanation
def fatorial(n):
    n = n if n > 1 else 1
    j = 1
    for i in range(1, n + 1):
        j = j * i
    return j
# Testing...
for i in range(1, 6):
print (i, '->', fatorial(i))
Explanation: Example (factorial without recursion):
End of explanation
def factorial(num):
    """Factorial implemented with recursion."""
    if num <= 1:
        return 1
    else:
        return num * factorial(num - 1)
# Testing factorial()
print (factorial(5))
# 5 * (4 * (3 * (2) * (1))
Explanation: Example (factorial with recursion):
End of explanation
def fib(n):
    """Fibonacci:
    fib(n) = fib(n - 1) + fib(n - 2) if n > 1
    fib(n) = 1 if n <= 1
    """
if n > 1:
return fib(n - 1) + fib(n - 2)
else:
return 1
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
Explanation: Example (Fibonacci series with recursion):
End of explanation
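A side note that is not in the original lesson: the recursive version recomputes the same subproblems over and over, and functools.lru_cache can memoize it without changing the function body:

```python
import functools

@functools.lru_cache(maxsize=None)
def fib_cached(n):
    """Same recursive Fibonacci as above, but each value is computed only once."""
    if n > 1:
        return fib_cached(n - 1) + fib_cached(n - 2)
    return 1

print([fib_cached(i) for i in range(1, 6)])
```

With the cache in place, large arguments that would take the plain recursion a very long time return immediately.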
def fib(n):
# the first two values
l = [1, 1]
# Calculating the others
for i in range(2, n + 1):
        l.append(l[i - 1] + l[i - 2])
return l[n]
# Show Fibonacci from 1 to 5
for i in [1, 2, 3, 4, 5]:
print (i, '=>', fib(i))
def test(a, b):
print(a, b)
return a + b
print(test(1, 2))
test(b=1, a=2)
def test_abc(a, b, c):
print(a, b, c)
return a + b + c
try:
    # A positional argument after keyword arguments is a SyntaxError at
    # compile time, so the call is compiled at runtime via eval() in order
    # for the except clause to be able to catch it.
    eval("test_abc(b=1, a=2, 3)")
except SyntaxError as e:
    print("error", e)
Explanation: Example (Fibonacci series without recursion):
End of explanation
test_abc(2, c=3, b=2)
test_abc(2, b=2, c=3)
Explanation: NOTE: We cannot have non-keyword arguments after keyword arguments
End of explanation
def test_new(a, b, c):
pass
Explanation: Functions can also return nothing at all, as in the example below; such functions implicitly return None.
End of explanation
def test(a, b):
print(a, b)
return a*a, b*b
x, a = test(2, 5)
print(x)
print(type(x))
print(a)
print(type(a))
print(type(test(2, 5)))
def test(a, b):
print(a, b)
return a*a, b*b, a*b
x = test(2 , 5)
print(x)
print(type(x))
def test(a, b):
print(a, b)
return a*a, b*b, "asdf"
x = test(2 , 5)
print(x)
print(type(x))
def test(a=100, b=1000):
print(a, b)
return a, b
x = test(2, 5)
print(x)
print(test(10))
def test(a=100, b=1000):
print(a, b)
return a, b
print(test(b=10))
print(test(101))
def test(d, c, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
Explanation: Functions can also return multiple values, usually in the form of a tuple.
End of explanation
def rgb_html(r=0, g=0, b=0):
    """Converts R, G, B to #RRGGBB"""
    return '#%02x%02x%02x' % (r, g, b)

def html_rgb(color='#000000'):
    """Converts #RRGGBB to R, G, B"""
if color.startswith('#'): color = color[1:]
r = int(color[:2], 16)
g = int(color[2:4], 16)
b = int(color[4:], 16)
return r, g, b # a sequence
print (rgb_html(200, 200, 255))
print (rgb_html(b=200, g=200, r=255)) # what's happened?
print (html_rgb('#c8c8ff'))
Explanation: Example (RGB conversion):
End of explanation
def test(d, a=100, c, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
def test(c, d, a=100, b=1000):
print(d, c, a, b)
return d, c, a, b
x = test(c=2, d=10, b=5)
print(x)
x = test(1, 2, 3, 4)
print(x)
print(test(10, 2))
Explanation: Note: a non-default argument must not follow a default argument; Python rejects such a signature with "SyntaxError: non-default argument follows default argument".
End of explanation
# *args - arguments without name (list)
# **kargs - arguments with name (dictionary)
def func(*args, **kargs):
print (args)
print (kargs)
func('weigh', 10, unit='k')
Explanation: Observations:
The arguments with default value must come last, after the non-default arguments.
The default value for a parameter is calculated when the function is defined.
The arguments passed without an identifier are received by the function in the form of a list.
The arguments passed to the function with an identifier are received in the form of a dictionary.
The parameters passed to the function with an identifier should come at the end of the parameter list.
Example of how to get all parameters:
End of explanation
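One consequence of the observation above, that default values are calculated when the function is defined, deserves its own sketch (my addition, not from the original text): a mutable default is created once and then shared between calls.

```python
def append_item(item, bucket=[]):      # this [] is created once, at definition time
    bucket.append(item)
    return bucket

print(append_item(1))   # [1]
print(append_item(2))   # [1, 2]  -- the same list as before!

def append_item_safe(item, bucket=None):
    if bucket is None:                 # create a fresh list on every call instead
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_safe(1))  # [1]
print(append_item_safe(2))  # [2]
```

The None-sentinel pattern in append_item_safe is the usual way to get a fresh mutable default per call.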
def func(*args, **kargs):
print (args)
print (kargs)
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
func('weigh', 10, unit='k', val=a)
def func(*args):
print(args)
func('weigh', 10, "test")
data = [(4, 3), (5, 1), (7, 2), (9, 0)]

# Comparing by the last element. Python 3 removed the builtin cmp(),
# so the comparison is spelled out explicitly here.
def _cmp(x, y):
    return (x[-1] > y[-1]) - (x[-1] < y[-1])

print('List:', data)
Explanation: In the example, kargs will receive the named arguments and args will receive the others.
The interpreter has some builtin functions defined, including sorted(), which orders sequences. Python 2 also had a builtin cmp(), which compares two arguments and returns -1 if the first is smaller, 0 (zero) if they are equal, or 1 if the first is greater; this function was used by the ordering routine, a behaviour that could be modified. In Python 3 cmp() was removed, and sorted() customises its ordering through a key function instead.
Example:
End of explanation
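Continuing the ordering note above with an addition of my own: in Python 3 an old-style comparison function must be adapted with functools.cmp_to_key before sorted() accepts it, although a plain key function is usually simpler.

```python
import functools

data = [(4, 3), (5, 1), (7, 2), (9, 0)]

def cmp_last(x, y):
    # old-style comparator: negative, zero or positive
    return (x[-1] > y[-1]) - (x[-1] < y[-1])

print(sorted(data, key=functools.cmp_to_key(cmp_last)))
print(sorted(data, key=lambda t: t[-1]))   # the idiomatic Python 3 spelling
```

Both calls order the tuples by their last element.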
print (eval('12. / 2 + 3.3'))
def listing(lst):
for l in lst:
print(l)
d = {"Mayank Johri":40, "Janki Mohan Johri":68}
listing(d)
d = {
"name": "Mohan",
"age": 24
}
a = {
"name": "Mohan kumar Shah",
"age": 24 + 1
}
def process_dict(d=a):
print(d)
process_dict(d)
process_dict()
Explanation: Python also has a builtin function eval(), which evaluates code (source or object) and returns the value.
Example:
End of explanation |
8,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
Step1: Here we can see one of the images.
Step2: Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called fc_model. Importing this, we can easily create a fully-connected network with fc_model.Network, and train the network using fc_model.train. I'll use this model (once it's trained) to demonstrate how we can save and load models.
Step3: Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's state_dict. We can see the state dict contains the weight and bias matrices for each of our layers.
Step4: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
Step5: Then we can load the state dict with torch.load.
Step6: And to load the state dict in to the network, you do model.load_state_dict(state_dict).
Step7: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict.
Step8: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=True,
transform=transform)
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=64,
shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/',
download=True,
train=False,
transform=transform)
testloader = torch.utils.data.DataLoader(dataset=testset,
batch_size=64,
shuffle=True)
Explanation: Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
End of explanation
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
Explanation: Here we can see one of the images.
End of explanation
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model,
trainloader,
testloader,
criterion,
optimizer,
epochs=2)
Explanation: Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called fc_model. Importing this, we can easily create a fully-connected network with fc_model.Network, and train the network using fc_model.train. I'll use this model (once it's trained) to demonstrate how we can save and load models.
End of explanation
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
Explanation: Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's state_dict. We can see the state dict contains the weight and bias matrices for each of our layers.
End of explanation
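torch.save serializes this mapping with Python's pickle machinery under the hood. As a rough standard-library analogy only (plain lists standing in for tensors; this is not the actual checkpoint format):

```python
import io
import pickle

# A toy "state dict": parameter names mapped to plain nested lists
toy_state_dict = {
    "hidden.weight": [[0.1, -0.2], [0.3, 0.4]],
    "hidden.bias": [0.0, 0.0],
}

buffer = io.BytesIO()
pickle.dump(toy_state_dict, buffer)   # analogous to torch.save(...)
buffer.seek(0)
restored = pickle.load(buffer)        # analogous to torch.load(...)

assert restored == toy_state_dict
print(list(restored.keys()))
```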
torch.save(model.state_dict(), './models/checkpoint.pth')
Explanation: The simplest thing to do is simply save the state dict with torch.save. For example, we can save it to a file 'checkpoint.pth'.
End of explanation
state_dict = torch.load('./models/checkpoint.pth')
print(state_dict.keys())
Explanation: Then we can load the state dict with torch.load.
End of explanation
model.load_state_dict(state_dict)
Explanation: And to load the state dict in to the network, you do model.load_state_dict(state_dict).
End of explanation
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, './models/checkpoint.pth')
Explanation: Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict.
End of explanation
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('./models/checkpoint.pth')
print(model)
for name, param in model.named_parameters():
if param.requires_grad:
print(name)
print(':')
print(param.data)
name, params = next(model.named_parameters())
name
params
Explanation: Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
End of explanation |
8,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it
Step7: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
Setting up the graph with rectified linear units and one hidden layer | Python Code:
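For intuition, nn.relu() simply computes max(0, x) elementwise; the same operation in plain numpy (numpy assumed available):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
relu = np.maximum(0.0, x)  # rectified linear unit: max(0, x), elementwise

assert relu.tolist() == [0.0, 0.0, 0.0, 1.5, 3.0]
print(relu)
```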
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
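The 1-hot conversion above relies on numpy broadcasting: comparing the row vector np.arange(num_labels) against a column of labels produces a boolean matrix with exactly one True per row. A standalone sketch of the same trick on toy data:

```python
import numpy as np

num_labels = 4
labels = np.array([2, 0, 3])

one_hot = (np.arange(num_labels) == labels[:, None]).astype(np.float32)

assert one_hot.shape == (3, 4)
assert one_hot.tolist() == [[0.0, 0.0, 1.0, 0.0],
                            [1.0, 0.0, 0.0, 0.0],
                            [0.0, 0.0, 0.0, 1.0]]
print(one_hot)
```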
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
num_hidden = 1024  # 1024 hidden nodes, as the problem statement asks
weights_layer_1 = tf.Variable(
tf.truncated_normal([image_size * image_size, num_hidden]))
biases_layer_1 = tf.Variable(tf.zeros([num_hidden]))
# Layer 2 weights have an input dimension = output of first layer
weights_layer_2 = tf.Variable(
tf.truncated_normal([num_hidden, num_labels]))
biases_layer_2 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits_layer_1 = tf.matmul(tf_train_dataset, weights_layer_1) + biases_layer_1
relu_output = tf.nn.relu(logits_layer_1)
logits_layer_2 = tf.matmul(relu_output, weights_layer_2) + biases_layer_2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits_layer_2))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits_layer_2)
logits_l_1_valid = tf.matmul(tf_valid_dataset, weights_layer_1) + biases_layer_1
relu_valid = tf.nn.relu(logits_l_1_valid)
logits_l_2_valid = tf.matmul(relu_valid, weights_layer_2) + biases_layer_2
valid_prediction = tf.nn.softmax(logits_l_2_valid)
logits_l_1_test = tf.matmul(tf_test_dataset, weights_layer_1) + biases_layer_1
relu_test = tf.nn.relu(logits_l_1_test)
logits_l_2_test = tf.matmul(relu_test, weights_layer_2) + biases_layer_2
test_prediction = tf.nn.softmax(logits_l_2_test)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
Setting up the graph with rectified linear units and one hidden layer:
End of explanation |
8,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(📗) Cookbook
Step1: ipcluster setup to run parallel code
You can find more details about this in our [ipyparallel tutorial].
Step3: Get default (example) 12 taxon tree
The function baba.Tree() takes a newick string as an argument, or, if no value is passed, it constructs the default 12 taxon tree shown below. The Tree Class object holds a representation of the tree, which can be used for plotting, as well as for describing a demographic model for simulating data, as we'll show below.
Step4: Or get any arbitrary tree
Step5: get a tree with an admixture edge
Shown by an arrow pointing backwards from source to sink (indicating that geneflow should be entered as occurring backwards in time). This is how the program msprime will interpret it.
Step6: simulate data on that tree
Step7: A function to print the variant matrix of one rep.
This is used just for demonstration. The matrix will have 2X as many columns as there are
tips in the tree, corresponding to two alleles per individual. The number of rows is the
number of variant sites simulated.
Step8: Simulate data for ABBA-BABA tests | Python Code:
## imports
import numpy as np
import ipyrad as ip
import ipyparallel as ipp
from ipyrad.analysis import baba
## print versions
print "ipyrad v.{}".format(ip.__version__)
print "ipyparallel v.{}".format(ipp.__version__)
print "numpy v.{}".format(np.__version__)
Explanation: (📗) Cookbook: ipyrad.analysis simulations
This notebook demonstrates how to use the baba module in ipyrad to test for admixture and introgression. The code is written in Python and is best implemented in a Jupyter-notebook like this one. Analyses can be split up among many computing cores. Finally, there are some simple plotting methods to accompany tests which I show below.
import packages
End of explanation
## start 4 engines by running the commented line below in a separate bash terminal.
##
## ipcluster start --n=4
## connect to client and print connection summary
ipyclient = ipp.Client()
print ip.cluster_info(client=ipyclient)
Explanation: ipcluster setup to run parallel code
You can find more details about this in our [ipyparallel tutorial].
End of explanation
tre.draw(width=400, height=200);
## tree2dict
import ete3
newick = "./analysis_tetrad/pedtest1.full.tre"
## load the newick string as a Tree object
tre = ete3.Tree(newick)
## set przewalskiis as the outgroup
prz = [i for i in tre.get_leaves() if "prz" in i.name]
out = tre.get_common_ancestor(prz)
## set new outgroup and store back as a newick string
tre.set_outgroup(out)
newick = tre.write()
def test_constraint(node, cdict, tip, exact):
names = set(node.get_leaf_names())
const = set(cdict[tip])
if const:
if exact:
if len(names.intersection(const)) == len(const):
return 1
else:
return 0
else:
if len(names.intersection(const)) == len(names):
return 1
else:
return 0
return 1
## constraints
#if not constraint_dict:
cdict = {"p1":[], "p2":[], "p3":[], "p4":[]}
cdict.update({"p4":['a']})
cdict
def tree2tests(newick, constraint_dict=None, constraint_exact=True):
"""
Returns dict of all possible four-taxon splits in a tree. Assumes
the user has entered a rooted tree. Skips polytomies.
"""
## make tree
tree = ete3.Tree(newick)
## constraints
#if not constraint_dict:
cdict = {"p1":[], "p2":[], "p3":[], "p4":[]}
cdict.update(constraint_dict)
print cdict
## traverse root to tips. Treat the left as outgroup, then the right.
tests = []
## topnode must have children
for topnode in tree.traverse("levelorder"):
## test onode as either child
for oparent in topnode.children:
## test outgroup as all descendants on one child branch
for onode in oparent.traverse():
## put constraints on onode
if test_constraint(onode, cdict, "p4", constraint_exact):
## p123 parent is sister to oparent
p123parent = oparent.get_sisters()[0]
## test p3 as all descendants of p123parent
for p3parent in p123parent.children:
## p12 parent is sister to p3parent
p12parent = p3parent.get_sisters()[0]
for p3node in p3parent.traverse():
if test_constraint(p3node, cdict, "p3", constraint_exact):
if p12parent.children:
p1parent, p2parent = p12parent.children
for p2node in p2parent.traverse():
if test_constraint(p2node, cdict, "p2", constraint_exact):
for p1node in p1parent.traverse():
if test_constraint(p1node, cdict, "p1", constraint_exact):
test = {}
test['p4'] = onode.get_leaf_names()
test['p3'] = p3node.get_leaf_names()
test['p2'] = p2node.get_leaf_names()
test['p1'] = p1node.get_leaf_names()
tests.append(test)
return tests
tests = tree2tests(newick,
constraint_dict={"p4":['32082_przewalskii', '33588_przewalskii'],
"p3":['30686_cyathophylla']},
constraint_exact=True)
len(tests)
tests
Explanation: Get default (example) 12 taxon tree
The function baba.Tree() takes a newick string as an argument, or, if no value is passed, it constructs the default 12 taxon tree shown below. The Tree Class object holds a representation of the tree, which can be used for plotting, as well as for describing a demographic model for simulating data, as we'll show below.
End of explanation
## pass a newick string to the Tree class object
tre = baba.Tree(newick="((a,b),c);")
tre.draw(width=200, height=200);
Explanation: Or get any arbitrary tree
End of explanation
## store newick tree and a migration event list [source, sink, start, stop, rate]
## if not edges are entered then it is converted to cladogram
newick = "(((a,b),Cow), (d,e));"
events = [['e', 'Cow', 0, 1, 1e-6],
['3', '1', 1, 2, 1e-6]]
## initiate Tree object with newick and admix args
tre = baba.Tree(newick=newick,
admix=events)
## show the tree
tre.draw(width=250, height=250, yaxis=True);
## a way of finding names xpos from verts
tre.verts[tre.verts[:, 1] == 0]
tre.tree.search_nodes(name='b')[0].idx
tre.verts[6, 0]
print [tre.verts[tre.tree.search_nodes(name=name)[0].idx, 0]
for name in tre.tree.get_leaf_names()]
print tre.tree.get_leaf_names()
Explanation: get a tree with an admixture edge
Shown by an arrow pointing backwards from source to sink (indicating that geneflow should be entered as occurring backwards in time). This is how the program msprime will interpret it.
End of explanation
## returns a Sim data object
sims = tre.simulate(nreps=10000, Ns=50000, gen=20)
## what is in the sims object? 4 attributes:
for key, val in sims.__dict__.items():
print key, val
baba._msp_to_arr(sims, test)
def _msp_to_arr(Sim, test):
## the fixed tree dictionary
#fix = {j: [i, i+1] for j, i in zip(list("abcdefghijkl"), range(0, 24, 2))}
fix = {j: [i, i+1] for j, i in zip(Sim.names, range(0, len(Sim.names)*2, 2))}
## fill taxdict by test
keys = ['p1', 'p2', 'p3', 'p4']
arr = np.zeros((Sim.nreps, 4, 100))
## unless it's a 5-taxon test
if len(test) == 5:
arr = np.zeros((100000, 6, 100))
keys += ['p5']
## create array sampler for taxa
taxs = [test[key] for key in keys]
idxs = [list(itertools.chain(*[fix[j] for j in i])) for i in taxs]
print idxs
from ipyrad.analysis.baba import *
_msp_to_arr(sims, test)
Explanation: simulate data on that tree
End of explanation
def print_variant_matrix(tree):
shape = tree.get_num_mutations(), tree.get_sample_size()
arr = np.empty(shape, dtype="u1")
for variant in tree.variants():
arr[variant.index] = variant.genotypes
print(arr)
## in order of ladderized tip names (2 copies per tip)
print_variant_matrix(sims.sims.next())
Explanation: A function to print the variant matrix of one rep.
This is used just for demonstration. The matrix will have 2X as many columns as there are
tips in the tree, corresponding to two alleles per individual. The number of rows is the
number of variant sites simulated.
End of explanation
sims
## simulate data on tree
#sims = tre.simulate(nreps=10000, Ns=50000, gen=20)
## pass sim object to baba
test = {
'p4': ['e', 'd'],
'p3': ['Cow'],
'p2': ['b'],
'p1': ['a'],
}
#baba.batch(sims, test, ipyclient=ipyclient)
Explanation: Simulate data for ABBA-BABA tests
End of explanation |
8,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 2
Imports
Step2: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
Step4: Write a function that computes the factorial of small numbers using a Python loop.
Step5: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 2
Imports
End of explanation
def np_fact(n):
"""Compute n! = n*(n-1)*...*1 using Numpy."""
# YOUR CODE HERE
a = np.arange(1, n+1, 1) #Makes array from 1 to n+1
if n==0:
return 1 #If n is 1 or 0, returns value of 1.
elif n==1:
return 1
else:
return max(a.cumprod())#For all other n, takes max value of cumulative products
print np_fact(6)
assert np_fact(0)==1
assert np_fact(1)==1
assert np_fact(10)==3628800
assert [np_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Factorial
Write a function that computes the factorial of small numbers using np.arange and np.cumprod.
End of explanation
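The two numpy pieces involved: np.arange(1, n + 1) yields 1..n, and cumprod() returns the running products, whose last entry is n!. A small demonstration:

```python
import numpy as np

seq = np.arange(1, 6)       # array([1, 2, 3, 4, 5])
partials = seq.cumprod()    # running products: 1, 2, 6, 24, 120

assert partials.tolist() == [1, 2, 6, 24, 120]
assert partials[-1] == 120  # 5! is the final cumulative product
print(partials)
```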
def loop_fact(n):
"""Compute n! using a Python for loop."""
# YOUR CODE HERE
f = n
if n == 0:
return 1 #Same as above.
elif n == 1:
return 1
while n > 1:
f *= (n-1) #For n > 1, takes continuous product of n to right before n = 0, otherwise it would all equal 0.
n -= 1
return f
print loop_fact(10)
assert loop_fact(0)==1
assert loop_fact(1)==1
assert loop_fact(10)==3628800
assert [loop_fact(i) for i in range(0,11)]==[1,1,2,6,24,120,720,5040,40320,362880,3628800]
Explanation: Write a function that computes the factorial of small numbers using a Python loop.
End of explanation
# YOUR CODE HERE
%timeit -n1 -r1 loop_fact(50)
%timeit -n1 -r1 np_fact(50)
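%timeit is an IPython magic and is unavailable in plain Python scripts; the standard-library timeit module gives an equivalent measurement. A sketch using a local stand-in function (not the notebook's definitions):

```python
import timeit

def plain_fact(n):
    # simple loop factorial used only for this timing demo
    f = 1
    for i in range(2, n + 1):
        f *= i
    return f

elapsed = timeit.timeit(lambda: plain_fact(50), number=1000)

assert plain_fact(10) == 3628800
assert elapsed >= 0.0
print("1000 calls took %.6f s" % elapsed)
```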
Explanation: Use the %timeit magic to time both versions of this function for an argument of 50. The syntax for %timeit is:
python
%timeit -n1 -r1 function_to_time()
End of explanation |
8,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr4', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR4
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
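Every property cell below follows the same two-step pattern: `DOC.set_id` selects a property by its dotted identifier, then `DOC.set_value` records the answer. A minimal sketch of that pattern is shown here; `MockDoc` is a hypothetical stand-in for the real `NotebookOutput` object, and the example values are placeholders, not actual CMCC-CM2-HR4 answers.

```python
# Hypothetical stand-in for pyesdoc's NotebookOutput: records each
# (property id, value) pair set via set_id / set_value.
class MockDoc:
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select the property that subsequent set_value calls refer to.
        self._current = prop_id

    def set_value(self, value):
        self.values[self._current] = value

# Placeholder answers for illustration only.
properties = {
    'cmip6.seaice.key_properties.model.model_name': 'CICE 4.0',
    'cmip6.seaice.grid.discretisation.vertical.number_of_layers': 4,
}

doc = MockDoc()
for prop_id, value in properties.items():
    doc.set_id(prop_id)
    doc.set_value(value)
```

With the real `DOC` object the loop body is identical; only the recording backend differs.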
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
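For ENUM properties such as this one, answers must come from the cell's "Valid Choices" list, with free text allowed via the "Other: [Please specify]" form. A hypothetical helper (not part of the pyesdoc API) for checking a candidate answer before calling `DOC.set_value`:

```python
# Valid choices copied from the prognostic-variables cell above.
VALID_CHOICES = [
    "Sea ice temperature",
    "Sea ice concentration",
    "Sea ice thickness",
    "Sea ice volume per grid cell area",
    "Sea ice u-velocity",
    "Sea ice v-velocity",
    "Sea ice enthalpy",
    "Internal ice stress",
    "Salinity",
    "Snow temperature",
    "Snow depth",
]

def is_valid_choice(choice):
    # Free-text answers are allowed through the "Other: ..." escape hatch.
    return choice in VALID_CHOICES or choice.startswith("Other: ")
```

The same check applies to any of the enumerated cells below, with that cell's own choice list substituted.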
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have used any additional parameterised values (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multiple layers, specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories, specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories, specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the rheology, i.e. the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method by which the basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
8,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sheet To BigQuery
Import data from a sheet and move it to a BigQuery table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Sheet To BigQuery Recipe Parameters
For the sheet, provide the full edit URL.
If the tab does not exist it will be created.
Empty cells in the range will be NULL.
Check Sheets header if first row is a header
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Sheet To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Sheet To BigQuery
Import data from a sheet and move it to a BigQuery table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Credentials used for writing data.
'sheets_url':'',
'sheets_tab':'',
'sheets_range':'',
'dataset':'',
'table':'',
'sheets_header':True,
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Sheet To BigQuery Recipe Parameters
For the sheet, provide the full edit URL.
If the tab does not exist it will be created.
Empty cells in the range will be NULL.
Check Sheets header if first row is a header
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'sheets':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'sheet':{'field':{'name':'sheets_url','kind':'string','order':2,'default':''}},
'tab':{'field':{'name':'sheets_tab','kind':'string','order':3,'default':''}},
'range':{'field':{'name':'sheets_range','kind':'string','order':4,'default':''}},
'header':{'field':{'name':'sheets_header','kind':'boolean','order':9,'default':True}},
'out':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'bigquery':{
'dataset':{'field':{'name':'dataset','kind':'string','order':5,'default':''}},
'table':{'field':{'name':'table','kind':'string','order':6,'default':''}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Sheet To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
8,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFX Components Walk-through
Learning Objectives
Develop a high level understanding of TFX pipeline components.
Learn how to use a TFX Interactive Context for prototype development of TFX pipelines.
Work with the TensorFlow Data Validation (TFDV) library to check and analyze input data.
Utilize the TensorFlow Transform (TFT) library for scalable data preprocessing and feature transformations.
Employ the TensorFlow Model Analysis (TFMA) library for model evaluation.
In this lab, you will work with the Covertype Data Set and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features.
You will utilize TFX Interactive Context to work with the TFX components interactively in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalable database like MySQL, and artifacts in a persistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in the local file system of the Jupyter host.
Setup Note
Step1: Note
Step2: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
Step3: Configure lab settings
Set constants, location paths and other environment settings.
Step4: Creating Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters
Step5: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
In this exercise, you use the CsvExampleGen specialization of ExampleGen to ingest CSV files from a GCS location and emit them as tf.Example records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 train and eval splits. Hint
Step6: Examine the ingested data
Step7: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits
Step8: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
Step9: Inferring data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen component
Step10: Visualize the inferred schema
Step11: Updating the auto-generated schema
In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
Step12: Modify the schema
You can use the protocol buffer APIs to modify the schema.
Hint
Step13: Save the updated schema
Step14: Importing the updated schema using Importer
The Importer component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the Importer component
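A configuration sketch of the component, assuming the InteractiveContext (`context`) from the earlier cells and a hypothetical `updated_schema_dir` directory holding the curated schema.pbtxt (both names are assumptions, not part of the original lab code):

```python
# Sketch only: register an externally curated schema as a pipeline artifact.
from tfx.dsl.components.common.importer import Importer
from tfx.types import standard_artifacts

schema_importer = Importer(
    source_uri=updated_schema_dir,          # assumed directory with schema.pbtxt
    artifact_type=standard_artifacts.Schema,
    reimport=False,
).with_id("SchemaImporter")
context.run(schema_importer)
```

Downstream components can then consume the schema through `schema_importer.outputs["result"]`.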
Step15: Visualize the imported schema
Step16: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by Importer.
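A minimal configuration sketch of that comparison (assuming `context`, `statistics_gen`, and a schema-importing component named `schema_importer` from the surrounding cells):

```python
# Sketch only: compare computed statistics against the curated schema.
from tfx.components import ExampleValidator

example_validator = ExampleValidator(
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_importer.outputs["result"],
).with_id("ExampleValidator")
context.run(example_validator)
```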
ExampleValidator can detect different classes of anomalies. For example it can
Step17: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
Step18: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a Python module that is then provided to the Transform component as an input. This module will be loaded by Transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
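As a sketch of what such a module can look like (the feature and label names below are illustrative assumptions about the covertype columns, not the lab's actual module):

```python
import tensorflow_transform as tft

# Illustrative column names -- adjust to the actual dataset schema.
_NUMERIC_FEATURES = ["Elevation", "Slope"]
_CATEGORICAL_FEATURES = ["Wilderness_Area", "Soil_Type"]
_LABEL_KEY = "Cover_Type"


def preprocessing_fn(inputs):
    """Scales numeric features and vocabulary-encodes categorical ones."""
    outputs = {}
    for key in _NUMERIC_FEATURES:
        # Full-pass z-score scaling computed over the whole dataset.
        outputs[key + "_xf"] = tft.scale_to_z_score(inputs[key])
    for key in _CATEGORICAL_FEATURES:
        # Builds a vocabulary and maps each value to an integer id.
        outputs[key + "_xf"] = tft.compute_and_apply_vocabulary(inputs[key])
    outputs[_LABEL_KEY] = inputs[_LABEL_KEY]
    return outputs
```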
Step19: Configure and run the Transform component.
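A hedged configuration sketch of the component itself (`TRANSFORM_MODULE_FILE` is a hypothetical path to the module defining preprocessing_fn; `context`, `example_gen`, and `schema_importer` are assumed from earlier cells):

```python
# Sketch only: wire the Transform component to the raw examples and schema.
from tfx.components import Transform

transform = Transform(
    examples=example_gen.outputs["examples"],
    schema=schema_importer.outputs["result"],
    module_file=TRANSFORM_MODULE_FILE,  # assumed path to the preprocessing module
).with_id("Transform")
context.run(transform)
```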
Step20: Examine the Transform component's outputs
The Transform component has 2 outputs
Step21: And the transform.examples artifact
Step22: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes
Step23: Create and run the Trainer component
Note that the Trainer component supports passing the field num_steps through the train_args and eval_args arguments.
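For example, a configuration sketch along these lines (`TRAINER_MODULE_FILE` is a hypothetical path to a module defining the training entry point; the upstream component handles are assumed from earlier cells):

```python
# Sketch only: train on the transformed examples, passing num_steps via the
# train_args and eval_args proto fields.
from tfx.components import Trainer
from tfx.proto import trainer_pb2

trainer = Trainer(
    module_file=TRAINER_MODULE_FILE,  # assumed path to the training module
    examples=transform.outputs["transformed_examples"],
    schema=schema_importer.outputs["result"],
    transform_graph=transform.outputs["transform_graph"],
    train_args=trainer_pb2.TrainArgs(num_steps=5000),
    eval_args=trainer_pb2.EvalArgs(num_steps=1000),
)
context.run(trainer)
```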
Step24: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the TensorBoard callback defined in model.py.
Step25: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is a URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator will automatically label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.
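A configuration sketch using the Resolver and LatestBlessedModelStrategy imported at the top of this notebook (assumes the `context` object from earlier cells):

```python
# Sketch only: resolve the latest blessed model (if any) as the baseline.
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy import (
    LatestBlessedModelStrategy,
)
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing

model_resolver = Resolver(
    strategy_class=LatestBlessedModelStrategy,
    model=Channel(type=Model),
    model_blessing=Channel(type=ModelBlessing),
).with_id("latest_blessed_model_resolver")
context.run(model_resolver)
```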
Step26: Configure evaluation metrics and slices.
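As an illustration, an EvalConfig along these lines could define the metrics and slices (the label key and slicing feature name below are assumptions about the dataset, not taken from the lab code):

```python
import tensorflow_model_analysis as tfma

# Sketch only: overall metrics plus one sliced view of model performance.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="Cover_Type")],  # assumed label name
    slicing_specs=[
        tfma.SlicingSpec(),                                  # overall (unsliced)
        tfma.SlicingSpec(feature_keys=["Wilderness_Area"]),  # assumed feature
    ],
    metrics_specs=[
        tfma.MetricsSpec(
            metrics=[
                tfma.MetricConfig(class_name="SparseCategoricalAccuracy"),
                tfma.MetricConfig(class_name="ExampleCount"),
            ]
        )
    ],
)
```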
Step27: Check the model performance validation status
Step28: Visualize evaluation results
You can visualize the evaluation results using the tfma.view.render_slicing_metrics() function from TensorFlow Model Analysis library.
Setup Note
Step29: InfraValidator
The InfraValidator component acts as an additional early warning layer by validating a candidate model in a sandbox version of its serving infrastructure to prevent an unservable model from being pushed to production. Compared to the Evaluator component above, which validates a model's performance, the InfraValidator component validates that a model is able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sandboxed TensorFlow Serving model server from the latest image in a local Docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production.
<img src=../../images/InfraValidator.png width="400">
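A hedged configuration sketch of the component described above (the `trainer`, `example_gen`, and `context` handles are assumed from earlier cells):

```python
# Sketch only: load the model into a local-Docker TensorFlow Serving sandbox
# and issue a handful of requests before blessing it.
from tfx.components import InfraValidator
from tfx.proto import infra_validator_pb2

infra_validator = InfraValidator(
    model=trainer.outputs["model"],
    examples=example_gen.outputs["examples"],
    serving_spec=infra_validator_pb2.ServingSpec(
        tensorflow_serving=infra_validator_pb2.TensorFlowServing(tags=["latest"]),
        local_docker=infra_validator_pb2.LocalDockerConfig(),
    ),
    validation_spec=infra_validator_pb2.ValidationSpec(
        max_loading_time_seconds=60,
        num_tries=3,
    ),
    request_spec=infra_validator_pb2.RequestSpec(
        tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(),
        num_examples=5,
    ),
)
context.run(infra_validator)
```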
Step30: Check the model infrastructure validation status
Step31: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component
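A configuration sketch (assumes the `trainer`, `evaluator`, `infra_validator`, `context`, and `SERVING_MODEL_DIR` objects from the surrounding cells):

```python
# Sketch only: push the model to SERVING_MODEL_DIR once it has been blessed.
from tfx.components import Pusher
from tfx.proto import pusher_pb2

pusher = Pusher(
    model=trainer.outputs["model"],
    model_blessing=evaluator.outputs["blessing"],
    infra_blessing=infra_validator.outputs["blessing"],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=SERVING_MODEL_DIR
        )
    ),
)
context.run(pusher)
```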
Step32: Examine the output of Pusher | Python Code:
import os
import time
from pprint import pprint
import absl
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tfx
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx.components import (
CsvExampleGen,
Evaluator,
ExampleValidator,
InfraValidator,
Pusher,
SchemaGen,
StatisticsGen,
Trainer,
Transform,
)
from tfx.components.trainer import executor as trainer_executor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.components.common.importer import Importer
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy import (
LatestBlessedModelStrategy,
)
from tfx.orchestration import metadata, pipeline
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
from tfx.proto import (
example_gen_pb2,
infra_validator_pb2,
pusher_pb2,
trainer_pb2,
)
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing
Explanation: TFX Components Walk-through
Learning Objectives
Develop a high level understanding of TFX pipeline components.
Learn how to use a TFX Interactive Context for prototype development of TFX pipelines.
Work with the Tensorflow Data Validation (TFDV) library to check and analyze input data.
Utilize the Tensorflow Transform (TFT) library for scalable data preprocessing and feature transformations.
Employ the Tensorflow Model Analysis (TFMA) library for model evaluation.
In this lab, you will work with the Covertype Data Set and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features.
You will utilize TFX Interactive Context to work with the TFX components interactively in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalable database like MySQL, and artifacts in a persistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in the local file system of the Jupyter host.
Setup Note:
Currently, TFMA visualizations do not render properly in JupyterLab. It is recommended to run this notebook in Jupyter Classic Notebook. To switch to Classic Notebook select Launch Classic Notebook from the Help menu.
End of explanation
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
Explanation: Note: this lab was developed and tested with the following TF ecosystem package versions:
Tensorflow Version: 2.6.2
TFX Version: 1.4.0
TFDV Version: 1.4.0
TFMA Version: 0.35.0
If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.
End of explanation
os.environ["PATH"] += os.pathsep + "/home/jupyter/.local/bin"
Explanation: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
End of explanation
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "../../data"
Explanation: Configure lab settings
Set constants, location paths and other environment settings.
End of explanation
PIPELINE_NAME = "tfx-covertype-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
Explanation: Creating Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters:
- pipeline_name - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you.
- pipeline_root - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used.
- metadata_connection_config - Optional metadata_store_pb2.ConnectionConfig instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name "metadata.sqlite" will be used.
End of explanation
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
# TODO: Your code to configure train data split
# TODO: Your code to configure eval data split
example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
]
)
)
example_gen = tfx.components.CsvExampleGen(
input_base=DATA_ROOT, output_config=output_config
).with_id("CsvExampleGen")
context.run(example_gen)
Explanation: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
In this exercise, you use the CsvExampleGen specialization of ExampleGen to ingest CSV files from a GCS location and emit them as tf.Example records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 train and eval splits. Hint: review the ExampleGen proto definition to split your data with hash buckets.
End of explanation
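The 4:1 hash-bucket split configured above can be pictured with plain Python hashing. This is an illustration of the idea only, ExampleGen performs its own hashing internally:

```python
import hashlib

def bucket_for(record: str, num_buckets: int = 5) -> int:
    """Deterministically assigns a record to one of num_buckets hash buckets."""
    digest = hashlib.md5(record.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

records = [f"row-{i}" for i in range(10_000)]
train = [r for r in records if bucket_for(r) < 4]   # buckets 0-3 -> ~80%
evals = [r for r in records if bucket_for(r) == 4]  # bucket 4   -> ~20%
print(len(train) / len(records))  # roughly 0.8
```

Because the assignment is a pure function of the record contents, re-running the split yields the same partitions.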
examples_uri = example_gen.outputs["examples"].get()[-1].uri
tfrecord_filenames = [
os.path.join(examples_uri, "Split-train", name)
for name in os.listdir(os.path.join(examples_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: Examine the ingested data
End of explanation
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs["examples"]
).with_id("StatisticsGen")
context.run(statistics_gen)
Explanation: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits: train and eval.
<img src=../../images/StatisticsGen.png width="200">
Configure and run the StatisticsGen component
End of explanation
context.show(statistics_gen.outputs["statistics"])
Explanation: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
End of explanation
schema_gen = SchemaGen(
statistics=statistics_gen.outputs["statistics"], infer_feature_shape=False
).with_id("SchemaGen")
context.run(schema_gen)
Explanation: Infering data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from the data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen components
End of explanation
context.show(schema_gen.outputs["schema"])
Explanation: Visualize the inferred schema
End of explanation
schema_proto_path = "{}/{}".format(
schema_gen.outputs["schema"].get()[0].uri, "schema.pbtxt"
)
schema = tfdv.load_schema_text(schema_proto_path)
Explanation: Updating the auto-generated schema
In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
End of explanation
# TODO: Your code to restrict the categorical feature Cover_Type between the values of 0 and 6.
# TODO: Your code to restrict the numeric feature Slope between 0 and 90.
tfdv.set_domain(
schema,
"Cover_Type",
schema_pb2.IntDomain(name="Cover_Type", min=0, max=6, is_categorical=True),
)
tfdv.set_domain(
schema, "Slope", schema_pb2.IntDomain(name="Slope", min=0, max=90)
)
tfdv.display_schema(schema=schema)
Explanation: Modify the schema
You can use the protocol buffer APIs to modify the schema.
Hint: Review the TFDV library API documentation on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the Tensorflow Metadata proto definition for configuration options.
End of explanation
schema_dir = os.path.join(ARTIFACT_STORE, "schema")
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, "schema.pbtxt")
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: Save the updated schema
End of explanation
schema_importer = Importer(
source_uri=schema_dir, artifact_type=tfx.types.standard_artifacts.Schema
).with_id("SchemaImporter")
context.run(schema_importer)
Explanation: Importing the updated schema using Importer
The Importer component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the Importer component
End of explanation
context.show(schema_importer.outputs["result"])
Explanation: Visualize the imported schema
End of explanation
# TODO: Complete ExampleValidator
# Hint: review the visual above and review the documentation on ExampleValidator's inputs and outputs:
# https://www.tensorflow.org/tfx/guide/exampleval
# Make sure you use the output of the schema_importer component created above.
example_validator = ExampleValidator(
statistics=statistics_gen.outputs["statistics"],
schema=schema_importer.outputs["result"],
).with_id("ExampleValidator")
context.run(example_validator)
Explanation: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by Importer.
ExampleValidator can detect different classes of anomalies. For example it can:
perform validity checks by comparing data statistics against a schema
detect training-serving skew by comparing training and serving data.
detect data drift by looking at a series of data.
The ExampleValidator component validates the data in the eval split only. Other splits are ignored.
<img src=../../images/ExampleValidator.png width="350">
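For intuition, the first class of check — validity against a schema — boils down to comparing observed statistics with declared domains. This is a toy sketch with plain dicts standing in for the statistics and schema protos; it is not the TFDV implementation:

```python
def check_domain(stats, schema):
    """Flag features whose observed (min, max) fall outside the schema's
    declared domain. Plain dicts stand in for the TFX protos here."""
    anomalies = []
    for name, (lo, hi) in schema.items():
        observed = stats.get(name)
        if observed is None:
            anomalies.append((name, "missing feature"))
            continue
        obs_min, obs_max = observed
        if obs_min < lo or obs_max > hi:
            anomalies.append((name, "out of domain"))
    return anomalies

# Slope is constrained to 0-90 in the schema above; an observed max of 112 is an anomaly.
schema = {"Slope": (0, 90), "Cover_Type": (0, 6)}
stats = {"Slope": (0, 112), "Cover_Type": (1, 6)}
print(check_domain(stats, schema))  # → [('Slope', 'out of domain')]
```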
Configure and run the ExampleValidator component
End of explanation
context.show(example_validator.outputs["anomalies"])
Explanation: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
End of explanation
TRANSFORM_MODULE = "preprocessing.py"
!cat {TRANSFORM_MODULE}
Explanation: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a Python module that is then provided to the Transform component as an input. This module will be loaded by Transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
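The module itself is printed below; a common building block in such a preprocessing_fn is z-score scaling (what tft.scale_to_z_score computes over the full dataset in a Beam job). Assuming the module applies standard numeric scaling, here is the underlying math in plain NumPy — a sketch of the statistic, not the TFT API:

```python
import numpy as np

def scale_to_z_score(values):
    """Standardize a column to zero mean and unit variance -- the statistic
    tft.scale_to_z_score computes over the entire dataset."""
    values = np.asarray(values, dtype=np.float64)
    return (values - values.mean()) / values.std()

slope = np.array([5.0, 10.0, 15.0, 20.0])
scaled = scale_to_z_score(slope)
print(scaled.mean(), scaled.std())  # ≈ 0.0 and 1.0
```

The important difference from ad-hoc scaling is that TFT computes these full-pass statistics once during training and bakes them into the emitted graph, so serving-time inputs are scaled with the very same mean and standard deviation.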
End of explanation
transform = Transform(
examples=example_gen.outputs["examples"],
schema=schema_importer.outputs["result"],
module_file=TRANSFORM_MODULE,
).with_id("Transform")
context.run(transform)
Explanation: Configure and run the Transform component.
End of explanation
os.listdir(transform.outputs["transform_graph"].get()[0].uri)
Explanation: Examine the Transform component's outputs
The Transform component has 2 outputs:
transform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples - contains the preprocessed training and evaluation data.
Take a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories:
End of explanation
os.listdir(transform.outputs["transformed_examples"].get()[0].uri)
transform_uri = transform.outputs["transformed_examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, "Split-train", name)
for name in os.listdir(os.path.join(transform_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: And the transformed_examples artifact
End of explanation
TRAINER_MODULE_FILE = "model.py"
!cat {TRAINER_MODULE_FILE}
Explanation: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes:
tf.Examples used for training and eval.
A user provided module file that defines the trainer logic.
A data schema created by SchemaGen or imported by Importer.
A proto definition of train args and eval args.
An optional transform graph produced by upstream Transform component.
An optional base model used for scenarios such as warm-starting training.
<img src=../../images/Trainer.png width="400">
Define the trainer module
To configure Trainer, you need to encapsulate your training code in a Python module that is then provided to the Trainer as an input.
End of explanation
trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs["transformed_examples"],
schema=schema_importer.outputs["result"],
transform_graph=transform.outputs["transform_graph"],
train_args=trainer_pb2.TrainArgs(splits=["train"], num_steps=2),
eval_args=trainer_pb2.EvalArgs(splits=["eval"], num_steps=1),
).with_id("Trainer")
context.run(trainer)
Explanation: Create and run the Trainer component
Note that the Trainer component supports passing the field num_steps through the train_args and eval_args arguments.
End of explanation
logs_path = trainer.outputs["model_run"].get()[0].uri
print(logs_path)
Explanation: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.
End of explanation
model_resolver = Resolver(
strategy_class=LatestBlessedModelStrategy,
model=Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=Channel(type=tfx.types.standard_artifacts.ModelBlessing),
).with_id("LatestBlessedModelResolver")
context.run(model_resolver)
Explanation: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is an URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular slices of the test dataset, as defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator will automatically label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the ResolverNode to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.
End of explanation
# TODO: Your code here to create a tfma.MetricThreshold.
# Review the API documentation here: https://www.tensorflow.org/tfx/model_analysis/api_docs/python/tfma/MetricThreshold
# Hint: Review the API documentation for tfma.GenericValueThreshold to constrain accuracy between 50% and 99%.
accuracy_threshold = tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={"value": 0.0}, upper_bound={"value": 0.99}
)
)
metrics_specs = tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(
class_name="SparseCategoricalAccuracy", threshold=accuracy_threshold
),
tfma.MetricConfig(class_name="ExampleCount"),
]
)
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key="Cover_Type")],
metrics_specs=[metrics_specs],
slicing_specs=[
tfma.SlicingSpec(),
tfma.SlicingSpec(feature_keys=["Wilderness_Area"]),
],
)
eval_config
model_analyzer = Evaluator(
examples=example_gen.outputs["examples"],
model=trainer.outputs["model"],
baseline_model=model_resolver.outputs["model"],
eval_config=eval_config,
).with_id("ModelEvaluator")
context.run(model_analyzer, enable_cache=False)
Explanation: Configure evaluation metrics and slices.
End of explanation
model_blessing_uri = model_analyzer.outputs["blessing"].get()[0].uri
!ls -l {model_blessing_uri}
Explanation: Check the model performance validation status
End of explanation
evaluation_uri = model_analyzer.outputs["evaluation"].get()[0].uri
evaluation_uri
!ls {evaluation_uri}
eval_result = tfma.load_eval_result(evaluation_uri)
eval_result
tfma.view.render_slicing_metrics(eval_result)
tfma.view.render_slicing_metrics(eval_result, slicing_column="Wilderness_Area")
Explanation: Visualize evaluation results
You can visualize the evaluation results using the tfma.view.render_slicing_metrics() function from TensorFlow Model Analysis library.
Setup Note: Currently, TFMA visualizations don't render in JupyterLab. Make sure that you run this notebook in Classic Notebook.
End of explanation
infra_validator = InfraValidator(
model=trainer.outputs["model"],
examples=example_gen.outputs["examples"],
serving_spec=infra_validator_pb2.ServingSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServing(
tags=["latest"]
),
local_docker=infra_validator_pb2.LocalDockerConfig(),
),
validation_spec=infra_validator_pb2.ValidationSpec(
max_loading_time_seconds=60,
num_tries=5,
),
request_spec=infra_validator_pb2.RequestSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(),
num_examples=5,
),
).with_id("ModelInfraValidator")
context.run(infra_validator, enable_cache=False)
Explanation: InfraValidator
The InfraValidator component acts as an additional early warning layer by validating a candidate model in a sandboxed version of its serving infrastructure, to prevent an unservable model from being pushed to production. Whereas the Evaluator component above validates a model's performance, the InfraValidator component validates that the model is actually able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sandboxed TensorFlow Serving model server from the latest image in a local Docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production.
<img src=../../images/InfraValidator.png width="400">
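The ValidationSpec below bounds the check with max_loading_time_seconds and num_tries. The control flow amounts to a bounded retry loop, sketched here with a stand-in predicate in place of actually launching a model server (the function names are illustrative, not the TFX internals):

```python
import time

def validate_with_retries(can_serve, num_tries=5, max_loading_time_seconds=60):
    """Bounded retry loop mirroring InfraValidator's ValidationSpec.
    `can_serve` stands in for loading the model and issuing sample requests."""
    deadline = time.monotonic() + max_loading_time_seconds
    for _ in range(num_tries):
        if time.monotonic() > deadline:
            break  # loading-time budget exhausted
        if can_serve():
            return True  # model loaded and answered requests: "blessed"
    return False

# A server that only becomes ready on its third attempt still passes:
attempts = {"n": 0}
def flaky_server():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(validate_with_retries(flaky_server))  # → True
```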
End of explanation
infra_blessing_uri = infra_validator.outputs["blessing"].get()[0].uri
!ls -l {infra_blessing_uri}
Explanation: Check the model infrastructure validation status
End of explanation
trainer.outputs["model"]
pusher = Pusher(
model=trainer.outputs["model"],
model_blessing=model_analyzer.outputs["blessing"],
infra_blessing=infra_validator.outputs["blessing"],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR
)
),
).with_id("ModelPusher")
context.run(pusher)
Explanation: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
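The core of that push logic can be sketched on the plain filesystem — an illustrative stand-in, not the TFX Pusher implementation: deploy only if every blessing is present, then copy the model into a versioned directory under the serving path.

```python
import os
import shutil
import tempfile
import time

def push_if_blessed(model_dir, blessings, serving_model_dir):
    """Copy the model into a versioned serving directory, but only
    when every blessing flag (evaluation, infra, ...) is set."""
    if not all(blessings):
        return None  # not blessed: nothing gets deployed
    version = str(int(time.time()))  # servers typically load the highest version
    dest = os.path.join(serving_model_dir, version)
    shutil.copytree(model_dir, dest)
    return dest

# Demo on throwaway directories:
root = tempfile.mkdtemp()
model_dir = os.path.join(root, "model")
os.makedirs(model_dir)
open(os.path.join(model_dir, "saved_model.pb"), "w").close()
serving_dir = os.path.join(root, "serving")
os.makedirs(serving_dir)

dest = push_if_blessed(model_dir, [True, True], serving_dir)
print(dest is not None and os.path.exists(os.path.join(dest, "saved_model.pb")))  # → True
```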
Configure and run the Pusher component
End of explanation
pusher.outputs
latest_pushed_model = os.path.join(
SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))
)
!ls {latest_pushed_model}
Explanation: Examine the output of Pusher
End of explanation
Description:
Step by step code for abide_motion_wrapper.py
Step1: Read in the phenotypic behavioural data
This is the Phenotypic_V1_0b_preprocessed1.csv file. It's saved in the DATA folder.
You can find the explanations of all the columns in the ABIDE_LEGEND_V1.02.pdf file.
We're going to load the data into a pandas data frame.
Step2: Our measure of interest is func_perc_fd so lets get rid of all participants who don't have a value!
We also want to make sure our data has the data so lets get rid of all participants who's file ID is "no_filename".
We also want to know the age in years for each participant.
Step3: Create a stratified sample
We want to see how similar the average connectivity values are when there are no differences between the groups.
Therefore we need to split participants into matched samples.
What do they need to be matched on?!
DSM_IV_TR -- their diagnosis according to the DSM IV (0
Step4: Look at distribution of motion in sample
Step5: To avoid oversampling the many low movers, we are going to split up our data into 4 motion quartiles and evenly sample from them
To do this we are going to
Step6: Let's check to make sure we correctly sorted our subjects by motion
Step7: Now, based on our sample of motion, create motion quartile cutoffs
Step8: Then create bins of subjects by motion quartile cutoffs
Step9: Look at what our sampling look like
Step10: Looks good! We are evently sampling from all motion bins
Only keep 2n/4 participants from each bin
Remember to shuffle these remaining participants to ensure you get different sub samples each time you run the code.
Step11: Append all the samples together into one big dataframe and then sort according to matching measures
Step12: Split this data frame into two and VOILA
Step13: Actually this can be implemented as a function
The inputs to split_two_matched_samples are the master data frame (df), the motion threshold (motion_thresh), lower age limit (age_l), upper age limit (age_u) and the number of participants (n) in each group.
Step14: Now that we have our groups, we are going to want to load in the actual AAL ROI times series files and make individual and group correlation matrices
We already have the aal time series files downloaded in the DATA folder, but if you wanted to download them yourselves, you can use the code below
Step16: The function below (make_group_corr_mat) creates individual and group roi-roi correlation matrices by
Step17: Make the group correlation matrices for the two different groups.
Step18: Check out the distributions of the r and z values in one of the correlation matrices
Just to see what happens to the data when you apply the arctanh transform.
(The answer is: not too much!)
Step19: Visually check the average correlation matrices for the two groups
Step20: Scatter plot of the two connectivity matrices
Step22: Looks very similar!
Now that we have the roi-roi mean correlation matrices for each group, we want to see how similar they are quantitatively
We expect them to be (about) exactly the same, thus we are going to see how far the relationship between these two correlation matrices deviates from the unity line. This is a twist on the classical R squared. You can read more about it here: https://en.wikipedia.org/wiki/Coefficient_of_determination
Step23: Let's first visualize how far off our actual two sample correlation is from the unity line
Black line = unity line
Blue line = best fit line
Step24: This looks like a very good fit!
Let's check what the actual Rsq is with our function - we expect it to be super high!
Step26: Run this split-half calculation multiple times to get a distribution of R square values
We want to build up a distribution of R squared values per specific motion cutoff, age range, and N combination
Step27: Plot R sqr values
Step28: This is not a normal distribution, so when we want to plot the average Rsq value for a certain combination of age, motion cutoff and n, we should take the median of Rsq not the mean
We can wrap everything we just did into one big function
The function iterates through different sample sizes, age bins, and motion cutoffs for a specified number of permutations and
Step29: If you want to just run it with abide_motion_wrapper.py you will need to use loop_abide_motion_qsub_array.sh and SgeAbideMotion.sh
loop_abide_motion_qsub_array.sh loops through age, motion, and sample sizes of interest. To actually run the code (looping through all iterations) run SgeAbideMotion.sh. This is also where you can choose how to submit jobs (and parallelize or not)
Once you have finished running the code, you will want to summarize the data for plotting
This code grabs and formats the data for plotting into a summary file called SummaryRsqs.csv that has columns for | Python Code:
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib import ticker
from glob import glob
import numpy as np
import os
import pandas as pd
from scipy.stats import linregress, pearsonr, spearmanr
import nibabel as nib
import urllib
import seaborn as sns
sns.set_context('notebook', font_scale=2)
sns.set_style('white')
Explanation: Step by step code for abide_motion_wrapper.py
End of explanation
behav_data_f = '../Phenotypic_V1_0b_preprocessed.csv'
df = pd.read_csv(behav_data_f)
Explanation: Read in the phenotypic behavioural data
This is the Phenotypic_V1_0b_preprocessed1.csv file. It's saved in the DATA folder.
You can find the explanations of all the columns in the ABIDE_LEGEND_V1.02.pdf file.
We're going to load the data into a pandas data frame.
End of explanation
df = df.loc[df['func_perc_fd'].notnull(), :]
df = df.loc[df['FILE_ID']!='no_filename', :]
df['AGE_YRS'] = np.floor(df['AGE_AT_SCAN'])
Explanation: Our measure of interest is func_perc_fd so let's get rid of all participants who don't have a value!
We also want to make sure we actually have imaging data, so let's get rid of all participants whose file ID is "no_filename".
We also want to know the age in years for each participant.
End of explanation
motion_thresh = 80
df_samp_motion = df.loc[df['func_perc_fd']<motion_thresh, :]
age_l, age_u = 6, 18
df_samp = df_samp_motion.loc[(df_samp_motion['AGE_YRS']>=age_l) & (df_samp_motion['AGE_YRS']<=age_u), :]
Explanation: Create a stratified sample
We want to see how similar the average connectivity values are when there are no differences between the groups.
Therefore we need to split participants into matched samples.
What do they need to be matched on?!
DSM_IV_TR -- their diagnosis according to the DSM IV (0: control, 1: ASD, 2: Asp, 3: PDD)
SITE_ID -- the scanning site
AGE_YRS -- age in years
SEX -- sex (1: male, 2: female)
We also want to make sure that we sample evenly from the distribution of motion. This will prevent us from over sampling the low motion people, for which we have more data on.
Threshold your sample according to the motion/age cut offs
We're going to systematically change the upper threshold of the percent of volumes that exceed 0.2mm frame to frame dispacement.
And we're also going to select our lower and upper age limits. NOTE that these are inclusive boundaries. So for example a lower limit of 6 and an upper limit of 10 will include participants who are 6, 7, 8, 9 and 10 years old.
func_perc_fd
AGE_YRS
End of explanation
plt.hist(np.array(df_samp["func_perc_fd"]), bins=40)
plt.xlabel('func_perc_fd')
plt.ylabel('count')
Explanation: Look at distribution of motion in sample
End of explanation
##sort subjects based on motion
sort_column_list = ['func_perc_fd']
df_motion_sorted = df_samp.sort_values(by=sort_column_list)
#check that sorted worked!
check_df = df_motion_sorted[['func_perc_fd', "SUB_ID"]]  # use a new name so we don't clobber the full df
check_df.head(10)
#df.tail(10)
##rank subjects by motion
r=range(len(df_motion_sorted))
r_df=pd.DataFrame(r)
r_df.columns = ['rank']
r_df['newcol'] = df_motion_sorted.index
r_df.set_index('newcol', inplace=True)
r_df.index.names = [None]
df_motion_sorted_rank=pd.concat ([r_df,df_motion_sorted], axis=1)
Explanation: To avoid oversampling the many low movers, we are going to split up our data into 4 motion quartiles and evenly sample from them
To do this we are going to:
* sort our sample based on motion and then add a column of their ranking
* based on our sample of motion, create motion quartile cutoffs
* create bins of subjects by motion quartile cutoffs
First we will sort our sample based on motion ('Func_perc_fd')
End of explanation
plt.scatter(np.array(df_motion_sorted_rank["rank"]), np.array(df_motion_sorted_rank['func_perc_fd']))
plt.xlabel('rank')
plt.ylabel('func_perc_fd')
Explanation: Let's check to make sure we correctly sorted our subjects by motion
End of explanation
##create bins of subjects in quartiles
l=len(df_motion_sorted_rank)
chunk=l/4
chunk1=chunk
chunk2=2*chunk
chunk3=3*chunk
chunk4=l
Explanation: Now, based on our sample of motion, create motion quartile cutoffs
End of explanation
first=df_motion_sorted_rank[df_motion_sorted_rank['rank']<=chunk1]
second=df_motion_sorted_rank[(df_motion_sorted_rank['rank']>chunk1) & (df_motion_sorted_rank['rank']<=chunk2)]
third=df_motion_sorted_rank[(df_motion_sorted_rank['rank']>chunk2) & (df_motion_sorted_rank['rank']<=chunk3)]
fourth=df_motion_sorted_rank[df_motion_sorted_rank['rank']>=chunk3]
Explanation: Then create bins of subjects by motion quartile cutoffs
End of explanation
motion_boundaries = (first.func_perc_fd.max(), second.func_perc_fd.max(), third.func_perc_fd.max())
for boundary in motion_boundaries:
print boundary
plt.hist(np.array(df["func_perc_fd"]),bins=40)
plt.xlabel('func_perc_fd')
plt.ylabel('count')
for boundary in motion_boundaries:
plt.plot((boundary, boundary), (0,350), 'k-')
Explanation: Look at what our sampling looks like
End of explanation
##shuffle
first_rand = first.reindex(np.random.permutation(first.index))
second_rand = second.reindex(np.random.permutation(second.index))
third_rand = third.reindex(np.random.permutation(third.index))
fourth_rand = fourth.reindex(np.random.permutation(fourth.index))
#Only keep the top 2*n/4 participants.
n=50
n_samp=(n*2)/4
n_samp
first_samp_2n = first_rand.iloc[:n_samp, :]
second_samp_2n = second_rand.iloc[:n_samp, :]
third_samp_2n = third_rand.iloc[:n_samp, :]
fourth_samp_2n = fourth_rand.iloc[:n_samp, :]
Explanation: Looks good! We are evenly sampling from all motion bins
Only keep 2n/4 participants from each bin
Remember to shuffle these remaining participants to ensure you get different sub samples each time you run the code.
End of explanation
#append these together
frames = [first_samp_2n, second_samp_2n, third_samp_2n,fourth_samp_2n]
final_df = pd.concat(frames)
sort_column_list = ['DSM_IV_TR', 'DX_GROUP', 'SITE_ID', 'SEX', 'AGE_AT_SCAN']
df_samp_2n_sorted = final_df.sort_values(by=sort_column_list)
Explanation: Append all the samples together into one big dataframe and then sort according to matching measures
End of explanation
df_grp_A = df_samp_2n_sorted.iloc[::2, :]
df_grp_B = df_samp_2n_sorted.iloc[1::2, :]
Explanation: Split this data frame into two and VOILA
End of explanation
from abide_motion_wrapper import split_two_matched_samples
df_A, df_B = split_two_matched_samples(df, 80, 6, 18, 200)
print df_A[['AGE_AT_SCAN', 'DX_GROUP', 'SEX']].describe()
print df_B[['AGE_AT_SCAN', 'DX_GROUP', 'SEX']].describe()
Explanation: Actually this can be implemented as a function
The inputs to split_two_matched_samples are the master data frame (df), the motion threshold (motion_thresh), lower age limit (age_l), upper age limit (age_u) and the number of participants (n) in each group.
End of explanation
## to grab data
for f_id in df.loc[:, 'FILE_ID']:
if not (f_id == "no_filename") and not os.path.isfile("../DATA/{}_rois_aal.1D".format(f_id)):
print f_id
        testfile = urllib.URLopener()
        testfile.retrieve(("https://s3.amazonaws.com/fcp-indi/data/Projects"
                           "/ABIDE_Initiative/Outputs/cpac/filt_noglobal/rois_aal"
                           "/{}_rois_aal.1D".format(f_id)),
                          "../DATA/{}_rois_aal.1D".format(f_id))
Explanation: Now that we have our groups, we are going to want to load in the actual AAL ROI times series files and make individual and group correlation matrices
We already have the aal time series files downloaded in the DATA folder, but if you wanted to download them yourselves, you can use the code below
End of explanation
## looking at an example aal time series file for one subject
test = '../DATA/NYU_0051076_rois_aal.1D'
tt = pd.read_csv(test, sep='\t')
tt.head()
def make_group_corr_mat(df):
    """This function reads in each subject's AAL roi time series file and creates an roi-roi correlation matrix
    for each subject, then stacks them all together. The final output is a 3D matrix of all subjects'
    roi-roi correlations, a mean roi-roi correlation matrix and an roi-roi variance matrix.

    **NOTE WELL** This returns correlations transformed by the Fisher z, aka arctanh, function.
    """
for i, (sub, f_id) in enumerate(df[['SUB_ID', 'FILE_ID']].values):
# read each subjects aal roi time series files
ts_df = pd.read_table('../DATA/{}_rois_aal.1D'.format(f_id))
# create a correlation matrix from the roi all time series files
corr_mat_r = ts_df.corr()
# the correlations need to be transformed to Fisher z, which is
# equivalent to the arctanh function.
corr_mat_z = np.arctanh(corr_mat_r)
# for the first subject, create a correlation matrix of zeros
# that is the same dimensions as the aal roi-roi matrix
if i == 0:
all_corr_mat = np.zeros([corr_mat_z.shape[0], corr_mat_z.shape[1], len(df)])
# now add the correlation matrix you just created for each subject to the all_corr_mat matrix (3D)
all_corr_mat[:, :, i] = corr_mat_z
# create the mean correlation matrix (ignore nas - sometime there are some...)
av_corr_mat = np.nanmean(all_corr_mat, axis=2)
# create the group covariance matrix (ignore nas - sometime there are some...)
var_corr_mat = np.nanvar(all_corr_mat, axis=2)
return all_corr_mat, av_corr_mat, var_corr_mat
Explanation: The function below (make_group_corr_mat) creates individual and group roi-roi correlation matrices by:
Reading in each subject's AAL roi time series file (in the DATA folder). Each column is an AAL ROI and the rows below correspond to its average time series.
Creating roi-roi correlation matrices for each subject
Fisher z transforming the correlation matrices
Concatenating all subjects' roi-roi matrices and creating a mean and variance roi-roi correlation matrix
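The Fisher z step is just np.arctanh applied elementwise; a quick numeric check shows it stretches the tails (which is why it is applied before averaging correlations) and is exactly invertible with np.tanh:

```python
import numpy as np

r_values = np.array([-0.9, -0.5, 0.0, 0.5, 0.9])
z_values = np.arctanh(r_values)  # the Fisher z transform used in make_group_corr_mat

# The tails are stretched: |z| grows faster than |r| as correlations approach +/-1.
print(np.round(z_values, 3))

# The transform is exactly invertible, so averaged z values can be mapped back to r.
print(np.allclose(np.tanh(z_values), r_values))  # → True
```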
End of explanation
M_grA, M_grA_av, M_grA_var = make_group_corr_mat(df_A)
M_grB, M_grB_av, M_grB_var = make_group_corr_mat(df_B)
Explanation: Make the group correlation matrices for the two different groups.
End of explanation
sub, f_id = df[['SUB_ID', 'FILE_ID']].values[0]
ts_df = pd.read_table('../DATA/{}_rois_aal.1D'.format(f_id))
corr_mat_r = ts_df.corr()
corr_mat_z = np.arctanh(corr_mat_r)
r_array = np.triu(corr_mat_r, k=1).reshape(-1)
z_array = np.triu(corr_mat_z, k=1).reshape(-1)
sns.distplot(r_array[r_array != 0.0], label='r values')
sns.distplot(z_array[z_array != 0.0], label='z values')
plt.axvline(c='k', linewidth=0.5)
plt.legend()
plt.title('Pairwise correlation values\nfor an example subject')
sns.despine()
Explanation: Check out the distributions of the r and z values in one of the correlation matrices
Just to see what happens to the data when you apply the arctanh transform.
(The answer is: not too much!)
End of explanation
fig, ax_list = plt.subplots(1,2)
ax_list[0].imshow(M_grA_av, interpolation='none', cmap='RdBu_r', vmin=-1, vmax=1)
ax_list[1].imshow(M_grB_av, interpolation='none', cmap='RdBu_r', vmin=-1, vmax=1)
for ax in ax_list:
ax.set_xticklabels([])
ax.set_yticklabels([])
fig.suptitle('Comparison of average\nconnectivity matrices for two groups')
plt.tight_layout()
Explanation: Visually check the average correlation matrices for the two groups
End of explanation
indices = np.triu_indices_from(M_grA_av, k=1)
grA_values = M_grA_av[indices]
grB_values = M_grB_av[indices]
min_val = np.min([np.min(grA_values), np.min(grB_values)])
max_val = np.max([np.max(grA_values), np.max(grB_values)])
fig, ax = plt.subplots(figsize=(6,5))
ax.plot([min_val, max_val], [min_val, max_val], c='k', zorder=-1)
ax.scatter(grA_values, grB_values, color=sns.color_palette()[3], s=10, edgecolor='face')
ticks = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(ticks)
ax.yaxis.set_major_locator(ticks)
plt.xlabel('average roi-roi matrix group A', fontsize=15)
plt.ylabel('average roi-roi matrix group B', fontsize=15)
ax.set_title('Correlation between average\nmatrices for two groups')
plt.tight_layout()
Explanation: Scatter plot of the two connectivity matrices
End of explanation
def calc_rsq(av_corr_mat_A, av_corr_mat_B):
From wikipedia: https://en.wikipedia.org/wiki/Coefficient_of_determination
Rsq = 1 - (SSres / SStot)
SSres is calculated as the sum of square errors (where the error
is the difference between x and y).
SStot is calculated as the total sum of squares in y.
# Get the data we need
inds = np.triu_indices_from(av_corr_mat_B, k=1)
x = av_corr_mat_A[inds]
y = av_corr_mat_B[inds]
# Calculate the error/residuals
res = y - x
SSres = np.sum(res**2)
# Sum up the total error in y
y_var = y - np.mean(y)
SStot = np.sum(y_var**2)
# R squared
Rsq = 1 - (SSres/SStot)
return Rsq
Explanation: Looks very similar!
Now that we have the mean roi-roi correlation matrices for each group, we want to quantify how similar they are
We expect them to be nearly identical, so we measure how far the relationship between these two correlation matrices deviates from the unity line. This is a twist on the classical R squared. You can read more about it here: https://en.wikipedia.org/wiki/Coefficient_of_determination
End of explanation
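The unity-line R squared defined above can be exercised on synthetic vectors before trusting it on real matrices. This is a minimal sketch (assuming only NumPy) of the same calculation without the matrix plumbing:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, 100)

def unity_rsq(x, y):
    # Residuals are measured against the unity line y = x,
    # not against a fitted regression line.
    ss_res = np.sum((y - x) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Identical vectors sit exactly on the unity line...
rsq_perfect = unity_rsq(x, x.copy())
# ...and a little noise pulls the score below 1.
rsq_noisy = unity_rsq(x, x + rng.normal(0, 0.1, 100))
```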
indices = np.triu_indices_from(M_grA_av, k=1)
grA_values = M_grA_av[indices]
grB_values = M_grB_av[indices]
min_val = np.min([np.nanmin(grA_values), np.nanmin(grB_values)])
max_val = np.max([np.nanmax(grA_values), np.nanmax(grB_values)])
mask_nans = np.logical_or(np.isnan(grA_values), np.isnan(grB_values))
fig, ax = plt.subplots(figsize=(6,5))
sns.regplot(grA_values[~mask_nans],
grB_values[~mask_nans],
color = sns.color_palette()[5],
scatter_kws={'s' : 10, 'edgecolor' : 'face'},
ax=ax)
ax.plot([min_val, max_val], [min_val, max_val], c='k', zorder=-1)
ax.axhline(0, color='k', linewidth=0.5)
ax.axvline(0, color='k', linewidth=0.5)
ticks = ticker.MaxNLocator(5)
ax.xaxis.set_major_locator(ticks)
ax.yaxis.set_major_locator(ticks)
ax.set_title('Correlation between average\nmatrices for two groups\nblack line = unity line\nblue line = best fit line')
plt.tight_layout()
Explanation: Let's first visualize how far our actual two-sample correlation deviates from the unity line
Black line = unity line
Blue line = best fit line
End of explanation
Rsq=calc_rsq( M_grA_av, M_grB_av)
Rsq
Explanation: This looks like a very good fit!
Let's check what the actual Rsq is with our function - we expect it to be super high!
End of explanation
def split_half_outcome(df, motion_thresh, age_l, age_u, n, n_perms=100):
This function returns the R squared values that measure split-half reliability.
It takes in a dataframe, a motion threshold, an age lower limit (age_l), an age upper limit (age_u), a sample size (n),
and a number of permutations (n_perms, default 100). This function essentially splits a data frame
into two matched samples (split_two_matched_samples.py), then creates mean roi-roi correlation matrices per sample
(make_group_corr_mat.py) and then calculates the R squared (calc_rsq.py) between the two samples'
correlation matrices and returns all the permutation coefficients of determination in a numpy array.
#set up data frame of average R squared to fill up later
Rsq_list = []
#Do this in each permutation
for i in range(n_perms):
#create two matched samples split on motion_thresh, age upper, age lower, and n
df_A, df_B = split_two_matched_samples(df, motion_thresh, age_l, age_u, n)
#make the matrix of all subjects roi-roi correlations, make the mean corr mat, and make covariance cor mat
#do this for A and then B
all_corr_mat_A, av_corr_mat_A, var_corr_mat_A = make_group_corr_mat(df_A)
all_corr_mat_B, av_corr_mat_B, var_corr_mat_B = make_group_corr_mat(df_B)
#calculate the R squared between the two matrices
Rsq = calc_rsq(av_corr_mat_A, av_corr_mat_B)
#print "Iteration " + str(i) + ": R^2 = " + str(Rsq)
#build up R squared output
Rsq_list += [Rsq]
return np.array(Rsq_list)
rsq_list = split_half_outcome(df, 50, 6, 18, 20, n_perms=100)
Explanation: Run this split-half calculation multiple times to get a distribution of R squared values
We want to build up a distribution of R squared values per specific motion cutoff, age range, and N combination
End of explanation
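The summary statistics used later (the median and a 95% percentile interval) can be sketched on a stand-in distribution; the skew below is assumed for illustration, not taken from the real data:

```python
import numpy as np

rng = np.random.RandomState(42)
# Stand-in for rsq_list: left-skewed values clustered just below 1.
rsq_sim = 1 - rng.exponential(0.01, 1000)

med_rsq = np.median(rsq_sim)
ci_width = np.percentile(rsq_sim, 97.5) - np.percentile(rsq_sim, 2.5)

# With a left skew the mean is dragged below the median, which is why
# the median is the safer summary statistic here.
mean_rsq = np.mean(rsq_sim)
```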
sns.distplot(rsq_list)
Explanation: Plot the R squared values
End of explanation
def abide_motion_wrapper(motion_thresh, age_l, age_u, n, n_perms=1000, overwrite=True):
behav_data_f = '../Phenotypic_V1_0b_preprocessed1.csv'
f_name = 'RESULTS/rsq_{:03.0f}pct_{:03.0f}subs_{:02.0f}to{:02.0f}.csv'.format(motion_thresh, n, age_l, age_u)
# By default this code will recreate files even if they already exist
# (overwrite=True)
# If you don't want to do this though, set overwrite to False and
# this step will skip over the analysis if the file already exists
if not overwrite:
# If the file exists then skip this loop
if os.path.isfile(f_name):
return
df = read_in_data(behav_data_f)
rsq_list = split_half_outcome(df, motion_thresh, age_l, age_u, n, n_perms=n_perms)
#print "R Squared list shape: " + str(rsq_list.shape)
med_rsq = np.median(rsq_list)
rsq_CI = np.percentile(rsq_list, 97.5) - np.percentile(rsq_list, 2.5)
columns = [ 'motion_thresh', 'age_l', 'age_u', 'n', 'med_rsq', 'CI_95' ]
results_df = pd.DataFrame(np.array([[motion_thresh, age_l, age_u, n, med_rsq, rsq_CI ]]),
columns=columns)
results_df.to_csv(f_name)
Explanation: This is not a normal distribution, so when we want to plot the average Rsq value for a certain combination of age, motion cutoff and n, we should take the median of Rsq not the mean
We can wrap everything we just did into one big function
The function iterates through different sample sizes, age bins, and motion cutoffs for a specified number of permutations and:
* Creates 2 split half samples
* Creates average roi-roi correlation matrices for each sample
* Calculates R squared value for fit of the two samples mean roi-roi corrlelation matrices
* Creates csvs of median Rsq and 95% confidence intervals per each motion, age, and sample size iteration
Note: the output csvs will be saved in RESULTS and will be labeled based on their specific input criteria. So for example, if the motion threshold was 50, age lower was 6, age upper was 10 and n=20, the csv output would be rsq_050pct_020subs_06to10.csv
End of explanation
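The naming scheme described above can be verified directly; for a motion threshold of 50, n = 20, and ages 6 to 10, the format string used in abide_motion_wrapper produces:

```python
# Same format string as in abide_motion_wrapper above.
f_name = 'RESULTS/rsq_{:03.0f}pct_{:03.0f}subs_{:02.0f}to{:02.0f}.csv'.format(50, 20, 6, 10)
# f_name == 'RESULTS/rsq_050pct_020subs_06to10.csv'
```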
columns = [ 'motion_thresh', 'med_rsq', 'CI_95', 'n', 'age_l', 'age_u']
results_df = pd.DataFrame(columns = columns)
for f in glob('RESULTS/*csv'):
temp_df = pd.read_csv(f, index_col=0)
results_df = results_df.append(temp_df)
results_df.to_csv('RESULTS/SummaryRsqs.csv', index=None, columns=columns)
Explanation: If you want to just run it with abide_motion_wrapper.py you will need to use loop_abide_motion_qsub_array.sh and SgeAbideMotion.sh
loop_abide_motion_qsub_array.sh loops through the ages, motion thresholds, and sample sizes of interest. To actually run the code (looping through all iterations) run SgeAbideMotion.sh. This is also where you can choose how to submit jobs (and parallelize or not)
Once you have finished running the code, you will want to summarize the data for plotting
This code grabs and formats the data for plotting into a summary file called SummaryRsqs.csv that has columns for: motion threshold, median R squared, 95% CI for R squared, sample size, age lower, and age upper. It will be in the RESULTS folder.
End of explanation |
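The gather step above relies on pandas; the same stack-the-rows pattern can be sketched with only the standard library (the file names and columns below are made up for illustration):

```python
import csv
import glob
import os
import tempfile

# Write two tiny per-run result files into a temporary directory.
tmp = tempfile.mkdtemp()
for i, rsq in enumerate([0.97, 0.99]):
    with open(os.path.join(tmp, 'rsq_{}.csv'.format(i)), 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['motion_thresh', 'med_rsq'])
        writer.writerow([50 + i * 10, rsq])

# Collect every per-run file and stack the rows under one header.
rows = []
for path in sorted(glob.glob(os.path.join(tmp, '*.csv'))):
    with open(path, newline='') as f:
        rows.extend(csv.DictReader(f))
```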
8,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-info">**Hint**
Step1: You will use this data to train and evaluate your learning algorithm.
A "learning algorithm" which finds a polynomial of given degree that minimizes the MSE on a set of $(x,y)$ coordinates is implemented in Python with the np.polyfit command. You can use p = np.polyfit(x, y, k) to return the coefficient vector ${\bf p}$ for the $k$th-order polynomial $g$ that best fits (i.e., has the smallest $MSE$ on) the $x$ and $y$ coordinates in x and y.
For example, to fit a 4th-order polynomial to the data
Step2: You can calculate the values of $g$ at a set of $x$ coordinates using the command np.polyval. For example, if you wanted to compute $g(x)$ at $x=0$, $x=0.5$, and $x=1$, you could do
Step4: Part A (1 point)
<div class="alert alert-success">Run the following cell to call `make_polynomial_fit_and_graph`, which creates an IPython *widget* that will allow you to explore what happens when you try to fit polynomials of different orders to different subsets of the data. You should read the source code to see how this is accomplished.</div>
Step6: <div class="alert alert-success">Examine the figure that is produced. Try changing the widget sliders to change the dataset that we are fitting the polynomial to, and the degree of that polynomial. Which degree polynomial both results in similar fits (i.e. similar coefficients) across all datasets _and_ does a good job at capturing the data? How can this be understood in terms of the bias and variance tradeoff we discussed in class?</div>
If you said that polynomials with order 2 or 3 had the best fit, you got 0.5 points. If you noted that it is best to have neither too few nor too many polynomial terms to ensure that you neither under- nor over-fit your data, you received another 0.5 points.
Part B (1 point)
To get a more quantitative sense of how well each polynomial order fits the data, we'll now compute the actual mean squared error (MSE) of the polynomial fits in relationship to both a training dataset and a testing dataset.
<div class="alert alert-success">Complete the function `mse` to compute the MSE for a polynomial with order $k$ that has been fitted to the training data. The completed `mse` function should return a tuple containing the MSE values for the training data and the test data.</div>
Step8: For example, we can compute the MSE for $k=2$ by using the first ten datapoints as training data, and the other datapoints as testing data, as follows
Step10: Part C (1 point)
Next, complete the function template plot_mse to plot MSE versus $k$ for both traindata and testdata. Be sure to include a proper legend, title, and axis labels.
Step12: After implementing the plot_mse function, you should be able to see the error as a function of the polynomial order for both the training set and the test set
Step13: Part D (1 point)
Now, we will use another IPython widget to visualize how the error changes depending on the dataset that we are fitting to. The widget will call your plot_mse function with different subsets of the data, depending on the index that is set | Python Code:
data = np.load("data/xy_data.npy")
# only show the first ten points, since there are a lot
data[:10]
Explanation: <div class="alert alert-info">**Hint**: Much of the material covered in this problem is introduced in the Geman et al. (1992) reading. If you are having trouble with the conceptual questions, this might be a good place to look.</div>
We wish to evaluate the performance of a learning algorithm that takes a set of $(x,y)$ pairs as input and selects a function $g(x)$, which predicts the value of $y$ for a given $x$ (that is, for a given $(x,y)$ pair, $g(x)$ should approximate $y$).
We will evaluate the 'fit' of the functions that our learning algorithm selects using the mean squared error (MSE) between $g(x)$ and $y$. For a set of $n$ data points ${ (x_1, y_1), \ldots, (x_n,y_n) }$, the MSE associated with a function $g$ is calculated as
$$
MSE = \frac{1}{n} \sum_{i=1}^n \left ( y_i - g(x_i) \right )^2 .
$$
The set of candidate functions that we will allow our algorithm to consider is the set of $k$th-order polynomials. This means that our hypothesis space will contain all functions of the form $g(x) = p_kx^k + p_{k-1} x^{k-1} + \ldots + p_{1} x + p_0$. These functions are entirely characterized by their coefficient vector ${\bf p} = (p_k, p_{k-1}, \ldots, p_0)$, and become much more flexible as $k$ increases (think about the difference between linear ($k=1$), quadratic ($k=2$), and cubic ($k=3$) functions). If we are given a set of $(x,y)$ pairs, it is straightforward to find the $k$th-order polynomial that minimizes the MSE between $g(x)$ and the observed $y$ values.
<div class="alert alert-info">For those who have done some statistics, the calculation for finding ${\bf p}$ is just a case of linear regression where the various powers of $x$ are the predictors</div>
In this problem, we'll be trying to learn the function $g(x)$ that generated some data with the addition of some Gaussian (i.e., normally distributed) noise. The data is a $110\times 2$ array, where the first column corresponds to the $x$ coordinate, and the second column corresponds to the $y$ coordinate (which is the function evaluated at the corresponding $x$ value, i.e. $y = g(x)$.
End of explanation
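Before tackling the assignment, the MSE definition above can be checked end-to-end on data generated by a known polynomial (a standalone sketch, assuming only NumPy): fitting data from g(x) = x^2 with k = 2 should recover the coefficients and give an MSE of essentially zero.

```python
import numpy as np

x = np.linspace(0, 1, 20)
y = x ** 2  # data generated exactly by a 2nd-order polynomial

# Fit with the matching order and apply the MSE formula from above.
p = np.polyfit(x, y, 2)
mse_val = np.mean((y - np.polyval(p, x)) ** 2)
```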
# fit the 4th order polynomial
p = np.polyfit(data[:, 0], data[:, 1], 4)
print("Vector of coefficients: " + str(p))
# display the resulting equation
print_equation(p)
Explanation: You will use this data to train and evaluate your learning algorithm.
A "learning algorithm" which finds a polynomial of given degree that minimizes the MSE on a set of $(x,y)$ coordinates is implemented in Python with the np.polyfit command. You can use p = np.polyfit(x, y, k) to return the coefficient vector ${\bf p}$ for the $k$th-order polynomial $g$ that best fits (i.e., has the smallest $MSE$ on) the $x$ and $y$ coordinates in x and y.
For example, to fit a 4th-order polynomial to the data:
End of explanation
np.polyval(p, np.array([0, 0.5, 1]))
Explanation: You can calculate the values of $g$ at a set of $x$ coordinates using the command np.polyval. For example, if you wanted to compute $g(x)$ at $x=0$, $x=0.5$, and $x=1$, you could do:
End of explanation
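One detail worth remembering about np.polyval is the coefficient order (highest power first); a quick check:

```python
import numpy as np

# [2, 3] represents g(x) = 2x + 3, not 3x + 2.
vals = np.polyval([2, 3], np.array([0.0, 1.0, 2.0]))
# vals == [3.0, 5.0, 7.0]
```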
# first load the data
data = np.load("data/xy_data.npy")
@interact
def make_polynomial_fit_and_graph(polynomial_order=(0, 9), training_set_index=(1, 11)):
Finds the best-fitting polynomials for k = {0, ... , 9},
using one of eleven different training datasets.
# relabel the parameters
k = polynomial_order
i = training_set_index
# pull out the x and y values
x = data[((i - 1) * 10):(i * 10), 0]
y = data[((i - 1) * 10):(i * 10), 1]
# create the figure
fig, axis = plt.subplots()
# create a range of values for x between 0 and 1
plotx = np.arange(0, 1.01, 0.01)
# find the coefficients p
p = np.polyfit(x, y, k)
# find the values of the polynomial parameterized by p and
# evaluated for the points plotx
ploty = np.polyval(p, plotx)
# plot the fitted function
axis.plot(plotx, ploty, 'b-', label="${}$".format(format_equation(p)))
# plot the original data points
axis.plot(x, y, 'ko')
# set the axis limits
axis.set_xlim(0, 1)
axis.set_ylim(0, 0.35)
# put a title on each plot
axis.set_title('Dataset #{} fitted with k = {}'.format(i, k))
# create a legend
axis.legend(loc='upper left', frameon=False)
Explanation: Part A (1 point)
<div class="alert alert-success">Run the following cell to call `make_polynomial_fit_and_graph`, which creates an IPython *widget* that will allow you to explore what happens when you try to fit polynomials of different orders to different subsets of the data. You should read the source code to see how this is accomplished.</div>
End of explanation
def mse(k, train, test):
Fits a polynomial with order `k` to a training dataset, and
then returns the mean squared error (MSE) between the y-values
of the training data and the fitted polynomial, and the MSE
between the y-values of the test data and the fitted polynomial.
Your answer can be done in 6 lines of code, including the return
statement.
Parameters
----------
k : integer
The polynomial order
train : numpy array with shape (n, 2)
The training data, where the first column corresponds to the
x-values, and the second column corresponds to the y-values
test : numpy array with shape (m, 2)
The testing data, where the first column corresponds to the
x-values, and the second column corresponds to the y-values
Returns
-------
a 2-tuple consisting of the training set MSE and testing set MSE
### BEGIN SOLUTION
# compute the polynomial fit
p = np.polyfit(train[:, 0], train[:, 1], k)
# compute predictions and MSE for the training data
train_prediction = np.polyval(p, train[:, 0])
train_mse = np.mean((train_prediction - train[:, 1]) ** 2)
# compute predictions and MSE for the testing data
test_prediction = np.polyval(p, test[:, 0])
test_mse = np.mean((test_prediction - test[:, 1]) ** 2)
return train_mse, test_mse
### END SOLUTION
Explanation: <div class="alert alert-success">Examine the figure that is produced. Try changing the widget sliders to change the dataset that we are fitting the polynomial to, and the degree of that polynomial. Which degree polynomial both results in similar fits (i.e. similar coefficients) across all datasets _and_ does a good job at capturing the data? How can this be understood in terms of the bias and variance tradeoff we discussed in class?</div>
If you said that polynomials with order 2 or 3 had the best fit, you got 0.5 points. If you noted that it is best to have neither too few nor too many polynomial terms to ensure that you neither under- nor over-fit your data, you received another 0.5 points.
Part B (1 point)
To get a more quantitative sense of how well each polynomial order fits the data, we'll now compute the actual mean squared error (MSE) of the polynomial fits in relationship to both a training dataset and a testing dataset.
<div class="alert alert-success">Complete the function `mse` to compute the MSE for a polynomial with order $k$ that has been fitted to the training data. The completed `mse` function should return a tuple containing the MSE values for the training data and the test data.</div>
End of explanation
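The tradeoff asked about in Part A can also be seen numerically on toy data. This sketch (assuming only NumPy; the generating function and seed are arbitrary choices, not from the assignment) fits the same noisy quadratic with k = 2 and k = 9:

```python
import numpy as np

rng = np.random.RandomState(1)
x_tr = rng.uniform(0, 1, 10)
x_te = rng.uniform(0, 1, 50)
f = lambda x: 0.3 * x ** 2 + 0.1 * x
y_tr = f(x_tr) + rng.normal(0, 0.02, 10)
y_te = f(x_te) + rng.normal(0, 0.02, 50)

def fit_mse(k):
    # Fit on the training points, score on both sets.
    p = np.polyfit(x_tr, y_tr, k)
    train = np.mean((y_tr - np.polyval(p, x_tr)) ** 2)
    test = np.mean((y_te - np.polyval(p, x_te)) ** 2)
    return train, test

train2, test2 = fit_mse(2)
train9, test9 = fit_mse(9)
# k = 9 interpolates the 10 training points (tiny training error)
# but generalizes far worse than k = 2: high variance, low bias.
```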
# load the data
data = np.load("data/xy_data.npy")
# compute the MSE
train_mse, test_mse = mse(2, data[:10], data[10:])
print("The training error is: " + str(train_mse))
print("The testing error is: " + str(test_mse))
# add your own test cases here!
Test that the `mse` function is correct.
from numpy.testing import assert_allclose
data = np.load("data/xy_data.npy")
# use first ten, and the remaining
assert_allclose(mse(2, data[:10], data[10:]), (0.000229744955167, 0.000374298371336))
assert_allclose(mse(3, data[:10], data[10:]), (0.000169612346303, 0.000463251756094))
assert_allclose(mse(9, data[:10], data[10:]), (1.46448764925e-21, 0.337001581723), atol=1e-20)
# use half-and-half
assert_allclose(mse(2, data[:55], data[55:]), (0.00034502281024316553, 0.00037620706341530435))
assert_allclose(mse(3, data[:55], data[55:]), (0.0003378190977339938, 0.00039980736728858482))
assert_allclose(mse(9, data[:55], data[55:]), (0.00026755111091101571, 0.00061531514687572487))
# use last twenty, and the remaining
assert_allclose(mse(2, data[-20:], data[:-20]), (0.00030881029910697136, 0.00040876086505745344))
assert_allclose(mse(3, data[-20:], data[:-20]), (0.00021713262385879197, 0.00055653317636801015))
assert_allclose(mse(9, data[-20:], data[:-20]), (0.00012210662449207329, 0.00071987940235435685))
print("Success!")
Explanation: For example, we can compute the MSE for $k=2$ by using the first ten datapoints as training data, and the other datapoints as testing data, as follows:
End of explanation
def plot_mse(axis, max_order, train, test):
Plot the mean squared error (MSE) for the given training and testing
data as a function of polynomial order.
* Your plot should show the MSE for 0 <= k < max_order
* There should be two lines: one black, for the training set error, and
one red, for the testing set error.
* Make sure to include labels for the x- and y- axes.
* Label the training error and testing error lines as "Training set error"
and "Testing set error", respectively. These labels will be used to
create a legend later on (and so you should NOT actually create the
legend yourself -- just label the lines).
Your answer can be done in 10 lines of code, including the return statement.
Parameters
----------
axis : matplotlib axis object
The axis on which to plot the MSE
max_order : integer
The maximum polynomial order to compute a fit for
train : numpy array with shape (n, 2)
The training data, where the first column corresponds to the
x-values, and the second column corresponds to the y-values
test : numpy array with shape (m, 2)
The testing data, where the first column corresponds to the
x-values, and the second column corresponds to the y-values
Returns
-------
numpy array with shape (max_order, 2)
The MSE for the training data (corresponding to the first column) and
for the testing data (corresponding to the second column). Each row
corresponds to a different polynomial order.
### BEGIN SOLUTION
k = np.arange(0, max_order)
# compute error for all values of k
error = np.empty((max_order, 2))
for i in range(max_order):
error[i] = mse(k[i], train, test)
axis.plot(k, error[:, 0], 'k-', label="Training set error")
axis.plot(k, error[:, 1], 'r-', label="Testing set error")
axis.set_xlabel("Polynomial model order (k)")
axis.set_ylabel("Mean squared error")
return error
### END SOLUTION
Explanation: Part C (1 point)
Next, complete the function template plot_mse to plot MSE versus $k$ for both traindata and testdata. Be sure to include a proper legend, title, and axis labels.
End of explanation
# load the data
data = np.load("data/xy_data.npy")
# plot it
fig, axis = plt.subplots()
plot_mse(axis, 15, data[:20], data[20:])
axis.legend(loc='upper left')
# add your own test cases here!
Is the plot_mse function correctly implemented?
from nose.tools import assert_equal, assert_not_equal
from numpy.testing import assert_allclose
from plotchecker import get_data
data = np.load("data/xy_data.npy")
# check that it uses the mse function
old_mse = mse
del mse
try:
fig, axis = plt.subplots()
plot_mse(axis, 9, data[:10], data[10:])
except NameError:
pass
else:
raise AssertionError("plot_mse should call mse, but it does not")
finally:
plt.close('all')
mse = old_mse
del old_mse
fig, axis = plt.subplots()
error = plot_mse(axis, 9, data[:10], data[10:])
axis.legend(loc='upper left')
# check the error
assert_equal(error.shape, (9, 2))
assert_allclose(error[0], mse(0, data[:10], data[10:]))
assert_allclose(error[4], mse(4, data[:10], data[10:]))
assert_allclose(error[8], mse(8, data[:10], data[10:]))
# check the plotted data
plotted_data = get_data(axis)
assert_equal(plotted_data.shape, (18, 2))
assert_allclose(plotted_data[:9, 0], np.arange(9))
assert_allclose(plotted_data[9:, 0], np.arange(9))
assert_allclose(plotted_data[:9, 1], error[:, 0])
assert_allclose(plotted_data[9:, 1], error[:, 1])
# check the line colors
assert axis.lines[0].get_color() in ['k', 'black', (0, 0, 0), '#000000']
assert axis.lines[1].get_color() in ['r', 'red', (1, 0, 0), '#FF0000']
# check the legend
legend_labels = [x.get_text() for x in axis.get_legend().get_texts()]
assert_equal(legend_labels, ["Training set error", "Testing set error"])
# check the axis labels
assert_not_equal(axis.get_xlabel(), "")
assert_not_equal(axis.get_ylabel(), "")
plt.close('all')
print("Success!")
Explanation: After implementing the plot_mse function, you should be able to see the error as a function of the polynomial order for both the training set and the test set:
End of explanation
# load the data
data = np.load("data/xy_data.npy")
@interact
def visualize_mse(training_set_index=(1, 11)):
# relabel the index for convenience
i = training_set_index
# pull out the training and testing data
traindata = data[((i - 1) * 10):(i * 10)]
testdata = np.concatenate([data[:((i - 1) * 10)], data[(i * 10):]])
# plot the MSE
fig, axis = plt.subplots()
plot_mse(axis, 10, traindata, testdata)
axis.set_ylim(0, 0.01)
axis.set_title("MSE for dataset #{}".format(i))
axis.legend(loc='upper left')
Explanation: Part D (1 point)
Now, we will use another IPython widget to visualize how the error changes depending on the dataset that we are fitting to. The widget will call your plot_mse function with different subsets of the data, depending on the index that is set:
End of explanation |
8,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
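A typical way to build such a timestamp suffix is sketched below; the exact format string and resource name are assumptions for illustration, not taken from this notebook:

```python
from datetime import datetime

# Hypothetical timestamp suffix: 14 digits, e.g. 20240131235959.
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
resource_name = "example-bucket-{}".format(TIMESTAMP)
```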
Step5: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. We suggest that you choose a region where Vertex AI services are
available.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set project template
Step10: Prepare input data
In the following code, you will
1) Get the dataset from the UCI archive.
2) Untar the dataset.
3) Copy the dataset to a Cloud Storage location.
Step11: Import libraries
Step12: Define constants
For preprocessing, we use the Swivel embedding model, which was trained on the 130GB English Google News corpus and produces 20-dimensional embeddings.
Step13: Initialize client
Step22: Pipeline formalization
Data ingestion component
Create Dataflow Python module
The following module contains a Dataflow pipeline that:
1) Reads the files from Cloud Storage.
2) Extracts the articles and generates the title, topics, and content from the files.
3) Loads the structured data into BigQuery.
Step23: Create requirements
Next, create the requirements.txt file with the Python modules that are needed for the Apache Beam pipeline.
Step24: Create Setup file
And add the setup file with Python modules that are needed for executing the Dataflow workers.
Step25: Copy the setup, the python module and requirements file to Cloud Storage
Finally, copy the Python module, requirements and setup file to your Cloud Storage bucket.
Step26: BQML components
To build the next steps of our pipelines, we define a set of queries to
Step28: Create BQ Dataset query
With this query, we create the BigQuery dataset schema we are going to use to train our model.
Step30: Create BQ Preprocess query
The following query uses the TF Hub Swivel model to generate embeddings of our text data and splits the dataset for training and serving purposes.
Step32: Create BQ Model query
Below is a simple query to build a BigQuery ML logistic regression classifier for assigning topics to articles.
Step34: Create BQ Prediction query
With the following query, we run a prediction job using the table created by the preprocessing query.
Step35: Build Pipeline
Step36: Create a custom component to pass DataflowPythonJobOp arguments
Step37: Create the pipeline
Step38: Compile and Run the pipeline
Step39: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial. | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
if os.getenv("IS_TESTING"):
! touch /builder/home/.local/lib/python3.9/site-packages/google_api_core-2.7.1.dist-info/METADATA
! pip3 install {USER_FLAG} --upgrade "apache-beam[gcp]==2.36.0"
! pip3 install {USER_FLAG} --upgrade "bs4==0.0.1"
! pip3 install {USER_FLAG} --upgrade "nltk==3.7"
! pip3 install {USER_FLAG} --upgrade "tensorflow<2.8.0"
! pip3 install {USER_FLAG} --upgrade "tensorflow-hub==0.12.0"
! pip3 install {USER_FLAG} --upgrade "kfp==1.8.2"
! pip3 install {USER_FLAG} --upgrade "google-cloud-aiplatform==1.10.0"
! pip3 install {USER_FLAG} --upgrade "google_cloud_pipeline_components==1.0.1"
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_bqml_text.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/notebooks/official/pipelines/google_cloud_pipeline_components_bqml_text.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/notebooks/official/pipelines/google_cloud_pipeline_components_bqml_text.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
This notebook shows the DataflowPythonJobOp and the main BQML components in a text categorization Vertex AI Pipeline.
The pipeline will
Read raw text (HTML) documents stored in Google Cloud Storage
Extract title, content and topic of (HTML) documents using Dataflow and ingest into BigQuery
Apply the Swivel model to generate embeddings of our document’s content
Train a Logistic regression model to classify if an article is about corporate acquisitions (acq category).
Evaluate the model
Apply the model to a dataset in order to generate predictions
Dataset
The dataset is Reuters-21578 Text Categorization Collection Data Set.
The dataset is a collection of publicly available news articles that appeared on the Reuters newswire in 1987. They were assembled and indexed with categories by personnel from Reuters Ltd. and Carnegie Group, Inc. in 1987.
Objective
In this notebook, you will learn how to build a simple BigQuery ML pipeline on Vertex AI Pipelines in order to calculate text embeddings of articles' content and classify the articles
according to the corporate acquisitions category.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
BigQuery
Dataflow
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as Vertex AI SDK. Use the latest major GA version of each package.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
!gcloud config set project $PROJECT_ID
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. We suggest that you choose a region where Vertex AI services are
available.
End of explanation
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
DATA_PATH = "data"
KFP_COMPONENTS_PATH = "components"
SRC = "src"
BUILD = "build"
!mkdir -m 777 -p {DATA_PATH} {KFP_COMPONENTS_PATH} {SRC} {BUILD}
Explanation: Set up the project directory structure
End of explanation
!wget --no-parent https://archive.ics.uci.edu/ml/machine-learning-databases/reuters21578-mld/reuters21578.tar.gz --directory-prefix={DATA_PATH}/raw
!mkdir -m 777 -p {DATA_PATH}/raw/temp {DATA_PATH}/raw
!tar -zxvf {DATA_PATH}/raw/reuters21578.tar.gz -C {DATA_PATH}/raw/temp/
!mv {DATA_PATH}/raw/temp/*.sgm {DATA_PATH}/raw && rm -rf {DATA_PATH}/raw/temp && rm -f {DATA_PATH}/raw/reuters21578.tar.gz
!gsutil -m cp -R {DATA_PATH}/raw $BUCKET_URI/{DATA_PATH}/raw
Explanation: Prepare input data
In the following code, you will:
1) Download the dataset from the UCI archive.
2) Untar the dataset.
3) Copy the dataset to a Cloud Storage location.
End of explanation
import random
from pathlib import Path as path
from urllib.parse import urlparse
import tensorflow_hub as hub
os.environ["TFHUB_MODEL_LOAD_FORMAT"] = "UNCOMPRESSED"
import google.cloud.aiplatform as vertex_ai
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import component
Explanation: Import libraries
End of explanation
JOB_NAME = f"reuters-ingest-{TIMESTAMP}"
SETUP_FILE_URI = urlparse(BUCKET_URI)._replace(path="setup.py").geturl()
RUNNER = "DataflowRunner"
STAGING_LOCATION_URI = urlparse(BUCKET_URI)._replace(path="staging").geturl()
TMP_LOCATION_URI = urlparse(BUCKET_URI)._replace(path="temp").geturl()
INPUTS_URI = urlparse(BUCKET_URI)._replace(path=f"{DATA_PATH}/raw/*.sgm").geturl()
BQ_DATASET = "mlops_bqml_text_analyisis"
BQ_TABLE = "reuters_ingested"
MODEL_NAME = "swivel_text_embedding_model"
EMBEDDINGS_TABLE = f"reuters_text_embeddings_{TIMESTAMP}"
MODEL_PATH = (
f'{hub.resolve("https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1")}/*'
)
PREPROCESSED_TABLE = f"reuters_text_preprocessed_{TIMESTAMP}"
CLASSIFICATION_MODEL_NAME = "logistic_reg"
PREDICT_TABLE = f"reuters_text_predict_{TIMESTAMP}"
Explanation: Define constants
For preprocessing we use the Swivel embedding model, which was trained on the 130GB English Google News corpus and produces 20-dimensional embeddings.
End of explanation
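The constants above assemble gs:// URIs via urlparse()._replace(path=...).geturl(); a quick standard-library illustration of the trick (the bucket name here is hypothetical):

```python
from urllib.parse import urlparse

bucket = "gs://example-bucket"  # hypothetical bucket name
# _replace swaps in the path; geturl() inserts the missing "/" between
# the netloc (bucket) and the path when reassembling the URI.
staging = urlparse(bucket)._replace(path="staging").geturl()
# staging == "gs://example-bucket/staging"
```

The same pattern builds the staging, temp, input, and setup-file URIs without string concatenation.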
vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
Explanation: Initialize client
End of explanation
!touch {SRC}/__init__.py
%%writefile src/ingest_pipeline.py
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# General imports
from __future__ import absolute_import
import argparse
import logging
import os
import string
# Preprocessing imports
import tensorflow as tf
import bs4
import nltk
import apache_beam as beam
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
# Helpers ----------------------------------------------------------------------
def get_args():
"""Get command line arguments.
Returns:
    args: The parsed arguments.
"""
parser = argparse.ArgumentParser()
parser.add_argument('--inputs', dest='inputs', default='data/raw/reuters/*.sgm',
help='A directory location of input data')
parser.add_argument('--bq-dataset', dest='bq_dataset', required=False,
default='reuters_dataset', help='Dataset name used in BigQuery.')
parser.add_argument('--bq-table', dest='bq_table', required=False,
default='reuters_ingested_table', help='Table name used in BigQuery.')
args, pipeline_args = parser.parse_known_args()
return args, pipeline_args
def get_paths(data_pattern):
"""A function to get all the paths of the files matching the data pattern.
Args:
    data_pattern: A glob pattern for the input data files.
Returns:
    A list of file paths.
"""
data_paths = tf.io.gfile.glob(data_pattern)
return data_paths
def get_title(article):
"""A function to get the title of an article.
Args:
    article: A BeautifulSoup object of an article.
Returns:
    A string of the title of the article.
"""
title = article.find('text').title
if title is not None:
title = ''.join(filter(lambda x: x in set(string.printable), title.text))
title = title.encode('ascii', 'ignore')
return title
def get_content(article):
"""A function to get the content of an article.
Args:
    article: A BeautifulSoup object of an article.
Returns:
    A string of the content of the article.
"""
content = article.find('text').body
if content is not None:
content = ''.join(filter(lambda x: x in set(string.printable), content.text))
content = ' '.join(content.split())
try:
content = '\n'.join(nltk.sent_tokenize(content))
except LookupError:
nltk.download('punkt')
content = '\n'.join(nltk.sent_tokenize(content))
content = content.encode('ascii', 'ignore')
return content
def get_topics(article):
"""A function to get the topics of an article.
Args:
    article: A BeautifulSoup object of an article.
Returns:
    A list of strings of the topics of the article.
"""
topics = []
for topic in article.topics.children:
topic = ''.join(filter(lambda x: x in set(string.printable), topic.text))
topics.append(topic.encode('ascii', 'ignore'))
return topics
def get_articles(data_paths):
"""A function to parse the articles contained in one SGML file.
Args:
    data_paths: The path of a single SGML file.
Returns:
    A list of article dicts (title, content, topics).
"""
data = tf.io.gfile.GFile(data_paths, 'rb').read()
soup = bs4.BeautifulSoup(data, "html.parser")
articles = []
for raw_article in soup.find_all('reuters'):
article = {
'title': get_title(raw_article),
'content': get_content(raw_article),
'topics': get_topics(raw_article)
}
if None not in article.values():
if [] not in article.values():
articles.append(article)
return articles
def get_bigquery_schema():
"""A function to get the BigQuery schema.
Returns:
    A BigQuery TableSchema for the output table.
"""
table_schema = bigquery.TableSchema()
columns = (('topics', 'string', 'repeated'),
('title', 'string', 'nullable'),
('content', 'string', 'nullable'))
for column in columns:
column_schema = bigquery.TableFieldSchema()
column_schema.name = column[0]
column_schema.type = column[1]
column_schema.mode = column[2]
table_schema.fields.append(column_schema)
return table_schema
# Pipeline runner
def run(args, pipeline_args=None):
"""A function to run the pipeline.
Args:
    args: The parsed arguments.
    pipeline_args: Additional Beam pipeline options.
Returns:
    None
"""
options = PipelineOptions(pipeline_args)
options.view_as(SetupOptions).save_main_session = True
pipeline = beam.Pipeline(options=options)
articles = (
pipeline
| 'Get Paths' >> beam.Create(get_paths(args.inputs))
| 'Get Articles' >> beam.Map(get_articles)
| 'Get Article' >> beam.FlatMap(lambda x: x)
)
if options.get_all_options()['runner'] == 'DirectRunner':
articles | 'Dry run' >> beam.io.WriteToText('data/processed/reuters', file_name_suffix=".jsonl")
else:
(articles
| 'Write to BigQuery' >> beam.io.WriteToBigQuery(
project=options.get_all_options()['project'],
dataset=args.bq_dataset,
table=args.bq_table,
schema=get_bigquery_schema(),
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)
)
job = pipeline.run()
if options.get_all_options()['runner'] == 'DirectRunner':
job.wait_until_finish()
if __name__ == '__main__':
args, pipeline_args = get_args()
logging.getLogger().setLevel(logging.INFO)
run(args, pipeline_args)
Explanation: Pipeline formalization
Data ingestion component
Create Dataflow Python module
The following module contains a Dataflow pipeline that:
1) Reads the files from Cloud Storage.
2) Extracts each article and generates its title, topics, and content.
3) Loads the structured data into BigQuery.
End of explanation
%%writefile requirements.txt
apache-beam[gcp]==2.36.0
bs4==0.0.1
nltk==3.7
tensorflow<2.8.0
Explanation: Create requirements
Next, create the requirements.txt file with Python modules that are needed for Apache Beam pipeline.
End of explanation
%%writefile setup.py
# !/usr/bin/python
# Copyright 2022 Google LLC
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools
REQUIRED_PACKAGES = [
'bs4==0.0.1',
'nltk==3.7',
'tensorflow<2.8.0']
setuptools.setup(
name='ingest',
version='0.0.1',
author='author',
author_email='author@google.com',
install_requires=REQUIRED_PACKAGES,
packages=setuptools.find_packages())
Explanation: Create Setup file
And add the setup file with Python modules that are needed for executing the Dataflow workers.
End of explanation
# !gsutil cp -R {SRC}/preprocess_pipeline.py {BUCKET_URI}/preprocess_pipeline.py
!gsutil cp -R {SRC} {BUCKET_URI}/{SRC}
!gsutil cp requirements.txt {BUCKET_URI}/requirements.txt
!gsutil cp setup.py {BUCKET_URI}/setup.py
Explanation: Copy the setup, the python module and requirements file to Cloud Storage
Finally, copy the Python module, requirements and setup file to your Cloud Storage bucket.
End of explanation
!mkdir -m 777 -p {KFP_COMPONENTS_PATH}/bq_dataset_component
!mkdir -m 777 -p {KFP_COMPONENTS_PATH}/bq_preprocess_component
!mkdir -m 777 -p {KFP_COMPONENTS_PATH}/bq_model_component
!mkdir -m 777 -p {KFP_COMPONENTS_PATH}/bq_prediction_component
Explanation: BQML components
To build the next steps of our pipeline, we define a set of queries to:
1) Create the BigQuery dataset schema.
2) Preprocess the text data and generate embeddings using the Swivel model.
3) Train the BigQuery ML logistic regression model.
4) Evaluate the model.
5) Run a batch prediction.
End of explanation
create_bq_dataset_query = f"""
CREATE SCHEMA IF NOT EXISTS {BQ_DATASET}
"""
with open(
f"{KFP_COMPONENTS_PATH}/bq_dataset_component/create_bq_dataset.sql", "w"
) as q:
q.write(create_bq_dataset_query)
q.close()
Explanation: Create BQ Dataset query
With this query, we create the Bigquery dataset schema we are going to use to train our model.
End of explanation
create_bq_preprocess_query = f"""
-- create the embedding model
CREATE OR REPLACE MODEL
`{PROJECT_ID}.{BQ_DATASET}.{MODEL_NAME}` OPTIONS(model_type='tensorflow',
model_path='{MODEL_PATH}');
-- create the preprocessed table
CREATE OR REPLACE TABLE `{PROJECT_ID}.{BQ_DATASET}.{PREPROCESSED_TABLE}`
AS (
WITH
-- Apply the model for embedding generation
get_embeddings AS (
SELECT
title,
sentences,
output_0 as content_embeddings,
topics
FROM ML.PREDICT(MODEL `{PROJECT_ID}.{BQ_DATASET}.{MODEL_NAME}`,(
SELECT topics, title, content AS sentences
FROM `{PROJECT_ID}.{BQ_DATASET}.{BQ_TABLE}`
))),
-- Get label
get_label AS (
SELECT
*,
STRUCT( CASE WHEN 'acq' in UNNEST(topics) THEN 1 ELSE 0 END as acq ) AS label,
FROM get_embeddings
),
-- Train-serve splitting
get_split AS (
SELECT
*,
CASE WHEN ABS(MOD(FARM_FINGERPRINT(title), 10)) < 8 THEN 'TRAIN' ELSE 'PREDICT' END AS split
FROM get_label
)
-- create training table
SELECT
title,
sentences,
STRUCT( content_embeddings[OFFSET(0)] AS content_embed_0,
content_embeddings[OFFSET(1)] AS content_embed_1,
content_embeddings[OFFSET(2)] AS content_embed_2,
content_embeddings[OFFSET(3)] AS content_embed_3,
content_embeddings[OFFSET(4)] AS content_embed_4,
content_embeddings[OFFSET(5)] AS content_embed_5,
content_embeddings[OFFSET(6)] AS content_embed_6,
content_embeddings[OFFSET(7)] AS content_embed_7,
content_embeddings[OFFSET(8)] AS content_embed_8,
content_embeddings[OFFSET(9)] AS content_embed_9,
content_embeddings[OFFSET(10)] AS content_embed_10,
content_embeddings[OFFSET(11)] AS content_embed_11,
content_embeddings[OFFSET(12)] AS content_embed_12,
content_embeddings[OFFSET(13)] AS content_embed_13,
content_embeddings[OFFSET(14)] AS content_embed_14,
content_embeddings[OFFSET(15)] AS content_embed_15,
content_embeddings[OFFSET(16)] AS content_embed_16,
content_embeddings[OFFSET(17)] AS content_embed_17,
content_embeddings[OFFSET(18)] AS content_embed_18,
content_embeddings[OFFSET(19)] AS content_embed_19) AS feature,
label.acq as label,
split
FROM
get_split)
"""
with open(
f"{KFP_COMPONENTS_PATH}/bq_preprocess_component/bq_preprocess_query.sql", "w"
) as q:
q.write(create_bq_preprocess_query)
q.close()
Explanation: Create BQ Preprocess query
The following query uses the TF Hub Swivel model to generate embeddings of the text data and splits the dataset for training and serving purposes.
End of explanation
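The 20 STRUCT fields in the query above are mechanical; if you ever need to regenerate that fragment for a different embedding size, a small helper (a sketch, not part of the pipeline) could produce it:

```python
def embedding_struct_sql(dims: int = 20) -> str:
    # Build the "content_embeddings[OFFSET(i)] AS content_embed_i" field list
    # that flattens the embedding array into named STRUCT fields.
    fields = ",\n".join(
        f"content_embeddings[OFFSET({i})] AS content_embed_{i}" for i in range(dims)
    )
    return f"STRUCT( {fields} ) AS feature"

sql = embedding_struct_sql()
```

Pasting the generated fragment into the CREATE TABLE statement keeps the query in sync with the embedding dimensionality.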
create_bq_model_query = f"""
CREATE OR REPLACE MODEL `{PROJECT_ID}.{BQ_DATASET}.{CLASSIFICATION_MODEL_NAME}`
OPTIONS (
model_type='logistic_reg',
input_label_cols=['label']) AS
SELECT
label,
feature.*
FROM
`{PROJECT_ID}.{BQ_DATASET}.{PREPROCESSED_TABLE}`
WHERE split = 'TRAIN';
"""
with open(f"{KFP_COMPONENTS_PATH}/bq_model_component/create_bq_model.sql", "w") as q:
q.write(create_bq_model_query)
q.close()
Explanation: Create BQ Model query
Below is a simple query that builds a BigQuery ML logistic regression classifier for article topic classification.
End of explanation
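The WHERE split = 'TRAIN' filter relies on the hash-based split computed in the preprocessing query (ABS(MOD(FARM_FINGERPRINT(title), 10)) < 8). A minimal Python analogue of deterministic hash splitting — using MD5 instead of FarmHash, so the bucket assignments differ from BigQuery's:

```python
import hashlib

def split_for(title: str, train_buckets: int = 8) -> str:
    # Deterministic: the same title always lands in the same bucket,
    # so the train/predict assignment is stable across runs.
    bucket = int(hashlib.md5(title.encode("utf-8")).hexdigest(), 16) % 10
    return "TRAIN" if bucket < train_buckets else "PREDICT"
```

Hashing on a stable key (here the title) is what prevents rows from drifting between splits when the pipeline is re-run.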
create_bq_prediction_query = f"SELECT title, sentences, feature.* FROM `{PROJECT_ID}.{BQ_DATASET}.{PREPROCESSED_TABLE}` WHERE split = 'PREDICT'"
with open(
f"{KFP_COMPONENTS_PATH}/bq_prediction_component/create_bq_prediction_query.sql", "w"
) as q:
q.write(create_bq_prediction_query)
q.close()
Explanation: Create BQ Prediction query
With the following query, we run a prediction job using the table produced by the preprocessing query.
End of explanation
ID = random.randint(1, 10000)
JOB_NAME = f"reuters-preprocess-{TIMESTAMP}-{ID}"
JOB_CONFIG = {
"destinationTable": {
"projectId": PROJECT_ID,
"datasetId": BQ_DATASET,
"tableId": PREDICT_TABLE,
}
}
Explanation: Build Pipeline
End of explanation
@component(base_image="python:3.8-slim")
def build_dataflow_args(
# destination_table: Input[Artifact],
bq_dataset: str,
bq_table: str,
job_name: str,
setup_file_uri: str,
runner: str,
inputs_uri: str,
) -> list:
return [
"--job_name",
job_name,
"--setup_file",
setup_file_uri,
"--runner",
runner,
"--inputs",
inputs_uri,
"--bq-dataset",
bq_dataset,
"--bq-table",
bq_table,
]
Explanation: Create a custom component to pass DataflowPythonJobOp arguments
End of explanation
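For a local sanity check of the flag list the component emits, a plain-Python replica of its body can be exercised directly (in the pipeline the real component runs as a containerized KFP step; the sample values below are made up):

```python
def dataflow_args(job_name, setup_file_uri, runner, inputs_uri, bq_dataset, bq_table):
    # Same ordering as the KFP component: flag name followed by its value.
    return [
        "--job_name", job_name,
        "--setup_file", setup_file_uri,
        "--runner", runner,
        "--inputs", inputs_uri,
        "--bq-dataset", bq_dataset,
        "--bq-table", bq_table,
    ]

args = dataflow_args("job", "gs://b/setup.py", "DataflowRunner", "gs://b/*.sgm", "ds", "tbl")
```

The returned list is what DataflowPythonJobOp passes straight through to the Beam module's argument parser.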
@dsl.pipeline(
name="mlops-bqml-text-generate-embeddings",
description="A batch pipeline to generate embeddings",
)
def pipeline(
create_bq_dataset_query: str,
job_name: str,
inputs_uri: str,
bq_dataset: str,
bq_table: str,
requirements_file_path: str,
python_file_path: str,
setup_file_uri: str,
temp_location: str,
runner: str,
create_bq_preprocess_query: str,
create_bq_model_query: str,
create_bq_prediction_query: str,
job_config: dict,
project: str = PROJECT_ID,
region: str = REGION,
):
from google_cloud_pipeline_components.v1.bigquery import (
BigqueryCreateModelJobOp, BigqueryEvaluateModelJobOp,
BigqueryPredictModelJobOp, BigqueryQueryJobOp)
from google_cloud_pipeline_components.v1.dataflow import \
DataflowPythonJobOp
from google_cloud_pipeline_components.v1.wait_gcp_resources import \
WaitGcpResourcesOp
# create the dataset
bq_dataset_op = BigqueryQueryJobOp(
query=create_bq_dataset_query,
project=project,
location="US",
)
# instantiate dataflow args
build_dataflow_args_op = build_dataflow_args(
job_name=job_name,
inputs_uri=inputs_uri,
# destination_table = bq_dataset_op.outputs['destination_table'],
bq_dataset=bq_dataset,
bq_table=bq_table,
setup_file_uri=setup_file_uri,
runner=runner,
).after(bq_dataset_op)
# run dataflow job
dataflow_python_op = DataflowPythonJobOp(
requirements_file_path=requirements_file_path,
python_module_path=python_file_path,
args=build_dataflow_args_op.output,
project=project,
location=region,
temp_location=temp_location,
).after(build_dataflow_args_op)
dataflow_wait_op = WaitGcpResourcesOp(
gcp_resources=dataflow_python_op.outputs["gcp_resources"]
).after(dataflow_python_op)
# run preprocessing job
bq_preprocess_op = BigqueryQueryJobOp(
query=create_bq_preprocess_query,
project=project,
location="US",
).after(dataflow_wait_op)
# create the logistic regression
bq_model_op = BigqueryCreateModelJobOp(
query=create_bq_model_query,
project=project,
location="US",
).after(bq_preprocess_op)
# evaluate the logistic regression
bq_evaluate_op = BigqueryEvaluateModelJobOp(
project=project, location="US", model=bq_model_op.outputs["model"]
).after(bq_model_op)
# simulate prediction
BigqueryPredictModelJobOp(
model=bq_model_op.outputs["model"],
query_statement=create_bq_prediction_query,
job_configuration_query=job_config,
project=project,
location="US",
).after(bq_evaluate_op)
Explanation: Create the pipeline
End of explanation
PIPELINE_ROOT = urlparse(BUCKET_URI)._replace(path="pipeline_root").geturl()
PIPELINE_PACKAGE = str(path(BUILD) / "mlops_bqml_text_analyisis_pipeline.json")
REQUIREMENTS_URI = urlparse(BUCKET_URI)._replace(path="requirements.txt").geturl()
PYTHON_FILE_URI = urlparse(BUCKET_URI)._replace(path="src/ingest_pipeline.py").geturl()
MODEL_URI = urlparse(BUCKET_URI)._replace(path="swivel_text_embedding_model").geturl()
compiler.Compiler().compile(pipeline_func=pipeline, package_path=PIPELINE_PACKAGE)
pipeline = vertex_ai.PipelineJob(
display_name=f"data_preprocess_{TIMESTAMP}",
template_path=PIPELINE_PACKAGE,
pipeline_root=PIPELINE_ROOT,
parameter_values={
"create_bq_dataset_query": create_bq_dataset_query,
"bq_dataset": BQ_DATASET,
"job_name": JOB_NAME,
"inputs_uri": INPUTS_URI,
"bq_table": BQ_TABLE,
"requirements_file_path": REQUIREMENTS_URI,
"python_file_path": PYTHON_FILE_URI,
"setup_file_uri": SETUP_FILE_URI,
"temp_location": PIPELINE_ROOT,
"runner": RUNNER,
"create_bq_preprocess_query": create_bq_preprocess_query,
"create_bq_model_query": create_bq_model_query,
"create_bq_prediction_query": create_bq_prediction_query,
"job_config": JOB_CONFIG,
},
enable_caching=False,
)
pipeline.run()
Explanation: Compile and Run the pipeline
End of explanation
# delete bucket
! gsutil -m rm -r $BUCKET_URI
# delete dataset
! bq rm -r -f -d $PROJECT_ID:$BQ_DATASET
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
End of explanation |
8,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Data exploration
Data exploration based on:
Step1: Create a function for the analysis
Step2: Feature Observation
We examine the 6 features in the dataset
Step3: 2. Developing a model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Shuffle and split data
For the code cell below, you will need to implement the following
Step4: Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement
Algorithm OneVsRestClassifier
Step5: Algorithm OneVsOne
Step6: Algorithm MultinomialNB
Step7: Algorithm AdaBoostClassifier
Step8: Algorithm LinearSVC
Step9: Algorithm SVC with Kernel Linear
Step10: Algorithm DecisionTreeClassifier
Step11: Algorithm OutputCodeClassifier
Step12: Algorithm GaussianProcessClassifier
Step13: Algorithm MLPClassifier
Step14: Algorithm KNeighborsClassifier
Step15: Algorithm QuadraticDiscriminantAnalysis
Step16: Algorithm GaussianNB
Step17: Algorithm RBF SVM
Step18: Select the best algorithm
Step19: Predicting who is | Python Code:
from data import get_full_data, get_who_is
from matplotlib import pyplot as plt
from sklearn import linear_model
from predicting_who_is import accuracy_score, performance_metric
import pandas as pd
import numpy as np
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
X, Y, df = get_full_data()
df = df.sample(frac=1)
#X = df[df['user'] == 2][['rate_blink_left', 'rate_blink_right', 'rate_smile_or_not', 'blink_left', 'blink_right', 'smile_or_not']]
#Y = df[df['user'] == 2]['user']
#X = df[['blink_left', 'blink_right', 'smile_or_not']]
# Y = df['user']
Xdummies_df = pd.get_dummies(X)
Ydummies_df = Y
X = Xdummies_df.values
Y = Ydummies_df.values
# Print the first few entries of the RMS Titanic data
display(df.head(20))
Explanation: 1. Data exploration
Data exploration based on:
1500 observations
7 features
3 distinct users
Get started
End of explanation
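pd.get_dummies above one-hot encodes any non-numeric feature columns before the arrays are handed to the classifiers; a tiny illustration on a made-up column:

```python
import pandas as pd

# Hypothetical categorical column, standing in for the dataset's features.
demo = pd.DataFrame({"smile_or_not": ["yes", "no", "yes"]})
encoded = pd.get_dummies(demo)
# One indicator column per category: smile_or_not_no, smile_or_not_yes
```

Numeric columns pass through get_dummies unchanged, which is why it is safe to apply to the whole feature frame.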
# To investigate:
# http://www.dummies.com/programming/big-data/data-science/how-to-visualize-the-classifier-in-an-svm-supervised-learning-model/
# http://scikit-learn.org/stable/auto_examples/svm/plot_iris.html#sphx-glr-auto-examples-svm-plot-iris-py
# http://scikit-learn.org/stable/auto_examples/plot_multilabel.html#sphx-glr-auto-examples-plot-multilabel-py
#http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html#sphx-glr-auto-examples-classification-plot-classifier-comparison-py
def display_points(X, Y):
from sklearn.decomposition import PCA
from sklearn import svm
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import train_test_split
import pylab as pl
import numpy as np
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.20, random_state=0)
pca = PCA(n_components=2).fit(X_train)
pca_2d = pca.transform(X_train)
svmClassifier_2d = svm.LinearSVC(random_state=0).fit(pca_2d, y_train)
#clf = DecisionTreeClassifier().fit(pca_2d, y_train)
for i in range(0, pca_2d.shape[0]):
if y_train[i] == 1:
c1 = pl.scatter(pca_2d[i,0],pca_2d[i,1], c='r', s=50,marker='+')
elif y_train[i] == 2:
c2 = pl.scatter(pca_2d[i,0],pca_2d[i,1], c='g', s=50,marker='o')
elif y_train[i] == 3:
c3 = pl.scatter(pca_2d[i,0],pca_2d[i,1], c='b', s=50,marker='*')
pl.legend([c1, c2, c3], ['Thiago', 'Alessandro', 'Ed'])
x_min, x_max = pca_2d[:, 0].min() - 1, pca_2d[:,0].max() + 1
y_min, y_max = pca_2d[:, 1].min() - 1, pca_2d[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .01), np.arange(y_min, y_max, .01))
Z = svmClassifier_2d.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
pl.contour(xx, yy, Z, alpha=0.8)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
pl.show()
Explanation: Create a function for the analysis
End of explanation
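display_points first projects the 6-dimensional feature space down to 2-D with PCA before plotting the decision regions; the projection step in isolation, on random stand-in data, looks like:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
features = rng.rand(30, 6)  # 30 samples, 6 features, as in the dataset
# Fit on the data, then project every sample onto the top 2 components.
projected = PCA(n_components=2).fit(features).transform(features)
```

The 2-D coordinates are what the scatter plot and the SVM contour are drawn over; the classifier shown in the plot is therefore trained in the projected space, not the original one.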
display_points(X, Y)
Explanation: Feature Observation
We examine the 6 features in the dataset
End of explanation
# Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size = 0.8, random_state = 0)
# Success
print "Training and testing split was successful."
Explanation: 2. Developing a model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Shuffle and split data
For the code cell below, you will need to implement the following:
Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
Split the data into 80% training and 20% testing.
Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
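The 80/20 split above can be checked on a toy array; note that in recent scikit-learn versions train_test_split lives in sklearn.model_selection rather than sklearn.cross_validation:

```python
import numpy as np
from sklearn.model_selection import train_test_split  # modern import path

X_demo = np.arange(20).reshape(10, 2)
y_demo = np.arange(10)
# train_size=0.8 of 10 samples -> 8 training rows and 2 test rows.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_demo, y_demo, train_size=0.8, random_state=0
)
```

Fixing random_state makes the shuffle reproducible, so repeated runs compare models on the same split.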
def model_1(resultados):
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
modelo = OneVsRestClassifier(LinearSVC(random_state = 0))
resultado = accuracy_score("OneVsRest", modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement
Algorithm OneVsRestClassifier
End of explanation
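A self-contained sketch of the one-vs-rest strategy on synthetic data shaped like ours (6 features, 3 classes) — it fits one binary LinearSVC per class and picks the most confident one:

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in for the 6-feature, 3-user dataset.
X_d, y_d = make_classification(n_samples=60, n_features=6, n_informative=4,
                               n_classes=3, random_state=0)
clf = OneVsRestClassifier(LinearSVC(random_state=0)).fit(X_d, y_d)
preds = clf.predict(X_d)
```

With k classes, one-vs-rest trains k binary classifiers, versus k(k-1)/2 for the one-vs-one strategy tried next.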
def model_2(resultados):
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC
modelo = OneVsOneClassifier(LinearSVC(random_state = 0))
resultado = accuracy_score("OneVsOne", modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm OneVsOne
End of explanation
def model_3(resultados):
from sklearn.naive_bayes import MultinomialNB
modelo = MultinomialNB()
resultado = accuracy_score("MultinomialNB", modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm MultinomialNB
End of explanation
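MultinomialNB assumes count-like, non-negative features (it raises on negative values), which the one-hot encoded inputs here satisfy; a minimal fit on made-up counts:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Non-negative, count-like features, as MultinomialNB expects.
X_counts = np.array([[2, 1], [0, 3], [4, 0], [1, 1]])
y_lab = np.array([0, 1, 0, 1])
nb = MultinomialNB().fit(X_counts, y_lab)
preds = nb.predict(X_counts)
```

If a feature matrix contained negative values (e.g. centered data), GaussianNB would be the appropriate naive Bayes variant instead.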
def model_4(resultados):
from sklearn.ensemble import AdaBoostClassifier
modelo = AdaBoostClassifier()
resultado = accuracy_score("AdaBoostClassifier", modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm AdaBoostClassifier
End of explanation
def model_5(resultados):
from sklearn.svm import LinearSVC
modelo = LinearSVC(random_state=0)
resultado = accuracy_score('LinearSVC', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm LinearSVC
End of explanation
def model_6(resultados):
from sklearn.svm import SVC
modelo = SVC(kernel='linear', C=0.025)
resultado = accuracy_score('SVC with Kernel Linear', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm SVC with Kernel Linear
End of explanation
def model_7(resultados):
from sklearn.tree import DecisionTreeClassifier
modelo = DecisionTreeClassifier(random_state=0)
resultado = accuracy_score('DecisionTreeClassifier', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm DecisionTreeClassifier
End of explanation
def model_8(resultados):
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC
modelo = OutputCodeClassifier(LinearSVC(random_state=0), code_size=2, random_state=0)
resultado = accuracy_score('OutputCodeClassifier', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm OutputCodeClassifier
End of explanation
def model_9(resultados):
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
modelo = GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True)
resultado = accuracy_score('GaussianProcessClassifier', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm GaussianProcessClassifier
End of explanation
def model_10(resultados):
from sklearn.neural_network import MLPClassifier
modelo = MLPClassifier(alpha=1)
resultado = accuracy_score('MLPClassifier', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm MLPClassifier
End of explanation
def model_11(resultados):
from sklearn.neighbors import KNeighborsClassifier
modelo = KNeighborsClassifier(6)
resultado = accuracy_score('KNeighborsClassifier', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm KNeighborsClassifier
End of explanation
def model_12(resultados):
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
modelo = QuadraticDiscriminantAnalysis()
resultado = accuracy_score('QuadraticDiscriminantAnalysis', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm QuadraticDiscriminantAnalysis
End of explanation
def model_13(resultados):
from sklearn.naive_bayes import GaussianNB
modelo = GaussianNB()
resultado = accuracy_score('GaussianNB', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm GaussianNB
End of explanation
def model_14(resultados):
from sklearn.svm import SVC
modelo = SVC(gamma=2, C=1)
resultado = accuracy_score('RBF SVM', modelo, X_train, y_train)
resultados[resultado] = modelo
Explanation: Algorithm RBF SVM
End of explanation
# Store the result of each algorithm and select the best
resultados = {}
# Create model 1
model_1(resultados)
# Create model 2
model_2(resultados)
# Create model 3
model_3(resultados)
# Create model 4
model_4(resultados)
# Create model 5
model_5(resultados)
# Create model 6
model_6(resultados)
# Create model 7
model_7(resultados)
# Create model 8
#model_8(resultados)
# Create model 9
#model_9(resultados)
# Create model 10
model_10(resultados)
# Create model 11
model_11(resultados)
# Create model 12
model_12(resultados)
# Create model 13
model_13(resultados)
# Create model 14
model_14(resultados)
performance_metric(resultados, X_train, X_test, y_train, y_test);
Explanation: Select the best algorithm
End of explanation
from sklearn.tree import DecisionTreeClassifier
from collections import Counter
modelo = DecisionTreeClassifier(random_state=0)
X_who_is, Y_who_is, df = get_who_is()
#X_who_is = df[['blink_left', 'blink_right', 'smile_or_not']]
#print X_who_is
modelo.fit(X, Y)
predict = modelo.predict(X_who_is)
result = Counter(predict)
who_is = result.most_common()[0][0]
print result
if who_is == 1:
msg = "You seem to be Thiago"
elif who_is == 2:
msg = "You seem to be Alessandro"
elif who_is == 3:
msg = "You seem to be Ed"
print msg
Explanation: Predicting who is
End of explanation |
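The per-frame predictions above are reduced to a single identity by majority vote: Counter counts each label and most_common()[0][0] returns the most frequent one. A small standalone illustration with made-up predictions:

```python
from collections import Counter

# Hypothetical per-frame class predictions from the classifier
predict = [1, 3, 1, 2, 1, 1, 3, 1]
result = Counter(predict)
who_is = result.most_common()[0][0]  # the most frequent label wins
print(result)
print("Majority label: {}".format(who_is))
```

Voting over many frames makes the final identity robust to a few misclassified frames.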
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step2: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step3: LSTAT
Step4: PTRATIO
Step6: Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation
Step7: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step8: Answer
Step9: Implementation
Step10: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step11: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step13: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step14: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step15: Answer
Step16: Answer
Step17: Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on. | Python Code:
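The ten-trial sensitivity check described above can be sketched as a loop that reshuffles the data, refits, and predicts for the same client each time. To keep the snippet self-contained, a trivial nearest-neighbour average stands in for the notebook's fit_model and housing data — the numbers below are invented for illustration:

```python
import random

# Toy 1-feature housing data: price grows with room count, plus noise.
random.seed(1)
homes = [(rooms, 50000 * rooms + random.gauss(0, 20000))
         for rooms in [4, 5, 5, 6, 6, 6, 7, 7, 8, 9] * 5]

client = 5  # fixed client: a 5-room home

def fit_and_predict(train):
    # Stand-in "model": average price of the 3 homes nearest in room count.
    nearest = sorted(train, key=lambda h: abs(h[0] - client))[:3]
    return sum(price for _, price in nearest) / 3.0

predictions = []
for trial in range(10):
    random.shuffle(homes)
    train = homes[:40]  # refit on a different 80% split each trial
    predictions.append(fit_and_predict(train))

spread = max(predictions) - min(predictions)
print("Trial predictions: " + ", ".join("${:,.0f}".format(p) for p in predictions))
print("Range in prices: ${:,.2f}".format(spread))
```

The spread of the ten predictions is the sensitivity: a robust model should keep it small relative to the price itself.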
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
pt_ratio = data["RM"].reshape(-1,1)
reg.fit(pt_ratio, prices)
# Create the figure window
plt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)
plt.scatter(pt_ratio, prices, alpha=0.5, c=prices)
plt.xlabel('RM')
plt.ylabel('Prices')
plt.show()
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer:
Based on my intuition, the expected effect of each of the three features on the value of 'MEDV' (the target) is described below. This is further supported by the linear regression plots.
- 'RM' : Typically, an increase in 'RM' would lead to an increase in 'MEDV'. 'RM' is also a good indication of the size of the homes: bigger house, bigger value.
- 'LSTAT' : Typically, a decrease in 'LSTAT' would lead to an increase in 'MEDV'. Since 'LSTAT' indicates the share of homeowners considered "lower class" (working poor), a higher value reduces the value of homes.
- 'PTRATIO' : Typically, a decrease in 'PTRATIO' (students per teacher) would lead to an increase in 'MEDV', since a lower ratio indicates that schools are good and sufficiently staffed and funded.
RM
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
pt_ratio = data["LSTAT"].reshape(-1,1)
reg.fit(pt_ratio, prices)
# Create the figure window
plt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)
plt.scatter(pt_ratio, prices, alpha=0.5, c=prices)
plt.xlabel('LSTAT')
plt.ylabel('Prices')
plt.show()
Explanation: LSTAT
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
pt_ratio = data["PTRATIO"].reshape(-1,1)
reg.fit(pt_ratio, prices)
# Create the figure window
plt.plot(pt_ratio, reg.predict(pt_ratio), color='red', lw=1)
plt.scatter(pt_ratio, prices, alpha=0.5, c=prices)
plt.xlabel('PTRATIO')
plt.ylabel('Prices')
plt.show()
Explanation: PTRATIO
End of explanation
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
    """Calculates and returns the performance score between
    true and predicted values based on the metric chosen."""
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
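As a sanity check on performance_metric, R<sup>2</sup> can also be computed directly from its definition, R² = 1 − SS_res / SS_tot. This pure-Python version (no sklearn) reproduces the 0.923 reported for the five-point example in the next cell:

```python
def r2_by_hand(y_true, y_predict):
    """R^2 = 1 - (residual sum of squares) / (total sum of squares)."""
    mean_y = sum(y_true) / float(len(y_true))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_predict))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

score = r2_by_hand([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("R^2 computed by hand: {:.3f}".format(score))
```

Note that SS_tot compares against a constant mean prediction, which is why an R² of 0 means "no better than always predicting the mean" and negative values mean "worse than that".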
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
true, pred = [3.0, -0.5, 2.0, 7.0, 4.2],[2.5, 0.0, 2.1, 7.8, 5.3]
#plot true values
true_handle = plt.scatter(true, true, alpha=0.6, color='blue', label = 'True' )
#reference line
fit = np.poly1d(np.polyfit(true, true, 1))
lims = np.linspace(min(true)-1, max(true)+1)
plt.plot(lims, fit(lims), alpha = 0.3, color = "black")
#plot predicted values
pred_handle = plt.scatter(true, pred, alpha=0.6, color='red', label = 'Pred')
#legend & show
plt.legend(handles=[true_handle, pred_handle], loc="upper left")
plt.show()
Explanation: Answer:
- Yes, I would consider this model to have successfully captured the variation of the target variable.
- R2 is 0.923, which is very close to 1; the model explains about 92.3% of the variation in the true values.
- With only five data points, it is also possible to plot the true and predicted values to get a visual representation in this scenario
End of explanation
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print "Training and testing split was successful."
Explanation: Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer:
- A learning algorithm is used to predict response labels for unseen data; we do not need it to reproduce labels it has already seen. It is therefore beneficial to hold out some ratio of the dataset as a test subset that the algorithm never sees during training. The algorithm is fitted on the training subset and then used to predict labels for the test subset, which measures its performance.
- By splitting the dataset into training and testing subsets, we give the learning algorithm only the training subset to learn the behavior of the response label against the features. We can then have it predict labels for the testing subset and compare the predictions with the actual test labels to compute the test error. Test error is a better metric of a learning algorithm's performance than training error.
- Using training and testing subsets, we can also tune the learning algorithm to reduce bias and variance.
- Without a way to test on held-out data, the training error would have to serve as the performance metric; in that case a learning algorithm could have high variance (overfit the data) without us noticing, and might not be the right algorithm for the dataset.
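What can go wrong without a test set is easy to demonstrate with a toy "memorizer" model: it looks perfect when scored on the data it was trained on, but falls apart on held-out points. A pure-Python sketch (the labels here are just the parity of x, invented for illustration):

```python
import random

random.seed(0)
points = list(range(100))
random.shuffle(points)
data = [(x, x % 2) for x in points]  # label = parity of x
train, test = data[:80], data[80:]

memory = {x: y for x, y in train}    # the "model" just memorizes training pairs

def predict(x):
    return memory.get(x, 0)          # unseen inputs: blind guess

train_acc = sum(predict(x) == y for x, y in train) / float(len(train))
test_acc = sum(predict(x) == y for x, y in test) / float(len(test))
print("Training accuracy: {:.2f}".format(train_acc))
print("Testing accuracy:  {:.2f}".format(test_acc))
```

Judged by training accuracy alone, the memorizer would look like a flawless model; the held-out split exposes it immediately.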
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
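The learning curves above are produced by retraining the model on increasing training-set sizes and scoring each fit on both sets. A stripped-down sketch of that loop — an ordinary least-squares line stands in for the decision tree, and the synthetic noisy-line data and sizes are invented for illustration:

```python
import random

random.seed(7)
xs = list(range(60))
ys = [2.0 * x + random.gauss(0, 5) for x in xs]
pairs = list(zip(xs, ys))
random.shuffle(pairs)
train, test = pairs[:48], pairs[48:]

def r2(y_true, y_pred):
    mean_y = sum(y_true) / float(len(y_true))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

for size in [6, 12, 24, 48]:
    subset = train[:size]
    # "Fit": a least-squares line through the subset (closed form).
    n = float(size)
    sx = sum(x for x, _ in subset); sy = sum(y for _, y in subset)
    sxx = sum(x * x for x, _ in subset); sxy = sum(x * y for x, y in subset)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    predict = lambda x: slope * x + intercept
    train_score = r2([y for _, y in subset], [predict(x) for x, _ in subset])
    test_score = r2([y for _, y in test], [predict(x) for x, _ in test])
    print("n={:2d}: train R^2 {:.3f}, test R^2 {:.3f}".format(size, train_score, test_score))
```

Plotting train_score and test_score against size gives exactly the pair of curves shown in the graphs.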
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer:
- The chosen graph has maximum depth max_depth = 1.
- The score of the training curve decreases as more training points are added.
- The score of the testing curve increases as more training points are added.
- Both the training and testing curves plateau (or show only minimal gains) after around 300 training points, so adding more training points would not benefit the model. The two learning curves appear to converge around a score of 0.4.
- The low, deteriorating training score indicates high bias.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
    """Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]."""
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1,11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring = scoring_fnc, cv = cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
- Yes, when the model is trained with a maximum depth of 1, it suffers from high bias and low variance.
- When the model is trained with a maximum depth of 10, it suffers from high variance and low bias.
- When the training and validation scores are close to each other, the model has low variance. In the max_depth = 1 graph both scores are low, which suggests the model is too simple to capture the data: it is biased and underfits. When there is a large gap between the training and validation scores, variance is high: the model has learned the training data too well, fitting it closely and overfitting. At around maximum depth 4, the model appears to strike the right trade-off between bias and variance, performing well on both the training and validation scores.
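The two failure modes can be mimicked without sklearn: a constant mean predictor is maximally biased (like max_depth = 1), while a memorizing predictor is maximally variant (like a tree grown until every leaf holds one sample). Comparing their train/test errors on noisy data shows the signatures described above — a hedged toy sketch, not the notebook's decision tree:

```python
import random

random.seed(42)
# Noisy quadratic data: y = x^2 + noise
xs = [x / 10.0 for x in range(-20, 21)]
ys = [x * x + random.gauss(0, 0.5) for x in xs]
train_idx = list(range(0, len(xs), 2))  # even indices -> train
test_idx = list(range(1, len(xs), 2))   # odd indices  -> test

def mse(pred, idx):
    return sum((pred(xs[i]) - ys[i]) ** 2 for i in idx) / float(len(idx))

# High bias: always predict the training mean.
mean_y = sum(ys[i] for i in train_idx) / float(len(train_idx))
bias_train = mse(lambda x: mean_y, train_idx)
bias_test = mse(lambda x: mean_y, test_idx)

# High variance: memorize training points, use the nearest x elsewhere.
table = {xs[i]: ys[i] for i in train_idx}
def memorize(x):
    nearest = min(table, key=lambda t: abs(t - x))
    return table[nearest]
var_train = mse(memorize, train_idx)
var_test = mse(memorize, test_idx)

print("High bias     - train MSE {:.2f}, test MSE {:.2f}".format(bias_train, bias_test))
print("High variance - train MSE {:.2f}, test MSE {:.2f}".format(var_train, var_test))
```

The biased model scores similarly badly on both sets, while the memorizer has zero training error and a clearly worse test error — the same gap pattern visible in the complexity curves.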
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer:
- At around maximum depth 4, the model performs best on both the training and validation scores, giving the right trade-off between bias and variance. After maximum depth 4, the validation score stops improving and starts to deteriorate while the training score keeps increasing, which is a sign of overfitting (high variance introduced by the model).
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer:
Grid search is a technique for hyperparameter optimization: an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm.
Grid search methodically builds and evaluates a model for each combination of the algorithm parameters specified in the grid. The search is guided by a performance metric, typically measured by cross-validation on the training set or by evaluation on a held-out validation set, and the best combination is retained.
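Concretely, grid search is a loop over the Cartesian product of the candidate values. A pure-Python sketch of the enumeration that GridSearchCV performs internally — the scoring function here is a stand-in that simply rewards one combination:

```python
from itertools import product

param_grid = {'max_depth': [2, 4, 6], 'min_samples_split': [2, 10]}

def cv_score(params):
    # Stand-in for "fit the model with these params and cross-validate";
    # here it just rewards max_depth == 4 and min_samples_split == 2.
    return 1.0 - abs(params['max_depth'] - 4) * 0.1 - (params['min_samples_split'] - 2) * 0.01

names = sorted(param_grid)
candidates = [dict(zip(names, values))
              for values in product(*(param_grid[n] for n in names))]
best = max(candidates, key=cv_score)

print("Evaluated {} combinations".format(len(candidates)))
print("Best parameters: {}".format(best))
```

The cost is the product of the grid sizes (3 × 2 = 6 fits here), which is why grids are kept small and coarse in practice.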
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer:
- In k-fold cross-validation training technique, the original dataset is randomly partitioned into k equal sized subsets. Of the k subsets, a single subset is retained as the validation data (or test data) for testing the model, and the remaining k − 1 subsets are used as training data. The cross-validation process is then repeated k times (the folds), with each of the k subsets used exactly once as the validation data. The k results from the folds can then be averaged to produce a single estimation. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter.
- A grid search algorithm must be guided by a performance metric, typically measured by cross-validation. The advantage is that all observations are used for both training and validation, and each observation is used for validation exactly once. This lowers the variance of the performance estimate and makes it less likely that the optimized model is overfitted to one particular train/validation split.
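The partitioning behind k-fold can be written out directly: each index lands in exactly one validation fold and in the training set of the other k − 1 folds. A minimal sketch without sklearn:

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k consecutive folds, returning
    (train_indices, validation_indices) pairs."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, val))
        start += size
    return folds

folds = k_fold_indices(10, 3)
for i, (train, val) in enumerate(folds):
    print("Fold {}: validate on {}, train on the rest".format(i, val))

# Every index is used for validation exactly once:
all_val = sorted(i for _, val in folds for i in val)
print(all_val == list(range(10)))
```

Real implementations also shuffle the indices first; ShuffleSplit, used later in fit_model, instead draws a fresh random train/validation split on every iteration, so an observation may be validated more than once or not at all.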
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
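Conceptually, the grid search over 'max_depth' reduces to trying each candidate value and keeping the best cross-validation score. The sketch below strips out the model fitting that GridSearchCV performs; the `toy_scores` numbers are made up for illustration and are not results from this dataset:

```python
def grid_search_1d(param_values, evaluate):
    """Evaluate every candidate parameter value and keep the best-scoring one."""
    best_value, best_score = None, float('-inf')
    for value in param_values:
        score = evaluate(value)
        if score > best_score:
            best_value, best_score = value, score
    return best_value, best_score

# Toy cross-validation scores per max_depth (illustrative numbers only).
toy_scores = {1: 0.44, 2: 0.69, 3: 0.77, 4: 0.80, 5: 0.78, 6: 0.75}
best_depth, best_score = grid_search_1d(range(1, 7), lambda d: toy_scores[d])
```

In the real fit_model, `evaluate` corresponds to fitting the regressor on each training fold and scoring it with the performance metric.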
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
Explanation: Answer:
- 4, which is the same as my guess in Question 6
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
import numpy as np
from matplotlib import pyplot as plt
clients = np.transpose(client_data)
pred = reg.predict(client_data)
for i, feat in enumerate(['RM', 'LSTAT', 'PTRATIO']):
plt.scatter(features[feat], prices, alpha=0.25, c=prices)
plt.scatter(clients[i], pred, color='black', marker='x', linewidths=2)
plt.xlabel(feat)
plt.ylabel('MEDV')
plt.show()
Explanation: Answer:
The predicted selling prices are \$391,183.33, \$189,123.53 and \$942,666.67 for Client 1's home, Client 2's home and Client 3's home respectively.
Facts from the descriptive statistics:
- Distribution:
Statistics for Boston housing dataset:
Minimum price: \$105,000.00
Maximum price: \$1,024,800.00
Mean price: \$454,342.94
Median price \$438,900.00
Standard deviation of prices: \$165,171.13
- Effects of features:
Based on my intuition, the effect of each feature below on the value of 'MEDV' (the label, or target) is as follows:
- 'RM' : Typically an increase in 'RM' would lead to an increase in 'MEDV'. 'RM' is also a good indication of the size of the home: bigger house, bigger value.
- 'LSTAT' : Typically a decrease in 'LSTAT' would lead to an increase in 'MEDV'. As 'LSTAT' indicates the share of "lower class" workers in the neighborhood, a higher value reduces the value of a home.
- 'PTRATIO' : Typically a decrease in 'PTRATIO' (students per teacher) would lead to an increase in 'MEDV', as a low 'PTRATIO' indicates that schools are good and sufficiently staffed and funded.
Are the estimates reasonable:
- Client 1's home (\$391,183.33):
- Distribution: The estimate is inside the normal range of prices (within one standard deviation of the mean and the median).
- Feature effects: The feature values all are in between those for the other clients. Thus, it seems reasonable that the estimated price is also in between.
- Conclusion: reasonable estimate
- Client 2's home (\$189,123.53)
- Distribution: The estimate is more than one standard deviation below the mean but less than two. Thus, it is not really a typical value, but still acceptable.
- Feature effects: Of the 3 clients' houses, this one has lowest RM, highest LSTAT, and highest PTRATIO. All this should decrease the price, which is in line with it being the lowest of all prices.
- Conclusion: it is reasonable that the price is low, but my confidence in the exact value of the estimate is lower than for client 1. Still, the model could reasonably be used for client 2.
- Client 3's home (\$942,666.67):
- Distribution: The estimate is almost 3 standard deviations above the mean (mean + 3σ ≈ \$949,856.33) and very close to the maximum of \$1,024,800.00. Thus, this value is very atypical for this dataset and should be viewed with scepticism.
- Feature effects: This is the house with the highest RM, lowest LSTAT, and lowest PTRATIO of all 3 clients. Thus, it seems theoretically fine that it has the highest price too.
- Conclusion: The price should indeed be high, but I would not trust an estimate that far off the mean. Hence, my confidence in this prediction is lowest. I would not recommend using the model for estimates in this range.
Side note: arguing with summary statistics like mean and standard deviations relies on house prices being at least somewhat normally distributed.
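The distribution argument can be checked numerically with z-scores, using the summary statistics already computed above. This check is an illustration added here, not part of the original notebook:

```python
# Summary statistics from the Data Exploration section above.
mean_price = 454342.94
std_price = 165171.13

# Predicted selling prices per client.
estimates = {1: 391183.33, 2: 189123.53, 3: 942666.67}

# z-score: how many standard deviations each estimate sits from the mean.
z_scores = {client: (price - mean_price) / std_price
            for client, price in estimates.items()}
```

Client 1 sits well within one standard deviation, client 2 between one and two below the mean, and client 3 roughly three standard deviations above it.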
We can also see this in the plots below, which show the client features plotted against the dataset.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
8,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PLEASE MAKE A COPY BEFORE CHANGING
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: Install Dependencies
Step2: Define function to enable charting library
Step11: Authenticate against the ADH API
ADH documentation
Step12: Frequency Analysis
<b>Purpose
Step13: Step 2 - Create a function for the final calculations
From the DT (Data Transfer) data, calculate metrics using pandas
Pass the pandas dataframe through when you call this function
Step14: Step 3 - Build the query
Set up the variables
Step16: Part 1 - Find all impressions from the impression table
Step18: Part 2 - Find all clicks from the clicks table
Step20: output example
Step22: output example
Step23: output example
Step24: Create the query required for ADH
* When working with ADH the standard BigQuery query needs to be adapted to run in ADH
* This can be done bia the API
Step26: Full Query
Step27: Check your query exists
https
Step28: Retrieve the results from BigQuery
Check to make sure the query has finished running and is saved in the new BigQuery table
When it is done we can retrieve it
Step29: We are using the pandas library to run the query.
We pass in the query (q), the project id and set the SQL language to 'standard' (as opposed to legacy SQL)
Step30: Save the output as a CSV
Step31: Step 6 - Set up the data and all the charts that will be plotted
6.1 Transform data
Use the calculation function created above to calculate all the values based on your data
Step32: Analysis 1
Step33: Step 2
Step34: Output
Step35: Clicks and CPC Comparison on Each Frequency
What is your CPC ceiling
Understand what the frequency is at that level
Determine what impact changing your frequency will have on clicks
Step36: CTR and CPC Comparison on Each Frequency
How do your CTR and CPC impact each other
Make an informed decision regarding suitable goals
Step37: Cumulative Clicks and CPC Comparison on Each Frequency
Understand what a suitable CPC goal might be
1. What is the change in cost for increased clicks
2. What are the incremental gains for an increased cost
Step38: Cumulative Clicks and CTR Comparison on Each Frequency
At what frequency does your CTR drop below an acceptable value
Step39: Analysis 2
Step40: Output
Step41: Cummulative Impressions and Cummulative Clicks on Each Frequency
To obtain my goals in terms of clicks, what frequency do I need, at what impression cost?
Step42: Analysis 3
Step43: Output | Python Code:
# The Developer Key is used to retrieve a discovery document containing the
# non-public Full Circle Query v2 API. This is used to build the service used
# in the samples to make API requests. Please see the README for instructions
# on how to configure your Google Cloud Project for access to the Full Circle
# Query v2 API.
DEVELOPER_KEY = 'yourkey' #'INSERT_DEVELOPER_KEY_HERE'
# The client secrets file can be downloaded from the Google Cloud Console.
CLIENT_SECRETS_FILE = 'adh-key.json' #'Make sure you have correctly renamed this file and you have uploaded it in this colab'
Explanation: PLEASE MAKE A COPY BEFORE CHANGING
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Important
This content is intended for educational and informational purposes only.
Configuration
ADH APIs Configuration Steps
Enable the ADH v1 API in the Google Cloud project you use to access the API.
When searching for the API in your GCP Console API Library, use the search term “adsdatahub”.
Go to the Google Developers Console and verify that you have access to your Google Cloud project via the drop-down menu at the top of the page. If you don't see the right Google Cloud project, you should reach out to your Ads Data Hub team to get access.
From the project drop-down menu, select your Big Query project.
Click on the hamburger button on the top left corner of the page and click APIs & services > Credentials.
If you have not done so already, create an API key by clicking the Create credentials drop-down menu and select API key. This will create an API key that you will need for a later step.
If you have not done so already, create a new OAuth 2.0 client ID by clicking the Create credentials button and select OAuth client ID. For the Application type select Other and optionally enter a name to be associated with the client ID. Click Create to create the new Client ID and a dialog will appear to show you your client ID and secret. On the Credentials page for
your project, find your new client ID listed under OAuth 2.0 client IDs, and click the corresponding download icon. The downloaded file will contain your credentials, which will be needed to step through the OAuth 2.0 installed application flow.
Update the DEVELOPER_KEY field to match the API key you retrieved earlier.
Rename the credentials file you downloaded earlier to adh-key.json and upload the file in this colab (on the left menu click on the "Files" tab and then click on the "upload" button).
End of explanation
import json
import sys
import argparse
import pprint
import random
import datetime
import pandas as pd
import plotly.plotly as py
import plotly.graph_objs as go
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery
from oauthlib.oauth2.rfc6749.errors import InvalidGrantError
from google.auth.transport.requests import AuthorizedSession
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from plotly.offline import iplot
from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter
from googleapiclient.errors import HttpError
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
Explanation: Install Dependencies
End of explanation
# Allow plot images to be displayed
%matplotlib inline
# Functions
def enable_plotly_in_cell():
import IPython
from plotly.offline import init_notebook_mode
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
'''))
init_notebook_mode(connected=False)
Explanation: Define function to enable charting library
End of explanation
#!/usr/bin/python
#
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Utilities used to step through OAuth 2.0 flow.
These are intended to be used for stepping through samples for the Full Circle
Query v2 API.
_APPLICATION_NAME = 'ADH Campaign Overlap'
_CREDENTIALS_FILE = 'fcq-credentials.json'
_SCOPES = 'https://www.googleapis.com/auth/adsdatahub'
_DISCOVERY_URL_TEMPLATE = 'https://%s/$discovery/rest?version=%s&key=%s'
_FCQ_DISCOVERY_FILE = 'fcq-discovery.json'
_FCQ_SERVICE = 'adsdatahub.googleapis.com'
_FCQ_VERSION = 'v1'
_REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'
_SCOPE = ['https://www.googleapis.com/auth/adsdatahub']
_TOKEN_URI = 'https://accounts.google.com/o/oauth2/token'
MAX_PAGE_SIZE = 50
def _GetCredentialsFromInstalledApplicationFlow():
Get new credentials using the installed application flow.
flow = InstalledAppFlow.from_client_secrets_file(
CLIENT_SECRETS_FILE, scopes=_SCOPE)
flow.redirect_uri = _REDIRECT_URI # Set the redirect URI used for the flow.
auth_url, _ = flow.authorization_url(prompt='consent')
print ('Log into the Google Account you use to access the adsdatahub Query '
'v1 API and go to the following URL:\n%s\n' % auth_url)
print('After approving the token, enter the verification code (if specified).')
code = input('Code: ')
try:
flow.fetch_token(code=code)
except InvalidGrantError as ex:
print('Authentication has failed: %s' % ex)
sys.exit(1)
credentials = flow.credentials
_SaveCredentials(credentials)
return credentials
def _LoadCredentials():
Loads and instantiates Credentials from JSON credentials file.
with open(_CREDENTIALS_FILE, 'rb') as handler:
stored_creds = json.loads(handler.read())
creds = Credentials(client_id=stored_creds['client_id'],
client_secret=stored_creds['client_secret'],
token=None,
refresh_token=stored_creds['refresh_token'],
token_uri=_TOKEN_URI)
return creds
def _SaveCredentials(creds):
Save credentials to JSON file.
stored_creds = {
'client_id': getattr(creds, '_client_id'),
'client_secret': getattr(creds, '_client_secret'),
'refresh_token': getattr(creds, '_refresh_token')
}
with open(_CREDENTIALS_FILE, 'wb') as handler:
handler.write(json.dumps(stored_creds))
def GetCredentials():
Get stored credentials if they exist, otherwise return new credentials.
If no stored credentials are found, new credentials will be produced by
stepping through the Installed Application OAuth 2.0 flow with the specified
client secrets file. The credentials will then be saved for future use.
Returns:
A configured google.oauth2.credentials.Credentials instance.
try:
creds = _LoadCredentials()
creds.refresh(Request())
except IOError:
creds = _GetCredentialsFromInstalledApplicationFlow()
return creds
def GetDiscoveryDocument():
Downloads the adsdatahub v1 discovery document.
Downloads the adsdatahub v1 discovery document to fcq-discovery.json
if it is accessible. If the file already exists, it will be overwritten.
Raises:
ValueError: raised if the discovery document is inaccessible for any reason.
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
auth_session = AuthorizedSession(credentials)
discovery_response = auth_session.get(discovery_url)
if discovery_response.status_code == 200:
with open(_FCQ_DISCOVERY_FILE, 'wb') as handler:
handler.write(discovery_response.text)
else:
raise ValueError('Unable to retrieve discovery document for api name "%s"'
'and version "%s" via discovery URL: %s'
% _FCQ_SERVICE, _FCQ_VERSION, discovery_url)
def GetService():
Builds a configured adsdatahub v1 API service.
Returns:
A googleapiclient.discovery.Resource instance configured for the adsdatahub v1 service.
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
service = discovery.build(
'adsdatahub', _FCQ_VERSION, credentials=credentials,
discoveryServiceUrl=discovery_url)
return service
def GetServiceFromDiscoveryDocument():
Builds a configured Full Circle Query v2 API service via discovery file.
Returns:
A googleapiclient.discovery.Resource instance configured for the Full Circle
Query API v2 service.
credentials = GetCredentials()
with open(_FCQ_DISCOVERY_FILE, 'rb') as handler:
discovery_doc = handler.read()
service = discovery.build_from_document(
service=discovery_doc, credentials=credentials)
return service
try:
full_circle_query = GetService()
except IOError as ex:
print ('Unable to create ads data hub service - %s' % ex)
print ('Did you specify the client secrets file in samples_util.py?')
sys.exit(1)
try:
# Execute the request.
response = full_circle_query.customers().list().execute()
except HttpError as e:
print (e)
sys.exit(1)
if 'customers' in response:
print ('ADH API Returned {} Ads Data Hub customers for the current user!'.format(len(response['customers'])))
for customer in response['customers']:
print(json.dumps(customer))
else:
print ('No customers found for current user.')
Explanation: Authenticate against the ADH API
ADH documentation
End of explanation
#@title Define ADH configuration parameters
customer_id = 000000001 #@param
dataset_id = 000000001 #@param
query_name = "query_name" #@param {type:"string"}
big_query_project = 'bq_project_id' #@param Destination Project ID {type:"string"}
big_query_dataset = 'dataset_name' #@param Destination Dataset {type:"string"}
big_query_destination_table = 'table_name' #@param Destination Table {type:"string"}
start_date = '2019-09-01' #@param {type:"date", allow-input: true}
end_date = '2019-09-30' #@param {type:"date", allow-input: true}
max_freq = 100 #@param {type:"integer", allow-input: true}
cpm = 5 #@param {type:"number", allow-input: true}
id_type = "campaign_id" #@param ["", "advertiser_id", "campaign_id", "placement_id", "ad_id"] {type: "string", allow-input: false}
IDs = "" #@param {type: "string", allow-input: true}
Explanation: Frequency Analysis
<b>Purpose:</b> This tool should be used to guide you in defining an optimal frequency cap considering the CTR curve. For that reason it is most useful in awareness use cases.
Key notes
For some campaigns the user ID will be <b>zeroed</b> (e.g. Google Data, ITP browsers and YouTube Data), and therefore <b>excluded</b> from the analysis. For more information click <a href="https://support.google.com/dcm/answer/9006418" > here</a>;
Only campaigns whose clicks and impressions were tracked will be included in the analysis.
Instructions
* First of all: <b>MAKE A COPY</b> =);
* Fill in the query parameters in Box 1;
* In the menu above click Runtime > Run All;
* Authorize your credentials;
* Go to the end of the colab and your figures will be ready;
* After defining what the optimal frequency cap should be, fill it in Box 2 and press play.
Step 1 - Instructions - Defining parameters to find the optimal frequency
<b>max_freq:</b> Stands for the maximum frequency you want to plot in the graphics (e.g. if you put 50, you will look at impressions that were shown up to 50 times per user);
<b>id_type:</b> How do you want to filter your data (if you don't want to filter leave it blank);
<b>IDs:</b> According to the id_type chosen before, fill in this field following this pattern: 'id-1111', 'id-2222', ...
End of explanation
def df_calc_fields(df):
df['ctr'] = df.clicks / df.impressions
df['cost'] = (df.impressions / 1000 ) * cpm
df['cpc'] = df.cost / df.clicks
df['cumulative_clicks'] = df.clicks.cumsum()
df['cumulative_impressions'] = df.impressions.cumsum()
df['cumulative_reach'] = df.reach.cumsum()
df['cumulative_cost'] = df.cost.cumsum()
df['coverage_clicks'] = df.cumulative_clicks / df.clicks.sum()
df['coverage_impressions'] = df.cumulative_impressions / df.impressions.sum()
df['coverage_reach'] = df.cumulative_reach / df.reach.sum()
return df
Explanation: Step 2 - Create a function for the final calculations
From the DT (Data Transfer) data, calculate metrics using pandas
Pass the pandas dataframe through when you call this function
End of explanation
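As a quick sanity check of the per-frequency arithmetic in df_calc_fields, the cost, CTR, and CPC formulas can be worked through with toy numbers (these are not campaign data; cost is derived from the CPM parameter set in Step 1):

```python
cpm = 5.0                           # assumed cost per 1,000 impressions (Step 1 parameter)
impressions, clicks = 20000, 150    # toy totals for one frequency bucket

cost = impressions / 1000.0 * cpm   # media cost at this frequency
ctr = clicks / float(impressions)   # click-through rate
cpc = cost / clicks                 # effective cost per click
```

With these numbers the cost is 100.0, the CTR 0.75%, and the CPC about 0.67 per click.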
# Build the query
dc = {}
if (IDs == ""):
dc['ID_filters'] = ""
else:
dc['id_type'] = id_type
dc['IDs'] = IDs
dc['ID_filters'] = '''AND event.{id_type} IN ({IDs})'''.format(**dc)
Explanation: Step 3 - Build the query
Set up the variables
End of explanation
q1 =
WITH
imp_u_clicks AS (
SELECT
User_ID,
event.event_time AS interaction_time,
'imp' AS interaction_type
FROM
adh.cm_dt_impressions
WHERE
user_id != '0'
{ID_filters}
Explanation: Part 1 - Find all impressions from the impression table:
* Select all user IDs from the impression table
* Select the event_time
* Mark the interaction type as 'imp' for all of these rows
* Filter for the dates set in Step 1 using the partition files to reduce bigQuery costs by only searching in files within a 2 day interval of the set date range
* Filter out any user IDs that are 0
* If specific ID filters were applied in Step 1 filter the data for those IDs
End of explanation
q2 =
UNION ALL (
SELECT
User_ID,
event.event_time AS interaction_time,
'click' AS interaction_type
FROM
adh.cm_dt_clicks
WHERE
user_id != '0'
{ID_filters} ) ),
Explanation: Part 2 - Find all clicks from the clicks table:
Select all User IDs from the click table
Select the event_time
Mark the interaction type as 'click' for all of these rows
Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day interval of the set date range
If specific ID filters were applied in Step 2 filter the data for those IDs
Use a union to create a single table with both impressions and clicks
End of explanation
q3 =
user_level_data AS (
SELECT
user_id,
SUM(IF(interaction_type = 'imp',
1,
0)) AS impressions,
SUM(IF(interaction_type = 'click',
1,
0)) AS clicks
FROM
imp_u_clicks
GROUP BY
user_id)
Explanation: output example:
<table>
<tr>
<th>USER_ID</th>
<th>interaction_time</th>
<th>interaction_type</th>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>click</td>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
<td>impression</td>
</tr>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
<td>click</td>
</tr>
</tr>
<tr>
<td>003</td>
<td>timestamp</td>
<td>impression</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>impression</td>
</tr>
</table>
Part 3 - Calculate impressions and clicks per user:
For each user, calculate the number of impressions and clicks using the table created in Part 1 and 2
End of explanation
q4 =
SELECT
impressions AS frequency,
SUM(clicks) AS clicks,
SUM(impressions) AS impressions,
COUNT(*) AS reach
FROM
user_level_data
GROUP BY
1
ORDER BY
frequency ASC
Explanation: output example:
<table>
<tr>
<th>USER_ID</th>
<th>impressions</th>
<th>clicks</th>
</tr>
<tr>
<td>001</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>002</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>003</td>
<td>1</td>
<td>0</td>
</tr>
</table>
Part 4 - Calculate metrics per frequency:
Use the table created in Part 3 with metrics at user level to calculate metrics per each frequency
Frequency: The number of impressions served to each user
Clicks: The sum of clicks that occurred at each frequency
Impressions: The sum of all impressions that occurred at each frequency
Reach: The total number of unique users (the count of all user ids)
Group by Frequency
End of explanation
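The same aggregation can be reproduced outside of SQL. Below is a sketch in plain Python using the toy user-level rows from the output example above (illustrative only; the production query does this in ADH):

```python
from collections import defaultdict

# (user_id, impressions, clicks) rows, as in the Part 3 output example.
user_level = [("001", 3, 1), ("002", 1, 1), ("003", 1, 0)]

per_frequency = defaultdict(lambda: {"clicks": 0, "impressions": 0, "reach": 0})
for _, impressions, clicks in user_level:
    row = per_frequency[impressions]   # frequency = impressions served to the user
    row["clicks"] += clicks
    row["impressions"] += impressions
    row["reach"] += 1                  # each row is one unique user
```

This reproduces the Part 4 output example: frequency 1 has 1 click, 2 impressions, and a reach of 2, while frequency 3 has 1 click, 3 impressions, and a reach of 1.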
query_text = (q1 + q2 + q3 + q4).format(**dc)
print(query_text)
Explanation: output example:
<table>
<tr>
<th>frequency</th>
<th>clicks</th>
<th>impression</th>
<th>reach</th>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>3</td>
<td>1</td>
</tr>
</table>
Join the query parts and use Python's format method to pass in the parameters you set in Step 1
End of explanation
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_create_body = {
'name': query_name,
'title': query_name,
'queryText': query_text
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute()
new_query_name = new_query["name"]
except HttpError as e:
print(e)
sys.exit(1)
print('New query %s created for customer ID "%s":' % (new_query_name, customer_id))
print(json.dumps(new_query))
Explanation: Create the query required for ADH
* When working with ADH the standard BigQuery query needs to be adapted to run in ADH
* This can be done bia the API
End of explanation
# Build the query
dc = {}
if (IDs == ""):
dc['ID_filters'] = ""
else:
dc['id_type'] = id_type
dc['IDs'] = IDs
dc['ID_filters'] = '''AND event.{id_type} IN ({IDs})'''.format(**dc)
query_text =
WITH
imp_u_clicks AS (
SELECT
User_ID,
event.event_time AS interaction_time,
'imp' AS interaction_type
FROM
adh.cm_dt_impressions
WHERE
user_id != '0'
{ID_filters}
UNION ALL (
SELECT
User_ID,
event.event_time AS interaction_time,
'click' AS interaction_type
FROM
adh.cm_dt_clicks
WHERE
user_id != '0'
{ID_filters} ) ),
user_level_data AS (
SELECT
user_id,
SUM(IF(interaction_type = 'imp',
1,
0)) AS impressions,
SUM(IF(interaction_type = 'click',
1,
0)) AS clicks
FROM
imp_u_clicks
GROUP BY
user_id)
SELECT
impressions AS frequency,
SUM(clicks) AS clicks,
SUM(impressions) AS impressions,
COUNT(*) AS reach
FROM
user_level_data
GROUP BY
1
ORDER BY
frequency ASC
.format(**dc)
print(query_text)
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_create_body = {
'name': query_name,
'title': query_name,
'queryText': query_text
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/'+ str(customer_id)).execute()
new_query_name = new_query["name"]
except HttpError as e:
print(e)
sys.exit(1)
print('New query %s for customer ID "%s":' % (new_query_name, customer_id))
print(json.dumps(new_query))
Explanation: Full Query
End of explanation
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table
CUSTOMER_ID = customer_id
DATASET_ID = dataset_id
QUERY_NAME = query_name
DEST_TABLE = destination_table_full_path
#Dates
format_str = '%Y-%m-%d' # The format
start_date_obj = datetime.datetime.strptime(start_date, format_str)
end_date_obj = datetime.datetime.strptime(end_date, format_str)
START_DATE = {
"year": start_date_obj.year,
"month": start_date_obj.month,
"day": start_date_obj.day
}
END_DATE = {
"year": end_date_obj.year,
"month": end_date_obj.month,
"day": end_date_obj.day
}
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_start_body = {
'spec': {
'startDate': START_DATE,
'endDate': END_DATE,
'adsDataCustomerId': DATASET_ID
},
'destTable': DEST_TABLE,
'customerId': CUSTOMER_ID
}
try:
# Execute the request.
operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=new_query_name).execute()
except HttpError as e:
print(e)
sys.exit(1)
print('Running query with name "%s" via the following operation:' % query_name)
print(json.dumps(operation))
Explanation: Check your query exists
https://adsdatahub.google.com/u/0/#/jobs
Find your query in the my queries tab
Check and ensure your query is valid (there will be a green tick in the top right corner)
If your query is not valid hover over the red exclamation mark to see issues that need to be resolved
Step 4 - Run the query
Start the query
Pass the query into ADH using the full_circle_query service set up at the start
Pass in the dates, the destination table name in BigQuery and the customer ID
End of explanation
import time
statusDone = False
while statusDone is False:
print("waiting for the job to complete...")
updatedOperation = full_circle_query.operations().get(name=operation['name']).execute()
if updatedOperation.get('done') == True:
statusDone = True
time.sleep(5)
print("Job completed... Getting results")
#run bigQuery query
dc = {}
dc['table'] = big_query_dataset + '.' + big_query_destination_table
q1 = '''
select * from {table}
'''.format(**dc)
Explanation: Retrieve the results from BigQuery
Check to make sure the query has finished running and is saved in the new BigQuery table
When it is done we can retrieve it
End of explanation
# Run the query and save the result as a table (also known as a dataframe)
df = pd.io.gbq.read_gbq(q1, project_id=big_query_project, dialect='standard', reauth=True)
print(df)
Explanation: We are using the pandas library to run the query.
We pass in the query (q), the project id and set the SQL language to 'standard' (as opposed to legacy SQL)
End of explanation
# Save the original dataframe as a csv file in case you need to recover the original data
df.to_csv('base_final_user.csv', index=False)
Explanation: Save the output as a CSV
End of explanation
df = df[1:max_freq+1] # Reduces the dataframe to have the size you set as the maximum frequency (max_freq)
df = df_calc_fields(df)
df2=df.copy() # Copy the dataframe you calculated the fields in case you need to recover it
graphs = [] # Variable to save all graphics
Explanation: Step 6 - Set up the data and all the charts that will be plotted
6.1 Transform data
Use the calculation function created above to calculate all the values based on your data
End of explanation
# Save all data into a list to plot the graphics
impressions = dict(type='bar', x=df.frequency, y=df.impressions,
name='impressions',
marker=dict(color='rgb(0, 29, 255)',
line=dict(width=1)))
ctr = dict(
type='scatter',
x=df.frequency,
y=df.ctr,
name='ctr',
marker=dict(color='rgb(255, 148, 0)', line=dict(width=1)),
xaxis='x1',
yaxis='y2',
)
layout = dict(
title='Impressions and CTR Comparison on Each Frequency',
autosize=True,
legend=dict(x=1.15, y=1),
hovermode='x',
xaxis=dict(tickangle=-45, autorange=True, tickfont=dict(size=10),
title='frequency', type='category'),
yaxis=dict(showgrid=True, title='impressions'),
yaxis2=dict(overlaying='y', anchor='x', side='right',
showgrid=False, title='ctr'),
)
fig = dict(data=[impressions, ctr], layout=layout)
graphs.append(fig)
clicks = dict(type='bar',
x= df.frequency,
y= df.clicks,
name='Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ctr = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Clicks and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[clicks, ctr], layout=layout)
graphs.append(fig)
ctr = dict(type='scatter',
x= df.frequency,
y= df.ctr,
name='ctr',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='CTR and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category',
showgrid =False
),
yaxis=dict(
showgrid=False,
title= 'ctr'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[ctr, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cpc,
name='cpc',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Clicks and CPC Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cum clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cpc'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.ctr,
name='ctr',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Clicks and CTR Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cum clicks'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'ctr'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_reach,
name='Cumulative % Reach',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
cpc = dict(type='scatter',
x= df.frequency,
y= df.cost,
name='cost',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y2'
)
layout = dict(autosize= True,
title='Cumulative Reach and Cost Comparison on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cumulative reach'
),
yaxis2=dict(
overlaying= 'y',
anchor= 'x',
side= 'right',
showgrid= False,
title= 'cost'
)
)
fig = dict(data=[pareto, cpc], layout=layout)
graphs.append(fig)
Explanation: Analysis 1: Frequency Analysis by user
Step 1: Set up graphs
End of explanation
# Show the first 5 rows of the dataframe (data matrix) with the final data
df.head()
# Export the whole dataframe to a csv file that can be used in an external environment
df.to_csv('freq_analysis.csv', index=False)
Explanation: Step 2: Export all the data (optional)
End of explanation
enable_plotly_in_cell()
iplot(graphs[0])
Explanation: Output: Visualise the data
Impression and CTR on each frequency
Clicks and CPC Comparison on Each Frequency
CTR and CPC Comparison on Each Frequency
Cumulative Clicks and CPC Comparison on Each Frequency
Cumulative Clicks and CTR Comparison on Each Frequency
Impression and CTR on each frequency
Consider your frequency range and ensure frequency management is in place.
Where is your CTR floor? At what point does your CTR drop below a level that you care about?
Determine what the wasted impressions are if you don't change your frequency.
End of explanation
enable_plotly_in_cell()
iplot(graphs[1])
Explanation: Clicks and CPC Comparison on Each Frequency
What is your CPC ceiling?
Understand what the frequency is at that level
Determine what impact changing your frequency will have on clicks
End of explanation
enable_plotly_in_cell()
iplot(graphs[2])
Explanation: CTR and CPC Comparison on Each Frequency
How do your CTR and CPC impact each other?
Make an informed decision regarding suitable goals
End of explanation
enable_plotly_in_cell()
iplot(graphs[3])
Explanation: Cumulative Clicks and CPC Comparison on Each Frequency
Understand what a suitable CPC goal might be
1. What is the change in cost for increased clicks?
2. What are the incremental gains for an increased cost?
End of explanation
enable_plotly_in_cell()
iplot(graphs[4])
Explanation: Cumulative Clicks and CTR Comparison on Each Frequency
At what frequency does your CTR drop below an acceptable value?
End of explanation
#Understand the logic behind calculation
graphs2 = []
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_reach,
name='Cumulative % Reach',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ccm_imp = dict(type='scatter',
x= df.frequency,
y= df.coverage_impressions,
name='Cumulative % Impressions',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y'
)
layout = dict(autosize= True,
title='Cumulative Impressions and Cumulative Reach on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cumulative %'
)
)
fig = dict(data=[pareto, ccm_imp], layout=layout)
graphs2.append(fig)
pareto = dict(type='scatter',
x= df.frequency,
y= df.coverage_clicks,
name='Cumulative % Clicks',
marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1))
)
ccm_imp = dict(type='scatter',
x= df.frequency,
y= df.coverage_impressions,
name='Cumulative % Impressions',
marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)),
xaxis='x1',
yaxis='y'
)
layout = dict(autosize= True,
title='Cumulative Impressions and Cumulative Clicks on Each Frequency',
legend= dict(x= 1.15,
y= 1
),
hovermode='x',
xaxis=dict(tickangle= -45,
autorange=True,
tickfont=dict(size= 10),
title= 'frequency',
type= 'category'
),
yaxis=dict(
showgrid=True,
title= 'cumulative %'
)
)
fig = dict(data=[pareto, ccm_imp], layout=layout)
graphs2.append(fig)
Explanation: Analysis 2: Understanding optimal frequency
Step 1: Set up charts
End of explanation
enable_plotly_in_cell()
iplot(graphs2[0])
Explanation: Output: Visualise the results
Cumulative Impressions and Cumulative Reach on Each Frequency
How do you maximise your reach without drastically increasing your impressions?
To obtain my reach goals, what frequency do I need at what impression cost?
With higher frequency caps you will need more impressions to maximise your reach
End of explanation
enable_plotly_in_cell()
iplot(graphs2[1])
Explanation: Cumulative Impressions and Cumulative Clicks on Each Frequency
To obtain my goals in terms of clicks, what frequency do I need, at what impression cost?
End of explanation
#@title 1.1 - Optimal Frequency
optimal_freq = 3 #@param {type:"integer", allow-input: true}
Explanation: Analysis 3: Determine impressions outside optimal frequency
Step 1: Define parameter to be the Optimal Frequency
The parameter below guides the analysis of media loss in terms of impressions: we will calculate the percentage of impressions served beyond the number you set as the optimal frequency.
End of explanation
from __future__ import division
df2 = df_calc_fields(df2)
df_opt, df_not_opt = df[1:optimal_freq+1], df[optimal_freq+1:]
total_impressions = list(df2.cumulative_impressions)[-1]
total_imp_not_opt = list(df_not_opt.cumulative_impressions)[-1] - list(df_opt.cumulative_impressions)[-1]
imp_not_opt_ratio = total_imp_not_opt / total_impressions
total_clicks = list(df2.cumulative_clicks)[-1]
total_clicks_not_opt = list(df_not_opt.cumulative_clicks)[-1] - list(df_opt.cumulative_clicks)[-1]
clicks_within_opt_ratio = 1-(total_clicks_not_opt / total_clicks)
print("{:.1f}% of your total impressions are out of the optimal frequency.".format(imp_not_opt_ratio*100))
print("{:,} of your impressions are out of the optimal frequency".format(total_imp_not_opt))
print("At a CPM of {} - preventing these would result in a cost saving of {:,.2f}".format(cpm, cpm * total_imp_not_opt / 1000))  # CPM is the cost per 1,000 impressions
print("")
print("If you limited frequency to {}, you would still achieve {:.1f}% of your clicks".format(optimal_freq, clicks_within_opt_ratio*100))
Explanation: Output: Calculate impression loss
End of explanation |
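The cumulative-difference logic in the cell above can be sanity-checked with toy numbers (these figures are made up for illustration):

```python
# Toy example: frequencies 1..5, 100 impressions each, optimal frequency 3.
toy_cumulative = [100, 200, 300, 400, 500]   # cumulative impressions by frequency
toy_optimal = 3
total = float(toy_cumulative[-1])
beyond_optimal = toy_cumulative[-1] - toy_cumulative[toy_optimal - 1]
ratio = beyond_optimal / total
print("{:.1f}% of impressions fall beyond frequency {}".format(ratio * 100, toy_optimal))
# -> 40.0% of impressions fall beyond frequency 3
```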
8,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark RDD basic manipulation
RDD creation
We create a simple RDD by paralellizing a collection local to the driver (usually they would be created by fetching from external sources or reading from files).
Step1: Transformations & actions
We want to compute $\sum_{i=0}^{499} cos(2i+1) $ using the numbers RDD we have just created. We are going to make it a bit convoluted, just to be able to chain transformations; in practice we could do it in a more direct way.
First we start by taking only the odd numbers
Step2: Now we compute the cosine of each number. We could use a map with
Python
lambda x
Step3: Finally we sum all values. Again, we could use a lambda function such as
Python
lambda a,b
Step4: The many forms of mapping
map vs. flatMap
We create a small RDD
Step5: Now we do a classic map+reduce to sum its squared values
Step6: Now we try with flatMap. First let's do it wrong
Step7: Let's do it right
Step8: So, why should we use flatMap? Because we can create several rows (including zero) out of each input RDD rows
Step9: map vs. mapPartitions
We repeat the same operation as above, but using mapPartitions. This time is different
Step10: mapPartitions vs. mapPartitionsWithIndex
For a final twist, mapPartitionsWithIndex works the same as mapPartitions, but our function will receive two arguments | Python Code:
# All numbers from 0 to 1000. Split in 4 partitions
numbers = sc.parallelize( range(0,1001), 4 )
print numbers.getNumPartitions()
print numbers.count()
print numbers.take(10)
Explanation: Spark RDD basic manipulation
RDD creation
We create a simple RDD by parallelizing a collection local to the driver (usually they would be created by fetching from external sources or reading from files).
End of explanation
# Transformation: take only the odd numbers
odd = numbers.filter( lambda x : x % 2 )
odd.take(10) # action
Explanation: Transformations & actions
We want to compute $\sum_{i=0}^{499} cos(2i+1) $ using the numbers RDD we have just created. We are going to make it a bit convoluted, just to be able to chain transformations; in practice we could do it in a more direct way.
First we start by taking only the odd numbers: the list of odd numbers from 0 to 1000 is the same as the list of $(2i+1)$ when $ i \in [0,499] $
End of explanation
# Transformation: compute the cosine of each number
from math import cos
odd_cosine = odd.map( cos )
odd_cosine.take(10) # action
Explanation: Now we compute the cosine of each number. We could use a map with
Python
lambda x : cos(x)
but in this case, since it's just calling a function, we use the function directly:
End of explanation
# Action: sum all values
from operator import add
result = odd_cosine.reduce( add )
print result
Explanation: Finally we sum all values. Again, we could use a lambda function such as
Python
lambda a,b : a+b
but the operator module already defines the add function for us, so we just use it.
Note this is an action, therefore it is the one that triggers the stage computation; the previous transformations didn't produce results (that's in theory; in practice, since we executed take, we forced realization of the operations)
End of explanation
a = sc.parallelize( xrange(20), 4 )
Explanation: The many forms of mapping
map vs. flatMap
We create a small RDD:
End of explanation
b1 = a.map( lambda x : x*x )
from operator import add
result1 = b1.reduce( add )
print result1
Explanation: Now we do a classic map+reduce to sum its squared values:
End of explanation
b2 = a.flatMap( lambda x : x*x )
# This will trigger an error
b2.take(1)
Explanation: Now we try with flatMap. First let's do it wrong:
End of explanation
# Ensure flatMap returns a list, even if it's a list of 1
b2 = a.flatMap( lambda x : [x*x] )
result2 = b2.reduce( add )
print result2
result2 == result1
Explanation: Let's do it right: flatMap must produce a list. Even if it's a list of 1 element (or 0)
End of explanation
b2b = a.flatMap( lambda x : [x, x*x] )
b2b.take(6)
Explanation: So, why should we use flatMap? Because we can create several rows (including zero) out of each input RDD rows
End of explanation
# In Python, the easiest way of returning an iterator is by creating
# a generator function via yield
def mapper( it ):
for n in it:
yield n*n
# Now we have the function, let's use it
b3 = a.mapPartitions( mapper )
result3 = b3.reduce( add )
print result3
result3 == result1
Explanation: map vs. mapPartitions
We repeat the same operation as above, but using mapPartitions. This time is different: our function will not receive an element, but a whole partition (actually an iterator over its elements). We must iterate over it and return another iterator over the result of our computation.
Admittedly, to use mapPartitions for this operation does not make much sense. But in general it might be handy to have access in our function to all the elements in a partition.
End of explanation
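One case where mapPartitions genuinely pays off is emitting a single partial result per partition (for example, a per-partition sum), so the reduce step only combines a handful of values. A sketch; since the function just receives an ordinary iterator, it can be exercised without a cluster:

```python
def partition_sum(it):
    # Emit exactly one value per partition: the sum of its elements.
    total = 0
    for n in it:
        total += n
    yield total

# With Spark this would be: a.mapPartitions( partition_sum ).reduce( add )
# The same generator works on any iterable:
print(list(partition_sum(iter([1, 2, 3]))))  # [6]
```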
# In Python, the easiest way of returning an iterator is by creating
# a generator function via yield
def mapper( partitionIndex, it ):
for n in it:
yield n*n
# Now we have the function, let's use it
b4 = a.mapPartitionsWithIndex( mapper )
result4 = b4.reduce( add )
print result4
result4 == result1
Explanation: mapPartitions vs. mapPartitionsWithIndex
For a final twist, mapPartitionsWithIndex works the same as mapPartitions, but our function will receive two arguments: the iterator over the elements of the partition, as before, and the index of the partition, i.e. an integer in $[0,numPartitions)$. So we can know which partition we are in when processing its elements.
End of explanation |
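The mapper above accepts the partition index but never uses it. A variant that does use it, tagging every element with the partition it came from (the Spark call is shown as a comment; the generator itself runs on any plain iterator):

```python
def tag_with_partition(partitionIndex, it):
    # Pair each element with the index of the partition it came from.
    for n in it:
        yield (partitionIndex, n)

# With Spark this would be: a.mapPartitionsWithIndex( tag_with_partition ).take(6)
print(list(tag_with_partition(2, iter([7, 8]))))  # [(2, 7), (2, 8)]
```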
8,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 9 – Up and running with TensorFlow
This notebook contains all the sample code and solutions to the exercices in chapter 9.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Linear Regression
Using the Normal Equation
Step2: Compare with pure NumPy
Step3: Compare with Scikit-Learn
Step4: Using Batch Gradient Descent
Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
Step5: Manually computing the gradients
Step6: Using autodiff
Same as above except for the gradients = ... line
Step7: How could you find the partial derivatives of the following function with regards to a and b?
Step8: Let's compute the function at $a=0.2$ and $b=0.3$, and the partial derivatives at that point with regards to $a$ and with regards to $b$
Step9: Using a GradientDescentOptimizer
Step10: Using a momentum optimizer
Step11: Feeding data to the training algorithm
Placeholder nodes
Step12: Mini-batch Gradient Descent
Step13: Saving and restoring a model
Step14: If you want to have a saver that loads and restores theta with a different name, such as "weights"
Step15: By default the saver also saves the graph structure itself in a second file with the extension .meta. You can use the function tf.train.import_meta_graph() to restore the graph structure. This function loads the graph into the default graph and returns a Saver that can then be used to restore the graph state (i.e., the variable values)
Step20: This means that you can import a pretrained model without having to have the corresponding Python code to build the graph. This is very handy when you keep tweaking and saving your model
Step21: Using TensorBoard
Step22: Name scopes
Step23: Modularity
An ugly flat code
Step24: Much better, using a function to build the ReLUs
Step25: Even better using name scopes
Step26: Sharing Variables
Sharing a threshold variable the classic way, by defining it outside of the relu() function then passing it as a parameter
Step27: Extra material
Step28: The first variable_scope() block first creates the shared variable x0, named my_scope/x. For all operations other than shared variables (including non-shared variables), the variable scope acts like a regular name scope, which is why the two variables x1 and x2 have a name with a prefix my_scope/. Note however that TensorFlow makes their names unique by adding an index
Step29: Implementing a Home-Made Computation Graph
Step30: Computing gradients
Mathematical differentiation
Step31: Numerical differentiation
Step32: Symbolic differentiation
Step33: Automatic differentiation (autodiff) – forward mode
Step34: $3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
Step35: $(3 + 4ε)\times(5 + 7ε) = 3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε = 15 + 21ε + 20ε + 28ε^2 = 15 + 41ε + 28 \times 0 = 15 + 41ε$
Step36: Autodiff – Reverse mode
Step37: Autodiff – reverse mode (using TensorFlow) | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
import tensorflow as tf
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "tensorflow"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 9 – Up and running with TensorFlow
This notebook contains all the sample code and solutions to the exercises in chapter 9.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
beetles_full = pd.read_csv('beetleTrainingData.csv')
# beetles = beetles_full.drop(['accuracy_num', 'accuracy_txt'], axis=1).as_matrix()
beetles_test_train = beetles_full[beetles_full.columns[~beetles_full.columns.str.contains('_RA')]].drop(['accuracy_num', 'accuracy_txt'], axis=1).as_matrix()
# beetles_target = beetles_full['accuracy'].as_matrix()
beetles_target_test_train = beetles_full.accuracy_txt.apply(lambda a: int(a == 'correct')).as_matrix()
# beetles_target = beetles_full.accuracy_num.as_matrix()
pca = PCA(n_components = 50)
beetles_pca_test_train = pca.fit_transform(beetles_test_train)
beetles_pca, beetles_test, beetles_target, beetles_target_test = train_test_split(
beetles_pca_test_train, beetles_target_test_train, test_size=0.33, random_state=42)
m, n = beetles_pca.shape
beetles_plus_bias = np.c_[np.ones((m, 1)), beetles_pca]
tm, tn = beetles_test.shape
beetles_test_plus_bias = np.c_[np.ones((tm, 1)), beetles_test]
X = beetles_plus_bias
np.linalg.inv(X.T.dot(X))
pca.components_
reset_graph()
X = tf.constant(beetles_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)
with tf.Session() as sess:
theta_value = theta.eval()
theta_value
# y_predict = X.dot(theta_best)
y_predict = tf.reduce_sum(tf.multiply(tf.transpose(theta_value), X), axis=1)  # sum per row reproduces X.dot(theta)
y_predict
with tf.Session() as sess:
print(y_predict.eval())
# y_predict = X.dot(theta_best)
y_predict = tf.reduce_sum(tf.multiply(tf.transpose(theta_value), beetles_test_plus_bias), axis=1)  # per-row predictions on the test set
with tf.Session() as sess:
print(y_predict.eval())
y_predict
Explanation: Linear Regression
Using the Normal Equation
End of explanation
X = beetles_plus_bias
y = beetles_target.reshape(-1, 1)
theta_numpy = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
print(theta_numpy)
Explanation: Compare with pure NumPy
End of explanation
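Inverting X^T X can be numerically fragile when features are correlated; np.linalg.lstsq solves the same least-squares problem more stably. A quick check on synthetic data (not the beetle dataset):

```python
import numpy as np

rng = np.random.RandomState(0)
X_demo = np.c_[np.ones((20, 1)), rng.rand(20, 2)]
true_theta = np.array([[1.0], [2.0], [3.0]])
y_demo = X_demo.dot(true_theta)

# Normal equation (as above) vs. the numerically safer lstsq solver.
theta_normal = np.linalg.inv(X_demo.T.dot(X_demo)).dot(X_demo.T).dot(y_demo)
theta_lstsq = np.linalg.lstsq(X_demo, y_demo, rcond=None)[0]
print(np.allclose(theta_normal, theta_lstsq))  # True
```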
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(beetles_pca, beetles_target.reshape(-1, 1))
print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
Explanation: Compare with Scikit-Learn
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaled_beetles_data = scaler.fit_transform(beetles_pca)
scaled_beetles_data_plus_bias = np.c_[np.ones((m, 1)), scaled_beetles_data]
print(scaled_beetles_data_plus_bias.mean(axis=0))
print(scaled_beetles_data_plus_bias.mean(axis=1))
print(scaled_beetles_data_plus_bias.mean())
print(scaled_beetles_data_plus_bias.shape)
Explanation: Using Batch Gradient Descent
Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
End of explanation
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_beetles_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
best_theta
Explanation: Manually computing the gradients
End of explanation
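The hand-derived gradient 2/m * X^T (X theta - y) used above can be checked against finite differences on a small random problem (pure NumPy, independent of the beetle data):

```python
import numpy as np

rng = np.random.RandomState(42)
Xs = rng.rand(5, 3)
ys = rng.rand(5, 1)
th = rng.rand(3, 1)

def mse_np(t):
    e = Xs.dot(t) - ys
    return float((e ** 2).mean())

# Analytic gradient: d(MSE)/d(theta) = 2/m * X^T (X theta - y)
analytic = 2.0 / Xs.shape[0] * Xs.T.dot(Xs.dot(th) - ys)

# Central finite differences, one coordinate at a time.
eps = 1e-6
numeric = np.zeros_like(th)
for i in range(th.shape[0]):
    d = np.zeros_like(th)
    d[i, 0] = eps
    numeric[i, 0] = (mse_np(th + d) - mse_np(th - d)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```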
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_beetles_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = tf.gradients(mse, [theta])[0]
training_op = tf.assign(theta, theta - learning_rate * gradients)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
Explanation: Using autodiff
Same as above except for the gradients = ... line:
End of explanation
def my_func(a, b):
z = 0
for i in range(100):
z = a * np.cos(z + i) + z * np.sin(b - i)
return z
my_func(0.2, 0.3)
reset_graph()
a = tf.Variable(0.2, name="a")
b = tf.Variable(0.3, name="b")
z = tf.constant(0.0, name="z0")
for i in range(100):
z = a * tf.cos(z + i) + z * tf.sin(b - i)
grads = tf.gradients(z, [a, b])
init = tf.global_variables_initializer()
Explanation: How could you find the partial derivatives of the following function with regards to a and b?
End of explanation
with tf.Session() as sess:
init.run()
print(z.eval())
print(sess.run(grads))
Explanation: Let's compute the function at $a=0.2$ and $b=0.3$, and the partial derivatives at that point with regards to $a$ and with regards to $b$:
End of explanation
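The gradients that tf.gradients returns can be cross-checked the same way, with central differences. A generic helper, demonstrated on a function with known partials (f(a, b) = a*cos(b), so df/da = cos(b) and df/db = -a*sin(b)); applying it to my_func at (0.2, 0.3) works identically:

```python
import math

def central_diff(f, a, b, eps=1e-6):
    # Numerical partial derivatives of f(a, b) via central differences.
    da = (f(a + eps, b) - f(a - eps, b)) / (2 * eps)
    db = (f(a, b + eps) - f(a, b - eps)) / (2 * eps)
    return da, db

f = lambda a, b: a * math.cos(b)
da, db = central_diff(f, 0.2, 0.3)
print(abs(da - math.cos(0.3)) < 1e-6)         # True
print(abs(db + 0.2 * math.sin(0.3)) < 1e-6)   # True
```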
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_beetles_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval())
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
Explanation: Using a GradientDescentOptimizer
End of explanation
reset_graph()
n_epochs = 1000
learning_rate = 0.01
X = tf.constant(scaled_beetles_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=0.9)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
sess.run(training_op)
best_theta = theta.eval()
print("Best theta:")
print(best_theta)
Explanation: Using a momentum optimizer
End of explanation
reset_graph()
A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5
with tf.Session() as sess:
B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
print(B_val_1)
print(B_val_2)
Explanation: Feeding data to the training algorithm
Placeholder nodes
End of explanation
n_epochs = 1000
learning_rate = 0.01
reset_graph()
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
def fetch_batch(epoch, batch_index, batch_size):
np.random.seed(epoch * n_batches + batch_index) # not shown in the book
indices = np.random.randint(m, size=batch_size) # not shown
X_batch = scaled_beetles_data_plus_bias[indices] # not shown
y_batch = beetles_target.reshape(-1, 1)[indices] # not shown
return X_batch, y_batch
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
best_theta
Explanation: Mini-batch Gradient Descent
End of explanation
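Note that fetch_batch samples indices with replacement, so within one epoch some rows may be seen twice and others not at all. An epoch-shuffling alternative that visits every row exactly once per epoch (a common variant, not the book's code):

```python
import numpy as np

def shuffled_batches(n_rows, batch_size, seed=0):
    # Yield index arrays that partition [0, n_rows) in a random order,
    # so every row is visited exactly once per epoch.
    rng = np.random.RandomState(seed)
    order = rng.permutation(n_rows)
    for start in range(0, n_rows, batch_size):
        yield order[start:start + batch_size]

batches = list(shuffled_batches(10, 4))
print([len(b) for b in batches])  # [4, 4, 2]
```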
reset_graph()
n_epochs = 1000 # not shown in the book
learning_rate = 0.01 # not shown
X = tf.constant(scaled_beetles_data_plus_bias, dtype=tf.float32, name="X") # not shown
y = tf.constant(beetles_target.reshape(-1, 1), dtype=tf.float32, name="y") # not shown
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions") # not shown
error = y_pred - y # not shown
mse = tf.reduce_mean(tf.square(error), name="mse") # not shown
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) # not shown
training_op = optimizer.minimize(mse) # not shown
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
if epoch % 100 == 0:
print("Epoch", epoch, "MSE =", mse.eval()) # not shown
save_path = saver.save(sess, "/tmp/my_model.ckpt")
sess.run(training_op)
best_theta = theta.eval()
save_path = saver.save(sess, "/tmp/my_model_final.ckpt")
best_theta
with tf.Session() as sess:
saver.restore(sess, "/tmp/my_model_final.ckpt")
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
Explanation: Saving and restoring a model
End of explanation
saver = tf.train.Saver({"weights": theta})
Explanation: If you want to have a saver that loads and restores theta with a different name, such as "weights":
End of explanation
reset_graph()
# notice that we start with an empty graph.
saver = tf.train.import_meta_graph("/tmp/my_model_final.ckpt.meta") # this loads the graph structure
theta = tf.get_default_graph().get_tensor_by_name("theta:0") # not shown in the book
with tf.Session() as sess:
saver.restore(sess, "/tmp/my_model_final.ckpt") # this restores the graph's state
best_theta_restored = theta.eval() # not shown in the book
np.allclose(best_theta, best_theta_restored)
Explanation: By default the saver also saves the graph structure itself in a second file with the extension .meta. You can use the function tf.train.import_meta_graph() to restore the graph structure. This function loads the graph into the default graph and returns a Saver that can then be used to restore the graph state (i.e., the variable values):
End of explanation
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
Explanation: This means that you can import a pretrained model without having to have the corresponding Python code to build the graph. This is very handy when you keep tweaking and saving your model: you can load a previously saved model without having to search for the version of the code that built it.
Visualizing the graph
inside Jupyter
End of explanation
reset_graph()
from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess: # not shown in the book
sess.run(init) # not shown
for epoch in range(n_epochs): # not shown
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval() # not shown
file_writer.close()
best_theta
Explanation: Using TensorBoard
End of explanation
reset_graph()
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs"
logdir = "{}/run-{}/".format(root_logdir, now)
n_epochs = 1000
learning_rate = 0.01
X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
with tf.name_scope("loss") as scope:
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
init = tf.global_variables_initializer()
mse_summary = tf.summary.scalar('MSE', mse)
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
n_epochs = 10
batch_size = 100
n_batches = int(np.ceil(m / batch_size))
with tf.Session() as sess:
sess.run(init)
for epoch in range(n_epochs):
for batch_index in range(n_batches):
X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
if batch_index % 10 == 0:
summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
step = epoch * n_batches + batch_index
file_writer.add_summary(summary_str, step)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
best_theta = theta.eval()
file_writer.flush()
file_writer.close()
print("Best theta:")
print(best_theta)
print(error.op.name)
print(mse.op.name)
reset_graph()
a1 = tf.Variable(0, name="a") # name == "a"
a2 = tf.Variable(0, name="a") # name == "a_1"
with tf.name_scope("param"): # name == "param"
a3 = tf.Variable(0, name="a") # name == "param/a"
with tf.name_scope("param"): # name == "param_1"
a4 = tf.Variable(0, name="a") # name == "param_1/a"
for node in (a1, a2, a3, a4):
print(node.op.name)
Explanation: Name scopes
End of explanation
reset_graph()
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
w1 = tf.Variable(tf.random_normal((n_features, 1)), name="weights1")
w2 = tf.Variable(tf.random_normal((n_features, 1)), name="weights2")
b1 = tf.Variable(0.0, name="bias1")
b2 = tf.Variable(0.0, name="bias2")
z1 = tf.add(tf.matmul(X, w1), b1, name="z1")
z2 = tf.add(tf.matmul(X, w2), b2, name="z2")
relu1 = tf.maximum(z1, 0., name="relu1")
relu2 = tf.maximum(z1, 0., name="relu2") # Oops, cut&paste error! Did you spot it?
output = tf.add(relu1, relu2, name="output")
Explanation: Modularity
An ugly flat code:
End of explanation
reset_graph()
def relu(X):
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, 0., name="relu")
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu1", tf.get_default_graph())
Explanation: Much better, using a function to build the ReLUs:
End of explanation
reset_graph()
def relu(X):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, 0., name="max") # not shown
n_features = 3
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu2", tf.get_default_graph())
file_writer.close()
Explanation: Even better using name scopes:
End of explanation
reset_graph()
def relu(X, threshold):
with tf.name_scope("relu"):
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
threshold = tf.Variable(0.0, name="threshold")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X, threshold) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
def relu(X):
with tf.name_scope("relu"):
if not hasattr(relu, "threshold"):
relu.threshold = tf.Variable(0.0, name="threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, relu.threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = [relu(X) for i in range(5)]
output = tf.add_n(relus, name="output")
reset_graph()
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
with tf.variable_scope("relu") as scope:
scope.reuse_variables()
threshold = tf.get_variable("threshold")
reset_graph()
def relu(X):
with tf.variable_scope("relu", reuse=True):
threshold = tf.get_variable("threshold")
w_shape = int(X.get_shape()[1]), 1 # not shown
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
relus = [relu(X) for relu_index in range(5)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu6", tf.get_default_graph())
file_writer.close()
reset_graph()
def relu(X):
with tf.variable_scope("relu"):
threshold = tf.get_variable("threshold", shape=(), initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1)
w = tf.Variable(tf.random_normal(w_shape), name="weights")
b = tf.Variable(0.0, name="bias")
z = tf.add(tf.matmul(X, w), b, name="z")
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
with tf.variable_scope("", default_name="") as scope:
first_relu = relu(X) # create the shared variable
scope.reuse_variables() # then reuse it
relus = [first_relu] + [relu(X) for i in range(4)]
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu8", tf.get_default_graph())
file_writer.close()
reset_graph()
def relu(X):
threshold = tf.get_variable("threshold", shape=(),
initializer=tf.constant_initializer(0.0))
w_shape = (int(X.get_shape()[1]), 1) # not shown in the book
w = tf.Variable(tf.random_normal(w_shape), name="weights") # not shown
b = tf.Variable(0.0, name="bias") # not shown
z = tf.add(tf.matmul(X, w), b, name="z") # not shown
return tf.maximum(z, threshold, name="max")
X = tf.placeholder(tf.float32, shape=(None, n_features), name="X")
relus = []
for relu_index in range(5):
with tf.variable_scope("relu", reuse=(relu_index >= 1)) as scope:
relus.append(relu(X))
output = tf.add_n(relus, name="output")
file_writer = tf.summary.FileWriter("logs/relu9", tf.get_default_graph())
file_writer.close()
Explanation: Sharing Variables
Sharing a threshold variable the classic way, by defining it outside of the relu() function then passing it as a parameter:
End of explanation
reset_graph()
with tf.variable_scope("my_scope"):
x0 = tf.get_variable("x", shape=(), initializer=tf.constant_initializer(0.))
x1 = tf.Variable(0., name="x")
x2 = tf.Variable(0., name="x")
with tf.variable_scope("my_scope", reuse=True):
x3 = tf.get_variable("x")
x4 = tf.Variable(0., name="x")
with tf.variable_scope("", default_name="", reuse=True):
x5 = tf.get_variable("my_scope/x")
print("x0:", x0.op.name)
print("x1:", x1.op.name)
print("x2:", x2.op.name)
print("x3:", x3.op.name)
print("x4:", x4.op.name)
print("x5:", x5.op.name)
print(x0 is x3 and x3 is x5)
Explanation: Extra material
End of explanation
reset_graph()
text = np.array("Do you want some café?".split())
text_tensor = tf.constant(text)
with tf.Session() as sess:
print(text_tensor.eval())
Explanation: The first variable_scope() block first creates the shared variable x0, named my_scope/x. For all operations other than shared variables (including non-shared variables), the variable scope acts like a regular name scope, which is why the two variables x1 and x2 have a name with a prefix my_scope/. Note however that TensorFlow makes their names unique by adding an index: my_scope/x_1 and my_scope/x_2.
The second variable_scope() block reuses the shared variables in scope my_scope, which is why x0 is x3. Once again, for all operations other than shared variables it acts as a named scope, and since it's a separate block from the first one, the name of the scope is made unique by TensorFlow (my_scope_1) and thus the variable x4 is named my_scope_1/x.
The third block shows another way to get a handle on the shared variable my_scope/x by creating a variable_scope() at the root scope (whose name is an empty string), then calling get_variable() with the full name of the shared variable (i.e. "my_scope/x").
Strings
End of explanation
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, init_value, name):
self.value = init_value
self.name = name
def evaluate(self):
return self.value
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
return self.a.evaluate() + self.b.evaluate()
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
return self.a.evaluate() * self.b.evaluate()
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
x = Var(3, name="x")
y = Var(4, name="y")
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
print("f(x,y) =", f)
print("f(3,4) =", f.evaluate())
Explanation: Implementing a Home-Made Computation Graph
End of explanation
df_dx = Mul(Const(2), Mul(x, y)) # df/dx = 2xy
df_dy = Add(Mul(x, x), Const(1)) # df/dy = x² + 1
print("df/dx(3,4) =", df_dx.evaluate())
print("df/dy(3,4) =", df_dy.evaluate())
Explanation: Computing gradients
Mathematical differentiation
End of explanation
def gradients(func, vars_list, eps=0.0001):
partial_derivatives = []
base_func_eval = func.evaluate()
for var in vars_list:
original_value = var.value
var.value = var.value + eps
tweaked_func_eval = func.evaluate()
var.value = original_value
derivative = (tweaked_func_eval - base_func_eval) / eps
partial_derivatives.append(derivative)
return partial_derivatives
df_dx, df_dy = gradients(f, [x, y])
print("df/dx(3,4) =", df_dx)
print("df/dy(3,4) =", df_dy)
Explanation: Numerical differentiation
End of explanation
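As an aside (ours, not the notebook's): the one-sided difference used above has truncation error on the order of eps, while a central difference improves this to eps². A standalone sketch on the same f(x, y) = x²y + y + 2:

```python
def central_diff(f, x, y, eps=1e-5):
    # Central differences: O(eps**2) error vs O(eps) for the one-sided scheme.
    df_dx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    df_dy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return df_dx, df_dy

f = lambda x, y: x**2 * y + y + 2
print(central_diff(f, 3, 4))  # close to (24, 10)
```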
Const.derive = lambda self, var: Const(0)
Var.derive = lambda self, var: Const(1) if self is var else Const(0)
Add.derive = lambda self, var: Add(self.a.derive(var), self.b.derive(var))
Mul.derive = lambda self, var: Add(Mul(self.a, self.b.derive(var)), Mul(self.a.derive(var), self.b))
x = Var(3.0, name="x")
y = Var(4.0, name="y")
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
df_dx = f.derive(x) # 2xy
df_dy = f.derive(y) # x² + 1
print("df/dx(3,4) =", df_dx.evaluate())
print("df/dy(3,4) =", df_dy.evaluate())
Explanation: Symbolic differentiation
End of explanation
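The same rule-driven idea can be shown standalone on an even simpler representation — polynomials as coefficient lists (an illustrative aside, not part of the notebook):

```python
def poly_derive(coeffs):
    # Symbolic d/dx of c[0] + c[1]*x + c[2]*x**2 + ...
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def poly_eval(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

p = [2, 0, 4]            # p(x) = 2 + 4x**2
dp = poly_derive(p)      # [0, 8], i.e. p'(x) = 8x
print(poly_eval(dp, 3))  # 24
```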
class DualNumber(object):
def __init__(self, value=0.0, eps=0.0):
self.value = value
self.eps = eps
def __add__(self, b):
return DualNumber(self.value + self.to_dual(b).value,
self.eps + self.to_dual(b).eps)
def __radd__(self, a):
return self.to_dual(a).__add__(self)
def __mul__(self, b):
return DualNumber(self.value * self.to_dual(b).value,
self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps)
def __rmul__(self, a):
return self.to_dual(a).__mul__(self)
def __str__(self):
if self.eps:
return "{:.1f} + {:.1f}ε".format(self.value, self.eps)
else:
return "{:.1f}".format(self.value)
def __repr__(self):
return str(self)
@classmethod
def to_dual(cls, n):
if hasattr(n, "value"):
return n
else:
return cls(n)
Explanation: Automatic differentiation (autodiff) – forward mode
End of explanation
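Forward mode can also be threaded through plain (value, derivative) pairs instead of a class — a minimal sketch of the same idea, again for f(x, y) = x²y + y + 2 (our aside):

```python
def dual_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def dual_mul(a, b):
    # the product rule lives in the epsilon slot
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def f(x, y):
    return dual_add(dual_mul(dual_mul(x, x), y), dual_add(y, (2.0, 0.0)))

value, df_dx = f((3.0, 1.0), (4.0, 0.0))  # seed eps on x
_, df_dy = f((3.0, 0.0), (4.0, 1.0))      # seed eps on y
print(value, df_dx, df_dy)                # 42.0 24.0 10.0
```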
3 + DualNumber(3, 4)
Explanation: $3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
End of explanation
DualNumber(3, 4) * DualNumber(5, 7)
x.value = DualNumber(3.0)
y.value = DualNumber(4.0)
f.evaluate()
x.value = DualNumber(3.0, 1.0) # 3 + ε
y.value = DualNumber(4.0) # 4
df_dx = f.evaluate().eps
x.value = DualNumber(3.0) # 3
y.value = DualNumber(4.0, 1.0) # 4 + ε
df_dy = f.evaluate().eps
df_dx
df_dy
Explanation: $(3 + 4ε)\times(5 + 7ε) = 3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε = 15 + 21ε + 20ε + 28ε^2 = 15 + 41ε + 28 \times 0 = 15 + 41ε$
End of explanation
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def backpropagate(self, gradient):
pass
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, init_value, name):
self.value = init_value
self.name = name
self.gradient = 0
def evaluate(self):
return self.value
def backpropagate(self, gradient):
self.gradient += gradient
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() + self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient)
self.b.backpropagate(gradient)
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() * self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient * self.b.value)
self.b.backpropagate(gradient * self.a.value)
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
x = Var(3, name="x")
y = Var(4, name="y")
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
result = f.evaluate()
f.backpropagate(1.0)
print("f(x,y) =", f)
print("f(3,4) =", result)
print("df_dx =", x.gradient)
print("df_dy =", y.gradient)
Explanation: Autodiff – Reverse mode
End of explanation
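A caveat worth noting (ours): Var.gradient is accumulated with +=, so a second backpropagate(1.0) call without zeroing the gradients doubles them — the same reason deep learning frameworks expose a zero-grad step. For contrast, here is a compact standalone tape-based formulation of the same reverse sweep:

```python
def reverse_ad():
    tape = []    # records (output_index, [(input_index, local_grad), ...])
    values = []

    def const(v):
        values.append(v)
        return len(values) - 1

    def add(i, j):
        values.append(values[i] + values[j])
        tape.append((len(values) - 1, [(i, 1.0), (j, 1.0)]))
        return len(values) - 1

    def mul(i, j):
        values.append(values[i] * values[j])
        tape.append((len(values) - 1, [(i, values[j]), (j, values[i])]))
        return len(values) - 1

    x, y = const(3.0), const(4.0)
    f = add(add(mul(mul(x, x), y), y), const(2.0))  # x*x*y + y + 2

    grads = [0.0] * len(values)
    grads[f] = 1.0
    for out, inputs in reversed(tape):              # one backward sweep
        for i, local in inputs:
            grads[i] += grads[out] * local
    return values[f], grads[x], grads[y]

print(reverse_ad())  # (42.0, 24.0, 10.0)
```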
reset_graph()
x = tf.Variable(3., name="x")
y = tf.Variable(4., name="y")
f = x*x*y + y + 2
gradients = tf.gradients(f, [x, y])
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
f_val, gradients_val = sess.run([f, gradients])
f_val, gradients_val
Explanation: Autodiff – reverse mode (using TensorFlow)
End of explanation |
8,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculating Transit Timing Variations (TTV) with REBOUND
The following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions.
First, let's import the REBOUND and numpy packages.
Step1: Let's set up a coplanar two planet system.
Step2: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to meassure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.
Step3: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.
Step4: Finally, let us plot the TTVs. | Python Code:
import rebound
import numpy as np
Explanation: Calculating Transit Timing Variations (TTV) with REBOUND
The following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions.
First, let's import the REBOUND and numpy packages.
End of explanation
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=1e-5, a=1,e=0.1,omega=0.25)
sim.add(m=1e-5, a=1.757)
sim.move_to_com()
Explanation: Let's set up a coplanar two planet system.
End of explanation
N=174
transittimes = np.zeros(N)
p = sim.particles
i = 0
while i<N:
y_old = p[1].y
t_old = sim.t
sim.integrate(sim.t+0.5) # check for transits every 0.5 time units. Note that 0.5 is shorter than one orbit
t_new = sim.t
if y_old*p[1].y<0. and p[1].x>0.: # sign changed (y_old*y<0), planet in front of star (x>0)
        while t_new-t_old>1e-7: # bisect until prec of 1e-7 reached
if y_old*p[1].y<0.:
t_new = sim.t
else:
t_old = sim.t
sim.integrate( (t_new+t_old)/2.)
transittimes[i] = sim.t
i += 1
sim.integrate(sim.t+0.05) # integrate 0.05 to be past the transit
Explanation: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to measure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection.
End of explanation
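The inner refinement is plain bisection on a sign change; the same idea for a generic function, as a standalone sketch (our illustration, independent of REBOUND):

```python
import math

def bisect_sign_change(f, a, b, tol=1e-7):
    # assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        mid = (a + b) / 2.0
        if fa * f(mid) < 0.0:   # sign change in the left half
            b = mid
        else:                   # sign change in the right half
            a = mid
            fa = f(mid)
    return (a + b) / 2.0

print(bisect_sign_change(math.cos, 1.0, 2.0))  # ~1.5707963 (pi/2)
```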
A = np.vstack([np.ones(N), range(N)]).T
c, m = np.linalg.lstsq(A, transittimes)[0]
Explanation: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations.
End of explanation
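For a single straight line the least-squares solution also has a closed form; a standalone sketch of the same fit (np.linalg.lstsq above is the general-purpose route):

```python
def fit_line(ns, ts):
    # least-squares c, m for ts ≈ c + m * ns
    n_mean = sum(ns) / len(ns)
    t_mean = sum(ts) / len(ts)
    m = (sum((n - n_mean) * (t - t_mean) for n, t in zip(ns, ts))
         / sum((n - n_mean) ** 2 for n in ns))
    return t_mean - m * n_mean, m

ns = list(range(5))
ts = [1.0 + 2.0 * n for n in ns]  # perfectly periodic "transit times"
c, m = fit_line(ns, ts)
print(c, m)                        # 1.0 2.0 -> zero residuals, zero TTV
```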
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
ax.set_xlim([0,N])
ax.set_xlabel("Transit number")
ax.set_ylabel("TTV [hours]")
plt.scatter(range(N), (transittimes-m*np.array(range(N))-c)*(24.*365./2./np.pi));
Explanation: Finally, let us plot the TTVs.
End of explanation |
8,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weights and Biases (wandb) Demo
In deep learning, we perform a lot of model training especially for novel neural architectures. The problem is deep learning frameworks like PyTorch do not provide sufficient tools to visualize input data, track the progress of our experiments, log data, and visualize the outputs.
wandb addresses this problem. In this demo, we will train a ResNet18 model from scratch. We show how to use wandb to visualize input data, prediction, and training progress using loss function value and accuracy.
Note
Step1: Import the required modules.
Step2: Login to and initialize wandb. You will need to use your wandb API key to run this demo.
As the config indicates, we will train our model using cifar10 dataset, learning rate of 0.1, and batch size of 128 for 100 epochs.
epochs means a complete sampling of the dataset (train). In the wandb plots, step is the term used instead of epoch.
batch size is the number of samples per training step.
Step3: Build the model
Use a ResNet18 from torchvision. Remove the last layer that was used for 1k-class ImageNet classification. Since we will use CIFAR10, the last layer is replaced by a linear layer with 10 outputs. We will train the model from scratch, so we set pretrained=False.
Step4: Loss function, Optimizer, Scheduler and DataLoader
The appropriate loss function is cross entropy for multi-category classfications. We use SGD or stochastic gradient descent for optimization. Our learning rate that starts at 0.1 decays to zero at the end of total number of epochs. The decay is controlled by a cosine learning rate decay scheduler.
Finally, we use cifar10 dataset that is available in torchvision. We will discuss datasets and dataloaders in our future demo. For the meantime, we can treat dataloader as a data strcuture that dispenses batch size data from either the train or test split of the dataset.
Step5: Visualizing sample data from test split
We can visualize data from the test split by getting a batch sample
Step6: The train loop
At every epoch, we will run the train loop for the model. At every iteration, we will get a batch of data from the train split. We will use the data to update the model parameters. We will use the loss function to calculate the loss value. We will use the optimizer to update the model parameters. We will use the scheduler to update the learning rate. Later, we will use the wandb table to visualize the loss and accuracy.
We use progress_bar to show the progress of the training.
Step7: The validation loop
After every epoch, we will run the validation loop for the model. In this way, we can track the progress of our model training. Both the average loss and accuracy are calculated. During training, we will use the wandb table to visualize the loss and accuracy.
Step8: wandb plots
Finally, we will use wandb to visualize the training progress. We will use the following plots
Step9: Load the best performing model
In the following code, we load the best performing model. The model is saved in ./resnet18_best_acc.pth. The average accuracy of the model is the same as the one in the previous section. | Python Code:
!pip install wandb
Explanation: Weights and Biases (wandb) Demo
In deep learning, we perform a lot of model training especially for novel neural architectures. The problem is deep learning frameworks like PyTorch do not provide sufficient tools to visualize input data, track the progress of our experiments, log data, and visualize the outputs.
wandb addresses this problem. In this demo, we will train a ResNet18 model from scratch. We show how to use wandb to visualize input data, prediction, and training progress using loss function value and accuracy.
Note: Before running this demo, please make sure that you have wandb.ai free account.
Let us install wandb.
End of explanation
import torch
import torchvision
import wandb
import datetime
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from ui import progress_bar
Explanation: Import the required modules.
End of explanation
wandb.login()
config = {
"learning_rate": 0.1,
"epochs": 100,
"batch_size": 128,
"dataset": "cifar10"
}
run = wandb.init(project="wandb-project", entity="upeee", config=config)
Explanation: Login to and initialize wandb. You will need to use your wandb API key to run this demo.
As the config indicates, we will train our model using cifar10 dataset, learning rate of 0.1, and batch size of 128 for 100 epochs.
epochs means a complete sampling of the dataset (train). In the wandb plots, step is the term used instead of epoch.
batch size is the number of samples per training step.
End of explanation
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.resnet18(pretrained=False, progress=True)
model.fc = torch.nn.Linear(model.fc.in_features, 10)
model.to(device)
# watch model gradients during training
wandb.watch(model)
Explanation: Build the model
Use a ResNet18 from torchvision. Remove the last layer that was used for 1k-class ImageNet classification. Since we will use CIFAR10, the last layer is replaced by a linear layer with 10 outputs. We will train the model from scratch, so we set pretrained=False.
End of explanation
loss = torch.nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=wandb.config.learning_rate)
scheduler = CosineAnnealingLR(optimizer, T_max=wandb.config.epochs)
x_train = datasets.CIFAR10(root='./data', train=True,
download=True,
transform=transforms.ToTensor())
x_test = datasets.CIFAR10(root='./data',
train=False,
download=True,
transform=transforms.ToTensor())
train_loader = DataLoader(x_train,
batch_size=wandb.config.batch_size,
shuffle=True,
num_workers=2)
test_loader = DataLoader(x_test,
batch_size=wandb.config.batch_size,
shuffle=False,
num_workers=2)
Explanation: Loss function, Optimizer, Scheduler and DataLoader
The appropriate loss function is cross entropy for multi-category classfications. We use SGD or stochastic gradient descent for optimization. Our learning rate that starts at 0.1 decays to zero at the end of total number of epochs. The decay is controlled by a cosine learning rate decay scheduler.
Finally, we use cifar10 dataset that is available in torchvision. We will discuss datasets and dataloaders in our future demo. For the meantime, we can treat dataloader as a data strcuture that dispenses batch size data from either the train or test split of the dataset.
End of explanation
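Concretely, cosine annealing follows lr(t) = lr_min + ½ (lr_max − lr_min)(1 + cos(π t / T)); a standalone sketch of that schedule (our simplification, with lr_min = 0 by default as in this demo):

```python
import math

def cosine_lr(epoch, total_epochs, lr_max=0.1, lr_min=0.0):
    # lr_max at epoch 0, decaying smoothly to lr_min at total_epochs
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * epoch / total_epochs))

print(cosine_lr(0, 100))    # 0.1
print(cosine_lr(50, 100))   # ~0.05
print(cosine_lr(100, 100))  # 0.0
```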
label_human = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
table_test = wandb.Table(columns=['Image', "Ground Truth", "Initial Pred Label",])
image, label = next(iter(test_loader))
model.eval()
with torch.no_grad():
pred = torch.argmax(model(image.to(device)), dim=1).cpu().numpy()
for i in range(8):
table_test.add_data(wandb.Image(image[i]),
label_human[label[i]],
label_human[pred[i]])
print(label_human[label[i]], "vs. ", label_human[pred[i]])
Explanation: Visualizing sample data from test split
We can visualize data from the test split by getting a batch sample: image, label = next(iter(test_loader)). We use wandb table to create a column for image, ground truth label and initial model predicted label. The wandb table will show up when we run wandb.log() during training.
CIFAR10 dataset is made of small 32x32 RGB images. Each image belongs to one of the 10 categories or classes. Below are sample images from CIFAR10 and their corresponding human labels.
<img src="cifar10-samples.png" width="600" height="600">
End of explanation
def train(epoch):
model.train()
train_loss = 0
correct = 0
train_samples = 0
# sample a batch. compute loss and backpropagate
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
target = target.to(device)
output = model(data.to(device))
loss_value = loss(output, target)
loss_value.backward()
optimizer.step()
scheduler.step(epoch)
train_loss += loss_value.item()
train_samples += len(data)
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
if batch_idx % 10 == 0:
accuracy = 100. * correct / len(train_loader.dataset)
progress_bar(batch_idx,
len(train_loader),
'Train Epoch: {}, Loss: {:.6f}, Acc: {:.2f}%'.format(epoch+1,
train_loss/train_samples, accuracy))
train_loss /= len(train_loader.dataset)
accuracy = 100. * correct / len(train_loader.dataset)
return accuracy, train_loss
Explanation: The train loop
At every epoch, we will run the train loop for the model. At every iteration, we will get a batch of data from the train split. We will use the data to update the model parameters. We will use the loss function to calculate the loss value. We will use the optimizer to update the model parameters. We will use the scheduler to update the learning rate. Later, we will use the wandb table to visualize the loss and accuracy.
We use progress_bar to show the progress of the training.
End of explanation
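Stripped of the framework, the accuracy bookkeeping in the loop is just counting matches; a framework-free sketch (ours):

```python
def batch_accuracy(pred_labels, true_labels):
    # percentage of predictions matching the targets
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return 100.0 * correct / len(true_labels)

print(batch_accuracy([3, 1, 4, 1], [3, 1, 4, 2]))  # 75.0
```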
def test():
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = model(data.to(device))
target = target.to(device)
test_loss += loss(output, target).item()
pred = output.argmax(dim=1, keepdim=True)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
accuracy = 100. * correct / len(test_loader.dataset)
print('\nTest Loss: {:.4f}, Acc: {:.2f}%\n'.format(test_loss, accuracy))
return accuracy, test_loss
Explanation: The validation loop
After every epoch, we will run the validation loop for the model. In this way, we can track the progress of our model training. Both the average loss and accuracy are calculated. During training, we will use the wandb table to visualize the loss and accuracy.
End of explanation
run.display(height=1000)
start_time = datetime.datetime.now()
best_acc = 0
for epoch in range(wandb.config["epochs"]):
train_acc, train_loss = train(epoch)
test_acc, test_loss = test()
if test_acc > best_acc:
wandb.run.summary["Best accuracy"] = test_acc
best_acc = test_acc
torch.save(model, "resnet18_best_acc.pth")
wandb.log({
"Train accuracy": train_acc,
"Test accuracy": test_acc,
"Train loss": train_loss,
"Test loss": test_loss,
"Learning rate": optimizer.param_groups[0]['lr']
})
elapsed_time = datetime.datetime.now() - start_time
print("Elapsed time: %s" % elapsed_time)
wandb.run.summary["Elapsed train time"] = str(elapsed_time)
model.eval()
with torch.no_grad():
pred = torch.argmax(model(image.to(device)), dim=1).cpu().numpy()
final_pred = []
for i in range(8):
final_pred.append(label_human[pred[i]])
print(label_human[label[i]], "vs. ", final_pred[i])
table_test.add_column(name="Final Pred Label", data=final_pred)
wandb.log({"Test data": table_test})
wandb.finish()
Explanation: wandb plots
Finally, we will use wandb to visualize the training progress. We will use the following plots:
- Model gradients (wandb.watch(model))
- Train and test losses ("train loss": train_loss, "test loss": test_loss,)
- Train and validation accuracies ("Train accuracy": train_acc, "Test accuracy": test_acc,)
- Learning rate which decreases over epochs ("Learning rate": optimizer.param_groups[0]['lr'])
We re-use the earlier table_test to see the final prediction.
We save the best peforming model to ./resnet18_best_acc.pth. This can be used as a pretrained model like the pre-trained model in torchvision.
End of explanation
model = torch.load("resnet18_best_acc.pth")
accuracy, _ = test()
print("Best accuracy: %.2f" % accuracy)
Explanation: Load the best performing model
In the following code, we load the best performing model. The model is saved in ./resnet18_best_acc.pth. The average accuracy of the model is the same as the one in the previous section.
End of explanation |
8,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Testing models
Step1: Now let's analyze the KNN models using K=3 and K=10, but computing the accuracy on the test set.
Step2: Even though the values came out quite similar, this accuracy better represents the model that was built, since the model was tested on data different from the data used to train it.
Cross-Validation
The way we tested the model we built is valid, but there is a more interesting way to test our model so as to guarantee its ability to generalize. This is called cross-validation. This kind of validation provides a more reliable estimate and, consequently, a more suitable way to choose the model that best generalizes to the collected data.
O princípio da validação cruzada é simples
Step3: O parâmetro scores do método cross_val_score refere-se ao tipo de métrica que será utilizada para avaliar nosso modelo. No caso da classificação, vamos utilizar a acurácia (que corresponde a porcentagem de instâncias classificadas corretamente). Vale ressaltar que possuem várias métricas que podem ser utilizadas na classificação. Elas serão melhor trabalhadas mais adiante no curso.
Exemplo 1
Vamos calcular o melhor valor de K para a base da Iris treinando nosso modelo com validação cruzada de 5 folds.
O valor de k vai ser variado de 1 até 25. O resultado será plotado em um gráfico.
Step4: Exemplo 2
Vamos voltar para o exemplo que utilizamos no tutorial de Regressão Linear. O modelo utilizava informações de gastos de propaganda em Rádio, TV e Jornal para dar uma estimativa de vendas de um determinado produto. Vamos utilizar agora a validação para testar diferentes modelos de Regressão Linear.
Vamos utilizar a base de dados de três formas diferentes | Python Code:
#Required imports
from sklearn.datasets import load_iris
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
%matplotlib inline
data_iris = load_iris()
X = data_iris.data
y = data_iris.target
# Creating a test set consisting of 40% of the original data
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.4, random_state=4)
#Printing the sizes of the sets
print("Size of X in the training set: ", X_train.shape)
print("Size of X in the test set: ", X_test.shape)
print("Size of Y in the training set: ", Y_train.shape)
print("Size of Y in the test set: ", Y_test.shape)
Explanation: Supervised Learning
Testing models: training and test sets
We ended the last tutorial by evaluating the models we built. However, the tests were carried out on the same dataset that was used for training. It is easy to see that this is not a good approach. By analogy, it is as if we were evaluating our students with an exam identical to an exercise sheet handed out before the exam.
The main concern when testing machine learning models is ensuring that our model generalizes well enough to correctly classify instances it has never seen.
When we train and test our model on the same dataset, we run the risk of building a model that cannot generalize the knowledge it acquired. Usually, when this happens, we are dealing with a problem called overfitting. The right thing to do is to train our model on one dataset and test it on data that is new to the model. This way, we increase the chances of building a model capable of generalizing the "knowledge" extracted from the dataset.
In fact, this issue is related to two classic problems in machine learning: underfitting and overfitting. We will work on these two concepts in more detail later in the course.
Training and testing on distinct sets consists of taking our dataset and splitting it into two parts: (1) a training set and (2) a test set. This way we can train and test on distinct sets, but we can also evaluate whether our model actually learned, since the class of each instance in the test set is known. The test accuracy is a better estimate than the training accuracy, since it evaluates the model's performance on an unseen dataset.
We will work with the Iris dataset as well.
End of explanation
#Instantiating and training the models
knn_3 = KNeighborsClassifier(n_neighbors=3)
knn_3.fit(X_train, Y_train)
knn_10 = KNeighborsClassifier(n_neighbors=10)
knn_10.fit(X_train, Y_train)
# Making predictions for the test set
pred_test_3 = knn_3.predict(X_test)
pred_test_10 = knn_10.predict(X_test)
from sklearn import metrics
#Computing the accuracy
print("Test accuracy for K=3:", '%0.4f' % metrics.accuracy_score(Y_test, pred_test_3))
print("Test accuracy for K=10:", '%0.4f' % metrics.accuracy_score(Y_test, pred_test_10))
Explanation: Now let's analyze the KNN models using K=3 and K=10, but computing the accuracy on the test set.
End of explanation
# Import the cross-validation helper
from sklearn.model_selection import cross_val_score
# Apply 5-fold cross-validation to the KNN (k=3) model created earlier
scores_3 = cross_val_score(knn_3, X, y, cv=5, scoring='accuracy')
# Apply 5-fold cross-validation to the KNN (k=10) model created earlier
scores_10 = cross_val_score(knn_10, X, y, cv=5, scoring='accuracy')
print("Mean accuracy KNN (K=3): ", "%0.3f" % scores_3.mean())
print("Mean accuracy KNN (K=10): ", "%0.3f" % scores_10.mean())
Explanation: Although the values were quite similar, this accuracy is a better representation of the model that was built, since it was tested on data different from the data used to train it.
Cross-Validation
The way we tested the model is valid, but there is a more interesting way to test our model to guarantee its ability to generalize. This is called cross-validation. This kind of validation provides a more reliable estimate and, consequently, a more appropriate way to choose the model that best generalizes the collected data.
The principle of cross-validation is simple: we split the dataset into Train and Test and test our model (M). Then we make a new train/test split and apply the model again. We repeat this process x times (folds). The final result of the model is given by the average of the values found for each configuration of the dataset.
The following image illustrates this process for a split of 20% for testing and 80% for training. We use 5 folds to illustrate. Acc0 indicates the accuracy of fold 0, Acc1 the accuracy of fold 1, and so on.
<img src="https://dl.dropboxusercontent.com/u/25405260/imagens/crossvalidation.png" width="60%" />
Note that for each fold the train/test dataset is different.
Let's run cross-validation in scikit-learn. The library provides the cross_val_score method from the sklearn.model_selection package, which trains and tests the model using cross-validation.
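To make the idea concrete, here is a minimal, pure-Python sketch of how the fold indices could be produced. This is only an illustration of the principle — scikit-learn's KFold does this (and more) for you:

```python
def kfold_indices(n, k):
    # split the indices 0..n-1 into k consecutive folds;
    # each fold is used once as the test set, the rest as training
    indices = list(range(n))
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test_idx = indices[start:start + size]
        train_idx = indices[:start] + indices[start + size:]
        folds.append((train_idx, test_idx))
        start += size
    return folds

# 10 samples, 5 folds -> each test fold holds 2 samples
folds = kfold_indices(10, 5)
```

Every sample appears in exactly one test fold, so averaging the per-fold scores uses the whole dataset for evaluation.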
End of explanation
k_range = list(range(1, 26))
scores = []
maior_k = 0.0
maior_acc = 0.0
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
score = cross_val_score(knn, X, y, cv=5, scoring='accuracy')
mean_score = score.mean()
if mean_score > maior_acc:
maior_k = k
maior_acc = mean_score
scores.append(score.mean())
plt.plot(k_range, scores)
print("Best value of K = ", maior_k)
print("Best accuracy = ", maior_acc)
Explanation: The scoring parameter of the cross_val_score method refers to the type of metric that will be used to evaluate our model. For classification, we will use accuracy (which corresponds to the percentage of correctly classified instances). It is worth noting that there are several metrics that can be used for classification. They will be covered in more detail later in the course.
Example 1
Let's compute the best value of K for the Iris dataset, training our model with 5-fold cross-validation.
The value of k will be varied from 1 to 25. The result will be plotted in a graph.
End of explanation
import pandas as pd
from sklearn.linear_model import LinearRegression
# Load the dataset
data = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", index_col=0)
# Defining the features used for each dataset
feature_cols_1 = ['TV']
feature_cols_2 = ['TV', 'Radio']
feature_cols_3 = ['TV', 'Radio', 'Newspaper']
# Creating the input data
X_1 = data[feature_cols_1] # Data with only the TV feature
X_2 = data[feature_cols_2] # Data with the TV and Radio features
X_3 = data[feature_cols_3] # Data with the TV, Radio and Newspaper features
# Values of Y. Note that this is the same for all three datasets
y = data.Sales
#Instantiating the models
lm_1 = LinearRegression()
lm_2 = LinearRegression()
lm_3 = LinearRegression()
#10-fold cross-validation. Note that we changed the scoring parameter to match the technique we are using.
scores_1 = cross_val_score(lm_1, X_1, y, cv=10, scoring='r2')
scores_2 = cross_val_score(lm_2, X_2, y, cv=10, scoring='r2')
scores_3 = cross_val_score(lm_3, X_3, y, cv=10, scoring='r2')
# Results obtained with the R^2 metric
print("R2 Model 1: %0.6f" % (scores_1.mean()))
print("R2 Model 2: %0.6f" % (scores_2.mean()))
print("R2 Model 3: %0.6f" % (scores_3.mean()))
Explanation: Example 2
Let's go back to the example we used in the Linear Regression tutorial. The model used advertising spend on Radio, TV and Newspaper to estimate the sales of a given product. We will now use validation to test different Linear Regression models.
We will use the dataset in three different ways:
Using only the TV feature
Using the TV and Radio features
Using all the features (TV, Radio and Newspaper)
I mentioned earlier that the Newspaper attribute does not contribute positively to sales. Let's show this by training three distinct models and computing the accuracy with 10-fold cross-validation
End of explanation |
8,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recursion and Dictionaries
Dr. Chris Gwilliams
gwilliamsc@cardiff.ac.uk
Overview
Scripts in Python
Types
Methods and Functions
Flow control
Lists
Iteration
for loops
while loops
Now
Dicts
Tuples
Iteration vs Recursion
Recursion
Dictionaries
Python has many different data structures available (see here)
The dictionary structure is similar to a list, but the index is specified by you.
It is also known as an associative array, where values are mapped to a key.
Step1: This follows the format of JSON (JavaScript Object Notation).
Keys can be accessed the same way as lists
Step2: Exercise
Create a dictionary with information about the software academy
Loop through it and print the values
Now use enumerate to print the key index
Modify the loop to only print the values where the length is > 5
Step3: Keys and Values
Dictionaries have methods associated with them to access the keys, values or both from within the dict.
Exercise
Use the dir function (and the Python docs) on your soft_acc dict and write down the 3 methods that can be used to access the keys, values and both
Step4: Exercise
Using the methods you found, write a function that has a dictionary as an argument and loops through the values to return the first value that is of type int
Create a new function that does the same but returns the key of that value.
Step5: Accessing and Reassigning
With dicts, we can access keys through square bracket notation
Step6: Tuples
We have just seen that tuples are a data structure in Python and that they are not as simple as lists or ints!
Tuples are like lists but
Step7: Unpacking Tuples
Tuples can hold any number of values and it is easy enough to access them using square bracket notation.
But, we may receive a tuple that contains many values and this is no fun
Step8: Searching Dictionaries
Unlike lists, we are not just looking for values within dictionaries. Since we can specify our own keys, there are many times we may want to search this as well.
Exercise
Using the student dictionary from a few slides back, find the index of the student with the name "jen" using only the name and then the student number and then a tuple of the two.
Hint
Step9: Exercise
Write a script that
Step10: Recursion
An iterative function is a function that loops to repeat a block of code.
A recursive function is a function that calls itself until a condition is met.
Step11: What is Happening Here?
Python sees our call to the function and executes it.
Recursion level 1: does x equal 1? No → return 3 + sum_until(2)
Recursion level 2: does x equal 1? No → return 2 + sum_until(1)
Recursion level 3: does x equal 1? Yes → return 1
Unwinding the calls gives 3 + (2 + 1) = 6.
Exercise
Write a function that takes a list as an input, a start index and checks if the value at that index is greater than the value at the next index. If it is greater
Step12: Exercise
Now modify the function to use an end index as an argument (which is the length of the list to begin with).
In your check for whether the start index is more than the length of the list, do the following things
Step13: Bubble Sort
Congratulations, you just implemented your first sorting algorithm. You can find more information on the bubble sort here
Recursion vs. Iteration
You have seen both, which is better/faster/more optimal?
Recursive approaches are typically shorter and easier to read. However, recursion also results in slower code because of all the function calls it makes, as well as the risk of a stack overflow when too many calls are made.
Typically, math-based approaches will use recursion and most software engineering code will use iteration. Basically, most algorithms use recursion, so you need to understand how it works.
When Should You Use It?
Recursion is often seen as some mythical beast but the breakdown (as we have seen) is quite simple.
However, most (not all) languages are not tuned for recursion and, in performance terms, iteration is often vastly quicker.
Step14: Why the Difference?
To understand this, we need to understand a little bit about how programs are run.
Two key things are the stack and the heap
Stack
Every time a function or a method is called, it is put on the stack to be executed. Recursion uses the stack extensively because each function calls itself (until some condition is met).
See the code below | Python Code:
empty_dict = {}
contact_dict = {
"name": "Homer",
"email": "homer@simpsons.com",
"phone": 999
}
print(contact_dict)
Explanation: Recursion and Dictionaries
Dr. Chris Gwilliams
gwilliamsc@cardiff.ac.uk
Overview
Scripts in Python
Types
Methods and Functions
Flow control
Lists
Iteration
for loops
while loops
Now
Dicts
Tuples
Iteration vs Recursion
Recursion
Dictionaries
Python has many different data structures available (see here)
The dictionary structure is similar to a list, but the index is specified by you.
It is also known as an associative array, where values are mapped to a key.
End of explanation
print(contact_dict['email'])
Explanation: This follows the format of JSON (JavaScript Object Notation).
Keys can be accessed the same way as lists:
End of explanation
soft_acc = {
"post_code": "np20",
"floor": 3
}
for key in soft_acc:
    print(soft_acc[key])

for index, key in enumerate(soft_acc):
    print(index, key)

for value in soft_acc.values():
    if len(str(value)) > 5:
        print(value)
Explanation: Exercise
Create a dictionary with information about the software academy
Loop through it and print the values
Now use enumerate to print the key index
Modify the loop to only print the values where the length is > 5
End of explanation
print(soft_acc.keys())
print(soft_acc.values())
print(soft_acc.items())
Explanation: Keys and Values
Dictionaries have methods associated with them to access the keys, values or both from within the dict.
Exercise
Use the dir function (and the Python docs) on your soft_accc dict and write down the 3 methods that can be used to access the keys, values and both
End of explanation
def find_first_int_value(dictionary):
for val in dictionary.values():
if type(val) is int:
return val
def find_first_int_key(dictionary):
    for key, val in dictionary.items():
        if type(val) is int:
            return key

# note: enumerate over a dict yields (index, key) pairs, not (key, value) pairs,
# so here we look the value up with the key
def example(dictionary):
    for index, key in enumerate(dictionary):
        if type(dictionary[key]) is int:
            return key

find_first_int_value(soft_acc)
find_first_int_key(soft_acc)
example(soft_acc)
Explanation: Exercise
Using the methods you found, write a function that has a dictionary as an argument and loops through the values to return the first value that is of type int
Create a new function that does the same but returns the key of that value.
End of explanation
students = {1234: "gary", 4567: "jen"}
print(students.get(1234))
gary = students.popitem() #how does this differ from pop?
print(gary)
print(students)
#pop gives a value but popitem gives you a tuple
students[gary[0]] = gary[1]
print(students)
print(students.pop(48789492, "Sorry, that student number does not exist"))
Explanation: Accessing and Reassigning
With dicts, we can access keys through square bracket notation:
my_dict['key']
or through the get method:
my_dict.get('key')
Removing Items
Much like lists, we can pop elements from a dict, but the way this is done is slightly different:
pop() - One must provide the key of the item to be removed and the value is returned. An error is given if nothing was found
popitem() - This works much like pop on a list, removing the last item in the dict and providing the key and the value.
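A quick illustration of the difference (a small sketch — note that pop takes a key, while popitem takes no arguments and removes the last inserted pair):

```python
inventory = {"apples": 3, "pears": 5, "plums": 7}

removed_value = inventory.pop("apples")   # returns just the value: 3
removed_pair = inventory.popitem()        # returns a (key, value) tuple: ("plums", 7)
```

After both calls, only {"pears": 5} remains in the dictionary.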
Exercise
Create a dict of student numbers as keys and student names as values.
Print the third value in the dict using the get method
Choose any item in the list and pop it off and save it to a variable
Now add it back into the dict
Using the docs, explain the difference between pop and popitem return types
Using the docs, call the pop method for a key that does not exist, but make it return a string that reads, "Sorry, that student number does not exist"
End of explanation
my_tuple = 1, 2
print(my_tuple)
new_tuple = 3, "a", 6
print(new_tuple)
print(new_tuple[1])
Explanation: Tuples
We have just seen that tuples are a data structure in Python and that they are not as simple as lists or ints!
Tuples are like lists but: they are immutable!
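Immutability means that once a tuple is created, its elements cannot be reassigned — trying to do so raises a TypeError:

```python
point = (3, 4)

try:
    point[0] = 10          # tuples do not support item assignment
    reassigned = True
except TypeError:
    reassigned = False     # this branch runs: the tuple is unchanged
```

The tuple itself is left exactly as it was created.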
We can access them like lists as well:
End of explanation
my_tuple = 999, "Dave", "dave@dave.com", True, True
phone = my_tuple[0]
name = my_tuple[1]
email = my_tuple[2]
yawn = my_tuple[3]
still_not_done = my_tuple[4]
#unpacking tuples: number of names on left MUST match values in tuple
phone, name, email, yawn, still_not_done = my_tuple
Explanation: Unpacking Tuples
Tuples can hold any number of values and it is easy enough to access them using square bracket notation.
But, we may receive a tuple that contains many values and this is no fun:
End of explanation
students = {1234: "gary", 4567: "jen"}
print("1. {}".format(1234 in students))
print(1234 in students.keys())
print("jen" in students)
print("gary" in students.values())
print("gary" in students.items())
print((1234, "gary") in students.items())
Explanation: Searching Dictionaries
Unlike lists, we are not just looking for values within dictionaries. Since we can specify our own keys, there are many times we may want to search this as well.
Exercise
Using the student dictionary from a few slides back, find the index of the student with the name "jen" using only the name and then the student number and then a tuple of the two.
Hint: Use the methods attached to dicts.
End of explanation
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
# strings_dict = {}
strings_dict = {33: 'dfhjkdshfjkhdsfjkhdskahfksahkjdfk', 18: 'dkjhskjhskdhffkdjh', 19: 'dfkjsdhfkjdhsfkjhdk', 5: 'kdkdf', 6: 'fdjhfd', 9: 'fkljrwlgj', 28: 'fdjfkdjfkljlskdfjlksdjflsk;a'}
# while True:
# msg = input("Please type your message here:")
# if msg is not 'q':
# strings_dict[len(msg)] = msg
# else:
# break
def list_to_dict(strings):
strings_dict = {}
for string in strings:
strings_dict[len(string)] = string
return strings_dict
def sort_list(input_list):
    # bubble sort by string length: n passes of adjacent comparisons and swaps
    for _ in range(len(input_list)):
        for i in range(len(input_list) - 1):
            if len(input_list[i]) > len(input_list[i + 1]):
                input_list[i], input_list[i + 1] = input_list[i + 1], input_list[i]
    return input_list
def mean_length(lengths):
total_length = 0
for length in lengths:
total_length += length
return total_length/len(lengths)
# strings_dict = list_to_dict(strings)
print("Average string length is {0}".format(mean_length(strings_dict.keys())))
sorted_list = sort_list(list(strings_dict.values()))
print("Sorted list is {0}".format(sorted_list))
Explanation: Exercise
Write a script that:
1. Users can input messages until they type 'q'
2. Messages are added to a dictionary with the length of the message as the key
3. Write a function that uses the keys as the input to return the average length
4. Write a second function that takes the values of the dictionary and sorts them according to length
End of explanation
def sum_until(x):
if x == 1:
return x
else:
return x + sum_until(x - 1)
print(sum_until(3))
Explanation: Recursion
An iterative function is a function that loops to repeat a block of code.
A recursive function is a function that calls itself until a condition is met.
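As a minimal side-by-side sketch, here is the same countdown written both ways:

```python
def countdown_iterative(n):
    # loop until n reaches zero
    result = []
    while n > 0:
        result.append(n)
        n -= 1
    return result

def countdown_recursive(n):
    # the function calls itself until the condition n <= 0 is met
    if n <= 0:
        return []
    return [n] + countdown_recursive(n - 1)
```

Both produce the same list; the recursive version replaces the explicit loop with a self-call and a stopping condition.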
End of explanation
def check_value(input_list, start):
if(start == len(input_list) - 1):
return
elif(input_list[start] > input_list[start+1]):
current = input_list[start]
input_list[start] = input_list[start + 1]
input_list[start + 1] = current
return input_list
l = [3,1,4]
print(check_value(l, 0))
Explanation: What is Happening Here?
Python sees our call to the function and executes it.
Recursion level 1: does x equal 1? No → return 3 + sum_until(2)
Recursion level 2: does x equal 1? No → return 2 + sum_until(1)
Recursion level 3: does x equal 1? Yes → return 1
Unwinding the calls gives 3 + (2 + 1) = 6.
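One way to watch this happen is to thread a depth counter through the function and print it at each level (a debugging sketch, not part of the original function):

```python
def sum_until_traced(x, level=1):
    # print the current recursion level, indented by depth
    print("  " * level + "level {}: x = {}".format(level, x))
    if x == 1:
        return x
    return x + sum_until_traced(x - 1, level + 1)

total = sum_until_traced(3)   # prints three levels as the calls go down
```

The printed lines mirror the trace above, and the final return value is 6.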
Exercise
Write a function that takes a list as an input, a start index and checks if the value at that index is greater than the value at the next index. If it is greater: swap them. Return the list.
HINT: You must make sure that the index + 1 must be less than the length of the list.
End of explanation
#function receives list, start point and endpoint as args
def recursive_sort(input_list, index, end):
#if the startpoint goes beyond the endpoint then return
if index > end:
return(input_list)
#if the start point is equal to the end then decrement the end
if index == end:
recursive_sort(input_list, 0, end - 1)
# check if the string at index is longer than the string at index + 1
# replace it if it is
# why do we need a temporary variable?
elif len(input_list[index]) > len(input_list[index + 1]):
current = input_list[index]
print("Switching \"{0}\" at {1} for \"{2}\"".format(current, index, input_list[index + 1]))
input_list[index] = input_list[index + 1]
input_list[index + 1] = current
# call the function again and increment the index
recursive_sort(input_list, index + 1, end)
# Why do we need this here?
return input_list
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
sorted_list = recursive_sort(strings, 0, len(strings)-1)
print(sorted_list)
#uncommented
def recursive_sort(input_list, index, end):
if index > end:
return(input_list)
if index == end:
recursive_sort(input_list, 0, end - 1)
elif len(input_list[index]) > len(input_list[index + 1]):
current = input_list[index]
print("Switching \"{0}\" at {1} for \"{2}\"".format(current, index, input_list[index + 1]))
input_list[index] = input_list[index + 1]
input_list[index + 1] = current
recursive_sort(input_list, index + 1, end)
return input_list
strings = ['hey', 'a', 'you there', 'what to the what to the what', 'sup', 'oi oi', 'how are you doing, good sir?']
sorted_list = recursive_sort(strings, 0, len(strings)-1)
print(sorted_list)
Explanation: Exercise
Now modify the function to use an end index as an argument (which is the length of the list to begin with).
In your check for whether the start index is more than the length of the list, do the following things:
- call the function again, with the same list as the arguments
- the start index set to 0
- the end index decremented
- Before returning the original list, call the function again but increment the start index
- Add a check to return the list at the start of the function, if the start index is more than the end
End of explanation
import timeit
def recursive_factorial(n):
if n == 1:
return 1
else:
return n * recursive_factorial(n-1)
def iterative_factorial(n):
x = 1
for each in range(1, n + 1):
x = x * each
return x
print("Timing runs for recursive approach: ")
%timeit for x in range(100): recursive_factorial(500)
print("Timing runs for iterative approach: ")
%timeit for x in range(100): iterative_factorial(500)
# print(timeit.repeat("factorial(10)",number=100000))
Explanation: Bubble Sort
Congratulations, you just implemented your first sorting algorithm. You can find more information on the bubble sort here
Recursion vs. Iteration
You have seen both, which is better/faster/more optimal?
Recursive approaches are typically shorter and easier to read. However, recursion also results in slower code because of all the function calls it makes, as well as the risk of a stack overflow when too many calls are made.
Typically, math-based approaches will use recursion and most software engineering code will use iteration. Basically, most algorithms use recursion, so you need to understand how it works.
When Should You Use It?
Recursion is often seen as some mythical beast but the breakdown (as we have seen) is quite simple.
However, most (not all) languages are not tuned for recursion and, in performance terms, iteration is often vastly quicker.
End of explanation
import random
def sort_numbers(s):
for i in range(1, len(s)):
val = s[i]
j = i - 1
while (j >= 0) and (s[j] > val):
s[j+1] = s[j]
j = j - 1
s[j+1] = val
# x = eval(input("Enter numbers to be sorted: "))
# x = list(range(0, 10)) #list(x)
x = random.sample(range(1, 1001), 100)  # keep the list small: this sort is O(n^2)
print(x)
sort_numbers(x)
print(x)
Explanation: Why the Difference?
To understand this, we need to understand a little bit about how programs are run.
Two key things are the stack and the heap
Stack
Every time a function or a method is called, it is put on the stack to be executed. Recursion uses the stack extensively because each function calls itself (until some condition is met).
See the code below:
python
def recursive():
return recursive()
Running this will result in the recursive function being called an infinite number of times. The Python interpreter cannot handle this, so it will shut itself down and cause a stack overflow
Heap
The heap is the space for dynamic allocation of objects. The more objects created, the greater the heap. Although, this is dynamic and can grow as the application grows.
Python also takes care of this for us, by using a garbage collector. This tracks allocations of objects and cleans them up when they are no longer used. We can force things to be cleared by using:
del my_var
However, if assigning that variable takes up 50MB, Python may not always clear 50MB when it is deallocated. Why do you think this is?
FizzBuzz Exercise
Write a for loop that goes from 1 to 100 (inclusive) and prints:
* fizz if the number is a multiple of 3
* buzz if the number is a multiple of 5
* fizzbuzz if the number is a multiple of both 3 and 5
* the value for any other case
Exercise
Now turn this into a function and modify it to not use a for loop and use recursion. I.e. calling the function until the value reaches 100.
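One possible recursive solution sketch, building a list instead of printing so the result is easy to inspect (the printing version follows the same structure):

```python
def fizzbuzz(n=1, limit=100):
    if n > limit:
        return []
    if n % 15 == 0:
        value = "fizzbuzz"     # multiple of both 3 and 5
    elif n % 3 == 0:
        value = "fizz"
    elif n % 5 == 0:
        value = "buzz"
    else:
        value = n
    return [value] + fizzbuzz(n + 1, limit)

results = fizzbuzz()           # 100 entries, one per number
```

The recursion stops once n passes the limit, which plays the role of the loop's end condition.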
Homework
Insertion Sort
The insertion sort is a basic algorithm to build the sorted array in a similar way to the bubble sort.
The list is sorted by looping through all the elements from the index to the end, moving the index along for each loop.
End of explanation |
8,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem 1
Using the following dictionary
Step1: Problem 2
Write a function that finds the number of elements in a list (without using the built-in len function). Now, use %%timeit to compare the speed to len for a list of 100,000 elements.
Step2: It's not a coincidence that the custom function is roughly 100,000x slower than the built-in. Python lists actually have an attribute that stores their length - so using the len built-in means just looking up that number (so basically one step). Iterating over the entire list takes 100,000 steps (one for each element in the list). Try changing the length of the list to 1,000 elements - the custom length function now should be approximately 1,000x slower than the built-in.
Problem 3
Write a function that calculates the median of a list of numbers (without using statistics). Use the randint function the the random module to create a list of integers to test your function. | Python Code:
my_dict = {
'a': 3,
'b': 2,
'c': 10,
'd': 7,
'e': 9,
'f' : 12,
'g' : 13
}
# print the keys with even values
print('keys with even values:')
for key, value in my_dict.items():
# modulo 2 == 0 implies the number is even
if value % 2 == 0:
print(key)
# print the key corresponding to the largest value
print('\nkey with max value:')
max_val = 0 # assume the values are positive numbers!
for key, value in my_dict.items():
if value > max_val:
max_val = value
max_key = key
print(max_key)
# print the sum of all of the values
print('\nsum of values:')
val_sum = 0
for value in my_dict.values():
val_sum += value
print(val_sum)
# or, using Python built-ins
print('\nsum of values (again):')
print(sum(my_dict.values()))
Explanation: Problem 1
Using the following dictionary:
my_dict = {
'a': 3,
'b': 2,
'c': 10,
'd': 7,
'e': 9,
'f' : 12,
'g' : 13
}
Print out:
- the keys of all values that are even.
- the key with the maximum value.
- the sum of all the values.
End of explanation
def my_len(my_list):
n = 0
for element in my_list:
n += 1
return n
my_list = list(range(100000))
%%timeit
my_len(my_list)
%%timeit
len(my_list)
Explanation: Problem 2
Write a function that finds the number of elements in a list (without using the built-in len function). Now, use %%timeit to compare the speed to len for a list of 100,000 elements.
End of explanation
def median(x):
# sort the input list first
x.sort()
mid = int(len(x) / 2)
if len(x) % 2 == 0:
# for even length, median is average of middle 2 vals
return (x[mid] + x[mid-1]) / 2
else:
# for odd length, median is just the middle value
return x[mid]
my_list = [1, 2, 3, 4, 5]
median(my_list)
import random
my_random_list = []
# make a list of 1000 random integers between 0 and 99,999
for _ in range(1000):
my_random_list.append(random.randint(0, 100000))
median(my_random_list)
# double check with median function from `statistics` module
import statistics
statistics.median(my_random_list)
Explanation: It's not a coincidence that the custom function is roughly 100,000x slower than the built-in. Python lists actually have an attribute that stores their length - so using the len built-in means just looking up that number (so basically one step). Iterating over the entire list takes 100,000 steps (one for each element in the list). Try changing the length of the list to 1,000 elements - the custom length function now should be approximately 1,000x slower than the built-in.
Problem 3
Write a function that calculates the median of a list of numbers (without using statistics). Use the randint function the the random module to create a list of integers to test your function.
End of explanation |
8,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hybrid coding scheme for diagonal Gaussians
```
Copyright 2022 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Step4: Below are basic implementations of two general reverse channel coding schemes based on the Poisson functional representation (Li & El Gamal, 2018) and the hybrid coding scheme (Theis & Yosri, 2022). These implementations are meant for educational purposes and were not designed for production use.
Using these functions, a sample from a continuous target distribution $q$ can be encoded into a low-entropy integer code. It is assumed that the encoder and decoder have shared knowledge of a proposal distribution $p$ and a random seed (given in the form of rs below).
Step7: The code below deals with the special case of diagonal Gaussian target and proposal distributions.
Step8: The following cell encodes Gaussian samples.
Step9: The following cell decodes the Gaussian samples. While it is not obvious from the implementation of sample_gaussian, the factor M is independent of the mean of q. This allows us to use it in the decoding step below.
Step11: Below we estimate the cost of encoding the samples, and check that the cost agrees with the theoretical bound on the coding cost.
Step12: Below we measure the computational efficiency of the hybrid coding scheme. | Python Code:
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
from tqdm import tqdm
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: Hybrid coding scheme for diagonal Gaussians
```
Copyright 2022 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
End of explanation
def sample_pfr(q, p, w_min, rs=np.random.RandomState(0)):
"""Reverse channel coding scheme based on the Poisson functional representation
(Li & El Gamal, 2018).
Parameters
----------
q: sp.stats.rv_continuous
Target distribution
p: sp.stats.rv_continuous
Prior distribution known to decoder
w_min: float
Lower bound on the density ratio, ideally inf_z p(z) / q(z)
Returns
-------
dict
Returns a sample, its code, and the number of iterations that were needed
to identify the code.
"""
if w_min > 1.0 or w_min < 0.0 or np.isnan(w_min):
raise ValueError('`w_min` should not be outside the range [0, 1]')
t = 0 # Poisson process
s = np.inf # score of current best candidate
n = 0 # index of last accepted proposal
i = 0 # index of current proposal considered
z = None # value of last accepted sample
while s > t * w_min:
# draw candidate from proposal distribution
z_ = p.rvs(random_state=rs)
# advance Poisson process
t += rs.exponential()
# evaluate candidate
s_ = t * np.prod(p.pdf(z_) / q.pdf(z_))
# accept/reject candidate
if s_ < s:
n = i
s = s_
z = z_
i += 1
return {'sample': z, 'code': n, 'iterations': i}
def sample_hybrid(q, p, w_min, M, rs=np.random.RandomState(0)):
"""Hybrid reverse channel coding scheme based on universal quantization
and the Poisson functional representation (Theis & Yosri, 2022).
This implementation assumes that q.a and q.b indicate the support of the
distribution. Note that Scipy's truncated distributions implement q.a and
q.b but these do *not* directly correspond to the distributions' support.
Parameters
----------
q: sp.stats.rv_continuous
Target distribution with finite support indicated by q.a and q.b
p: sp.stats.rv_continuous
Prior distribution known to decoder
w_min: float
Lower bound on the density ratio, ideally inf_z p(z) / q(z)
M: np.ndarray
After transformation, the support of p is at least M times the support of q
Returns
-------
dict
Returns the sample z, its code (n, k), and the number of iterations
that were needed to find the code.
"""
if w_min > 1.0 or w_min < 0.0 or np.isnan(w_min):
raise ValueError('`w_min` should not be outside the range [0, 1]')
dim = p.mean().size
# transformation, its inverse, and its log-derivative
phi = lambda u: p.ppf(u / M)
phi_inv = lambda z: p.cdf(z) * M
log_phi_prime = lambda u: -p.logpdf(phi(u)) - np.log(M)
# transform target distribution
q_phi_logpdf = lambda u: q.logpdf(phi(u)) + log_phi_prime(u)
# transform support of target distribution
q_phi_support = phi_inv(np.asarray([q.a, q.b]))
if np.any(np.abs(np.diff(q_phi_support, axis=0)) > 1):
raise ValueError('The support of the target distribution is too large')
# center of bridge proposal distribution
c = np.mean(q_phi_support, axis=0)
# don't generate proposals outside support of p
c = np.clip(c, 0.5, M - 0.5)
# apply reverse channel coding
t = 0 # Poisson process
s = np.inf # score of last accepted proposal
n = 0 # index of last accepted proposal
i = 0 # index of current proposal
while np.exp(s) > t * w_min * np.prod(M):
# generate candidate using universal quantization
u = rs.rand(dim)
k_ = np.array(c - u + 0.5, dtype=int) # equals round(c - u) since c - u + 0.5 >= 0
y_ = k_ + u
# evaluate candidate
t += rs.exponential()
s_ = np.log(t) - np.sum(q_phi_logpdf(y_))
# accept/reject candidate
if s_ < s:
n = i
s = s_
k = k_
y = y_
i += 1
# transform sample back
z = phi(y)
# n and k contain all the information needed to produce z
return {
'sample': z,
'code': (n, np.asarray(k, dtype=int)),
'iterations': i}
def decode_hybrid(n, k, p, M, rs=np.random.RandomState(0)):
"""Accepts a code representing a sample and decodes it.
Parameters
----------
n: int
The index of the candidate
k: int
Additional information required to generate the candidate
p: sp.stats.rv_continuous
Assumed marginal distribution of the samples
M: np.ndarray
See Theis & Yosri (2022) for an explanation of this parameter
Returns
-------
numpy.ndarray
A decoded sample
"""
dim = p.mean().size
# transformation, its inverse, and its log-derivative
phi = lambda u: p.ppf(u / M)
# advance random seed
for _ in range(n + 1):
u = rs.rand(dim)
rs.exponential()
return phi(k + u)
Explanation: Below are basic implementations of two general reverse channel coding schemes based on the Poisson functional representation (Li & El Gamal, 2018) and the hybrid coding scheme (Theis & Yosri, 2022). These implementations are meant for educational purposes and were not designed for production use.
Using these functions, a sample from a continuous target distribution $q$ can be encoded into a low-entropy integer code. It is assumed that the encoder and decoder have shared knowledge of a proposal distribution $p$ and a random seed (given in the form of rs below).
End of explanation
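To make the shared-seed idea concrete, here is a self-contained 1-D toy version of the Poisson functional representation scheme (illustration only — toy Gaussians, not the notebook's API): because encoder and decoder share the proposal p and the random seed, transmitting the index n of the winning candidate is enough to reconstruct the sample.

```python
import numpy as np
import scipy.stats as st

def toy_encode(q, p, w_min, seed=0):
    rs = np.random.RandomState(seed)
    t, s, n, z, i = 0.0, np.inf, 0, None, 0
    while s > t * w_min:
        z_ = p.rvs(random_state=rs)   # candidate from the shared proposal
        t += rs.exponential()         # advance the Poisson process
        s_ = t * p.pdf(z_) / q.pdf(z_)
        if s_ < s:                    # keep the best-scoring candidate
            n, s, z = i, s_, z_
        i += 1
    return z, n

def toy_decode(n, p, seed=0):
    rs = np.random.RandomState(seed)  # same seed => same candidate stream
    for _ in range(n + 1):
        z = p.rvs(random_state=rs)
        rs.exponential()
    return z

q = st.norm(loc=1.0, scale=1.0)
p = st.norm(loc=0.0, scale=2.0)
# For Gaussians with p wider than q, the ratio p/q has a finite minimum:
x_min = (p.var() * q.mean() - q.var() * p.mean()) / (p.var() - q.var())
w_min = p.pdf(x_min) / q.pdf(x_min)

z, n = toy_encode(q, p, w_min)
print(n, z, toy_decode(n, p))
```

The decoded value matches the encoded sample exactly, even though only the integer n crossed the "channel".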
def minimum_weight(q, p):
"""Returns min_z p(z) / q(z) for normal distributions q and p."""
assert np.all(p.var() >= q.var())
with np.errstate(invalid='ignore'):
x_min = (p.var() * q.mean() - q.var() * p.mean()) / (p.var() - q.var())
w_min = p.logpdf(x_min) - q.logpdf(x_min)
return np.where(np.isnan(w_min), 1.0, np.exp(w_min))
def sample_gaussian(q, p, D=1e-4, rs=np.random.RandomState(0)):
"""Encodes diagonal Gaussian using hybrid coding scheme.
Parameters
----------
q: sp.stats.rv_continuous
Target normal distribution
p: sp.stats.rv_continuous
Marginal normal distribution
D: float
Controls fraction of mass of q truncated
rs: numpy.random.RandomState
Source of randomness shared between encoder and decoder
Returns
-------
dict
A dictionary containing a sample, its code and additional information
"""
dim = q.mean().size
# adjust for dimensionality
D = 1.0 - np.power(1.0 - D, 1.0 / dim)
# support of standard truncated normal
a = sp.stats.norm().ppf(D / 2.0)
b = sp.stats.norm().ppf(1.0 - D / 2.0)
# approximate normal with truncated normal
q_tr = sp.stats.truncnorm(a=a, b=b, loc=q.mean(), scale=q.std())
# fix broken support indicators
q_tr.a = q_tr.a * q_tr.std() + q_tr.mean()
q_tr.b = q_tr.b * q_tr.std() + q_tr.mean()
# determine support of widest q (after transformation with p.cdf)
c = p.cdf(p.mean() - q_tr.mean() + q_tr.a)
d = p.cdf(p.mean() - q_tr.mean() + q_tr.b)
M = np.asarray(np.floor(1 / (d - c)), dtype=int)
# lower bound on w_min = min_z p(z) / q_tr(z)
w_min = np.prod(minimum_weight(q, p) * (1 - D))
results = sample_hybrid(q_tr, p, w_min, M, rs)
results['factor'] = M
return results
Explanation: The code below deals with the special case of diagonal Gaussian target and proposal distributions.
End of explanation
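The universal-quantization step inside sample_hybrid can also be looked at in isolation (toy numbers, not the notebook's API): the dither u is reproducible from the shared seed, so only the integer part k needs to be transmitted.

```python
import numpy as np

rs = np.random.RandomState(0)
c = np.array([3.7, -1.2])            # center of the "bridge" proposal
u = rs.rand(2)                       # shared dither, known to the decoder via the seed
k = np.round(c - u).astype(int)      # integer offset: this is what gets sent
y = k + u                            # reconstructed candidate
# y always lands within half a unit of c, i.e. y ~ Uniform(c - 1/2, c + 1/2)
print(k, y)
```
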
mean_scale = np.asarray([40.0, 20.0])
target_scale = np.asarray([1.0, 1.0])
# proposal distribution
p = sp.stats.norm(
loc=[0, 0],
scale=np.sqrt(target_scale ** 2 + mean_scale ** 2))
# target distributions
targets = []
for _ in range(2000):
target_mean = np.random.randn(2) * mean_scale
targets.append(sp.stats.norm(loc=target_mean, scale=target_scale))
# encode samples
samples = [
sample_gaussian(q, p, D=1e-4, rs=np.random.RandomState(i))
for i, q in enumerate(targets)]
Explanation: The following cell encodes Gaussian samples.
End of explanation
# decode samples
z = [
decode_hybrid(
n=s['code'][0],
k=s['code'][1],
p=p,
M=s['factor'],
rs=np.random.RandomState(i))
for i, s in enumerate(samples)]
z = np.asarray(z)
t = np.linspace(0, 2 * np.pi, 100)
x = np.cos(t) * p.std()[0] * 2
y = np.sin(t) * p.std()[1] * 2
# visualize prior distribution and samples
plt.figure(figsize=(5, 5))
plt.plot(x, y, 'k')
xl, yl = plt.xlim(), plt.ylim()
plt.plot(z[:, 0], z[:, 1], '.')
plt.axis('equal')
plt.xlim(*xl)
plt.ylim(*yl);
Explanation: The following cell decodes the Gaussian samples. While it is not obvious from the implementation of sample_gaussian, the factor M is independent of the mean of q. This allows us to use it in the decoding step below.
End of explanation
def coding_cost_hybrid(q, p, sample):
"""Estimate the coding cost (assuming a Zipf distribution)
as well as theoretical upper and lower bounds.
The target and marginal distribution are used to compute the bounds
and the parameter of the Zipf distribution used to encode the sample.
It is assumed that the entropy of `q` is the same across
target distributions and that `p` is the marginal distribution, i.e.,
E[q_X(z)] = p(z).
Parameters
----------
q: sp.stats.rv_continuous
Target distribution
p: sp.stats.rv_continuous
Marginal distribution
sample: dict
A single sample as produced by `sample_hybrid`
Returns
-------
dict
Coding cost and bounds
"""
# mutual information between source and communicated sample
marg_diff_entropy = p.entropy().sum() / np.log(2)
cond_diff_entropy = q.entropy().sum() / np.log(2)
mi = marg_diff_entropy - cond_diff_entropy
# upper bound on entropy
log2M = np.log2(sample['factor']).sum()
coding_cost_bound = mi + np.log2(mi - log2M + 1) + 4
# coding cost of N under Zipf distribution
exponent = 1.0 + 1.0 / (1.0 + np.log2(np.e) / np.e + mi - log2M)
log2Pn = -exponent * np.log2(sample['code'][0] + 1)
log2Pn = log2Pn - np.log2(sp.special.zeta(exponent))
# coding cost of N and K (log2M is the cost of encoding K)
coding_cost_zipf = -log2Pn + log2M
return {
'zipf': coding_cost_zipf,
'lower_bound': mi,
'upper_bound': coding_cost_bound,
}
coding_costs = [coding_cost_hybrid(q, p, s) for q, s in zip(targets, samples)]
print('{:.4f} <= {:.4f} <= {:.4f}'.format(
coding_costs[0]['lower_bound'],
np.mean([c['zipf'] for c in coding_costs]),
coding_costs[0]['upper_bound']))
Explanation: Below we estimate the cost of encoding the samples, and check that the cost agrees with the theoretical bound on the coding cost.
End of explanation
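As a side note, the Zipf part of the estimate can be sanity-checked on its own (toy exponent alpha and index n, not the values derived above): encoding index n under P(n) proportional to n**(-alpha) costs -log2 P(n) bits.

```python
import numpy as np
import scipy.special

alpha = 1.5
n = 7                                     # 1-based index to encode
cost_bits = alpha * np.log2(n) + np.log2(scipy.special.zeta(alpha))
print(cost_bits)                          # roughly 5.6 bits
```
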
np.random.seed(1)
dim = 2
mean_scales = np.linspace(0, 50, 8)
num_samples = 200
coding_costs = []
samples = []
targets = []
for sigma in tqdm(mean_scales):
mean_scale = np.ones(dim) * sigma
target_scale = np.ones(dim)
# prior/marginal distribution of communicated sample
p = sp.stats.norm(
loc=np.zeros(dim),
scale=np.sqrt(target_scale ** 2 + mean_scale ** 2))
# draw random target distributions
targets.append([])
for _ in range(num_samples):
target_mean = np.random.randn(dim) * mean_scale
targets[-1].append(sp.stats.norm(loc=target_mean, scale=target_scale))
# encode samples
samples.append([
sample_gaussian(
q,
p,
D=1e-4,
rs=np.random.RandomState(i))
for i, q in enumerate(targets[-1])])
coding_costs.append([
coding_cost_hybrid(q, p, s)
for q, s in zip(targets[-1], samples[-1])])
plt.figure(figsize=(10, 5))
# plot coding costs
plt.subplot(1, 2, 1)
coding_cost_lower = [cc[0]['lower_bound'] for cc in coding_costs] # mutual information
coding_cost_upper = [cc[0]['upper_bound'] for cc in coding_costs]
coding_cost_zipf = [np.mean([c['zipf'] for c in cc]) for cc in coding_costs]
plt.plot(mean_scales, coding_cost_zipf, 'r')
plt.plot(mean_scales, coding_cost_lower, color=(0.5, 0, 0), ls='--')
plt.plot(mean_scales, coding_cost_upper, color=(0.5, 0, 0), ls='--')
plt.xlabel(r'$\sigma$', fontsize=14)
plt.ylabel('Coding cost [bit]', fontsize=14)
# plot computational cost
plt.subplot(1, 2, 2)
num_iter_hybrid_10 = [np.percentile([s['iterations'] for s in ss], 10) for ss in samples]
num_iter_hybrid_25 = [np.percentile([s['iterations'] for s in ss], 25) for ss in samples]
num_iter_hybrid_50 = [np.median([s['iterations'] for s in ss]) for ss in samples]
num_iter_hybrid_75 = [np.percentile([s['iterations'] for s in ss], 75) for ss in samples]
num_iter_hybrid_90 = [np.percentile([s['iterations'] for s in ss], 90) for ss in samples]
num_iter_mean = [np.mean([s['iterations'] for s in ss]) for ss in samples]
plt.plot(mean_scales, num_iter_hybrid_50, 'r')
plt.plot(mean_scales, num_iter_mean, 'r--')
plt.fill_between(
np.hstack([mean_scales, mean_scales[::-1]]),
np.hstack([num_iter_hybrid_25, num_iter_hybrid_75[::-1]]),
color='r',
alpha=0.1,
lw=0)
plt.xlabel(r'$\sigma$', fontsize=14)
plt.ylabel('Computational cost [#proposals]', fontsize=13)
plt.tight_layout();
Explanation: Below we measure the computational efficiency of the hybrid coding scheme.
End of explanation |
8,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Oscar-nominated Films
Step1: Descriptive Analysis
To better understand general trends in the data. This is a work in progress. last updated on: February 26, 2017
Step2: This can be more or less confirmed by calculating the Pearson correlation coefficient, which measures the linear dependence between two variables
Step3: Q1 and Q4 have a higher coefficient than Q2 and Q3, so that points in the right direction...
This won't really help us determine who will win the actual Oscar, but at least we know that if we want a shot, we need to be releasing in late Q4 and early Q1.
Profitability
How do the financial details contribute to Oscar success?
Step4: Profitability by Award Category
Since 1980, the profitability for films which won an Oscar was on average higher than for all films nominated that year.
Step5: The biggest losers...that won?
This is just a fun fact. There were 5 awards since 1980 that were given to films that actually lost money.
Step6: Other Awards
Do the BAFTAs, Golden Globes, Screen Actors Guild Awards, etc. forecast who is going to win the Oscars? Let's find out...
Step7: It looks like the Golden Globes and Screen Actors Guild awards are better indicators of Oscar success than the BAFTAs. Let's take a look at the same analysis, but for Best Picture. The "Guild" award we use is the Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture. | Python Code:
import re
import numpy as np
import pandas as pd
import scipy.stats as stats
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sb
sb.set(color_codes=True)
sb.set_palette("muted")
np.random.seed(sum(map(ord, "regression")))
awards = pd.read_csv('../data/nominations.csv')
oscars = pd.read_csv('../data/analysis.csv')
Explanation: Analysis of Oscar-nominated Films
End of explanation
sb.countplot(x="release_month", data=oscars)
Explanation: Descriptive Analysis
To better understand general trends in the data. This is a work in progress. last updated on: February 26, 2017
Seasonality
It is well known that movies gunning for an Academy Award aim to be released between December and February, two months before the award ceremony. This is pretty evident looking at a distribution of film release months:
End of explanation
def print_pearsonr(data, dependent, independent):
for field in independent:
coeff = stats.pearsonr(data[dependent], data[field])
print "{0} | coeff: {1} | p-value: {2}".format(field, coeff[0], coeff[1])
print_pearsonr(oscars, 'Oscar', ['q1_release', 'q2_release', 'q3_release', 'q4_release'])
Explanation: This can be more or less confirmed by calculating the Pearson correlation coefficient, which measures the linear dependence between two variables:
End of explanation
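As a quick reminder of what scipy.stats.pearsonr returns (toy data, not the Oscars dataset): a coefficient in [-1, 1] plus a p-value, with 1.0 indicating a perfect positive linear relationship.

```python
import scipy.stats as stats

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]        # perfectly linear in x
coeff, p_value = stats.pearsonr(x, y)
print(coeff, p_value)
```
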
# In case we want to examine the data based on the release decade...
oscars['decade'] = oscars['year'].apply(lambda y: str(y)[2] + "0")
# Adding some fields to slice and dice...
profit = oscars[~oscars['budget'].isnull()]
profit = profit[~profit['box_office'].isnull()]
profit['profit'] = profit['box_office'] - profit['budget']
profit['margin'] = profit['profit'] / profit['box_office']
Explanation: Q1 and Q4 have a higher coefficient than Q2 and Q3, so that points in the right direction...
This won't really help us determine who will win the actual Oscar, but at least we know that if we want a shot, we need to be releasing in late Q4 and early Q1.
Profitability
How do the financial details contribute to Oscar success?
End of explanation
avg_margin_for_all = profit.groupby(['category'])['margin'].mean()
avg_margin_for_win = profit[profit['Oscar'] == 1].groupby(['category'])['margin'].mean()
fig, ax = plt.subplots()
index = np.arange(len(profit['category'].unique()))
rects1 = plt.bar(index, avg_margin_for_win, 0.45, color='r', label='Won')
rects2 = plt.bar(index, avg_margin_for_all, 0.45, color='b', label='All')
plt.xlabel('Award Category')
ax.set_xticklabels(profit['category'].unique(), rotation='vertical')
plt.ylabel('Profit Margin (%)')
plt.title('Average Profit Margin by Award Category')
plt.legend()
plt.show()
Explanation: Profitability by Award Category
Since 1980, the profitability for films which won an Oscar was on average higher than for all films nominated that year.
End of explanation
fields = ['year', 'film', 'category', 'name', 'budget', 'box_office', 'profit', 'margin']
profit[(profit['profit'] < 0) & (profit['Oscar'] == 1)][fields]
Explanation: The biggest losers...that won?
This is just a fun fact. There were 5 awards since 1980 that were given to films that actually lost money.
End of explanation
winning_awards = oscars[['category', 'Oscar', 'BAFTA', 'Golden Globe', 'Guild']]
winning_awards.head()
acting_categories = ['Actor', 'Actress', 'Supporting Actor', 'Supporting Actress']
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'].isin(acting_categories))]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)
plt.title('Count Plot of Wins by Award')
sb.countplot(x="BAFTA", data=y, ax=ax1)
sb.countplot(x="Golden Globe", data=y, ax=ax2)
sb.countplot(x="Guild", data=y, ax=ax3)
print "Pearson correlation for acting categories\n"
print_pearsonr(oscars[oscars['category'].isin(acting_categories)], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])
Explanation: Other Awards
Do the BAFTAs, Golden Globes, Screen Actors Guild Awards, etc. forecast who is going to win the Oscars? Let's find out...
End of explanation
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'] == 'Picture')]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)
plt.title('Count Plot of Wins by Award')
sb.countplot(x="BAFTA", data=y, ax=ax1)
sb.countplot(x="Golden Globe", data=y, ax=ax2)
sb.countplot(x="Guild", data=y, ax=ax3)
print "Pearson correlation for acting categories\n"
print_pearsonr(oscars[oscars['category'] == 'Picture'], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])
Explanation: It looks like if the Golden Globes and Screen Actors Guild awards are better indicators of Oscar success than the BAFTAs. Let's take a look at the same analysis, but for Best Picture. The "Guild" award we use is the Screen Actor Guild Award for Outstanding Performance by a Cast in a Motion Picture.
End of explanation |
8,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command:
Step1: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
Step2: We plot the data for the coverage ratio to get a brief overview of the result. | Python Code:
import pandas as pd
coverage = pd.read_csv("datasets/jacoco.csv")
coverage = coverage[['PACKAGE', 'CLASS', 'LINE_COVERED' ,'LINE_MISSED']]
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED
coverage.head(1)
Explanation: Context
John Doe remarked in #AP1432 that there may be too much code in our application that isn't used at all. Before migrating the application to the new platform, we have to analyze which parts of the system are still in use and which are not.
Idea
To understand how much code isn't used, we recorded the executed code in production with the coverage tool JaCoCo. The measurement took place between 21st Oct 2017 and 27th Oct 2017. The results were exported into a CSV file using the JaCoCo command line tool with the following command:
bash
java -jar jacococli.jar report "C:\Temp\jacoco.exec" --classfiles \
C:\dev\repos\buschmais-spring-petclinic\target\classes --csv jacoco.csv
The CSV file contains all lines of code that were passed through during the measurement's time span. We just take the relevant data and add an additional LINES column to be able to calculate the ratio between covered and missed lines later on.
End of explanation
grouped_by_packages = coverage.groupby("PACKAGE").sum()
grouped_by_packages['RATIO'] = grouped_by_packages.LINE_COVERED / grouped_by_packages.LINES
grouped_by_packages = grouped_by_packages.sort_values(by='RATIO')
grouped_by_packages
Explanation: Analysis
It was stated that whole packages wouldn't be needed anymore and that they could be safely removed. Therefore, we sum up the coverage data per class for each package and calculate the coverage ratio for each package.
End of explanation
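The same groupby-and-ratio step on a toy frame (made-up package names, not the real measurement):

```python
import pandas as pd

coverage = pd.DataFrame({
    'PACKAGE': ['org.demo.web', 'org.demo.web', 'org.demo.util'],
    'LINE_COVERED': [80, 20, 0],
    'LINE_MISSED': [20, 80, 50],
})
coverage['LINES'] = coverage.LINE_COVERED + coverage.LINE_MISSED
grouped = coverage.groupby('PACKAGE').sum()
grouped['RATIO'] = grouped.LINE_COVERED / grouped.LINES
print(grouped.sort_values(by='RATIO'))
```

Here org.demo.util comes out with a ratio of 0.0 — a package that was never touched in production.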
%matplotlib inline
grouped_by_packages[['RATIO']].plot(kind="barh", figsize=(8,2))
Explanation: We plot the data for the coverage ratio to get a brief overview of the result.
End of explanation |
8,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
what we will learn ...
what is CNN
<hr/>
Convolution Operation
Relu layer
<hr/>
Pooling?
<hr/>
Flattening
<hr/>
Full Connection
1. What is CNN
----------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Necessity
Step1: Initialising the CNN
Step2: Step 1 - Convolution
Step3: Step 2 - Pooling
Step4: Adding a second convolutional layer
Step5: Step 3 - Flattening
Step6: Step 4 - Full connection
Step7: Compiling the CNN
Step8: ----------------------Part 2 - Fitting the CNN to the images-------------------------
Step9: ---------------------------Part 3 - Making new predictions--------------------------- | Python Code:
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
Explanation: what we will learn ...
what is CNN
<hr/>
Convolution Operation
Relu layer
<hr/>
Pooling?
<hr/>
Flattening
<hr/>
Full Connection
1. What is CNN
----------------------------------------------------------------------------------
----------------------------------------------------------------------------------
Necessity :::: ::::: Image classification problem --------------------------------
----------------------------------------------------------------------------------
----------------------------------------------------------------------------------
2. Convolution Op
convolution : the process of applying a filter to the input_image many times to obtain multiple feature_maps
many Photoshop-style effects can be produced with the convolution op alone
multiple feature maps == a convolutional layer
3. Relu
Apply ReLU to the convolutional layer made up of multiple feature maps.
(any pixel values at or below 0 will be removed.)
4. Max Pooling
max pooling : turns a single feature map into a pooled feature map by running a max pool filter over it.
advantages : reducing size / calculation / parameters (-> prevent overfitting)
Because of these advantages,
this is what a CNN ends up looking like as a result
5. Flattening
RESULT
6. Full Connection ( ANN at the tail )
****Why a variety of Loss Functions are used
Suppose we have a model that looks like this.
It is a model that can perform classification over two classes, dogs and cats.
We want to compare the loss of two models whose configurations differ slightly, so their performance differs slightly.
Each model produced the following outputs.
If we
1. simply compare how many answers each model got right?
-> identical results
2. square the difference between the label and the ŷ obtained from softmax, then average (Mean Square)
-> the difference becomes somewhat clear
3. plug the label and ŷ into a function called Cross Entropy
-> the difference becomes very clear
<<< CODE >>>
------------------------Part 1 - Building the CNN-------------------------------
Importing the Keras libraries and packages
End of explanation
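The loss-function comparison above can be sketched numerically (made-up softmax outputs for two toy models; labels are one-hot for [cat, dog]):

```python
import numpy as np

labels = np.array([[1.0, 0.0], [0.0, 1.0]])
model_a = np.array([[0.90, 0.10], [0.40, 0.60]])   # correct and fairly confident
model_b = np.array([[0.60, 0.40], [0.45, 0.55]])   # correct, but less sure

def accuracy(y, y_hat):
    return np.mean(np.argmax(y, axis=1) == np.argmax(y_hat, axis=1))

def mean_square(y, y_hat):
    return np.mean((y - y_hat) ** 2)

def cross_entropy(y, y_hat):
    return -np.mean(np.sum(y * np.log(y_hat), axis=1))

# 1. counting correct answers cannot tell the models apart...
print(accuracy(labels, model_a), accuracy(labels, model_b))
# 2. / 3. ...but both losses can, penalizing the less confident model
print(mean_square(labels, model_a), mean_square(labels, model_b))
print(cross_entropy(labels, model_a), cross_entropy(labels, model_b))
```
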
classifier = Sequential()
Explanation: Initialising the CNN
End of explanation
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
Explanation: Step 1 - Convolution
End of explanation
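Independent of Keras, the convolution operation itself is just a sliding dot product; a by-hand sketch (toy image and filter values, no padding, stride 1):

```python
import numpy as np

image = np.array([[1., 2., 0., 1.],
                  [0., 1., 3., 1.],
                  [2., 1., 0., 2.],
                  [1., 0., 1., 3.]])
kernel = np.array([[0.,  1., 0.],
                   [1., -4., 1.],
                   [0.,  1., 0.]])   # discrete Laplacian filter

# Slide the 3x3 kernel over every valid position of the 4x4 image
feature_map = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        feature_map[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
print(feature_map)
```

Keras's Conv2D does the same thing for 32 learned kernels at once, over 3 input channels.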
classifier.add(MaxPooling2D(pool_size = (2, 2)))
Explanation: Step 2 - Pooling
End of explanation
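Likewise, the (2, 2) max-pooling can be reproduced by hand in NumPy (illustration only): keep the largest activation in each non-overlapping 2x2 window.

```python
import numpy as np

feature_map = np.array([[1, 3, 2, 0],
                        [5, 2, 1, 4],
                        [0, 1, 3, 2],
                        [2, 4, 1, 5]])
# Split into 2x2 blocks, then take the max within each block
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
```
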
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
Explanation: Adding a second convolutional layer
End of explanation
classifier.add(Flatten())
Explanation: Step 3 - Flattening
End of explanation
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
Explanation: Step 4 - Full connection
End of explanation
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'] )
Explanation: Compiling the CNN
End of explanation
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
print(type(test_set))
classifier.fit_generator(training_set,
steps_per_epoch = 8000,
epochs = 25,
validation_data = test_set,
validation_steps = 2000)
Explanation: ----------------------Part 2 - Fitting the CNN to the images-------------------------
End of explanation
import numpy as np
from keras.preprocessing import image
test_image = image.load_img('dataset/single_prediction/cat_or_dog_1.jpg', target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] == 1:
prediction = 'dog'
else:
prediction = 'cat'
Explanation: ---------------------------Part 3 - Making new predictions---------------------------
End of explanation |
8,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining and Running a Custom Analytical Model
Here you will be creating a trivial analytical model following the API.
You can start by importing the necessary module components.
Step1: You also need the ability to convert astropy units, manipulate numpy arrays
and use MayaVi for visualisation.
Step2: You are going to try and define a 3D cuboid grid of 20x20x20 with ranges in
Mm, these parameters can be stored in the following astropy
quantities.
Step4: From the above parameters you can derive the grid step size and total size in
each dimension.
Step5: You can define this analytical model as a child of the AnalyticalModel class.
Step6: You can instantiate a copy of the new analytical model.
Step7: Note: you could use default ranges and grid shape using aAnaMod = AnaOnes(). You can now calculate the vector field.
Step8: You can now see the 2D boundary data used for extrapolation.
Step10: You also visualise the 3D vector field | Python Code:
# Module imports
from solarbextrapolation.map3dclasses import Map3D
from solarbextrapolation.analyticalmodels import AnalyticalModel
from solarbextrapolation.visualisation_functions import visualise
Explanation: Defining and Running a Custom Analytical Model
Here you will be creating a trivial analytical model following the API.
You can start by importing the necessary module components.
End of explanation
# General imports
import astropy.units as u
import numpy as np
from mayavi import mlab
Explanation: You also need the ability to convert astropy units, manipulate numpy arrays
and use MayaVi for visualisation.
End of explanation
# Input parameters:
qua_shape = u.Quantity([ 20, 20, 20] * u.pixel)
qua_x_range = u.Quantity([ -80.0, 80 ] * u.Mm)
qua_y_range = u.Quantity([ -80.0, 80 ] * u.Mm)
qua_z_range = u.Quantity([ 0.0, 120 ] * u.Mm)
Explanation: You are going to try and define a 3D cuboid grid of 20x20x20 with ranges in
Mm, these parameters can be stored in the following astropy
quantities.
End of explanation
# Derived parameters (make SI where applicable)
# Aliases for the quantities defined above
tup_shape = tuple(qua_shape.value)
x_range, y_range, z_range = qua_x_range, qua_y_range, qua_z_range
x_0 = x_range[0].to(u.m).value
Dx = (( x_range[1] - x_range[0] ) / ( tup_shape[0] * 1.0 )).to(u.m).value
x_size = Dx * tup_shape[0]
y_0 = y_range[0].to(u.m).value
Dy = (( y_range[1] - y_range[0] ) / ( tup_shape[1] * 1.0 )).to(u.m).value
y_size = Dy * tup_shape[1]
z_0 = z_range[0].to(u.m).value
Dz = (( z_range[1] - z_range[0] ) / ( tup_shape[2] * 1.0 )).to(u.m).value
z_size = Dz * tup_shape[2]
Explanation: From the above parameters you can derive the grid step size and total size in
each dimension.
End of explanation
class AnaOnes(AnalyticalModel):
def __init__(self, **kwargs):
super(AnaOnes, self).__init__(**kwargs)
def _generate_field(self, **kwargs):
# Adding in custom parameters to the metadata
self.meta['analytical_model_routine'] = 'Ones Model'
# Generate a trivial field and return (X,Y,Z,Vec)
arr_4d = np.ones(self.shape.value.tolist() + [3])
self.field = arr_4d
# Extract the LoS Magnetogram from this:
self.magnetogram.data = arr_4d[:,:,0,2]
# Now return the vector field.
return Map3D( arr_4d, self.meta )
Explanation: You can define this analytical model as a child of the AnalyticalModel class.
End of explanation
aAnaMod = AnaOnes(shape=qua_shape, xrange=qua_x_range, yrange=qua_y_range, zrange=qua_z_range)
Explanation: You can instansiate a copy of the new analytical model.
End of explanation
aMap3D = aAnaMod.generate()
Explanation: Note: you could use default ranges and grid shape using aAnaMod = AnaOnes().
You can now calculate the vector field.
End of explanation
aMap2D = aAnaMod.to_los_magnetogram()
aMap2D.peek()
Explanation: You can now see the 2D boundary data used for extrapolation.
End of explanation
fig = visualise(aMap3D,
show_boundary_axes=False,
show_volume_axes=False,
debug=False)
mlab.show()
# Note: you can add boundary axes using:
fig = visualise(aMap3D,
show_boundary_axes=False,
boundary_units=[1.0*u.arcsec, 1.0*u.arcsec],
show_volume_axes=True,
debug=False)
Explanation: You also visualise the 3D vector field:
End of explanation |
8,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partitioner examples
This is a jupyter notebook with a few vignettes that present some of the Python partitioner package's functionality.
Note: Cleaning of text and determination of clauses occurs in the partitionText method. Because of this, it is unwise to pass large, uncleaned pieces of text as 'clauses' directly through the .partition() method (regardless of the type of partition being taken), as this will simply tokenize the text by splitting on " ", producing many long, punctuation-filled phrases, and likely run very slow. As such, best practices only use .partition() for testing and exploring the tool on case-interested clauses.
Step1: Process the English Wiktionary to generate the (default) partition probabilities.
Note: this step can take significant time for large dictionaries (~5 min).
Step2: Perform a few one-off partitions.
Step3: Solve for the informed stochastic expectation partition (given the informed partition probabilities).
Step4: Perform a pure random (uniform) one-off partition.
Step5: Solve for the uniform stochastic expectation partition (given the uniform partition probabilities).
Step6: Build a rank-frequency distribution for a text and determine its Zipf/Simon (bag-of-phrase) $R^2$.
Step7: Process the some other Wiktionaries to generate the partition probabilities.
Note
Step8: Test partitioner on some other languages. | Python Code:
from partitioner import partitioner
from partitioner.methods import *
Explanation: Partitioner examples
This is a jupyter notebook with a few vignettes that present some of the Python partitioner package's functionality.
Note: Cleaning of text and determination of clauses occurs in the partitionText method. Because of this, it is unwise to pass large, uncleaned pieces of text as 'clauses' directly through the .partition() method (regardless of the type of partition being taken), as this will simply tokenize the text by splitting on " ", producing many long, punctuation-filled phrases, and likely run very slow. As such, best practices only use .partition() for testing and exploring the tool on case-interested clauses.
End of explanation
## Vignette 1: Build informed partition data from a dictionary,
## and store to local collection
def preprocessENwiktionary():
pa = partitioner(informed = True, dictionary = "./dictionaries/enwiktionary.txt")
pa.dumpqs(qsname="enwiktionary")
preprocessENwiktionary()
Explanation: Process the English Wiktionary to generate the (default) partition probabilities.
Note: this step can take significant time for large dictionaries (~5 min).
End of explanation
## Vignette 2: An informed, one-off partition of a single clause
def informedOneOffPartition(clause = "How are you doing today?"):
pa = oneoff()
print pa.partition(clause)
informedOneOffPartition()
informedOneOffPartition("Fine, thanks a bunch for asking!")
Explanation: Perform a few one-off partitions.
End of explanation
## Vignette 3: An informed, stochastic expectation partition of a single clause
def informedStochasticPartition(clause = "How are you doing today?"):
pa = stochastic()
print pa.partition(clause)
informedStochasticPartition()
Explanation: Solve for the informed stochastic expectation partition (given the informed partition probabilities).
End of explanation
## Vignette 4: A uniform, one-off partition of a single clause
def uniformOneOffPartition(informed = False, clause = "How are you doing today?", qunif = 0.25):
pa = oneoff(informed = informed, qunif = qunif)
print pa.partition(clause)
uniformOneOffPartition()
uniformOneOffPartition(qunif = 0.75)
Explanation: Perform a pure random (uniform) one-off partition.
End of explanation
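To make the uniform scheme concrete, here is a hedged, stdlib-only sketch — not the package's implementation, and `toy_uniform_partition` is a name invented here. It walks a tokenized clause and starts a new phrase at each gap between tokens with probability `qunif`:

```python
import random

def toy_uniform_partition(tokens, qunif, seed=0):
    # Illustrative only: cut between consecutive tokens with probability qunif.
    rng = random.Random(seed)
    phrases, current = [], [tokens[0]]
    for tok in tokens[1:]:
        if rng.random() < qunif:
            phrases.append(" ".join(current))
            current = [tok]
        else:
            current.append(tok)
    phrases.append(" ".join(current))
    return phrases

# qunif = 0 keeps the clause whole; qunif = 1 cuts at every gap.
tokens = "How are you doing today".split()
whole = toy_uniform_partition(tokens, 0.0)
split_all = toy_uniform_partition(tokens, 1.0)
```

Whatever `qunif` is chosen, joining the phrases back together always reconstructs the original clause, which is the defining invariant of a partition.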
## Vignette 5: A uniform, stochastic expectation partition of a single clause
def uniformStochasticPartition(informed = False, clause = "How are you doing today?", qunif = 0.25):
pa = stochastic(informed = informed, qunif = qunif)
print pa.partition(clause)
uniformStochasticPartition()
uniformStochasticPartition(clause = "Fine, thanks a bunch for asking!")
Explanation: Solve for the uniform stochastic expectation partition (given the uniform partition probabilities).
End of explanation
## Vignette 6: Use the default partitioning method to partition the main partitioner.py file and compute rsq
def testPartitionTextAndFit():
pa = oneoff()
pa.partitionText(textfile = pa.home+"/../README.md")
pa.testFit()
print "R-squared: ",round(pa.rsq,2)
print
phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True)
for j in range(25):
phrase = phrases[j]
print phrase, pa.counts[phrase]
testPartitionTextAndFit()
Explanation: Build a rank-frequency distribution for a text and determine its Zipf/Simon (bag-of-phrase) $R^2$.
End of explanation
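As a rough illustration of what the rank-frequency step computes (a stdlib-only sketch, not partitioner's own code, with made-up phrases): count phrase occurrences, sort by descending frequency, and pair each phrase with its rank.

```python
from collections import Counter

# Toy bag of phrases standing in for a partitioned text.
phrases = ["of the", "of the", "of the",
           "machine learning", "machine learning",
           "entropy"]

counts = Counter(phrases)
# Rank 1 = most frequent phrase.
rank_freq = [(rank, phrase, freq)
             for rank, (phrase, freq) in enumerate(counts.most_common(), start=1)]
```

A Zipf-like fit then regresses log-frequency against log-rank over pairs such as these.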
## Vignette X1: Build informed partition data from other dictionaries,
## and store to local collection
def preprocessOtherWiktionaries():
for lang in ["ru", "pt", "pl", "nl", "it", "fr", "fi", "es", "el", "de", "en"]:
print "working on "+lang+"..."
pa = partitioner(informed = True, dictionary = "./dictionaries/"+lang+".txt")
pa.dumpqs(qsname=lang)
preprocessOtherWiktionaries()
Explanation: Process the some other Wiktionaries to generate the partition probabilities.
Note: These dictionaries are not as well curated and potentially contain phrases from other languages (a consequence of wiktionary construction). As a result, they hold many many more phrases and will take longer to process. However, since the vast majority of these dictionaries are language-correct, effects on the partitioner and its (course) partition probabilities is likely negligable.
End of explanation
from partitioner import partitioner
from partitioner.methods import *
## Vignette X2: Use the default partitioning method to partition the main partitioner.py file and compute rsq
def testFrPartitionTextAndFit():
for lang in ["ru", "pt", "pl", "nl", "it", "fr", "fi", "es", "el", "de", "en"]:
pa = oneoff(qsname = lang)
pa.partitionText(textfile = "./tests/test_"+lang+".txt")
pa.testFit()
print
print lang+" R-squared: ",round(pa.rsq,2)
print
phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True)
for j in range(5):
phrase = phrases[j]
print phrase, pa.counts[phrase]
testFrPartitionTextAndFit()
Explanation: Test partitioner on some other languages.
End of explanation |
8,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TD 7
Step1: Dynamic programming is a way of solving, in a similar fashion, a class of optimization problems that satisfy the same property. We assume that problem $P$ can be split into several parts $P_1$, $P_2$, ... If $S$ is the optimal solution of problem $P$, then each part $S_1$, $S_2$, ... of that solution, applied to the sub-problems, is also optimal.
For example, we look for the shortest path $c(A,B)$ between cities $A$ and $B$. If it goes through city $M$, then the paths $c(A,M)+c(M,B) = c(A,B)$ are also the shortest paths between cities $A,M$ and $M,B$. The proof is a simple argument by contradiction
Step2: This file can be read with the pandas module, introduced in session 10 TD 10
Step3: The values member behaves like a matrix, a list of lists
Step4: We can also use the small example presented in session 4 on files, TD 4
Step5: Each line defines a trip between two cities made in one go, without any stop. Accents have been removed from the file.
Exercise 1
Build the list of cities without duplicates.
Exercise 2
Build a dictionary { (a,b)
Step6: Exercise 7
What is the best assignment of skis to skiers?
Exercise 8
What are the costs of the two algorithms (shortest path and skis)?
Going further
Step7: This file must be decompressed with 7zip if you use pysense < 0.8. On Linux (and Mac), you will need a command described here: tar. | Python Code:
import pyensae
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: TD 7: Dynamic programming and shortest path
End of explanation
import pyensae
pyensae.download_data("matrix_distance_7398.zip", website = "xd")
Explanation: Dynamic programming is a way of solving, in a similar fashion, a class of optimization problems that satisfy the same property. We assume that problem $P$ can be split into several parts $P_1$, $P_2$, ... If $S$ is the optimal solution of problem $P$, then each part $S_1$, $S_2$, ... of that solution, applied to the sub-problems, is also optimal.
For example, we look for the shortest path $c(A,B)$ between cities $A$ and $B$. If it goes through city $M$, then the paths $c(A,M)+c(M,B) = c(A,B)$ are also the shortest paths between cities $A,M$ and $M,B$. The proof is a simple argument by contradiction: if the distance $c(A,M)$ were not optimal, then a shorter path between cities $A$ and $B$ could be built, which contradicts the initial assumption.
These problems generally have a simple expression as a recurrence: if we know how to solve the problem for a sample of size $n$, calling that solution $S(n)$, then we can easily build the solution $S(n+1)$ from $S(n)$. Sometimes the recurrence reaches further back: $S(n+1) = f(S(n), S(n-1), ..., S(0))$.
The data
We retrieve the file matrix_distance_7398.txt, which contains distances between various cities (not all of them).
On récupère le fichier matrix_distance_7398.txt qui contient des distances entre différentes villes (pas toutes).
End of explanation
import pandas
df = pandas.read_csv("matrix_distance_7398.txt", sep="\t", header=None, names=["v1","v2","distance"])
df.head()
Explanation: This file can be read with the pandas module, introduced in session 10 TD 10:
End of explanation
matrice = df.values
matrice[:5]
Explanation: The values member behaves like a matrix, a list of lists:
End of explanation
with open ("matrix_distance_7398.txt", "r") as f :
matrice = [ row.strip(' \n').split('\t') for row in f.readlines() ]
for row in matrice:
row[2] = float(row[2])
print(matrice[:5])
Explanation: We can also use the small example presented in session 4 on files, TD 4: Modules, fichiers, expressions régulières. The data comes as a matrix. The first two columns are strings; the last one is a numeric value that must be converted.
End of explanation
import random
skieurs = [ random.gauss(1.75, 0.1) for i in range(0,10) ]
paires = [ random.gauss(1.75, 0.1) for i in range(0,15) ]
skieurs.sort()
paires.sort()
print(skieurs)
print(paires)
Explanation: Each line defines a trip between two cities made in one go, without any stop. Accents have been removed from the file.
Exercise 1
Build the list of cities without duplicates.
Exercise 2
Build a dictionary { (a,b) : d, (b,a) : d } where a,b are cities and d is the distance between them.
We want to compute the distance between the cities of Charleville-Mezieres and Bordeaux. Does that distance exist in the list of distances we have?
Shortest-path algorithm
We create an array d[v] that contains, or will contain, the optimal distance between city v and Charleville-Mezieres. The value we are looking for is d['Bordeaux']. The array is initialized as follows:
d['Charleville-Mezieres'] = 0
d[v] = infinity for every $v \neq 'Charleville-Mezieres'$.
Exercise 3
Which cells can be filled in easily first?
Exercise 4
Given a city $v$ and another one $w$, we notice that $d[w] > d[v] + dist[w,v]$. What do you suggest doing? Deduce from this an algorithm that determines the shortest distance between Charleville-Mezieres and Bordeaux.
If the solution still eludes you, you can take inspiration from Dijkstra's algorithm.
Assigning the skis
This problem is an example for which one must first prove that the solution satisfies a certain property before a dynamic-programming solution can be applied to it.
$N=10$ skiers enter a shop to rent 10 pairs of skis (out of $M>N$). We want to give each of them a pair that fits (we assume the size of the pair of skis must be as close as possible to the skier's height). We therefore seek to minimize:
$\arg \min_\sigma \sum_{i=1}^{N} \left| t_i - s_{\sigma(i)} \right|$
Where $\sigma$ is a set of $N$ pairs of skis among $M$ (an arrangement, to be more precise).
At first sight, the solution must be searched for in the set of arrangements of $N$ pairs among $M$. But if the pairs and the skiers are sorted by increasing size, $t_1 \leqslant t_2 \leqslant ... \leqslant t_N$ (skier heights) and $s_1 \leqslant s_2 \leqslant ... \leqslant s_M$ (ski sizes), solving the problem amounts to taking the skiers in increasing order and placing each in front of a pair in the order the pairs come. It is as if spaces were inserted into the skiers' sequence without changing its order:
$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline t_1 & & t_2 & t_3 & & & t_4 & ... & t_{N-1} & & t_{N} & \\ \hline s_1 & s_2 & s_3 & s_4 & s_5 & s_6 & s_7 & ... & s_{M-3} & s_{M-2} & s_{M-1} & s_M \\ \hline \end{array}$
Optional exercise
One must first prove that the algorithm suggested above does yield the optimal solution.
Exercise 5
After sorting the skiers and the pairs by increasing size, we define:
$p(n,m) = \sum_{i=1}^{n} \left| t_i - s_{\sigma_m^*(i)} \right|$
Where $\sigma_m^*$ is the best possible choice of $n$ pairs of skis among the first $m$. Express $p(n,m)$ as a recurrence (as a function of $p(n,m-1)$ and $p(n-1,m-1)$). We assume that a skier without a pair of skis corresponds to the case where the pair has size zero.
Exercise 6
Write a function that computes the error of the optimal assignment. For instance, you can pick skiers and pairs of random sizes.
End of explanation
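Exercise 6 asks for exactly this computation. Below is a hedged, illustration-only sketch (`best_assignment_error` is a name invented here) implementing the recurrence $p(n,m)$ from Exercise 5: either the $m$-th pair goes to the $n$-th skier, or the $m$-th pair is skipped.

```python
def best_assignment_error(skier_sizes, pair_sizes):
    """p(n, m): minimal sum of |t_i - s_j| when the first n skiers use
    a subset of the first m pairs (both lists sorted ascending)."""
    t, s = sorted(skier_sizes), sorted(pair_sizes)
    n, m = len(t), len(s)
    INF = float("inf")
    p = [[INF] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        p[0][j] = 0.0  # no skiers left to serve: zero error
    for i in range(1, n + 1):
        for j in range(i, m + 1):
            use = p[i - 1][j - 1] + abs(t[i - 1] - s[j - 1])  # pair j -> skier i
            skip = p[i][j - 1]                                # pair j left unused
            p[i][j] = min(use, skip)
    return p[n][m]
```

The table has $(N+1)(M+1)$ cells and each is filled in constant time, so the cost is $O(NM)$ rather than the combinatorial number of arrangements.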
import pyensae
files = pyensae.download_data("facebook.tar.gz",website="http://snap.stanford.edu/data/")
fe = [ f for f in files if "edge" in f ]
fe
Explanation: Exercise 7
What is the best assignment of skis to skiers?
Exercise 8
What are the costs of the two algorithms (shortest path and skis)?
Going further: degrees of separation on Facebook
The shortest path in a graph is one of the best-known algorithms in programming. It determines the solution at a polynomial cost (each iteration is $O(n^2)$). Dynamic programming captures the move from a combinatorial view to a recursive understanding of the same problem. In the shortest-path case, the combinatorial approach consists of enumerating every path in the graph; the dynamic approach consists of showing that this combinatorial approach leads to a highly redundant computation. Let $e(v,w)$ denote the matrix of road lengths, with $e(v,w) = \infty$ if no road exists between cities $v$ and $w$. We assume $e(v,w)=e(w,v)$. The construction of the array d is defined iteratively and recursively as follows:
Step 0
$d(v) = \infty, \, \forall v \in V$
Step $n$
$d(v) = \left\{ \begin{array}{ll} 0 & \text{if } v = \text{Charleville-Mezieres} \\ \min \{ d(w) + e(v,w) \, | \, w \in V \} & \text{otherwise} \end{array} \right.$
As long as step $n$ keeps making updates ($\sum_v d(v)$ decreases), step $n$ is repeated. The same algorithm can be applied to determine the degree of separation in a social network; it applies almost as-is, provided one defines what a city and a distance between cities mean in this new graph. You can test your ideas on this example graph, Social circles: Facebook. Dijkstra's algorithm computes the shortest path between two nodes of a graph; the Bellman-Ford algorithm is a variant that computes the shortest-path distances from one node to all the other nodes of a graph.
End of explanation
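The iterative update of d(v) described above can be sketched with a plain dictionary of edge lengths. This is a stdlib-only illustration with made-up distances; `shortest_distances` is a hypothetical helper, not part of pyensae.

```python
import math

def shortest_distances(edges, source):
    # edges: {(v, w): length}, assumed symmetric; relax until nothing changes.
    cities = {c for pair in edges for c in pair}
    d = {c: math.inf for c in cities}
    d[source] = 0.0
    updated = True
    while updated:
        updated = False
        for (v, w), length in edges.items():
            if d[v] + length < d[w]:
                d[w] = d[v] + length
                updated = True
    return d

# Toy symmetric graph: going through M is shorter than the direct road A-B.
edges = {("A", "M"): 2.0, ("M", "A"): 2.0,
         ("M", "B"): 3.0, ("B", "M"): 3.0,
         ("A", "B"): 9.0, ("B", "A"): 9.0}
dist = shortest_distances(edges, "A")
```

The while-loop is the "repeat step $n$ while $\sum_v d(v)$ decreases" rule: each pass relaxes every edge, and the loop stops once a full pass makes no update.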
import pandas
df = pandas.read_csv("facebook/1912.edges", sep=" ", names=["v1","v2"])
print(df.shape)
df.head()
Explanation: This file must be decompressed with 7zip if you use pysense < 0.8. On Linux (and Mac), you will need a command described here: tar.
End of explanation |
8,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Now that I've streamlined the MCMC process, I am going to submit multiple chains simultaneously. This notebook will make multiple, similar config files, for broad comparison.
This may be rolled into pearce as a helper function, I haven't decided.
For rmin 0, 0.5, 1.0
Step2: Vpeak SHAM
Mpeak SHAM
HOD | Python Code:
import yaml
import copy
from os import path
import numpy as np
orig_cfg_fname = '/home/users/swmclau2//Git/pearce/bin/mcmc/nh_gg_sham_hsab_mcmc_config.yaml'
with open(orig_cfg_fname, 'r') as yamlfile:
orig_cfg = yaml.load(yamlfile)
orig_cfg
#this will enable easier string formatting
sbatch_template = """#!/bin/bash
#SBATCH --job-name={jobname}
#SBATCH --time=12:00:00
#SBATCH -p kipac
#SBATCH -o /home/users/swmclau2/Git/pearce/bin/mcmc/config/{jobname}.out
#SBATCH --ntasks=16
###SBATCH --exclusive
module load python/2.7.13
module load py-scipystack
module load hdf5/1.10.0p1
module load py-numpy
python /home/users/swmclau2/Git/pearce/pearce/inference/initialize_mcmc.py {jobname}.yaml
python /home/users/swmclau2/Git/pearce/pearce/inference/run_mcmc.py {jobname}.yaml
"""
#emu fnames
#emu_fnames = [#'/nfs/slac/g/ki/ki18/des/swmclau2/xi_gg_zheng07_v4/PearceXiggCosmo.hdf5',\
# '/nfs/slac/g/ki/ki18/des/swmclau2/xi_gg_hsabzheng07_v2/PearceXiggCosmoCorrAB.hdf5']
emu_fnames = [['/scratch/users/swmclau2/wp_zheng07/PearceWpCosmo.hdf5', '/scratch/users/swmclau2/ds_zheng07/PearceDsCosmo.hdf5']]
#emu_cov_fnames = [#'/afs/slac.stanford.edu/u/ki/swmclau2/Git/pearce/bin/covmat/xi_gg_nh_emu_cov_v4.npy',
# '/afs/slac.stanford.edu/u/ki/swmclau2/Git/pearce/bin/covmat/xi_gg_nh_emu_hsab_cov_v4.npy']
emu_names = ['HOD']
np.save('dummy_emu_covmat.npy', np.zeros((18,18)))
meas_cov_fname = '/home/users/swmclau2/Git/pearce/bin/covmat/wp_ds_full_covmat.npy'
# TODO replace with actual ones onace test boxes are done
emu_cov_fnames = [['/home/users/swmclau2/Git/pearce/notebooks/dummy_emu_covmat.npy' for i in xrange(2)]]
Explanation: Now that I've streamlined the MCMC process, I am going to submit multiple chains simultaneously. This notebook will make multiple, similar config files, for broad comparison.
This may be rolled into pearce as a helper function, I haven't decided.
For rmin 0, 0.5, 1.0:
For no ab, HSAB and CorrAB emu:
Vpeak sham
Mpeak sham
HOD
HSAB HOD
End of explanation
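The enumeration of chain configurations can be sketched independently of pearce with itertools.product. This is a hedged illustration: the lists and the job-name pattern mirror the cells that follow, and `jobs` is a throwaway dict introduced here.

```python
from itertools import product

rmins = [None, 0.5, 1.0, 2.0]
emu_names = ["HOD"]

jobs = {}
for rmin, emu_name in product(rmins, emu_names):
    jobname = "HOD_wp_ds_rmin_{rmin}_{emu_name}".format(rmin=rmin, emu_name=emu_name)
    fixed_params = {"z": 0.0}
    if rmin is not None:
        fixed_params["rmin"] = rmin  # only non-default rmin values are pinned
    jobs[jobname] = {"fixed_params": fixed_params}
```

Each entry would then be merged into a copy of the base config and written out alongside its sbatch script, one file pair per job name.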
orig_cfg_fname = '/home/users/swmclau2/Git/pearce/bin/mcmc/nh_gg_mcmc_config.yaml'
with open(orig_cfg_fname, 'r') as yamlfile:
orig_cfg = yaml.load(yamlfile)
tmp_cfg = copy.deepcopy(orig_cfg)
directory = "/home/users/swmclau2/Git/pearce/bin/mcmc/config/"
output_dir = "/home/users/swmclau2/scratch/PearceMCMC/"
jobname_template = "HOD_wp_ds_rmin_{rmin}_{emu_name}"
for rmin in [None, 0.5, 1.0, 2.0]:
for emu_fname, emu_name, emu_cov in zip(emu_fnames, emu_names, emu_cov_fnames):
tmp_cfg['chain']['nwalkers'] = 500
if rmin is not None:
tmp_cfg['emu']['fixed_params'] = {'z': 0.0, 'rmin':rmin}
tmp_cfg['emu']['training_file'] = emu_fname
tmp_cfg['emu']['emu_type'] = ['NashvilleHot' for i in xrange(len(emu_fname))]
tmp_cfg['emu']['emu_cov_fname'] = emu_cov_fnames
tmp_cfg['data']['cov']['meas_cov_fname'] = meas_cov_fname
jobname = jobname_template.format(rmin=rmin, emu_name=emu_name)
tmp_cfg['fname'] = path.join(output_dir, jobname+'.hdf5')
tmp_cfg['sim']= {'gal_type': 'HOD',
'hod_name': 'zheng07',
'hod_params': {'alpha': 1.083,
'logM0': 13.2,
'logM1': 14.2,
'sigma_logM': 0.2},
'nd': '5e-4',
'scale_factor': 1.0,
'sim_hps': {'boxno': 1,
'downsample_factor': '1e-2',
'particles': True,
'realization': 0,
'system': 'sherlock'},
'simname': 'testbox'}
tmp_cfg['data']['sim']['sim_hps']['system'] = 'sherlock'
tmp_cfg['chain']['nsteps'] = 20000
with open(path.join(directory, jobname +'.yaml'), 'w') as f:
yaml.dump(tmp_cfg, f)
with open(path.join(directory, jobname + '.sbatch'), 'w') as f:
f.write(sbatch_template.format(jobname=jobname))
Explanation: Vpeak SHAM
Mpeak SHAM
HOD
End of explanation |
8,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparing Scraped Data for Prediction
This notebook describes the process in which the raw films.csv and nominations.csv files are "wrangled" into a workable format for our classifier(s). At the time of this writing (February 25, 2017), the resulting dataset is only used in a decision tree classifier.
Step1: Pivot Nominations
Since we pull in four award types, we know that each nominee can have a maximum of four line items. The nominations table is pivoted to ensure that each nomination has its own unique line while still maintaining a count of wins per award.
Step2: Merge Oscars with Nominations
We only care about films that were nominated for an Academy Award. The pd.merge function is used to perform a left join between the oscars dataframe and the wins. In other words, we are pruning out any films that were never nominated for an Academy Award based on the join fields.
Step3: Read in Films Dataframe
We pull the films.csv file into a dataframe called films. This is then merged to the awards dataframe from above. Note that we only include specific fields. Fields like metacritic_score and bom_worldwide have been excluded because too many null values exist, which would have an adverse effect on our model.
Step4: So we obviously have some null values, which is disappointing. We'll take the time to clean these up.
Step5: Adding some more fields and removing remaining nulls
While we are pretty happy with our MPAA field, we can't input it into a predictive model as is. The decision tree would not know how to treat a string (e.g., "PG"). So, instead, we pivot those values into separate, boolean fields.
So instead of...
|film | mpaa |
|---------|-------|
|Raging Bull | R |
| Kramer vs. Kramer | PG |
We get...
| film | G | PG | PG13 | R |
|-------|---|----|------|---|
|Raging Bull | 0 | 0 | 0 | 1 |
| Kramer vs. Kramer | 0 | 1 | 0 | 0 |
This essentially "quantifies" the MPAA feature so that our algorithm can properly interpret it. Note that we perform a similar action for production country (just for the USA) and seasonality. | Python Code:
import re
import pandas as pd
import numpy as np
pd.set_option('display.float_format', lambda x: '%.3f' % x)
nominations = pd.read_csv('../data/nominations.csv')
# clean out some obvious mistakes...
nominations = nominations[~nominations['film'].isin(['2001: A Space Odyssey', 'Oliver!', 'Closely Observed Train'])]
nominations = nominations[nominations['year'] >= 1980]
# scraper pulled in some character names instead of film names...
nominations.loc[nominations['film'] == 'Penny Lane', 'film'] = 'Almost Famous'
nominations.loc[nominations['film'] == 'Sister James', 'film'] = 'Doubt'
Explanation: Preparing Scraped Data for Prediction
This notebook describes the process in which the raw films.csv and nominations.csv files are "wrangled" into a workable format for our classifier(s). At the time of this writing (February 25, 2017), the resulting dataset is only used in a decision tree classifier.
End of explanation
wins = pd.pivot_table(nominations, values='winner', index=['year', 'category', 'film', 'name'], columns=['award'], aggfunc=np.sum)
wins = wins.fillna(0) # if a nominee wasn't in a specific ceremony, we just fill it as a ZERO.
wins.reset_index(inplace=True) # flattens the dataframe
wins.head()
Explanation: Pivot Nominations
Since we pull in four award types, we know that each nominee can have a maximum of four line items. The nominations table is pivoted to ensure that each nomination has its own unique line while still maintaining a count of wins per award.
End of explanation
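The pivot can be pictured without pandas as a nested tally (a stdlib-only sketch with made-up rows; the actual cell above relies on pd.pivot_table): one wide row per (year, category, film, name), with one win-count column per award.

```python
from collections import defaultdict

# Hypothetical long-format rows: (year, category, film, name, award, winner).
rows = [
    (1980, "Best Picture", "Raging Bull", "Irwin Winkler", "Oscar", 0),
    (1980, "Best Picture", "Raging Bull", "Irwin Winkler", "Golden Globe", 1),
    (1980, "Best Picture", "Ordinary People", "Ronald L. Schwary", "Oscar", 1),
]

wide = defaultdict(lambda: defaultdict(int))
for year, category, film, name, award, winner in rows:
    wide[(year, category, film, name)][award] += winner

wide = {key: dict(cols) for key, cols in wide.items()}
```

In the real pivot, award columns that never appear for a nominee come out as NaN and are then filled with 0 by fillna(0).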
oscars = nominations[nominations['award'] == 'Oscar'][['year', 'category', 'film', 'name']]
awards = pd.merge(oscars, wins, how='left', on=['year', 'category', 'name', 'film'])
awards.head()
Explanation: Merge Oscars with Nominations
We only care about films that were nominated for an Academy Award. The pd.merge function is used to perform a left join between the oscars dataframe and the wins. In other words, we are pruning out any films that were never nominated for an Academy Award based on the join fields.
End of explanation
films = pd.read_csv('../data/films.csv')
relevant_fields = [
'film',
'country',
'release_date',
'running_time',
'mpaa',
'box_office',
'budget',
'imdb_score',
'rt_audience_score',
'rt_critic_score',
'stars_count',
'writers_count'
]
df = pd.merge(awards, films[relevant_fields], how='left', on='film')
print "Total Observations:", len(df)
print
print "Observations with NaN fields:"
for column in df.columns:
l = len(df[df[column].isnull()])
if l != 0:
print len(df[df[column].isnull()]), "\t", column
Explanation: Read in Films Dataframe
We pull the films.csv file into a dataframe called films. This is then merged to the awards dataframe from above. Note that we only include specific fields. Fields like metacritic_score and bom_worldwide have been excluded because too many null values exist, which would have an adverse effect on our model.
End of explanation
### FIX RUN TIME ###
# df[df['running_time'].isnull()] # Hilary and Jackie
df.loc[df['film'] == 'Hilary and Jackie', 'running_time'] = '121 minutes'
df.loc[df['film'] == 'Fanny and Alexander', 'running_time'] = '121 minutes'
### FIX MPAA RATING ###
df = df.replace('NOT RATED', np.nan)
df = df.replace('UNRATED', np.nan)
df = df.replace('M', np.nan)
df = df.replace('NC-17', np.nan)
df = df.replace('APPROVED', np.nan)
# df[df['mpaa'].isnull()]
df.loc[df['film'].isin(['L.A. Confidential', 'In the Loop']), 'mpaa'] = 'R'
df.loc[df['film'].isin(['True Grit', 'A Room with a View']), 'mpaa'] = 'PG-13'
### FIX COUNTRY ###
# df[df['country'].isnull()] # Ulee's Gold, The Constant Gardner, Dave
df.loc[df['film'].isin(["Ulee's Gold", "Dave"]), 'country'] = 'United States'
df.loc[df['country'].isnull(), 'country'] = 'United Kingdom'
df.loc[df['country'] == 'Germany\\', 'country'] = 'Germany'
df.loc[df['country'] == 'United States & Australia', 'country'] = 'United States'
df['country'].unique()
### FIX STARS COUNT ###
# df[df['stars_count'].isnull()]
df.loc[df['film'].isin(['Before Sunset', 'Before Midnight']), 'stars_count'] = 2
df.loc[df['film'] == 'Dick Tracy', 'stars_count'] = 10
df.loc[df['stars_count'].isnull(), 'stars_count'] = 1
df = df[~df['release_date'].isin(['1970'])]
def to_numeric(value):
multiplier = 1
try:
value = re.sub(r'([$,])', '', str(value)).strip()
value = re.sub(r'\([^)]*\)', '', str(value)).strip()
if 'million' in value:
multiplier = 1000000
elif 'billion' in value:
multiplier = 1000000000
for replace in ['US', 'billion', 'million']:
value = value.replace(replace, '')
value = value.split(' ')[0]
if isinstance(value, str):
value = value.split('-')[0]
value = float(value) * multiplier
except:
return np.nan
return value
def to_runtime(value):
try:
return re.findall(r'\d+', value)[0]
except:
return np.nan
### Apply function to appropriate fields ###
for field in ['box_office', 'budget']:
df[field] = df[field].apply(to_numeric)
df['release_month'] = df['release_date'].apply(lambda y: int(y.split('-')[1]))
df['running_time'] = df['running_time'].apply(to_runtime)
### FIX BOX OFFICE ###
list(df[df['mpaa'].isnull()]['film'].unique())
# cleaned_box_offices = {
# 'Mona Lisa': 5794184,
# 'Testament': 2044982,
# 'Pennies from Heaven': 9171289,
# 'The Year of Living Dangerously': 10300000
# }
# for key, value in cleaned_box_offices.items():
# df.loc[df['film'] == key, 'box_office'] = value
# ### FIX BUDGET ###
# # df[(df['budget'].isnull())]['film'].unique()
# cleaned_budgets = {'Juno': 6500000, 'Blue Sky': 16000000, 'Pollock': 6000000 }
# for key, value in cleaned_budgets.items():
# df.loc[df['film'] == key, 'budget'] = value
Explanation: So we obviously have some null values, which is disappointing. We'll take the time to clean these up.
End of explanation
df = df[~df['mpaa'].isnull()]
df['produced_USA'] = df['country'].apply(lambda x: 1 if x == 'United States' else 0)
for column in df['mpaa'].unique():
df[column.replace('-', '')] = df['mpaa'].apply(lambda x: 1 if x == column else 0)
df['q1_release'] = df['release_month'].apply(lambda m: 1 if m <= 3 else 0)
df['q2_release'] = df['release_month'].apply(lambda m: 1 if m > 3 and m <= 6 else 0)
df['q3_release'] = df['release_month'].apply(lambda m: 1 if m > 6 and m <= 9 else 0)
df['q4_release'] = df['release_month'].apply(lambda m: 1 if m > 9 else 0)
df.to_csv('../data/analysis.csv', index=False)
del df['mpaa']
del df['country']
del df['release_date']
del df['release_month']
del df['budget']
for column in df.columns:
df = df[~df[column].isnull()]
df.to_csv('../data/prepared.csv', index=False)
Explanation: Adding some more fields and removing remaining nulls
While we are pretty happy with our MPAA field, we can't input it into a predictive model as is. The decision tree would not know how to treat a string (e.g., "PG"). So, instead, we pivot those values into separate, boolean fields.
So instead of...
|film | mpaa |
|---------|-------|
|Raging Bull | R |
| Kramer vs. Kramer | PG |
We get...
| film | G | PG | PG13 | R |
|-------|---|----|------|---|
|Raging Bull | 0 | 0 | 0 | 1 |
| Kramer vs. Kramer | 0 | 1 | 0 | 0 |
This essentially "quantifies" the MPAA feature so that our algorithm can properly interpret it. Note that we perform a similar action for production country (just for the USA) and seasonality.
End of explanation |
8,742 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two tensors of dimension 1000 * 1. I want to check how many of the elements are not equal in the two tensors. I think I should be able to do this in a few lines, as in NumPy, but couldn't find a similar function. | Problem:
import numpy as np
import pandas as pd
import torch
A, B = load_data()
cnt_not_equal = int(len(A)) - int((A == B).sum()) |
8,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
deepchem
Step1: Let's see what the dataset looks like
Step2: One of the missions of deepchem is to form a synapse between the chemical and the algorithmic worlds
Step3: Now that we're oriented, let's use ML to do some chemistry.
So, step (2) will entail featurizing the dataset.
The available featurizations that come standard with deepchem are ECFP4 fingerprints, RDKit descriptors, NNScore-style descriptors, and hybrid binding pocket descriptors. Details can be found on deepchem.io.
Step4: Note how we separate our featurizers into those that featurize individual chemical compounds, compound_featurizers, and those that featurize molecular complexes, complex_featurizers.
Now, let's perform the actual featurization. Calling featurizer.featurize() will return an instance of class FeaturizedSamples. Internally, featurizer.featurize() (a) computes the user-specified features on the data, (b) transforms the inputs into X and y NumPy arrays suitable for ML algorithms, and (c) constructs a FeaturizedSamples() instance that has useful methods, such as an iterator, over the featurized data.
Step5: Now, we conduct a train-test split. If you'd like, you can choose splittype="scaffold" instead to perform a train-test split based on Bemis-Murcko scaffolds.
Step6: We generate separate instances of the Dataset() object to hermetically seal the train dataset from the test dataset. This style lends itself easily to validation-set type hyperparameter searches, which we will illustrate in a separate section of this tutorial.
Step7: The performance of many ML algorithms hinges greatly on careful data preprocessing. Deepchem comes standard with a few options for such preprocessing.
Step8: Now, we're ready to do some learning! To set up a model, we will need
Step9: In this simple example, in a few intuitive lines of code, we traced the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
Here, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like."
Step10: The protein-ligand complex view.
The preceding simple example, in a few intuitive lines of code, traces the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
In this next section, we illustrate deepchem's modularity, and thereby the ease with which one can explore different featurization schemes, different models, and combinations thereof, to achieve the best performance on a given dataset. We will demonstrate this by examining protein-ligand interactions.
In the previous section, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like." However, the affinity of a drug for a target is determined not only by the drug itself, of course, but the way in which it interacts with a protein. | Python Code:
%load_ext autoreload
%autoreload 2
%pdb off
# set DISPLAY = True when running tutorial
DISPLAY = False
# set PARALLELIZE to true if you want to use ipyparallel
PARALLELIZE = False
import warnings
warnings.filterwarnings('ignore')
dataset_file= "../datasets/pdbbind_core_df.pkl.gz"
from deepchem.utils.save import load_from_disk
dataset = load_from_disk(dataset_file)
Explanation: deepchem: Machine Learning models for Drug Discovery
Tutorial 1: Basic Protein-Ligand Complex Featurized Models
Written by Evan Feinberg and Bharath Ramsundar
Copyright 2016, Stanford University
Welcome to the deepchem tutorial. In this IPython notebook, one can follow along with the code below to learn how to fit machine learning models with rich predictive power on chemical datasets.
Overview:
In this tutorial, you will trace an arc from loading a raw dataset to fitting a cutting edge ML technique for predicting binding affinities. This will be accomplished by writing simple commands to access the deepchem Python API, encompassing the following broad steps:
Loading a chemical dataset, consisting of a series of protein-ligand complexes.
Featurizing each protein-ligand complex with various featurization schemes.
Fitting a series of models with these featurized protein-ligand complexes.
Visualizing the results.
First, let's point to a "dataset" file. This can come in the format of a CSV file or Pandas DataFrame. Regardless
of file format, it must be columnar data, where each row is a molecular system, and each column represents
a different piece of information about that system. For instance, in this example, every row reflects a
protein-ligand complex, and the following columns are present: a unique complex identifier; the SMILES string
of the ligand; the binding affinity (Ki) of the ligand to the protein in the complex; a Python list of all lines
in a PDB file for the protein alone; and a Python list of all lines in a ligand file for the ligand alone.
This should become clearer with the example. (Make sure to set DISPLAY = True)
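Schematically, one such row might look like this (toy, hypothetical values; real rows hold the full PDB text):

```python
# Hypothetical single row of the columnar dataset described above.
row = {
    "complex_id": "complex_0",                          # unique complex identifier
    "smiles": "CCO",                                    # SMILES string of the ligand
    "label": 7.5,                                       # binding affinity (Ki) of the ligand
    "protein_pdb": ["ATOM      1  N   MET A   1 ..."],  # lines of the protein PDB file
    "ligand_pdb": ["HETATM    1  C1  LIG A   1 ..."],   # lines of the ligand file
}
for column, value in row.items():
    print(column, "->", type(value).__name__)
```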
End of explanation
print("Type of dataset is: %s" % str(type(dataset)))
print(dataset[:5])
print("Shape of dataset is: %s" % str(dataset.shape))
Explanation: Let's see what the dataset looks like:
End of explanation
import nglview
import tempfile
import os
import mdtraj as md
import numpy as np
import deepchem.utils.visualization
from deepchem.utils.visualization import combine_mdtraj, visualize_complex, convert_lines_to_mdtraj
first_protein, first_ligand = dataset.iloc[0]["protein_pdb"], dataset.iloc[0]["ligand_pdb"]
protein_mdtraj = convert_lines_to_mdtraj(first_protein)
ligand_mdtraj = convert_lines_to_mdtraj(first_ligand)
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
Explanation: One of the missions of deepchem is to form a synapse between the chemical and the algorithmic worlds: to be able to leverage the powerful and diverse array of tools available in Python to analyze molecules. This ethos applies to visual as much as quantitative examination:
End of explanation
from deepchem.featurizers.fingerprints import CircularFingerprint
from deepchem.featurizers.basic import RDKitDescriptors
from deepchem.featurizers.nnscore import NNScoreComplexFeaturizer
from deepchem.featurizers.grid_featurizer import GridFeaturizer
grid_featurizer = GridFeaturizer(voxel_width=16.0, feature_types="voxel_combined", voxel_feature_types=["ecfp",
"splif", "hbond", "pi_stack", "cation_pi", "salt_bridge"], ecfp_power=5, splif_power=5,
parallel=True, flatten=True)
compound_featurizers = [CircularFingerprint(size=128)]
# TODO(rbharath, enf): The grid featurizer breaks. Need to debug before code release
complex_featurizers = []
#complex_featurizers = [grid_featurizer]
Explanation: Now that we're oriented, let's use ML to do some chemistry.
So, step (2) will entail featurizing the dataset.
The available featurizations that come standard with deepchem are ECFP4 fingerprints, RDKit descriptors, NNScore-style descriptors, and hybrid binding pocket descriptors. Details can be found on deepchem.io.
End of explanation
#Make a directory in which to store the featurized complexes.
import tempfile, shutil
base_dir = "./tutorial_output"
if not os.path.exists(base_dir):
os.makedirs(base_dir)
data_dir = os.path.join(base_dir, "data")
if not os.path.exists(data_dir):
os.makedirs(data_dir)
featurized_samples_file = os.path.join(data_dir, "featurized_samples.joblib")
feature_dir = os.path.join(base_dir, "features")
if not os.path.exists(feature_dir):
os.makedirs(feature_dir)
samples_dir = os.path.join(base_dir, "samples")
if not os.path.exists(samples_dir):
os.makedirs(samples_dir)
train_dir = os.path.join(base_dir, "train")
if not os.path.exists(train_dir):
os.makedirs(train_dir)
valid_dir = os.path.join(base_dir, "valid")
if not os.path.exists(valid_dir):
os.makedirs(valid_dir)
test_dir = os.path.join(base_dir, "test")
if not os.path.exists(test_dir):
os.makedirs(test_dir)
model_dir = os.path.join(base_dir, "model")
if not os.path.exists(model_dir):
os.makedirs(model_dir)
import deepchem.featurizers.featurize
from deepchem.featurizers.featurize import DataFeaturizer
featurizers = compound_featurizers + complex_featurizers
featurizer = DataFeaturizer(tasks=["label"],
smiles_field="smiles",
protein_pdb_field="protein_pdb",
ligand_pdb_field="ligand_pdb",
compound_featurizers=compound_featurizers,
complex_featurizers=complex_featurizers,
id_field="complex_id",
verbose=False)
if PARALLELIZE:
from ipyparallel import Client
c = Client()
dview = c[:]
else:
dview = None
featurized_samples = featurizer.featurize(dataset_file, feature_dir, samples_dir,
worker_pool=dview, shard_size=32)
from deepchem.utils.save import save_to_disk, load_from_disk
save_to_disk(featurized_samples, featurized_samples_file)
featurized_samples = load_from_disk(featurized_samples_file)
Explanation: Note how we separate our featurizers into those that featurize individual chemical compounds, compound_featurizers, and those that featurize molecular complexes, complex_featurizers.
Now, let's perform the actual featurization. Calling featurizer.featurize() will return an instance of class FeaturizedSamples. Internally, featurizer.featurize() (a) computes the user-specified features on the data, (b) transforms the inputs into X and y NumPy arrays suitable for ML algorithms, and (c) constructs a FeaturizedSamples() instance that has useful methods, such as an iterator, over the featurized data.
End of explanation
splittype = "random"
train_samples, test_samples = featurized_samples.train_test_split(
splittype, train_dir, test_dir, seed=2016)
Explanation: Now, we conduct a train-test split. If you'd like, you can choose splittype="scaffold" instead to perform a train-test split based on Bemis-Murcko scaffolds.
End of explanation
from deepchem.utils.dataset import Dataset
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=compound_featurizers, tasks=["label"])
test_dataset = Dataset(data_dir=test_dir, samples=test_samples,
featurizers=compound_featurizers, tasks=["label"])
Explanation: We generate separate instances of the Dataset() object to hermetically seal the train dataset from the test dataset. This style lends itself easily to validation-set type hyperparameter searches, which we will illustrate in a separate section of this tutorial.
End of explanation
from deepchem.transformers import NormalizationTransformer
from deepchem.transformers import ClippingTransformer
input_transformers = [NormalizationTransformer(transform_X=True, dataset=train_dataset),
ClippingTransformer(transform_X=True, dataset=train_dataset)]
output_transformers = [NormalizationTransformer(transform_y=True, dataset=train_dataset)]
transformers = input_transformers + output_transformers
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
Explanation: The performance of many ML algorithms hinges greatly on careful data preprocessing. Deepchem comes standard with a few options for such preprocessing.
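For intuition, normalization of this kind amounts to z-scoring each feature column (a minimal NumPy sketch, not deepchem's actual implementation):

```python
import numpy as np

# Toy feature matrix: 3 samples, 2 features on very different scales.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_norm = (X - mu) / sigma  # each column now has mean ~0 and std ~1
print(X_norm.mean(axis=0), X_norm.std(axis=0))
```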
End of explanation
from sklearn.ensemble import RandomForestRegressor
from deepchem.models.standard import SklearnModel
task_types = {"label": "regression"}
model_params = {"data_shape": train_dataset.get_data_shape()}
model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor())
model.fit(train_dataset)
model_dir = tempfile.mkdtemp()
model.save(model_dir)
from deepchem.utils.evaluate import Evaluator
import pandas as pd
evaluator = Evaluator(model, train_dataset, output_transformers, verbose=True)
with tempfile.NamedTemporaryFile() as train_csv_out:
with tempfile.NamedTemporaryFile() as train_stats_out:
_, train_r2score = evaluator.compute_model_performance(
train_csv_out, train_stats_out)
evaluator = Evaluator(model, test_dataset, output_transformers, verbose=True)
test_csv_out = tempfile.NamedTemporaryFile()
with tempfile.NamedTemporaryFile() as test_stats_out:
_, test_r2score = evaluator.compute_model_performance(
test_csv_out, test_stats_out)
print(test_csv_out.name)
train_test_performance = pd.concat([train_r2score, test_r2score])
train_test_performance["split"] = ["train", "test"]
train_test_performance
Explanation: Now, we're ready to do some learning! To set up a model, we will need: (a) a dictionary task_types that maps a task, in this case label, i.e. the Ki, to the type of the task, in this case regression. For the multitask use case, one will have a series of keys, each of which is a different task (Ki, solubility, renal half-life, etc.) that maps to a different task type (regression or classification).
To fit a deepchem model, first we instantiate one of the provided (or user-written) model classes. In this case, we have created a convenience class to wrap around any ML model available in scikit-learn so that it can in turn interoperate with deepchem. To instantiate an SklearnModel, you will need (a) task_types, (b) model_params, another dict as illustrated below, and (c) a model_instance defining the type of model you would like to fit, in this case a RandomForestRegressor.
End of explanation
predictions = pd.read_csv(test_csv_out.name)
predictions = predictions.sort(['label'], ascending=[0])
from deepchem.utils.visualization import visualize_ligand
top_ligand = predictions.iloc[0]['ids']
ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==top_ligand]['ligand_pdb'].values[0])
if DISPLAY:
ngltraj = visualize_ligand(ligand1)
ngltraj
worst_ligand = predictions.iloc[predictions.shape[0]-2]['ids']
ligand1 = convert_lines_to_mdtraj(dataset.loc[dataset['complex_id']==worst_ligand]['ligand_pdb'].values[0])
if DISPLAY:
ngltraj = visualize_ligand(ligand1)
ngltraj
Explanation: In this simple example, in a few yet intuitive lines of code, we traced the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
Here, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like."
End of explanation
import deepchem.models.standard
from deepchem.models.standard import SklearnModel
from deepchem.utils.dataset import Dataset
from deepchem.utils.evaluate import Evaluator
from deepchem.hyperparameters import HyperparamOpt
train_dir, validation_dir, test_dir = tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
splittype="random"
train_samples, validation_samples, test_samples = featurized_samples.train_valid_test_split(
splittype, train_dir, validation_dir, test_dir, seed=2016)
task_types = {"label": "regression"}
performance = pd.DataFrame()
def model_builder(task_types, params_dict, verbosity):
n_estimators = params_dict["n_estimators"]
return SklearnModel(
task_types, params_dict,
model_instance=RandomForestRegressor(n_estimators=n_estimators))
params_dict = {
"n_estimators": [10, 20, 40, 80, 160],
"data_shape": [train_dataset.get_data_shape()],
}
optimizer = HyperparamOpt(model_builder, task_types)
for feature_type in (complex_featurizers + compound_featurizers):
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=[feature_type], tasks=["label"])
validation_dataset = Dataset(data_dir=validation_dir, samples=validation_samples,
featurizers=[feature_type], tasks=["label"])
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(validation_dataset)
best_rf, best_rf_hyperparams, all_rf_results = optimizer.hyperparam_search(
params_dict, train_dataset, validation_dataset, output_transformers, metric="r2_score")
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# TODO(rbharath, enf): Need to fix this to work with new hyperparam-opt framework.
#df = pd.DataFrame(performance[['r2_score','split','featurizer']].values, index=performance['n_trees'].values, columns=['r2_score', 'split', 'featurizer'])
#df = df.loc[df['split']=="validation"]
#df = df.drop('split', 1)
#fingerprint_df = df[df['featurizer'].str.contains('fingerprint')].drop('featurizer', 1)
#print fingerprint_df
#fingerprint_df.columns = ['ligand fingerprints']
#grid_df = df[df['featurizer'].str.contains('grid')].drop('featurizer', 1)
#grid_df.columns = ['complex features']
#df = pd.concat([fingerprint_df, grid_df], axis=1)
#print(df)
#plt.clf()
#df.plot()
#plt.ylabel("$R^2$")
#plt.xlabel("Number of trees")
train_dir, validation_dir, test_dir = tempfile.mkdtemp(), tempfile.mkdtemp(), tempfile.mkdtemp()
splittype="random"
train_samples, validation_samples, test_samples = featurized_samples.train_valid_test_split(
splittype, train_dir, validation_dir, test_dir, seed=2016)
feature_type = complex_featurizers
train_dataset = Dataset(data_dir=train_dir, samples=train_samples,
featurizers=feature_type, tasks=["label"])
validation_dataset = Dataset(data_dir=validation_dir, samples=validation_samples,
featurizers=feature_type, tasks=["label"])
test_dataset = Dataset(data_dir=test_dir, samples=test_samples,
featurizers=feature_type, tasks=["label"])
for transformer in transformers:
transformer.transform(train_dataset)
for transformer in transformers:
transformer.transform(validation_dataset)
for transformer in transformers:
transformer.transform(test_dataset)
model_params = {"data_shape": train_dataset.get_data_shape()}
rf_model = SklearnModel(task_types, model_params, model_instance=RandomForestRegressor(n_estimators=20))
rf_model.fit(train_dataset)
model_dir = tempfile.mkdtemp()
rf_model.save(model_dir)
evaluator = Evaluator(rf_model, train_dataset, output_transformers, verbose=True)
with tempfile.NamedTemporaryFile() as train_csv_out:
with tempfile.NamedTemporaryFile() as train_stats_out:
_, train_r2score = evaluator.compute_model_performance(
train_csv_out, train_stats_out)
evaluator = Evaluator(rf_model, test_dataset, output_transformers, verbose=True)
test_csv_out = tempfile.NamedTemporaryFile()
with tempfile.NamedTemporaryFile() as test_stats_out:
predictions, test_r2score = evaluator.compute_model_performance(
test_csv_out, test_stats_out)
train_test_performance = pd.concat([train_r2score, test_r2score])
train_test_performance["split"] = ["train", "test"]
train_test_performance["featurizer"] = [str(feature_type.__class__), str(feature_type.__class__)]
train_test_performance["n_trees"] = [20, 20]  # matches n_estimators of the RandomForestRegressor above
print(train_test_performance)
import deepchem.models.deep
from deepchem.models.deep import SingleTaskDNN
import numpy.random
from operator import mul
import itertools
params_dict = {"activation": ["relu"],
"momentum": [.9],
"batch_size": [50],
"init": ["glorot_uniform"],
"data_shape": [train_dataset.get_data_shape()],
"learning_rate": np.power(10., np.random.uniform(-5, -2, size=5)),
"decay": np.power(10., np.random.uniform(-6, -4, size=5)),
"nb_hidden": [1000],
"nb_epoch": [40],
"nesterov": [False],
"dropout": [.5],
"nb_layers": [1],
"batchnorm": [False],
}
optimizer = HyperparamOpt(SingleTaskDNN, task_types)
best_dnn, best_hyperparams, all_results = optimizer.hyperparam_search(
params_dict, train_dataset, validation_dataset, output_transformers, metric="r2_score", verbosity=None)
dnn_test_csv_out = tempfile.NamedTemporaryFile()
dnn_test_stats_out = tempfile.NamedTemporaryFile()
dnn_test_evaluator = Evaluator(best_dnn, test_dataset)
dnn_test_df, dnn_test_r2score = dnn_test_evaluator.compute_model_performance(
dnn_test_csv_out, dnn_test_stats_out)
dnn_test_r2_score = dnn_test_r2score.iloc[0]["r2_score"]
print("DNN Test set R^2 %f" % (dnn_test_r2_score))
task = "label"
dnn_predicted_test = np.array(dnn_test_df[task + "_pred"])
dnn_true_test = np.array(dnn_test_df[task])
plt.clf()
plt.scatter(dnn_true_test, dnn_predicted_test)
plt.xlabel('Predicted Ki')
plt.ylabel('True Ki')
plt.title(r'DNN predicted vs. true Ki')
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.plot([-3, 3], [-3, 3], marker=".", color='k')
rf_test_csv_out = tempfile.NamedTemporaryFile()
rf_test_stats_out = tempfile.NamedTemporaryFile()
rf_test_evaluator = Evaluator(rf_model, test_dataset)
rf_test_df, rf_test_r2score = rf_test_evaluator.compute_model_performance(
rf_test_csv_out, rf_test_stats_out)
rf_test_r2_score = rf_test_r2score.iloc[0]["r2_score"]
print("RF Test set R^2 %f" % (rf_test_r2_score))
plt.show()
task = "label"
rf_predicted_test = np.array(rf_test_df[task + "_pred"])
rf_true_test = np.array(rf_test_df[task])
plt.scatter(rf_true_test, rf_predicted_test)
plt.xlabel('Predicted Ki')
plt.ylabel('True Ki')
plt.title(r'RF predicted vs. true Ki')
plt.xlim([-2, 2])
plt.ylim([-2, 2])
plt.plot([-3, 3], [-3, 3], marker=".", color='k')
plt.show()
predictions = dnn_test_df.sort(['label'], ascending=[0])
top_complex = predictions.iloc[0]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
top_complex = predictions.iloc[1]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
top_complex = predictions.iloc[predictions.shape[0]-1]['ids']
best_complex = dataset.loc[dataset['complex_id']==top_complex]
protein_mdtraj = convert_lines_to_mdtraj(best_complex["protein_pdb"].values[0])
ligand_mdtraj = convert_lines_to_mdtraj(best_complex["ligand_pdb"].values[0])
complex_mdtraj = combine_mdtraj(protein_mdtraj, ligand_mdtraj)
if DISPLAY:
ngltraj = visualize_complex(complex_mdtraj)
ngltraj
Explanation: The protein-ligand complex view.
The preceding simple example, in a few yet intuitive lines of code, traces the machine learning arc from featurizing a raw dataset to fitting and evaluating a model.
In this next section, we illustrate deepchem's modularity, and thereby the ease with which one can explore different featurization schemes, different models, and combinations thereof, to achieve the best performance on a given dataset. We will demonstrate this by examining protein-ligand interactions.
In the previous section, we featurized only the ligand. The signal we observed in R^2 reflects the ability of circular fingerprints and random forests to learn general features that make ligands "drug-like." However, the affinity of a drug for a target is determined not only by the drug itself, of course, but also by the way in which it interacts with a protein.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planning Algorithms
Do you remember on lessons 2 and 3 we discussed algorithms that basically solve MDPs? That is, find a policy given an exact representation of the environment. In this section, we will explore two such algorithms: value iteration and policy iteration.
Step1: Value Iteration
The Value Iteration algorithm uses dynamic programming by dividing the problem into common sub-problems and leveraging that optimal structure to speed up computations.
Let me show you what value iteration looks like
Step2: As we can see, value iteration expects a set of states, e.g. (0,1,2,3,4), a set of actions, e.g. (0,1), and a set of transition probabilities that represent the dynamics of the environment. Let's take a look at these variables
Step3: You see, the world we are looking into, "FrozenLake-v0", has 16 different states and 4 different actions. The P[10] is basically showing us a peek into the dynamics of the world. For example, in this case, if you are in state "10" (from P[10]) and you take action 0 (see dictionary key 0), you have an equal probability of 0.3333 of landing in state 6, 9, or 14. None of those transitions give you any reward and none of them is terminal.
In contrast, we can see that taking action 2 might transition you to state 11, which is terminal.
Get the hang of it? Let's run it!
Step4: Now, value iteration calculates two important things. First, it calculates V, which tells us how much we should expect from each state if we always act optimally. Second, it gives us pi, which is the optimal policy given V. Let's take a deeper look
Step5: See? This policy basically says in state 0, take action 0. In state 1 take action 3. In state 2 take action 3 and so on. Got it?
Now, we have the "directions" or this "map". With this, we can just use this policy and solve the environment as we interact with it.
Let's try it out!
Step6: That was the agent interacting with the environment. Let's take a look at some of the episodes
Step8: You can look at that link, or better, let's show it in the notebook
Step9: Interesting right? Did you get the world yet?
So, 'S' is the starting state, 'G' the goal. 'F' are frozen grids, and 'H' are holes. Your goal is to go from S to G without falling into any H. The problem is, F is slippery, so oftentimes you are better off trying moves that seem counter-intuitive. But because you are avoiding falling into 'H's, it makes sense in the end. For example, the second row, first column 'F': you can see how our agent was trying so hard to go left!! Smashing his head against the wall?? Silly. But why?
Step10: See how action 0 (left) doesn't have any transition leading to a terminal state??
All other actions give you a 0.333333 chance each of pushing you into the hole in state '5'!!! So it actually makes sense to go left until it slips you downward to state 8.
Cool right?
Step11: See how the "prescribed" action is 0 (left) on the policy calculated by value iteration?
How about the values?
Step12: These show the expected rewards for each state.
Step13: See how the state '15' gives you a reward of +1?? This signal gets propagated all the way to the start state by value iteration, and it shows in the values all across the grid.
Cool? Good.
Step14: If you want to submit to OpenAI Gym, get your API Key and paste it here
Step15: Policy Iteration
There is another method called policy iteration. This method is composed of two other methods, policy evaluation and policy improvement. The logic goes that policy iteration is 'evaluating' a policy to check for convergence (meaning the policy doesn't change), and 'improving' the policy, which is applying something similar to a 1-step value iteration to get a slightly better policy, but definitely not a worse one.
These two functions cycling together are what policy iteration is about.
Can you implement this algorithm yourself? Try it. Make sure to look at the solution notebook in case you get stuck.
I will give you the policy evaluation and policy improvement methods; you build the policy iteration, cycling between the evaluation and improvement methods until there are no changes to the policy.
Step16: After you implement the algorithm, you can run it and calculate the optimal policy
Step18: And, of course, interact with the environment looking at the "directions" or "policy"
Step19: Similar to before. Policies could be slightly different if there is a state in which more than one action gives the same value in the end.
Step20: That's it. Let's wrap up.
Step21: If you want to submit to OpenAI Gym, get your API Key and paste it here | Python Code:
import numpy as np
import pandas as pd
import tempfile
import pprint
import json
import sys
import gym
from gym import wrappers
from subprocess import check_output
from IPython.display import HTML
Explanation: Planning Algorithms
Do you remember on lessons 2 and 3 we discussed algorithms that basically solve MDPs? That is, find a policy given an exact representation of the environment. In this section, we will explore two such algorithms: value iteration and policy iteration.
End of explanation
def value_iteration(S, A, P, gamma=.99, theta = 0.0000001):
V = np.random.random(len(S))
for i in range(100000):
old_V = V.copy()
Q = np.zeros((len(S), len(A)), dtype=float)
for s in S:
for a in A:
for prob, s_prime, reward, done in P[s][a]:
Q[s][a] += prob * (reward + gamma * old_V[s_prime] * (not done))
V[s] = Q[s].max()
if np.all(np.abs(old_V - V) < theta):
break
pi = np.argmax(Q, axis=1)
return pi, V
Explanation: Value Iteration
The Value Iteration algorithm uses dynamic programming by dividing the problem into common sub-problems and leveraging that optimal structure to speed up computations.
Let me show you what value iteration looks like:
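Before applying it to a real environment, here is a self-contained sanity check of the same one-step backup on a made-up two-state MDP (toy transitions invented for illustration; this sketch re-implements the backup inline so it runs standalone):

```python
import numpy as np

# Toy MDP: P[s][a] is a list of (prob, next_state, reward, done) tuples.
P = {
    0: {0: [(1.0, 0, 0.0, False)],   # action 0: stay in state 0, no reward
        1: [(1.0, 1, 1.0, True)]},   # action 1: reach terminal state 1, reward +1
    1: {0: [(1.0, 1, 0.0, True)],
        1: [(1.0, 1, 0.0, True)]},
}
gamma, theta = 0.99, 1e-10
V = np.zeros(2)
while True:
    Q = np.zeros((2, 2))
    for s in P:
        for a in P[s]:
            for prob, s2, reward, done in P[s][a]:
                Q[s][a] += prob * (reward + gamma * V[s2] * (not done))
    new_V = Q.max(axis=1)
    if np.abs(new_V - V).max() < theta:
        break
    V = new_V
pi = Q.argmax(axis=1)
print(V, pi)  # state 0 is worth 1.0 and the optimal action there is 1
```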
End of explanation
mdir = tempfile.mkdtemp()
env = gym.make('FrozenLake-v0')
env = wrappers.Monitor(env, mdir, force=True)
S = range(env.env.observation_space.n)
A = range(env.env.action_space.n)
P = env.env.env.P
S
A
P[10]
Explanation: As we can see, value iteration expects a set of states, e.g. (0,1,2,3,4), a set of actions, e.g. (0,1), and a set of transition probabilities that represent the dynamics of the environment. Let's take a look at these variables:
End of explanation
pi, V = value_iteration(S, A, P)
Explanation: You see, the world we are looking into, "FrozenLake-v0", has 16 different states and 4 different actions. The P[10] is basically showing us a peek into the dynamics of the world. For example, in this case, if you are in state "10" (from P[10]) and you take action 0 (see dictionary key 0), you have an equal probability of 0.3333 of landing in state 6, 9, or 14. None of those transitions give you any reward and none of them is terminal.
In contrast, we can see that taking action 2 might transition you to state 11, which is terminal.
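To make the tuple structure concrete, here is the action-0 entry hard-coded from the description above (illustration only; the real values come from env.env.env.P):

```python
# Hard-coded from the P[10] printout described above (illustration only):
# each entry is (probability, next_state, reward, done).
p10_action0 = [(0.3333333333333333, 6, 0.0, False),
               (0.3333333333333333, 9, 0.0, False),
               (0.3333333333333333, 14, 0.0, False)]
for prob, s_prime, reward, done in p10_action0:
    print("prob=%.4f -> state %d, reward %.1f, terminal=%s" % (prob, s_prime, reward, done))
total = sum(prob for prob, _, _, _ in p10_action0)
print("probabilities sum to", total)
```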
Get the hang of it? Let's run it!
End of explanation
V
pi
Explanation: Now, value iteration calculates two important things. First, it calculates V, which tells us how much we should expect from each state if we always act optimally. Second, it gives us pi, which is the optimal policy given V. Let's take a deeper look:
End of explanation
for _ in range(10000):
state = env.reset()
while True:
state, reward, done, info = env.step(pi[state])
if done:
break
Explanation: See? This policy basically says in state 0, take action 0. In state 1 take action 3. In state 2 take action 3 and so on. Got it?
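For readability, the FrozenLake action indices map to directions (standard gym convention, assumed here; the tutorial only states explicitly that action 0 is left):

```python
# Assumed gym FrozenLake convention: 0=left, 1=down, 2=right, 3=up.
ACTION_NAMES = {0: "left", 1: "down", 2: "right", 3: "up"}
example_policy = [0, 3, 3, 3]  # e.g. the first four policy entries read above
print([ACTION_NAMES[a] for a in example_policy])
```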
Now, we have the "directions" or this "map". With this, we can just use this policy and solve the environment as we interact with it.
Let's try it out!
End of explanation
last_video = env.videos[-1][0]
out = check_output(["asciinema", "upload", last_video])
out = out.decode("utf-8").replace('\n', '').replace('\r', '')
print(out)
Explanation: That was the agent interacting with the environment. Let's take a look at some of the episodes:
End of explanation
castid = out.split('/')[-1]
html_tag =
<script type="text/javascript"
src="https://asciinema.org/a/{0}.js"
id="asciicast-{0}"
async data-autoplay="true" data-size="big">
</script>
html_tag = html_tag.format(castid)
HTML(data=html_tag)
Explanation: You can look at that link, or better, let's show it in the notebook:
End of explanation
P[4]
Explanation: Interesting right? Did you get the world yet?
So, 'S' is the starting state, 'G' the goal. 'F' are frozen grids, and 'H' are holes. Your goal is to go from S to G without falling into any H. The problem is, F is slippery, so oftentimes you are better off trying moves that seem counter-intuitive. But because you are avoiding falling into 'H's, it makes sense in the end. For example, the second row, first column 'F': you can see how our agent was trying so hard to go left!! Smashing his head against the wall?? Silly. But why?
End of explanation
pi
Explanation: See how action 0 (left) doesn't have any transition leading to a terminal state??
All other actions give you a 0.333333 chance each of pushing you into the hole in state '5'!!! So it actually makes sense to go left until it slips you downward to state 8.
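A small self-contained check of that claim, with the state-4 transitions hard-coded for illustration (assumed to match the P[4] printout; each tuple is (prob, next_state, reward, done)):

```python
# Assumed transitions for state 4 of the slippery 4x4 FrozenLake (illustration only).
P4 = {
    0: [(1/3, 0, 0.0, False), (1/3, 4, 0.0, False), (1/3, 8, 0.0, False)],  # left: safe
    1: [(1/3, 4, 0.0, False), (1/3, 8, 0.0, False), (1/3, 5, 0.0, True)],   # down: may hit hole 5
    2: [(1/3, 8, 0.0, False), (1/3, 5, 0.0, True), (1/3, 0, 0.0, False)],   # right: may hit hole 5
    3: [(1/3, 5, 0.0, True), (1/3, 0, 0.0, False), (1/3, 4, 0.0, False)],   # up: may hit hole 5
}
for a, transitions in sorted(P4.items()):
    p_hole = sum(p for p, s2, _, _ in transitions if s2 == 5)
    print("action %d: probability of falling into the hole = %.4f" % (a, p_hole))
```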
Cool right?
End of explanation
V
Explanation: See how the "prescribed" action is 0 (left) on the policy calculated by value iteration?
How about the values?
End of explanation
P[15]
Explanation: These show the expected rewards for each state.
End of explanation
env.close()
Explanation: See how the state '15' gives you a reward of +1?? This signal gets propagated all the way to the start state by value iteration, and it shows in the values all across the grid.
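The propagation can be seen on a toy chain MDP (made up for illustration): only the final transition pays +1, yet repeated backups push discounted value back to the start.

```python
import numpy as np

gamma = 0.99
# Chain 0 -> 1 -> 2 -> 3; only the transition into state 3 pays +1 and terminates.
P = {s: [(1.0, s + 1, 1.0 if s == 2 else 0.0, s + 1 == 3)] for s in range(3)}
V = np.zeros(4)
for _ in range(10):  # a few sweeps are plenty on this tiny chain
    for s in range(2, -1, -1):
        V[s] = sum(p * (r + gamma * V[s2] * (not d)) for p, s2, r, d in P[s])
print(V)  # [0.9801, 0.99, 1.0, 0.0]
```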
Cool? Good.
End of explanation
gym.upload(mdir, api_key='<YOUR OPENAI API KEY>')
Explanation: If you want to submit to OpenAI Gym, get your API Key and paste it here:
End of explanation
def policy_evaluation(pi, S, A, P, gamma=.99, theta=0.0000001):
V = np.zeros(len(S))
while True:
delta = 0
for s in S:
v = V[s]
V[s] = 0
for prob, dst, reward, done in P[s][pi[s]]:
V[s] += prob * (reward + gamma * V[dst] * (not done))
delta = max(delta, np.abs(v - V[s]))
if delta < theta:
break
return V
def policy_improvement(pi, V, S, A, P, gamma=.99):
for s in S:
old_a = pi[s]
Qs = np.zeros(len(A), dtype=float)
for a in A:
for prob, s_prime, reward, done in P[s][a]:
Qs[a] += prob * (reward + gamma * V[s_prime] * (not done))
pi[s] = np.argmax(Qs)
V[s] = np.max(Qs)
return pi, V
def policy_iteration(S, A, P, gamma=.99):
pi = np.random.choice(A, len(S))
while True:
V = policy_evaluation(pi, S, A, P, gamma)
new_pi, new_V = policy_improvement(
pi.copy(), V.copy(), S, A, P, gamma)
if np.all(pi == new_pi):
break
pi = new_pi
V = new_V
return pi
Explanation: Policy Iteration
There is another method called policy iteration. This method is composed of two other methods, policy evaluation and policy improvement. The logic goes that policy iteration is 'evaluating' a policy to check for convergence (meaning the policy doesn't change), and 'improving' the policy, which is applying something similar to a 1-step value iteration to get a slightly better policy, but definitely not a worse one.
These two functions cycling together are what policy iteration is about.
Can you implement this algorithm yourself? Try it. Make sure to look at the solution notebook in case you get stuck.
I will give you the policy evaluation and policy improvement methods; you build the policy iteration, cycling between the evaluation and improvement methods until there are no changes to the policy.
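As a self-contained illustration of the evaluation half (toy two-state MDP and a fixed policy, both invented for this sketch):

```python
import numpy as np

# Toy MDP: P[s][a] lists (prob, next_state, reward, done) tuples.
P = {
    0: {0: [(1.0, 1, 1.0, False)]},  # from 0, the only action pays +1 and moves to 1
    1: {0: [(1.0, 1, 0.0, True)]},   # state 1 is a terminal sink
}
pi = {0: 0, 1: 0}  # the fixed policy being evaluated
gamma, theta = 0.99, 1e-10
V = np.zeros(2)
while True:
    delta = 0.0
    for s in P:
        v = 0.0
        for prob, s2, reward, done in P[s][pi[s]]:
            v += prob * (reward + gamma * V[s2] * (not done))
        delta = max(delta, abs(v - V[s]))
        V[s] = v
    if delta < theta:
        break
print(V)  # the fixed policy is worth 1.0 from state 0
```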
End of explanation
mdir = tempfile.mkdtemp()
env = gym.make('FrozenLake-v0')
env = wrappers.Monitor(env, mdir, force=True)
S = range(env.env.observation_space.n)
A = range(env.env.action_space.n)
P = env.env.env.P
pi = policy_iteration(S, A, P)
print(pi)
Explanation: After you implement the algorithm, you can run it and calculate the optimal policy:
End of explanation
for _ in range(10000):
state = env.reset()
while True:
state, reward, done, info = env.step(pi[state])
if done:
break
last_video = env.videos[-1][0]
out = check_output(["asciinema", "upload", last_video])
out = out.decode("utf-8").replace('\n', '').replace('\r', '')
print(out)
castid = out.split('/')[-1]
html_tag = """
<script type="text/javascript"
    src="https://asciinema.org/a/{0}.js"
    id="asciicast-{0}"
    async data-autoplay="true" data-size="big">
</script>"""
html_tag = html_tag.format(castid)
HTML(data=html_tag)
Explanation: And, of course, interact with the environment looking at the "directions" or "policy":
End of explanation
V
pi
Explanation: Similar to before. Policies could be slightly different if there is a state in which more than one action gives the same value in the end.
End of explanation
env.close()
Explanation: That's it. Let's wrap up.
End of explanation
gym.upload(mdir, api_key='<YOUR OPENAI API KEY>')
Explanation: If you want to submit to OpenAI Gym, get your API Key and paste it here:
End of explanation |
8,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Naive Bayes Classifiers
Naive Bayes classifiers are a family of classifiers that are quite similar to the linear models discussed previously. However, they tend to be even faster in training. The price paid for this efficiency is that naive Bayes models often provide generalization performance that is slightly worse than that of linear classifiers like LogisticRegression and LinearSVC.
The reason that naive Bayes models are so efficient is that they learn parameters by looking at each feature individually and collect simple per-class statistics from each feature. There are three kinds of naive Bayes classifiers implemented in scikit-learn
Step1: The BernoulliNB classifier counts how often every feature of each class is not zero. This is most easily understood with an example
Step2: Here, we have four data points, with four binary features each. There are two classes, 0 and 1. For class 0 (the first and third data points), the first feature is zero two times and nonzero zero times, the second feature is zero one time and nonzero one time, and so on. These same counts are then calculated for the data points in the second class. Counting the nonzero entries per class in essence looks like this
Step3: The other two naive Bayes models, MultinomialNB and GaussianNB, are slightly different in what kinds of statistics they compute. MultinomialNB takes into account the average value of each feature for each class, while GaussianNB stores the average value as well as the standard deviation of each feature for each class.
To make a prediction, a data point is compared to the statistics for each of the classes, and the best matching class is predicted. Interestingly, for both MultinomialNB and BernoulliNB, this leads to a prediction formula that is of the same form as in the linear models. Unfortunately, coef_ for the naive Bayes models has a somewhat different meaning than in the linear models, in that coef_ is not the same as w.
GaussianNB
Let's apply a GaussianNB model to the Iris dataset | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Naive Bayes Classifiers
Naive Bayes classifiers are a family of classifiers that are quite similar to the linear models discussed previously. However, they tend to be even faster in training. The price paid for this efficiency is that naive Bayes models often provide generalization performance that is slightly worse than that of linear classifiers like LogisticRegression and LinearSVC.
The reason that naive Bayes models are so efficient is that they learn parameters by looking at each feature individually and collect simple per-class statistics from each feature. There are three kinds of naive Bayes classifiers implemented in scikit-learn: GaussianNB, BernoulliNB, and MultinomialNB. GaussianNB can be applied to any continuous data, while BernoulliNB assumes binary data and MultinomialNB assumes count data (that is, that each feature represents an integer count of something, like how often a word appears in a sentence). BernoulliNB and MultinomialNB are mostly used in text data classification.
Advantages of Naive Bayes
Very fast to train and to predict
Training procedure is easy to understand
The models work very well with high-dimensional sparse data and are relatively robust to the parameters
Disadvantages of Naive Bayes
Relatively poor generalization performance
Disclaimer: Much of the code in this notebook was borrowed from the excellent book Introduction to Machine Learning with Python by Andreas Muller and Sarah Guido.
End of explanation
X = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 0]])
y = np.array([0, 1, 0, 1])
Explanation: The BernoulliNB classifier counts how often every feature of each class is not zero. This is most easily understood with an example:
End of explanation
counts = {}
for label in np.unique(y):
    # iterate over each class
    # count (sum) entries of 1 per feature
    counts[label] = X[y == label].sum(axis=0)
print("Feature counts:\n{}".format(counts))
Explanation: Here, we have four data points, with four binary features each. There are two classes, 0 and 1. For class 0 (the first and third data points), the first feature is zero two times and nonzero zero times, the second feature is zero one time and nonzero one time, and so on. These same counts are then calculated for the data points in the second class. Counting the nonzero entries per class in essence looks like this:
End of explanation
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=5)
gnb = GaussianNB()
gnb.fit(X_train, y_train)
print("Accuracy on training set: {:.2f}".format(gnb.score(X_train, y_train)))
print("Accuracy on test set: {:.2f}".format(gnb.score(X_test, y_test)))
Explanation: The other two naive Bayes models, MultinomialNB and GaussianNB, are slightly different in what kinds of statistics they compute. MultinomialNB takes into account the average value of each feature for each class, while GaussianNB stores the average value as well as the standard deviation of each feature for each class.
To make a prediction, a data point is compared to the statistics for each of the classes, and the best matching class is predicted. Interestingly, for both MultinomialNB and BernoulliNB, this leads to a prediction formula that is of the same form as in the linear models. Unfortunately, coef_ for the naive Bayes models has a somewhat different meaning than in the linear models, in that coef_ is not the same as w.
GaussianNB
Let's apply a GaussianNB model to the Iris dataset:
End of explanation |
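To make those per-class statistics concrete, here is a rough numpy sketch of the idea behind GaussianNB (a simplified illustration, not scikit-learn's actual implementation): store each class's feature means and standard deviations, then predict the class with the highest Gaussian log-likelihood plus log-prior.

```python
import numpy as np

def gaussian_nb_predict(X_train, y_train, X_test):
    """Toy GaussianNB: per-class feature means/stds, pick max log-likelihood."""
    classes = np.unique(y_train)
    log_probs = []
    for c in classes:
        Xc = X_train[y_train == c]
        mean = Xc.mean(axis=0)
        std = Xc.std(axis=0) + 1e-9          # avoid division by zero
        prior = np.log(len(Xc) / len(X_train))
        # log Gaussian density, summed over (assumed independent) features
        ll = -0.5 * np.sum(np.log(2 * np.pi * std ** 2)
                           + ((X_test - mean) / std) ** 2, axis=1)
        log_probs.append(prior + ll)
    return classes[np.argmax(np.array(log_probs), axis=0)]

# Two well-separated 1-D classes
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
print(gaussian_nb_predict(X, y, np.array([[0.05], [5.05]])))  # [0 1]
```

This omits the variance smoothing and numerical care the real GaussianNB applies, but the per-class mean/std bookkeeping is the same idea.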
8,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 3
Step2: Then we're going to try this with the MNIST dataset, which I've included a simple interface for in the libs module.
Step3: Let's take a look at what this returns
Step4: So we can see that there are a few interesting accessors. We're not going to worry about the labels until a bit later, when we talk about a different type of model which can go from the input image to predicting which label the image is. But for now, we're going to focus on trying to encode the image and be able to reconstruct the image from our encoding. Let's take a look at the images, which are stored in the variable X. Remember, in this course, we'll always use the variable X to denote the input to a network, and we'll use the variable Y to denote its output.
Step5: So each image has 784 features, and there are 70k of them. If we want to draw the image, we're going to have to reshape it to a square. 28 x 28 is 784. So we're just going to reshape it to a square so that we can see all the pixels arranged in rows and columns instead of one giant vector.
Step6: Let's take a look at the mean of the dataset
Step7: And the standard deviation
Step8: So recall from session 1 that these two images are really saying what's more or less constant across every image, and what's changing. We're going to try and use an autoencoder to try to encode everything that could possibly change in the image.
<a name="fully-connected-model"></a>
Fully Connected Model
To try and encode our dataset, we are going to build a series of fully connected layers that get progressively smaller. So in neural net speak, every pixel is going to become its own input neuron. And from the original 784 neurons, we're going to slowly reduce that information down to smaller and smaller numbers. It's often standard practice to use other powers of 2 or 10. I'll create a list of the number of dimensions we'll use for each new layer.
Step9: So we're going to reduce our 784 dimensions down to 512 by multiplying them by a 784 x 512 dimensional matrix. Then we'll do the same thing again using a 512 x 256 dimensional matrix, to reduce our dimensions down to 256 dimensions, and then again to 128 dimensions, then finally to 64. To get back to the size of the image, we're just going to do the reverse. But we're going to use the exact same matrices. We do that by taking the transpose of the matrix, which reshapes the matrix so that the rows become columns, and vice-versa. So our last matrix, which was 128 rows x 64 columns, when transposed, becomes 64 rows x 128 columns.
So by sharing the weights in the network, we're only really learning half of the network, and those 4 matrices are going to make up the bulk of our model. We just have to find out what they are using gradient descent.
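Just to see the shapes involved, here is a plain numpy sketch of that tied-weight idea with random matrices (the actual model below is built in TensorFlow and trained with gradient descent; the relu nonlinearity here is only one possible choice):

```python
import numpy as np

dimensions = [512, 256, 128, 64]
X = np.random.rand(100, 784)      # a batch of 100 flattened MNIST images

Ws = []                           # the encoder matrices, reused by the decoder
n_input, h = 784, X
for n_output in dimensions:
    W = np.random.randn(n_input, n_output) * 0.01
    h = np.maximum(h @ W, 0)      # linear layer followed by a relu nonlinearity
    Ws.append(W)
    n_input = n_output
z = h                             # the 64-dimensional encoding

for W in reversed(Ws):            # decode by reusing each matrix, transposed
    h = np.maximum(h @ W.T, 0)

print(z.shape, h.shape)           # (100, 64) (100, 784)
```

Only the four encoder matrices exist; the decoder is just their transposes applied in reverse order.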
We're first going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. We're going to pass our entire dataset in minibatches. So we'll send 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible in the graph.
Step10: Now we're going to create a network which will perform a series of multiplications on X, followed by adding a bias, and then wrapping all of this in a non-linearity
Step11: So now we've created a series of multiplications in our graph which take us from our input of batch size times number of features which started as None x 784, and then we're multiplying it by a series of matrices which will change the size down to None x 64.
Step12: In order to get back to the original dimensions of the image, we're going to reverse everything we just did. Let's see how we do that
Step13: After this, our current_input will become the output of the network
Step14: Now that we have the output of the network, we just need to define a training signal to train the network with. To do that, we create a cost function which will measure how well the network is doing
Step15: And then take the mean again across batches
Step16: We can now train our network just like we did in the last session. We'll need to create an optimizer which takes a parameter learning_rate. And we tell it that we want to minimize our cost, which is measuring the difference between the output of the network and the input.
Step17: Now we'll create a session to manage the training in minibatches
Step18: Now we'll train
Step19: <a name="convolutional-autoencoder"></a>
Convolutional Autoencoder
To get even better encodings, we can also try building a convolutional network. Why would a convolutional network perform any differently from a fully connected one? Let's see what we were doing in the fully connected network. For every pixel in our input, we have a set of weights corresponding to every output neuron. Those weights are unique to each pixel. Each pixel gets its own row in the weight matrix. That really doesn't make a lot of sense, since we would guess that nearby pixels are probably not going to be so different. And we're not really encoding what's happening around that pixel, just what that one pixel is doing.
In a convolutional model, we're explicitly modeling what happens around a pixel. And we're using the exact same convolutions no matter where in the image we are. But we're going to use a lot of different convolutions.
Recall in session 1 we created a Gaussian and Gabor kernel and used these to convolve an image to either blur it or to accentuate edges. Armed with what you know now, you could try to train a network to learn the parameters that map an untouched image to a blurred or edge-filtered version of it. What you should find is that the kernel will look sort of like what we built by hand. I'll leave that as an exercise for you.
But in fact, that's too easy really. That's just 1 filter you would have to learn. We're going to see how we can use many convolutional filters, way more than 1, and how it will help us to encode the MNIST dataset.
To begin we'll need to reset the current graph and start over.
Step20: Since X is currently [batch, height*width], we need to reshape it to a
4-D tensor to use it in a convolutional graph. Remember back to the first session that in order to perform convolution, we have to use 4-dimensional tensors describing the
Step21: We'll now setup the first convolutional layer. Remember from Session 2 that the weight matrix for convolution should be
[height x width x input_channels x output_channels]
Think a moment about how this is different to the fully connected network. In the fully connected network, every pixel was being multiplied by its own weight to every other neuron. With a convolutional network, we use the extra dimensions to allow the same set of filters to be applied everywhere across an image. This is also known in the literature as weight sharing, since we're sharing the weights no matter where in the input we are. That's unlike the fully connected approach, which has unique weights for every pixel. What's more, after we've performed the convolution, we've retained the spatial organization of the input. We still have dimensions of height and width. That's again unlike the fully connected network which effectively shuffles or takes into account information from everywhere, not at all caring about where anything is. That can be useful or not depending on what we're trying to achieve. Often, it is something we might want to do after a series of convolutions to encode translation invariance. Don't worry about that for now. With MNIST especially we won't need to do that since all of the numbers are in the same position.
Now with our tensor ready, we're going to do what we've just done with the fully connected autoencoder. Except, instead of performing matrix multiplications, we're going to create convolution operations. To do that, we'll need to decide on a few parameters including the filter size, how many convolution filters we want, and how many layers we want. I'll start with a fairly small network, and let you scale this up in your own time.
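Before the TensorFlow version, here is a toy numpy sketch of what a "valid" convolution with a [height x width x input_channels x output_channels] filter tensor computes. Note that the same W is applied at every position (i, j), which is exactly the weight sharing described above; the real notebook uses tf.nn.conv2d for this.

```python
import numpy as np

def conv2d_valid(images, W):
    """images: [batch, H, W, in_channels]; W: [kh, kw, in_channels, out_channels]."""
    n, H, Wd, _ = images.shape
    kh, kw, _, out_ch = W.shape
    out = np.zeros((n, H - kh + 1, Wd - kw + 1, out_ch))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = images[:, i:i + kh, j:j + kw, :]  # same W at every position
            out[:, i, j, :] = np.tensordot(patch, W, axes=([1, 2, 3], [0, 1, 2]))
    return out

imgs = np.random.rand(2, 28, 28, 1)       # 2 grayscale 28x28 images
W = np.random.randn(3, 3, 1, 16) * 0.1    # 16 different 3x3 filters
print(conv2d_valid(imgs, W).shape)        # (2, 26, 26, 16)
```

The loop form is far slower than tf.nn.conv2d, but it makes the sliding-window weight sharing explicit.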
Step22: Now we'll create a loop to create every layer's convolution, storing the convolution operations we create so that we can do the reverse.
Step23: Now with our convolutional encoder built and the encoding weights stored, we'll reverse the whole process to decode everything back out to the original image.
Step24: Now we have the reconstruction through the network
Step25: We can measure the cost and train exactly like before with the fully connected network
Step26: <a name="denoising-autoencoder"></a>
Denoising Autoencoder
The denoising autoencoder is a very simple extension of an autoencoder. Instead of seeing the original input, the network sees a corrupted version of it, for instance with masking noise, but the reconstruction loss is still measured against the original uncorrupted image. What this does is let the model try to interpret occluded or missing parts of the thing it is reasoning about. For many models it makes sense that not every datapoint in an input is necessary to understand what is going on. Denoising autoencoders try to enforce that, and as a result, the encodings at the middle-most layer are often far more representative of the actual classes of different objects.
In the resources section, you'll see that I've included a general autoencoder framework that lets you use either a fully connected or convolutional autoencoder, with or without denoising. If you are interested in the mechanics of how this works, I encourage you to have a look at the code.
<a name="variational-autoencoders"></a>
Variational Autoencoders
A variational autoencoder extends the traditional autoencoder by using an additional layer called the variational layer. It is actually two networks that are cleverly connected using a simple reparameterization trick, to help the gradient flow through both networks during backpropagation allowing both to be optimized.
We don't have enough time to get into the details, but I'll try to quickly explain
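Very roughly, the reparameterization trick can be sketched like this (my notation, not the course code; the encoder is assumed to predict a mean and log-variance for each latent dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these came out of the encoder for a batch of 5 inputs, 2 latent dims
mu = rng.normal(size=(5, 2))             # predicted latent means
log_sigma_sq = rng.normal(size=(5, 2))   # predicted latent log-variances

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I).
# The randomness lives entirely in eps, so z stays differentiable
# with respect to mu and log_sigma_sq during backpropagation.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_sigma_sq) * eps
print(z.shape)  # (5, 2)
```

Because sampling is pushed into the noise term eps, gradients can flow through both the encoder (which produces mu and log_sigma_sq) and the decoder (which consumes z).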
Step27: To see what this is doing, let's compare setting it to false versus true
Step28: And now let's look at what the one hot version looks like
Step29: So instead of having a number from 0-9, we have 10 numbers corresponding to the digits 0-9, and each value is either 0 or 1. Whichever digit the image represents is the one that is 1.
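A one-hot encoding can be built in one line with numpy (just a sketch; the dataset loader does this for us when one_hot=True):

```python
import numpy as np

labels = np.array([3, 0, 9])
one_hot = np.eye(10)[labels]  # row i of the 10x10 identity is the code for digit i
print(one_hot)
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
```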
To summarize, we have all of the images of the dataset stored as
Step30: And labels stored as n_observations x n_labels where each observation is a one-hot vector, where only one element is 1 indicating which class or label it is.
Step31: <a name="one-hot-encoding"></a>
One-Hot Encoding
Remember in the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Just like in our unsupervised model, instead of having 2 inputs, we'll now have 784 inputs, the brightness of every pixel in our image. And instead of 3 outputs, like in our painting network from last session, or the 784 outputs we had in our unsupervised MNIST network, we'll now have 10 outputs representing the one-hot encoding of its label.
So why don't we just have 1 output? A number from 0-9? Wouldn't having 10 different outputs instead of just 1 be harder to learn? Consider how we normally train the network. We have to give it a cost which it will use to minimize. What could our cost be if our output was just a single number, 0-9? We would still have the true label, and the predicted label. Could we just take the subtraction of the two values? e.g. the network predicted 0, but the image was really the number 8. Okay so then our distance could be
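One way to sketch that comparison in numpy (the dot-product cost here is just one illustrative choice, not the loss the notebook ends up using):

```python
import numpy as np

# Naive scalar cost: absolute difference between digit values
print(abs(8 - 0))  # 8
print(abs(4 - 0))  # 4 -- "half as wrong", even though both guesses are simply wrong

def one_hot(i, n=10):
    v = np.zeros(n)
    v[i] = 1
    return v

# With one-hot codes, any wrong digit is equally wrong
print(1 - one_hot(8) @ one_hot(0))  # 1.0
print(1 - one_hot(4) @ one_hot(0))  # 1.0
```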
Step32: But in this example, the cost would be 8. If the image was a 4, and the network predicted a 0 again, the cost would be 4... but isn't the network still just as wrong, not half as much as when the image was an 8? In a one-hot encoding, the cost would be 1 for both, meaning they are both just as wrong. So we're able to better measure the cost, by separating each class's label into its own dimension.
<a name="using-regression-for-classification"></a>
Using Regression for Classification
The network we build will be trained to output values between 0 and 1. They won't output exactly a 0 or 1. But rather, they are able to produce any value. 0, 0.1, 0.2, ... and that means the networks we've been using are actually performing regression. In regression, the output is "continuous", rather than "discrete". The difference is this
Step33: As output, we have our 10 one-hot-encoding values
Step34: We're going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. Remember from our unsupervised model, this is just something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. Since we're going to pass our entire dataset in batches we'll need this to be say 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible.
Step35: For the output, we'll have None again, since for every input, we'll have the same number of images that have outputs.
Step36: Now we'll connect our input to the output with a linear layer. Instead of relu, we're going to use softmax. This will perform our exponential scaling of the outputs and make sure the output sums to 1, making it a probability.
Step37: And then we write our loss function as the cross entropy. And then we'll give our optimizer the cross_entropy measure just like we would with GradientDescent. The formula for cross entropy is
Step38: To determine the correct class from our regression output, we have to take the maximum index.
Step39: We can then measure the accuracy by seeing whenever these are equal. Note, this is just for us to see, and is not at all used to "train" the network!
Step40: Training the Network
The rest of the code will be exactly the same as before. We chunk the training dataset into batch_size chunks, and let these images help train the network over a number of iterations.
Step41: What we should see is the accuracy being printed after each "epoch", or after every run over the entire dataset. Since we're using batches, we use the notion of an "epoch" to denote whenever we've gone through the entire dataset.
<a name="inspecting-the-network"></a>
Inspecting the Trained Network
Let's try and now inspect how the network is accomplishing this task. We know that our network is a single matrix multiplication of our 784 pixel values. The weight matrix, W, should therefore have 784 rows. As outputs, it has 10 values. So the matrix is composed in the linear function as n_input x n_output values. So the matrix is 784 rows x 10 columns.
<TODO
Step42: Looking at the names of the operations, we see there is one linear/W. But this is the tf.Operation. Not the tf.Tensor. The tensor is the result of the operation. To get the result of the operation, we simply add "
Step43: We can use the existing session to compute the current value of this tensor
Step44: And now we have our tensor! Let's try visualizing every neuron, or every column of this matrix
Step45: We're going to use the coolwarm color map, which will use "cool" values, or blue-ish colors for low values. And "warm" colors, red, basically, for high values. So what we begin to see is that there is a weighting of all the input values, where pixels that are likely to describe that number are being weighted high, and pixels that are not likely to describe that number are being weighted low. By summing all of these multiplications together, the network is able to begin to predict what number is in the image. This is not a very good network though, and the representations it learns could still do a much better job. We were only right about 93% of the time according to our accuracy. State of the art models will get about 99.9% accuracy.
<a name="convolutional-networks"></a>
Convolutional Networks
To get better performance, we can build a convolutional network. We've already seen how to create a convolutional network with our unsupervised model. We're going to make the same modifications here to help us predict the digit labels in MNIST.
Defining the Network
I'll first reset the current graph, so we can build a new one. We'll use tensorflow's nice helper function for doing this.
Step46: And just to confirm, let's see what's in our graph
Step47: Great. Empty.
Now let's get our dataset, and create some placeholders like before
Step48: Since X is currently [batch, height*width], we need to reshape it to a
4-D tensor to use it in a convolutional graph. Remember, in order to perform convolution, we have to use 4-dimensional tensors describing the
Step49: We'll now setup the first convolutional layer. Remember that the weight matrix for convolution should be
[height x width x input_channels x output_channels]
Let's create 32 filters. That means every location in the image, depending on the stride I set when we perform the convolution, will be filtered by this many different kernels. In session 1, we convolved our image with just 2 different types of kernels. Now, we're going to let the computer try to find out what 32 filters helps it map the input to our desired output via our training signal.
Step50: Bias is always [output_channels] in size.
Step51: Now we can build a graph which does the first layer of convolution
Step52: And just like the first layer, add additional layers to create a deep net.
Step53: 4d -> 2d
Step54: Create a fully-connected layer
Step55: And one last fully-connected layer which will give us the correct number of outputs, and use a softmax to expoentially scale the outputs and convert them to a probability
Step56: <TODO
Step57: Monitor accuracy
Step58: And create a new session to actually perform the initialization of all the variables
Step59: Then we'll train in minibatches and report accuracy
Step60: <TODO
Step61: What we're looking at are all of the convolution kernels that have been learned. Compared to the previous network we've learned, it is much harder to understand what's happening here. But let's try and explain these a little more. The kernels that have been automatically learned here are responding to edges of different scales, orientations, and rotations. It's likely these are really describing parts of letters, or the strokes that make up letters. Put another way, they are trying to get at the "information" in the image by seeing what changes.
That's a pretty fundamental idea. That information would be things that change. Of course, there are filters for things that aren't changing as well. Some filters may even seem to respond to things that are mostly constant. However, if our network has learned a lot of filters that look like that, it's likely that the network hasn't really learned anything at all. The flip side of this is if the filters all look more or less random. That's also a bad sign.
Let's try looking at the second layer's kernels
Step62: It's really difficult to know what's happening here. There are many more kernels in this layer. They've already passed through a set of filters and an additional non-linearity. How can we really know what the network is doing to learn its objective function? The important thing for now is to see that most of these filters are different, and that they are not all constant or uniformly activated. That means it's really doing something, but we aren't really sure yet how to see how that effects the way we think of and perceive the image. In the next session, we'll learn more about how we can start to interrogate these deeper representations and try to understand what they are encoding. Along the way, we'll learn some pretty amazing tricks for producing entirely new aesthetics that eventually led to the "deep dream" viral craze.
<a name="savingloading-models"></a>
Saving/Loading Models
Tensorflow provides a few ways of saving/loading models. The easiest way is to use a checkpoint. Though, this is really useful only while you are training your network. When you are ready to deploy or hand out your network to others, you don't want to pass checkpoints around, as they contain a lot of unnecessary information, and they also require you to still write code to create your network. Instead, you can create a protobuf which contains the definition of your graph and the model's weights. Let's see how to do both
Step63: Creating the checkpoint is easy. After a few iterations of training (depending on your application, say every 1/10 of the time it takes to train the full model), you'll want to write the saved model. You can do this like so
Step64: <a name="protobuf"></a>
Protobuf
The second way of saving a model is really useful for when you don't want to pass around the code for producing the tensors or computational graph itself. It is also useful for moving the code to deployment or for use in the C++ version of Tensorflow. To do this, you'll want to run an operation to convert all of your trained parameters into constants. Then, you'll create a second graph which copies the necessary tensors, extracts the subgraph, and writes this to a model. The summarized code below shows you how you could use a checkpoint to restore your models parameters, and then export the saved model as a protobuf.
Step65: When you wanted to import this model, now you wouldn't need to refer to the checkpoint or create the network by specifying its placeholders or operations. Instead, you'd use the import_graph_def operation like so | Python Code:
# imports
%matplotlib inline
# %pylab osx
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
# Some additional libraries which we'll use just
# to produce some visualizations of our training
from libs.utils import montage
from libs import gif
import IPython.display as ipyd
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
Explanation: Session 3: Unsupervised and Supervised Learning
<p class="lead">
Parag K. Mital<br />
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br />
<a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br />
<a href="https://twitter.com/hashtag/CADL">#CADL</a>
</p>
<a name="learning-goals"></a>
Learning Goals
Build an autoencoder w/ linear and convolutional layers
Understand how one hot encodings work
Build a classification network w/ linear and convolutional layers
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Unsupervised vs. Supervised Learning
Autoencoders
MNIST
Fully Connected Model
Convolutional Autoencoder
Denoising Autoencoder
Variational Autoencoders
Predicting Image Labels
One-Hot Encoding
Using Regression for Classification
Fully Connected Network
Convolutional Networks
Saving/Loading Models
Checkpoint
Protobuf
Wrap Up
Reading
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
In the last session we created our first neural network.
We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network <TODO: Insert animation of gradient descent from previous session>. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. <TODO: Insert graphic of activation functions from previous session>.
We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image.
In this session, we'll see how to use some simple deep nets with about 3 or 4 layers capable of performing unsupervised and supervised learning, and I'll explain those terms in a bit. The components we learn here will let us explore data in some very interesting ways.
<a name="unsupervised-vs-supervised-learning"></a>
Unsupervised vs. Supervised Learning
Machine learning research in deep networks performs one of two types of learning. You either have a lot of data and you want the computer to reason about it, maybe to encode the data using less data, and just explore what patterns there might be. That's useful for clustering data, reducing the dimensionality of the data, or even for generating new data. That's generally known as unsupervised learning. In the supervised case, you actually know what you want out of your data. You have something like a label or a class that is paired with every single piece of data. In the first half of this session, we'll see how unsupervised learning works using something called an autoencoder and how it can be extended using convolution. Then we'll get into supervised learning and show how we can build networks for performing regression and classification. By the end of this session, hopefully all of that will make a little more sense. Don't worry if it doesn't yet! Really the best way to learn is to put this stuff into practice in the homeworks.
<a name="autoencoders"></a>
Autoencoders
<TODO: Graphic of autoencoder network diagram>
An autoencoder is a type of neural network that learns to encode its inputs, often using much less data. It does so in a way that it can still output the original input with just the encoded values. For it to learn, it does not require "labels" as its output. Instead, it tries to output whatever it was given as input. So in goes an image, and out should also go the same image. But it has to be able to retain all the details of the image, even after possibly reducing the information down to just a few numbers.
We'll also explore how this method can be extended and used to cluster or organize a dataset, or to explore latent dimensions of a dataset that explain some interesting ideas. For instance, we'll see how with handwritten numbers, we will be able to see how each number can be encoded in the autoencoder without ever telling it which number is which.
<TODO: place teaser of MNIST video learning>
But before we get there, we're going to need to develop an understanding of a few more concepts.
First, imagine a network that takes as input an image. The network can be composed of either matrix multiplications or convolutions to any number of filters or dimensions. At the end of any processing, the network has to be able to recompose the original image it was input.
In the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Instead of having 2 inputs, we'll now have an entire image as an input: the brightness of every pixel in our image. And as output, we're going to have the same thing: the entire image being output.
<a name="mnist"></a>
MNIST
Let's first get some standard imports:
End of explanation
from libs.datasets import MNIST
ds = MNIST()
Explanation: Then we're going to try this with the MNIST dataset, which I've included a simple interface for in the libs module.
End of explanation
# ds.<tab>
Explanation: Let's take a look at what this returns:
End of explanation
print(ds.X.shape)
Explanation: So we can see that there are a few interesting accessors. We're not going to worry about the labels until a bit later, when we talk about a different type of model which can go from the input image to predicting which label the image is. For now, we're going to focus on trying to encode the image and being able to reconstruct the image from our encoding. Let's take a look at the images, which are stored in the variable X. Remember, in this course, we'll always use the variable X to denote the input to a network, and the variable Y to denote its output.
End of explanation
plt.imshow(ds.X[0].reshape((28, 28)))
# Let's get the first 1000 images of the dataset and reshape them
imgs = ds.X[:1000].reshape((-1, 28, 28))
# Then create a montage and draw the montage
plt.imshow(montage(imgs), cmap='gray')
Explanation: So each image has 784 features, and there are 70k of them. If we want to draw the image, we're going to have to reshape it to a square. 28 x 28 is 784. So we're just going to reshape it to a square so that we can see all the pixels arranged in rows and columns instead of one giant vector.
End of explanation
# Take the mean across all images
mean_img = np.mean(ds.X, axis=0)
# Then plot the mean image.
plt.figure()
plt.imshow(mean_img.reshape((28, 28)), cmap='gray')
Explanation: Let's take a look at the mean of the dataset:
End of explanation
# Take the std across all images
std_img = np.std(ds.X, axis=0)
# Then plot the std image.
plt.figure()
plt.imshow(std_img.reshape((28, 28)))
Explanation: And the standard deviation
End of explanation
dimensions = [512, 256, 128, 64]
Explanation: So recall from session 1 that these two images are really saying what's more or less constant across every image, and what's changing. We're going to use an autoencoder to try to encode everything that could possibly change in the image.
<a name="fully-connected-model"></a>
Fully Connected Model
To try and encode our dataset, we are going to build a series of fully connected layers that get progressively smaller. So in neural net speak, every pixel is going to become its own input neuron. And from the original 784 neurons, we're going to slowly reduce that information down to smaller and smaller numbers. It's often standard practice to use other powers of 2 or 10. I'll create a list of the number of dimensions we'll use for each new layer.
End of explanation
# So the number of features is the second dimension of our inputs matrix, 784
n_features = ds.X.shape[1]
# And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs.
X = tf.placeholder(tf.float32, [None, n_features])
Explanation: So we're going to reduce our 784 dimensions down to 512 by multiplying them by a 784 x 512 dimensional matrix. Then we'll do the same thing again using a 512 x 256 dimensional matrix, to reduce our dimensions down to 256 dimensions, and then again to 128 dimensions, then finally to 64. To get back to the size of the image, we're just going to do the reverse. But we're going to use the exact same matrices. We do that by taking the transpose of the matrix, which reshapes the matrix so that the rows become columns, and vice-versa. So our last matrix, which was 128 rows x 64 columns, when transposed, becomes 64 rows x 128 columns.
So by sharing the weights in the network, we're only really learning half of the network, and those 4 matrices are going to make up the bulk of our model. We just have to find out what they are using gradient descent.
We're first going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. This is something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. We're going to pass our entire dataset in minibatches. So we'll send 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible in the graph.
End of explanation
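To make the weight-sharing idea concrete, here's a quick numpy sketch using random, untrained matrices, purely to illustrate the shapes (this is not the tensorflow graph we're about to build):

```python
import numpy as np

rng = np.random.RandomState(0)
relu = lambda x: np.maximum(0.0, x)

# A hypothetical stand-in batch: 10 "images" of 784 pixels each.
X = rng.rand(10, 784)

# Encoder: one matrix per layer, 784 -> 512 -> 256 -> 128 -> 64.
dims = [512, 256, 128, 64]
Ws, h, n_in = [], X, 784
for n_out in dims:
    W = rng.randn(n_in, n_out) * 0.02
    Ws.append(W)
    h = relu(h @ W)
    n_in = n_out

# Decoder: reuse the very same matrices, transposed, in reverse order.
for W in reversed(Ws):
    h = relu(h @ W.T)

print(h.shape)  # (10, 784) -- back to the original number of pixels
```

So the four encoder matrices are the only weights; the decoder is "free" because it just reuses their transposes.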
# let's first copy our X placeholder to the name current_input
current_input = X
n_input = n_features
# We're going to keep every matrix we create so let's create a list to hold them all
Ws = []
# We'll create a for loop to create each layer:
for layer_i, n_output in enumerate(dimensions):
# just like in the last session,
# we'll use a variable scope to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("encoder/layer/{}".format(layer_i)):
# Create a weight matrix which will increasingly reduce
# down the amount of information in the input by performing
# a matrix multiplication
W = tf.get_variable(
name='W',
shape=[n_input, n_output],
initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))
# Now we'll multiply our input by our newly created W matrix
h = tf.matmul(current_input, W)
# And then use a relu activation function on its output
current_input = tf.nn.relu(h)
# Finally we'll store the weight matrix so we can build the decoder.
Ws.append(W)
# We'll also replace n_input with the current n_output, so that on the
# next iteration, our new number inputs will be correct.
n_input = n_output
Explanation: Now we're going to create a network which will perform a series of matrix multiplications on X, each wrapped in a non-linearity:
End of explanation
print(current_input.get_shape())
Explanation: So now we've created a series of multiplications in our graph which take us from our input of batch size x number of features, which started as None x 784, down to None x 64 after multiplying by the series of matrices.
End of explanation
# We'll first reverse the order of our weight matrices
Ws = Ws[::-1]
# then reverse the order of our dimensions
# appending the last layers number of inputs.
dimensions = dimensions[::-1][1:] + [ds.X.shape[1]]
print(dimensions)
for layer_i, n_output in enumerate(dimensions):
# we'll use a variable scope again to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("decoder/layer/{}".format(layer_i)):
# Now we'll grab the weight matrix we created before and transpose it
# So a 3072 x 784 matrix would become 784 x 3072
# or a 256 x 64 matrix would become 64 x 256
W = tf.transpose(Ws[layer_i])
# Now we'll multiply our input by our transposed W matrix
h = tf.matmul(current_input, W)
# And then use a relu activation function on its output
current_input = tf.nn.relu(h)
# We'll also replace n_input with the current n_output, so that on the
# next iteration, our new number inputs will be correct.
n_input = n_output
Explanation: In order to get back to the original dimensions of the image, we're going to reverse everything we just did. Let's see how we do that:
End of explanation
Y = current_input
Explanation: After this, our current_input will become the output of the network:
End of explanation
# We'll first measure the average squared difference across every pixel
cost = tf.reduce_mean(tf.squared_difference(X, Y), 1)
print(cost.get_shape())
Explanation: Now that we have the output of the network, we just need to define a training signal to train the network with. To do that, we create a cost function which will measure how well the network is doing:
End of explanation
cost = tf.reduce_mean(cost)
Explanation: And then take the mean again across batches:
End of explanation
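Here's the same cost computed by hand in numpy on a tiny made-up batch, just to make the two reduce_mean steps concrete:

```python
import numpy as np

# Toy batch: 3 "images" of 4 pixels each, plus a (bad) all-zero reconstruction.
X = np.array([[0., 1., 0., 1.],
              [1., 1., 0., 0.],
              [0., 0., 0., 1.]])
Y = np.zeros_like(X)

# Mean squared difference per example, like tf.reduce_mean(tf.squared_difference(X, Y), 1):
per_example = np.mean((X - Y) ** 2, axis=1)
print(per_example)  # [0.5  0.5  0.25]

# Then the mean across the batch gives one scalar cost to minimize:
cost = np.mean(per_example)
print(cost)  # ~0.4167
```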
learning_rate = 0.001
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: We can now train our network just like we did in the last session. We'll need to create an optimizer which takes a parameter learning_rate. And we tell it that we want to minimize our cost, which is measuring the difference between the output of the network and the input.
End of explanation
# %%
# We create a session to use the graph
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Explanation: Now we'll create a session to manage the training in minibatches:
End of explanation
# Some parameters for training
batch_size = 100
n_epochs = 5
# We'll try to reconstruct the same first 100 images and show how
# The network does over the course of training.
examples = ds.X[:100]
# We'll store the reconstructions in a list
imgs = []
fig, ax = plt.subplots(1, 1)
for epoch_i in range(n_epochs):
for batch_X, _ in ds.train.next_batch():
sess.run(optimizer, feed_dict={X: batch_X - mean_img})
recon = sess.run(Y, feed_dict={X: examples - mean_img})
recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255)
img_i = montage(recon).astype(np.uint8)
imgs.append(img_i)
ax.imshow(img_i, cmap='gray')
fig.canvas.draw()
print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img}))
gif.build_gif(imgs, saveto='ae.gif', cmap='gray')
ipyd.Image(url='ae.gif?{}'.format(np.random.rand()),
height=500, width=500)
Explanation: Now we'll train:
End of explanation
from tensorflow.python.framework.ops import reset_default_graph
reset_default_graph()
# And we'll create a placeholder in the tensorflow graph that will be able to get any number of n_feature inputs.
X = tf.placeholder(tf.float32, [None, n_features])
Explanation: <a name="convolutional-autoencoder"></a>
Convolutional Autoencoder
To get even better encodings, we can also try building a convolutional network. Why would a convolutional network perform any differently from a fully connected one? Let's see what we were doing in the fully connected network. For every pixel in our input, we have a set of weights corresponding to every output neuron. Those weights are unique to each pixel. Each pixel gets its own row in the weight matrix. That really doesn't make a lot of sense, since we would guess that nearby pixels are probably not going to be so different. And we're not really encoding what's happening around that pixel, just what that one pixel is doing.
In a convolutional model, we're explicitly modeling what happens around a pixel. And we're using the exact same convolutions no matter where in the image we are. But we're going to use a lot of different convolutions.
Recall in session 1 we created a Gaussian and Gabor kernel and used these to convolve an image to either blur it or to accentuate edges. Armed with what you know now, you could try to train a network to learn the parameters that map an untouched image to a blurred or edge filtered version of it. What you should find is the kernel will look sort of like what we built by hand. I'll leave that as an exercise for you.
But in fact, that's too easy really. That's just 1 filter you would have to learn. We're going to see how we can use many convolutional filters, way more than 1, and how it will help us to encode the MNIST dataset.
To begin we'll need to reset the current graph and start over.
End of explanation
X_tensor = tf.reshape(X, [-1, 28, 28, 1])
Explanation: Since X is currently [batch, height*width], we need to reshape it to a
4-D tensor to use it in a convolutional graph. Remember back to the first session that in order to perform convolution, we have to use 4-dimensional tensors describing the:
N x H x W x C
We'll reshape our input placeholder by telling the shape parameter to be these new dimensions. However, since our batch dimension is None, we cannot reshape without using the special value -1, which says that the size of that dimension should be computed so that the total size remains constant. Since we haven't defined the batch dimension's shape yet, we use -1 to denote this
dimension should not change size.
End of explanation
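A quick numpy sketch of what this reshape does (numpy's reshape accepts -1 the same way):

```python
import numpy as np

batch = np.zeros((100, 784))              # 100 flattened "images"
tensor = batch.reshape(-1, 28, 28, 1)     # N x H x W x C
print(tensor.shape)  # (100, 28, 28, 1)

# -1 lets the batch size be inferred, so a single image works too:
print(np.zeros((1, 784)).reshape(-1, 28, 28, 1).shape)  # (1, 28, 28, 1)
```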
n_filters = [16, 16, 16]
filter_sizes = [4, 4, 4]
Explanation: We'll now setup the first convolutional layer. Remember from Session 2 that the weight matrix for convolution should be
[height x width x input_channels x output_channels]
Think a moment about how this is different to the fully connected network. In the fully connected network, every pixel was being multiplied by its own weight to every other neuron. With a convolutional network, we use the extra dimensions to allow the same set of filters to be applied everywhere across an image. This is also known in the literature as weight sharing, since we're sharing the weights no matter where in the input we are. That's unlike the fully connected approach, which has unique weights for every pixel. What's more, after we've performed the convolution, we've retained the spatial organization of the input. We still have dimensions of height and width. That's again unlike the fully connected network, which effectively shuffles or takes into account information from everywhere, not at all caring about where anything is. That can be useful or not depending on what we're trying to achieve. Often, discarding this spatial organization is something we might want to do after a series of convolutions to encode translation invariance. Don't worry about that for now. With MNIST especially we won't need to do that since all of the numbers are in the same position.
Now with our tensor ready, we're going to do what we've just done with the fully connected autoencoder. Except, instead of performing matrix multiplications, we're going to create convolution operations. To do that, we'll need to decide on a few parameters including the filter size, how many convolution filters we want, and how many layers we want. I'll start with a fairly small network, and let you scale this up in your own time.
End of explanation
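Since each of these layers will use a stride of 2 with 'SAME' padding, each one halves the spatial dimensions, rounding up. A quick sketch of the shapes we should expect from three such layers on a 28 x 28 image:

```python
import math

def same_out(size, stride):
    # With 'SAME' padding, the output spatial size is ceil(input / stride).
    return math.ceil(size / stride)

size = 28
sizes = []
for _ in range(3):            # three conv layers, stride 2 each
    size = same_out(size, 2)
    sizes.append(size)
print(sizes)  # [14, 7, 4]
```

So the encoder's feature maps shrink from 28 x 28 down to 4 x 4 while the filter (channel) dimension holds the information.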
current_input = X_tensor
# notice instead of having 784 as our input features, we're going to have
# just 1, corresponding to the number of channels in the image.
# We're going to use convolution to find 16 filters, or 16 channels of information
# in each spatial location we perform convolution at.
n_input = 1
# We're going to keep every matrix we create so let's create a list to hold them all
Ws = []
shapes = []
# We'll create a for loop to create each layer:
for layer_i, n_output in enumerate(n_filters):
# just like in the last session,
# we'll use a variable scope to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("encoder/layer/{}".format(layer_i)):
# we'll keep track of the shapes of each layer
# As we'll need these for the decoder
shapes.append(current_input.get_shape().as_list())
# Create a weight matrix which will increasingly reduce
# down the amount of information in the input by performing
# a matrix multiplication
W = tf.get_variable(
name='W',
shape=[
filter_sizes[layer_i],
filter_sizes[layer_i],
n_input,
n_output],
initializer=tf.random_normal_initializer(mean=0.0, stddev=0.02))
# Now we'll convolve our input by our newly created W matrix
h = tf.nn.conv2d(current_input, W,
strides=[1, 2, 2, 1], padding='SAME')
# And then use a relu activation function on its output
current_input = tf.nn.relu(h)
# Finally we'll store the weight matrix so we can build the decoder.
Ws.append(W)
# We'll also replace n_input with the current n_output, so that on the
# next iteration, our new number inputs will be correct.
n_input = n_output
Explanation: Now we'll create a loop to create every layer's convolution, storing the convolution operations we create so that we can do the reverse.
End of explanation
# We'll first reverse the order of our weight matrices
Ws.reverse()
# and the shapes of each layer
shapes.reverse()
# and the number of filters (which is the same but could have been different)
n_filters.reverse()
# and append the last filter size which is our input image's number of channels
n_filters = n_filters[1:] + [1]
print(n_filters, filter_sizes, shapes)
# and then loop through our convolution filters and get back our input image
# we'll enumerate the shapes list to get us there
for layer_i, shape in enumerate(shapes):
# we'll use a variable scope to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("decoder/layer/{}".format(layer_i)):
# Grab the encoder's weight matrix for this layer; its transpose
# will undo the corresponding encoding step
W = Ws[layer_i]
# Now we'll convolve by the transpose of our previous convolution tensor
h = tf.nn.conv2d_transpose(current_input, W,
tf.stack([tf.shape(X)[0], shape[1], shape[2], shape[3]]),
strides=[1, 2, 2, 1], padding='SAME')
# And then use a relu activation function on its output
current_input = tf.nn.relu(h)
Explanation: Now with our convolutional encoder built and the encoding weights stored, we'll reverse the whole process to decode everything back out to the original image.
End of explanation
Y = current_input
Y = tf.reshape(Y, [-1, n_features])
Y.get_shape()
Explanation: Now we have the reconstruction through the network:
End of explanation
cost = tf.reduce_mean(tf.reduce_mean(tf.squared_difference(X, Y), 1))
learning_rate = 0.001
# pass learning rate and cost to optimize
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# Session to manage vars/train
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Some parameters for training
batch_size = 100
n_epochs = 5
# We'll try to reconstruct the same first 100 images and show how
# The network does over the course of training.
examples = ds.X[:100]
# We'll store the reconstructions in a list
imgs = []
fig, ax = plt.subplots(1, 1)
for epoch_i in range(n_epochs):
for batch_X, _ in ds.train.next_batch():
sess.run(optimizer, feed_dict={X: batch_X - mean_img})
recon = sess.run(Y, feed_dict={X: examples - mean_img})
recon = np.clip((recon + mean_img).reshape((-1, 28, 28)), 0, 255)
img_i = montage(recon).astype(np.uint8)
imgs.append(img_i)
ax.imshow(img_i, cmap='gray')
fig.canvas.draw()
print(epoch_i, sess.run(cost, feed_dict={X: batch_X - mean_img}))
gif.build_gif(imgs, saveto='conv-ae.gif', cmap='gray')
ipyd.Image(url='conv-ae.gif?{}'.format(np.random.rand()),
height=500, width=500)
Explanation: We can measure the cost and train exactly like before with the fully connected network:
End of explanation
from libs import datasets
# ds = datasets.MNIST(one_hot=True)
Explanation: <a name="denoising-autoencoder"></a>
Denoising Autoencoder
The denoising autoencoder is a very simple extension to an autoencoder. Instead of seeing the original input, the network sees a corrupted version of it, for instance with masking noise, but the reconstruction loss is still measured against the original uncorrupted image. What this does is let the model try to interpret occluded or missing parts of the thing it is reasoning about. It would make sense for many models that not every datapoint in an input is necessary to understand what is going on. Denoising autoencoders try to enforce that, and as a result, the encodings at the middle-most layer are often far more representative of the actual classes of different objects.
In the resources section, you'll see that I've included a general autoencoder framework allowing you to use either a fully connected or convolutional autoencoder, with or without denoising. If you're interested in the mechanics of how this works, I encourage you to have a look at the code.
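To make the corruption step concrete, here's a minimal numpy sketch of masking noise (the exact corruption used in the included framework may differ):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(5, 784)                 # a hypothetical batch of flattened images

corrupt_prob = 0.5
mask = rng.uniform(size=X.shape) > corrupt_prob
X_noisy = X * mask                   # randomly zero out roughly half the pixels

# Training then feeds X_noisy through the network, but measures the
# reconstruction loss against the clean X, not X_noisy.
print(X_noisy.shape)  # (5, 784)
```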
<a name="variational-autoencoders"></a>
Variational Autoencoders
A variational autoencoder extends the traditional autoencoder by using an additional layer called the variational layer. It is actually two networks that are cleverly connected using a simple reparameterization trick, which helps the gradient flow through both networks during backpropagation, allowing both to be optimized.
We don't have enough time to get into the details, but I'll try to quickly explain: it tries to optimize the likelihood that a particular distribution would create an image, rather than simply optimizing the L2 loss at the end of the network. Or, put another way, it hopes that there is some distribution that a distribution of image encodings could be defined as. This is a bit tricky to grasp, so don't worry if you don't understand the details. The major difference to hone in on is that instead of optimizing distance in the input space of pixel-to-pixel distance, which is actually quite arbitrary if you think about it... why would we care about the exact pixels being the same? Human vision would not care, for most cases: if there was a slight translation of our image, then the distance could be very high, but we would never be able to tell the difference. So intuitively, measuring error based on raw pixel-to-pixel distance is not such a great approach.
Instead of relying on raw pixel differences, the variational autoencoder tries to optimize two networks. One which says that given my pixels, I am pretty sure I can encode them to the parameters of some well-known distribution, like a set of Gaussians, instead of some arbitrary density of values. And then I can optimize the latent space, by saying that particular distribution should be able to represent my entire dataset, and I try to optimize the likelihood that it will create the images I feed through the network. So distance is somehow encoded in this latent space. Of course I appreciate that this is a difficult concept, so forgive me for not being able to expand on it in more detail.
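For the curious, here's a tiny numpy sketch of the reparameterization trick mentioned above. The encoder outputs here are hypothetical (random stand-ins); the point is just the sampling step:

```python
import numpy as np

rng = np.random.RandomState(0)

# Hypothetical encoder outputs for a batch of 4, with a latent size of 2:
z_mu = rng.randn(4, 2)           # predicted means
z_log_sigma = rng.randn(4, 2)    # predicted log standard deviations

# Reparameterization: sample eps ~ N(0, 1) outside the network, then form
# z = mu + sigma * eps. Since eps carries all the randomness, gradients can
# flow through mu and sigma during backpropagation.
eps = rng.randn(4, 2)
z = z_mu + np.exp(z_log_sigma) * eps
print(z.shape)  # (4, 2)
```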
But to make up for the lack of time and explanation, I've included this model under the resources section for you to play with! Just like the "vanilla" autoencoder, this one supports both fully connected, convolutional, and denoising models.
This model performs so much better than the vanilla autoencoder. In fact, it performs so well that I can even manage to encode the majority of MNIST into 2 values. The following visualization demonstrates the learning of a variational autoencoder over time.
<mnist visualization>
There are of course a lot more interesting applications of such a model. You could for instance, try encoding a more interesting dataset, such as CIFAR which you'll find a wrapper for in the libs/datasets module.
<TODO: produce GIF visualization madness>
Or the celeb faces dataset:
<celeb dataset>
Or you could try encoding an entire movie. We tried it with the copyleft movie, "Sita Sings The Blues". Every 2 seconds, we stored an image of this movie, and then fed all of these images to a deep variational autoencoder. This is the result.
<show sita sings the blues training images>
And I'm sure we can get closer with deeper nets and more training time. But notice how in both the celeb faces and Sita Sings The Blues reconstructions, the decoding is really blurred. That is because of the assumption about the underlying representational space. We're saying the latent space must be modeled as a Gaussian, and those factors must be distributed as a Gaussian. This enforces a sort of discretization of the representation, driven by the noise parameter of the Gaussian. In the last session, we'll see how we can avoid this sort of blurred representation and get even better decodings using a generative adversarial network.
For now, consider the applications that this method opens up. Once you have an encoding of a movie or image dataset, you are able to do some very interesting things. You have effectively stored all the representations of that movie, although it's not perfect of course. But you could, for instance, see how another movie would be interpreted by the same network. That's similar to what Terrance Broad did for his project on reconstructing Blade Runner and A Scanner Darkly, though he made use of both the variational autoencoder and the generative adversarial network. We're going to look at that network in more detail in the last session.
We'll also look at how to properly handle very large datasets like the celeb faces one, or the one used here to create the Sita Sings The Blues autoencoder. Taking every 60th frame of Sita Sings The Blues gives you about 300k images. And that's a lot of data to try and load in all at once. We had to size it down considerably, and make use of what's called a tensorflow input pipeline. I've included all the code for training this network, which took about 1 day on a fairly powerful machine, but I will not get into the details of the input pipeline bits until session 5 when we look at generative adversarial networks. I'm delaying this because we'll need to learn a few things along the way before we can build such a network.
<a name="predicting-image-labels"></a>
Predicting Image Labels
We've just seen a variety of types of autoencoders and how they are capable of compressing information down to its inner most layer while still being able to retain most of the interesting details. Considering that the CelebNet dataset was nearly 200 thousand images of 64 x 64 x 3 pixels, and we're able to express those with just an inner layer of 50 values, that's just magic basically. Magic.
Okay, let's move on now to a different type of learning often called supervised learning. Unlike what we just did, which was to work with a set of data without any idea of what that data should be labeled as, we're now going to explicitly tell the network what we want it to output for a given input. In the previous case, we just had a set of Xs, our images. Now, we're going to have Xs and Ys given to us, and use the Xs to try and output the Ys.
With MNIST, the outputs of each image are simply what numbers are drawn in the input image. The wrapper for grabbing this dataset from the libs module takes an additional parameter which I didn't talk about called one_hot.
End of explanation
ds = datasets.MNIST(one_hot=False)
# let's look at the first label
print(ds.Y[0])
# okay and what does the input look like
plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray')
# great it is just the label of the image
plt.figure()
# Let's look at the next one just to be sure
print(ds.Y[1])
# Yea the same idea
plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray')
Explanation: To see what this is doing, let's compare setting it to false versus true:
End of explanation
ds = datasets.MNIST(one_hot=True)
plt.figure()
plt.imshow(np.reshape(ds.X[0], (28, 28)), cmap='gray')
print(ds.Y[0])
# array([ 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.])
# Woah a bunch more numbers. 10 to be exact, which is also the number
# of different labels in the dataset.
plt.imshow(np.reshape(ds.X[1], (28, 28)), cmap='gray')
print(ds.Y[1])
# array([ 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
Explanation: And now let's look at what the one hot version looks like:
End of explanation
print(ds.X.shape)
Explanation: So instead of having a number from 0-9, we have 10 numbers corresponding to the digits 0-9, and each value is either 0 or 1. Whichever digit the image represents is the one that is 1.
To summarize, we have all of the images of the dataset stored as:
n_observations x n_features tensor (n-dim array)
End of explanation
print(ds.Y.shape)
print(ds.Y[0])
Explanation: And labels stored as n_observations x n_labels where each observation is a one-hot vector, where only one element is 1 indicating which class or label it is.
End of explanation
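As a quick aside, converting integer labels to one-hot vectors is easy to sketch in numpy (this is just for illustration; the dataset wrapper already does it for us when one_hot=True):

```python
import numpy as np

labels = np.array([7, 3, 0])        # integer labels, as with one_hot=False
one_hot = np.eye(10)[labels]        # pick out rows of the 10 x 10 identity matrix
print(one_hot[0])                   # [0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
print(one_hot.argmax(axis=1))       # [7 3 0] -- recover the integer labels
```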
# cost = tf.reduce_sum(tf.abs(y_pred - y_true))
Explanation: <a name="one-hot-encoding"></a>
One-Hot Encoding
Remember in the last session, we saw how to build a network capable of taking 2 inputs representing the row and column of an image, and predicting 3 outputs, the red, green, and blue colors. Just like in our unsupervised model, instead of having 2 inputs, we'll now have 784 inputs, the brightness of every pixel in our image. And instead of 3 outputs, like in our painting network from last session, or the 784 outputs we had in our unsupervised MNIST network, we'll now have 10 outputs representing the one-hot encoding of its label.
So why don't we just have 1 output? A number from 0-9? Wouldn't having 10 different outputs instead of just 1 be harder to learn? Consider how we normally train the network. We have to give it a cost which it will use to minimize. What could our cost be if our output was just a single number, 0-9? We would still have the true label, and the predicted label. Could we just take the subtraction of the two values? e.g. the network predicted 0, but the image was really the number 8. Okay so then our distance could be:
End of explanation
import tensorflow as tf
from libs import datasets
ds = datasets.MNIST(split=[0.8, 0.1, 0.1])
n_input = 28 * 28
Explanation: But in this example, the cost would be 8. If the image was a 4, and the network predicted a 0 again, the cost would be 4... but isn't the network still just as wrong, not half as wrong as when the image was an 8? In a one-hot encoding, the cost would be the same for both, meaning they are both just as wrong. So we're able to better measure the cost, by separating each class's label into its own dimension.
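We can sketch that comparison in numpy, using a simple absolute-difference cost purely for illustration:

```python
import numpy as np

def one_hot(i, n=10):
    # Hypothetical helper: a length-n vector with a 1 at index i.
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Integer labels: predicting 0 when the truth is 8 "costs" twice as much
# as predicting 0 when the truth is 4, even though both are simply wrong.
print(abs(0 - 8), abs(0 - 4))  # 8 4

# One-hot labels: both mistakes cost exactly the same.
d8 = np.sum(np.abs(one_hot(0) - one_hot(8)))
d4 = np.sum(np.abs(one_hot(0) - one_hot(4)))
print(d8, d4)  # 2.0 2.0
```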
<a name="using-regression-for-classification"></a>
Using Regression for Classification
The network we build will be trained to output values between 0 and 1. They won't output exactly a 0 or 1. But rather, they are able to produce any value. 0, 0.1, 0.2, ... and that means the networks we've been using are actually performing regression. In regression, the output is "continuous", rather than "discrete". The difference is this: a discrete output means the network can only output one of a few things. Like, 0, 1, 2, or 3, and that's it. But a continuous output means it can output any real number.
In order to perform what's called classification, we're just simply going to look at whichever value is the highest in our one-hot encoding. In order to do that a little better, we're actually going to interpret our one-hot encodings as probabilities by scaling the total output by their sum. What this does is allow us to understand that as we grow more confident in one prediction, we should grow less confident in all other predictions. We only have so much certainty to go around, enough to add up to 1. If we think the image might also be the number 1, then we lose some certainty of it being the number 0.
It turns out there is a better cost function than simply measuring the distance between two vectors when they are probabilities. It's called cross entropy:
\begin{align}
\Large{H(x) = -\sum{y_{\text{t}}(x) * \log(y_{\text{p}}(x))}}
\end{align}
What this equation does is measure the similarity of our prediction with our true distribution, by exponentially increasing the error whenever our prediction gets closer to 1 when it should be 0, and similarly by exponentially increasing the error whenever our prediction gets closer to 0 when it should be 1. I won't go into more detail here, but just know that we'll be using this measure instead of a normal distance measure.
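A quick numerical sketch of that behavior (illustrative values only), using the same formula with a small epsilon to keep the log defined:

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))

target = [1.0, 0.0, 0.0]
confident = [0.9, 0.05, 0.05]  # mostly right
wrong = [0.1, 0.45, 0.45]      # mostly wrong

print(cross_entropy(target, confident))  # small penalty
print(cross_entropy(target, wrong))      # much larger penalty
```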
<a name="fully-connected-network"></a>
Fully Connected Network
Defining the Network
Let's see how our one hot encoding and our new cost function will come into play. We'll create our network for predicting image classes in pretty much the same way we've created previous networks:
We will have as input to the network 28 x 28 values.
End of explanation
n_output = 10
Explanation: As output, we have our 10 one-hot-encoding values
End of explanation
X = tf.placeholder(tf.float32, [None, n_input])
Explanation: We're going to create placeholders for our tensorflow graph. We're going to set the first dimension to None. Remember from our unsupervised model, this is just something special for placeholders which tells tensorflow "let this dimension be any possible value". 1, 5, 100, 1000, it doesn't matter. Since we're going to pass our entire dataset in batches we'll need this to be say 100 images at a time. But we'd also like to be able to send in only 1 image and see what the prediction of the network is. That's why we let this dimension be flexible.
End of explanation
Y = tf.placeholder(tf.float32, [None, n_output])
Explanation: For the output, we'll have None again, since for every input, we'll have the same number of images that have outputs.
End of explanation
# We'll use the linear layer we created in the last session, which I've stored in the libs file:
# NOTE: The lecture used an older version of this function which had a slightly different definition.
from libs import utils
Y_pred, W = utils.linear(
x=X,
n_output=n_output,
activation=tf.nn.softmax,
name='layer1')
Explanation: Now we'll connect our input to the output with a linear layer. Instead of relu, we're going to use softmax. This will perform our exponential scaling of the outputs and make sure the output sums to 1, making it a probability.
End of explanation
# We add 1e-12 because the log is undefined at 0.
cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)
Explanation: And then we write our loss function as the cross entropy. And then we'll give our optimizer the cross_entropy measure just like we would with GradientDescent. The formula for cross entropy is:
\begin{align}
\Large{H(x) = -\sum{\text{Y}_{\text{true}} * \log(\text{Y}_{\text{pred}})}}
\end{align}
End of explanation
predicted_y = tf.argmax(Y_pred, 1)
actual_y = tf.argmax(Y, 1)
Explanation: To determine the correct class from our regression output, we have to take the maximum index.
End of explanation
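In plain Python, the per-row behavior of tf.argmax looks like this (illustration only):

```python
def argmax(xs):
    # Index of the largest value, i.e. the most probable class.
    return max(range(len(xs)), key=lambda i: xs[i])

print(argmax([0.05, 0.1, 0.7, 0.15]))  # 2
```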
correct_prediction = tf.equal(predicted_y, actual_y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
Explanation: We can then measure the accuracy by seeing whenever these are equal. Note, this is just for us to see, and is not at all used to "train" the network!
End of explanation
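The same computation in plain Python (a sketch with made-up labels): tf.equal gives the elementwise matches, and the mean of the 0/1 matches is the accuracy.

```python
predicted = [3, 1, 4, 1, 5]  # hypothetical argmax outputs
actual = [3, 1, 4, 2, 6]     # hypothetical true labels
matches = [float(p == a) for p, a in zip(predicted, actual)]
accuracy_value = sum(matches) / len(matches)
print(accuracy_value)  # 0.6
```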
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Now actually do some training:
batch_size = 50
n_epochs = 5
for epoch_i in range(n_epochs):
for batch_xs, batch_ys in ds.train.next_batch():
sess.run(optimizer, feed_dict={
X: batch_xs,
Y: batch_ys
})
valid = ds.valid
print(sess.run(accuracy,
feed_dict={
X: valid.images,
Y: valid.labels
}))
# Print final test accuracy:
test = ds.test
print(sess.run(accuracy,
feed_dict={
X: test.images,
Y: test.labels
}))
Explanation: Training the Network
The rest of the code will be exactly the same as before. We chunk the training dataset into batch_size chunks, and let these images help train the network over a number of iterations.
End of explanation
# We first get the graph that we used to compute the network
g = tf.get_default_graph()
# And can inspect everything inside of it
[op.name for op in g.get_operations()]
Explanation: What we should see is the accuracy being printed after each "epoch", or after every run over the entire dataset. Since we're using batches, we use the notion of an "epoch" to denote whenever we've gone through the entire dataset.
<a name="inspecting-the-network"></a>
Inspecting the Trained Network
Let's try and now inspect how the network is accomplishing this task. We know that our network is a single matrix multiplication of our 784 pixel values. The weight matrix, W, should therefore have 784 rows. As outputs, it has 10 values. The matrix is composed in the linear function as n_input x n_output values, so it is 784 rows x 10 columns.
<TODO: graphic w/ wacom showing network and matrix multiplication and pulling out single neuron/column>
In order to get this matrix, we could have had our linear function return the tf.Tensor. But since everything is part of the tensorflow graph, and we've started using nice names for all of our operations, we can actually find this tensor using tensorflow:
End of explanation
W = g.get_tensor_by_name('layer1/W:0')
Explanation: Looking at the names of the operations, we see there is one layer1/W. But this is the tf.Operation. Not the tf.Tensor. The tensor is the result of the operation. To get the result of the operation, we simply add ":0" to the name of the operation:
End of explanation
import numpy as np

W_arr = np.array(W.eval(session=sess))
print(W_arr.shape)
Explanation: We can use the existing session to compute the current value of this tensor:
End of explanation
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 10, figsize=(20, 3))
for col_i in range(10):
ax[col_i].imshow(W_arr[:, col_i].reshape((28, 28)), cmap='coolwarm')
Explanation: And now we have our tensor! Let's try visualizing every neuron, or every column of this matrix:
End of explanation
from tensorflow.python.framework.ops import reset_default_graph
reset_default_graph()
Explanation: We're going to use the coolwarm color map, which will use "cool" values, or blue-ish colors for low values. And "warm" colors, red, basically, for high values. So what we begin to see is that there is a weighting of all the input values, where pixels that are likely to describe that number are being weighted high, and pixels that are not likely to describe that number are being weighted low. By summing all of these multiplications together, the network is able to begin to predict what number is in the image. This is not a very good network though, and the representations it learns could still do a much better job. We were only right about 93% of the time according to our accuracy. State of the art models will get about 99.9% accuracy.
<a name="convolutional-networks"></a>
Convolutional Networks
To get better performance, we can build a convolutional network. We've already seen how to create a convolutional network with our unsupervised model. We're going to make the same modifications here to help us predict the digit labels in MNIST.
Defining the Network
I'll first reset the current graph, so we can build a new one. We'll use tensorflow's nice helper function for doing this.
End of explanation
# We first get the graph that we used to compute the network
g = tf.get_default_graph()
# And can inspect everything inside of it
[op.name for op in g.get_operations()]
Explanation: And just to confirm, let's see what's in our graph:
End of explanation
# We'll have placeholders just like before which we'll fill in later.
ds = datasets.MNIST(one_hot=True, split=[0.8, 0.1, 0.1])
X = tf.placeholder(tf.float32, [None, 784])
Y = tf.placeholder(tf.float32, [None, 10])
Explanation: Great. Empty.
Now let's get our dataset, and create some placeholders like before:
End of explanation
X_tensor = tf.reshape(X, [-1, 28, 28, 1])
Explanation: Since X is currently [batch, height*width], we need to reshape to a
4-D tensor to use it in a convolutional graph. Remember, in order to perform convolution, we have to use 4-dimensional tensors describing the:
N x H x W x C
We'll reshape our input placeholder by telling the shape parameter to be these new dimensions and we'll use -1 to denote this dimension should not change size.
End of explanation
filter_size = 5
n_filters_in = 1
n_filters_out = 32
W_1 = tf.get_variable(
name='W',
shape=[filter_size, filter_size, n_filters_in, n_filters_out],
initializer=tf.random_normal_initializer())
Explanation: We'll now setup the first convolutional layer. Remember that the weight matrix for convolution should be
[height x width x input_channels x output_channels]
Let's create 32 filters. That means every location in the image, depending on the stride we set when we perform the convolution, will be filtered by this many different kernels. In session 1, we convolved our image with just 2 different types of kernels. Now, we're going to let the computer try to find out which 32 filters help it map the input to our desired output via our training signal.
End of explanation
b_1 = tf.get_variable(
name='b',
shape=[n_filters_out],
initializer=tf.constant_initializer())
Explanation: Bias is always [output_channels] in size.
End of explanation
h_1 = tf.nn.relu(
tf.nn.bias_add(
tf.nn.conv2d(input=X_tensor,
filter=W_1,
strides=[1, 2, 2, 1],
padding='SAME'),
b_1))
Explanation: Now we can build a graph which does the first layer of convolution:
We define our stride as batch x height x width x channels. This has the effect of resampling the image down to half of the size.
End of explanation
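The size arithmetic behind that (assuming 'SAME' padding, as used above): each spatial dimension becomes ceil(input / stride), which is also why the tensor is later flattened as 7 x 7 x n_filters_out after a second stride-2 layer.

```python
import math

def same_padding_out(size, stride):
    # Output spatial size of a 'SAME'-padded convolution.
    return math.ceil(size / stride)

print(same_padding_out(28, 2))  # 14 after the first conv layer
print(same_padding_out(14, 2))  # 7 after the second conv layer
```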
n_filters_in = 32
n_filters_out = 64
W_2 = tf.get_variable(
name='W2',
shape=[filter_size, filter_size, n_filters_in, n_filters_out],
initializer=tf.random_normal_initializer())
b_2 = tf.get_variable(
name='b2',
shape=[n_filters_out],
initializer=tf.constant_initializer())
h_2 = tf.nn.relu(
tf.nn.bias_add(
tf.nn.conv2d(input=h_1,
filter=W_2,
strides=[1, 2, 2, 1],
padding='SAME'),
b_2))
Explanation: And just like the first layer, add additional layers to create a deep net.
End of explanation
# We'll now reshape so we can connect to a fully-connected/linear layer:
h_2_flat = tf.reshape(h_2, [-1, 7 * 7 * n_filters_out])
Explanation: 4d -> 2d
End of explanation
# NOTE: This uses a slightly different version of the linear function than the lecture!
h_3, W = utils.linear(h_2_flat, 128, activation=tf.nn.relu, name='fc_1')
Explanation: Create a fully-connected layer:
End of explanation
# NOTE: This uses a slightly different version of the linear function than the lecture!
Y_pred, W = utils.linear(h_3, n_output, activation=tf.nn.softmax, name='fc_2')
Explanation: And one last fully-connected layer which will give us the correct number of outputs, and use a softmax to exponentially scale the outputs and convert them to a probability:
End of explanation
cross_entropy = -tf.reduce_sum(Y * tf.log(Y_pred + 1e-12))
optimizer = tf.train.AdamOptimizer().minimize(cross_entropy)
Explanation: <TODO: Draw as graphical representation>
Training the Network
The rest of the training process is the same as the previous network. We'll define loss/eval/training functions:
End of explanation
correct_prediction = tf.equal(tf.argmax(Y_pred, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
Explanation: Monitor accuracy:
End of explanation
sess = tf.Session()
sess.run(tf.global_variables_initializer())
Explanation: And create a new session to actually perform the initialization of all the variables:
End of explanation
batch_size = 50
n_epochs = 10
for epoch_i in range(n_epochs):
for batch_xs, batch_ys in ds.train.next_batch():
sess.run(optimizer, feed_dict={
X: batch_xs,
Y: batch_ys
})
valid = ds.valid
print(sess.run(accuracy,
feed_dict={
X: valid.images,
Y: valid.labels
}))
# Print final test accuracy:
test = ds.test
print(sess.run(accuracy,
feed_dict={
X: test.images,
Y: test.labels
}))
Explanation: Then we'll train in minibatches and report accuracy:
End of explanation
from libs.utils import montage_filters
W1 = sess.run(W_1)
plt.figure(figsize=(10, 10))
plt.imshow(montage_filters(W1), cmap='coolwarm', interpolation='nearest')
Explanation: <TODO: Fun timelapse of waiting>
Inspecting the Trained Network
Let's take a look at the kernels we've learned using the following montage function, similar to the one we've been using for creating image montages, except this one is suited for the dimensions of convolution kernels instead of 4-d images. So it has the height and width first, unlike images which have batch then height then width. We'll use this function to visualize every convolution kernel in the first and second layers of our network.
End of explanation
W2 = sess.run(W_2)
plt.imshow(montage_filters(W2 / np.max(W2)), cmap='coolwarm')
Explanation: What we're looking at are all of the convolution kernels that have been learned. Compared to the previous network we've learned, it is much harder to understand what's happening here. But let's try and explain these a little more. The kernels that have been automatically learned here are responding to edges of different scales, orientations, and rotations. It's likely these are really describing parts of letters, or the strokes that make up letters. Put another way, they are trying to get at the "information" in the image by seeing what changes.
That's a pretty fundamental idea. That information would be things that change. Of course, there are filters for things that aren't changing as well. Some filters may even seem to respond to things that are mostly constant. However, if our network has learned a lot of filters that look like that, it's likely that the network hasn't really learned anything at all. The flip side of this is if the filters all look more or less random. That's also a bad sign.
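One hypothetical way to quantify the "mostly constant filters" warning sign (this helper is not part of the lecture code): measure each kernel's standard deviation and count the nearly flat ones.

```python
import numpy as np

def flat_filter_fraction(W, tol=1e-3):
    # W has shape [height, width, in_channels, out_channels]; each
    # column after the reshape holds one filter's values.
    per_filter_std = W.reshape(-1, W.shape[-1]).std(axis=0)
    return float((per_filter_std < tol).mean())

rng = np.random.RandomState(0)
W_demo = rng.randn(5, 5, 1, 32)  # stand-in for a learned W_1
W_demo[..., 0] = 0.5             # force one filter to be constant
print(flat_filter_fraction(W_demo))  # 0.03125, i.e. 1 of 32 filters
```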
Let's try looking at the second layer's kernels:
End of explanation
import os
sess = tf.Session()
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
sess.run(init_op)
if os.path.exists("model.ckpt"):
saver.restore(sess, "model.ckpt")
print("Model restored.")
Explanation: It's really difficult to know what's happening here. There are many more kernels in this layer. They've already passed through a set of filters and an additional non-linearity. How can we really know what the network is doing to learn its objective function? The important thing for now is to see that most of these filters are different, and that they are not all constant or uniformly activated. That means it's really doing something, but we aren't really sure yet how to see how that affects the way we think of and perceive the image. In the next session, we'll learn more about how we can start to interrogate these deeper representations and try to understand what they are encoding. Along the way, we'll learn some pretty amazing tricks for producing entirely new aesthetics that eventually led to the "deep dream" viral craze.
<a name="savingloading-models"></a>
Saving/Loading Models
Tensorflow provides a few ways of saving/loading models. The easiest way is to use a checkpoint. Though, this really useful while you are training your network. When you are ready to deploy or hand out your network to others, you don't want to pass checkpoints around as they contain a lot of unnecessary information, and it also requires you to still write code to create your network. Instead, you can create a protobuf which contains the definition of your graph and the model's weights. Let's see how to do both:
<a name="checkpoint"></a>
Checkpoint
Creating a checkpoint requires you to have already created a set of operations in your tensorflow graph. Once you've done this, you'll create a session like normal and initialize all of the variables. After this, you create a tf.train.Saver which can restore a previously saved checkpoint, overwriting all of the variables with your saved parameters.
End of explanation
save_path = saver.save(sess, "./model.ckpt")
print("Model saved in file: %s" % save_path)
Explanation: Creating the checkpoint is easy. After a few iterations of training (depending on your application, somewhere around every 1/10 of the time it takes to train the full model), you'll want to write the saved model. You can do this like so:
End of explanation
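The scheduling side of that advice is just a modulus check inside the training loop. Here is a sketch with a stand-in save function (in the real code, save would call saver.save(sess, "./model.ckpt")):

```python
def train_with_checkpoints(n_steps, save_every, save=lambda step: None):
    saved_at = []
    for step in range(1, n_steps + 1):
        # ... run one optimization step here ...
        if step % save_every == 0:
            save(step)
            saved_at.append(step)
    return saved_at

print(train_with_checkpoints(10, 3))  # [3, 6, 9]
```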
path='./'
ckpt_name = './model.ckpt'
fname = 'model.tfmodel'
dst_nodes = ['Y']
g_1 = tf.Graph()
with tf.Session(graph=g_1) as sess:
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3))
# Replace this with some code which will create your tensorflow graph:
net = create_network()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.restore(sess, ckpt_name)
graph_def = tf.python.graph_util.convert_variables_to_constants(
sess, sess.graph_def, dst_nodes)
g_2 = tf.Graph()
with tf.Session(graph=g_2) as sess:
tf.train.write_graph(
tf.python.graph_util.extract_sub_graph(
graph_def, dst_nodes), path, fname, as_text=False)
Explanation: <a name="protobuf"></a>
Protobuf
The second way of saving a model is really useful for when you don't want to pass around the code for producing the tensors or computational graph itself. It is also useful for moving the code to deployment or for use in the C++ version of Tensorflow. To do this, you'll want to run an operation to convert all of your trained parameters into constants. Then, you'll create a second graph which copies the necessary tensors, extracts the subgraph, and writes this to a model. The summarized code below shows you how you could use a checkpoint to restore your models parameters, and then export the saved model as a protobuf.
End of explanation
with open("model.tfmodel", mode='rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='model')
Explanation: When you wanted to import this model, now you wouldn't need to refer to the checkpoint or create the network by specifying its placeholders or operations. Instead, you'd use the import_graph_def operation like so:
End of explanation |
8,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power spectral density (PSD) of VectorView and OPM data
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline <bst_omega_>.
The steps we use are
Step1: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
Step2: Do some minimal artifact rejection just for VectorView data
Step3: Explore data
Step4: Alignment and forward
Step5: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
Step7: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
Step8: Alpha
Step9: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
Step10: Gamma | Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans_fname = None
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
Explanation: Compute source power spectral density (PSD) of VectorView and OPM data
Here we compute the resting state from raw for data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm [1]
OMEGA resting tutorial pipeline <bst_omega_>.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
Preprocessing
End of explanation
raws = dict()
raw_erms = dict()
new_sfreq = 90. # Nyquist frequency (45 Hz) < line noise freq (50 Hz)
raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error') # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)
# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
Explanation: Load data, resample. We will store the raw objects in dicts with entries
"vv" and "opm" to simplify housekeeping and simplify looping later.
End of explanation
titles = dict(vv='VectorView', opm='OPM')
ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also the original empty-room
# projectors)
ssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(
raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')
raws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)
raw_erms['vv'].add_proj(ssp_ecg_eog)
fig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],
info=raws['vv'].info)
fig.suptitle(titles['vv'])
fig.subplots_adjust(0.05, 0.05, 0.95, 0.85)
Explanation: Do some minimal artifact rejection just for VectorView data
End of explanation
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
fig.suptitle(titles[kind])
fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
Explanation: Explore data
End of explanation
# Here we use a reduced size source space (oct5) just for speed
src = mne.setup_source_space(
subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)
# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['dist'], src[1]['dist']
bem = mne.read_bem_solution(bem_fname)
fwd = dict()
trans = dict(vv=vv_trans_fname, opm=opm_trans_fname)
# check alignment and generate forward
with mne.use_coil_def(opm_coil_def_fname):
for kind in kinds:
dig = True if kind == 'vv' else False
fig = mne.viz.plot_alignment(
raws[kind].info, trans=trans[kind], subject=subject,
subjects_dir=subjects_dir, dig=dig, coord_frame='mri',
surfaces=('head', 'white'))
mne.viz.set_3d_view(figure=fig, azimuth=0, elevation=90,
distance=0.6, focalpoint=(0., 0., 0.))
fwd[kind] = mne.make_forward_solution(
raws[kind].info, trans[kind], src, bem, eeg=False, verbose=True)
del trans, src, bem
Explanation: Alignment and forward
End of explanation
freq_bands = dict(
delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())
snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
noise_cov = mne.compute_raw_covariance(raw_erms[kind])
inverse_operator = mne.minimum_norm.make_inverse_operator(
raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
raws[kind], inverse_operator, lambda2=lambda2,
n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
stc_norm = stc_psd.sum() # same operation on MNE object, sum across freqs
# Normalize each source point by the total power across freqs
for band, limits in freq_bands.items():
data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
topos[kind][band] = mne.EvokedArray(
100 * data / topo_norm, sensor_psd.info)
stcs[kind][band] = \
100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
del inverse_operator
del fwd, raws, raw_erms
Explanation: Compute and apply inverse to PSD estimated using multitaper + Welch.
Group into frequency bands, then normalize each source point and sensor
independently. This makes the value of each sensor point and source location
in each frequency band the percentage of the PSD accounted for by that band.
End of explanation
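The normalization itself is simple to sketch with plain NumPy (shapes and values below are made up; the real code works on the EvokedArray and SourceEstimate objects above):

```python
import numpy as np

rng = np.random.RandomState(0)
psd = np.abs(rng.randn(4, 45))  # fake PSD: 4 sensors x 1..45 Hz bins
freqs = np.arange(1, 46)

total = psd.sum(axis=1, keepdims=True)
alpha = (freqs >= 8) & (freqs <= 12)
alpha_pct = 100 * psd[:, alpha].sum(axis=1, keepdims=True) / total
print(alpha_pct.ravel())  # percent of each sensor's power in 8-12 Hz
```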
def plot_band(kind, band):
"""Plot activity within a frequency band on the subject's brain."""
title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
fig = topos[kind][band].plot_topomap(
times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
time_format=title)
brain = stcs[kind][band].plot(
subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
time_label=title, title=title, colormap='inferno',
clim=dict(kind='percent', lims=(70, 85, 99)), smoothing_steps=10)
brain.show_view(dict(azimuth=0, elevation=0), roll=0)
return fig, brain
fig_theta, brain_theta = plot_band('vv', 'theta')
Explanation: Now we can make some plots of each frequency band. Note that the OPM head
coverage is only over right motor cortex, so only localization
of beta is likely to be worthwhile.
Theta
End of explanation
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
Explanation: Alpha
End of explanation
fig_beta, brain_beta = plot_band('vv', 'beta')
fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
Explanation: Beta
Here we also show OPM data, which shows a profile similar to the VectorView
data beneath the sensors.
End of explanation
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
Explanation: Gamma
End of explanation |
8,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Using TensorBoard in notebooks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Import TensorFlow, datetime, and os.
Step3: TensorBoard in notebooks
Download the FashionMNIST dataset and scale it.
Step4: Create a very simple model.
Step5: Train the model using Keras and the TensorBoard callback.
Step6: Start TensorBoard within the notebook using magics.
Step7: <!-- <img class="tfo-display-only-on-site" src="https
Step8: <!-- <img class="tfo-display-only-on-site" src="https
Step9: You can use the tensorboard.notebook APIs for a bit more control. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# Load the TensorBoard notebook extension
%load_ext tensorboard
Explanation: Using TensorBoard in notebooks
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/tensorboard_in_notebooks.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/tensorboard_in_notebooks.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tensorboard/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
TensorBoard can be used directly within notebook experiences such as Colab and Jupyter. This can be helpful for sharing results, integrating TensorBoard into existing workflows, and using TensorBoard without installing anything locally.
Setup
Start by installing TF 2.0 and loading the TensorBoard notebook extension.
For Jupyter users: If you've installed Jupyter and TensorBoard into the same virtualenv, then you should be good to go. If you're using a more complicated setup, like a global Jupyter installation and kernels for different Conda/virtualenv environments, then you must ensure that the tensorboard binary is on your PATH inside the Jupyter notebook context. One way to do this is to modify the kernel_spec to prepend the environment's bin directory to PATH, as described here.
For Docker users: In case you are running a <a>Docker image of Jupyter Notebook server using TensorFlow's nightly</a>, it is necessary to expose not only the notebook's port, but the TensorBoard's port. Run the container with the following command:
docker run -it -p 8888:8888 -p 6006:6006 \
tensorflow/tensorflow:nightly-py3-jupyter
where -p 6006 is the default TensorBoard port. This allocates a port for one TensorBoard instance; to run concurrent instances, allocate more ports. Also, pass --bind_all to %tensorboard to expose the port outside the container.
End of explanation
import tensorflow as tf
import datetime, os
Explanation: Import TensorFlow, datetime, and os:
End of explanation
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
Explanation: TensorBoard in notebooks
Download the FashionMNIST dataset and scale it:
End of explanation
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
Explanation: Create a very simple model:
End of explanation
def train_model():
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
train_model()
Explanation: Train the model using Keras and the TensorBoard callback:
End of explanation
%tensorboard --logdir logs
Explanation: Start TensorBoard within the notebook, using magics:
End of explanation
%tensorboard --logdir logs
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard.png?raw=1"/> -->
You can now view dashboards such as scalars, graphs, histograms, and others. Some dashboards are not available yet in Colab (such as the profile plugin).
The %tensorboard magic has exactly the same format as the TensorBoard command line invocation, but with a %-sign in front of it.
You can also start TensorBoard before training to monitor it in progress:
End of explanation
train_model()
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard_two_runs.png?raw=1"/> -->
Issuing the same command reuses the same TensorBoard backend. If a different logs directory were chosen, a new TensorBoard instance would be opened. Ports are managed automatically.
When you start training a new model, TensorBoard refreshes automatically every 30 seconds, or you can reload it using the button at the top right.
End of explanation
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
# Control TensorBoard display. If no port is provided,
# the most recently launched TensorBoard is used
notebook.display(port=6006, height=1000)
Explanation: The tensorboard.notebook API gives you a bit more control.
End of explanation |
8,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It appears the two states are equivalent, which means this is a single state. This is the spacetime equivalent of a fair coin, so this is the desired result. Which makes me feel better about the local epsilon machine constructed from the lightcone equivalence relation.
*** This has been fixed by excluding the present from future light cones. This eliminates the state splitting issue in the reconstruction algorithm ***
Step1: We would like the intrinsic randomness of this field to be 1 bit in both time and space. Here we have three bits for both with past depth 1. (Still need to change the code to use the correct value of depth.)
Step2: It seems we can get correct value of 1 bit of uncertainty if we treat each direction separately (not just each dimension) and divide the branching uncertainty by the size of the fringe along that direction. This procedure does make some sense, and it's good that it works out in this simple case. | Python Code:
state_overlay_diagram(field, random_states.get_causal_field(), t_max = 50, x_max = 50)
for state in random_states.causal_states():
print state.plc_configs()
for state in random_states.causal_states():
print state.morph()
t_trans = random_states.all_transitions(zipped = False)[1]
print np.unique(t_trans)
print np.log(8)/np.log(2)
print random_states.entropy_rate('forward')
print random_states.entropy_rate('right')
print random_states.entropy_rate('left')
Explanation: It appears the two states are equivalent, which means this is a single state. This is the spacetime equivalent of a fair coin, so this is the desired result. Which makes me feel better about the local epsilon machine constructed from the lightcone equivalence relation.
*** This has been fixed by excluding the present from future light cones. This eliminates the state splitting issue in the reconstruction algorithm ***
End of explanation
random_states = epsilon_field(random_field(600,600))
random_states.estimate_states(3,2,1)
random_states.filter_data()
t_trans = random_states.all_transitions(zipped = False)[1]
print np.unique(t_trans)
print np.log(32)/np.log(2)
print np.log(8)/np.log(2)
Explanation: We would like the intrinsic randomness of this field to be 1 bit in both time and space. Here we have three bits for both with past depth 1. (Still need to change the code to use the correct value of depth.)
End of explanation
wildcard_field = wildcard_tiling(1000,1000)
wildcard_states = epsilon_field(wildcard_field)
wildcard_states.estimate_states(3,3,1)
wildcard_states.filter_data()
print wildcard_states.number_of_states()
Explanation: It seems we can get correct value of 1 bit of uncertainty if we treat each direction separately (not just each dimension) and divide the branching uncertainty by the size of the fringe along that direction. This procedure does make some sense, and it's good that it works out in this simple case.
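As a sanity check on that 1-bit target, the branching uncertainty of a fair coin can be computed directly with a small self-contained helper (the fringe size of 3 used at the end is an assumption chosen to match the 3-bit figure above):

```python
import numpy as np

def shannon_entropy(probs):
    """Entropy, in bits, of a discrete probability distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # treat 0*log(0) as 0
    return float(-np.sum(p * np.log2(p)))

# A fair coin carries exactly 1 bit of uncertainty:
h_coin = shannon_entropy([0.5, 0.5])
# Eight equally likely configurations give the 3-bit value
# (np.log(8)/np.log(2)) printed by the cells above:
h_fringe = shannon_entropy([1.0 / 8] * 8)
print(h_coin)        # 1.0
print(h_fringe / 3)  # 1.0 -- dividing by a fringe of size 3 recovers 1 bit
```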
End of explanation |
8,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Progress report
Neural Networks
Sara Jones
Abstract
This project will take handwritten digits 0 to 9 and recognize them through a computer-learning program. The neural network will require training sets to 'teach' the network how to recognize the individualities of the different digits and return the proper identification. The network will be required to know the differences between different styles of handwriting (such as bars or no bars in sevens) and account for other factors such as messy handwriting. These factors will be determined by giving weights to the characteristics of each digit (accounting for various stylization differences) to determine what factors are important for identification of a digit and what can be given less weight or even ignored in identification.
Base question
The base question for this project is taking handwritten numbers and recognizing them through a neural network. This will require a computerized learning system that must be trained to recognize the digits. This network should have over 90% accuracy when recognizing handwritten digits.
Additional questions
This project will also attempt to take handwritten numbers that are more than one digit (10 or greater) and recognize them. This will have to take into account stylization factors such as commas in writing larger numbers and spacing between digits. This network will also attempt to integrate more hidden layers into the network to train and work with more accuracy and efficiency.
Packages and Libraries Needed
Step2: Core Algorithms
Neuron
Step3: Neuron Layer
Step4: Training Set
Step5: Visualizations
Can be used for training set and learning to recognize the characteristics of each digit. This will be used as part of the training set. | Python Code:
import numpy as np
import math
import random
import string
from scipy import optimize
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.html.widgets import interact
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.data.shape)
Explanation: Progress report
Neural Networks
Sara Jones
Abstract
This project will take handwritten digits 0 to 9 and recognize them through a computer-learning program. The neural network will require training sets to 'teach' the network how to recognize the individualities of the different digits and return the proper identification. The network will be required to know the differences between different styles of handwriting (such as bars or no bars in sevens) and account for other factors such as messy handwriting. These factors will be determined by giving weights to the characteristics of each digit (accounting for various stylization differences) to determine what factors are important for identification of a digit and what can be given less weight or even ignored in identification.
Base question
The base question for this project is taking handwritten numbers and recognizing them through a neural network. This will require a computerized learning system that must be trained to recognize the digits. This network should have over 90% accuracy when recognizing handwritten digits.
Additional questions
This project will also attempt to take handwritten numbers that are more than one digit (10 or greater) and recognize them. This will have to take into account stylization factors such as commas in writing larger numbers and spacing between digits. This network will also attempt to integrate more hidden layers into the network to train and work with more accuracy and efficiency.
Packages and Libraries Needed
End of explanation
class Neuron:
    """Neuron that weights what is put in: each aspect of the input gets its own weight."""
    def __init__(self, n_inputs ):
        self.n_inputs = n_inputs
        self.set_weights( [random.uniform(0,1) for x in range(0,n_inputs+1)] )
    def sum(self, inputs ):
        # weighted sum of the inputs (the extra trailing weight acts as the bias)
        return sum(val*self.weights[i] for i,val in enumerate(inputs))
    def set_weights(self, weights ):
        self.weights = weights
    def __str__(self):
        return 'Weights: %s Bias: %s' % ( str(self.weights[:-1]), str(self.weights[-1]) )
Explanation: Core Algorithms
Neuron
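The weighted-sum idea above can be checked with a minimal, self-contained neuron (re-defined here with fixed weights instead of random ones so the output is deterministic; MiniNeuron is an illustrative name, not part of the project code):

```python
class MiniNeuron:
    """Weighted sum of the inputs, with the last weight acting as a bias."""
    def __init__(self, weights):
        self.weights = weights
    def summed(self, inputs):
        # dot product with the input weights, plus the trailing bias weight
        return sum(v * w for v, w in zip(inputs, self.weights[:-1])) + self.weights[-1]

n = MiniNeuron([0.5, -1.0, 0.25])  # two input weights and a bias
print(n.summed([2.0, 3.0]))        # 0.5*2.0 + (-1.0)*3.0 + 0.25 = -1.75
```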
End of explanation
class NeuronLayer:
    def __init__(self, n_neurons, n_inputs):
        self.n_neurons = n_neurons
        self.neurons = [Neuron( n_inputs ) for _ in range(0,self.n_neurons)]
    def __str__(self):
        return 'Layer:\n\t'+'\n\t'.join([str(neuron) for neuron in self.neurons])+''
Explanation: Neuron Layer
End of explanation
def learn(network, X, y, learning_rate=0.2, epochs=10000):
    # add a bias column of ones to the inputs
    X = np.atleast_2d(X)
    temp = np.ones([X.shape[0], X.shape[1]+1])
    temp[:, 0:-1] = X
    X = temp
    y = np.array(y)
    for epoch in range(epochs):
        # stochastic training: use one randomly chosen sample per pass
        k = np.random.randint(X.shape[0])
        a = [X[k]]
        # forward pass through every layer of the network
        for j in range(len(network.weights)):
            a.append(network.activation(np.dot(a[j], network.weights[j])))
        error = y[k] - a[-1]
        deltas = [error * network.activation_deriv(a[-1])]
        # backpropagate the error through the hidden layers
        for i in range(len(a) - 2, 0, -1):
            deltas.append(deltas[-1].dot(network.weights[i].T)*network.activation_deriv(a[i]))
        deltas.reverse()
        # gradient-descent update of every weight matrix
        for i in range(len(network.weights)):
            layer = np.atleast_2d(a[i])
            delta = np.atleast_2d(deltas[i])
            network.weights[i] += learning_rate * layer.T.dot(delta)
    return network.weights
Explanation: Training Set
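The bias-column trick at the top of learn can be seen in isolation (a small sketch with made-up inputs, independent of the network object):

```python
import numpy as np

X = np.atleast_2d([[0.2, 0.7], [0.9, 0.1]])
# append a column of ones so the bias is learned like any other weight
temp = np.ones([X.shape[0], X.shape[1] + 1])
temp[:, 0:-1] = X
print(temp.shape)  # (2, 3) -- one extra column holding the bias inputs
```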
End of explanation
def show_digit(i):
plt.matshow(digits.images[i]);
print (show_digit(0))
print (show_digit(1))
print (show_digit(2))
print (show_digit(3))
print (show_digit(4))
print (show_digit(5))
print (show_digit(6))
print (show_digit(7))
print (show_digit(8))
print (show_digit(9))
interact(show_digit, i=(0,100));
Explanation: Visualizations
Can be used for training set and learning to recognize the characteristics of each digit. This will be used as part of the training set.
End of explanation |
8,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introdução
Esse notebook traz as analises pedidas na disciplina Biologia Evolutiva - Bio507
Neste tutorial vamos utilizar a a linguagem Python 2.7 e os pacotes numpy, pandas, dendropy e matplotlib
Leitura de dados
Step1: Primeiro vamos ler os dados usando o pandas.
A função read_csv cria um data.frame com os dados.
Step2: Podemos usar a função describe para ter um resumo global, olhando para médias, desvio padrão e quartis.
Step3: Podemos também usar a função groupby para aplicar funções em sub conjuntos dos dados, agrupando pela coluna ESPECIE
Step4: Vamos calcular as médias, desvios padrão e coeficiantes de variação por caractere por espécie.
Step5: Visualizando os dados
Vamos usar o pacote matplotlib para fazer gráficos diagnósticos dos dados.
Step6: Calculando matrizes de covariância
Vamos calcular as matrizes de covariância e correlação por espécie, além das médias, e agrupá-las num dicionário.
Step7: Podemos também acessar as matrizes pelo nome da espécie
Step8: Ou fazer um heatplot de todas elas
Step9: Analise de variância
Calculando as matrizes dentro de grupos, entre grupos e total
Step10: Comparando as matrizes
Step11: Filogenia
Vamos incluir uma filogenia e calcular estados ancestrais
Step12: Esse código usa as matrizes e médias calculadas anteriormente, junto com os tamanhos amostrais, para calcular valores ponderados para todos os nós da filogenia.
A conta realizada para médias e matrizes é uma simples média ponderada.
Step13: Isso resulta num dicionário, chamado node_matrices, com todas as matrizes para todos os nós.
Step14: Um dicionário de tamanhos amostrais
Step15: E um dicionário de médias.
Step16: Temos tb uma lista de todos os nós
Step17: É interessante notar como a matriz estimada para a raiz, ponderando as matrizes ao longo da filogenia, é idêntica à matriz ponderada dentro de grupos, calculada pelos modelos lineares anteriormente
Step18: $\beta$ e $\Delta z$
Vamos agora estimar as mudanças evolutivas em cada ramo da filogenia, e, usando as matrizes ancestrais, calcular os gradientes de seleção estimados.
Os $\Delta z$ são apenas as diferenças nas médias de um nó com o seu acestral. Os $\beta$ são estimados com a equação de Lande
Step19: Podemos agora calcular a correlação entre os $\beta$ e $\Delta z$
Step20: Podemos também calcular a relação entre a resposta evolutiva e o primeiro componente principal da matriz de covariação, a linha de menor resistência evolutiva.
Como a direção primeiro componente é arbitrária, tomamos o valor absoluto da correlação.
Step21: Podemos utilizar o pandas para formatar esses resultados em tabelas | Python Code:
import numpy as np
import pandas as pd
import dendropy as dp
import matplotlib as mpl
Explanation: Introduction
This notebook contains the analyses requested for the course Evolutionary Biology - Bio507
In this tutorial we will use the Python 2.7 language and the numpy, pandas, dendropy and matplotlib packages
Reading the data
End of explanation
import numpy as np
import pandas as pd
import dendropy as dp
import matplotlib as mpl
dados_brutos = pd.read_csv("./dados.csv")
num_traits = 4
traits = dados_brutos.columns[:num_traits]
dados_brutos.head(10)
Explanation: First we read the data using pandas.
The read_csv function creates a DataFrame with the data.
End of explanation
dados_brutos.describe()
Explanation: We can use the describe function to get a global summary, looking at means, standard deviations and quartiles.
End of explanation
dados_brutos.groupby('ESPECIE').describe()
Explanation: We can also use the groupby function to apply functions to subsets of the data, grouping by the ESPECIE column
End of explanation
medias = dados_brutos.groupby('ESPECIE').mean()
medias
std = dados_brutos.groupby('ESPECIE').std()
std
cv = std/medias
cv
Explanation: Let's compute the means, standard deviations and coefficients of variation per trait per species.
End of explanation
# histograms
# scatter plots
# boxplots
Explanation: Visualizing the data
We will use the matplotlib package to make diagnostic plots of the data.
End of explanation
cov_matrices = dados_brutos.groupby('ESPECIE').apply(lambda x: x.cov())
cor_matrices = dados_brutos.groupby('ESPECIE').apply(lambda x: x.corr())
especies_labels = list(pd.unique(dados_brutos['ESPECIE']))
cov_matrices
cor_matrices
Explanation: Computing covariance matrices
Let's compute the covariance and correlation matrices per species, along with the means, and gather them in a dictionary.
End of explanation
cor_matrices.T['C']
Explanation: We can also access the matrices by species name:
End of explanation
#heat plots
Explanation: Or make a heat plot of all of them
End of explanation
# Within-group and between-group matrices
Explanation: Analysis of variance
Computing the within-group, between-group and total matrices
End of explanation
# RandomSkewers
Explanation: Comparing the matrices
End of explanation
tree = dp.Tree.get_from_string("(E, ((C, B)4,(A,D)3)2)1;", "newick")
tree.print_plot(display_width = 50, show_internal_node_labels = True, leaf_spacing_factor = 4)
Explanation: Phylogeny
Let's include a phylogeny and compute ancestral states
End of explanation
get_node_name = lambda n: str(n.label or n.taxon or None)
nodes = [get_node_name(n) for n in tree.nodes()]
node_matrices = {}
node_sample_size = {}
for sp in especies_labels:
new_matrix = np.array(cov_matrices.T[sp])
node_matrices[sp] = new_matrix
node_sample_size[sp] = dados_brutos[dados_brutos['ESPECIE'] == sp].shape[0]
# Remove species that are not in the phylogeny and swap the keys
node_means = {}
for sp in especies_labels:
if tree.find_node_with_taxon_label(sp):
new_key = get_node_name(tree.find_node_with_taxon_label(sp))
node_means[new_key] = medias.T[sp]
node_sample_size[new_key] = node_sample_size.pop(sp)
node_matrices[new_key] = node_matrices.pop(sp)
else:
node_matrices.pop(sp)
node_sample_size.pop(sp)
# Function that takes a list of children and computes the ancestor's matrix, mean and sample size
def ancestral_mean(child_labels):
new_matrix = np.zeros((num_traits, num_traits))
sample = 0
new_mean = np.zeros(num_traits)
for child in child_labels:
node = get_node_name(child)
new_matrix = new_matrix +\
node_sample_size[node] * node_matrices[node]
sample = sample + node_sample_size[node]
new_mean = new_mean + node_sample_size[node] * node_means[node]
new_matrix = new_matrix/sample
new_mean = new_mean/sample
return new_matrix, sample, new_mean
# Compute the matrices and sample sizes for every node
for n in tree.postorder_node_iter():
if get_node_name(n) not in node_matrices:
node_matrices[get_node_name(n)], node_sample_size[get_node_name(n)], node_means[get_node_name(n)] = ancestral_mean(n.child_nodes())
Explanation: This code uses the matrices and means computed earlier, together with the sample sizes, to compute weighted values for every node of the phylogeny.
The calculation performed for means and matrices is a simple weighted average.
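The weighted average that ancestral_mean performs can be checked on two made-up 2x2 child matrices (invented numbers, not the notebook's data):

```python
import numpy as np

# two hypothetical child covariance matrices and their sample sizes
m_a, m_b = np.eye(2), 3.0 * np.eye(2)
n_a, n_b = 10, 30
pooled = (n_a * m_a + n_b * m_b) / (n_a + n_b)
print(pooled[0, 0])  # (10*1.0 + 30*3.0) / 40 = 2.5
```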
End of explanation
node_matrices
Explanation: This results in a dictionary, called node_matrices, with all the matrices for all the nodes.
End of explanation
node_sample_size
Explanation: A dictionary of sample sizes
End of explanation
node_means['1']
Explanation: And a dictionary of means.
End of explanation
nodes
Explanation: We also have a list of all the nodes:
End of explanation
w_matrix
node_matrices['4']
Explanation: It is interesting to note how the matrix estimated for the root, weighting the matrices along the phylogeny, is identical to the pooled within-group matrix computed by the earlier linear models:
End of explanation
delta_z = {}
beta = {}
for n in tree.nodes()[1:]: # start at 1 to skip the root, which has no ancestor
parent = get_node_name(n.parent_node)
branch = get_node_name(n) + '_' + parent
delta_z[branch] = node_means[get_node_name(n)] - node_means[parent]
beta[branch] = np.linalg.solve(node_matrices[parent], delta_z[branch])
delta_z
beta
Explanation: $\beta$ and $\Delta z$
We will now estimate the evolutionary changes along each branch of the phylogeny and, using the ancestral matrices, compute the estimated selection gradients.
The $\Delta z$ are simply the differences between the means of a node and its ancestor. The $\beta$ are estimated with the Lande equation:
$$
\beta = G^{-1}\Delta z
$$
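A tiny numerical check of the Lande equation, with invented 2-trait numbers (not the notebook's data):

```python
import numpy as np

# hypothetical ancestral covariance matrix G and observed change in means dz
G = np.array([[1.0, 0.5],
              [0.5, 2.0]])
dz = np.array([0.3, -0.1])

# beta = G^{-1} dz, solved without forming the inverse explicitly
beta = np.linalg.solve(G, dz)
print(np.allclose(G.dot(beta), dz))  # True -- applying G to beta recovers dz
```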
End of explanation
def vector_corr(x, y): return (np.dot(x, y)/(np.linalg.norm(x)*np.linalg.norm(y)))
corr_beta_delta_z = {}
for branch in delta_z:
corr_beta_delta_z[branch] = vector_corr(beta[branch], delta_z[branch])
corr_beta_delta_z
Explanation: We can now compute the correlation between the $\beta$ and $\Delta z$
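vector_corr is just the cosine of the angle between the two vectors; a quick sanity check with made-up vectors (the function is re-defined here so the snippet stands alone):

```python
import numpy as np

def cosine_corr(x, y):
    # same formula as vector_corr above: cosine of the angle between x and y
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(cosine_corr([1.0, 0.0], [2.0, 0.0]))  # 1.0 for parallel vectors
print(cosine_corr([1.0, 0.0], [0.0, 5.0]))  # 0.0 for orthogonal vectors
```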
End of explanation
corr_pc1 = {}
for branch in delta_z:
parent = branch.split("_")[1]
pc1 = np.linalg.eig(node_matrices[parent])[1][:,0]
corr_pc1[branch] = abs(vector_corr(delta_z[branch], pc1))
corr_pc1
Explanation: We can also compute the relation between the evolutionary response and the first principal component of the covariance matrix, the evolutionary line of least resistance.
Since the direction of the first principal component is arbitrary, we take the absolute value of the correlation.
End of explanation
df_betas = pd.DataFrame.from_dict(beta, orient='index')
df_betas.columns = traits
df_betas
df_dz = pd.DataFrame.from_dict(delta_z, orient='index')
df_dz.columns = traits
df_dz
traits_c = list(traits)
traits_c.append('otu')
df_matrices = pd.DataFrame(columns=traits_c)
for node in node_matrices:
df = pd.DataFrame(node_matrices[node], columns=traits, index = traits)
df['otu'] = node
df_matrices = df_matrices.append(df)
df_matrices
Explanation: We can use pandas to format these results as tables
End of explanation |
8,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example code that shows how to plot the function value and a decision
boundary for a simple logistic regression model using python
Author
Step1: Create the domain for the plot
Step2: Make the plots | Python Code:
%matplotlib inline
import numpy as np
from scipy.special import expit
import matplotlib.pyplot as plt
# define our hypothesis (vectorized!)
def f(x):
return expit(np.matrix([0, 1, -.5,.5])*x);
Explanation: Example code that shows how to plot the function value and a decision
boundary for a simple logistic regression model using python
Author: Nathan Jacobs (with some initial code by Hampton Young)
End of explanation
x_min = -5; x_max = 5
y_min = -5; y_max = 5
x1 = np.linspace(x_min, x_max, 200)
y1 = np.linspace(y_min, y_max , 200)
x,y = np.meshgrid(x1, y1)
#
# evalute it in a vectorized way (and reshape into a matrix)
#
# make a 3 x N matrix of the sample points
data = np.vstack((
np.ones(x.size), # add the bias term
x.ravel(), # make the matrix into a vector
y.ravel(),
y.ravel()**2)) # add a quadratic term for fun
z = f(data)
z = z.reshape(x.shape)
Explanation: Create the domain for the plot
End of explanation
# show the function value in the background
cs = plt.imshow(z,
extent=(x_min,x_max,y_max,y_min), # define limits of grid, note reversed y axis
cmap=plt.cm.jet)
plt.clim(0,1) # defines the value to assign the min/max color
# draw the line on top
levels = np.array([.5])
cs_line = plt.contour(x,y,z,levels)
# add a color bar
CB = plt.colorbar(cs)
plt.show()
Explanation: Make the plots
End of explanation |
8,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]

    Parameters
    ----------
    m : float
        The slope of the line.
    b : float
        The y-intercept of the line.
    sigma : float
        The standard deviation of the y direction normal distribution noise.
    size : int
        The number of points to create for the line.

    Returns
    -------
    x : array of floats
        The array of x values for the line with `size` points.
    y : array of floats
        The array of y values for the lines with `size` points.
    """
    x = np.linspace(-1.0, 1.0, size)
    if sigma == 0.0:
        # guard the sigma=0.0 case: return points exactly on the line
        y = m * x + b
    else:
        y = m * x + b + np.random.normal(0.0, sigma, size)
    return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
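The sigma=0.0 case matters because the noise term disappears entirely; the noisy case can be checked on its own with numpy (illustrative slope and intercept, not tied to the function above):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 5000)
noise = np.random.normal(0.0, 0.5, 5000)
y = 2.0 * x + 1.0 + noise
resid = y - (2.0 * x + 1.0)
# the residuals should look like N(0, 0.5**2)
print(abs(resid.mean()) < 0.05, abs(resid.std() - 0.5) < 0.05)
```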
End of explanation
def ticks_out(ax):
    """Move the ticks to the outside of the box."""
    ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
    ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
    x, y = random_line(m, b, sigma, size)
    f, ax = plt.subplots(figsize=(9, 6))
    ax.scatter(x, y, color=color)
    ticks_out(ax)
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-10.0, 10.0)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line,
         m=(-10.0, 10.0, 0.1),
         b=(-5.0, 5.0, 0.1),
         sigma=(0.0, 5.0, 0.01),
         size=(10, 100, 10),
         color=['red', 'green', 'blue']);
assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
8,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass.
You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you starting trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step11: OPTIONAL | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: My first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
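Since the scaling must be undone later, when we plot predictions in real units, here is a tiny round-trip sketch of the idea (the numbers are made up; in the notebook the per-column factors live in the scaled_features dictionary):

```python
# Standardize a toy feature, then invert the transform with the saved factors.
# Made-up numbers; the notebook stores (mean, std) per column in `scaled_features`.
values = [16.0, 40.0, 32.0, 13.0]

mean = sum(values) / len(values)
# Population std for simplicity; pandas' .std() uses the sample version (ddof=1).
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

scaled = [(v - mean) / std for v in values]        # forward: zero mean, unit std
restored = [s * std + mean for s in scaled]        # backward: original units

assert all(abs(r - v) < 1e-9 for r, v in zip(restored, values))
```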
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
self.activation_function = lambda x : 1 / (1 + np.exp(-x))
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
### Forward pass ###
hidden_inputs = np.dot(self.weights_input_to_hidden.T, X)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(self.weights_hidden_to_output.T, hidden_outputs)
final_outputs = final_inputs
### Backward pass ###
error = y - final_outputs
output_error_term = error
hidden_error = output_error_term * self.weights_hidden_to_output
hidden_error_term = hidden_error.T * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
self.weights_hidden_to_output += self.lr / n_records * delta_weights_h_o
self.weights_input_to_hidden += self.lr / n_records * delta_weights_i_h
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
hidden_inputs = np.dot(features, self.weights_input_to_hidden)
hidden_outputs = self.activation_function(hidden_inputs)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass.
You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has three layers: an input layer, a hidden layer and an output layer.
The hidden layer will use the sigmoid function for activations.
The output layer has only one node and is used for the regression.
The output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you starting trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 2000
learning_rate = 1
hidden_nodes = 6
output_nodes = 1
batch_size = 128
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=batch_size)
X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim((0,1.5))
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
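To make the SGD idea concrete, here is a minimal toy sketch (my own synthetic example, not the project code): we fit a single weight w in y ≈ w·x by sampling one data point per update instead of using the whole set each time.

```python
import random

random.seed(0)

# Toy data from y = 3x (synthetic numbers, not the bike data).
xs = [x / 100 for x in range(1, 101)]
ys = [3.0 * x for x in xs]

w = 0.0       # single weight to learn
lr = 0.1
for _ in range(2000):
    i = random.randrange(len(xs))    # grab ONE random sample per pass: "stochastic"
    err = ys[i] - w * xs[i]
    w += lr * err * xs[i]            # gradient step for the squared error on that sample

assert abs(w - 3.0) < 1e-3           # converges to the true slope
```

Each update is cheap because it touches one sample, and over many passes the noisy steps average out to the full-batch direction.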
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(16,8))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.plot(((test_features['weekday_0'] + test_features['weekday_6'])*std + mean).values, label='Weekend')
ax.plot((test_features['holiday']*std + mean).values, label='Holiday')
ax.set_xlim(right=len(predictions[0]))
ax.legend()
dates = pd.to_datetime(rides.iloc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
ax.set_ylim((-100,1000));
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
fig, ax = plt.subplots(figsize=(16,8))
mean, std = scaled_features['cnt']
predictions = network.run(train_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((train_targets['cnt']*std + mean).values, label='Data')
ax.plot(((train_features['weekday_0'] + train_features['weekday_6'])*std + mean).values, label='Weekend')
ax.plot((train_features['holiday']*std + mean).values, label='Holiday')
ax.legend()
dates = pd.to_datetime(rides.iloc[data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
ax.set_xlim((8150,8644))
ax.set_ylim((-100,1000))
ax.set_title('"Predictions" on the training set');
fig, ax = plt.subplots()
heatmap = ax.pcolor(network.weights_input_to_hidden, cmap=plt.cm.Blues)
fig = plt.gcf()
fig.set_size_inches(7, 15)
ax.set_xticks(np.arange(hidden_nodes) + 0.5, minor=False)
ax.set_xticklabels([str(x+1) for x in range(0,hidden_nodes)], minor=False)
ax.set_yticks(np.arange(features.columns.shape[0]) + 0.5, minor=False)
ax.set_yticklabels(features.columns, minor=False)
ax.invert_yaxis()
fig.colorbar(heatmap)
ax.set_title('Hidden layer weights')
ax.set_xlabel('Hidden neuron');
fig, ax = plt.subplots()
heatmap = ax.pcolor(network.weights_hidden_to_output, cmap=plt.cm.Blues)
fig = plt.gcf()
ax.invert_yaxis()
ax.set_xticks(np.arange(output_nodes) + 0.5, minor=False)
ax.set_xticklabels([str(x+1) for x in range(0,output_nodes)], minor=False)
ax.set_yticks(np.arange(hidden_nodes) + 0.5, minor=False)
ax.set_yticklabels([str(x+1) for x in range(0,hidden_nodes)], minor=False)
fig.colorbar(heatmap)
ax.set_title('Output layer weights')
ax.set_ylabel('Hidden neuron');
Explanation: OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
How well does the model predict the data?
The model predicts the data satisfyingly well. The daily cycle is reflected in the model. Quiet night periods are well modelled, as are commuting rush hours. The first weekend of December (the 15th and 16th) is also correctly predicted.
Where does it fail?
The model performs significantly worse during the Christmas week and the surrounding weekends. Bike rentals were presumably lower because Christmas happened to fall in the middle of the week, which encouraged people to take the whole week off. The rush-hour peaks suggest that bike rentals are correlated with whether people are working on a particular day.
Why does it fail where it does?
The model fails because:
Its architecture is too simple to synthesize features like "day before or after a holiday" or "single working day between two non-working days". An additional hidden layer could increase the model's ability to capture such concepts; adding more neurons to the single existing hidden layer has no significant effect.
It does not have enough data to reason about public holidays like Christmas, especially since Christmas occurs only once in the training set. If the data spanned more years, that would be an improvement.
Additional network analysis
End of explanation |
Description:
def sghmc(Y, X, stogradU, M, eps, m, theta, C, V)
Step1: Correct coefficients
Step2: Our code - SGHMC
Step3: Our code - Gradient descent
Step5: Cliburn's code | Python Code:
# Load data
X = np.concatenate((np.ones((pima.shape[0],1)),pima[:,0:8]), axis=1)
Y = pima[:,8]
Xs = (X - np.mean(X, axis=0))/np.concatenate((np.ones(1),np.std(X[:,1:], axis=0)))
n, p = X.shape
M = np.identity(p)
### HMC version
def logistic(x):
return 1/(1+np.exp(-x))
def U(theta, Y, X):
return - (Y.T @ X @ theta - np.sum(np.log(1+np.exp(X @ theta))) - 0.5 * phi * np.sum(theta**2))
def gradU(theta, Y, X, nbatch):
'''A function that returns the full-data gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
n = X.shape[0]
Y_pred = logistic(X @ theta)
epsilon = (Y[:,np.newaxis] - Y_pred[:,np.newaxis])
grad = X.T @ epsilon - phi * theta[:, np.newaxis]
return -grad/n
def stogradU(theta, Y, X, nbatch):
'''A function that returns the stochastic gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
n, p = X.shape
# Sample minibatch
batch_id = np.random.choice(np.arange(n),nbatch,replace=False)
Y_pred = logistic(X[batch_id,:] @ theta[:,np.newaxis])
epsilon = (Y[batch_id,np.newaxis] - Y_pred)
grad = n/nbatch * X[batch_id,:].T @ epsilon - phi * theta[:, np.newaxis]
#return -grad/n
return -grad
def sghmc(Y, X, gradU, M, eps, m, theta, C, V):
n, p = X.shape
# Precompute
Minv = np.linalg.inv(M)
B = 0.5 * V * eps
D = 2*(C-B)*eps
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(p),M)[:,np.newaxis]
# Hamiltonian dynamics
#r = r - (eps/2)*gradU(theta, Y, X, nbatch)
for i in range(m):
theta = theta + (eps*Minv@r).ravel()
r = r - eps*stogradU(theta, Y, X, nbatch) - eps*C @ Minv @ r \
+ np.random.multivariate_normal(np.zeros(p),D)[:,np.newaxis]
#theta = theta + (eps*Minv@r).ravel()
#r = r - (eps/2)*gradU(theta, Y, X, nbatch)
return theta
def my_gd(Y, X, gradU, M, eps, m, theta, C, V):
# gradient descent
n = X.shape[0]
p = X.shape[1]
for i in range(m):
theta = theta - eps*gradU(theta, Y, X, nbatch).ravel()
return theta
Explanation: def sghmc(Y, X, stogradU, M, eps, m, theta, C, V):
n = X.shape[0]
p = X.shape[1]
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(M.shape[0]),M)[:,np.newaxis]
# Precompute
B = 0.5 * V * eps
D = 2*(C-B)*eps
Minv = np.linalg.inv(M)
# Hamiltonian dynamics
for i in range(m):
theta = theta + (eps*np.linalg.inv(M) @ r).ravel()
r = r - eps*stogradU(theta, Y, X, nbatch) - eps*C @ Minv @ r \
+ np.random.multivariate_normal(np.zeros(M.shape[0]),D)[:,np.newaxis]
return(theta)
def stogradU(theta, Y, X, nbatch):
'''A function that returns the stochastic gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
alpha=5
n = X.shape[0]
batch_id = np.random.choice(np.arange(n),nbatch,replace=False)
grad = -n/nbatch * X[batch_id,:].T @ (Y[batch_id][:,np.newaxis] - \
1/(1+np.exp(-X[batch_id,:] @ theta[:,np.newaxis]))) - theta[:,np.newaxis]/alpha
return grad
def logistic(x):
return 1/(1+np.exp(-x))
def stogradU(theta, Y, X, nbatch):
'''A function that returns the stochastic gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
alpha=5
n = X.shape[0]
batch_id = np.random.choice(np.arange(n),nbatch,replace=False)
Y_pred = logistic(X[batch_id,:] @ theta[:,np.newaxis])
epsilon = (Y[batch_id][:,np.newaxis] - Y_pred)
grad = -n/nbatch * X[batch_id,:].T @ epsilon - theta[:,np.newaxis]/alpha
return grad/n
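One way to gain confidence in an analytic gradient like gradU above is a finite-difference check. The sketch below is my own addition and uses tiny synthetic data (not the Pima set); it compares the unnormalized gradient of the negative log-posterior against central differences:

```python
import numpy as np

rng = np.random.RandomState(0)
n, p, phi = 50, 3, 0.1
X = rng.randn(n, p)
beta_true = np.array([1.0, -2.0, 0.5])
Y = (rng.rand(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

def U(theta):
    # Negative log-posterior (up to a constant), same form as the lab's U.
    return -(Y @ (X @ theta) - np.sum(np.log1p(np.exp(X @ theta)))
             - 0.5 * phi * np.sum(theta ** 2))

def gradU(theta):
    # Analytic gradient of U (unnormalized, i.e. without the /n factor).
    y_pred = 1 / (1 + np.exp(-X @ theta))
    return -(X.T @ (Y - y_pred) - phi * theta)

theta0 = rng.randn(p)
eps = 1e-6
numeric = np.array([(U(theta0 + eps * e) - U(theta0 - eps * e)) / (2 * eps)
                    for e in np.eye(p)])
assert np.allclose(numeric, gradU(theta0), atol=1e-4)
```

The same pattern applies to stogradU: averaged over many minibatch draws, its value should agree with the full gradient in expectation.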
End of explanation
from sklearn.linear_model import LogisticRegression
# Unscaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(X,Y)
beta_true_unscale = mod_logis.coef_.ravel()
beta_true_unscale
# Scaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(Xs,Y)
beta_true_scale = mod_logis.coef_.ravel()
beta_true_scale
X.shape, Y.shape
U(np.ones(p)*.1,Y,X)
gradU(np.ones(p)*.1, Y, X, 1)*n
stogradU(np.ones(p)*.1, Y, X, 768)*n
Explanation: Correct coefficients
End of explanation
# HMC - Unscaled
nsample = 100
m = 20
eps = .0001
#theta = np.zeros(p)
theta = beta_true_unscale.copy()
phi = 0.01
nbatch = 500
C = 0 * np.identity(p)
V = 0 * np.identity(p)
np.random.seed(2)
samples = np.zeros((nsample, p))
u = np.zeros(nsample)
for i in range(nsample):
theta = sghmc(Y, X, stogradU, M, eps, m, theta, C, V)
samples[i] = theta
u[i] = U(theta, Y, X)
np.mean(samples, axis=0) - beta_true_unscale
plt.plot((samples - beta_true_unscale)[:,3])
plt.show()
plt.plot(u)
plt.show()
# SGHMC - Scaled
nsample = 10000
m = 20
eps = .001
theta = np.zeros(p)
#theta = beta_true_scale.copy()
phi = 0.1
nbatch = 768
C = 0 * np.identity(p)
V = 0 * np.identity(p)
np.random.seed(2)
samples = np.zeros((nsample, p))
u = np.zeros(nsample)
for i in range(nsample):
theta = sghmc(Y, Xs, stogradU, M, eps, m, theta, C, V)
samples[i] = theta
u[i] = U(theta, Y, Xs)
np.mean(samples, axis=0) - beta_true_scale
plt.plot((samples - beta_true_scale)[:,1])
plt.show()
plt.plot(u)
plt.show()
# HMC - Scaled (no intercept)
nsample = 1000
m = 20
eps = .002
theta = np.zeros(p-1)
#theta = beta_true_scale.copy()[1:]
phi = 5
nbatch = 500
C = 1 * np.identity(p-1)
V = 0 * np.identity(p-1)
np.random.seed(2)
samples = np.zeros((nsample, p-1))
u = np.zeros(nsample)
for i in range(nsample):
theta = sghmc(Y, Xs[:,1:], stogradU, np.identity(p-1), eps, m, theta, C, V)
samples[i] = theta
u[i] = U(theta, Y, Xs[:,1:])
np.mean(samples, axis=0) - beta_true_scale[1:]
plt.plot((samples - beta_true_scale[1:])[:,0])
plt.show()
plt.plot(u)
plt.show()
Explanation: Our code - SGHMC
End of explanation
# Gradient descent - Unscaled
np.random.seed(2)
#res = my_gd(Y, X, gradU, M, .0001, 10000, np.zeros(p), C, V) # Starting at zero
#res = my_gd(Y, X, gradU, M, .0001, 10000, beta_true_unscale.copy(), C, V) # Starting at true values
res = my_gd(Y, X, gradU, M, .0001, 10000, beta_true_unscale.copy(), C, V) # Starting at true values
res - beta_true_unscale
# Gradient descent - Scaled
np.random.seed(2)
res = my_gd(Y, Xs, gradU, M, .1, 20000, np.zeros(p), C, V)
res - beta_true_scale
Explanation: Our code - Gradient descent
End of explanation
# Cliburn's gradient descent code
def gd(X, y, beta, alpha, niter):
Gradient descent algorithm.
n, p = X.shape
Xt = X.T
for i in range(niter):
y_pred = logistic(X @ beta)
epsilon = y - y_pred
grad = Xt @ epsilon / n
beta += alpha * grad
return beta
# Unscaled
#res = gd(X, Y.ravel(), np.zeros(p), alpha=.1, niter=2) # Starting at zero
res = gd(X, Y.ravel(), beta_true_unscale.copy(), alpha=.0001, niter=10000) # Starting at true coefficients
res - beta_true_unscale
# Scaled
res = gd(Xs, Y.ravel(), np.zeros(p), alpha=.1, niter=20000)
res - beta_true_scale
Explanation: Cliburn's code
End of explanation |
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
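The shortcut is easy to verify in a few lines of NumPy (a sketch of the idea, not code from this notebook): multiplying a one-hot vector by the weight matrix returns exactly the row you'd get by plain indexing.

```python
import numpy as np

rng = np.random.RandomState(42)
vocab_size, embed_dim = 10, 4            # tiny numbers for illustration
embedding = rng.randn(vocab_size, embed_dim)

word_idx = 7                             # the integer-encoded word, e.g. "heart"
one_hot = np.zeros(vocab_size)
one_hot[word_idx] = 1.0

via_matmul = one_hot @ embedding         # the expensive way
via_lookup = embedding[word_idx]         # the "embedding lookup" shortcut

assert np.allclose(via_matmul, via_lookup)
```

With a real vocabulary the matmul wastes work on thousands of zero rows, while the lookup is a single row read.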
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
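As a rough sketch of what a solution might look like (this is just my reading of the formula above, not the official solution, and the threshold is exaggerated so the toy data shows an effect):

```python
import random
from collections import Counter

random.seed(123)

# `int_words` would come from the previous cell; here is a tiny stand-in
# where word 0 is very frequent and word 2 is rare.
int_words = [0] * 500 + [1] * 50 + [2] * 5

threshold = 1e-2        # exaggerated t so this toy data shows an effect
counts = Counter(int_words)
total = len(int_words)
freqs = {w: c / total for w, c in counts.items()}
# A negative p_drop simply means the word is always kept.
p_drop = {w: 1 - (threshold / freqs[w]) ** 0.5 for w in counts}

train_words = [w for w in int_words if random.random() > p_drop[w]]

assert p_drop[0] > p_drop[2]     # frequent words are discarded more aggressively
assert len(train_words) < len(int_words)
```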
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way; it helps save memory.
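One plausible shape for these two pieces (a sketch consistent with the description above; the actual solution may differ in details such as how the random window size R is drawn) is:

```python
import random

random.seed(7)

def get_target(words, idx, window_size=5):
    """Words in a window of random size R <= window_size around position idx."""
    R = random.randint(1, window_size)               # the Mikolov trick above
    start = max(0, idx - R)
    return words[start:idx] + words[idx + 1:idx + R + 1]

def get_batches(words, batch_size, window_size=5):
    """Generator yielding one (input, target) row per pair, as described."""
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]           # keep only full batches
    for start in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[start:start + batch_size]
        for i, word in enumerate(batch):
            targets = get_target(batch, i, window_size)
            x.extend([word] * len(targets))          # one input copy per target
            y.extend(targets)
        yield x, y

x, y = next(get_batches(list(range(20)), batch_size=10, window_size=3))
assert len(x) == len(y)                              # one row per input-target pair
assert all(a != b and abs(a - b) <= 3 for a, b in zip(x, y))
```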
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
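To see why this is cheaper, here is a small NumPy sketch of the negative-sampling objective itself, the simplified variant from the second Mikolov paper rather than TensorFlow's exact sampled-softmax math. Only the output rows for the true target and a handful of sampled negative words enter the loss:

```python
import numpy as np

rng = np.random.RandomState(1)
vocab_size, embed_dim, n_neg = 1000, 16, 5

embed_in = rng.randn(vocab_size, embed_dim) * 0.1    # input-side embeddings
embed_out = rng.randn(vocab_size, embed_dim) * 0.1   # output-side weights

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

center, target = 3, 42
# Uniform negatives for illustration; word2vec samples from a unigram^0.75
# distribution, and we don't bother excluding the true target here.
negatives = rng.choice(vocab_size, size=n_neg)

v = embed_in[center]
loss = -np.log(sigmoid(embed_out[target] @ v))              # pull the true pair together
loss -= np.sum(np.log(sigmoid(-embed_out[negatives] @ v)))  # push negatives apart

# Only 1 + n_neg rows of embed_out enter the loss, not all vocab_size rows.
assert loss > 0
```

tf.nn.sampled_softmax_loss handles the sampling and the loss for us, so in the notebook we only need to supply the weight and bias variables.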
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
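Here is that shortcut as a tiny pure-Python illustration (toy sizes; a real vocabulary would have tens of thousands of rows): multiplying a one-hot row vector by the weight matrix yields exactly one row of the matrix, so indexing that row directly gives the same result without all the wasted multiplications.

```python
vocab_size, n_hidden = 5, 3
# A small "embedding" weight matrix with recognizable entries.
W = [[row * n_hidden + col for col in range(n_hidden)] for row in range(vocab_size)]

def onehot_times_matrix(idx, W):
    one_hot = [1 if i == idx else 0 for i in range(len(W))]
    # Ordinary vector-matrix product: every term but one is zero.
    return [sum(one_hot[i] * W[i][col] for i in range(len(W)))
            for col in range(len(W[0]))]

print(onehot_times_matrix(2, W))  # [6, 7, 8]
print(W[2])                       # the direct lookup returns the same row
```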
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
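The utils module itself isn't shown in this notebook. Here is a minimal sketch of what create_lookup_tables presumably does, built on collections.Counter (the real implementation may differ in details):

```python
from collections import Counter

def create_lookup_tables_sketch(words):
    # Most frequent word maps to 0, next most frequent to 1, and so on.
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_vocab = {i: word for i, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: i for i, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables_sketch(['the', 'cat', 'the', 'dog', 'the', 'cat'])
print(v2i)  # 'the' (3 occurrences) gets 0, 'cat' (2) gets 1, 'dog' (1) gets 2
```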
## Your code here
# My solution was not using Counter thus was extremely slow. here it is:
# def freq(word,corpus):
# return corpus.count(word)/len(corpus)
#
# def prob(word,freqs,th):
# return 1-np.sqrt(th/freqs[word])
#
# freqs = {word:freq(word,int_words) for word in int_words}
# p_drop = {word: prob(word,freqs,th) for word in int_words}
# train_words = {w for w in int_words if p_drop[w]>np.random.rand()}
from collections import Counter
import random
word_counts=Counter(int_words) # dictionary like with k:v=int_words:count
total_count = len(int_words)
freqs={word: count/total_count for word,count in word_counts.items()}
th = 1e-5  # subsampling threshold t; it is never defined above, so we use the value suggested by Mikolov et al.
p_drop = {word: 1 - np.sqrt(th/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if p_drop[word]<random.random()]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than about deep learning specifically. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
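As a quick numeric check of the formula (illustrative frequencies, not computed from this corpus): with the common choice t = 1e-5, a very frequent word is discarded almost every time, while a word at the threshold is always kept.

```python
import math

def p_drop(freq, t=1e-5):
    # Probability of discarding a word with relative frequency `freq`.
    return 1 - math.sqrt(t / freq)

print(round(p_drop(0.05), 3))  # a frequent word: dropped ~98.6% of the time
print(p_drop(1e-5))            # a word exactly at the threshold: never dropped
```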
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
# get random number in the range (1,window_size) - this will be the number of words we'll take
R=random.randint(1,window_size)
# what about wrapping around? do we want to allow it?
start = max(idx-R,0)
stop = min(idx+R+1,len(words))
return words[start:idx]+words[idx+1:stop]
# note that the reference solution used np.random.randint
# note that the reference solution returned list(set(words[start:idx]+words[idx+1:stop])). not clear why the set() is needed...
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
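To see the one-row-per-pair layout concretely, here is the same pairing logic with a fixed window instead of a random one (a simplification of get_target, so the output is deterministic):

```python
def get_target_fixed(words, idx, r=2):
    # Like get_target above, but with a fixed window size instead of a random one.
    start = max(idx - r, 0)
    stop = min(idx + r + 1, len(words))
    return words[start:idx] + words[idx + 1:stop]

batch = [10, 20, 30, 40, 50]
x, y = [], []
for ii in range(len(batch)):
    targets = get_target_fixed(batch, ii)
    y.extend(targets)
    x.extend([batch[ii]] * len(targets))  # repeat the input word once per target
print(list(zip(x, y))[:4])  # [(10, 20), (10, 30), (20, 10), (20, 30)]
```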
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(dtype=tf.int32,shape=[None], name='inputs')
labels = tf.placeholder(dtype=tf.int32,shape=[None,None],name='labels') # ??? To make things work later, you'll need to set the second dimension of labels to None or 1.
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab,n_embedding),-1,1))
embed = tf.nn.embedding_lookup(embedding,inputs)
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab,n_embedding),stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w,softmax_b,labels,embed,n_sampled,n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
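Some rough arithmetic makes the saving concrete (the vocabulary size below is illustrative, not taken from this dataset): with the full softmax every example touches all output-layer weights and biases, while sampled softmax touches only the true label plus n_sampled negatives.

```python
n_vocab, n_embedding, n_sampled = 60_000, 200, 100  # n_vocab is an assumed round number

full_updates = n_vocab * (n_embedding + 1)             # all rows of weights + biases
sampled_updates = (n_sampled + 1) * (n_embedding + 1)  # true label + sampled negatives

print(full_updates, sampled_updates, full_updates // sampled_updates)
# 12060000 20301 594 -> roughly 600x fewer output-layer parameters touched per example
```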
with train_graph.as_default(): # Question : why do we need this context ?
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
8,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If you have not already read it, you may want to start with the first tutorial
Step1: Here we will again load some pre-generated data meant to represent well-sampled, precise radial velocity observations of a single luminous source with a single companion (we again downsample the data set here just for demonstration)
Step2: We will use the default prior, but feel free to play around with these values
Step3: The data above look fairly constraining
Step4: The sample that was returned by The Joker does look like it is a reasonable fit to the RV data, but to fully explore the posterior pdf we will use standard MCMC through pymc3. Here we will use the NUTS sampler, but you could also experiment with other backends (e.g., Metropolis-Hastings, or even emcee by following this blog post)
Step5: If you get warnings from running the sampler above, they usually indicate that we should run the sampler for many more steps to tune the sampler and for our main run, but let's ignore that for now. With the MCMC traces in hand, we can summarize the properties of the chains using pymc3.summary
Step6: To convert the trace into a JokerSamples instance, we can use the TheJoker.trace_to_samples() method. Note here that the sign of K is arbitrary, so to compare to the true value, we also call wrap_K() to store only the absolute value of K (which also increases omega by π, to stay consistent)
Step7: We can now compare the samples we got from MCMC to the true orbital parameters used to generate this data | Python Code:
import astropy.coordinates as coord
import astropy.table as at
from astropy.time import Time
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import pymc3_ext as pmx
import exoplanet as xo
import arviz as az
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
Explanation: If you have not already read it, you may want to start with the first tutorial: Getting started with The Joker.
Continue generating samples with standard MCMC
When many prior samples are used with The Joker, and the sampler returns one sample, or the samples returned are within the same mode of the posterior, the posterior pdf is likely unimodal. In these cases, we can use standard MCMC methods to generate posterior samples, which will typically be much more efficient than The Joker itself. In this example, we will use pymc3 to "continue" sampling for data that are very constraining.
First, some imports we will need later:
End of explanation
data_tbl = at.QTable.read('data.ecsv')
sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=18, replace=False)] # downsample data
data = tj.RVData.guess_from_table(sub_tbl, t_ref=data_tbl.meta['t_ref'])
_ = data.plot()
Explanation: Here we will again load some pre-generated data meant to represent well-sampled, precise radial velocity observations of a single luminous source with a single companion (we again downsample the data set here just for demonstration):
End of explanation
prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s)
Explanation: We will use the default prior, but feel free to play around with these values:
End of explanation
prior_samples = prior.sample(size=250_000,
random_state=rnd)
joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=256)
joker_samples
joker_samples.tbl
_ = tj.plot_rv_curves(joker_samples, data=data)
Explanation: The data above look fairly constraining: it would be hard to draw many distinct orbital solutions through the RV data plotted above. In cases like this, we will often only get back 1 or a few samples from The Joker even if we use a huge number of prior samples. Since we are only going to use the samples from The Joker to initialize standard MCMC, we will only use a moderate number of prior samples:
End of explanation
with prior.model:
mcmc_init = joker.setup_mcmc(data, joker_samples)
trace = pmx.sample(tune=500, draws=500,
start=mcmc_init,
cores=1, chains=2)
Explanation: The sample that was returned by The Joker does look like it is a reasonable fit to the RV data, but to fully explore the posterior pdf we will use standard MCMC through pymc3. Here we will use the NUTS sampler, but you could also experiment with other backends (e.g., Metropolis-Hastings, or even emcee by following this blog post):
End of explanation
az.summary(trace, var_names=prior.par_names)
Explanation: If you get warnings from running the sampler above, they usually indicate that we should run the sampler for many more steps to tune the sampler and for our main run, but let's ignore that for now. With the MCMC traces in hand, we can summarize the properties of the chains using pymc3.summary:
End of explanation
mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()
mcmc_samples
Explanation: To convert the trace into a JokerSamples instance, we can use the TheJoker.trace_to_samples() method. Note here that the sign of K is arbitrary, so to compare to the true value, we also call wrap_K() to store only the absolute value of K (which also increases omega by π, to stay consistent):
End of explanation
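A quick numeric check of that sign degeneracy, using the standard Keplerian radial-velocity form v ∝ K[cos(ω + ν) + e cos ω] (plain math, independent of thejoker itself): flipping the sign of K while adding π to ω leaves the curve unchanged.

```python
import math

def rv_term(K, omega, nu, e=0.3):
    # One term of the Keplerian RV curve: K * (cos(omega + nu) + e * cos(omega))
    return K * (math.cos(omega + nu) + e * math.cos(omega))

nu = 1.234  # an arbitrary true anomaly
a = rv_term(+5.0, 0.7, nu)
b = rv_term(-5.0, 0.7 + math.pi, nu)
print(abs(a - b) < 1e-9)  # True: the two parameter sets trace identical curves
```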
import pickle
with open('true-orbit.pkl', 'rb') as f:
truth = pickle.load(f)
# make sure the angles are wrapped the same way
if np.median(mcmc_samples['omega']) < 0:
truth['omega'] = coord.Angle(truth['omega']).wrap_at(np.pi*u.radian)
if np.median(mcmc_samples['M0']) < 0:
truth['M0'] = coord.Angle(truth['M0']).wrap_at(np.pi*u.radian)
df = mcmc_samples.tbl.to_pandas()
truths = []
colnames = []
for name in df.columns:
if name in truth:
colnames.append(name)
truths.append(truth[name].value)
_ = corner.corner(df[colnames], truths=truths)
Explanation: We can now compare the samples we got from MCMC to the true orbital parameters used to generate this data:
End of explanation |
8,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
* Visualizing genetic similarity with Lightning + GraphX *
Setup lightning
Step1: Load structure similarity data
Public data from http://www.brain-map.org/
Step2: Show the network (unlabeled)
Step3: Show the network colored by degree
Step4: Show the network colored by connected components | Python Code:
%libraryDependencies += "org.viz.lightning" %% "lightning-scala" % "0.1.6"
%update
import org.viz.lightning._
import org.apache.spark.graphx._
val lgn = Lightning(host="https://lightning-spark-summit.herokuapp.com" )
lgn.enableNotebook()
Explanation: * Visualizing genetic similarity with Lightning + GraphX *
Setup lightning
End of explanation
val source = "/Users/mathisonian/projects/spark-summit/notebooks/data/allen-connectivity.txt"
val g = GraphLoader.edgeListFile(sc, source)
Explanation: Load structure similarity data
Public data from http://www.brain-map.org/
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
lgn.force(links)
Explanation: Show the network (unlabeled)
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
val degrees = g.degrees.sortBy(_._1).collect().map(x => Math.log(x._2))
lgn.force(links, value=degrees, colormap="Lightning")
Explanation: Show the network colored by degree
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
val connectedComponents = g.connectedComponents().vertices.sortBy(_._1).map(_._2.toInt).collect()
lgn.force(links, label=connectedComponents)
Explanation: Show the network colored by connected components
End of explanation |
8,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wk1.0
Warm-up
Step1: 3. Extend your program to n objects. How many different combinations do I have for 5 objects? How about 15? What is the max number of objects I could calculate for if I was storing the result in a 32 bit integer? What happens if the combinations exceed 32 bits?
Step3: 4. What will the following code yield? Was it what you expected? What's going on here?
.1 + .1 + .1 == .3
5. Try typing in the command below and read this page
format(.1, '.100g')
Data structure of the day
Step7: Strategy 1 | Python Code:
count = 1
for elem in range(1, 3 + 1):
count *= elem
print(count)
Explanation: Wk1.0
Warm-up: I got 32767 problems and overflow is one of them.
1. Swap the values of two variables, a and b without using a temporary variable.
2. Suppose I had six different sodas. In how many different combinations could I drink the sodas? Write a program that calculates the number of unique combinations for 6 objects. Assume that I finish a whole soda before moving on to another one.
End of explanation
from math import factorial as f
f(3)
Explanation: 3. Extend your program to n objects. How many different combinations do I have for 5 objects? How about 15? What is the max number of objects I could calculate for if I was storing the result in a 32 bit integer? What happens if the combinations exceed 32 bits?
End of explanation
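To answer the 32-bit question concretely: Python integers never overflow, so we can search explicitly for the largest n whose factorial still fits in a signed 32-bit int.

```python
from math import factorial

INT32_MAX = 2**31 - 1  # 2147483647

n = 1
while factorial(n + 1) <= INT32_MAX:
    n += 1
print(n, factorial(n))  # 12 479001600 -- 13! = 6227020800 no longer fits
```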
def n_max():
inpt = eval(input("Please enter some values: "))
maximum = max_val(inpt)
print("The largest value is", maximum)
def max_val(ints):
"""
Input: collection of ints.
Returns: maximum of the collection
int - the max integer.
"""
max = ints[0]
for x in ints:
if x > max:
max = x
return max
assert max_val([1, 2, 3]) == 3
assert max_val([1, 1, 1]) == 1
assert max_val([1, 2, 2]) == 2
n_max()
inpt = eval(input("Please enter three values: "))
list(inpt)
Explanation: 4. What will the following code yield? Was it what you expected? What's going on here?
.1 + .1 + .1 == .3
5. Try typing in the command below and read this page
format(.1, '.100g')
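Running the comparison shows what's going on: 0.1 has no exact binary representation, so a tiny error accumulates across the additions, and math.isclose is the usual way to compare:

```python
import math

total = .1 + .1 + .1
print(total == .3)              # False: accumulated representation error
print(total)                    # 0.30000000000000004
print(math.isclose(total, .3))  # True: compare with a tolerance instead
```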
Data structure of the day: tuples
Switching variables: a second look
How do we make a single tuple?
slicing, indexing
mutability
tuple packing and unpacking
using tuples in loops
using tuples to unpack enumerate(lst)
tuples as return values
comparing tuples
(0, 1, 2000000) < (0, 3, 4)
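A few of the bullets above in runnable form: the temporary-free swap from the warm-up, the single-element tuple, and element-by-element comparison.

```python
a, b = 1, 2
a, b = b, a                  # tuple packing/unpacking: swap without a temp variable
print(a, b)                  # 2 1

single = (42,)               # the comma, not the parentheses, makes the tuple
print(type(single).__name__, len(single))  # tuple 1

print((0, 1, 2000000) < (0, 3, 4))  # True: compared left to right, 1 < 3 decides it
```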
Design pattern: DSU
Decorate
Sort
Undecorate
Ex.
```
txt = 'but soft what light in yonder window breaks'
words = txt.split()
t = list()
for word in words:
t.append((len(word), word))
t.sort(reverse=True)
res = list()
for length, word in t:
res.append(word)
print(res)
```
Why would words.sort() not work?
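The same decorate-sort-undecorate idea is built into sorted via the key argument: it computes each key once, sorts on the keys, and hands back the original items.

```python
txt = 'but soft what light in yonder window breaks'
words = txt.split()
res = sorted(words, key=len, reverse=True)
print(res)
# ['yonder', 'window', 'breaks', 'light', 'soft', 'what', 'but', 'in']
```

Note one subtle difference from the tuple version above: sorted with a key is stable, so equal-length words keep their original order, while sorting (len, word) tuples breaks ties alphabetically.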
We can use tuples as a way to store related data
addr = 'monty@python.org'
uname, domain = addr.split('@')
Advanced: tuples as argument parameters
t = (a, b, c)
func(*t)
Tuples: exercises
Exercise 1
Revise a previous program as follows: Read and parse the "From" lines and pull out the addresses from the line. Count the number of messages from each person using a dictionary.
After all the data has been read print the person with the most commits by creating a list of (count, email) tuples from the dictionary and then sorting the list in reverse order and print out the person who has the most commits.
```
Sample Line:
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
Enter a file name: mbox-short.txt
cwen@iupui.edu 5
Enter a file name: mbox.txt
zqian@umich.edu 195
```
Exercise 2
This program counts the distribution of the hour of the day for each of the messages. You can pull the hour from the "From" line by finding the time string and then splitting that string into parts using the colon character. Once you have accumulated the counts for each hour, print out the counts, one per line, sorted by hour as shown below.
Sample Execution:
python timeofday.py
Enter a file name: mbox-short.txt
04 3
06 1
07 1
09 2
10 3
11 6
14 1
15 2
16 4
17 2
18 1
19 1
Exercise 3
Write a program that reads a file and prints the letters in decreasing order of frequency. Your program should convert all the input to lower case and only count the letters a-z. Your program should not count spaces, digits, punctuation or anything other than the letters a-z. Find text samples from several different languages and see how letter frequency varies between languages. Compare your results with the tables at wikipedia.org/wiki/Letter_frequencies.
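One possible starting point for the letter-counting part of Exercise 3 (a sketch on an inline ASCII string rather than file input, so it is easy to test):

```python
from collections import Counter

sample = "Hello, World! 123"
# Lowercase, then keep letters only (ASCII input assumed, per the exercise).
letters = Counter(ch for ch in sample.lower() if ch.isalpha())
for letter, count in letters.most_common():
    print(letter, count)  # 'l' comes first with a count of 3
```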
Afternoon warm-up
Write a function that takes three numbers $x_1, x_2, x_3$ from a user and returns the max value. Don't use the built in max function. Would your function work on more than three values?
End of explanation
assert compress('AAAADDBBBBBCCEAA') == 'A4D2B5C2E1A2'
# %load ../scripts/compress/compressor.py
def groupby_char(lst):
"""
Returns a list of strings containing identical characters.
Takes a list of characters produced by running split on a string.
Groups runs (in order sequences) of identical characters into string elements in the list.
Parameters
----------
Input:
lst: list
A list of single character strings.
Output:
grouped: list
A list of strings containing grouped characters.
"""
new_lst = []
count = 1
for i in range(len(lst) - 1): # we range to the second to last index since we're checking if lst[i] == lst[i + 1].
if lst[i] == lst[i + 1]:
count += 1
else:
new_lst.append([lst[i],count]) # Create a lst of lists. Each list contains a character and the count of adjacent identical characters.
count = 1
new_lst.append((lst[-1],count)) # Append the last character (the loop above stops at the second-to-last index).
grouped = [char*count for [char, count] in new_lst]
return grouped
def compress_group(string):
"""
Returns a compressed two character string containing a character and a number.
Takes in a string of identical characters and returns the compressed string
consisting of the character and the length of the original string.
Example
-------
"AAA"-->"A3"
Parameters
----------
Input:
string: str
A string of identical characters.
Output:
compressed_str: str
A compressed string of length two containing a character and a number.
"""
return str(string[0]) + str(len(string))
def compress(string):
"""
Returns a compressed representation of a string.
Compresses the string by mapping each run of identical characters to a
single character and a count.
Ex.
--
compress('AAABBCDDD')--> 'A3B2C1D3'.
Only compresses string if the compression is shorter than the original string.
Ex.
--
compress('A')--> 'A' # not 'A1'.
Parameters
----------
Input:
string: str
The string to compress
Output:
compressed: str
The compressed representation of the string.
"""
try:
split_str = [char for char in string] # Create list of single characters.
grouped = groupby_char(split_str) # Group characters if characters are identical.
compressed = ''.join( # Compress each element of the grouped list and join to a string.
[compress_group(elem) for elem in grouped])
if len(compressed) < len(string): # Only return compressed if compressed is actually shorter.
return compressed
else:
return string
except IndexError: # If our input string is empty, return an empty string.
return ""
except TypeError: # If we get something that's not compressible (including NoneType) return None.
return None
# %load ../scripts/compress/compress_tests.py
# This will fail to run because in wrong directory
from compress.compressor import *

def compress_test():
    assert compress('AAABBCDDD') == 'A3B2C1D3'
    assert compress('A') == 'A'
    assert compress('') == ''
    assert compress('AABBCC') == 'AABBCC'  # compressing doesn't shorten string, so just return string.
    assert compress(None) == None

def groupby_char_test():
    assert groupby_char(["A", "A", "A", "B", "B"]) == ["AAA", "BB"]

def compress_group_test():
    assert compress_group("AAA") == "A3"
    assert compress_group("A") == "A1"
Explanation: Strategy 1: Compare each to all (brute force)
Strategy 2: Decision Tree
Strategy 3: Sequential processing
Strategy 4: Use python
The development process
A Problem Solving Algorithm
See Polya's How to Solve it
1. Understand the problem
2. Brainstorm on paper
3. Plan out program
4. Refine design
5. Create function
6. Create function docstring
7. Create function tests
8. Check that tests fail
9. If function is trivial, then solve it (i.e. get function tests to pass). Else, create sub-function (aka divide and conquer) and repeat steps 5-8.
Example: Compress
End of explanation |
8,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6</div>
Testing The Google Maps Subclass
This notebook was created to test objects associated with extracting information into a Dataframe using the Google Maps API. Initially, this was part of an effort to operationalize the interesting bits of code in a messy procedure I did for some research. The original use case was to extract just the address and add it to tables of information with latitude and longitude in them. This notebook may grow to test more related objects as they are developed and/or expansions of the original code.
Enrich or Change Larger Dataframe Section by Section
The purpose of the <font color=blue><b>DFBuilder</b></font> object is to allow scanning of a larger dataframe, a small number of rows at a time. It then allows code to be customized to make changes and build up a new dataframe from the results. The operation is in
a standard loop by design. The original use case was to add a field with data accessed from an API off the web, and time delays were necessary (as well as other logic) to prevent (or at least reduce the risk of) server timeouts during operation.
Scanning through the source a few lines at a time, performing the operation and adding back out to the target DF
creates a "caching effect" where data is saved along the way so in the event of a server time-out all is not lost. The resulting DF can then be saved out to a file and a rerun of <font color=blue><b>buildOutDF()</b></font> should make it possible to pick up where you left off and add in more data (instead of losing everything and having to begin again).
The abstract class sets up the core logic and subclasses add in functions to modify the data in different ways and potentially using different APIs. This notebook only tests the subclass designed for the Google Maps API.
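The batch-caching idea the class implements can be sketched in a few lines. This is a minimal illustration, not the actual class API; the names `build_in_batches` and `transform` are made up for the example:

```python
import time

def build_in_batches(rows, transform, batch_size=5, delay=0.0):
    """Process rows a batch at a time, caching results as we go.
    (The real class keeps the output as an instance attribute, so partial
    results survive an interrupted run.)"""
    out = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        out.extend(transform(r) for r in batch)  # e.g. one web-API call per row
        time.sleep(delay)                        # throttle to reduce server timeouts
    return out

print(build_in_batches(list(range(7)), lambda r: r * 2, batch_size=3))
# [0, 2, 4, 6, 8, 10, 12]
```

Each pass through the loop commits a small batch before moving on, which is what gives the "caching effect" described above.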
Libraries Needed
Import statements included in this notebook are for the main abstract object and a test object.
Step1: Test Data
Input Data Set up Here
Step2: Code Testing
The abstract class which follows is intended to be the "work horse" of this code. The intent is that it gets the developer to the point where all they need to think about is what their final subclass will do to enrich the data. The parent class sets up a loop that can extract from a larger input DF, a small number of rows to be operated on in a temp DF and then be added to an outputDF. In the event of something interrupting the process (a common event when dealing with web APIs), modified rows created before the incident are waiting in output DF and can be extracted. Then code can be restarted or continued to allow building up the rest of the Dataframe without losing previous work or having to go all the way back to the beginning.
Step3: Testing of The Subclass
A different subclass was created in another notebook to test most if not all of the non-web related logic of the Abstract class. This means testing in this notebook can focus on the code that produces final results and that interacts with the Google Maps API.
This section shows how the code can build up outDF, adding addresses obtained from the Google Maps API to the latitude and longitude provided in the input data. Tests show how errors are handled, both as text of the form "_&lt;errorTxt&gt;_" in the Address field, or as empty strings if you set rtn_null to True. Tests also show how data can be added to outDF by re-running the build function. This allows adding additional data to outDF, or adding in data that was missed due to server timeout errors or other interruptions to the web process.
Test Main Logic with Error Handling Exposed
These tests were designed to show the error handling in action. For the sake of brevity, earlier testing was deleted to just show later tests in which errors are expected (due to exceeding daily license allotment from Google).
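The error-text convention itself is easy to see in isolation. The sketch below mirrors the encoding used by the error handler (underscores around an uppercased exception message); the sample addresses are made up:

```python
def error_marker(exc):
    # Mirrors the notebook's encoding: "_<MESSAGE_UPPERCASED_WITH_UNDERSCORES>_"
    return "_" + str(exc).upper().replace(' ', '_').replace(':', '') + "_"

addresses = ["1 Main St, Anytown", error_marker(Exception("Service timed out"))]
# Real addresses never start with "_", so the rows to re-run are easy to filter:
retry = [i for i, a in enumerate(addresses) if a.startswith("_")]
print(retry)  # [1]
```

This is why a simple query for addresses starting with "_" is enough to find the records that need another pass.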
Step4: Test Main Logic - Fresh Alotment of Licenses (No Errors Expected)
Note that an error could still occur due to a server timeout, a server being down (on the Google site) or some other unexpected event. This test was set up to maximize the likelihood of showing what output can look like when no errors occur. Since at least one error seems to occur in batches of 900 or more, data is split in half with the second half added in after the first for the test set.
Step5: Experiment in Cleaning Up Results
This test was run with a fresh allotment of Google license records for the day. It should have completed without error, but the server went down, causing 5 error records instead. This test shows what to do in this scenario.
Step6: Testing of Enhanced Print() and set_ Functions
Now has logic to handle output in a different way. We get to see location.raw if possible, and it knows what to do if location is None or an empty string. Also testing the build parameters for the first time and the new set_ functions.
Step7: Test of Other Internal Functions
These functions can be used in testing or to just get back a single value. These examples might prove useful.
Step8: Documentation Tests | Python Code:
# general libraries
import pandas as pd
## required for Google Maps API code
import os
## for larger data and/or make many requests in one day - get Google API key and use these lines:
# os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_Key"
## for better security (PROD environments) - install key to server and use just this line to load it:
# os.environ.get('GOOGLE_API_KEY')
# set up geocode
from geopy.geocoders import Nominatim
geolocator = Nominatim()
from geopy.exc import GeocoderTimedOut
import time
# note: for now could do this ... used time because it is already in use
Explanation: <div align="right">Python 3.6</div>
Testing The Google Maps Subclass
This notebook was created to test objects associated with extracting information into a Dataframe using the Google Maps API. Initially, this was part of an effort to operationalize the interesting bits of code in a messy procedure I did for some research. The original use case was to extract just the address and add it to tables of information with latitude and longitude in them. This notebook may grow to test more related objects as they are developed and/or expansions of the original code.
Enrich or Change Larger Dataframe Section by Section
The purpose of the <font color=blue><b>DFBuilder</b></font> object is to allow scanning of a larger dataframe, a small number of rows at a time. It then allows code to be customized to make changes and build up a new dataframe from the results. The operation is in
a standard loop by design. The original use case was to add a field with data accessed from an API off the web, and time delays were necessary (as well as other logic) to prevent (or at least reduce the risk of) server timeouts during operation.
Scanning through the source a few lines at a time, performing the operation and adding back out to the target DF
creates a "caching effect" where data is saved along the way so in the event of a server time-out all is not lost. The resulting DF can then be saved out to a file and a rerun of <font color=blue><b>buildOutDF()</b></font> should make it possible to pick up where you left off and add in more data (instead of losing everything and having to begin again).
The abstract class sets up the core logic and subclasses add in functions to modify the data in different ways and potentially using different APIs. This notebook only tests the subclass designed for the Google Maps API.
Libraries Needed
Import statements included in this notebook are for the main abstract object and a test object.
End of explanation
## Test code on a reasonably small DF
tst_lat_lon_df = pd.read_csv("testset_unique_lat_and_lon_vals.csv", index_col=0)
tst_lat_lon_df.describe()
tst_lat_lon_df.tail()
Explanation: Test Data
Input Data Set up Here
End of explanation
# note: gmtime() produced results in Greenwich Mean Time
# localtime() seems to get the local time from the computer (in my case EST)
from time import localtime, strftime

def getNow():
    return strftime("%Y-%m-%d %H:%M:%S", localtime())

getNow()
from abc import ABCMeta, abstractmethod
import pandas as pd
class DFBuilder(object, metaclass=ABCMeta):  # sets up abstract class
    '''DataFrame Builder abstract class. Sets up logic to be inherited by objects that need to loop over a DataFrame
    and cache the results. Original use case involves making API calls to the web which can get interrupted by
    errors and server timeouts. This object stores all the logic to build up and save a DataFrame a small number of
    records at a time. Then a subclass can define an abstract method in the base class as to what we want to do to
    the input data. Original use case added in content extracted from the web to a new column. But subclasses can
    be built to do more. Initialization arguments: endRw, time_delay. endRw = number of records to cache at a time
    when building outDF. time_delay is number of seconds delay between each cycle of the loop that builds outDF.'''

    def __init__(self, endRw, time_delay):  # abstract classes can be subclassed
        self.endRow = endRw                 # but cannot be instantiated
        self.delay = time_delay
        self.tmpDF = pd.DataFrame()  # temp DF will be endRow rows in length
        self.outDF = pd.DataFrame()  # final DF built in sets of endRow rows so all is not lost in a failure
        self.lastIndex = None
        self.statusMsgGrouping = 100

    def __str__(self):
        return ("Global Settings for this object: \n" +
                "endRow: " + str(self.endRow) + "\n" +
                "delay: " + str(self.delay) + "\n" +
                "statusMsgGrouping: " + str(self.statusMsgGrouping) + "\n" +
                "Length of outDF: " + str(len(self.outDF)) + "\n" +
                "nextIndex: " + str(self.lastIndex))
        # if continuing build process with last added table - index of next rec.

    @abstractmethod  # abstract method definition in Python
    def _modifyTempDF_(self): pass  # This method will operate on tmpDF inside the loop

    def set_statusMsgGrouping(self, newValue):
        '''Change number of records used to determine when to provide output messages during buildOutDF().
        Default is 100 records. newValue=x sets this to a new number. Note that if endRow is not a factor of
        statusMsgGrouping, output may appear at unexpected intervals. endRow sets the number of rows to cache to
        outDF in each iteration of the build loop.'''
        self.statusMsgGrouping = newValue
        print(self)

    def set_timeDelay(self, newValue):
        '''Change number of seconds in time delay between requests while creating outDF().
        Default is 1 second. newValue=x sets this to a new number.'''
        self.delay = newValue
        print(self)

    def set_endRow_OutDf_caching(self, newValue):
        '''Change value of endRow which controls how many rows to cache at a time within buildOutDF().
        Default is 5. If something goes wrong and you have to restart the process, this value also represents
        the maximum number of requests you will lose. The rest will have already been added to outDF.
        newValue=x sets this to a new number.'''
        self.endRow = newValue
        print(self)

    def buildOutDF(self, inputDF):
        '''Scans inputDF, self.endRow rows (default of 5) at a time. It then calls in logic
        from _modifyTempDF_() to make changes to each subset of rows and appends the tiny tmpDF onto outDF. When the
        subclass is using a web API, self.delay tells it how much time to delay each iteration of the loop. Should
        this function fail in the middle, outDF will have all work up to the failure.
        This can be saved out to a DF or csv. The function can be run again on a subset of the data
        (the records not encountered yet before the failure).'''
        lenDF = len(inputDF)
        print("Timestamp: ", getNow())
        print("Processing inputDF with length of: ", lenDF)
        print("Please wait ...")
        endIndx = 0
        i = 0
        while i < lenDF:
            # print("i: ", i)
            endIndx = i + self.endRow
            if endIndx > lenDF:
                endIndx = lenDF
            # print("Range to use: ", i, ":", endIndx)
            if i % self.statusMsgGrouping == 0:
                print(getNow(), "Now processing index: ", i)
            self.tmpDF = inputDF[i:endIndx].copy(deep=True)
            self._modifyTempDF_()
            time.sleep(self.delay)
            self.outDF = self.outDF.append(self.tmpDF)
            self.lastIndex = endIndx
            i = endIndx
            # print("i at end of loop: ", i)
        self.reindex_OutDF()
        print("Process complete. %d records added to outDF." % (self.lastIndex))
        print("Timestamp: ", getNow())

    def reindex_OutDF(self):
        '''Reindex outDF using the same settings that are used internally for the index during its creation.
        This is like doing: outDF.reset_index(drop=True, inplace=True).'''
        self.outDF.reset_index(drop=True, inplace=True)
class GMapsLoc_DFBuilder(DFBuilder):
    '''This class inherits DFBuilder.buildOutDF(), which makes use of the data extraction and modification functions in
    this subclass. endRw sets number of rows to process at a time while building outDF (default=5). time_delay
    can set the time delay between loop iterations to help prevent licensing issues and related server timeouts.
    Default is 1 second. Initialization arguments: endRw, time_delay, return_null.
    * endRw controls grouping: process endRow rows at a time and add to outDF (default is 5).
    * time_delay has default of 1 second and sets how much time to wait each request while building outDF.
    * return_null, if False, records error text formatted as "_<errTxt>_" for records that failed to process.
      Set to True to have it return blank records when errors occur instead (default is False).'''

    def __init__(self, endRw=5, time_delay=1, return_null=False):
        super().__init__(endRw, time_delay)
        self.rtn_null = return_null
        self.timeout = 10
        self.location = ""  # stores last location accessed using getGeoAddr

    def __str__(self):
        outStr = (super().__str__() + "\n" +
                  "rtn_null: " + str(self.rtn_null) + "\n" +
                  "timeout: " + str(self.timeout) + "\n")
        if isinstance(self.location, (type(None), str)):
            outStr = outStr + "location (last obtained): " + str(self.location)
        else:
            outStr = outStr + "location (last obtained): " + str(self.location.raw)
        return outStr

    def set_ServerTimeout(self, newValue):
        '''Change number of seconds for the server timeout setting used during web requests.
        Default is 10 seconds. newValue=x sets this to a new number.'''
        self.timeout = newValue
        print(self)

    def testConnection(self, lat=48.8588443, lon=2.2943506):
        '''Test getGeoAddr() function to prove the connection to Google Maps is working. Use this ahead of
        performing much larger operations with Google Maps.'''
        return self.getGeoAddr(lat, lon)

    def getGeoAddr(self, lt, lng, timeout=10, test=False, rtn_null=False):
        '''Make call to Google Maps API to return back just the address from the json location record. Errors
        should result in text values to help identify why an address was not returned. This can be turned off and
        records that failed can bring back just an empty field by setting rtn_null to True. timeout = server timeout
        and has a default that worked well during testing.'''
        try:
            self.location = geolocator.reverse(str(lt) + ", " + str(lng), timeout=timeout)
            if test == True:
                print("===============================")
                print("Address:\n")
                print(self.location)
                print("===============================")
                rtnVal = self.location
            else:
                rtnVal = self.location.address
        except GeocoderTimedOut as gEoTo:
            print(type(gEoTo))
            print(gEoTo)
            self.location = None
            rtnVal = "_" + str(gEoTo).upper().replace(' ', '_').replace(':', '') + "_"
            ## old error text: "_TIME_OUT_ERROR_ENCOUNTERED_"
        except Exception as eee:
            print(type(eee))
            print(eee)
            self.location = None
            rtnVal = "_" + str(eee).upper().replace(' ', '_').replace(':', '') + "_"
        finally:
            # time_delay is not included here and should be incorporated into
            # the loop that calls this function if desirable
            if rtn_null == True and self.location is None:
                return ""
            else:
                return rtnVal

    def _modifyTempDF_(self, test=False):
        '''Add Address field to tmpDF based on lat, lon (latitude/longitude field values in inputDF).'''
        self.tmpDF["Address"] = self.tmpDF.apply(lambda x: self.getGeoAddr(lt=x.lat, lng=x.lon,
                                                 timeout=self.timeout, test=False, rtn_null=self.rtn_null), axis=1)
Explanation: Code Testing
The abstract class which follows is intended to be the "work horse" of this code. The intent is that it gets the developer to the point where all they need to think about is what their final subclass will do to enrich the data. The parent class sets up a loop that can extract from a larger input DF, a small number of rows to be operated on in a temp DF and then be added to an outputDF. In the event of something interrupting the process (a common event when dealing with web APIs), modified rows created before the incident are waiting in output DF and can be extracted. Then code can be restarted or continued to allow building up the rest of the Dataframe without losing previous work or having to go all the way back to the beginning.
End of explanation
## build main object using the defaults
testObj = GMapsLoc_DFBuilder()
print(testObj)
testObj.buildOutDF(tst_lat_lon_df) ## some tests not shown performed ahead of this run
## errors should be result of exceeding daily record allotment
## for free Google Maps API license
## this code tests basic functioning and default error handling
testObj.outDF.head()
testObj.outDF.tail() ## this check shows default behavior
## errors recorded in address field so user can find out why a particular location
## failed to return results - in this case "too many requests" (for license allotment)
## errors begin and end with "_" which an address will not.
## a query or filter of the data for addresses starting with "_" can inform the user
## which records need to be run again
## change error handling and a few other default parameters
testObj.rtn_null = True ## change error handling: bad records will now get blank Address values instead of error text
testObj.set_statusMsgGrouping(10) ## get status message about every 10 records (this will be a small test)
testObj.set_timeDelay(0) ## remove time delay (this increases risk of errors)
## note: each set_ function outputs current state of variables
## each output begins with "Global settings ..."
## last one is what these settings look like going into the next test
testObj.buildOutDF(tst_lat_lon_df[-25:]) ## redo end of DF .. should be entirely blank since we're out of licenses
## rtn_null = True told code to return empty cells instead of error text
## in production, it may be easier to just search for the nulls to get
## which records to redo, then delete nulls and add in missing records.
testObj.outDF.tail()
print(testObj) ## final look at settings for this object after process is complete
Explanation: Testing of The Subclass
A different subclass was created in another notebook to test most if not all of the non-web related logic of the Abstract class. This means testing in this notebook can focus on the code that produces final results and that interacts with the Google Maps API.
This section shows how the code can build up outDF, adding addresses obtained from the Google Maps API to the latitude and longitude provided in the input data. Tests show how errors are handled, both as text of the form "_&lt;errorTxt&gt;_" in the Address field, or as empty strings if you set rtn_null to True. Tests also show how data can be added to outDF by re-running the build function. This allows adding additional data to outDF, or adding in data that was missed due to server timeout errors or other interruptions to the web process.
Test Main Logic with Error Handling Exposed
These tests were designed to show the error handling in action. For the sake of brevity, earlier testing was deleted to just show later tests in which errors are expected (due to exceeding daily license allotment from Google).
End of explanation
### quick clean test with fresh allotment of license records for the day
# illustrates adding more in later and a run with no errors in it
# * do 600 initially
# * then add in the end of DF
testObj = GMapsLoc_DFBuilder()
print(testObj)
testObj.buildOutDF(tst_lat_lon_df[0:600])
testObj.buildOutDF(tst_lat_lon_df[600:]) ## end of the df added in
## in this test, indices between input/output will match
## since every record was added in using the same sequence
tst_lat_lon_df.tail() ## final records in the input
testObj.outDF.tail() ## final records in the output
Explanation: Test Main Logic - Fresh Allotment of Licenses (No Errors Expected)
Note that an error could still occur due to a server timeout, a server being down (on the Google site) or some other unexpected event. This test was set up to maximize the likelihood of showing what output can look like when no errors occur. Since at least one error seems to occur in batches of 900 or more, data is split in half with the second half added in after the first for the test set.
End of explanation
## do test with testObjDocs - resetting it to blank to start fresh
testObjDocs = GMapsLoc_DFBuilder()
print(testObjDocs)
testObjDocs.buildOutDF(tst_lat_lon_df)
testObjDocs.outDF[975:1000] ## spot check run on batches of 25 records to find the bad ones (between 900 and 1000)
## bad records found here
len(testObjDocs.outDF) # current length of DF
## our first run: index of input will be same as index on output
## test showing that records on input match the problem range in output
tst_lat_lon_df[985:990]
testObjDocs.buildOutDF(tst_lat_lon_df[985:990])
testObjDocs.outDF.tail(10) ## new records on the end ... still need to delete the bad ones
testObjDocs.outDF.drop(testObjDocs.outDF.index[985:990], inplace=True)
testObjDocs.outDF[984:991] ## as expected - bad rows dropped but we now have an indexing issue
testObjDocs.outDF.tail()
## fix index:
testObjDocs.reindex_OutDF()
testObjDocs.outDF[984:991]
testObjDocs.outDF.tail() ## note: all records are in here now but indices will be different from input DF
Explanation: Experiment in Cleaning Up Results
This test was run with a fresh allotment of Google license records for the day. It should have completed without error, but the server went down, causing 5 error records instead. This test shows what to do in this scenario.
End of explanation
gMapAddrDat = GMapsLoc_DFBuilder(endRw=4, time_delay=3, return_null=True)
gMapAddrDat.set_statusMsgGrouping(12)
gMapAddrDat.set_endRow_OutDf_caching(3)
gMapAddrDat.set_timeDelay(0)
gMapAddrDat.set_ServerTimeout(9)
gMapAddrDat.buildOutDF(tst_lat_lon_df[0:50])
print(gMapAddrDat)
gMapAddrDat.location = ""
print(gMapAddrDat)
Explanation: Testing of Enhanced Print() and set_ Functions
Now has logic to handle output in a different way. We get to see location.raw if possible, and it knows what to do if location is None or an empty string. Also testing the build parameters for the first time and the new set_ functions.
End of explanation
gMapAddrDat.testConnection() ## uses default test record to just ensure connection is working
gMapAddrDat.getGeoAddr(40.699100, -73.703697, test=True) ## function called by buildOutDF()
## use test mode to obtain more information
tstLoc1 = gMapAddrDat.getGeoAddr(40.699100, -73.703697, test=True) ## use .raw on output during testing
print(type(tstLoc1)) ## to view JSON structure of Location obj
tstLoc1.raw
tstLoc1 = gMapAddrDat.getGeoAddr(40.699100, -73.703697) ## default: test=False
print(type(tstLoc1)) ## when called internally to build the address field for
tstLoc1 ## for outDF, it just returns an address string
Explanation: Test of Other Internal Functions
These functions can be used in testing or to just get back a single value. These examples might prove useful.
End of explanation
# create new object to test the docstrings and some more quick coding tweaks
testObjDocs = GMapsLoc_DFBuilder()
print(testObjDocs)
help(testObjDocs)
print(testObjDocs.__doc__) # note: formatting is messed up if you do not use print() on the doc string
print(testObjDocs.buildOutDF.__doc__) # buildOutDF
help(DFBuilder)
Explanation: Documentation Tests
End of explanation |
8,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Step 2
Step4: Model Architecture
Step5: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
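That rule of thumb can be sketched as a tiny helper. The 0.90 and 0.05 thresholds below are illustrative placeholders, not values from this project:

```python
def diagnose_fit(train_acc, valid_acc, low=0.90, gap=0.05):
    """Heuristic: low accuracy on both sets -> underfitting;
    a large train/validation gap -> overfitting. Thresholds are placeholders."""
    if train_acc < low and valid_acc < low:
        return "underfitting"
    if train_acc - valid_acc > gap:
        return "overfitting"
    return "looks ok"

print(diagnose_fit(0.80, 0.78))   # underfitting
print(diagnose_fit(0.99, 0.85))   # overfitting
```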
Step6: Step 3
Step7: Predict the Sign Type for Each Image
Step8: Analyze Performance
Step9: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability
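For intuition, the same top-k selection can be done in plain NumPy. The probabilities below are made up for one image over six classes:

```python
import numpy as np

probs = np.array([0.10, 0.05, 0.50, 0.20, 0.12, 0.03])  # softmax output for one image
k = 3
top_ids = np.argsort(probs)[::-1][:k]   # class ids, highest probability first
top_vals = probs[top_ids]
print(top_ids.tolist())   # [2, 3, 4]
print(top_vals.tolist())  # [0.5, 0.2, 0.12]
```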
Step10: Note | Python Code:
# Load pickled data
import pickle
from keras.datasets import cifar10
from sklearn.model_selection import train_test_split
# TODO: Fill this in based on where you saved the training and testing data
#training_file = "traffic-signs-data/train.p"
#validation_file = "traffic-signs-data/valid.p"
#testing_file = "traffic-signs-data/test.p"
#with open(training_file, mode='rb') as f:
# train = pickle.load(f)
#with open(validation_file, mode='rb') as f:
# valid = pickle.load(f)
#with open(testing_file, mode='rb') as f:
# test = pickle.load(f)
#X_train, y_train = train['features'], train['labels']
#X_valid, y_valid = valid['features'], valid['labels']
#X_test, y_test = test['features'], test['labels']
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=0)
# y_train.shape is 2d, (50000, 1). While Keras is smart enough to handle this
# it's a good idea to flatten the array.
y_train = y_train.reshape(-1)
y_test = y_test.reshape(-1)
y_valid = y_valid.reshape(-1)
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n",
"File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 0: Load The Data
End of explanation
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
n_train = len(X_train)
# TODO: Number of testing examples.
n_test = len(X_test)
# TODO: What's the shape of an traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(y_valid))
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the pandas shape method might be useful for calculating some of the summary results.
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
label_name = ["" for x in range(43)]
label_name[0] = "Speed limit (20km/h)"
label_name[1] = "Speed limit (30km/h)"
label_name[2] = "Speed limit (50km/h)"
label_name[3] = "Speed limit (60km/h)"
label_name[4] = "Speed limit (70km/h)"
label_name[5] = "Speed limit (80km/h)"
label_name[6] = "End of speed limit (80km/h)"
label_name[7] = "Speed limit (100km/h)"
label_name[8] = "Speed limit (120km/h)"
label_name[9] = "No passing"
label_name[10] = "No passing for vehicles over 3.5 metric tons"
label_name[11] = "Right-of-way at the next intersection"
label_name[12] = "Priority road"
label_name[13] = "Yield"
label_name[14] = "Stop"
label_name[15] = "No vehicles"
label_name[16] = "Vehicles over 3.5 metric tons prohibited"
label_name[17] = "No entry"
label_name[18] = "General caution"
label_name[19] = "Dangerous curve to the left"
label_name[20] = "Dangerous curve to the right"
label_name[21] = "Double curve"
label_name[22] = "Bumpy road"
label_name[23] = "Slippery road"
label_name[24] = "Road narrows on the right"
label_name[25] = "Road work"
label_name[26] = "Traffic signals"
label_name[27] = "Pedestrians"
label_name[28] = "Children crossing"
label_name[29] = "Bicycles crossing"
label_name[30] = "Beware of ice/snow"
label_name[31] = "Wild animals crossing"
label_name[32] = "End of all speed and passing limits"
label_name[33] = "Turn right ahead"
label_name[34] = "Turn left ahead"
label_name[35] = "Ahead only"
label_name[36] = "Go straight or right"
label_name[37] = "Go straight or left"
label_name[38] = "Keep right"
label_name[39] = "Keep left"
label_name[40] = "Roundabout mandatory"
label_name[41] = "End of no passing"
label_name[42] = "End of no passing by vehicles over 3.5 metric tons"
unique_label = set(y_valid)
maximum_traffic_signs_to_print = 20
def print_overite(title):
print(title, end='\r')
def fast_draw_all_traffic_signs_in_different_images():
size_dataset = len(y_valid)
for label in unique_label:
plt.figure(figsize=(16,0.8))
traffic_sign_index = 0
number_of_signs_printed = 0
for i in range(size_dataset):
#title = "Label: " + str(label) + "/" + str(len(unique_label) - 1) + " processing " + str(i) + "/" + str(size_dataset - 1)
#print_overite(title)
if (y_valid[i] == label):
traffic_signs = plt.subplot(1, maximum_traffic_signs_to_print, traffic_sign_index+1)
traffic_signs.imshow(X_valid[i], interpolation='nearest')
traffic_signs.axis('off')
traffic_sign_index += 1
number_of_signs_printed += 1
if (number_of_signs_printed == maximum_traffic_signs_to_print):
break
print(str(label+1) + "/" + str(len(unique_label)) + " - " + label_name[label])
plt.show()
def slow_draw_all_traffic_signs_in_one_image():
number_of_labels = len(unique_label)
size = 0.8
width = size * maximum_traffic_signs_to_print
height = size * number_of_labels
size_dataset = len(y_valid)
print("Number of labels: " + str(number_of_labels))
print("Total size of the image w:" + str(width) + " h:" + str(height))
print("Total size of the dataset: " + str(size_dataset))
print("Creating subplots, this might take a long time...")
fig, traffic_signs = plt.subplots(number_of_labels, maximum_traffic_signs_to_print, figsize=(width,height))
for label in range(number_of_labels):
traffic_sign_index = 0
number_of_signs_printed = 0
for i in range(size_dataset):
title = "Processing label: " + str(label) + "/" + str(number_of_labels - 1)
print_overite(title)
if (y_valid[i] == label):
traffic_signs[label][traffic_sign_index].imshow(X_valid[i], interpolation='nearest')
traffic_signs[label][traffic_sign_index].axis('off')
traffic_sign_index += 1
number_of_signs_printed += 1
if (number_of_signs_printed == maximum_traffic_signs_to_print):
break
print()
print("Painting...")
fast_draw_all_traffic_signs_in_different_images()
#slow_draw_all_traffic_signs_in_one_image()
Explanation: Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
End of explanation
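One of the suggested visualizations — the count of each sign — only needs `np.bincount`; a sketch (the toy label vector here is an assumption standing in for the real `y_train`):

```python
import numpy as np

# Toy label vector standing in for y_train (assumption for illustration).
y = np.array([0, 0, 1, 2, 2, 2, 42])

counts = np.bincount(y, minlength=43)   # counts[i] = how many examples of class i
# counts can be handed straight to plt.bar(range(43), counts) for the bar chart.
```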
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
#X_train, y_train = train['features'], train['labels']
#X_valid, y_valid = valid['features'], valid['labels']
#X_test, y_test = test['features'], test['labels']
import cv2
import tensorflow as tf
import numpy as np
def preprocess(x):
x = [cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) for image in x]
return np.reshape(x, (-1, 32, 32, 1))
X_train = preprocess(X_train)
X_valid = preprocess(X_valid)
X_test = preprocess(X_test)
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.
There are various aspects to consider when thinking about this problem:
Neural network architecture
Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
Number of examples per label (some have more than others).
Generate fake data.
Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.
NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
Pre-process the Data Set (normalization, grayscale, etc.)
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
End of explanation
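The grayscale step above skips the normalization that the text mentions; a common sketch is to centre the pixel range around zero (the exact constants are a convention, not something fixed by the project):

```python
import numpy as np

def normalize(x):
    # Map uint8 pixels [0, 255] to roughly [-1, 1), centred near zero.
    return (np.asarray(x, dtype=np.float32) - 128.0) / 128.0

gray = np.full((2, 32, 32, 1), 128, dtype=np.uint8)  # mid-gray stand-in batch
normed = normalize(gray)
```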
### Define your architecture here.
### Feel free to use as many code cells as needed.
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Hyperparameters
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# Activation.
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 84. Output = n_classes.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(n_classes))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: Model Architecture
End of explanation
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
#Constants
EPOCHS = 10
BATCH_SIZE = 128
# Features and Labels
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
#Training Pipeline
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
#Model Evaluation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
#Train the Model
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
print("EPOCH {}".format(i+1) + ", accuracy: {:.3f}".format(evaluate(X_valid, y_valid)))
saver.save(sess, './lenet')
print("Model saved")
#Evaluate accuracy of the system
def evaluate_accuracy(kind, x, y):
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
accuracy = evaluate(x, y)
print(kind + " Accuracy = {:.3f}".format(accuracy))
evaluate_accuracy("Validation", X_valid, y_valid)
evaluate_accuracy("Training", X_train, y_train)
evaluate_accuracy("Test", X_test, y_test)
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.image as mpimg
#Images from http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset#Downloads
#newImageFileNames = [
# "new-images/00000.ppm",
# "new-images/00001.ppm",
# "new-images/00002.ppm",
# "new-images/00003.ppm",
# "new-images/00004.ppm"
#]
#y_newImages = [16, 1, 38, 33, 11]
#Images from Google search https://www.google.co.uk/search?q=german+road+signs
newImageFileNames = [
"random-web-images/label_1.jpg",
"random-web-images/label_17.jpg",
"random-web-images/label_18.jpg",
"random-web-images/label_25.jpg",
"random-web-images/label_28.jpg"
]
y_newImages = [1, 17, 18, 25, 28]
newImages = [mpimg.imread(newImageFileName) for newImageFileName in newImageFileNames]
def displayImages(images):
size = len(images)
for i in range(size):
image = plt.subplot(1, size, i+1)
image.imshow(images[i])
displayImages(newImages)
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
#Pre-Process images
newImages = [cv2.resize(newImage, (32, 32)) for newImage in newImages]
X_newImages = preprocess(newImages).astype(np.float32)
def predict_signs(X):
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax = tf.nn.softmax(logits)
label_predictions = sess.run(softmax, feed_dict={x: X})
return [np.argmax(label_prediction) for label_prediction in label_predictions]
predicted_labels = predict_signs(X_newImages)
print("Predicted labels: " + str(predicted_labels))
print("Correct labels: " + str(y_newImages))
Explanation: Predict the Sign Type for Each Image
End of explanation
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
def ratio_correct_matches(predicted_labels, correct_labels):
number_of_correct_matches = 0
number_of_labels = len(predicted_labels)
for i in range(number_of_labels):
if (predicted_labels[i] == correct_labels[i]):
number_of_correct_matches += 1
return number_of_correct_matches * 100 / number_of_labels
ratio_success = ratio_correct_matches(predicted_labels, y_newImages)
print("Accuracy is " + str(ratio_success) + "%")
Explanation: Analyze Performance
End of explanation
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
def softmax_top_probabilities(X, number_of_probabilities):
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
softmax = tf.nn.softmax(logits)
softmax_probabilities = sess.run(softmax, feed_dict={x: X})
top_probabilities = tf.nn.top_k(softmax_probabilities, number_of_probabilities)
return sess.run(top_probabilities)
top_probabilities = softmax_top_probabilities(X_newImages, 5)
for i in range(len(X_newImages)):
print("Top five softmax probabilities for image number " + str(i+1))
print(top_probabilities.values[i])
print("-> top five labels: " + str(top_probabilities.indices[i]))
print()
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tk.nn.top_k is used to choose the three classes with the highest probability:
```
(5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.
End of explanation
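The same top-k selection can be cross-checked in plain NumPy, with no TensorFlow session; this reuses the 5x6 array from the worked example above:

```python
import numpy as np

a = np.array([[0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202],
              [0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337],
              [0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179],
              [0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091],
              [0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])

k = 3
top_indices = np.argsort(-a, axis=1)[:, :k]               # class ids, best first
top_values = np.take_along_axis(a, top_indices, axis=1)   # matching probabilities
```

The rows of `top_indices` reproduce the indices shown in the `TopKV2` output above.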
#Traffic sign General Caution (label 18) analysis
problematic_label = 18
general_caution = []
general_caution_label = []
for i in range(len(y_test)):
if (y_test[i] == problematic_label):
general_caution.append(X_test[i])
general_caution_label.append(problematic_label)
evaluate_accuracy("General Caution", general_caution, general_caution_label)
Explanation: Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run. You can then export the notebook by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.
End of explanation |
8,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Examples
http
Step1: Let's read data from the BRFSS
Step2: If we group by sex, we get a DataFrameGroupBy object.
Step3: If we select a particular column from the GroupBy, we get a SeriesGroupBy object.
Step4: If you invoke a reduce method on a DataFrameGroupBy, you get a DataFrame
Step5: If you invoke a reduce method on a SeriesGroupBy, you get a Series
Step6: You can use aggregate to apply a collection of reduce methods
Step7: If the reduce method you want is not available, you can make your own
Step8: Here's how it works when we apply it directly
Step9: And we can use apply to apply it to each group
Step10: The digitize-groupby combo
Let's say we want to group people into deciles (bottom 10%, next 10%, and so on).
We can start by defining the cumulative probabilities that mark the borders between deciles.
Step11: And then use deciles to find the values that correspond to those cumulative probabilities.
Step12: digitize takes a series and a sequence of bin boundaries, and computes the bin index for each element in the series.
Step13: Exercise
Step14: Now, if your digitize function is working, we can assign the results to a new column in the DataFrame
Step15: And then group by height_decile
Step16: Now we can compute means for each variable in each group
Step17: It looks like
Step18: If we apply quantile to a SeriesGroupBy, we get back a Series with a MultiIndex.
Step19: If you unstack a MultiIndex, the inner level of the MultiIndex gets broken out into columns.
Step20: Which makes it convenient to plot each of the columns as a line.
Step21: The other view of this data we might like is the CDF of weight within each height group.
We can use apply with the Cdf constructor from thinkstats2. The result is a Series of Cdf objects.
Step22: And now we can plot the CDFs
Step23: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import analytic
import thinkstats2
import seaborn
Explanation: Pandas Examples
http://thinkstats2.com
Copyright 2017 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import brfss
df = brfss.ReadBrfss()
df.describe()
Explanation: Let's read data from the BRFSS:
End of explanation
groupby = df.groupby('sex')
groupby
Explanation: If we group by sex, we get a DataFrameGroupBy object.
End of explanation
seriesgroupby = groupby.htm3
seriesgroupby
Explanation: If we select a particular column from the GroupBy, we get a SeriesGroupBy object.
End of explanation
groupby.mean()
Explanation: If you invoke a reduce method on a DataFrameGroupBy, you get a DataFrame:
End of explanation
seriesgroupby.mean()
Explanation: If you invoke a reduce method on a SeriesGroupBy, you get a Series:
End of explanation
groupby.aggregate(['mean', 'std'])
seriesgroupby.aggregate(['mean', 'std'])
Explanation: You can use aggregate to apply a collection of reduce methods:
End of explanation
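The GroupBy -> aggregate mechanics are easy to see on a toy frame (the numbers below are made up for illustration, not BRFSS data):

```python
import pandas as pd

toy = pd.DataFrame({'sex': [1, 1, 2, 2],
                    'htm3': [180.0, 170.0, 160.0, 150.0]})

# Same pattern as above: group, select a column, apply several reducers at once.
agg = toy.groupby('sex')['htm3'].aggregate(['mean', 'std'])
```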
def trimmed_mean(series):
lower, upper = series.quantile([0.05, 0.95])
return series.clip(lower, upper).mean()
Explanation: If the reduce method you want is not available, you can make your own:
End of explanation
trimmed_mean(df.htm3)
Explanation: Here's how it works when we apply it directly:
End of explanation
seriesgroupby.apply(trimmed_mean)
Explanation: And we can use apply to apply it to each group:
End of explanation
ps = np.linspace(0, 1, 11)
ps
Explanation: The digitize-groupby combo
Let's say we want to group people into deciles (bottom 10%, next 10%, and so on).
We can start by defining the cumulative probabilities that mark the borders between deciles.
End of explanation
series = df.htm3
bins = series.quantile(ps)
bins
Explanation: And then use deciles to find the values that correspond to those cumulative probabilities.
End of explanation
np.digitize(series, bins)
Explanation: digitize takes a series and a sequence of bin boundaries, and computes the bin index for each element in the series.
End of explanation
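A tiny standalone illustration of that behaviour (toy numbers, not the BRFSS heights):

```python
import numpy as np

series = np.array([1.0, 3.5, 7.2, 9.9])
bins = np.array([0.0, 2.5, 5.0, 7.5, 10.0])   # boundaries, like Series.quantile output

idx = np.digitize(series, bins)   # bin index for each element
```

With the default `right=False`, element x lands in bin i when `bins[i-1] <= x < bins[i]`.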
def digitize(series, n=11):
ps = np.linspace(0, 1, n)
bins = series.quantile(ps)
return np.digitize(series, bins)
Explanation: Exercise: Collect the code snippets from the previous cells to write a function called digitize that takes a Series and a number of bins and returns the result from np.digitize.
End of explanation
df['height_decile'] = digitize(df.htm3)
df.height_decile.describe()
Explanation: Now, if your digitize function is working, we can assign the results to a new column in the DataFrame:
End of explanation
groupby = df.groupby('height_decile')
Explanation: And then group by height_decile
End of explanation
groupby.mean()
Explanation: Now we can compute means for each variable in each group:
End of explanation
weights = groupby.wtkg2
weights
Explanation: It looks like:
The shortest people are older than the tallest people, on average.
The shortest people are much more likely to be female (no surprise there).
The shortest people are lighter than the tallest people (wtkg2), and they were lighter last year, too (wtyrago).
Shorter people are more oversampled, so they have lower final weights. This is at least partly, and maybe entirely, due to the relationship with sex.
The fact that all of these variables are associated with height suggests that it will be important to control for age and sex for almost any analysis we want to do with this data.
Nevertheless, we'll start with a simple analysis looking at weights within each height group.
End of explanation
quantiles = weights.quantile([0.25, 0.5, 0.75])
quantiles
type(quantiles.index)
Explanation: If we apply quantile to a SeriesGroupBy, we get back a Series with a MultiIndex.
End of explanation
quantiles.unstack()
Explanation: If you unstack a MultiIndex, the inner level of the MultiIndex gets broken out into columns.
End of explanation
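The same unstack move on a hand-built MultiIndex Series (toy values, chosen only to make the reshape visible):

```python
import pandas as pd

idx = pd.MultiIndex.from_product([[1, 2], [0.25, 0.75]],
                                 names=['height_decile', 'quantile'])
s = pd.Series([50.0, 70.0, 60.0, 80.0], index=idx)

wide = s.unstack()   # the inner 'quantile' level becomes the columns
```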
quantiles.unstack().plot()
Explanation: Which makes it convenient to plot each of the columns as a line.
End of explanation
from thinkstats2 import Cdf
cdfs = weights.apply(Cdf)
cdfs
Explanation: The other view of this data we might like is the CDF of weight within each height group.
We can use apply with the Cdf constructor from thinkstats2. The result is a Series of Cdf objects.
End of explanation
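Without thinkstats2 at hand, an empirical CDF is just a sort plus ranks; a minimal stand-in for the Cdf objects used here:

```python
import numpy as np

def ecdf(sample):
    # Sorted values and the fraction of the sample at or below each one.
    xs = np.sort(np.asarray(sample, dtype=float))
    ps = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ps

xs, ps = ecdf([3.0, 1.0, 2.0, 4.0])
```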
import thinkplot
thinkplot.Cdfs(cdfs[1:11:2])
thinkplot.Config(xlabel='Weight (kg)', ylabel='Cdf')
Explanation: And now we can plot the CDFs
End of explanation
groupby = df.groupby(['sex', 'height_decile'])
groupby.mean()
groupby.wtkg2.mean()
cdfs = groupby.wtkg2.apply(Cdf)
cdfs
cdfs.unstack()
men = cdfs.unstack().loc[1]
men
thinkplot.Cdfs(men[1:11:2])
women = cdfs.unstack().loc[2]
women
thinkplot.Cdfs(women[1:11:2])
Explanation: Exercise: Plot CDFs of weight for men and women separately, broken out by decile of height.
End of explanation |
8,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Notation
$Y$ generic random variable
$U$ latent random variable
$V$ residual random variable
$X$ predictor
Parameters
$\eta$ and $\nu$ generic parameters
$\mu=E[Y]$ mean parameter
$\gamma=E[\log Y]$ geometric mean parameter
$\sigma^2=E[(Y-\mu)^2]$ standard deviation parameter
$Y=\alpha+U$ shift parameter
$Y= U/\theta$ scale parameter
$Y= U \lambda$ inverse-scale (rate) parameter
$Y=e^{-\tau} U$ log-scale parameter
$Y=U^\kappa$ exponent parameter
$Y=f(U,\rho)$ shape parameter
$Y=\alpha + \beta X$ linear predictor
$\psi$ digamma function
$\pi$ pi number
$\phi$ measurement scale
$\delta$ dirac function
$\zeta,\epsilon,\varepsilon,\vartheta,\iota,\xi,\varpi,\varrho,\varsigma,\varphi,\chi,\omega$
Gamma distribution
Parameters $\eta$ and $\nu$ are orthogonal if
$$\operatorname{E}_Y \left[ \frac{\partial^2 \log f(Y;\eta,\nu)}{\partial\eta \, \partial\nu} \right]=0$$
The probability density function of Gamma distribution parametrized by shape parameter $\rho$ and scale parameter $\theta$ is
$$f(Y=y;\rho,\theta)=\frac{1}{\Gamma(\rho) \theta^\rho} y^{\rho - 1} e^{-\frac{y}{\theta}}$$
with Fisher information
$$I_{\rho \theta} = \begin{pmatrix}
\psi'(\rho) & \theta^{-1} \
\theta^{-1} & \rho \theta^{-2} \end{pmatrix} $$
Consider parametrization in terms of logarithm of geometric mean $\gamma=E[\log Y]=\psi(\rho)+\log \theta$ and log-scale $\tau=\log(\theta)$, where $\psi$ is the digamma function. Then the logarithm of density function parametrized by $\gamma$ and $\tau$ is
$$\log f(Y=y;\gamma,\tau)=-\log\Gamma(\omega(\gamma-\tau)) -\tau \omega(\gamma-\tau) + (\omega(\gamma-\tau)-1)\log y- y e^{-\tau}$$
where we use $\omega$ to label the inverse digamma function. By $\omega'(y)$ and $\omega''(y)$ we denote the first and second derivative of the inverse digamma function with respect to $y$. Next, we compute the first derivative of the log-density with respect to $\gamma$
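The inverse digamma $\omega$ has no closed form; a standard numerical sketch (assuming scipy is available) is Newton iteration with the usual piecewise initial guess:

```python
import numpy as np
from scipy.special import digamma, polygamma

def inv_digamma(y, n_iter=25):
    # Newton iteration for omega(y), the inverse of the digamma function.
    # Initial guess: exp(y)+1/2 for large y, -1/(y + Euler-Mascheroni) otherwise.
    y = np.asarray(y, dtype=float)
    x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y - digamma(1.0)))
    for _ in range(n_iter):
        x = x - (digamma(x) - y) / polygamma(1, x)
    return x

rho = 3.7
recovered = inv_digamma(digamma(rho))   # round-trip should recover rho
```

Since the digamma function is monotone with positive derivative, Newton's method converges quickly from this starting point.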
Step6: Hierarchical parameter recovery
Step7: Weibull distribution
$$f(y)=\frac{\kappa}{y}\left(\frac{y}{\theta}\right)^{\kappa}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$I_{\theta \kappa} = \begin{pmatrix}
\frac{\kappa^2}{\theta^2} & -\frac{\psi(2)}{\theta}\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$E[\log Y]= \log \theta + \psi(1)/\kappa$
$E[Y^s]=\theta^s \Gamma(1+s/\kappa)$
$E[Y^\kappa]=\theta^\kappa $
$\mathrm{Var}[\log Y]=\psi'(1)/\kappa^2$
$E[(Y/\theta)^\kappa]=1$
$\mathrm{Var}[(Y/\theta)^\kappa]=1$
$E[\log (Y/\theta)^\kappa]= \psi(1)$
$E[\log^2 (Y/\theta)^\kappa]= \psi'(1)+\psi(1)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \psi(2)= \psi(1)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \psi'(2)+\psi(2)^2$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$\tau=\log \theta$
$r_{\tau \kappa}=\psi(2)/\sqrt{\psi'(1)+\psi(2)^2}=0.31$
This is orthogonal parametrization
$$\kappa= \frac{1}{\xi-H \tau}$$
$$\xi=\frac{1}{\kappa}+H \tau $$
$H=\frac{\psi(2)}{\psi'(1)+\psi(2)^2}=0.232$
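The two constants quoted above can be checked numerically (scipy assumed):

```python
import numpy as np
from scipy.special import digamma, polygamma

psi2 = digamma(2.0)          # psi(2) = 1 - Euler-Mascheroni constant
psip1 = polygamma(1, 1.0)    # psi'(1) = pi^2 / 6

r = psi2 / np.sqrt(psip1 + psi2**2)   # correlation of (tau, kappa)
H = psi2 / (psip1 + psi2**2)          # coefficient in the orthogonal xi
```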
$$I_{\tau \xi} = \frac{H}{(\xi-\tau)^{2}} \begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix} $$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$$I_{\tau,1/\kappa} =\kappa^{2} \begin{pmatrix}
1 & \psi(2)\
. & \psi'(1)+\psi(2)^2\end{pmatrix} $$
$$I_{\tau,1/\kappa+H\tau} =\kappa^{2} \begin{pmatrix}
1-\psi(2) H& 0\
. & \psi'(1)+\psi(2)^2\end{pmatrix} = \kappa^{2} \begin{pmatrix}
0.902 & 0\
. & 1.824\end{pmatrix}$$
$$I_{\tau,H\kappa} =\begin{pmatrix}
\kappa^{2} & \psi(2) H\
. & \kappa^{-2} \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)} =\kappa^{2} H^2 \begin{pmatrix}
1 & \psi(2) H\
. & \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)+\tau} =\kappa^{2} H^2 \begin{pmatrix}
1-\psi(2) H& 0\
. & \psi(2) H\end{pmatrix} \= \kappa^{2} H^2
\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}= \kappa^{2} H^2 \begin{pmatrix}
0.902 & 0\
. & 0.098\end{pmatrix}$$
$$I_{\tau,\epsilon} =(\epsilon-\tau)^2\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}$$
Orthogonal from Cox and Reid (1987)
$\epsilon= \exp(\log \theta + \psi(2)/\kappa)=\exp(1/\kappa)\exp(E[\log Y])=\exp E[(Y/\theta)^\kappa \log Y]$
$\theta= \epsilon \exp(-\psi(2)/\kappa)$
Step8: $$J_{a/H,b}=\begin{pmatrix} H &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} H^{-2} A & H^{-1} B \ H^{-1} B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$H=B/A$
$$J^T \begin{pmatrix} A & B \ B & C \end{pmatrix} J= \begin{pmatrix} B^2/A &B^2/A \B^2/A & C\end{pmatrix}$$
$$J_{a+b,b}=\begin{pmatrix} 1 &-1 \0 & 1 \end{pmatrix}$$
$$J^T\begin{pmatrix} A &A \A & B \end{pmatrix} J= \begin{pmatrix} A &0 \0 & B-A\end{pmatrix}$$
$$J_{\log a,b}=\begin{pmatrix} e^a &0 \0 & 1 \end{pmatrix}$$
$$J_{\log a,b}^T \begin{pmatrix} e^{-2a} A & e^{-a} B \e^{-a} B & C \end{pmatrix} J_{\log a,b}= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$$J_{e^a,b}=\begin{pmatrix} 1/a &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^2 A & a B \ a B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$$J_{a^{-1},b}=\begin{pmatrix} -a^{2} &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^{-4} A & a^{-2} B \ a^{-2} B & C \end{pmatrix} J= \begin{pmatrix} A &-B \-B & C\end{pmatrix}$$
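Each of these congruences is easy to spot-check numerically; e.g. the $(a+b,b)$ case:

```python
import numpy as np

A, B = 2.0, 5.0   # arbitrary entries
J = np.array([[1.0, -1.0],
              [0.0,  1.0]])
M = np.array([[A, A],
              [A, B]])

out = J.T @ M @ J   # should be diag(A, B - A)
```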
Step13: old stuff
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\frac{\kappa^2}{\theta^2}+J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{\theta}(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$\theta=e^\phi$
$J_{11}=\frac{\partial \theta}{\partial \phi}=e^\phi=\theta$
$J_{12}=\frac{\partial \theta}{\partial \gamma}=0$
$$\mathrm{Cov}(\gamma,\phi)=J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{J_{11}}(J_{21}J_{22}+J_{11}J_{22})$$
$$\mathrm{Cov}(\gamma,\phi)=J_{22}\left(J_{21}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\psi(2)(J_{21}/J_{11}+1)\right)\
= J_{21}J_{22}\frac{\psi(2)}{\kappa^2}\left(
\frac{\psi'(2)+\psi(2)^2+1}{\psi(2)}-\kappa^2\left(\frac{\partial \phi}{\partial \kappa}+e^{-\phi}\right)\right)\
= J_{21}J_{22}\psi(2)\left(
\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2\psi(2)}-e^{-\phi}-\frac{\partial \phi}{\partial \kappa}\right)
$$
$\gamma=-\phi- \frac{\psi'(2)+\psi(2)^2+1}{\kappa \psi(2)}$
$\frac{\partial \gamma}{\partial \kappa}= -\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2 \psi(2)}$
$\kappa=-\frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi) \psi(2)}$
$\frac{\partial \kappa}{\partial \phi}= \frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi)^2 \psi(2)}$
$$\mathrm{Cov}(\gamma,\phi)= J_{21}J_{22}\psi(2)\left(
\frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1}-e^{-\phi}- \frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1} \right)
$$
$c \mathrm{Ei}(\frac{c}{\kappa})-e^\frac{c}{\kappa}(e^{-\phi}+\kappa)=k$
Step14: Hierarchical weibull
Step16: Information matrix generalized gamma
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}\left(\frac{y}{\theta}\right)^{\kappa \rho}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y - \kappa \rho \log \theta -\left(\frac{y}{\theta} \right)^\kappa$$
$$I_{\rho \theta \kappa} = \begin{pmatrix} \psi'(\rho) & \frac{\kappa}{\theta} &- \frac{\psi(\rho)}{\kappa} \
. & \frac{\rho \kappa^2}{\theta^2} & -\frac{\rho}{\theta}\psi(\rho+1)\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$\rho (\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho})= \rho \psi'(\rho)+\rho \psi(\rho)^2 + 2\psi(\rho) +1$
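This identity follows from the recurrences $\psi(\rho+1)=\psi(\rho)+1/\rho$ and $\psi'(\rho+1)=\psi'(\rho)-1/\rho^2$; a quick numerical check (scipy assumed):

```python
from scipy.special import digamma, polygamma

rho = 1.7   # arbitrary test point
lhs = rho * (polygamma(1, rho + 1) + digamma(rho + 1) ** 2 + 1.0 / rho)
rhs = rho * polygamma(1, rho) + rho * digamma(rho) ** 2 + 2.0 * digamma(rho) + 1.0
gap = abs(lhs - rhs)
```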
$E[\log Y]= \log \theta + \psi(\rho)/\kappa$
$E[Y^s]=\theta^s \Gamma(\rho+s/\kappa)/\Gamma(\rho)$
$E[Y^\kappa]=\theta^\kappa \rho$
$E[Y^\kappa \log Y ]=\theta^\kappa \rho (\log \theta + \psi(\rho+1)/\kappa)= \theta^\kappa (\rho \log \theta + \rho \psi(\rho)/\kappa+1/\kappa)$
$E[\log^2 Y]= \log^2 \theta + 2 \log \theta \psi(\rho)/\kappa+(\psi'(\rho)+\psi(\rho)^2)/\kappa^2$
$E[Y^\kappa \log^2 Y]= \theta^\kappa \rho (\log^2 \theta + 2 \log \theta \psi(\rho+1)/\kappa+(\psi'(\rho+1)+\psi(\rho+1)^2)/\kappa^2)$
$E[Y^{2\kappa} \log^2 Y]= \theta^{2\kappa} (\rho+1) (\log^2 \theta + 2 \log \theta \psi(\rho+2)/\kappa+(\psi'(\rho+2)+\psi(\rho+2)^2)/\kappa^2)$
$\mathrm{Var}[\log Y]=\psi'(\rho)/\kappa^2$
$E[(Y/\theta)^\kappa]=\rho$
$\mathrm{Var}[(Y/\theta)^\kappa]=\rho$
$E[\log (Y/\theta)^\kappa]= \psi(\rho)$
$E[\log^2 (Y/\theta)^\kappa]= \psi'(\rho)+\psi(\rho)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \rho (\psi'(\rho+1)+\psi(\rho+1)^2)$
$$I_{\rho \tau \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \frac{\psi(\rho)}{\kappa} \
. & \rho \kappa^2 & -\rho\psi(\rho+1)\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho, \tau, \log \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \psi(\rho) \
. & \rho \kappa^2 & -\kappa\rho\psi(\rho+1)\
. & . & \rho \left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \tau,1/\kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho) \
. & \rho \kappa^2 & -\kappa^2 \rho A\
. & . & \kappa^2 \rho B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho)A/B \
. & \rho \kappa^2 & -\kappa^2 \rho A^2/B\
. & . & \kappa^2 \rho A^2/B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)-\tau} = \begin{pmatrix} \psi'(\rho) & \kappa-\kappa\psi(\rho)A/B &- \kappa\psi(\rho)A/B \
. & \rho \kappa^2 & 0\
. & . & \kappa^2 \rho A^2/B-\rho \kappa^2\end{pmatrix} $$
$A=\psi(\rho+1)$
$B=\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)$
$\gamma=\tau+\psi(\rho)/\kappa$
$\rho=\omega(\kappa(\gamma-\tau))=\omega$
$$J=\begin{pmatrix}\kappa \omega' &-\kappa \omega' & (\gamma-\tau)\omega'\ 0&1 &0 \ 0& 0& 1 \end{pmatrix}$$
$$I_{\gamma \tau \kappa} = J^T\begin{pmatrix} \frac{1}{\omega'} & \kappa &-(\gamma-\tau) \
. & \omega \kappa^2 & -(\gamma-\tau)\omega-1\
. & . & \frac{R}{\kappa^2}\end{pmatrix} J $$
$$I_{\gamma \tau \kappa} = \begin{pmatrix} \kappa^2\omega' &0&0 \
. & \kappa^2(\omega -\omega')& (\gamma-\tau)(\kappa\omega'-\omega)-1\
. & . & \frac{R}{\kappa^2}-(\gamma-\tau)^2\omega'\end{pmatrix} $$
with $R=\frac{\omega}{\omega'} +\omega \kappa^2 (\gamma-\tau)^2 + 2\kappa (\gamma-\tau)+1$
Simplified Gamma
$$f(y;\rho)=\frac{ y^{\rho-1} e^{-y}}{\Gamma(\rho)}$$
$$\log f(y;\rho)=\rho \log y -\log y -y-\log \Gamma(\rho)$$
$\Gamma(z+1) = \int_0^\infty x^{z} e^{-x}\, dx$
$\Gamma(z+1)/\Gamma(z)=z$
$\frac{d^n}{dx^n}\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} (\ln t)^n \, dt$
$\psi(x)=\log(\Gamma(x))'=\Gamma'(x)/\Gamma(x)$
$E[Y]= \int_0^\infty y^{\rho} e^{-y}\, dy / \Gamma(\rho)= \Gamma(\rho+1)/ \Gamma(\rho)=\rho$
$E[Y^s]=\Gamma(\rho+s)/ \Gamma(\rho)$
$\mathrm{Var}[Y]=E[Y^2]-E[Y]^2=\rho(\rho+1)-\rho^2=\rho$
$E[\log Y]=\Gamma'(\rho)/\Gamma(\rho)=\psi(\rho)$
$E[Y \log Y]=\Gamma'(\rho+1)/\Gamma(\rho)= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[1/Y]= \Gamma(\rho-1)/ \Gamma(\rho)=1/(\rho-1)$
$\mathrm{Var}[1/Y]=E[Y^2]-E[Y]^2=\frac{1}{(\rho-2)(\rho-1)^2}$
$E[\log^2 Y]=\Gamma''(\rho)/\Gamma(\rho)=\psi'(\rho)+\psi(\rho)^2$
use $\psi'(x)=(\Gamma'(x)/\Gamma(x))'=\Gamma''(x)/\Gamma(x)-(\Gamma'(x)/\Gamma(x))^2$
$E[Y \log^2 Y]=\Gamma''(\rho+1)/\Gamma(\rho)=\rho(\psi'(\rho+1)+\psi(\rho+1)^2)=\rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)$
Gengamma with $\theta=1$
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}y^{\kappa \rho} e^{-y^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y -y^\kappa$$
$$I_{\rho \kappa} = \begin{pmatrix} \psi'(\rho) & - \frac{\psi(\rho)}{\kappa} \
. & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \log\kappa} = \begin{pmatrix} \psi'(\rho) & - \psi(\rho) \
. & \rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)+1\end{pmatrix} $$
$\gamma=\psi(\rho)/\kappa$
$\rho=\omega(\gamma \kappa)$
$1=d \psi(\omega(\gamma))/d \gamma$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \
. & \frac{\omega(\gamma\kappa)}{\kappa\omega'(\gamma\kappa)}+ \omega(\gamma\kappa)\gamma^2\kappa+2\gamma+\frac{1}{\kappa^2}\end{pmatrix} $$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \
. & \kappa^{-1}E[Y\log^2 Y]+\frac{1}{\kappa^2}\end{pmatrix} $$
TODO check the last result by transformation of $I_{\rho \kappa}$
orthogonal with
Step18: Beta distribution
Parameters $\alpha$ and $\beta$ are orthogonal if
$$\operatorname{E}_X
\left[
\frac{\partial \log f(X;\alpha,\beta)}{\partial\alpha \ \partial\beta}
\right]=0$$
The probability density function of Beta distribution parametrized by shape parameters $\alpha$ and $\beta$ is
$$f(X=x;\alpha,\beta)=\frac{ x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$
Consider parametrization in terms of logarithm of geometric mean $E[\log X]=\gamma=\psi(\alpha)-\psi(\alpha+\beta)$ and the logarithm of geometric mean of $1-X$
Step19: Wald distribution Fisher information
$$f(x)=\frac{\alpha}{\sigma \sqrt{2 \pi x^3}}\exp\left(-\frac{(\nu x-\alpha)^2}{2 \sigma^2 x}\right)$$
$E[X]=\alpha/\nu$
$E[1/X]=\nu/\alpha +\sigma^2/\alpha^2$
$$I_{\alpha \sigma \nu} = \begin{pmatrix} \frac{2}{\alpha^2}+\frac{\nu}{\sigma^2 \alpha} & \frac{2}{\sigma \alpha} & \frac{1}{\sigma}\
. & \frac{1}{\sigma^2} &0\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$
$$I_{\log \alpha,\log \sigma \nu} = \begin{pmatrix} 2 \sigma+\frac{\nu \alpha}{\sigma} & 2 & \frac{1}{\sigma}\
. & 1 &0\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$ | Python Code:
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> k;
real<lower=0> t;
}generated quantities{
real<lower=0> y;
y=gamma_rng(k,1/t);
}"""
smGammaGen = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N];
}parameters{
real<lower=0> k;
real<lower=0> t;
}model{
for (n in 1:N)
y[n]~gamma(k,1/t);
}"""
smGamma = pystan.StanModel(model_code=model)
N=1000
fit=smGammaGen.sampling(data={'N':N,'k':10,'t':np.exp(-10)},
chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y']
print(y.shape)
fit=smGamma.sampling(data={'N':N,'y':y},
chains=4,n_jobs=4,seed=1,thin=2,iter=2000,warmup=1000)
print(fit)
w=fit.extract()
t=np.log(w['t'])
g=pg(0,w['k'])+t
w=fit.extract()
plt.plot(g,t,'.')
np.corrcoef(g,t)[0,1]
invgammafun='''functions{
vector invdigamma(vector x){
vector[num_elements(x)] y; vector[num_elements(x)] L;
for (i in 1:num_elements(x)){
if (x[i]==digamma(1)){
y[i]=1;
}else{ if (x[i]>=-2.22){
y[i]=(exp(x[i])+0.5);
}else{
y[i]=-1/(x[i]-digamma(1)); // Minka's guess -1/(x+gamma); note digamma(1)=-gamma
}}}
L=digamma(y)-x;
while (max(fabs(L))>1e-12){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;}
real invdigammaR(real x){
real y; real L;
if (x==digamma(1)){
y=1;
}else{ if (x>=-2.22){
y=(exp(x)+0.5);
}else{
y=-1/(x-digamma(1)); // Minka's guess -1/(x+gamma)
}}
L=digamma(y)-x;
while (abs(L)>1e-5){
y=y-L ./trigamma(y);
L=digamma(y)-x;
}
return y;
}} '''
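For reference, the same guess-then-Newton structure can be sketched in plain Python (a hypothetical helper mirroring the Stan function above; it uses Minka's initial guess and is not part of the original notebook):

```python
import numpy as np
from scipy.special import digamma, polygamma

def invdigamma(x, tol=1e-10, max_iter=100):
    """Invert the digamma function with Newton's method (vectorized)."""
    x = np.asarray(x, dtype=float)
    with np.errstate(divide='ignore'):
        # Minka's initial guess: exp(x)+0.5 for large x, -1/(x+gamma) otherwise
        y = np.where(x >= -2.22, np.exp(x) + 0.5, -1.0 / (x + np.euler_gamma))
    for _ in range(max_iter):
        # Newton step for psi(y) = x, using trigamma as the derivative
        step = (digamma(y) - x) / polygamma(1, y)
        y = y - step
        if np.max(np.abs(step)) < tol:
            break
    return y

print(invdigamma(digamma(np.array([0.5, 3.0]))))
```

Because digamma is increasing and concave on the positive axis, Newton converges quickly from these starting points.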
model = """
data {
int<lower=0> N; //nr subjects
real<lower=0> y[N];
}parameters{
real<lower=-100,upper=100> g;
real<lower=-100,upper=100> t;
}transformed parameters{
real k;
k=invdigammaR(g-t);
}model{
for (n in 1:N)
y[n]~gamma(k,exp(-t));
}"""
smGammaGeom = pystan.StanModel(model_code=invgammafun+model)
N=10
fit=smGammaGen.sampling(data={'N':N,'k':1,'t':np.exp(0)},
chains=1,n_jobs=1,seed=1,thin=1,iter=N,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y']
fit=smGammaGeom.sampling(data={'N':N,'y':y},
chains=4,n_jobs=4,seed=2,thin=1,iter=500,warmup=200)
#control={'adapt_delta':0.99})
print(fit)
w=fit.extract()
#plt.plot(pg(0,w['k']),w['g']-w['t'],'.')
#np.max(np.abs(pg(0,w['k'])-w['g']+w['t']))
plt.plot(w['g'],w['t'],'.')
Explanation: Notation
$Y$ generic random variable
$U$ latent random variable
$V$ residual random variable
$X$ predictor
Parameters
$\eta$ and $\nu$ generic parameters
$\mu=E[Y]$ mean parameter
$\gamma=E[\log Y]$ geometric mean parameter
$\sigma^2=E[(Y-\mu)^2]$ standard deviation parameter
$Y=\alpha+U$ shift parameter
$Y= U/\theta$ scale parameter
$Y= U \lambda$ inverse-scale (rate) parameter
$Y=e^{-\tau} U$ log-scale parameter
$Y=U^\kappa$ exponent parameter
$Y=f(U,\rho)$ shape parameter
$Y=\alpha + \beta X$ linear predictor
$\psi$ digamma function
$\pi$ pi number
$\phi$ measurement scale
$\delta$ dirac function
$\zeta,\epsilon,\varepsilon,\vartheta,\iota,\xi,\varpi,\varrho,\varsigma,\varphi,\chi,\omega$
Gamma distribution
Parameters $\eta$ and $\nu$ are orthogonal if
$$\operatorname{E}_Y
\left[
\frac{\partial \log f(Y;\eta,\nu)}{\partial\eta \ \partial\nu}
\right]=0$$
The probability density function of Gamma distribution parametrized by shape parameter $\rho$ and scale parameter $\theta$ is
$$f(Y=y;\rho,\theta)=\frac{1}{\Gamma(\rho) \theta^\rho} y^{\rho - 1} e^{-\frac{y}{\theta}}$$
with Fisher information
$$I_{\rho \theta} = \begin{pmatrix}
\psi'(\rho) & \theta^{-1} \
\theta^{-1} & \rho \theta^{-2} \end{pmatrix} $$
Consider the parametrization in terms of the logarithm of the geometric mean $\gamma=E[\log Y]=\psi(\rho)+\log \theta$ and the log-scale $\tau=\log(\theta)$, where $\psi$ is the digamma function. The logarithm of the density function parametrized by $\gamma$ and $\tau$ is
$$\log f(Y=y;\gamma,\tau)=-\log \Gamma(\omega(\gamma-\tau)) -\tau \omega(\gamma-\tau) + (\omega(\gamma-\tau)-1)\log y- y e^{-\tau}$$
where $\omega$ denotes the inverse digamma function, and $\omega'(y)$ and $\omega''(y)$ denote its first and second derivatives with respect to $y$. Next, we compute the first derivative of the log-density with respect to $\gamma$:
$$\begin{align} \frac{\partial}{\partial\gamma}\log f(Y;\gamma,\tau) &= -\psi(\omega(\gamma-\tau)) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \
&= -(\gamma-\tau) \omega'(\gamma-\tau)-\tau \omega'(\gamma-\tau) + \omega'(\gamma-\tau) \log y \
&= (\log y - \gamma)\omega'(\gamma -\tau)\end{align}$$
Next we obtain derivative with respect to $\gamma$ and $\tau$:
$$\begin{align} \frac{\partial}{\partial\gamma \partial\tau}\log f(Y;\gamma,\tau) &= \frac{\partial}{\partial\tau}\left[(\log y - \gamma)\omega'(\gamma -\tau)\right]\
&= (\gamma-\log y)\omega''(\gamma-\tau)
\end{align}$$
Finally, compute the expectation
$$\begin{align} \operatorname{E}_Y
\left[
\frac{\partial \log f(Y;\tau,\gamma)}{\partial\tau\ \partial\gamma}
\right]&= \operatorname{E}\left[\omega''(\gamma-\tau)(\gamma-\log y)\right] \
&=\omega''(\gamma-\tau)(\gamma-\operatorname{E}[\log y])\
&=\omega''(\gamma-\tau)(\gamma-\gamma)\
&=0
\end{align}$$
Note that $\operatorname{E}[\log y]$ is the logarithm of geometric mean and hence $\operatorname{E}[\log y]=\gamma$
$$I_{\gamma \tau} = \begin{pmatrix}
\omega'(\gamma-\tau) & 0\
0 & \omega(\gamma-\tau)-\omega'(\gamma-\tau)\end{pmatrix} $$
$$I_{\rho \tau} = \begin{pmatrix}
\psi'(\rho) & 1 \
1 & \rho \end{pmatrix} $$
$$I_{\rho, \tau+\log \rho} = \begin{pmatrix}
\psi'(\rho)-1/\rho & 0 \
0 & \rho \end{pmatrix} $$
$$I_{\psi(\rho), \tau} = \begin{pmatrix}
\psi'(\rho)^{-1} & \psi'(\rho)^{-1} \
\psi'(\rho)^{-1} & \rho \end{pmatrix} $$
$$I_{\psi(\rho)+\tau, \tau} = \begin{pmatrix}
\psi'(\rho)^{-1} & 0 \
0& \rho-\psi'(\rho)^{-1} \end{pmatrix} $$
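As a numerical sanity check (a sketch, not part of the notebook): the covariance of the score vector $(\partial_\rho \log f,\ \partial_\tau \log f)$, estimated by Monte Carlo, should reproduce $I_{\rho \tau}$ above.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import digamma, polygamma

rho, theta = 3.0, 2.0
tau = np.log(theta)
y = gamma.rvs(rho, scale=theta, size=200_000,
              random_state=np.random.default_rng(0))

# Scores of log f = -log Gamma(rho) - rho*tau + (rho-1)*log y - y*exp(-tau)
s_rho = np.log(y) - digamma(rho) - tau
s_tau = y * np.exp(-tau) - rho
I_hat = np.cov(np.vstack([s_rho, s_tau]))
I_exact = np.array([[polygamma(1, rho), 1.0],
                    [1.0, rho]])
print(np.round(I_hat, 3))
```

With $2\times10^5$ draws the estimate matches $\begin{pmatrix}\psi'(\rho) & 1\\ 1 & \rho\end{pmatrix}$ to about two decimal places.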
End of explanation
model = """
data {
int<lower=0> N; //nr subjects
int<lower=0> M;
real gm;
real gs;
real t;
}generated quantities{
real g[N];
real<lower=0> y[N,M];
for (n in 1:N){
g[n]=normal_rng(gm,gs);
for (m in 1:M){
y[n,m]=gamma_rng(invdigammaR(g[n]-t),exp(t));
}}}"""
smGammaGen = pystan.StanModel(model_code=invgammafun+model)
N=10;M=20
fit=smGammaGen.sampling(data={'N':N,'M':M,'gm':5,'gs':2,'t':1},
chains=4,n_jobs=4,seed=1,thin=1,iter=30,warmup=0,algorithm="Fixed_param")
w=fit.extract()
y=w['y'][0,:,:]
print(y.shape)
model = """
data {
int<lower=0> N; //nr subjects
int<lower=0> M;
real<lower=0> y[N,M];
}parameters{
real g[N];
real gm;
real<lower=0> gs;
real t;
}model{
for (n in 1:N){
g[n]~normal(gm,gs);
for (m in 1:M){
y[n,m]~gamma(invdigammaR(g[n]-t),exp(t));
}}}"""
smGamma = pystan.StanModel(model_code=invgammafun+model)
fit=smGamma.sampling(data={'N':N,'M':M,'y':y},
chains=4,n_jobs=4,seed=2,thin=1,iter=1000,warmup=500)
print(fit)
%pylab inline
plt.plot(w['gm'])
Explanation: Hierarchical parameter recovery
End of explanation
1/H
Explanation: Weibull distribution
$$f(y)=\frac{\kappa}{y}\left(\frac{y}{\theta}\right)^{\kappa}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$I_{\theta \kappa} = \begin{pmatrix}
\frac{\kappa^2}{\theta^2} & -\frac{\psi(2)}{\theta}\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$E[\log Y]= \log \theta + \psi(1)/\kappa$
$E[Y^s]=\theta^s \Gamma(1+s/\kappa)$
$E[Y^\kappa]=\theta^\kappa $
$\mathrm{Var}[\log Y]=\psi'(1)/\kappa^2$
$E[(Y/\theta)^\kappa]=1$
$\mathrm{Var}[(Y/\theta)^\kappa]=1$
$E[\log (Y/\theta^\kappa)]= \psi(1)$
$E[\log^2 (Y/\theta^\kappa)]= \psi'(1)+\psi(1)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \psi(2)= \psi(1)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \psi'(2)+\psi(2)^2$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$\tau=\log \theta$
$r_{\tau \kappa}=\psi(2)/\sqrt{\psi'(1)+\psi(2)^2}=0.31$
This is an orthogonal parametrization:
$$\kappa= \frac{1}{\xi-H \tau}$$
$$\xi=\frac{1}{\kappa}+H \tau $$
$H=\frac{\psi(2)}{\psi'(1)+\psi(2)^2}=0.232$
$$I_{\tau \xi} = \frac{H}{(\xi-\tau)^{2}} \begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix} $$
$$I_{\tau \kappa} = \begin{pmatrix}
\kappa^2 & - \psi(2)\
. & \frac{1}{\kappa^2}\left(\psi'(1)+\psi(2)^2\right)\end{pmatrix} $$
$$I_{\tau,1/\kappa} =\kappa^{2} \begin{pmatrix}
1 & \psi(2)\
. & \psi'(1)+\psi(2)^2\end{pmatrix} $$
$$I_{\tau,1/\kappa-H\tau} =\kappa^{2} \begin{pmatrix}
1-\psi(2) H& 0\
. & \psi'(1)+\psi(2)^2\end{pmatrix} = \kappa^{2} \begin{pmatrix}
0.902 & 0\
. & 1.824\end{pmatrix}$$
$$I_{\tau,H\kappa} =\begin{pmatrix}
\kappa^{2} & \psi(2) H\
. & \kappa^{-2} \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)} =\kappa^{2} H^2 \begin{pmatrix}
1 & \psi(2) H\
. & \psi(2) H\end{pmatrix} $$
$$I_{\tau,1/(H\kappa)+\tau} =\kappa^{2} H^2 \begin{pmatrix}
1-\psi(2) H& 0\
. & \psi(2) H\end{pmatrix} = \kappa^{2} H^2
\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}= \kappa^{2} H^2 \begin{pmatrix}
0.902 & 0\
. & 0.098\end{pmatrix}$$
$$I_{\tau,\epsilon} =(\epsilon-\tau)^2\begin{pmatrix}
\left(1+\frac{\psi(2)^2}{\psi'(1)}\right)^{-1} &0 \
. & \left(1+\frac{\psi'(1)}{\psi(2)^2}\right)^{-1} \end{pmatrix}$$
Orthogonal from Cox and Reid (1987)
$\epsilon= \exp(\log \theta + \psi(2)/\kappa)=\exp(1/\kappa)\exp(E[\log Y])=\exp E[(Y/\theta)^\kappa \log Y]$
$\theta= \epsilon \exp(-\psi(2)/\kappa)$
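The constant $1/H=(\psi'(1)+\psi(2)^2)/\psi(2)$ derived here is the value 4.313501020391736 hard-coded in the Stan models of this notebook; a quick check:

```python
from scipy.special import digamma, polygamma

# H = psi(2) / (psi'(1) + psi(2)^2); its reciprocal is the hard-coded constant
H = digamma(2) / (polygamma(1, 1) + digamma(2) ** 2)
print(H, 1 / H)
```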
End of explanation
pg(1,1)
Explanation: $$J_{a/H,b}=\begin{pmatrix} H &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} H^{-2} A & H^{-1} B \ H^{-1} B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$H=B/A$
$$J^T \begin{pmatrix} A & B \ B & C \end{pmatrix} J= \begin{pmatrix} B^2/A &B^2/A \B^2/A & C\end{pmatrix}$$
$$J_{a+b,b}=\begin{pmatrix} 1 &-1 \0 & 1 \end{pmatrix}$$
$$J^T\begin{pmatrix} A &A \A & B \end{pmatrix} J= \begin{pmatrix} A &0 \0 & B-A\end{pmatrix}$$
$$J_{\log a,b}=\begin{pmatrix} e^a &0 \0 & 1 \end{pmatrix}$$
$$J_{\log a,b}^T \begin{pmatrix} e^{-2a} A & e^{-a} B \e^{-a} B & C \end{pmatrix} J_{\log a,b}= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$$J_{e^a,b}=\begin{pmatrix} 1/a &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^2 A & a B \ a B & C \end{pmatrix} J= \begin{pmatrix} A &B \B & C\end{pmatrix}$$
$$J_{a^{-1},b}=\begin{pmatrix} -a^{2} &0 \0 & 1 \end{pmatrix}$$
$$J^T \begin{pmatrix} a^{-4} A & a^{-2} B \ a^{-2} B & C \end{pmatrix} J= \begin{pmatrix} A &-B \-B & C\end{pmatrix}$$
End of explanation
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] y;
}parameters {
real<lower=0> k;
real<lower=0> t;
}model {
y~weibull(k,t);
}"""
smWeibull = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] y;
}parameters {
real t;
real e;
}model {
y~weibull(4.313501020391736/(e-t),exp(t));
}"""
smWeibullE = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N;
int<lower=0> M;
vector<lower=0>[M] y[N];
}parameters {
real lnk[N];
real lnt[N];
real km;real tm;
real<lower=0> ks;
real<lower=0> ts;
}model {
lnk~normal(km,ks);
lnt~normal(tm,ts);
for (n in 1:N)
y[n]~weibull(exp(lnk[n]),exp(lnt[n]));
}"""
#smWeibullH = pystan.StanModel(model_code=model)
model = """
data {
int<lower=0> N;
int<lower=0> M;
vector<lower=0>[M] y[N];
}parameters {
real<lower=0> lne[N];
real lnt[N];
real em;real tm;
real<lower=0> es;
real<lower=0> ts;
}model {
lne~normal(em,es);
lnt~normal(tm,ts);
for (n in 1:N)
y[n]~weibull(4.313501020391736/(lne[n]),exp(lnt[n]));
}"""
smWeibullEH = pystan.StanModel(model_code=model)
print(polygamma(0,1))
print(polygamma(0,2))
print(polygamma(1,1))
print(polygamma(1,2))
print(polygamma(1,1)**2)
print(polygamma(1,2)**2)
ts=[-10,-1,0,1,10]
k=1
for t in ts:
plt.subplot(2,3,k);k+=1
e=np.linspace(-10,10,101)
plt.plot(e,4.313501020391736/(e-t))
fit.get_adaptation_info()
from scipy import stats
def prs(x):
ts= x.rsplit('\n#')
out=[ts[1].rsplit('=')[1]]
out.extend(ts[3][:-2].rsplit(','))
return out
def computeConvergence(ms,data,reps=50):
from time import time
D=[[],[]]
R=[[],[]]
for sd in range(reps):
print(sd)
for m in range(len(ms)):
sm=ms[m]
t0=time()
try:
fit=sm.sampling(data=data,chains=6,n_jobs=6,
seed=sd+1,thin=1,iter=1000,warmup=500) # vary the seed across repetitions
D[m].append(time()-t0)
nfo=list(map(prs,fit.get_adaptation_info()) )
R[m].append(nfo)
except:
D[m].append(np.nan)
R[m].append(np.zeros((6,3))*np.nan)
D=np.array(D)
#R=np.float32(R)
print(np.mean(D,1))
return D, R
t=-1;e=1
k=4.313501020391736/(e-t)
print('k= ',k)
temp={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=100),'N':100}
#D,R=computeConvergence([smWeibull, smWeibullE])
Explanation: old stuff
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\frac{\kappa^2}{\theta^2}+J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{\theta}(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$\theta=e^\phi$
$J_{11}=\frac{\partial \theta}{\partial \phi}=e^\phi=\theta$
$J_{12}=\frac{\partial \theta}{\partial \gamma}=0$
$$\mathrm{Cov}(\gamma,\phi)=J_{21}J_{22}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\frac{\psi(2)}{J_{11}}(J_{21}J_{22}+J_{11}J_{22})$$
$$\mathrm{Cov}(\gamma,\phi)=J_{22}\left(J_{21}\frac{1}{\kappa^2}\left(\psi'(2)+\psi(2)^2+1\right)-\psi(2)(J_{21}/J_{11}+1)\right)\
= J_{21}J_{22}\frac{\psi(2)}{\kappa^2}\left(
\frac{\psi'(2)+\psi(2)^2+1}{\psi(2)}-\kappa^2\left(\frac{\partial \phi}{\partial \kappa}+e^{-\phi}\right)\right)\
= J_{21}J_{22}\psi(2)\left(
\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2\psi(2)}-e^{-\phi}-\frac{\partial \phi}{\partial \kappa}\right)
$$
$\gamma=-\phi- \frac{\psi'(2)+\psi(2)^2+1}{\kappa \psi(2)}$
$\frac{\partial \gamma}{\partial \kappa}= -\frac{\psi'(2)+\psi(2)^2+1}{\kappa^2 \psi(2)}$
$\kappa=-\frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi) \psi(2)}$
$\frac{\partial \kappa}{\partial \phi}= \frac{\psi'(2)+\psi(2)^2+1}{(\gamma+\phi)^2 \psi(2)}$
$$\mathrm{Cov}(\gamma,\phi)= J_{21}J_{22}\psi(2)\left(
\frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1}-e^{-\phi}- \frac{(\gamma+\phi)^2 \psi(2)}{\psi'(2)+\psi(2)^2+1} \right)
$$
$c \mathrm{Ei}(\frac{c}{\kappa})-e^\frac{c}{\kappa}(e^{-\phi}+\kappa)=k$
End of explanation
N=20
M=50
e=np.random.randn(N)*1+2
t=np.random.randn(N)*1+1
#t=-1;e=1
k=4.313501020391736/(np.abs(e-t))
#print('k= ',k)
data={'y':stats.weibull_min.rvs(k,0,np.exp(t),size=(M,N)).T,'N':N,'M':M}
ms=[smWeibullH, smWeibullEH]
D,R=computeConvergence(ms,data,reps=50)
D
Explanation: Hierarchical weibull
End of explanation
import pystan
ggcode='''functions{
//' Naive implementation of the generalized Gamma density.
//' @param x Value to evaluate density at.
//' @param k Shape parameter.
//' @param b Scale parameter.
//' @param q Tail parameter.
real gengamma_pdf(real x, real k, real b, real q) {
real d;
d = q/(b*tgamma(k))*(x/b)^(k*q-1) * exp(-(x/b)^q);
return d;
}
real gengamma_lpdf(real x, real k, real b, real q) {
real d;
d = log(q) - log(b) - lgamma(k) +
(k*q-1)*(log(x) - log(b)) - (x/b)^q;
return d;
}
real generalized_gamma_cdf(real x, real k, real b, real q) {
real d;
d = gamma_p(k, (x/b)^q);
return d;
}
real generalized_gamma_lcdf(real x, real k, real b, real q) {
real d;
d = log(generalized_gamma_cdf(x, k, b, q));
return d;
}}'''
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] yLT;
}parameters {
real k;
//real b;
real q;
}model {
for (n in 1:N)
yLT[n]~gengamma(exp(k),exp(0),exp(q));
}"""
smGengamma = pystan.StanModel(model_code=ggcode+model)
from scipy import stats
x=np.linspace(0,10,101)[1:]
#k,q,0,b
k=2;b=1;q=3;
plt.plot(x,stats.gengamma.pdf(x,k,q,0,b))
temp={'yLT':stats.gengamma.rvs(k,q,0,b,size=100),'N':100}
fit=smGengamma.sampling(data=temp,chains=6,n_jobs=6,
seed=1,thin=1,iter=10000,warmup=500)
print(fit)
w=fit.extract()
p=np.exp(w['k'])
#b=np.exp(w['b'])
H=(pg(1,p+1)+np.square(pg(0,p+1))+1/p)/pg(0,p+1)
e=H/np.exp(w['q'])+1
plt.figure()
plt.plot(p,e,'.')
np.corrcoef(p,e)[0,1]
from scipy.special import gamma, digamma,polygamma
plt.figure(figsize=(12,4))
g=np.log(b)+digamma(k)/q
c=(polygamma(1,k+1)+polygamma(0,k+1)**2+1/k)*q/polygamma(0,k+1)+np.log(b)
q1=g
q2=np.log(b)
q3=c
#*np.exp(-a)+q2
plt.subplot(1,3,1)
plt.plot(q1,q2,'.')
plt.title(np.corrcoef(q1,q2)[0,1])
plt.subplot(1,3,2)
plt.plot(q1,q3,'.')
plt.title(np.corrcoef(q1,q3)[0,1])
plt.ylim([-1000,1000])
plt.subplot(1,3,3)
plt.plot(q2,q3,'.')
plt.title(np.corrcoef(q2,q3)[0,1]);
plt.ylim([-50,50])
Explanation: Information matrix generalized gamma
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}\left(\frac{y}{\theta}\right)^{\kappa \rho}e^{-\left(\frac{y}{\theta} \right)^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y - \kappa \rho \log \theta -\left(\frac{y}{\theta} \right)^\kappa$$
$$I_{\rho \theta \kappa} = \begin{pmatrix} \psi'(\rho) & \frac{\kappa}{\theta} &- \frac{\psi(\rho)}{\kappa} \
. & \frac{\rho \kappa^2}{\theta^2} & -\frac{\rho}{\theta}\psi(\rho+1)\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$\rho (\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho})= \rho \psi'(\rho)+\rho \psi(\rho)^2 + 2\psi(\rho) +1$
$E[\log Y]= \log \theta + \psi(\rho)/\kappa$
$E[Y^s]=\theta^s \Gamma(\rho+s/\kappa)/\Gamma(\rho)$
$E[Y^\kappa]=\theta^\kappa \rho$
$E[Y^\kappa \log Y ]=\theta^\kappa \rho (\log \theta + \psi(\rho+1)/\kappa)= \theta^\kappa (\rho \log \theta + \rho \psi(\rho)/\kappa+1/\kappa)$
$E[\log^2 Y]= \log^2 \theta + 2 \log \theta \psi(\rho)/\kappa+(\psi'(\rho)+\psi(\rho)^2)/\kappa^2$
$E[Y^\kappa \log^2 Y]= \theta^\kappa \rho (\log^2 \theta + 2 \log \theta \psi(\rho+1)/\kappa+(\psi'(\rho+1)+\psi(\rho+1)^2)/\kappa^2)$
$E[Y^{2\kappa} \log^2 Y]= \theta^{2\kappa} (\rho+1) (\log^2 \theta + 2 \log \theta \psi(\rho+2)/\kappa+(\psi'(\rho+2)+\psi(\rho+2)^2)/\kappa^2)$
$\mathrm{Var}[\log Y]=\psi'(\rho)/\kappa^2$
$E[(Y/\theta)^\kappa]=\rho$
$\mathrm{Var}[(Y/\theta)^\kappa]=\rho$
$E[\log (Y/\theta)^\kappa]= \psi(\rho)$
$E[\log^2 (Y/\theta)^\kappa]= \psi'(\rho)+\psi(\rho)^2$
$E[(Y/\theta)^\kappa \log (Y/\theta)^\kappa ]= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[(Y/\theta)^\kappa \log^2(Y/\theta)^\kappa ]= \rho (\psi'(\rho+1)+\psi(\rho+1)^2)$
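These identities can be spot-checked numerically (a sketch; I assume scipy's `gengamma(a, c, scale)` density $c\,x^{ca-1}e^{-x^c}/\Gamma(a)$ matches $(\rho,\kappa,\theta)$ here):

```python
import numpy as np
from scipy.stats import gengamma
from scipy.special import digamma

rho, kappa, theta = 2.5, 1.7, 3.0
# Check E[log Y] = log(theta) + psi(rho)/kappa by numerical integration
elog = gengamma.expect(np.log, args=(rho, kappa), scale=theta)
print(elog, np.log(theta) + digamma(rho) / kappa)
```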
$$I_{\rho \tau \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \frac{\psi(\rho)}{\kappa} \
. & \rho \kappa^2 & -\rho\psi(\rho+1)\
. & . & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho, \tau, \log \kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \psi(\rho) \
. & \rho \kappa^2 & -\kappa\rho\psi(\rho+1)\
. & . & \rho \left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \tau,1/\kappa} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho) \
. & \rho \kappa^2 & -\kappa^2 \rho A\
. & . & \kappa^2 \rho B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)} = \begin{pmatrix} \psi'(\rho) & \kappa &- \kappa\psi(\rho)A/B \
. & \rho \kappa^2 & -\kappa^2 \rho A^2/B\
. & . & \kappa^2 \rho A^2/B\end{pmatrix} $$
$$I_{\rho \tau,B/(A \kappa)-\tau} = \begin{pmatrix} \psi'(\rho) & \kappa-\kappa\psi(\rho)A/B &- \kappa\psi(\rho)A/B \
. & \rho \kappa^2 & 0\
. & . & \kappa^2 \rho A^2/B-\rho \kappa^2\end{pmatrix} $$
$A=\psi(\rho+1)$
$B=\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)$
$\gamma=\tau+\psi(\rho)/\kappa$
$\rho=\omega(\kappa(\gamma-\tau))=\omega$
$$J=\begin{pmatrix}\kappa \omega' &-\kappa \omega' & (\gamma-\tau)\omega'\ 0&1 &0 \ 0& 0& 1 \end{pmatrix}$$
$$I_{\gamma \tau \kappa} = J^T\begin{pmatrix} \frac{1}{\omega'} & \kappa &-(\gamma-\tau) \
. & \omega \kappa^2 & -(\gamma-\tau)\omega-1\
. & . & \frac{R}{\kappa^2}\end{pmatrix} J $$
$$I_{\gamma \tau \kappa} = \begin{pmatrix} \kappa^2\omega' &0&0 \
. & \kappa^2(\omega -\omega')& (\gamma-\tau)(\kappa\omega'-\omega)-1\
. & . & \frac{R}{\kappa^2}-(\gamma-\tau)^2\omega'\end{pmatrix} $$
with $R=\frac{\omega}{\omega'} +\omega \kappa^2 (\gamma-\tau)^2 + 2\kappa (\gamma-\tau)+1$
Simplified Gamma
$$f(y;\rho)=\frac{ y^{\rho-1} e^{-y}}{\Gamma(\rho)}$$
$$\log f(y;\rho)=\rho \log y -\log y -y-\log \Gamma(\rho)$$
$\Gamma(z+1) = \int_0^\infty x^{z} e^{-x}\, dx$
$\Gamma(z+1)/\Gamma(z)=z$
$\frac{d^n}{dx^n}\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} (\ln t)^n \, dt$
$\psi(x)=\log(\Gamma(x))'=\Gamma'(x)/\Gamma(x)$
$E[Y]= \int_0^\infty y^{\rho} e^{-y}\, dy / \Gamma(\rho)= \Gamma(\rho+1)/ \Gamma(\rho)=\rho$
$E[Y^s]=\Gamma(\rho+s)/ \Gamma(\rho)$
$\mathrm{Var}[Y]=E[Y^2]-E[Y]^2=\rho(\rho+1)-\rho^2=\rho$
$E[\log Y]=\Gamma'(\rho)/\Gamma(\rho)=\psi(\rho)$
$E[Y \log Y]=\Gamma'(\rho+1)/\Gamma(\rho)= \rho \psi(\rho+1)= \rho \psi(\rho)+1$
$E[1/Y]= \Gamma(\rho-1)/ \Gamma(\rho)=1/(\rho-1)$
$\mathrm{Var}[1/Y]=E[Y^2]-E[Y]^2=\frac{1}{(\rho-2)(\rho-1)^2}$
$E[\log^2 Y]=\Gamma''(\rho)/\Gamma(\rho)=\psi'(\rho)+\psi(\rho)^2$
use $\psi'(x)=(\Gamma'(x)/\Gamma(x))'=\Gamma''(x)/\Gamma(x)-(\Gamma'(x)/\Gamma(x))^2$
$E[Y \log^2 Y]=\Gamma''(\rho+1)/\Gamma(\rho)=\rho(\psi'(\rho+1)+\psi(\rho+1)^2)=\rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)$
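Two of the identities above, checked with `scipy.stats.gamma.expect` (a numerical sketch, not from the notebook):

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import digamma, polygamma

rho = 3.5
# E[Y log Y] = rho*psi(rho) + 1
e1 = gamma.expect(lambda y: y * np.log(y), args=(rho,))
# E[log^2 Y] = psi'(rho) + psi(rho)^2
e2 = gamma.expect(lambda y: np.log(y) ** 2, args=(rho,))
print(e1, rho * digamma(rho) + 1)
print(e2, polygamma(1, rho) + digamma(rho) ** 2)
```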
Gengamma with $\theta=1$
$$f(y)=\frac{\kappa}{y \Gamma(\rho)}y^{\kappa \rho} e^{-y^\kappa}$$
$$\log f(y)=\log \kappa- \log y -\log \Gamma(\rho) +\kappa \rho \log y -y^\kappa$$
$$I_{\rho \kappa} = \begin{pmatrix} \psi'(\rho) & - \frac{\psi(\rho)}{\kappa} \
. & \frac{\rho}{\kappa^2}\left(\psi'(\rho+1)+\psi(\rho+1)^2+\frac{1}{\rho}\right)\end{pmatrix} $$
$$I_{\rho \log\kappa} = \begin{pmatrix} \psi'(\rho) & - \psi(\rho) \
. & \rho\psi'(\rho)+\rho\psi(\rho)^2+2\psi(\rho)+1\end{pmatrix} $$
$\gamma=\psi(\rho)/\kappa$
$\rho=\omega(\gamma \kappa)$
$1=d \psi(\omega(\gamma))/d \gamma$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \
. & \frac{\omega(\gamma\kappa)}{\kappa\omega'(\gamma\kappa)}+ \omega(\gamma\kappa)\gamma^2\kappa+2\gamma+\frac{1}{\kappa^2}\end{pmatrix} $$
$$I_{\gamma \kappa} = \begin{pmatrix} \kappa^2 \omega(\gamma\kappa) & 0 \
. & \kappa^{-1}E[Y\log^2 Y]+\frac{1}{\kappa^2}\end{pmatrix} $$
TODO check the last result by transformation of $I_{\rho \kappa}$
orthogonal with
End of explanation
import pystan
model = """
data {
int<lower=0> N; //nr subjects
vector<lower=0>[N] yLT;
}parameters {
real a;
real b;
}model {
for (n in 1:N)
yLT[n]~beta(exp(a),exp(b));
}"""
smBeta = pystan.StanModel(model_code=model)
from scipy import stats
x=np.linspace(0,1,101)[1:]
plt.plot(x,stats.beta.pdf(x,4,15,0,1))
temp={'yLT':stats.beta.rvs(4,15,0,1,size=100),'N':100}
fit=smBeta.sampling(data=temp,chains=6,n_jobs=6,
seed=1,thin=4,iter=55000,warmup=5000)
print(fit)
w=fit.extract()
a=np.exp(w['a'])
b=np.exp(w['b'])
from scipy.special import gamma, digamma,polygamma,beta
plt.figure(figsize=(12,12))
gA=digamma(a)-digamma(a+b)
gB=digamma(b)-digamma(a+b)
tA=polygamma(1,a)-polygamma(1,a+b)
var=a*b/np.square(a+b)/(a+b+1)
ex=a/(a+b)
q1=ex
q2=var
#q2=g
#k=np.exp(a)
#l=np.exp(b)
#q1=np.log(np.square(k)*digamma(2)+digamma(1))/(2*digamma(2))-g/(polygamma(1,1)+1)
plt.plot(q1,q2,'.')
#plt.ylim([0,1])
#plt.xlim([0,1])
np.corrcoef(q1,q2)[0,1]
Explanation: Beta distribution
Parameters $\alpha$ and $\beta$ are orthogonal if
$$\operatorname{E}_X
\left[
\frac{\partial \log f(X;\alpha,\beta)}{\partial\alpha \ \partial\beta}
\right]=0$$
The probability density function of Beta distribution parametrized by shape parameters $\alpha$ and $\beta$ is
$$f(X=x;\alpha,\beta)=\frac{ x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$$
Consider parametrization in terms of logarithm of geometric mean $E[\log X]=\gamma=\psi(\alpha)-\psi(\alpha+\beta)$ and the logarithm of geometric mean of $1-X$: $E[\log (1-X)]=\phi=\psi(\beta)-\psi(\alpha+\beta)$
Then the Fisher information matrix of the distribution parametrized by the shape parameters is
$$I_{\alpha,\beta}=\begin{pmatrix}\psi'(\alpha)-\psi'(\alpha+\beta) & -\psi'(\alpha+\beta)\
-\psi'(\alpha+\beta) & \psi'(\beta)-\psi'(\alpha+\beta)
\end{pmatrix}$$
Fisher information matrix when parametrized by $\gamma$ and $\phi$ is
$$I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J$$
Where $J$ is the Jacobian matrix defined as
$$J=\begin{pmatrix}\frac{\partial \alpha}{\partial \gamma} & \frac{\partial \alpha}{\partial \phi}\
\frac{\partial \beta}{\partial \gamma} & \frac{\partial \beta}{\partial \phi}
\end{pmatrix}$$
Note that $I_{\alpha,\beta}$ can be written as:
$$I_{\alpha,\beta}=\begin{pmatrix}\frac{\partial \gamma}{\partial \alpha} & \frac{\partial \phi}{\partial \alpha} \ \frac{\partial \gamma}{\partial \beta} & \frac{\partial \phi}{\partial \beta}
\end{pmatrix}$$
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}\psi'(\alpha)+J_{21}J_{22}\psi'(\beta)-\psi'(\alpha+\beta)(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$\gamma=\psi(\alpha)-\psi(\alpha+\beta)$
$\phi=\psi(\beta)-\psi(\alpha+\beta)$
$\gamma-\phi=\psi(\alpha)-\psi(\beta)$
$\alpha=\omega(\psi(\beta)-\phi)-\beta$
$\beta=\omega(\psi(\alpha)-\gamma)-\alpha$
$$\gamma=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \alpha}=\frac{\partial \log \Gamma(\alpha)}{\partial \alpha}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \alpha}$$
$$\phi=\frac{\partial \log \mathrm{B}(\alpha,\beta)}{\partial \beta}=\frac{\partial \log \Gamma(\beta)}{\partial \beta}-\frac{\partial \log \Gamma(\alpha+\beta)}{\partial \beta}$$
$\psi'(\alpha)=\psi'(\alpha+\beta)\frac{\partial \beta}{\partial \alpha} -\frac{1}{J_{11}}$
$\psi'(\beta)=\psi'(\alpha+\beta)\frac{\partial \alpha}{\partial \beta} -\frac{1}{J_{22}}$
$I_{\alpha,\beta}=\begin{pmatrix}A+C & C \ C & B+C
\end{pmatrix}$
$J^{-1}=\begin{pmatrix}A+C & C \ C & B+C \end{pmatrix}$
$I_{\gamma,\phi}=J^\mathrm{T} I_{\alpha,\beta} J= J^\mathrm{T} J^{-1} J=J$
$$J=\frac{1}{AB+BC+AC}\begin{pmatrix}B+C & -C \ -C & A+C \end{pmatrix}
= \begin{pmatrix}\frac{1}{A+\frac{BC}{B+C}} & -\frac{1}{A+B+\frac{AB}{C}} \ -\frac{1}{A+B+\frac{AB}{C}} & \frac{1}{B+\frac{AC}{A+C}} \end{pmatrix}$$
$$J_{11}=(A+C)^{-1}$$
$$J_{12}=J_{21}= C^{-1}$$
$$J_{22}=-(B+C)^{-1}$$
$$\frac{J_{11}J_{22}}{J_{12}J_{21}}=1$$
$$\frac{-C^2}{(A+C)(B+C)}=1$$
$$\mathrm{Cov}(\gamma,\phi)=J_{12}J_{11}A+J_{21}J_{22}B+C(J_{12}J_{11}+J_{21}J_{22}+J_{21}J_{12}+J_{11}J_{22})$$
$$\mathrm{Cov}(\gamma,\phi)=\frac{A}{C(A+C)}-\frac{B}{C(B+C)}+\frac{1}{A+C}-\frac{1}{B+C} +\frac{1}{C} +\frac{1}{C}\frac{-C^2}{(A+C)(B+C)}
= \frac{1}{C}\left(\frac{A}{A+C}-\frac{B}{B+C}+\frac{C}{A+C}-\frac{C}{B+C} +1 +1\right)
= \frac{2}{C}$$
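A Monte Carlo spot check of $I_{\alpha,\beta}$ (a sketch, not from the notebook): the covariance of the scores $\log X-\gamma$ and $\log(1-X)-\phi$ should match the matrix above.

```python
import numpy as np
from scipy.stats import beta as beta_dist
from scipy.special import digamma, polygamma

a, b = 4.0, 15.0
x = beta_dist.rvs(a, b, size=200_000,
                  random_state=np.random.default_rng(1))

# Scores with respect to alpha and beta
s_a = np.log(x) - (digamma(a) - digamma(a + b))
s_b = np.log1p(-x) - (digamma(b) - digamma(a + b))
I_hat = np.cov(np.vstack([s_a, s_b]))
c = polygamma(1, a + b)
I_exact = np.array([[polygamma(1, a) - c, -c],
                    [-c, polygamma(1, b) - c]])
print(np.round(I_hat, 4))
```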
End of explanation
1/(1+pg(0,2)**2/pg(1,1))
pg(0,2)
from scipy.special import gamma
gamma(1)
Explanation: Wald distribution Fisher information
$$f(x)=\frac{\alpha}{\sigma \sqrt{2 \pi x^3}}\exp\left(-\frac{(\nu x-\alpha)^2}{2 \sigma^2 x}\right)$$
$E[X]=\alpha/\nu$
$E[1/X]=\nu/\alpha +\sigma^2/\alpha^2$
$$I_{\alpha \sigma \nu} = \begin{pmatrix} \frac{2}{\alpha^2}+\frac{\nu}{\sigma^2 \alpha} & \frac{2}{\sigma \alpha} & \frac{1}{\sigma}\
. & \frac{1}{\sigma^2} &0\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$
$$I_{\log \alpha,\log \sigma \nu} = \begin{pmatrix} 2 \sigma+\frac{\nu \alpha}{\sigma} & 2 & \frac{1}{\sigma}\
. & 1 &0\
. & . & \frac{\alpha}{\sigma^2 \nu}\end{pmatrix} $$
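This is the inverse Gaussian in a nonstandard parametrization; a sketch checking the two moments above via scipy's `invgauss`, assuming the mapping $\mu=\alpha/\nu$, $\lambda=\alpha^2/\sigma^2$ to the standard $IG(\mu,\lambda)$ form:

```python
import numpy as np
from scipy.stats import invgauss

alpha, sigma, nu = 2.0, 0.5, 1.5
mu = alpha / nu                 # E[X]
lam = alpha ** 2 / sigma ** 2   # inverse-Gaussian shape lambda
# scipy's invgauss shape parameter is mu/lam with scale=lam
x = invgauss.rvs(mu / lam, scale=lam, size=200_000,
                 random_state=np.random.default_rng(2))
print(x.mean(), mu)
print((1 / x).mean(), nu / alpha + sigma ** 2 / alpha ** 2)
```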
End of explanation |
8,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian interpretation of medical tests
This notebooks explores several problems related to interpreting the results of medical tests.
Copyright 2016 Allen Downey
MIT License
Step3: Medical tests
Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?
To answer this question, we need to know
Step4: Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!)
Step5: And here's how we update the Test object with a positive outcome
Step8: The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
Uncertainty about t
So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of t with certainty, but we have reason to believe that t is either 0.2 or 0.4 with equal probability.
Again, we would like to know the probability that a patient who tests positive actually has the disease. As we did with the Red Die problem, we will consider several scenarios
Step9: To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.
We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.
Here's how we create the MetaTest for the scenario we described
Step10: At the top level, there are two tests, with different values of t. Initially, they are equally likely.
When we update the MetaTest, it updates the embedded Test objects and then the MetaTest itself.
Step11: Here are the results.
Step13: Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
This MetaTest object represents what we should believe about t after seeing the test, as well as what we should believe about the probability that the patient is sick.
Marginal distributions
To compute the probability that the patient is sick, we have to compute the marginal probabilities of sick and notsick, averaging over the possible values of t. The following function computes this distribution
Step14: Here's the posterior predictive distribution
Step18: After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
Two patients
Now suppose you test two patients and they both test positive. What is the probability that they are both sick?
To answer that, I define a few more functions to work with Metatests
Step19: MakeMetaTest makes a MetaTest object starting with a given PMF of t.
Marginal extracts the PMF of t from a MetaTest.
Conditional takes a specified value for t and returns the PMF of sick and notsick conditioned on t.
I'll test these functions using the same parameters from above
Step20: Here are the results
Step21: Same as before. Now we can extract the posterior distribution of t.
Step22: Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.
And we can extract the conditional distributions for the patient
Step23: Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions
Step24: At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.
Now, to compute the probability that both patients are sick, we have to know the distribution of t for both patients. And that depends on details of the scenario.
In Scenario A, the reason we are uncertain about t is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
So the value of t for each patient is an independent choice from pmf_t; that is, if we learn something about t for one patient, that tells us nothing about t for other patients.
So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.
To compute the probability that both patients are sick, we can convolve the two distributions.
Step25: Then we can compute the posterior marginal distribution of sick/notsick for the two patients
Step26: So in Scenario A the probability that both patients are sick is 1/16.
As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient
Step27: We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients
Step28: And here's a function that runs the simulation for a given number of iterations
Step29: As we increase iters, the probability of (True, True) converges on 1/16, which is what we got from the analysis.
Good so far!
Scenario B
In Scenario B, we have reason to believe the t is the same for all patients, but we are not sure what it is. So each time we see a positive test, we get some information about t for all patients.
The first time we see positive test we do the same update as in Scenario A
Step30: And the marginal distribution of sick/notsick is the same
Step31: Now suppose the second patient arrives. We need a new MetaTest that contains the updated information about the test, but no information about the patient other than the prior probability of being sick, p
Step32: Now we can update this MetaTest with the result from the second test
Step33: This distribution contains updated information about the test, based on two positive outcomes, and updated information about a patient who has tested positive (once).
After seeing two patients with positive tests, the probability that t=0.4 has increased to 25/34, around 74%.
For either patient, the probability of being sick is given by the marginal distribution from metatest2
Step34: After two tests, the probability that the patient is sick is slightly lower than after one (4/17 is about 23.5%, compared to 25%). That's because the second positive test increases our belief that the false positive rate is high (t=0.4), which decreases our belief that either patient is sick.
Now, to compute the probability that both are sick, we can't just convolve the posterior marginal distribution with itself, as we did in Scenario A, because the selection of t is not independent for the two patients. Instead, we have to make a weighted mixture of conditional distributions.
If we know t=t1, we can compute the joint distribution for the two patients
Step35: If we know that t=t1, the probability of sicksick is 0.111. And for t=t2
Step36: If we know that t=t2, the probability of sicksick is 0.04.
The overall probability of sicksick is the weighted average of these probabilities
Step37: 1/17 is about 5.88%, somewhat smaller than in Scenario A (1/16, which is 6.25%).
To compute the probabilities for all four outcomes, I'll make a Metapmf that contains the two conditional distributions.
Step38: And finally we can use MakeMixture to compute the weighted averages of the posterior probabilities
Step39: To confirm that this result is correct, I'll use the simulation again with a different generator
Step40: The difference between Scenario A and Scenario B is the line I commented out. In Scenario B, we generate t once and it applies to both patients. | Python Code:
from __future__ import print_function, division
from thinkbayes2 import Pmf, Suite
from fractions import Fraction
Explanation: Bayesian interpretation of medical tests
This notebook explores several problems related to interpreting the results of medical tests.
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
class Test(Suite):
    """Represents beliefs about a patient based on a medical test."""
def __init__(self, p, s, t, label='Test'):
# initialize the prior probabilities
d = dict(sick=p, notsick=1-p)
super(Test, self).__init__(d, label)
# store the parameters
self.p = p
self.s = s
self.t = t
# make a nested dictionary to compute likelihoods
self.likelihood = dict(pos=dict(sick=s, notsick=t),
neg=dict(sick=1-s, notsick=1-t))
def Likelihood(self, data, hypo):
        """data: 'pos' or 'neg'
        hypo: 'sick' or 'notsick'
        """
return self.likelihood[data][hypo]
Explanation: Medical tests
Suppose we test a patient to see if they have a disease, and the test comes back positive. What is the probability that the patient is actually sick (that is, has the disease)?
To answer this question, we need to know:
The prevalence of the disease in the population the patient is from. Let's assume the patient is identified as a member of a population where the known prevalence is p.
The sensitivity of the test, s, which is the probability of a positive test if the patient is sick.
The false positive rate of the test, t, which is the probability of a positive test if the patient is not sick.
Given these parameters, we can compute the probability that the patient is sick, given a positive test.
Test class
To do that, I'll define a Test class that extends Suite, so it inherits Update and provides Likelihood.
The instance variables of Test are:
p, s, and t: Copies of the parameters.
d: a dictionary that maps from hypotheses to their probabilities. The hypotheses are the strings sick and notsick.
likelihood: a dictionary that encodes the likelihood of the possible data values pos and neg under the hypotheses.
End of explanation
p = Fraction(1, 10) # prevalence
s = Fraction(9, 10) # sensitivity
t = Fraction(3, 10) # false positive rate
test = Test(p, s, t)
test.Print()
Explanation: Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!):
End of explanation
test.Update('pos')
test.Print()
Explanation: And here's how we update the Test object with a positive outcome:
End of explanation
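The same 0.25 can be checked directly with Bayes' rule, without the Suite machinery (a quick sanity check using the same parameter values):

```python
from fractions import Fraction

# P(sick | pos) = P(sick) P(pos | sick) / P(pos)
p = Fraction(1, 10)  # prevalence
s = Fraction(9, 10)  # sensitivity
t = Fraction(3, 10)  # false positive rate

posterior = p * s / (p * s + (1 - p) * t)
print(posterior)  # 1/4
```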
class MetaTest(Suite):
    """Represents a set of tests with different values of `t`."""
def Likelihood(self, data, hypo):
        """data: 'pos' or 'neg'
        hypo: Test object
        """
# the return value from `Update` is the total probability of the
# data for a hypothetical value of `t`
return hypo.Update(data)
Explanation: The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
Uncertainty about t
So far, this is basic Bayesian inference. Now let's add a wrinkle. Suppose that we don't know the value of t with certainty, but we have reason to believe that t is either 0.2 or 0.4 with equal probability.
Again, we would like to know the probability that a patient who tests positive actually has the disease. As we did with the Red Die problem, we will consider several scenarios:
Scenario A: The patients are drawn at random from the relevant population, and the reason we are uncertain about t is that either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
Scenario B: As in Scenario A, the patients are drawn at random from the relevant population, but the reason we are uncertain about t is that previous studies of the test have been contradictory. That is, there is only one version of the test, and we have reason to believe that t is the same for all groups, but we are not sure what the correct value of t is.
Scenario C: As in Scenario A, there are two versions of the test or two groups of people. But now the patients are being filtered so we only see the patients who tested positive and we don't know how many patients tested negative. For example, suppose you are a specialist and patients are only referred to you after they test positive.
Scenario D: As in Scenario B, we have reason to think that t is the same for all patients, and as in Scenario C, we only see patients who test positive and don't know how many tested negative.
Scenario A
We can represent this scenario with a hierarchical model, where the levels of the hierarchy are:
At the top level, the possible values of t and their probabilities.
At the bottom level, the probability that the patient is sick or not, conditioned on t.
To represent the hierarchy, I'll define a MetaTest, which is a Suite that contains Test objects with different values of t as hypotheses.
End of explanation
q = Fraction(1, 2)
t1 = Fraction(2, 10)
t2 = Fraction(4, 10)
test1 = Test(p, s, t1, 'Test(t=0.2)')
test2 = Test(p, s, t2, 'Test(t=0.4)')
metatest = MetaTest({test1:q, test2:1-q})
metatest.Print()
Explanation: To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.
We use the normalizing constants from the bottom level of the hierarchy as the likelihoods at the top level.
Here's how we create the MetaTest for the scenario we described:
End of explanation
metatest.Update('pos')
Explanation: At the top level, there are two tests, with different values of t. Initially, they are equally likely.
When we update the MetaTest, it updates the embedded Test objects and then the MetaTest itself.
End of explanation
metatest.Print()
Explanation: Here are the results.
End of explanation
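This posterior over t can be verified by hand: the likelihood of a positive result under a hypothetical t is p·s + (1−p)·t (a sketch with the same numbers as above):

```python
from fractions import Fraction

p, s = Fraction(1, 10), Fraction(9, 10)
t1, t2 = Fraction(2, 10), Fraction(4, 10)

# total probability of 'pos' under each hypothetical false positive rate
lik = {t: p * s + (1 - p) * t for t in (t1, t2)}
total = sum(lik.values())
posterior = {t: L / total for t, L in lik.items()}
print(posterior)  # t=1/5 -> 3/8, t=2/5 -> 5/8
```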
def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.

    Args:
      metapmf: Pmf that maps from Pmfs to probs.
      label: string label for the new Pmf.

    Returns: Pmf object.
    """
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for x, p2 in pmf.Items():
mix.Incr(x, p1 * p2)
return mix
Explanation: Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
This MetaTest object represents what we should believe about t after seeing the test, as well as what we should believe about the probability that the patient is sick.
Marginal distributions
To compute the probability that the patient is sick, we have to compute the marginal probabilities of sick and notsick, averaging over the possible values of t. The following function computes this distribution:
End of explanation
predictive = MakeMixture(metatest)
predictive.Print()
Explanation: Here's the posterior predictive distribution:
End of explanation
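The 0.25 can also be reproduced as a weighted average over t: P(sick|pos) = Σ_t P(t|pos)·P(sick|pos,t), where each conditional term comes from Bayes' rule at a fixed t (hand computation with the values derived above):

```python
from fractions import Fraction

p, s = Fraction(1, 10), Fraction(9, 10)
# posterior over t after one positive test
post_t = {Fraction(2, 10): Fraction(3, 8), Fraction(4, 10): Fraction(5, 8)}

def p_sick_given_pos(t):
    return p * s / (p * s + (1 - p) * t)

p_sick = sum(q * p_sick_given_pos(t) for t, q in post_t.items())
print(p_sick)  # 1/4
```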
def MakeMetaTest(p, s, pmf_t):
    """Makes a MetaTest object with the given parameters.

    p: prevalence
    s: sensitivity
    pmf_t: Pmf of possible values for `t`
    """
tests = {}
for t, q in pmf_t.Items():
label = 'Test(t=%s)' % str(t)
tests[Test(p, s, t, label)] = q
return MetaTest(tests)
def Marginal(metatest):
    """Extracts the marginal distribution of t."""
marginal = Pmf()
for test, prob in metatest.Items():
marginal[test.t] = prob
return marginal
def Conditional(metatest, t):
    """Extracts the distribution of sick/notsick conditioned on t."""
for test, prob in metatest.Items():
if test.t == t:
return test
Explanation: After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
Two patients
Now suppose you test two patients and they both test positive. What is the probability that they are both sick?
To answer that, I define a few more functions to work with Metatests:
End of explanation
pmf_t = Pmf({t1:q, t2:1-q})
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Print()
Explanation: MakeMetaTest makes a MetaTest object starting with a given PMF of t.
Marginal extracts the PMF of t from a MetaTest.
Conditional takes a specified value for t and returns the PMF of sick and notsick conditioned on t.
I'll test these functions using the same parameters from above:
End of explanation
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Update('pos')
metatest.Print()
Explanation: Here are the results
End of explanation
Marginal(metatest).Print()
Explanation: Same as before. Now we can extract the posterior distribution of t.
End of explanation
cond1 = Conditional(metatest, t1)
cond1.Print()
cond2 = Conditional(metatest, t2)
cond2.Print()
Explanation: Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.
And we can extract the conditional distributions for the patient:
End of explanation
MakeMixture(metatest).Print()
Explanation: Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions:
End of explanation
convolution = metatest + metatest
convolution.Print()
Explanation: At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.
Now, to compute the probability that both patients are sick, we have to know the distribution of t for both patients. And that depends on details of the scenario.
In Scenario A, the reason we are uncertain about t is either (1) there are two versions of the test, with different false positive rates, and we don't know which test was used, or (2) there are two groups of people, the false positive rate is different for different groups, and we don't know which group the patient is in.
So the value of t for each patient is an independent choice from pmf_t; that is, if we learn something about t for one patient, that tells us nothing about t for other patients.
So if we consider two patients who have tested positive, the MetaTest we just computed represents our belief about each of the two patients independently.
To compute the probability that both patients are sick, we can convolve the two distributions.
End of explanation
marginal = MakeMixture(metatest+metatest)
marginal.Print()
Explanation: Then we can compute the posterior marginal distribution of sick/notsick for the two patients:
End of explanation
marginal = MakeMixture(metatest) + MakeMixture(metatest)
marginal.Print()
Explanation: So in Scenario A the probability that both patients are sick is 1/16.
As an aside, we could have computed the marginal distributions first and then convolved them, which is computationally more efficient:
End of explanation
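Because t is chosen independently for each patient in Scenario A, the full joint over both patients is just the product of two copies of the marginal. A small enumeration (the concatenated string keys mimic the Pmf addition above):

```python
from fractions import Fraction
from itertools import product

marginal = {'sick': Fraction(1, 4), 'notsick': Fraction(3, 4)}

joint = {}
for (a, pa), (b, pb) in product(marginal.items(), repeat=2):
    joint[a + b] = joint.get(a + b, Fraction(0)) + pa * pb
print(joint['sicksick'])  # 1/16
```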
from random import random
def flip(p):
return random() < p
def generate_pair_A(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
Explanation: We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients:
End of explanation
def run_simulation(generator, iters=100000):
pmf_t = Pmf([0.2, 0.4])
pair_iterator = generator(0.1, 0.9, pmf_t)
outcomes = Pmf()
for i in range(iters):
test1, test2, sick1, sick2 = next(pair_iterator)
if test1 and test2:
outcomes[sick1, sick2] += 1
outcomes.Normalize()
return outcomes
outcomes = run_simulation(generate_pair_A)
outcomes.Print()
Explanation: And here's a function that runs the simulation for a given number of iterations:
End of explanation
metatest1 = MakeMetaTest(p, s, pmf_t)
metatest1.Update('pos')
metatest1.Print()
Explanation: As we increase iters, the probability of (True, True) converges on 1/16, which is what we got from the analysis.
Good so far!
Scenario B
In Scenario B, we have reason to believe the t is the same for all patients, but we are not sure what it is. So each time we see a positive test, we get some information about t for all patients.
The first time we see positive test we do the same update as in Scenario A:
End of explanation
marginal = MakeMixture(metatest1)
marginal.Print()
Explanation: And the marginal distribution of sick/notsick is the same:
End of explanation
metatest2 = MakeMetaTest(p, s, Marginal(metatest1))
metatest2.Print()
Explanation: Now suppose the second patient arrives. We need a new MetaTest that contains the updated information about the test, but no information about the patient other than the prior probability of being sick, p:
End of explanation
metatest2.Update('pos')
metatest2.Print()
Explanation: Now we can update this MetaTest with the result from the second test:
End of explanation
predictive = MakeMixture(metatest2)
predictive.Print()
Explanation: This distribution contains updated information about the test, based on two positive outcomes, and updated information about a patient who has tested positive (once).
After seeing two patients with positive tests, the probability that t=0.4 has increased to 25/34, around 74%.
For either patient, the probability of being sick is given by the marginal distribution from metatest2:
End of explanation
cond_t1 = Conditional(metatest2, t1)
conjunction_t1 = cond_t1 + cond_t1
conjunction_t1.Print()
Explanation: After two tests, the probability that the patient is sick is slightly lower than after one (4/17 is about 23.5%, compared to 25%). That's because the second positive test increases our belief that the false positive rate is high (t=0.4), which decreases our belief that either patient is sick.
Now, to compute the probability that both are sick, we can't just convolve the posterior marginal distribution with itself, as we did in Scenario A, because the selection of t is not independent for the two patients. Instead, we have to make a weighted mixture of conditional distributions.
If we know t=t1, we can compute the joint distribution for the two patients:
End of explanation
cond_t2 = Conditional(metatest2, t2)
conjunction_t2 = cond_t2 + cond_t2
conjunction_t2.Print()
Explanation: If we know that t=t1, the probability of sicksick is 0.111. And for t=t2:
End of explanation
posterior_t = Marginal(metatest2)
posterior_t[t1] * conjunction_t1['sicksick'] + posterior_t[t2] * conjunction_t2['sicksick']
Explanation: If we know that t=t2, the probability of sicksick is 0.04.
The overall probability of sicksick is the weighted average of these probabilities:
End of explanation
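The same 1/17 by hand: after two positives the likelihood ratio for (t=0.2 : t=0.4) is (27/100)² : (45/100)² = 9 : 25, and conditioned on t each patient is sick with probability 1/3 or 1/5:

```python
from fractions import Fraction

post_t = {Fraction(2, 10): Fraction(9, 34), Fraction(4, 10): Fraction(25, 34)}
p_sick_t = {Fraction(2, 10): Fraction(1, 3), Fraction(4, 10): Fraction(1, 5)}

# both sick: average the per-t squared probabilities over the posterior on t
p_both = sum(q * p_sick_t[t] ** 2 for t, q in post_t.items())
print(p_both)  # 1/17
```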
metapmf = Pmf()
for t, prob in Marginal(metatest2).Items():
cond = Conditional(metatest2, t)
conjunction = cond + cond
metapmf[conjunction] = prob
metapmf.Print()
Explanation: 1/17 is about 5.88%, somewhat smaller than in Scenario A (1/16, which is 6.25%).
To compute the probabilities for all four outcomes, I'll make a Metapmf that contains the two conditional distributions.
End of explanation
predictive = MakeMixture(metapmf)
predictive.Print()
Explanation: And finally we can use MakeMixture to compute the weighted averages of the posterior probabilities:
End of explanation
def generate_pair_B(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
# Here's the difference
# t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
Explanation: To confirm that this result is correct, I'll use the simuation again with a different generator:
End of explanation
outcomes = run_simulation(generate_pair_B)
outcomes.Print()
Explanation: The difference between Scenario A and Scenario B is the line I commented out. In Scenario B, we generate t once and it applies to both patients.
End of explanation |
8,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Backpropagation Tutorial
(C) 2019 by Damir Cavar
Version: 0.1, November 2019
Step1: For plots of curves and functions we will use pyplot from matplotlib. We will import it here
Step2: Non-linearity Function and Derivatives
The Sigmoid function is defined as
Step3: We can now plot the sigmoid function for x values between -10 and 10
Step4: In the following for Backpropagation we will make use of the Derivative of the Sigmoid function. The Derivative of Sigmoid is defined as
Step5: We can plot the Derivative of the Sigmoid Function as follows
Step6: Forward- and Backpropagation
We will define a simple network that takes an input as defined for X and that generates a corresponding output as defined in y. The input array X is
Step7: The rows in $X$ are the input vectors for our training or learning phase. Each vector has 3 dimensions.
The output array y represents the expected output that the network is expected to learn from the input data. It is defined as a row-vector with 4 rows and 1 column
Step8: We will define a weight matrix W and initialize it with random weights
Step9: In this simple example W is the weight matrix that connects two layers, the input (X) and the output layer (O).
The optimization or learning phase consists of a certain number of iterations in which we compute a prediction, measure its error, and update the weights accordingly.
Step10: Let us keep track of the output error (as becomes clear below) in the following variable
Step11: Repeat for a specific number of iterations the following computations. Initially we take the entire set of training examples in X and process them all at the same time. This is called full batch training, indicated by the dot-product between X and W. Computing O is the first prediction step by taking the dot-product of X and W and computing the sigmoid function over it
Step12: The matrix X has 4 rows and 3 columns. The weight matrix W has 3 rows and 1 column. The output will be a row vector with 4 rows and 1 column, representing the output that we want to align as close as possible to y.
O_error is the difference between y and the initial guess in O. We want to see O to reflect y as closely as possible. After {{ iterations }} in the loop above, we see that O is resembling y very well, with an error of {{ error }}.
In the next step we compute the derivative of the sigmoid function for the initial guess vector. The Derivative is weighted by the error, which means that if the slope was shallow (close to or approaching 0), the guess was quite good, that is the network was confident about the output for a given input. If the slope was higher, as for example for x = 0, the prediction was not very good. Such bad predictions get updated significantly, while the confident predictions get updated minimally, multiplying them with some small number close to 0.
For every single weight, we add the corresponding entry of the dot-product of the transposed input and this delta, so weights behind confident predictions barely change while those behind poor predictions are corrected strongly.
Adding a Layer
In the following example we will slightly change the ground truth. Compare the following definition of y with the definition above
Step13: In the following network specification we introduce a second layer | Python Code:
import numpy as np
Explanation: Backpropagation Tutorial
(C) 2019 by Damir Cavar
Version: 0.1, November 2019
Download: This and various other Jupyter notebooks are available from my GitHub repo.
Introduction
For more details on Backpropagation and its use in Neural Networks see Rumelhart, Hinton, and Williams (1986a) and Rumelhart, Hinton & Williams (1986b). A detailed overview is also provided in Goodfellow, Bengio, and Courville (2016).
The ideas and initial versions of this Python-based notebook have been inspired by many open and public tutorials and articles, but in particular by these three:
- Andrew Trask (2015) A Neural Network in 11 lines of Python (Part 1)
- Matt Mazur (2015) A Step by Step Backpropagation Example
- Arunava Chakraborty (2018) Derivative of the Sigmoid function
A lot of code examples and discussion has been compiled here using these sources.
Preliminaries
This notebook uses nbextensions with python-markdown/main enabled. These extensions might not work in Jupyter Lab, thus some variable references in the markdown cells might not display.
We will use numpy in the following demo. Let us import it and assign the np alias to it:
End of explanation
from matplotlib import pyplot as plt
Explanation: For plots of curves and functions we will use pyplot from matplotlib. We will import it here:
End of explanation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
Explanation: Non-linearity Function and Derivatives
The Sigmoid function is defined as:
$$\sigma(x) = \frac{1}{1 + e^{-x}} $$
We can specify it in Python as:
End of explanation
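Two quick sanity checks on this definition — σ(0) = 0.5 and the symmetry σ(−x) = 1 − σ(x) (the function is restated so the snippet is self-contained):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

assert sigmoid(0) == 0.5
x = np.linspace(-5, 5, 11)
assert np.allclose(sigmoid(-x), 1 - sigmoid(x))  # symmetric around x = 0
```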
%matplotlib inline
x = np.arange(-10, 10, 0.2)
y = sigmoid(x)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Sigmoid")
print()
Explanation: We can now plot the sigmoid function for x values between -10 and 10:
End of explanation
def sigmoidDerivative(x):
return sigmoid(x) * (1 - sigmoid(x))
Explanation: In the following for Backpropagation we will make use of the Derivative of the Sigmoid function. The Derivative of Sigmoid is defined as:
$$\frac{d}{dx}\sigma(x) = \sigma(x) (1 - \sigma(x))$$
We can derive this equation as follows. Assume that:
$$\frac{d}{dx} \sigma(x) = \frac{d}{dx} \frac{1}{1 + e^{-x}} $$
We can invert the fraction using a negative exponent:
$$\frac{d}{dx} \sigma(x) = \frac{d}{dx} \frac{1}{1 + e^{-x}} = \frac{d}{dx} (1 + e^{-x})^{-1}$$
We can apply the reciprocal rule, which is, the numerator is the derivative of the function ($g'(x)$) times -1 divided by the square of the denominator $g(x)$:
$$\frac{d}{dx} \left[ \frac{1}{g(x)} \right] = \frac{-g'(x)}{[g(x)]^2} = -g(x)^{-2} g'(x)$$
In our Derivative of Sigmoid derivation, we can now reformulate as:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = -(1 + e^{-x})^{-2} \frac{d}{dx} (1 + e^{-x})$$
With $\alpha$ and $\beta$ constants, the Rule of Linearity says that:
$$\frac{d}{dx} \left( \alpha f(x) + \beta g(x) \right) = \frac{d}{dx} \left( \alpha f(x) \right) + \frac{d}{dx} \left( \beta g(x) \right) = \alpha f'(x) + \beta g'(x)$$
This means, using the Rule of Linearity and given that the derivative of a constant is 0, we can rewrite our equation as:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = -(1 + e^{-x})^{-2} \frac{d}{dx} (1 + e^{-x}) = -(1 + e^{-x})^{-2} \left( \frac{d}{dx}[1] + \frac{d}{dx}[e^{-x}] \right) = -(1 + e^{-x})^{-2} \left( 0 + \frac{d}{dx}[e^{-x}] \right) = -(1 + e^{-x})^{-2} \frac{d}{dx}[e^{-x}] $$
The Exponential Rule says that:
$$\frac{d}{dx} e^{u(x)} = e^{u(x)} \frac{d}{dx} x$$
We can thus rewrite:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = -(1 + e^{-x})^{-2} e^{-x} \frac{d}{dx}[-x] $$
This is equivalent to:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = -(1 + e^{-x})^{-2} e^{-x} \cdot \left(-\frac{d}{dx}[x]\right)$$
Given that the derivative of a variable with respect to itself is 1, we can rewrite this as:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = -(1 + e^{-x})^{-2} e^{-x} \cdot (-1) = (1 + e^{-x})^{-2} e^{-x} = \frac{e^{-x}}{(1 + e^{-x})^2}$$
We can rewrite the derivative as:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = \frac{1 e^{-x}}{(1 + e^{-x}) (1 + e^{-x})} = \frac{1}{1 + e^{-x}} \frac{e^{-x}}{1 + e^{-x}} = \frac{1}{1 + e^{-x}} \frac{e^{-x} + 1 - 1}{1 + e^{-x}} = \frac{1}{1 + e^{-x}} \left( \frac{1 + e^{-x}}{1 + e^{-x}} - \frac{1}{1 + e^{-x}} \right)$$
We can simplify this to:
$$\frac{d}{dx} (1 + e^{-x})^{-1} = \frac{1}{1 + e^{-x}} \left( 1 - \frac{1}{1 + e^{-x}} \right)$$
This means that we can derive the Derivative of the Sigmoid function as:
$$\frac{d}{dx} \sigma(x) = \sigma(x) ( 1 - \sigma(x) )$$
We can specify the Python function of the Derivative of the Sigmoid function as:
End of explanation
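We can confirm the closed form numerically by comparing it against a central finite difference of the sigmoid (both functions restated so the snippet runs on its own):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoidDerivative(x):
    return sigmoid(x) * (1 - sigmoid(x))

x = np.linspace(-4, 4, 9)
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
print(np.max(np.abs(numeric - sigmoidDerivative(x))))  # tiny (finite-difference error)
```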
%matplotlib inline
x = np.arange(-10, 10, 0.2)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
y = sigmoidDerivative(x)
ax.plot(x, y, color="red", label='Derivative of Sigmoid'.format(1))
y = sigmoid(x)
ax.plot(x, y, color="blue", label='Sigmoid'.format(1))
fig.legend(loc='center right')
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Derivative of the Sigmoid Function")
print()
Explanation: We can plot the Derivative of the Sigmoid Function as follows:
End of explanation
X = np.array( [ [0, 0, 1],
[0, 1, 1],
[1, 0, 1],
[1, 1, 1] ] )
Explanation: Forward- and Backpropagation
We will define a simple network that takes an input as defined for X and that generates a corresponding output as defined in y. The input array X is:
End of explanation
y = np.array( [0, 0, 1, 1] ).reshape(-1, 1)
np.shape(y)
Explanation: The rows in $X$ are the input vectors for our training or learning phase. Each vector has 3 dimensions.
The output array y represents the expected output that the network is expected to learn from the input data. It is defined as a row-vector with 4 rows and 1 column:
End of explanation
W = 2 * np.random.random((3, 1)) - 1
print(W)
Explanation: We will define a weight matrix W and initialize it with random weights:
End of explanation
iterations = 4000
Explanation: In this simple example W is the weight matrix that connects two layers, the input (X) and the output layer (O).
The optimization or learning phase consists of a certain number of iterations that repeatedly compute a forward pass, measure the output error, and update the weights:
End of explanation
error = 0.0
Explanation: Let us keep track of the output error (as becomes clear below) in the following variable:
End of explanation
for i in range(iterations):
O = sigmoid(np.dot(X, W))
O_error = y - O
error = np.mean(np.abs(O_error))
if (i % 100) == 0:
print("Error:", error)
# Compute the delta
O_delta = O_error * sigmoidDerivative(O)
# update weights
W += np.dot(X.T, O_delta)
print("O:", O)
Explanation: Repeat the following computations for a specific number of iterations. Initially we take the entire set of training examples in X and process them all at the same time. This is called full batch training, indicated by the dot-product between X and W. Computing O is the first prediction step by taking the dot-product of X and W and computing the sigmoid function over it:
End of explanation
y = np.array([[0],
[1],
[1],
[0]])
Explanation: The matrix X has 4 rows and 3 columns. The weight matrix W has 3 rows and 1 column. The output will be a column vector with 4 rows and 1 column, representing the output that we want to align as close as possible to y.
O_error is the difference between y and the initial guess in O. We want to see O to reflect y as closely as possible. After {{ iterations }} in the loop above, we see that O is resembling y very well, with an error of {{ error }}.
In the next step we compute the derivative of the sigmoid function for the initial guess vector. The Derivative is weighted by the error, which means that if the slope was shallow (close to or approaching 0), the guess was quite good, that is the network was confident about the output for a given input. If the slope was higher, as for example for x = 0, the prediction was not very good. Such bad predictions get updated significantly, while the confident predictions get updated minimally, multiplying them with some small number close to 0.
For every single weight, we accumulate the corresponding input-weighted delta and add it to that weight; this is what the update W += np.dot(X.T, O_delta) in the loop above does.
Adding a Layer
In the following example we will slightly change the ground truth. Compare the following definition of y with the definition above:
End of explanation
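The point about slopes can be checked with a quick scalar computation (logistic_slope is a throwaway helper, equivalent to sigmoidDerivative above): the slope peaks at x = 0 and is tiny for confident outputs far from 0, so confident predictions receive correspondingly tiny updates.

```python
import math

def logistic_slope(x):
    # sigma(x) * (1 - sigma(x)) for a scalar x.
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

print(logistic_slope(0.0))  # 0.25 -- the maximum, an unconfident output
print(logistic_slope(5.0))  # ~0.0066 -- a confident output, almost no update
```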
np.random.seed(1)
# randomly initialize our weights with mean 0
Wh = 2 * np.random.random((3, 4)) - 1
Wo = 2 * np.random.random((4, 1)) - 1
Xt = X.T # precomputing the transform of X for the loop
for i in range(80000):
# Feed forward through layers X, H, and O
H = sigmoid(np.dot(X, Wh))
O = sigmoid(np.dot(H, Wo))
# how much did we miss the target value?
O_error = y - O
error = np.mean(np.abs(O_error))
if (i % 10000) == 0:
print("Error:", error)
# compute the direction of the optimization for the output layer
O_delta = O_error * sigmoidDerivative(O)
# how much did each H value contribute to the O error (according to the weights)?
H_error = O_delta.dot(Wo.T)
# compute the directions of the optimization for the hidden layer
H_delta = H_error * sigmoidDerivative(H)
Wo += H.T.dot(O_delta)
Wh += Xt.dot(H_delta)
print(O)
Explanation: In the following network specification we introduce a second layer, a hidden layer H between the input X and the output O:
End of explanation |
8,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute LCMV beamformer on evoked data
Compute LCMV beamformer solutions on evoked dataset for three different choices
of source orientation and stores the solutions in stc files for visualisation.
Step1: Get epochs | Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
Explanation: Compute LCMV beamformer on evoked data
Compute LCMV beamformer solutions on evoked dataset for three different choices
of source orientation and stores the solutions in stc files for visualisation.
End of explanation
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bad channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads', selection=left_temporal_channels)
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
plt.close('all')
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation', 'Normal orientation', 'Max-power '
'orientation']
colors = ['b', 'k', 'r']
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01,
pick_ori=pick_ori)
# View activation time-series
label = mne.read_label(fname_label)
stc_label = stc.in_label(label)
plt.plot(1e3 * stc_label.times, np.mean(stc_label.data, axis=0), color,
hold=True, label=desc)
plt.xlabel('Time (ms)')
plt.ylabel('LCMV value')
plt.ylim(-0.8, 2.2)
plt.title('LCMV in %s' % label_name)
plt.legend()
plt.show()
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
Explanation: Get epochs
End of explanation |
8,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test | Python Code:
%run ../bst/bst.py
%load ../bst/bst.py
def in_order_traversal(node, visit_func):
# TODO: Implement me
pass
def pre_order_traversal(node, visit_func):
# TODO: Implement me
pass
def post_order_traversal(node, visit_func):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement depth-first searches (in-order, pre-order, post-order traversals) on a binary tree.
Constraints
Test Cases
Algorithm
Code
Unit Test
Constraints
Can we assume we already have a Node class with an insert method?
Yes
Test Cases
In-Order Traversal
5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8
1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
Pre-Order Traversal
5, 2, 8, 1, 3 -> 5, 2, 1, 3, 8
1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
Post-Order Traversal
5, 2, 8, 1, 3 -> 1, 3, 2, 8, 5
1, 2, 3, 4, 5 -> 5, 4, 3, 2, 1
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
%run ../utils/results.py
# %load test_dfs.py
from nose.tools import assert_equal
class TestDfs(object):
def __init__(self):
self.results = Results()
def test_dfs(self):
node = Node(5)
insert(node, 2)
insert(node, 8)
insert(node, 1)
insert(node, 3)
in_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 5, 8]")
self.results.clear_results()
pre_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[5, 2, 1, 3, 8]")
self.results.clear_results()
post_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[1, 3, 2, 8, 5]")
self.results.clear_results()
node = Node(1)
insert(node, 2)
insert(node, 3)
insert(node, 4)
insert(node, 5)
in_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
self.results.clear_results()
pre_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
self.results.clear_results()
post_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), "[5, 4, 3, 2, 1]")
print('Success: test_dfs')
def main():
test = TestDfs()
test.test_dfs()
if __name__ == '__main__':
main()
Explanation: Unit Test
End of explanation |
8,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running simple example through EC2
start downloading linking_EC2
Tutorial for running linking_EC2, see
Step1: start the actual cluster
Step2: login to main node and run
Step3: Terminate the cluster | Python Code:
%%bash
. ~/.bashrc
pip install --upgrade git+https://git@github.com/JonasWallin/linkingEC2
from linkingEC2 import LinkingHandler
from ConfigParser import ConfigParser
config = ConfigParser()
starfigconfig_folder = "/Users/jonaswallin/.starcluster/"
config.read(starfigconfig_folder + "config")
acess_key_id = config.get('aws info', 'aws_access_key_id' , 0)
aws_secret_key = config.get('aws info', 'aws_secret_access_key', 0)
aws_region_name = config.get('aws info', 'aws_region_name' , 0)
my_key_loc = config.get('key mykeyABC', 'key_location',0)
linker = LinkingHandler(aws_secret_access_key = aws_secret_key,
aws_access_key_id = acess_key_id,
aws_region_name = aws_region_name,
key_location = my_key_loc,
key_name = 'mykeyABC' )
Explanation: Running simple example through EC2
start downloading linking_EC2
Tutorial for running linking_EC2, see:
https://github.com/JonasWallin/linkingEC2/blob/master/script/running%20MPI4py.ipynb
End of explanation
start_cluster= False
spot_cluster = True
n_nodes = 1
type_node = 'c4.8xlarge'
if spot_cluster:
linker.connect_spot_instance()
elif start_cluster:
linker.start_cluster('ami-d05e75b8', type_node, ['linking_EC2'], n_nodes)
else:
linker.connect_cluster()
PACKAGES_APT = [' libatlas3-base',
'libatlas-base-dev',
'python-dev',
'openmpi-bin',
'libopenmpi-dev',
'python-numpy',
'python-sklearn',
'python-matplotlib',
'git',
'python-scipy',
'r-base',
'r-base-core']
PACKAGES_PIP = ['cython',
'mpi4py',
'simplejson',
'rpy2']
#Adding later version of R
#http://philipp-burckhardt.com/2014/05/25/installing-r-rstudio-on-ubuntu/
command = 'sudo add-apt-repository "deb http://cran.rstudio.com/bin/linux/ubuntu trusty/"'
linker.send_command_ssh(command = 'gpg --keyserver pgpkeys.mit.edu --recv-key 51716619E084DAB9')
linker.send_command_ssh(command = 'gpg -a --export 51716619E084DAB9 | sudo apt-key add -')
linker.send_command_ssh(command = command)
linker.apt_install(PACKAGES_APT)
#problem with memory installing scipy:
#http://naokiwatanabe.blogspot.se/2014/12/install-numpy-schipy-matplotlib-and-etc.html
linker.send_command_ssh(command = 'sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024')
linker.send_command_ssh(command = 'sudo /sbin/mkswap /var/swap.1')
linker.send_command_ssh(command = 'sudo /sbin/swapon /var/swap.1')
linker.pip_install('-U scipy')
linker.send_command_ssh(command = 'sudo swapoff /var/swap.1')
linker.send_command_ssh(command = 'sudo sudo rm /var/swap.1')
linker.pip_install(PACKAGES_PIP)
linker.pip_install(['git+https://git@github.com/JonasWallin/BayesFlow'])
import os
os.system('say "your packages are downloaded"')
#linker.send_command_ssh( command = 'rm ~/covs_.npy')
#linker.send_command_ssh( command = 'rm ~/means_.npy')
#linker.send_command_ssh( command = 'rm ~/weights_.npy')
#linker.send_command_ssh( command = 'rm ~/article_util.py')
#linker.send_command_ssh( command = 'rm ~/article_simulatedata.py')
#linker.send_command_ssh( command = 'rm ~/article_estimate_largerdata1_mpi.py')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/covs_.npy')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/means_.npy')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/weights_.npy')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/article_util.py')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/article_simulatedata.py')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/article_estimate_largerdata1_mpi.py')
print( linker.get_ssh_login() )
Explanation: start the actual cluster
End of explanation
import numpy as np
tot_process = np.sum([node['n_process'] for node in linker.nodes])
command = 'mpirun -hostfile nodefile -n %d python article_estimate_largerdata1_mpi.py'%(tot_process)
linker.send_command_ssh(nodes = 0, command = command)
linker.copy_files_from_node('simulation_result.npy')
linker.copy_files_from_node('mus_sim.npy')
linker.copy_files_from_node('sim_data.npy')
Explanation: login to the main node and run:
mpirun -hostfile nodefile -n 2 python article_estimate_largerdata1_mpi.py
End of explanation
linker.terminate_cluster()
print(linker.conn.get_all_spot_instance_requests())
spot_instance = linker.conn.get_all_spot_instance_requests()
nodes = []
reservation = []
print(spot_instance)
for i,spot in enumerate(spot_instance):
print(spot.instance_id)
if spot.instance_id is not None:
res = linker.conn.get_all_instances(instance_ids=[spot.instance_id])
node_alias = 'node{0:03d}'.format(i+1)
reservation.append(res[0].instances[0])
public_dns = res[0].instances[0].public_dns_name
private_dns = res[0].instances[0].private_dns_name
private_ip_address = res[0].instances[0].private_ip_address
nodes.append({'name' :node_alias,
'public_dns' :public_dns,
'private_dns':private_dns,
'private_ip' :private_ip_address})
linker.reservation = reservation
linker.nodes = nodes
linker.test_ssh_in()
from linkingEC2.linkingEC2 import get_number_processes
get_number_processes(nodes = linker.nodes,
my_key = linker.my_key_location,
user = linker.user,
silent = linker.silent)
#copying the ssh keys to nodes
linker.copy_files_to_nodes(file_name = linker.my_key_location,
destination = '~/.ssh/id_rsa')
linker._ssh_disable_StrictHostKeyChecking()
print("ssh -i {keyloc} -o 'StrictHostKeyChecking no' ubuntu@{hostname}".format(
keyloc = linker.my_key_location,
hostname = linker.nodes[0]['public_dns']))
linker.setup_nodefile()
linker.send_command_ssh( command = 'rm ~/covs_.npy')
linker.send_command_ssh( command = 'rm ~/means_.npy')
linker.send_command_ssh( command = 'rm ~/weights_.npy')
linker.send_command_ssh( command = 'rm ~/article_util.py')
linker.send_command_ssh( command = 'rm ~/article_simulatedata.py')
linker.send_command_ssh( command = 'rm ~/article_estimate_largerdata1_mpi.py')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/covs_.npy')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/means_.npy')
linker.send_command_ssh( command = 'wget https://raw.githubusercontent.com/JonasWallin/BayesFlow/master/examples/article1/weights_.npy')
linker.copy_files_from_node('simulation_result.npy')
linker.copy_files_from_node('mus_sim.npy')
Explanation: Terminate the cluster:
End of explanation |
8,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Micromagnetic standard problem 3
Author
Step1: Firstly, we import all necessary modules.
Step2: The following two functions are used for initialising the system's magnetisation [1].
Step3: The following function is used for convenience. It takes two arguments
Step4: Relaxed states
Vortex state
Step5: Flower state
Step6: Cross section
Now, we can plot the energies of both vortex and flower states as a function of cube edge length. This will give us an idea where the state transition occurs.
Step7: We now know that the energy crossing occurs between $8l_\text{ex}$ and $9l_\text{ex}$, so a bisection algorithm can be used to find the exact crossing. | Python Code:
!rm -rf standard_problem3/ # Delete old result files (if any).
Explanation: Micromagnetic standard problem 3
Author: Marijan Beg, Ryan Pepper
Date: 11 May 2016
Problem specification
This problem is to calculate the single domain limit of a cubic magnetic particle. This is the size $L$ of equal energy for the so-called flower state (which one may also call a splayed state or a modified single-domain state) on the one hand, and the vortex or curling state on the other hand.
Geometry:
A cube with edge length, $L$, expressed in units of the intrinsic length scale, $l_\text{ex} = \sqrt{A/K_\text{m}}$, where $K_\text{m}$ is a magnetostatic energy density, $K_\text{m} = \frac{1}{2}\mu_{0}M_\text{s}^{2}$.
Material parameters:
uniaxial anisotropy $K_\text{u}$ with $K_\text{u} = 0.1 K_\text{m}$, and with the easy axis directed parallel to a principal axis of the cube (0, 0, 1),
exchange energy constant is $A = \frac{1}{2}\mu_{0}M_\text{s}^{2}l_\text{ex}^{2}$.
More details about the standard problem 3 can be found in Ref. 1.
Simulation
End of explanation
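A quick numeric check of the derived material parameters, using the same values the code below uses ($K_\text{m} = 10^6\,\text{J/m}^3$ and a 100 nm cube at $L = 8\,l_\text{ex}$):

```python
import math

mu0 = 4 * math.pi * 1e-7        # magnetic constant (H/m)
Km = 1e6                        # magnetostatic energy density (J/m**3)
Ms = math.sqrt(2 * Km / mu0)    # from Km = 0.5 * mu0 * Ms**2
lex = 100e-9 / 8                # exchange length when the 100 nm edge equals 8*lex
A = 0.5 * mu0 * Ms**2 * lex**2  # equivalently Km * lex**2

print(Ms)  # ~1.26e6 A/m
print(A)   # ~1.56e-10 J/m
```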
import sys
sys.path.append('../')
from sim import Sim
from atlases import BoxAtlas
from meshes import RectangularMesh
from energies.exchange import UniformExchange
from energies.demag import Demag
from energies.zeeman import FixedZeeman
from energies.anisotropy import UniaxialAnisotropy
Explanation: Firstly, we import all necessary modules.
End of explanation
# Function for initialising the flower state.
def m_init_flower(pos):
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = 2*z - 1
mz = -2*y + 1
norm_squared = mx**2 + my**2 + mz**2
if norm_squared <= 0.05:
return (1, 0, 0)
else:
return (mx, my, mz)
# Function for initialising the vortex state.
def m_init_vortex(pos):
x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9
mx = 0
my = np.sin(np.pi/2 * (x-0.5))
mz = np.cos(np.pi/2 * (x-0.5))
return (mx, my, mz)
Explanation: The following two functions are used for initialising the system's magnetisation [1].
End of explanation
import numpy as np
def relaxed_state(L, m_init):
mu0 = 4*np.pi*1e-7 # magnetic constant (H/m)
N = 16 # discretisation in one dimension
cubesize = 100e-9 # cube edge length (m)
cellsize = cubesize/N # discretisation in all three dimensions.
lex = cubesize/L # exchange length.
Km = 1e6 # magnetostatic energy density (J/m**3)
Ms = np.sqrt(2*Km/mu0) # magnetisation saturation (A/m)
A = 0.5 * mu0 * Ms**2 * lex**2 # exchange energy constant
K1 = 0.1*Km # Uniaxial anisotropy constant
axis = (0, 0, 1) # Uniaxial anisotropy easy-axis
cmin = (0, 0, 0) # Minimum sample coordinate.
cmax = (cubesize, cubesize, cubesize) # Maximum sample coordinate.
d = (cellsize, cellsize, cellsize) # Discretisation.
atlas = BoxAtlas(cmin, cmax) # Create an atlas object.
mesh = RectangularMesh(atlas, d) # Create a mesh object.
sim = Sim(mesh, Ms, name='standard_problem3') # Create a simulation object.
sim.add(UniformExchange(A)) # Add exchange energy.
sim.add(Demag()) # Add demagnetisation energy.
sim.add(UniaxialAnisotropy(K1, axis)) # Add uniaxial anisotropy energy.
sim.set_m(m_init) # Initialise the system.
sim.relax() # Relax the magnetisation.
return sim
Explanation: The following function is used for convenience. It takes two arguments:
$L$ - the cube edge length in units of $l_\text{ex}$
the function for initialising the system's magnetisation
It returns the relaxed simulation object.
End of explanation
sim_vortex = relaxed_state(8, m_init_vortex)
print 'The relaxed state energy is {} J'.format(sim_vortex.total_energy())
# Plot the magnetisation in the sample slice.
%matplotlib inline
sim_vortex.m.plot_slice('y', 50e-9, xsize=6)
Explanation: Relaxed states
Vortex state:
End of explanation
sim_flower = relaxed_state(8, m_init_flower)
print 'The relaxed state energy is {} J'.format(sim_flower.total_energy())
# Plot the magnetisation in the sample slice.
sim_flower.m.plot_slice('z', 50e-9, xsize=6)
Explanation: Flower state:
End of explanation
L_array = np.linspace(8, 9, 11) # values of L for which the system is relaxed.
vortex_energies = []
flower_energies = []
for L in L_array:
sim_vortex = relaxed_state(L, m_init_vortex)
sim_flower = relaxed_state(L, m_init_flower)
vortex_energies.append(sim_vortex.total_energy())
flower_energies.append(sim_flower.total_energy())
# Plot the energy dependences.
import matplotlib.pyplot as plt
plt.plot(L_array, vortex_energies, 'o-', label='vortex')
plt.plot(L_array, flower_energies, 'o-', label='flower')
plt.xlabel('L (lex)')
plt.ylabel('E')
plt.grid()
plt.legend()
Explanation: Cross section
Now, we can plot the energies of both vortex and flower states as a function of cube edge length. This will give us an idea where the state transition occurs.
End of explanation
from scipy.optimize import bisect
def energy_difference(L):
sim_vortex = relaxed_state(L, m_init_vortex)
sim_flower = relaxed_state(L, m_init_flower)
return sim_vortex.total_energy() - sim_flower.total_energy()
cross_section = bisect(energy_difference, 8, 9, xtol=0.1)
print 'The transition between vortex and flower states occurs at {}*lex'.format(cross_section)
Explanation: We now know that the energy crossing occurs between $8l_\text{ex}$ and $9l_\text{ex}$, so a bisection algorithm can be used to find the exact crossing.
End of explanation |
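The bisection idea used here is worth seeing in isolation; a toy pure-Python version (not the scipy call above) finding the root of $x^2 - 2$:

```python
def bisect_root(f, a, b, xtol=1e-8):
    # Assumes f(a) and f(b) bracket a sign change; repeatedly halves the
    # bracket, just as scipy.optimize.bisect does for energy_difference above.
    fa = f(a)
    while b - a > xtol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m            # root is in [a, m]
        else:
            a, fa = m, f(m)  # root is in [m, b]
    return 0.5 * (a + b)

print(bisect_root(lambda x: x * x - 2.0, 1.0, 2.0))  # ~1.41421356
```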
8,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If not yet available, some libraries and their Python bindings have to be installed
Step1: Create a meshed screen with a central hole
The screen is rectangular (215*150 mm) with a 4mm central hole.<br>
Typical cell size is 5 mm along the outside of the screen and 1.0 mm near the inner hole.<br>
The wavelength is 0.3 mm only. That means we need about 0.02 mm resolution along the x/z axis.<br>
When generating the geometry we stretch it by a factor 50 and undo that after creating the mesh,
effectively generating cells that are denser along the stretched direction.
Step2: The z-position of all mesh points is computed to lay on a toroid with 1.625 m focal length.<br>
The radius of curvature in the plane is 2 f, out of plane f.<br>
The elevation in z direction is computed for x and y positions independently, assuming the size of the mirror being small in comparison to its focal length.
Step3: The screen is placed at z=3.625 m from the origin. A beam is assumed to propagate in z direction
The fields shall be reflected to the x direction. The screen normal is pointing in the negative z and positive x direction (x - left, y - up). To achieve that the screen has to be rotated by 45 degrees about the y axis.
Step4: define the timing
The beam is assumed to start at t=0. The fields are propagating with c so the expected time a signal arrives at some screen point is z/c. | Python Code:
import numpy as np
from scipy import constants
import pygmsh
from MeshedFields import *
Explanation: If not yet available, some libraries and their Python bindings have to be installed:<br>
- gmsh (best installed globally through package management system)
- python3 -m pip install pygmsh --user
- VTK (best installed globally through package management system)
- python3 -m pip install vtk --user
End of explanation
with pygmsh.geo.Geometry() as geom:
Lx = 0.215
Ly = 0.150
Ri = 0.002
lca = 0.005
lci = 0.001
stretch = 50.0
p1 = geom.add_point([Lx/2.0*stretch, Ly/2.0], lca)
p2 = geom.add_point([-Lx/2.0*stretch, Ly/2.0], lca)
p3 = geom.add_point([-Lx/2.0*stretch, -Ly/2.0], lca)
p4 = geom.add_point([Lx/2.0*stretch, -Ly/2.0], lca)
p1i = geom.add_point([Ri*stretch, 0.0], lci)
p2i = geom.add_point([0.0, Ri], lci)
p3i = geom.add_point([-Ri*stretch, 0.0], lci)
p4i = geom.add_point([0.0, -Ri], lci)
pc = geom.add_point([0.0, 0.0])
pa = geom.add_point([0.0, 0.01])
# the mesh is circumscribed with a polygon
l1 = geom.add_line(p1, p2)
l2 = geom.add_line(p2, p3)
l3 = geom.add_line(p3, p4)
l4 = geom.add_line(p4, p1)
outline = geom.add_curve_loop([l1, l2, l3, l4])
# the hole is circumscribed with four elliptic arcs
e1i = geom.add_ellipse_arc(start=p1i, center=pc, point_on_major_axis=pa, end=p2i)
e2i = geom.add_ellipse_arc(start=p2i, center=pc, point_on_major_axis=pa, end=p3i)
e3i = geom.add_ellipse_arc(start=p3i, center=pc, point_on_major_axis=pa, end=p4i)
e4i = geom.add_ellipse_arc(start=p4i, center=pc, point_on_major_axis=pa, end=p1i)
hole = geom.add_curve_loop([e1i,e2i,e3i,e4i])
pl = geom.add_plane_surface(outline, holes=[hole])
mesh = geom.generate_mesh()
mesh
# un-stretch
pts = np.array([np.array([p[0]/stretch,p[1],0.0]) for p in mesh.points])
tris = mesh.cells_dict['triangle']
Explanation: Create a meshed screen with a central hole
The screen is rectangular (215*150 mm) with a 4mm central hole.<br>
Typical cell size is 5 mm along the outside of the screen and 1.0 mm near the inner hole.<br>
The wavelength is 0.3 mm only. That means we need about 0.02 mm resolution along the x/z axis.<br>
When generating the geometry we stretch it by a factor 50 and undo that after creating the mesh,
effectively generating cells that are denser along the stretched direction.
End of explanation
def ToroidZ(x,y,f):
# return f-math.sqrt(f*f-y*y) + 2*f-math.sqrt(4*f*f-x*x)
return math.sqrt( math.pow(math.sqrt(f*f-y*y)+f,2) -x*x ) - 2*f
pts = np.array([np.array([p[0],p[1],ToroidZ(p[0],p[1],1.625)]) for p in pts])
screen = MeshedField(pts,tris)
print("%d points" % len(screen.points))
print("%d triangles" % len(screen.triangles))
area = screen.MeshArea()
normals = screen.MeshNormals()
average = np.sum(normals, axis=0)/screen.Np
print("total mesh area = %7.3f cm²" % (1.0e4*np.sum(area)))
print("screen normal = %s" % average)
screen.ShowMeshedField(showAxes=True)
Explanation: The z-position of all mesh points is computed to lay on a toroid with 1.625 m focal length.<br>
The radius of curvature in the plane is 2 f, out of plane f.<br>
The elevation in z direction is computed for x and y positions independently, assuming the size of the mirror being small in comparison to its focal length.
End of explanation
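The stated radii can be checked numerically: near the vertex the surface should sag like $-x^2/(4f)$ in-plane (radius $2f$) and $-y^2/(2f)$ out-of-plane (radius $f$). A quick check, restating the function so the snippet is self-contained:

```python
import math

def ToroidZ(x, y, f):
    # Same toroid as above: focal length f, radii 2f in-plane and f out-of-plane.
    return math.sqrt(math.pow(math.sqrt(f * f - y * y) + f, 2) - x * x) - 2 * f

f = 1.625
print(ToroidZ(0.0, 0.0, f))                           # ~0 at the vertex
print(ToroidZ(1e-3, 0.0, f), -(1e-3) ** 2 / (4 * f))  # sagitta of a radius-2f circle
print(ToroidZ(0.0, 1e-3, f), -(1e-3) ** 2 / (2 * f))  # sagitta of a radius-f circle
```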
def RotXZ(φ):
return np.array([[np.cos(φ),0,-np.sin(φ)],[0,1,0],[np.sin(φ),0,np.cos(φ)]])
RR = RotXZ(45.0/180.0*math.pi)
pts = np.array([np.dot(RR,p) for p in pts])
screen = MeshedField(pts,tris)
print("%d points" % len(screen.points))
print("%d triangles" % len(screen.triangles))
area = screen.MeshArea()
normals = screen.MeshNormals()
average = np.sum(normals, axis=0)/screen.Np
print("total mesh area = %7.3f cm²" % (1.0e4*np.sum(area)))
print("screen normal = %s" % average)
screen.ShowMeshedField(showAxes=True)
pts = np.array([p+np.array([0.0,0.0,3.625]) for p in pts])
screen = MeshedField(pts,tris)
screen.ShowMeshedField(showAxes=True)
Explanation: The screen is placed at z=3.625 m from the origin. A beam is assumed to propagate in z direction.
The fields shall be reflected to the x direction. The screen normal is pointing in the negative z and positive x direction (x - left, y - up). To achieve that the screen has to be rotated by 45 degrees about the y axis.
End of explanation
# time step
screen.dt = 0.5e-13
# some time shift of the waveform start
delay = 15.0e-12
# all points use the same timing grid
screen.Nt = 800
screen.t0 = np.array([p[2]/constants.c-screen.Nt/2*screen.dt+delay for p in screen.pos])
filename="OL8_ToroidalMirrorWithHole.h5"
screen.WriteMeshedField(filename)
Explanation: define the timing
The beam is assumed to start at t=0. The fields are propagating with c so the expected time a signal arrives at some screen point is z/c.
End of explanation |
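For the screen position used here (z = 3.625 m) that works out to roughly 12.1 ns; a scalar version of the t0 expression above as a quick check (c written out explicitly instead of scipy's constants.c):

```python
c = 299792458.0                     # speed of light in vacuum (m/s)
z = 3.625                           # screen distance used above (m)
dt = 0.5e-13
Nt = 800
delay = 15.0e-12

arrival = z / c                     # expected signal arrival time at the screen
t0 = arrival - Nt / 2 * dt + delay  # start of the sampling window
print(arrival)  # ~1.209e-08 s, i.e. about 12.1 ns
print(t0)
```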
8,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
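A minimal sketch of text_to_ids as specified above — hypothetical in its details, and assuming the vocabulary dictionaries already contain the special tokens such as '<EOS>':

```python
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    # One list of word ids per sentence; each target sentence gets <EOS> appended.
    source_id_text = [[source_vocab_to_int[word] for word in line.split()]
                      for line in source_text.split('\n')]
    eos_id = target_vocab_to_int['<EOS>']
    target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [eos_id]
                      for line in target_text.split('\n')]
    return source_id_text, target_id_text
```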
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
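The batch transformation itself is easy to see outside TensorFlow; a NumPy illustration (go_id stands in for the <GO> token's id — the real implementation does the same slicing and concatenation with TF ops):

```python
import numpy as np

def process_decoding_input_demo(target_data, go_id):
    # Drop the last word id of every row, then prepend the <GO> id to each row.
    trimmed = target_data[:, :-1]
    go_column = np.full((target_data.shape[0], 1), go_id, dtype=target_data.dtype)
    return np.concatenate([go_column, trimmed], axis=1)

batch = np.array([[1, 2, 3], [4, 5, 6]])
print(process_decoding_input_demo(batch, 9))  # [[9 1 2] [9 4 5]]
```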
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French.
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in list(source_text.split('\n'))]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in list(target_text.split('\n'))]
for sentence in target_id_text:
sentence.append(target_vocab_to_int['<EOS>'])
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand them. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
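As a quick illustration of the mapping, here is a pure-Python sketch with an invented toy vocabulary — the real dictionaries come from the helper preprocessing, so everything below is illustrative only:

```python
# Pure-Python sketch of the conversion; this vocabulary is invented and
# is NOT the real target_vocab_to_int built by the helper functions.
toy_vocab = {'<EOS>': 1, 'bonjour': 2, 'monde': 3}

def to_ids(text, vocab, append_eos=False):
    ids = []
    for sentence in text.split('\n'):
        sentence_ids = [vocab[word] for word in sentence.split()]
        if append_eos:
            # Mark the end of every target sentence, as described above.
            sentence_ids.append(vocab['<EOS>'])
        ids.append(sentence_ids)
    return ids

print(to_ids('bonjour monde\nmonde', toy_vocab, append_eos=True))
# [[2, 3, 1], [3, 1]]
```

Only the target side gets the <EOS> id appended; the source side is converted without it.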
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
input_ = tf.placeholder(tf.int32, shape=(None, None), name="input")
targets = tf.placeholder(tf.int32, shape=(None, None), name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input_, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
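The same transformation is easy to check outside the graph. A NumPy sketch with a made-up target batch and an assumed <GO> id of 0:

```python
import numpy as np

# Toy target batch (ids are made up); the <GO> id is assumed to be 0 here.
go_id = 0
target_batch = np.array([[5, 6, 7],
                         [8, 9, 7]])

# Drop the last word id of each sequence (mirrors the tf.strided_slice call) ...
ending = target_batch[:, :-1]
# ... and prepend the <GO> id to every row (mirrors tf.fill + tf.concat).
go_column = np.full((target_batch.shape[0], 1), go_id, dtype=target_batch.dtype)
dec_input = np.concatenate([go_column, ending], axis=1)
print(dec_input.tolist())
# [[0, 5, 6], [0, 8, 9]]
```

The decoder is thus fed <GO> plus the target shifted right by one step, which is exactly what teacher forcing needs.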
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
enc_cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
drop = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
drop, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# Decoder RNNs
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
# Output Layer
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope("decoding") as decoding_scope:
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# Encoder embedding
rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# Encode the input
enc_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)
# Process target data
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Decode the encoded input
training_logits, inference_logits = \
decoding_layer(dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
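One piece of this pipeline that can be mirrored outside TensorFlow is the embedding lookup: tf.nn.embedding_lookup is plain row indexing into the embedding matrix. A NumPy sketch with invented values:

```python
import numpy as np

# Invented 5-word vocabulary with 3-dimensional embeddings.
dec_embeddings = np.arange(15, dtype=float).reshape(5, 3)
dec_input = np.array([[0, 2], [4, 1]])  # a batch of two id sequences

# Row indexing mirrors tf.nn.embedding_lookup(dec_embeddings, dec_input):
# each id picks out its embedding row, giving shape (batch, time, embed).
dec_embed_input = dec_embeddings[dec_input]
print(dec_embed_input.shape)
# (2, 2, 3)
```

This is why the decoder's embedded input has one extra trailing dimension compared with the id batch.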
End of explanation
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 64
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 20
decoding_embedding_size = 20
# Learning Rate
learning_rate = 0.005
# Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
if batch_i % (len(source_int_text) // batch_size // 10) == 0:
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# Lowercase the sentence first, then map each word, falling back to the <UNK> id.
ls_word_ids = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
return ls_word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
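A pure-Python sketch of these three steps, using an invented toy vocabulary rather than the real vocab_to_int:

```python
# Pure-Python sketch of the three steps; this vocabulary is invented and
# stands in for the real vocab_to_int.
toy_vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}

def to_seq(sentence, vocab_to_int):
    # Lowercase first, then map each word, falling back to the <UNK> id
    # for out-of-vocabulary words.
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

print(to_seq('He saw a YELLOW truck', toy_vocab_to_int))
# [1, 2, 3, 0, 4]
```

Lowercasing before the membership test matters: checking the original-cased word against a lowercase vocabulary would wrongly map capitalized words to <UNK>.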
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
Description:
Data Binning
The following script is used to bin the data and check the stats of participants
Step1: Reading scan json files and extracting scan parameters
Step2: Convention
Step3: Group Stats
The following section checks the stats of participants lying in the following bins
Step4: AGE
Step5: Box Plots
Step6: Eyes Open vs Closed
Step7: Stats
Step8: Result
Step9: Result
Step10: Result
Step11: Result
Step12: Matching based on Volumes
Volume bins
100 - 150
150 - 200
200 - 250
250 - 300
Step13: Matching based on Age
Age bins
6 - 9
9 - 12
12 - 15
15 - 18
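A hedged sketch of how such bins could be formed with pandas.cut — the ages below are invented; only the bin edges follow the age bins listed above:

```python
import pandas as pd

# Invented ages; the bin edges follow the age bins listed above
# (6-9, 9-12, 12-15, 15-18). pd.cut uses right-closed intervals by default.
ages = pd.Series([7.2, 10.5, 13.0, 16.8, 11.1])
age_bins = pd.cut(ages, bins=[6, 9, 12, 15, 18])
print(age_bins.value_counts().sort_index())
```

The same call with bins=[100, 150, 200, 250, 300] would cover the volume bins above.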
Step14: Create a function to do volumes matching
Step15: Recycle Bin
Step16: Extract the sub_id where volume lies in a particular bin | Python Code:
import pandas as pd
import numpy as np
import json
import string
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
df
Explanation: Data Binning
The following script is used to bin the data and check the stats of participants
End of explanation
# saving the file paths
!find /home1/varunk/data/ABIDE1/RawDataBIDs/ -name 'task-rest_bold.json' > scan_params_file.txt
# read the above created file paths:
with open('scan_params_file.txt', 'r') as f:
scan_param_paths = f.read().split('\n')[0:-1]
scan_param_paths
# for json_path in scan_param_paths:
# with open(json_path, 'rt') as fp:
# task_info = json.load(fp)
# # Accessing the contents:
# tr = task_info['RepetitionTime']
# volumes = task_info['NumberofMeasurements']
# xdim_mm, ydim_mm = task_info['PixelSpacing'].split('x')
# zdim_mm = task_info['SpacingBetweenSlices']
# xdim_voxels, ydim_voxels = task_info['AcquisitionMatrix'].split('x')
# zdim_voxels = task_info['NumberOfSlices']
Explanation: Reading scan json files and extracting scan parameters
End of explanation
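The commented cell above shows the intended parsing. A runnable sketch on a hand-made task_info dictionary — every value here is invented; only the field names follow the code above:

```python
# Hypothetical scan-parameter record shaped like the per-site JSON files;
# the values are made up for illustration.
task_info = {
    'RepetitionTime': 2.0,
    'NumberofMeasurements': 196,
    'PixelSpacing': '3.0x3.0',
    'SpacingBetweenSlices': 4.0,
    'AcquisitionMatrix': '64x64',
    'NumberOfSlices': 34,
}

tr = task_info['RepetitionTime']
volumes = task_info['NumberofMeasurements']
# Note: split('x') yields strings, so the dimensions may still need float().
xdim_mm, ydim_mm = task_info['PixelSpacing'].split('x')
zdim_mm = task_info['SpacingBetweenSlices']
xdim_voxels, ydim_voxels = task_info['AcquisitionMatrix'].split('x')
zdim_voxels = task_info['NumberOfSlices']
print(tr, volumes, xdim_mm, ydim_mm, zdim_mm, xdim_voxels, ydim_voxels, zdim_voxels)
```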
SITES = np.unique(df.as_matrix(['SITE_ID']).squeeze())
data_frame = pd.DataFrame({
'SITE_NAME': [] ,
'TR': [],
'VOLUMES': [],
'xdim_mm': [],
'ydim_mm': [],
'zdim_mm': [],
'xdim_voxels': [],
'ydim_voxels': [],
'zdim_voxels': [],
'NUM_AUT_DSM_V': [] ,
'NUM_AUT_MALE_DSM_V': [] ,
'NUM_AUT_FEMALE_DSM_V': [],
'NUM_AUT_AGE_lte12_DSM_V' : [],
'NUM_AUT_AGE_12_18_DSM_V' : [],
'NUM_AUT_AGE_18_24_DSM_V': [],
'NUM_AUT_AGE_24_34_DSM_V' :[],
'NUM_AUT_AGE_34_50_DSM_V' : [],
'NUM_AUT_AGE_gt50_DSM_V' : [],
'NUM_AUT_DSM_IV' : [],
'NUM_AUT_MALE_DSM_IV' : [],
'NUM_AUT_FEMALE_DSM_IV' : [],
'NUM_ASP_DSM_IV' : [],
'NUM_ASP_MALE_DSM_IV' : [],
'NUM_ASP_FEMALE_DSM_IV' : [],
'NUM_PDDNOS_DSM_IV' : [],
'NUM_PDDNOS_MALE_DSM_IV' : [],
'NUM_PDDNOS_FEMALE_DSM_IV' : [],
'NUM_ASP_PDDNOS_DSM_IV' : [],
'NUM_ASP_PDDNOS_MALE_DSM_IV' : [],
'NUM_ASP_PDDNOS_FEMALE_DSM_IV' : [],
'NUM_TD' : [],
'NUM_TD_MALE' : [],
'NUM_TD_FEMALE' : [],
'NUM_TD_AGE_lte12' : [],
'NUM_TD_AGE_12_18' : [],
'NUM_TD_AGE_18_24' : [],
'NUM_TD_AGE_24_34' : [],
'NUM_TD_AGE_34_50' : [],
'NUM_TD_AGE_gt50' : []
})
# NUM_AUT =
# df.loc[(df['DSM_IV_TR'] != 0) & (df['DSM_IV_TR'] != 1) & (df['DSM_IV_TR'] != 2) & (df['DSM_IV_TR'] != 3) & (df['DSM_IV_TR'] != 4)]
for SITE in SITES:
NUM_AUT_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_MALE_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_FEMALE_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_lte12_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] <= 12) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_12_18_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 12) & (df['AGE_AT_SCAN'] <= 18) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_18_24_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 18) & (df['AGE_AT_SCAN'] <= 24) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_24_34_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 24) & (df['AGE_AT_SCAN'] <= 34) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_34_50_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 34) & (df['AGE_AT_SCAN'] <= 50) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_AGE_gt50_DSM_V = df.loc[(df['DX_GROUP'] == 1) & (df['AGE_AT_SCAN'] > 50 ) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_AUT_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 1) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 2) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_PDDNOS_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 3) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_MALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_ASP_PDDNOS_FEMALE_DSM_IV = df.loc[(df['DSM_IV_TR'] == 4) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD = df.loc[(df['DX_GROUP'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_MALE = df.loc[(df['DX_GROUP'] == 2) & (df['SEX'] == 1) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_FEMALE = df.loc[(df['DX_GROUP'] == 2) & (df['SEX'] == 2) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_lte12 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] <= 12) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_12_18 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 12) & (df['AGE_AT_SCAN'] <= 18) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_18_24 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 18) & (df['AGE_AT_SCAN'] <= 24) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_24_34 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 24) & (df['AGE_AT_SCAN'] <= 34) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_34_50 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 34) & (df['AGE_AT_SCAN'] <= 50) & (df['SITE_ID'] == SITE)].shape[0]
NUM_TD_AGE_gt50 = df.loc[(df['DX_GROUP'] == 2) & (df['AGE_AT_SCAN'] > 50 ) & (df['SITE_ID'] == SITE)].shape[0]
tr = 0
volumes = 0
xdim_mm = 0
ydim_mm = 0
zdim_mm = 0
xdim_voxels = 0
ydim_voxels = 0
zdim_voxels = 0
# Accessing scan details
for json_path in scan_param_paths:
extracted_site = json_path.split('/')[-2]
if (SITE).lower() in (extracted_site).lower():
with open(json_path, 'rt') as fp:
print('Site matched with ',json_path)
task_info = json.load(fp)
# Accessing the contents:
tr = task_info['RepetitionTime']
volumes = task_info['NumberofMeasurements']
xdim_mm, ydim_mm = task_info['PixelSpacing'].split('x')
zdim_mm = task_info['SpacingBetweenSlices']
xdim_voxels, ydim_voxels = task_info['AcquisitionMatrix'].split('x')
zdim_voxels = task_info['NumberOfSlices']
_df = pd.DataFrame({
'SITE_NAME': SITE ,
'TR': tr ,
'VOLUMES': volumes,
'xdim_mm':xdim_mm,
'ydim_mm':ydim_mm,
'zdim_mm':zdim_mm,
'xdim_voxels':xdim_voxels,
'ydim_voxels':ydim_voxels,
'zdim_voxels':zdim_voxels,
'NUM_AUT_DSM_V': NUM_AUT_DSM_V ,
'NUM_AUT_MALE_DSM_V': NUM_AUT_MALE_DSM_V ,
'NUM_AUT_FEMALE_DSM_V': NUM_AUT_FEMALE_DSM_V,
'NUM_AUT_AGE_lte12_DSM_V' : NUM_AUT_AGE_lte12_DSM_V,
'NUM_AUT_AGE_12_18_DSM_V' : NUM_AUT_AGE_12_18_DSM_V,
'NUM_AUT_AGE_18_24_DSM_V': NUM_AUT_AGE_18_24_DSM_V,
'NUM_AUT_AGE_24_34_DSM_V' :NUM_AUT_AGE_24_34_DSM_V,
'NUM_AUT_AGE_34_50_DSM_V' : NUM_AUT_AGE_34_50_DSM_V,
'NUM_AUT_AGE_gt50_DSM_V' : NUM_AUT_AGE_gt50_DSM_V,
'NUM_AUT_DSM_IV' : NUM_AUT_DSM_IV,
'NUM_AUT_MALE_DSM_IV' : NUM_AUT_MALE_DSM_IV,
'NUM_AUT_FEMALE_DSM_IV' : NUM_AUT_FEMALE_DSM_IV,
'NUM_ASP_DSM_IV' : NUM_ASP_DSM_IV,
'NUM_ASP_MALE_DSM_IV' : NUM_ASP_MALE_DSM_IV,
'NUM_ASP_FEMALE_DSM_IV' : NUM_ASP_FEMALE_DSM_IV,
'NUM_PDDNOS_DSM_IV' : NUM_PDDNOS_DSM_IV,
'NUM_PDDNOS_MALE_DSM_IV' : NUM_PDDNOS_MALE_DSM_IV,
'NUM_PDDNOS_FEMALE_DSM_IV' : NUM_PDDNOS_FEMALE_DSM_IV,
'NUM_ASP_PDDNOS_DSM_IV' : NUM_ASP_PDDNOS_DSM_IV,
'NUM_ASP_PDDNOS_MALE_DSM_IV' : NUM_ASP_PDDNOS_MALE_DSM_IV,
'NUM_ASP_PDDNOS_FEMALE_DSM_IV' : NUM_ASP_PDDNOS_FEMALE_DSM_IV,
'NUM_TD' : NUM_TD,
'NUM_TD_MALE' : NUM_TD_MALE,
'NUM_TD_FEMALE' : NUM_TD_FEMALE,
'NUM_TD_AGE_lte12' : NUM_TD_AGE_lte12,
'NUM_TD_AGE_12_18' : NUM_TD_AGE_12_18,
'NUM_TD_AGE_18_24' : NUM_TD_AGE_18_24,
'NUM_TD_AGE_24_34' : NUM_TD_AGE_24_34,
'NUM_TD_AGE_34_50' : NUM_TD_AGE_34_50,
'NUM_TD_AGE_gt50' : NUM_TD_AGE_gt50
},index=[0],columns = [ 'SITE_NAME',
'TR',
'VOLUMES',
'xdim_mm',
'ydim_mm',
'zdim_mm',
'xdim_voxels',
'ydim_voxels',
'zdim_voxels',
'NUM_AUT_DSM_V',
'NUM_AUT_MALE_DSM_V',
'NUM_AUT_FEMALE_DSM_V',
'NUM_AUT_AGE_lte12_DSM_V',
'NUM_AUT_AGE_12_18_DSM_V',
'NUM_AUT_AGE_18_24_DSM_V',
'NUM_AUT_AGE_24_34_DSM_V',
'NUM_AUT_AGE_34_50_DSM_V',
'NUM_AUT_AGE_gt50_DSM_V',
'NUM_AUT_DSM_IV',
'NUM_AUT_MALE_DSM_IV',
'NUM_AUT_FEMALE_DSM_IV',
'NUM_ASP_DSM_IV',
'NUM_ASP_MALE_DSM_IV',
'NUM_ASP_FEMALE_DSM_IV',
'NUM_PDDNOS_DSM_IV',
'NUM_PDDNOS_MALE_DSM_IV',
'NUM_PDDNOS_FEMALE_DSM_IV',
'NUM_ASP_PDDNOS_DSM_IV',
'NUM_ASP_PDDNOS_MALE_DSM_IV',
'NUM_ASP_PDDNOS_FEMALE_DSM_IV',
'NUM_TD',
'NUM_TD_MALE',
'NUM_TD_FEMALE',
'NUM_TD_AGE_lte12',
'NUM_TD_AGE_12_18',
'NUM_TD_AGE_18_24',
'NUM_TD_AGE_24_34',
'NUM_TD_AGE_34_50',
'NUM_TD_AGE_gt50'])
data_frame = data_frame.append(_df, ignore_index=True)[_df.columns.tolist()]
# df = pd.DataFrame(raw_data, columns = [])
# Sanity Check
# NUM_AUT_DSM_V.shape[0] + NUM_TD.shape[0]
# df.loc[(df['DSM_IV_TR'] == 0)].shape[0] + NUM_AUT_DSM_V.shape[0] # Not exhaustive
# 'MAX_MUN'.lower() in '/home1/varunk/data/ABIDE1/RawDataBIDs/MaxMun_a/task-rest_bold.json'.lower()
_df
data_frame
# Save the csv file
data_frame.to_csv('demographics.csv')
Explanation: Convention:
DX_GROUP : 1=Autism, 2=Control
DSM_IV_TR : 0=TD, 1=Autism, 2=Asperger's, 3=PDD-NOS, 4=Asperger's or PDD-NOS
SEX : 1=Male, 2=Female
End of explanation
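A toy illustration of filtering with this convention — the rows below are invented; only the column names and codes follow the convention above:

```python
import pandas as pd

# Invented rows; the columns and codes follow the convention above
# (DX_GROUP: 1=Autism, 2=Control; SEX: 1=Male, 2=Female).
df_toy = pd.DataFrame({
    'SUB_ID': [1, 2, 3, 4],
    'DX_GROUP': [1, 2, 1, 2],
    'SEX': [1, 1, 2, 2],
    'AGE_AT_SCAN': [10.0, 17.5, 25.0, 11.2],
})

# Male autistic participants aged 18 or under, mirroring the group filters
# used throughout this notebook.
aut_male_lte18 = df_toy.loc[(df_toy['DX_GROUP'] == 1) &
                            (df_toy['SEX'] == 1) &
                            (df_toy['AGE_AT_SCAN'] <= 18)]
print(aut_male_lte18['SUB_ID'].tolist())
# [1]
```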
# df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
# df = df.sort_values(['SUB_ID'])
# df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
# df_td_lt18_m_eyesopen;
# df_td_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 2)]
# df_td_lt18_m_eyesclosed;
# df_td_lt18_m_eyesopen;
# df_td_lt18_m_eyesclosed;
# Reading TR values
tr_path = '/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/tr_paths/tr_list.npy'
tr = np.load(tr_path)
np.unique(tr)
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
# plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
np.unique(tr)
df = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df = df.sort_values(['SUB_ID'])
df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen;
df_td_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_td_lt18_m_eyesclosed;
df_aut_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_aut_lt18_m_eyesopen;
df_aut_lt18_m_eyesclosed = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 2)]
df_aut_lt18_m_eyesclosed;
df_td_lt18_m_eyesopen_sub_id = df_td_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
df_td_lt18_m_eyesclosed_sub_id = df_td_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
df_aut_lt18_m_eyesopen_sub_id = df_aut_lt18_m_eyesopen.as_matrix(['SUB_ID']).squeeze()
df_aut_lt18_m_eyesclosed_sub_id = df_aut_lt18_m_eyesclosed.as_matrix(['SUB_ID']).squeeze()
import re
sub_id = []
atlas_paths = np.load('/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/atlas_paths/atlas_file_list.npy')
for path in atlas_paths:
sub_id_extracted = re.search('.+_subject_id_(\d+)', path).group(1)
sub_id.append(sub_id_extracted)
sub_id = list(map(int, sub_id))
# df_sub_id = df.as_matrix(['SUB_ID']).squeeze()
# Number of TD subjects with Age 12 to 18
df_td_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] >=12) &(df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen.shape
# Number of Autistic subjects with Age 12 to 18
df_aut_lt18_m_eyesopen = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] >=12) &(df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 1) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_aut_lt18_m_eyesopen.shape
# tr[np.where(df_sub_id == df_td_lt18_m_eyesopen_sub_id)]
# np.isin(sub_id,df_td_lt18_m_eyesopen_sub_id)
tr1 = tr[np.isin(sub_id, df_aut_lt18_m_eyesopen_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr1, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr2 = tr[np.isin(sub_id, df_td_lt18_m_eyesopen_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr2, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr3 = tr[np.isin(sub_id, df_aut_lt18_m_eyesclosed_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr3, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
tr4 = tr[np.isin(sub_id, df_td_lt18_m_eyesclosed_sub_id)]
bins = np.arange(0,3.5,0.1)
res = plt.hist(tr4, rwidth=0.3, align='left', bins= bins)
# plt.xticks([0,0.5,1,1.5,2,2.5,3])
plt.xlabel('TR')
plt.ylabel('Number of participants')
plt.title('Frequency distribution of TRs')
Explanation: Group Stats
The following section checks the stats of participants lying in the following bins:
Autistic(DSM-IV), Males, Age <=18, Eyes Closed
Autistic(DSM-IV), Males, Age <=18, Eyes Open
End of explanation
df_td_lt18_m_eyesopen_age = df_td_lt18_m_eyesopen.as_matrix(['AGE_AT_SCAN']).squeeze()
df_td_lt18_m_eyesclosed_age = df_td_lt18_m_eyesclosed.as_matrix(['AGE_AT_SCAN']).squeeze()
df_aut_lt18_m_eyesopen_age = df_aut_lt18_m_eyesopen.as_matrix(['AGE_AT_SCAN']).squeeze()
df_aut_lt18_m_eyesclosed_age = df_aut_lt18_m_eyesclosed.as_matrix(['AGE_AT_SCAN']).squeeze()
bins = np.arange(0,20,1)
# res = plt.hist(df_td_lt18_m_eyesopen_age, rwidth=0.3, align='left')
# res2 = plt.hist(df_aut_lt18_m_eyesopen_age, rwidth=0.3, align='left', bins= bins)
# # plt.xticks([0,0.5,1,1.5,2,2.5,3])
# plt.xlabel('TR')
# plt.ylabel('Number of participants')
# plt.title('Frequency distribution of TRs')
# import random
# import numpy
from matplotlib import pyplot
# x = [random.gauss(3,1) for _ in range(400)]
# y = [random.gauss(4,2) for _ in range(400)]
# bins = numpy.linspace(-10, 10, 100)
pyplot.hist(df_td_lt18_m_eyesopen_age, alpha=0.5,bins=bins, label='TD',rwidth=0.1, align='left')
pyplot.hist(df_aut_lt18_m_eyesopen_age,alpha=0.5, bins=bins, label='AUT',rwidth=0.1,align='right')
pyplot.legend(loc='upper right')
pyplot.xlabel('AGE')
pyplot.show()
pyplot.hist(df_td_lt18_m_eyesclosed_age, alpha=0.5,bins=bins, label='TD',rwidth=0.1, align='left')
pyplot.hist(df_aut_lt18_m_eyesclosed_age,alpha=0.5, bins=bins, label='AUT',rwidth=0.1,align='right')
pyplot.legend(loc='upper right')
pyplot.xlabel('AGE')
pyplot.show()
Explanation: AGE
End of explanation
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([df_td_lt18_m_eyesopen_age,df_aut_lt18_m_eyesopen_age])
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([df_td_lt18_m_eyesclosed_age, df_aut_lt18_m_eyesclosed_age])
Explanation: Box Plots:
https://www.wellbeingatschool.org.nz/information-sheet/understanding-and-interpreting-box-plots
End of explanation
eyes_open_age = np.concatenate((df_td_lt18_m_eyesopen_age,df_aut_lt18_m_eyesopen_age))
eyes_closed_age = np.concatenate((df_td_lt18_m_eyesclosed_age,df_aut_lt18_m_eyesclosed_age))
pyplot.yticks(np.arange(0,20,1))
res = pyplot.boxplot([eyes_open_age, eyes_closed_age])
Explanation: Eyes Open vs Closed
End of explanation
from scipy import stats
print(stats.ttest_ind(eyes_open_age,eyes_closed_age, equal_var = False))
print('Mean: ',np.mean(eyes_open_age), np.mean(eyes_closed_age))
print('Std: ',np.std(eyes_open_age), np.std(eyes_closed_age))
Explanation: Stats: Differences in Ages of closed vs open
End of explanation
# stats.ttest_ind(eyes_open_age,eyes_closed_age, equal_var = False)
eyes_open_tr = np.concatenate((tr1,tr2))
eyes_closed_tr = np.concatenate((tr3,tr4))
print(stats.ttest_ind(eyes_open_tr,eyes_closed_tr, equal_var = False))
print('Mean: ',np.mean(eyes_open_tr), np.mean(eyes_closed_tr))
print('Std: ',np.std(eyes_open_tr), np.std(eyes_closed_tr))
Explanation: Result:
Mean age is significantly different between the two groups. That may be the reason for discrepancies in regions.
Stats: Differences in TR of closed vs open
End of explanation
print(stats.ttest_ind(df_aut_lt18_m_eyesopen_age, df_td_lt18_m_eyesopen_age, equal_var = False))
print('Mean: ',np.mean(df_aut_lt18_m_eyesopen_age), np.mean(df_td_lt18_m_eyesopen_age))
print('Std: ',np.std(df_aut_lt18_m_eyesopen_age), np.std(df_td_lt18_m_eyesopen_age))
Explanation: Result:
TRs of the two groups are also significantly different
Age differences in AUT vs TD
Eyes Open
End of explanation
print(stats.ttest_ind(df_aut_lt18_m_eyesclosed_age, df_td_lt18_m_eyesclosed_age, equal_var = False))
print('Mean: ',np.mean(df_aut_lt18_m_eyesclosed_age),np.mean(df_td_lt18_m_eyesclosed_age))
print('Std: ',np.std(df_aut_lt18_m_eyesclosed_age),np.std(df_td_lt18_m_eyesclosed_age))
Explanation: Result:
Age difference not significant for eyes open
Eyes Closed
End of explanation
motion_params_npy = '/home1/varunk/results_again_again/ABIDE1_Preprocess_Datasink/motion_params_paths/motion_params_file_list.npy'
mot_params_paths = np.load(motion_params_npy)
in_file = mot_params_paths[0]
trans_x = []
trans_y = []
trans_z = []
rot_x = []
rot_y = []
rot_z = []
# for in_file in mot_params_paths:
with open(in_file) as f:
for line in f:
line = line.split(' ')
print(line)
trans_x.append(float(line[6]))
trans_y.append(float(line[8]))
trans_z.append(float(line[10]))
rot_x.append(float(line[0]))
rot_y.append(float(line[2]))
rot_z.append(float(line[4]))
float('0.0142863')
max(rot_y)
Explanation: Result:
Age difference not significant for eyes closed
Motion Parameters
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=fsl;cda6e2ea.1112
Format: rot_x, rot_y, rot_z, trans_x, trans_y, trans_z
End of explanation
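The fixed indices above (0, 2, 4, 6, 8, 10) only work because the params file happens to separate columns with double spaces, which `split(' ')` turns into empty strings. A more robust sketch (assuming the same rot_x rot_y rot_z trans_x trans_y trans_z column order) splits on any whitespace:

```python
def parse_motion_params(lines):
    """Parse FSL-style motion parameter lines into six lists.

    Assumes each line holds six whitespace-separated floats in the
    order: rot_x rot_y rot_z trans_x trans_y trans_z.
    """
    keys = ['rot_x', 'rot_y', 'rot_z', 'trans_x', 'trans_y', 'trans_z']
    params = {k: [] for k in keys}
    for line in lines:
        vals = [float(v) for v in line.split()]  # split() handles runs of spaces
        for key, val in zip(keys, vals):
            params[key].append(val)
    return params

# Hypothetical one-line file for illustration
demo = parse_motion_params(['0.01  -0.02  0.00  0.5  -0.3  0.1'])
```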
# Load demographics file
df_demographics = pd.read_csv('/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv')
# df_demographics
df_demographics_volumes = df_demographics.as_matrix(['SITE_NAME','VOLUMES']).squeeze()
df_demographics_volumes
df_phenotype = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
df_phenotype = df_phenotype.sort_values(['SUB_ID'])
volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
bins_volumes_AUT = []
bins_volumes_TD = []
for counter, _bin in enumerate(volumes_bins):
df_demographics_volumes_selected_bin = df_demographics_volumes[np.where(np.logical_and((df_demographics_volumes[:,1] >= _bin[0]),(df_demographics_volumes[:,1] <= _bin[1])))]
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
for site in df_demographics_volumes_selected_bin:
print(site[0])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 1) & (df_phenotype['SITE_ID'] == site[0])]])
selected_TD = pd.concat([selected_TD,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == site[0])]]) # fixed: was selected_AUT, which leaked AUT rows into the TD set
bins_volumes_AUT.append(selected_AUT)
bins_volumes_TD.append(selected_TD)
f = bins_volumes_AUT[0]
# f.loc[[2,3,4,5]]
f
f.iloc[[2,3,4,5,7]]
# num_bins = 4
print('Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_volumes_AUT)))
for i in range(len(bins_volumes_AUT)):
ratio[i] = float(bins_volumes_TD[i].shape[0])/bins_volumes_AUT[i].shape[0] # float() guards against Python 2 integer division
print(volumes_bins[i],bins_volumes_TD[i].shape[0],bins_volumes_AUT[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_volumes_AUT)))
print('Range ','TD ','AUT ')
for i in range(len(bins_volumes_AUT)):
new_TD[i] = np.ceil(bins_volumes_AUT[i].shape[0] * min_ratio)
print(volumes_bins[i],new_TD[i],bins_volumes_AUT[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_volumes_TD)):
idx = np.arange(len(bins_volumes_TD[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_volumes_TD[i].iloc[idx]])
selected_df_TD= selected_df_TD.sort_values(['SUB_ID'])
# print(idx)
# Sanity check to see of no subjects are repeated
# subid = selected_df_TD.sort_values(['SUB_ID']).as_matrix(['SUB_ID']).squeeze()
# len(np.unique(subid)) == len(subid)
# Sanity check to see of the number of subjects are same as expected
# len(subid) == (89 + 105 + 109 + 56)
# Sanity check so that no subject index is repeated
# len(np.unique(TD_idx_list[3])) == len(TD_idx_list[3] )
# sanity check to check the new number of TD subjects in each Volumes bin
# len(TD_idx_list[3]) == 56
selected_df_TD
Explanation: Matching based on Volumes
Volume bins
0 - 150
150 - 200
200 - 250
250 - 300
End of explanation
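The site-to-bin assignment below can also be sketched with `np.digitize` (hypothetical per-site volume counts, not the real demographics file). Edges are chosen so each bin is inclusive of its endpoints for integer volume counts, matching the `[0,150],[151,200],...` ranges:

```python
import numpy as np

volumes = np.array([120, 176, 196, 236, 296])   # hypothetical per-site volume counts
edges = np.array([0, 151, 201, 251, 301])       # edges give bins 0-150, 151-200, 201-250, 251-300
bin_idx = np.digitize(volumes, edges) - 1       # 0-based bin index per site
```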
age_bins = np.array([[0,9],[9,12],[12,15],[15,18]])
bins_age_AUT = []
bins_age_TD = []
# for counter, _bin in enumerate(age_bins):
for age in age_bins:
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
print(age[0], age[1])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1)
& (df_phenotype['DSM_IV_TR'] == 1)
& (df_phenotype['AGE_AT_SCAN'] > age[0])
& (df_phenotype['AGE_AT_SCAN'] <= age[1]) ]])
selected_TD = pd.concat([selected_TD,selected_df_TD.loc[(selected_df_TD['SEX'] == 1)
& (selected_df_TD['DSM_IV_TR'] == 0)
& (selected_df_TD['AGE_AT_SCAN'] > age[0])
& (selected_df_TD['AGE_AT_SCAN'] <= age[1]) ]])
bins_age_AUT.append(selected_AUT)
bins_age_TD.append(selected_TD)
bins_age_TD[0]
# num_bins = 4
print('Original data stats')
print('Age Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_age_TD)))
for i in range(len(bins_age_TD)):
ratio[i] = float(bins_age_TD[i].shape[0])/bins_age_AUT[i].shape[0] # float() guards against Python 2 integer division
print(age_bins[i],bins_age_TD[i].shape[0],bins_age_AUT[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_age_AUT)))
print('Matched data stats')
print('Age Range ','TD ','AUT ')
for i in range(len(bins_age_AUT)):
new_TD[i] = np.ceil(bins_age_AUT[i].shape[0] * min_ratio)
print(age_bins[i],new_TD[i],bins_age_AUT[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_age_TD)):
idx = np.arange(len(bins_age_TD[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_age_TD[i].iloc[idx]])
selected_df_TD = selected_df_TD.sort_values(['SUB_ID'])
# print(idx)
selected_df_TD
# selected_df_TD.as_matrix(['SUB_ID']).squeeze()
x = np.arange(10)
np.random.shuffle(x)
x
48 * min_ratio
# selected = selected.loc[(selected['SEX'] == 1) & (selected['DSM_IV_TR'] == 0) & (selected['SITE_ID'] == site[0]) & (selected['EYE_STATUS_AT_SCAN'] == 1)]
selected;
df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == 'TRINITY') & (df_phenotype['EYE_STATUS_AT_SCAN'] == 1)]
Explanation: Matching based on Age
Age bins
0 - 9
9 - 12
12 - 15
15 - 18
End of explanation
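Equivalently, `pd.cut` with right-closed intervals reproduces the `> age[0] & <= age[1]` logic above in one call (toy ages, not the real cohort):

```python
import pandas as pd

ages = pd.Series([7.5, 10.0, 13.2, 16.9])  # hypothetical AGE_AT_SCAN values
bins = pd.cut(ages, bins=[0, 9, 12, 15, 18],
              labels=['0-9', '9-12', '12-15', '15-18'])
# pd.cut uses (low, high] intervals by default, matching the loop above
```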
def volumes_matching(volumes_bins, demographics_file_path, phenotype_file_path):
# Load demographics file
# demographics_file_path = '/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv'
# phenotype_file_path = '/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv'
# volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
df_demographics = pd.read_csv(demographics_file_path)
df_demographics_volumes = df_demographics.as_matrix(['SITE_NAME','VOLUMES']).squeeze()
df_phenotype = pd.read_csv(phenotype_file_path)
df_phenotype = df_phenotype.sort_values(['SUB_ID'])
bins_volumes_AUT_data = []
bins_volumes_TD_data = []
for counter, _bin in enumerate(volumes_bins):
df_demographics_volumes_selected_bin = df_demographics_volumes[np.where(np.logical_and((df_demographics_volumes[:,1] >= _bin[0]),(df_demographics_volumes[:,1] <= _bin[1])))]
selected_AUT = pd.DataFrame()
selected_TD = pd.DataFrame()
for site in df_demographics_volumes_selected_bin:
# print(site[0])
selected_AUT = pd.concat([selected_AUT,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 1) & (df_phenotype['SITE_ID'] == site[0])]])
selected_TD = pd.concat([selected_TD,df_phenotype.loc[(df_phenotype['SEX'] == 1) & (df_phenotype['DSM_IV_TR'] == 0) & (df_phenotype['SITE_ID'] == site[0])]]) # fixed: was selected_AUT, which leaked AUT rows into the TD set
bins_volumes_AUT_data.append(selected_AUT)
bins_volumes_TD_data.append(selected_TD)
selected_df_TD = matching(volumes_bins, bins_volumes_TD_data, bins_volumes_AUT_data)
# sub_ids = selected_df_TD.as_matrix(['SUB_ID']).squeeze()
selected_df_TD.to_csv('selected_TD.csv')
return selected_df_TD
def matching(bins, bins_TD_data, bins_AUT_data):
# num_bins = 4
print('Original data stats')
print('Range ','TD ','AUT ','Ratio TD/AUT')
ratio = np.zeros((len(bins_TD_data)))
for i in range(len(bins_TD_data)):
ratio[i] = float(bins_TD_data[i].shape[0])/bins_AUT_data[i].shape[0] # float() guards against Python 2 integer division
print(bins[i],bins_TD_data[i].shape[0],bins_AUT_data[i].shape[0], ratio[i])
min_ratio = np.min(ratio)
min_index = np.argmin(ratio)
new_TD = np.zeros((len(bins_TD_data)))
print('Matched data stats')
print('Range ','TD ','AUT ')
for i in range(len(bins_TD_data)):
new_TD[i] = np.ceil(bins_AUT_data[i].shape[0] * min_ratio)
print(bins[i],new_TD[i],bins_AUT_data[i].shape[0])
# Now loop over all the bins created and select the specific number of subjects randomly from each TD bin
TD_idx_list = []
selected_df_TD = pd.DataFrame()
for i in range(len(bins_TD_data)):
idx = np.arange(len(bins_TD_data[i]))
np.random.shuffle(idx)
idx = idx[0:int(new_TD[i])]
TD_idx_list.append(idx)
selected_df_TD = pd.concat([selected_df_TD, bins_TD_data[i].iloc[idx]])
selected_df_TD = selected_df_TD.sort_values(['SUB_ID'])
return selected_df_TD
demographics_file_path = '/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv'
phenotype_file_path = '/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv'
volumes_bins = np.array([[0,150],[151,200],[201,250],[251,300]])
volumes_matching(volumes_bins, demographics_file_path, phenotype_file_path)
Explanation: Create a function to do volumes matching
End of explanation
df_phenotype.loc[(df_phenotype['SITE_ID'] == 'TRINITY')];
df_demographics_volumes_selected_bin
Explanation: Recycle Bin
End of explanation
df_phenotype = pd.read_csv('/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv') # , index_col='SUB_ID'
# df_phenotype = df.as_matrix(['SITE_ID']).squeeze()
df = df.sort_values(['SUB_ID'])
df_td_lt18_m_eyesopen_vol_100_150 = df.loc[(df['SEX'] == 1) & (df['AGE_AT_SCAN'] <=18) & (df['DSM_IV_TR'] == 0) & (df['EYE_STATUS_AT_SCAN'] == 1)]
df_td_lt18_m_eyesopen_vol_100_150;
np.unique(df_phenotype)
np.mean(eyes_open_tr), np.mean(eyes_closed_tr)
df_td_lt18_m_eyesopen_age
df_td_lt18_m_eyesopen_sub_id
tr[637]
'50003' in X[1]
Explanation: Extract the sub_id where volume lies in a particular bin
End of explanation |
Description:
Computer Vision to find chess squares in a screenshot
Link to Github source code
The goal is to build a Reddit bot that listens on /r/chess for posts with an image in it (perhaps checking also for a statement "white/black to play" and an image link)
It then takes the image, uses some CV to find a chessboard on it, and splits it up into
a set of images of squares. These are the inputs to the tensorflow CNN
which will return probability of which piece is on it (or empty)
Dataset will include chessboard squares from chess.com, lichess
Different styles of each, all the pieces
Generate synthetic data via added noise
Step2: Load image
Let's first load a simple chessboard image taken off of reddit, we'll start simple, with the board filling up the entire space. Let's get the imports out of the way
Step11: We need to find the chessboard squares within the image (assuming images will vary, boards will vary in color, etc. between different posts on reddit). An assumption we can make that simplifies things greatly is to assume the chessboards will be aligned with the image (orthorectified), so we only need to search for horizontal and vertical lines.
One way is to use horizontal and vertical gradients, and then a simplified Hough transform on those gradient images to find the lines.
Step12: Now that we've got our kernels ready for convolution, let's create our tf variables.
Step13: Let's look at the gradients, we apply opening to them also to clean up noise
Step14: Looks pretty good, now how to find lines? Well, with a Hough transform we resample into a parameter space of lines based on two variables, e.g. $r$ and $\theta$. In our case we already know we're looking for vertical and horizontal lines, so instead of a 2D space we just need two 1D spaces. In fact, we can do this simply by summing along the axes of each gradient image.
Originally I'd taken the absolute value of the gradients and found all lines, but a cool trick that takes advantage of chessboard patterns is that the internal chessboard lines always alternate. So if we take the product of the summed positive and negative gradients along each axis, those alternating lines will stand out (a one-sided edge has a gradient of only one sign, so its product is near zero).
Step15: Let's plot the responses of the summed gradients
Step20: Awesome, they show up clear as day. Since the hough responses are normalized, let's arbitrarily threshold at three-fifths of the maximum response.
Step22: Cool, we've got a set of lines now. We need to figure out which lines are associated with the chessboard, then split up the image into individual squares for feeding into the tensorflow CNN.
Step23: Awesome! We have squares, let's save them as 32x32 grayscale images in a subfolder with the same name as the image | Python Code:
import tensorflow as tf
import numpy as np
np.set_printoptions(suppress=True)
sess = tf.InteractiveSession()
Explanation: Computer Vision to find chess squares in a screenshot
Link to Github source code
The goal is to build a Reddit bot that listens on /r/chess for posts with an image in it (perhaps checking also for a statement "white/black to play" and an image link)
It then takes the image, uses some CV to find a chessboard on it, and splits it up into
a set of images of squares. These are the inputs to the tensorflow CNN
which will return probability of which piece is on it (or empty)
Dataset will include chessboard squares from chess.com, lichess
Different styles of each, all the pieces
Generate synthetic data via added noise:
* change in coloration
* highlighting
* occlusion from lines etc.
Take most probable set from TF response, use that to generate a FEN of the
board, and bot comments on thread with FEN and link to lichess analysis.
A lot of the tensorflow code here is heavily adapted from the tensorflow tutorials
Start TF session
End of explanation
# Imports for visualization
import PIL.Image
from cStringIO import StringIO
from IPython.display import clear_output, Image, display
import scipy.ndimage as nd
import scipy.signal
def display_array(a, fmt='jpeg', rng=[0,1]):
Display an array as a picture.
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
# File
# img_file = 'img1.png'
# img_file = 'img2.png'
# img_file = 'img3.gif'
# img_file = 'img4.jpg'
# img_file = 'img7.png'
# img_file = 'img9.png' # Doesn't work anymore due to non-alternating checkerboard lines
# Bad fit example
# img_file = 't1.png'
img_file = 'bkn5nn4.png'
# img_file = 'lichess_5.png'
# folder = "chessboards/input_chessboards"
# folder = "chessboards/test_chessboards"
folder = "."
img = PIL.Image.open("%s/%s" % (folder,img_file))
print "Loaded %s (%dpx x %dpx)" % \
(img_file, img.size[0], img.size[1])
# Resize if image larger than 2k pixels on a side
if img.size[0] > 2000 or img.size[1] > 2000:
print "Image too big (%d x %d)" % (img.size[0], img.size[1])
new_size = 500.0 # px
if img.size[0] > img.size[1]:
# resize by width to new limit
ratio = new_size / img.size[0]
else:
# resize by height
ratio = new_size / img.size[1]
print "Reducing by factor of %.2g" % (1./ratio)
img = img.resize(tuple(int(s * ratio) for s in img.size), PIL.Image.ANTIALIAS) # img.size is a tuple, so img.size * ratio raises TypeError; ANTIALIAS is the resample filter (ADAPTIVE is a palette flag)
print "New size: (%d x %d)" % (img.size[0], img.size[1])
# See original image
display_array(np.asarray(img), rng=[0,255])
# Convert to grayscale and array
a = np.asarray(img.convert("L"), dtype=np.float32)
# Display array
display_array(a, rng=[0,255])
Explanation: Load image
Let's first load a simple chessboard image taken off of reddit, we'll start simple, with the board filling up the entire space. Let's get the imports out of the way
End of explanation
def make_kernel(a):
Transform a 2D array into a convolution kernel
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
A simplified 2D convolution operation
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def gradientx(x):
Compute the x gradient of an array
gradient_x = make_kernel([[-1.,0., 1.],
[-1.,0., 1.],
[-1.,0., 1.]])
return simple_conv(x, gradient_x)
def gradienty(x):
Compute the x gradient of an array
gradient_y = make_kernel([[-1., -1, -1],[0.,0,0], [1., 1, 1]])
return simple_conv(x, gradient_y)
def corners(x):
Find chess square corners in an array
chess_corner = make_kernel([[-1., 0, 1],[0., 0., 0.], [1.,0, -1]])
return simple_conv(x, chess_corner)
# Following are meant for binary images
def dilate(x, size=3):
Dilate
kernel = make_kernel(np.ones([size,size], dtype=np.float32))
return tf.clip_by_value(simple_conv(x, kernel),
np.float32(1),
np.float32(2))-np.float32(1)
def erode(x, size=3):
Erode
kernel = make_kernel(np.ones([size,size]))
return tf.clip_by_value(simple_conv(x, kernel),
np.float32(size*size-1),
np.float32(size*size))-np.float32(size*size-1)
def opening(x, size=3):
return dilate(erode(x,size),size)
def closing(x, size=3):
return erode(dilate(x,size),size)
def skeleton(x, size=3):
Skeletonize
return tf.clip_by_value(erode(x) - opening(erode(x)),
0.,
1.)
Explanation: We need to find the chessboard squares within the image (assuming images will vary, boards will vary in color, etc. between different posts on reddit). An assumption we can make that simplifies things greatly is to assume the chessboards will be aligned with the image (orthorectified), so we only need to search for horizontal and vertical lines.
One way is to use horizontal and vertical gradients, and then a simplified Hough transform on those gradient images to find the lines.
End of explanation
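As a quick sanity check on the gradient kernels (a plain-NumPy/SciPy sketch using `correlate2d`, which matches TF's conv2d since neither flips the kernel), the x-gradient kernel responds strongly along a synthetic vertical edge and is zero over flat regions:

```python
import numpy as np
from scipy.signal import correlate2d

# Same x-gradient kernel as make_kernel builds above
kx = np.array([[-1., 0., 1.],
               [-1., 0., 1.],
               [-1., 0., 1.]])

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 255.

gx = correlate2d(img, kx, mode='same')
# Response concentrates on the columns straddling the edge (value 3*255)
# and is zero over the flat interior
```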
# Get our grayscale image matrix
A = tf.Variable(a)
# Get X & Y gradients and subtract opposite gradient
# Strongest response where gradient is unidirectional
# clamp into range 0-1
# Dx = tf.clip_by_value(np.abs(gradientx(A)) - np.abs(gradienty(A)),
# 0., 1.)
# Dy = tf.clip_by_value(np.abs(gradienty(A)) - np.abs(gradientx(A)),
# 0., 1.)
Dx = gradientx(A)
Dy = gradienty(A)
# Dxy = np.abs(gradientx(A) * gradienty(A))
# Dc = np.abs(corners(A))
# Initialize state to initial conditions
tf.initialize_all_variables().run()
Explanation: Now that we've got our kernels ready for convolution, let's create our tf variables.
End of explanation
display_array(Dx.eval(), rng=[-255,255])
display_array(Dy.eval(), rng=[-255,255])
Explanation: Let's look at the gradients, we apply opening to them also to clean up noise
End of explanation
Dx_pos = tf.clip_by_value(Dx, 0., 255., name="dx_positive")
Dx_neg = tf.clip_by_value(Dx, -255., 0., name='dx_negative')
Dy_pos = tf.clip_by_value(Dy, 0., 255., name="dy_positive")
Dy_neg = tf.clip_by_value(Dy, -255., 0., name='dy_negative')
hough_Dx = tf.reduce_sum(Dx_pos, 0) * tf.reduce_sum(-Dx_neg, 0) / (a.shape[0]*a.shape[0])
hough_Dy = tf.reduce_sum(Dy_pos, 1) * tf.reduce_sum(-Dy_neg, 1) / (a.shape[1]*a.shape[1])
# Normalized to 0-255*255=65025 range
Explanation: Looks pretty good, now how to find lines? Well, with a Hough transform we resample into a parameter space of lines based on two variables, e.g. $r$ and $\theta$. In our case we already know we're looking for vertical and horizontal lines, so instead of a 2D space we just need two 1D spaces. In fact, we can do this simply by summing along the axes of each gradient image.
Originally I'd taken the absolute value of the gradients and found all lines, but a cool trick that takes advantage of chessboard patterns is that the internal chessboard lines always alternate. So if we take the product of the summed positive and negative gradients along each axis, those alternating lines will stand out (a one-sided edge has a gradient of only one sign, so its product is near zero).
End of explanation
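A 1-D numeric sketch of why this works (made-up gradient sums, shaped like the `hough_Dx` computation above): internal chessboard lines alternate, so both gradient signs fire at the same column, while a one-sided border fires only one sign and its product vanishes:

```python
import numpy as np

# Column sums of the positive part and (negated) negative part of the
# x-gradient -- hypothetical numbers for illustration
pos_sum = np.array([0., 90., 0., 80., 0.])   # dark->light transitions per column
neg_sum = np.array([70., 90., 0., 80., 0.])  # light->dark transitions per column

# Only columns where BOTH signs occur (alternating internal lines)
# survive the product; the one-sided edge at column 0 is zeroed
hough = pos_sum * neg_sum
```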
%matplotlib inline
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1,2,sharey=True, figsize=(15,5))
# Arbitrarily choose three-fifths of the max value as threshold, since they're such strong responses
hough_Dx_thresh = tf.reduce_max(hough_Dx) * 3 / 5
hough_Dy_thresh = tf.reduce_max(hough_Dy) * 3 / 5
ax1.plot(hough_Dx.eval());
ax1.axhline(hough_Dx_thresh.eval(), lw=2,linestyle=':',color='r')
ax1.set_title('Hough Gradient X')
ax1.set_xlabel('Pixel')
ax1.set_xlim(0,a.shape[1])
ax2.plot(hough_Dy.eval())
ax2.axhline(hough_Dy_thresh.eval(), lw=2,linestyle=':',color='r')
ax2.set_title('Hough Gradient Y')
ax2.set_xlim(0,a.shape[0])
ax2.set_xlabel('Pixel');
Explanation: Let's plot the responses of the summed gradients
End of explanation
def checkMatch(lineset):
Checks whether there exists 7 lines of consistent increasing order in set of lines
linediff = np.diff(lineset)
x = 0
cnt = 0
for line in linediff:
# Within 5 px of the other (allowing for minor image errors)
if np.abs(line - x) < 5:
cnt += 1
else:
cnt = 0
x = line
return cnt == 5
def pruneLines(lineset):
Prunes a set of lines to 7 in consistent increasing order (chessboard)
linediff = np.diff(lineset)
x = 0
cnt = 0
start_pos = 0
for i, line in enumerate(linediff):
# Within 5 px of the other (allowing for minor image errors)
if np.abs(line - x) < 5:
cnt += 1
if cnt == 5:
end_pos = i+2
return lineset[start_pos:end_pos]
else:
cnt = 0
x = line
print i, x
start_pos = i
return lineset
def skeletonize_1d(arr):
return skeletonized 1d array (thin to single value, favor to the right)
_arr = arr.copy() # create a copy of array to modify without destroying original
# Go forwards
for i in range(_arr.size-1):
# Will right-shift if they are the same
if arr[i] <= _arr[i+1]:
_arr[i] = 0
# Go reverse
for i in np.arange(_arr.size-1, 0,-1):
if _arr[i-1] > _arr[i]:
_arr[i] = 0
return _arr
def getChessLines(hdx, hdy, hdx_thresh, hdy_thresh):
Returns pixel indices for the 7 internal chess lines in x and y axes
# Blur
gausswin = scipy.signal.gaussian(21,4)
gausswin /= np.sum(gausswin)
# Blur where there is a strong horizontal or vertical line (binarize)
blur_x = np.convolve(hdx > hdx_thresh, gausswin, mode='same')
blur_y = np.convolve(hdy > hdy_thresh, gausswin, mode='same')
skel_x = skeletonize_1d(blur_x)
skel_y = skeletonize_1d(blur_y)
# Find points on skeletonized arrays (where returns 1-length tuple)
lines_x = np.where(skel_x)[0] # vertical lines
lines_y = np.where(skel_y)[0] # horizontal lines
# Prune inconsistent lines
lines_x = pruneLines(lines_x)
lines_y = pruneLines(lines_y)
is_match = len(lines_x) == 7 and len(lines_y) == 7 and checkMatch(lines_x) and checkMatch(lines_y)
return lines_x, lines_y, is_match
# Get chess lines (the call below relaxes the thresholds by 10% to catch slightly weaker line responses)
lines_x, lines_y, is_match = getChessLines(hough_Dx.eval().flatten(), \
hough_Dy.eval().flatten(), \
hough_Dx_thresh.eval()*.9, \
hough_Dy_thresh.eval()*.9)
print "X",lines_x, np.diff(lines_x)
print "Y",lines_y, np.diff(lines_y)
if is_match:
print "Chessboard found"
else:
print "Couldn't find Chessboard"
# Plot blurred 1d hough arrays and skeletonized versions
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(20,5))
ax1.plot(hough_Dx.eval());
ax1.axhline(hough_Dx_thresh.eval(), lw=2,linestyle=':',color='r')
ax1.set_title('Hough Gradient X')
ax1.set_xlabel('Pixel')
ax1.set_xlim(0,a.shape[1])
ax2.plot(hough_Dy.eval())
ax2.axhline(hough_Dy_thresh.eval(), lw=2,linestyle=':',color='r')
ax2.set_title('Hough Gradient Y')
ax2.set_xlim(0,a.shape[0])
ax2.set_xlabel('Pixel');
# Plot lines for where peaks where found
if len(lines_x < 20):
for hx in lines_x:
ax1.axvline(hx,color='r')
if len(lines_y < 20):
for hy in lines_y:
ax2.axvline(hy,color='r')
plt.imshow(img)
for hx in lines_x:
plt.axvline(hx, color='b', lw=2)
for hy in lines_y:
plt.axhline(hy, color='r', lw=2)
Explanation: Awesome, they show up clear as day. Since the hough responses are normalized, let's arbitrarily threshold at three-fifths of the maximum response.
End of explanation
print "X (vertical)",lines_x, np.diff(lines_x)
print "Y (horizontal)",lines_y, np.diff(lines_y)
def getChessTiles(a, lines_x, lines_y):
Split up input grayscale array into 64 tiles stacked in a 3D matrix using the chess linesets
# Find average square size, round to a whole pixel for determining edge pieces sizes
stepx = np.int32(np.round(np.mean(np.diff(lines_x))))
stepy = np.int32(np.round(np.mean(np.diff(lines_y))))
# Pad edges as needed to fill out chessboard (for images that are partially over-cropped)
# print stepx, stepy
# print "x",lines_x[0] - stepx, "->", lines_x[-1] + stepx, a.shape[1]
# print "y", lines_y[0] - stepy, "->", lines_y[-1] + stepy, a.shape[0]
padr_x = 0
padl_x = 0
padr_y = 0
padl_y = 0
if lines_x[0] - stepx < 0:
padl_x = np.abs(lines_x[0] - stepx)
if lines_x[-1] + stepx > a.shape[1]-1:
padr_x = np.abs(lines_x[-1] + stepx - a.shape[1])
if lines_y[0] - stepy < 0:
padl_y = np.abs(lines_y[0] - stepy)
if lines_y[-1] + stepx > a.shape[0]-1:
padr_y = np.abs(lines_y[-1] + stepy - a.shape[0])
# New padded array
# print "Padded image to", ((padl_y,padr_y),(padl_x,padr_x))
a2 = np.pad(a, ((padl_y,padr_y),(padl_x,padr_x)), mode='edge')
setsx = np.hstack([lines_x[0]-stepx, lines_x, lines_x[-1]+stepx]) + padl_x
setsy = np.hstack([lines_y[0]-stepy, lines_y, lines_y[-1]+stepy]) + padl_y
a2 = a2[setsy[0]:setsy[-1], setsx[0]:setsx[-1]]
setsx -= setsx[0]
setsy -= setsy[0]
# display_array(a2, rng=[0,255])
# print "X:",setsx
# print "Y:",setsy
# Matrix to hold images of individual squares (in grayscale)
# print "Square size: [%g, %g]" % (stepy, stepx)
    squares = np.zeros([stepy, stepx, 64], dtype=np.uint8)  # stepy/stepx are already rounded ints
# For each row
for i in range(0,8):
# For each column
for j in range(0,8):
# Vertical lines
x1 = setsx[i]
x2 = setsx[i+1]
padr_x = 0
padl_x = 0
padr_y = 0
padl_y = 0
if (x2-x1) > stepx:
if i == 7:
x1 = x2 - stepx
else:
x2 = x1 + stepx
elif (x2-x1) < stepx:
if i == 7:
# right side, pad right
padr_x = stepx-(x2-x1)
else:
# left side, pad left
padl_x = stepx-(x2-x1)
# Horizontal lines
y1 = setsy[j]
y2 = setsy[j+1]
if (y2-y1) > stepy:
if j == 7:
y1 = y2 - stepy
else:
y2 = y1 + stepy
elif (y2-y1) < stepy:
if j == 7:
# right side, pad right
padr_y = stepy-(y2-y1)
else:
# left side, pad left
padl_y = stepy-(y2-y1)
# slicing a, rows sliced with horizontal lines, cols by vertical lines so reversed
# Also, change order so its A1,B1...H8 for a white-aligned board
# Apply padding as defined previously to fit minor pixel offsets
squares[:,:,(7-j)*8+i] = np.pad(a2[y1:y2, x1:x2],((padl_y,padr_y),(padl_x,padr_x)), mode='edge')
return squares
if is_match:
# Possibly check np.std(np.diff(lines_x)) for variance etc. as well/instead
print "7 horizontal and vertical lines found, slicing up squares"
squares = getChessTiles(a, lines_x, lines_y)
print "Tiles generated: (%dx%d)*%d" % (squares.shape[0], squares.shape[1], squares.shape[2])
else:
print "Number of lines not equal to 7"
letters = 'ABCDEFGH'
if is_match:
print "Order is row-wise from top left of image going right and down, so a8,b8....a7,b7,c7...h1"
print "Showing 5 random squares..."
for i in np.random.choice(np.arange(64),5,replace=False):
print "#%d: %s%d" % (i, letters[i%8], i/8+1)
display_array(squares[:,:,i],rng=[0,255])
else:
print "Didn't have lines to slice image up."
Explanation: Cool, we've got a set of lines now. We need to figure out which lines are associated with the chessboard, then split up the image into individual squares for feeding into the tensorflow CNN.
End of explanation
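The slicing step above can be sketched on an idealized board with a uniform square size (the sizes here are made up for illustration, not taken from the detected lines):

```python
import numpy as np

# Idealized 64x64 grayscale board with an exact 8-pixel square size
board = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
step = 8

# Stack the 64 squares row-by-row into a (step, step, 64) array,
# mirroring the squares[:, :, k] layout used in the notebook
tiles = np.stack(
    [board[r:r + step, c:c + step]
     for r in range(0, 64, step)
     for c in range(0, 64, step)],
    axis=-1,
)
# tiles.shape -> (8, 8, 64)
```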
import os
img_save_dir = "chessboards/output_tiles/squares_%s" % img_file[:-4]
if not is_match:
print "No squares to save"
else:
if not os.path.exists(img_save_dir):
os.makedirs(img_save_dir)
print "Created dir %s" % img_save_dir
for i in range(64):
sqr_filename = "%s/%s_%s%d.png" % (img_save_dir, img_file[:-4], letters[i%8], i/8+1)
if i % 8 == 0:
print "#%d: saving %s..." % (i, sqr_filename)
# Make resized 32x32 image from matrix and save
PIL.Image.fromarray(squares[:,:,i]) \
.resize([32,32], PIL.Image.ADAPTIVE) \
.save(sqr_filename)
Explanation: Awesome! We have squares, let's save them as 32x32 grayscale images in a subfolder with the same name as the image
End of explanation |
Variational Autoencoder
This scripts contains module for implementing variational autoencoder, the module contains
Step2: mnist_loader
Step3: Test mnist data
Step4: We are generating synthetic data in this project, so all the 55000 samples can be used for training
Step5: xavier_init
This function helps us to set initial weights properly to prevent the updates in deep layers to be too small or too large. In this implementation, we'll sample the weights from a uniform distribution
Step6: Test xavier_init
For a 3*3 neural network, the weights should be sampled from uniform(-1,1)
Step8: vae_init
This function initialize a variational encoder, returns tensorflow session, optimizer, cost function and input data will be returned (for further training)
The architecture can be defined by setting the parameters of the function, the default setup is
Step9: vae_train
This function loads the previously initialized VAE and do the training.
If verbose if set as 1, then every verb_step the program will print out cost information | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import time
from tensorflow.python.client import timeline
%matplotlib inline
Explanation: Variational Autoencoder
This script contains modules for implementing a variational autoencoder. The modules are:
1. mnist_loader: loads mnist data, which will be used for this project
2. xavier_init: initialize weights for vae
3. vae_init: build variational autoencoder, return a tensorflow session
import libraries
End of explanation
FLAGS = tf.app.flags.FLAGS
# number of device count
tf.app.flags.DEFINE_integer('num_cpu_core', 1, 'Number of CPU cores to use')
tf.app.flags.DEFINE_integer('intra_op_parallelism_threads', 1, 'How many ops can be launched in parallel')
tf.app.flags.DEFINE_integer('num_gpu_core', 0, 'Number of GPU cores to use')
device_id = -1 # Global Variable Counter for device_id used
def next_device(use_cpu = True):
    ''' See if there is an available next device;
        Args: use_cpu, global device_id
        Return: device string for the next core (reuses the last one once exhausted)
    '''
    global device_id
    if (use_cpu):
        if ((device_id + 1) < FLAGS.num_cpu_core):
            device_id += 1
        device = '/cpu:%d' % max(device_id, 0)
    else:
        if ((device_id + 1) < FLAGS.num_gpu_core):
            device_id += 1
        device = '/gpu:%d' % max(device_id, 0)
    return device
def mnist_loader():
    '''Load MNIST data in tensorflow readable format.
    The script comes from:
    https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/tutorials/mnist/input_data.py
    '''
import gzip
import os
import tempfile
import numpy
from six.moves import urllib
from six.moves import xrange
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
mnist = read_data_sets('MNIST_data', one_hot=True)
n_samples = mnist.train.num_examples
return (mnist, n_samples)
(mnist, n_samples) = mnist_loader()
Explanation: mnist_loader
End of explanation
print('Number of available data: %d' % n_samples)
Explanation: Test mnist data
End of explanation
x_sample = mnist.test.next_batch(100)[0]
plt.figure(figsize=(8, 4))
for i in range(6):
plt.subplot(2, 3, i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("MNIST Data")
plt.colorbar()
plt.tight_layout()
Explanation: We are generating synthetic data in this project, so all the 55000 samples can be used for training
End of explanation
def xavier_init(neuron_in, neuron_out, constant=1):
low = -constant*np.sqrt(6/(neuron_in + neuron_out))
high = constant*np.sqrt(6/(neuron_in + neuron_out))
return tf.random_uniform((neuron_in, neuron_out), minval=low, maxval=high, dtype=tf.float32)
Explanation: xavier_init
This function helps us to set initial weights properly to prevent the updates in deep layers from being too small or too large. In this implementation, we'll sample the weights from a uniform distribution:
$\mathbf{W} \sim \mathrm{uniform}\left(-\sqrt{\tfrac{6}{n_\mathrm{in}+n_\mathrm{out}}},\ \sqrt{\tfrac{6}{n_\mathrm{in}+n_\mathrm{out}}}\right)$, where $n_\mathrm{in}$ and $n_\mathrm{out}$ are the numbers of input and output neurons.
More detailed explanations of why we use xavier initialization can be found here
End of explanation
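A NumPy-only equivalent of the sampler (the function name here is mine, not part of the notebook) makes the bound explicit:

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng=None):
    # Bound sqrt(6 / (n_in + n_out)) from Glorot & Bengio's uniform scheme
    if rng is None:
        rng = np.random.default_rng(0)
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

W = xavier_uniform(3, 3)
# For a 3x3 layer the bound is sqrt(6/6) = 1, so all weights lie in (-1, 1)
```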
sess_ = tf.Session()
weights = []
for i in range(1000):
weights.append(sess_.run(xavier_init(3,3)))
weights = np.array(weights).reshape((-1,1))
n, bins, patches = plt.hist(weights, bins=20)
plt.xlabel('weight value')
plt.ylabel('counts')
plt.title('Histogram of Weights Initialized by Xavier')
plt.show()
Explanation: Test xavier_init
For a 3*3 neural network, the weights should be sampled from uniform(-1,1)
End of explanation
def vae_init(batch_size=100, learn_rate=0.001, x_in=784, encoder_1=500, encoder_2=500, decoder_1=500, decoder_2=500, z=20):
    '''This function builds a variational autoencoder based on https://jmetzen.github.io/2015-11-27/vae.html
    In consideration of simplicity and future work on optimization, we removed the class structure.
    A tensorflow session, optimizer and cost function as well as input data will be returned.
    '''
# configuration of network
# x_in = 784
# encoder_1 = 500
# encoder_2 = 500
# decoder_1 = 500
# decoder_2 = 500
# z = 20
# input
x = tf.placeholder(tf.float32, [None, x_in])
# initialize weights
# two layers encoder
encoder_h1 = tf.Variable(xavier_init(x_in, encoder_1))
encoder_h2 = tf.Variable(xavier_init(encoder_1, encoder_2))
encoder_mu = tf.Variable(xavier_init(encoder_2, z))
encoder_sigma = tf.Variable(xavier_init(encoder_2, z))
encoder_b1 = tf.Variable(tf.zeros([encoder_1], dtype=tf.float32))
encoder_b2 = tf.Variable(tf.zeros([encoder_2], dtype=tf.float32))
encoder_bias_mu = tf.Variable(tf.zeros([z], dtype=tf.float32))
encoder_bias_sigma = tf.Variable(tf.zeros([z], dtype=tf.float32))
# two layers decoder
decoder_h1 = tf.Variable(xavier_init(z, decoder_1))
decoder_h2 = tf.Variable(xavier_init(decoder_1, decoder_2))
decoder_mu = tf.Variable(xavier_init(decoder_2, x_in))
decoder_sigma = tf.Variable(xavier_init(decoder_2, x_in))
decoder_b1 = tf.Variable(tf.zeros([decoder_1], dtype=tf.float32))
decoder_b2 = tf.Variable(tf.zeros([decoder_2], dtype=tf.float32))
decoder_bias_mu = tf.Variable(tf.zeros([x_in], dtype=tf.float32))
decoder_bias_sigma = tf.Variable(tf.zeros([x_in], dtype=tf.float32))
# compute mean and sigma of z
with tf.device(next_device()):
layer_1 = tf.nn.softplus(tf.add(tf.matmul(x, encoder_h1), encoder_b1))
with tf.device(next_device()):
layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, encoder_h2), encoder_b2))
z_mean = tf.add(tf.matmul(layer_2, encoder_mu), encoder_bias_mu)
z_sigma = tf.add(tf.matmul(layer_2, encoder_sigma), encoder_bias_sigma)
# compute z by drawing sample from normal distribution
eps = tf.random_normal((batch_size, z), 0, 1, dtype=tf.float32)
z_val = tf.add(z_mean, tf.multiply(tf.sqrt(tf.exp(z_sigma)), eps))
# use z to reconstruct the network
with tf.device(next_device()):
layer_1 = tf.nn.softplus(tf.add(tf.matmul(z_val, decoder_h1), decoder_b1))
with tf.device(next_device()):
layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, decoder_h2), decoder_b2))
x_prime = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, decoder_mu), decoder_bias_mu))
# define loss function
# reconstruction lost
recons_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_prime) + (1-x) * tf.log(1e-10 + 1 - x_prime), 1)
# KL distance
    latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_sigma), 1)
# summing two loss terms together
cost = tf.reduce_mean(recons_loss + latent_loss)
# use ADAM to optimize
optimizer = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(cost)
# initialize all variables
init = tf.global_variables_initializer()
#
config_ = tf.ConfigProto(device_count={"CPU": FLAGS.num_cpu_core}, # limit to num_cpu_core CPU usage
inter_op_parallelism_threads = 1,
intra_op_parallelism_threads = FLAGS.intra_op_parallelism_threads,
log_device_placement=True)
# define and return the session
sess = tf.Session(config=config_)
sess.run(init)
return (sess, optimizer, cost, x, x_prime)
Explanation: vae_init
This function initializes a variational autoencoder; the TensorFlow session, optimizer, cost function, and input placeholder are returned (for further training)
The architecture can be defined by setting the parameters of the function, the default setup is:
- input nodes: 784
- 1st layer of encoder: 500
- 2nd layer of encoder: 500
- 1st layer of decoder: 500
- 2nd layer of decoder: 500
- z: 20
End of explanation
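For reference, the two loss terms computed inside the function correspond to the standard VAE objective (writing $\sigma_j^2$ for the exponential of the learned log-variance, $\mu_j$ for the latent mean, and $\hat{x}_i$ for the reconstruction):

```latex
\mathcal{L} = \underbrace{-\sum_i \left[ x_i \log \hat{x}_i + (1 - x_i)\log(1 - \hat{x}_i) \right]}_{\text{reconstruction loss}}
\;\underbrace{-\,\tfrac{1}{2}\sum_j \left( 1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2 \right)}_{\text{KL divergence to } \mathcal{N}(0, I)}
```

The KL term penalizes the encoder's posterior for drifting from the unit Gaussian prior, which is what keeps the latent space well-behaved for sampling.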
def vae_train(sess, optimizer, cost, x, batch_size=100, learn_rate=0.001, x_in=784, encoder_1=500, encoder_2=500, decoder_1=500,
decoder_2=500, z=20, train_epoch=1, verb=1, verb_step=5):
start_time = time.time()
for epoch in range(train_epoch):
avg_cost = 0
total_batch = int(n_samples / batch_size)
for i in range(total_batch):
batch_x, _ = mnist.train.next_batch(batch_size)
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
_, c = sess.run((optimizer, cost), feed_dict={x: batch_x}, options=run_options, run_metadata=run_metadata)
avg_cost += c / n_samples * batch_size
elapsed_time = (time.time() - start_time)* 1000 / verb_step
start_time = time.time()
if verb:
if epoch % verb_step == 0:
# print('Epoch:%04d\tCost=%.2f' % (epoch+1, avg_cost))
print('Epoch:%04d' % (epoch+1), 'cost=', '{:.9f}'.format(avg_cost), 'Elapsed time: ','%.9f' % elapsed_time)
# Create the Timeline object, and write it to a json
tl = timeline.Timeline(run_metadata.step_stats)
ctf = tl.generate_chrome_trace_format()
with open('timeline.json', 'w') as f:
f.write(ctf)
(sess, optimizer, cost, x, x_prime) = vae_init()
vae_train(sess, optimizer, cost, x, train_epoch=10)
x_sample = mnist.test.next_batch(100)[0]
x_reconstruct = sess.run(x_prime, feed_dict={x: x_sample})
plt.figure(figsize=(8, 12))
for i in range(5):
plt.subplot(5, 2, 2*i + 1)
plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Test input")
plt.colorbar()
plt.subplot(5, 2, 2*i + 2)
plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray")
plt.title("Reconstruction")
plt.colorbar()
plt.tight_layout()
sess.close()
Explanation: vae_train
This function loads the previously initialized VAE and does the training.
If verbose is set to 1, the program will print cost information every verb_step epochs
End of explanation |
============================================================================
Decoding in time-frequency space data using the Common Spatial Pattern (CSP)
============================================================================
The time-frequency decomposition is estimated by iterating over raw data that
has been band-passed at different frequencies. This is used to compute a
covariance matrix over each epoch or a rolling time-window and extract the CSP
filtered signals. A linear discriminant classifier is then applied to these
signals.
Step1: Set parameters and read data
Step2: Loop through frequencies, apply classifier and save scores
Step3: Plot frequency results
Step4: Loop through frequencies and time, apply classifier and save scores
Step5: Plot time-frequency results | Python Code:
# Authors: Laura Gwilliams <laura.gwilliams@nyu.edu>
# Jean-Remi King <jeanremi.king@gmail.com>
# Alex Barachant <alexandre.barachant@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, create_info, events_from_annotations
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
Explanation: ============================================================================
Decoding in time-frequency space data using the Common Spatial Pattern (CSP)
============================================================================
The time-frequency decomposition is estimated by iterating over raw data that
has been band-passed at different frequencies. This is used to compute a
covariance matrix over each epoch or a rolling time-window and extract the CSP
filtered signals. A linear discriminant classifier is then applied to these
signals.
End of explanation
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# Extract information from the raw file
sfreq = raw.info['sfreq']
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
# Instantiate label encoder
le = LabelEncoder()
Explanation: Set parameters and read data
End of explanation
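The zip of shifted copies above produces contiguous (fmin, fmax) bands; a standalone sketch of just that step, using the same band edges:

```python
import numpy as np

freqs = np.linspace(5., 25., 8)           # 8 band edges
bands = list(zip(freqs[:-1], freqs[1:]))  # 7 contiguous (fmin, fmax) tuples
# bands[0] starts at 5.0 Hz, bands[-1] ends at 25.0 Hz
```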
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
X = epochs.get_data()
# Save mean scores over folds for each frequency and time window
freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
Explanation: Loop through frequencies, apply classifier and save scores
End of explanation
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
Explanation: Plot frequency results
End of explanation
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
# Roll covariance, csp and lda over time
for t, w_time in enumerate(centered_w_times):
# Center the min and max of the window
w_tmin = w_time - w_size / 2.
w_tmax = w_time + w_size / 2.
# Crop data into time-window of interest
X = epochs.copy().crop(w_tmin, w_tmax).get_data()
# Save mean scores over folds for each frequency and time window
tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
Explanation: Loop through frequencies and time, apply classifier and save scores
End of explanation
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
Explanation: Plot time-frequency results
End of explanation |
Question_3-1-3_Multiclass_Ridge
Janet Matsen
Code notes
Step1: Prepare MNIST training data
Step2: Dev | Python Code:
import numpy as np
import matplotlib as mpl
%matplotlib inline
import pandas as pd
import seaborn as sns
from mnist import MNIST # public package for making arrays out of MINST data.
import sys
sys.path.append('../code/')
from ridge_regression import RidgeBinary
from hyperparameter_explorer import HyperparameterExplorer
from mnist_helpers import mnist_training, mnist_testing, mnist_training_binary
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 4, 3
Explanation: Question_3-1-3_Multiclass_Ridge
Janet Matsen
Code notes:
* Individual regressions are done by instances of RidgeRegression, defined in ridge_regression.py.
* RidgeRegression gets some methods from ClassificationBase, defined in classification_base.py.
* The class HyperparameterExplorer in hyperparameter_explorer is used to tune hyperparameters on training data.
End of explanation
train_X, train_y = mnist_training_binary(2)
print(train_X.shape, train_y.shape)
Explanation: Prepare MNIST training data
End of explanation
hyper_explorer = HyperparameterExplorer(X=train_X, y=train_y, model=RidgeBinary,
validation_split=0.10, score_name='RMSE')
hyper_explorer.train_X.shape
hyper_explorer.train_model(lam=100)
hyper_explorer.train_model(lam=10)
hyper_explorer.train_model(lam=.001)
hyper_explorer.train_model(lam=1e-5)
hyper_explorer.summary
fig, ax = plt.subplots(1, 1, figsize=(4, 3))
plt.semilogx(hyper_explorer.summary['lambda'], hyper_explorer.summary['validation RMSE'],
             linestyle='--', marker='o', c='g', label='validation RMSE')
plt.semilogx(hyper_explorer.summary['lambda'], hyper_explorer.summary['RMSE'],
             linestyle='--', marker='o', c='grey', label='training RMSE')
plt.legend(loc='best')
plt.xlabel('lambda')
plt.ylabel('RMSE')
ax.axhline(y=0, color='k')
Explanation: Dev: make sure a single model runs fine
Explore hyperparameters before training model on all of the training data.
End of explanation |
What is machine learning?
One definition
Step1: What are the features?
- TV
Step2: Linear regression
Pros
Step3: Splitting X and y into training and testing sets
Step4: Linear regression in scikit-learn
Step5: Interpreting model coefficients
Step6: Making predictions
Step7: We need an evaluation metric in order to compare our predictions with the actual values!
Evaluation metric
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors
Step8: Classifications on the iris dataset
Framed as a supervised learning problem
Step9: Logistic regression
For the iris dataset we are predicting categorical data, so linear regression is not a good choice; instead we use logistic regression.
Step10: Evaluation metric
Classification accuracy
Step11: Alternatives
Step12: Apply SVM to iris
Step13: Supervised Learning In-Depth
Step14: Motivating Random Forests
Step15: Ensemble the decision trees
Step16: Unsupervised learning
Introducing K-Means
K Means is an algorithm for unsupervised clustering
Step17: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
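A minimal sketch of that EM-style loop (my own toy implementation on made-up data, not scikit-learn's): the E-step assigns each point to its nearest center, and the M-step moves each center to the mean of its assigned points. Real KMeans adds smarter initialization (k-means++) and convergence checks.

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # E-step: assign each point to the nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # M-step: recompute each center as the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

X = np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]])
centers, labels = kmeans(X, k=2)
# the two tight pairs end up in different clusters
```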
Step18: Let's use scikit-learn for K-means clustering on Iris dataset
Step19: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset
Step20: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution | Python Code:
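A small NumPy-only sketch of that idea (synthetic correlated data made up for illustration, with PCA done via SVD of the centered data rather than scikit-learn's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stretch isotropic noise so one direction carries most of the variance
X = rng.normal(size=(200, 2)) @ np.array([[2.0, 1.0], [0.0, 0.3]])

# Rows of Vt are the principal axes; singular values give their importance
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)  # fraction of variance along each axis
# explained[0] dominates: the first axis describes most of the spread
```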
import pandas as pd
# read CSV file directly from a URL and save the results
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# display the first 5 rows
data.head()
data.shape
Explanation: What is machine learning?
One definition: "Machine learning is the semi-automated extraction of knowledge from data"
Knowledge from data: Starts with a question that might be answerable using data
Automated extraction: A computer provides the insight
Semi-automated: Requires many smart decisions by a human
What are the two main categories of machine learning?
Supervised learning: Making predictions using data
Example: Is a given email "spam" or "ham"?
There is an outcome we are trying to predict
Unsupervised learning: Extracting structure from data
Example: Segment grocery store shoppers into clusters that exhibit similar behaviors
There is no "right answer"
Start with supervised learning
Types of supervised learning
Classification: Predict a categorical response
Regression: Predict a continuous response
End of explanation
# conventional way to import seaborn
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
sns.pairplot(data, x_vars=['TV','radio','newspaper'], y_vars='sales', size=7, aspect=0.7, kind='reg')
Explanation: What are the features?
- TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
- Radio: advertising dollars spent on Radio
- Newspaper: advertising dollars spent on Newspaper
What is the response?
- Sales: sales of a single product in a given market (in thousands of items)
What else do we know?
- Because the response variable is continuous, this is a regression problem.
- There are 200 observations (represented by the rows), and each observation is a single market.
End of explanation
feature_cols = ['TV', 'radio', 'newspaper']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# equivalent command to do this in one line
X = data[['TV', 'radio', 'newspaper']]
# print the first 5 rows
X.head()
print(type(X))
print(X.shape)
# select a Series from the DataFrame
y = data['sales']
# equivalent command that works if there are no spaces in the column name
y = data.sales
# print the first 5 values
y.head()
# check the type and shape of y
print(type(y))
print(y.shape)
Explanation: Linear regression
Pros: fast, no tuning required, highly interpretable, well-understood
Cons: unlikely to produce the best predictive accuracy (presumes a linear relationship between the features and response)
Form of linear regression
$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_nx_n$
$y$ is the response
$\beta_0$ is the intercept
$\beta_1$ is the coefficient for $x_1$ (the first feature)
$\beta_n$ is the coefficient for $x_n$ (the nth feature)
In this case:
$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$
The $\beta$ values are called the model coefficients. These values are "learned" during the model fitting step using the "least squares" criterion. Then, the fitted model can be used to make predictions!
End of explanation
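The coefficients in the formula above are exactly what ordinary least squares recovers; a tiny sketch on made-up noise-free data:

```python
import numpy as np

# y = 2 + 3*x, expressed with a column of ones for the intercept beta_0
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([2.0, 5.0, 8.0])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta -> approximately [2.0, 3.0]
```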
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
Explanation: Splitting X and y into training and testing sets
End of explanation
# import model
from sklearn.linear_model import LinearRegression
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
Explanation: Linear regression in scikit-learn
End of explanation
# print the intercept and coefficients
print(linreg.intercept_)
print(linreg.coef_)
# pair the feature names with the coefficients
list(zip(feature_cols, linreg.coef_))
Explanation: Interpreting model coefficients
End of explanation
y_pred = linreg.predict(X_test)
print(y_pred)
Explanation: Making predictions
End of explanation
from sklearn import metrics
import numpy as np
print(np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
Explanation: We need an evaluation metric in order to compare our predictions with the actual values!
Evaluation metric
Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
End of explanation
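Computing that formula by hand on a made-up pair of vectors makes the metric concrete:

```python
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

rmse = np.sqrt(np.mean((y_true - y_hat) ** 2))
# errors are 0.5, -0.5, 0.0, -1.0 -> mean square 0.375 -> rmse ~ 0.612
```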
# import load_iris function from datasets module
from sklearn.datasets import load_iris
iris = load_iris()
type(iris)
# print the iris data
print(iris.feature_names)
print(len(iris.data))
# print integers representing the species of each observation
print(iris.target)
print(len(iris.target))
# print the encoding scheme for species: 0 = setosa, 1 = versicolor, 2 = virginica
print(iris.target_names)
X = iris.data
# store response vector in "y"
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
Explanation: Classifications on the iris dataset
Framed as a supervised learning problem: Predict the species of an iris using the measurements
Famous dataset for machine learning because prediction is easy
Learn more about the iris dataset: UCI Machine Learning Repository
End of explanation
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred = logreg.predict(X_test)
print(y_pred)
Explanation: Logistic regression
For the iris dataset we are predicting categorical data, so linear regression is not a good choice; instead we use logistic regression.
End of explanation
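Under the hood, logistic regression squashes a linear score through the logistic (sigmoid) function so the output can be read as a class probability; a one-liner sketch:

```python
import numpy as np

def sigmoid(z):
    # maps any real-valued score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# sigmoid(0) -> 0.5; large positive scores approach 1, large negative approach 0
```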
print(metrics.accuracy_score(y_test, y_pred))
Explanation: Evaluation metric
Classification accuracy:
Proportion of correct predictions
Common evaluation metric for classification problems
End of explanation
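The same score can be computed by hand as the fraction of matching labels (toy vectors, for illustration):

```python
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 1]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# 4 of 5 predictions match -> 0.8
```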
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn;
from scipy import stats
import pylab as pl
seaborn.set()
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring')
# Draw three lines that couple separate the data
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
Explanation: Alternatives: Support Vector Machine
Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for classification. SVMs draw a boundary between clusters of data. SVMs attempt to maximize the margin between sets of points. Many lines can be drawn to separate the points above:
End of explanation
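On a tiny separable 1-D example (made up here), a near-hard-margin linear SVC puts the boundary midway between the closest points of each class, and the margin width is 2/||w||:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0], [1.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel='linear', C=1e6).fit(X, y)  # large C ~ hard margin
w = clf.coef_[0][0]
boundary = -clf.intercept_[0] / w  # decision function crosses zero here
margin = 2.0 / abs(w)
# boundary ~ 2.0 (midway between 1 and 3), margin width ~ 2.0
```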
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
from sklearn.svm import SVC
clf = SVC(kernel='linear')
clf.fit(X_train, y_train)
y_pred=clf.predict(X_test)
print(y_pred)
print(metrics.accuracy_score(y_test, y_pred))
Explanation: Apply SVM to iris
End of explanation
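A useful follow-up, sketched below under the same train/test split as above: after fitting, the `SVC` keeps only the training points that define the margin (the support vectors), which you can inspect via `support_vectors_`:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=4)

clf = SVC(kernel='linear').fit(X_train, y_train)

# Only a subset of the training points ends up defining the decision boundary.
print(clf.support_vectors_.shape)
```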
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
Explanation: Supervised Learning In-Depth: Random Forests
Previously we saw a powerful discriminative classifier, Support Vector Machines.
Here we'll take a look at motivating another powerful algorithm. This one is a non-parametric algorithm called Random Forests.
Example: Random Forest for Classifying Digits
Let's start with the hand-written digits data. Let's use that here to test the efficacy of Random Forest classifiers.
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
from sklearn.ensemble import RandomForestClassifier
Explanation: Motivating Random Forests: Decision Trees
Random forests are an example of an ensemble learner built on decision trees.
For this reason we'll start by discussing decision trees themselves.
End of explanation
clf = RandomForestClassifier(n_jobs=2, random_state=0)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
metrics.accuracy_score(ypred, ytest)
plt.imshow(metrics.confusion_matrix(ypred, ytest),
interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
Explanation: Ensemble the decision tress: Random Forest
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
Explanation: Unsupervised learning
Introducing K-Means
K Means is an algorithm for unsupervised clustering: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
End of explanation
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
End of explanation
from sklearn import datasets, cluster
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2)
# load data
iris = datasets.load_iris()
X_iris = iris.data
y_iris = iris.target
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X_iris)
labels = k_means.labels_
# check how many of the samples were correctly labeled
correct_labels = sum(y_iris == labels)
print("Result: %d out of %d samples were correctly labeled." % (correct_labels, y_iris.size))
Explanation: Let's use scikit-learn for K-means clustering on Iris dataset
End of explanation
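One caveat worth noting: K-Means cluster ids are arbitrary, so directly comparing `labels` to `y_iris` can undercount matches when the cluster numbering differs from the class numbering. A hedged sketch of a more robust check uses the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) to find the best one-to-one cluster-to-class mapping first:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn import datasets, cluster

iris = datasets.load_iris()
X, y = iris.data, iris.target

labels = cluster.KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(X)

# Confusion matrix between cluster ids (rows) and true classes (columns).
cm = np.zeros((3, 3), dtype=int)
for c, t in zip(labels, y):
    cm[c, t] += 1

# Maximize the total count on the matched diagonal.
rows, cols = linear_sum_assignment(-cm)
best_correct = cm[rows, cols].sum()
print("Best matching: %d out of %d samples" % (best_correct, y.size))
```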
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting style defaults
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
Explanation: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation |
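The same fitted axes can be used for dimensionality reduction. A minimal sketch on the same kind of random 2-D data: projecting onto the single strongest principal component with `fit_transform`, then mapping back with `inverse_transform`, keeps most of the variance while halving the dimensionality:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)
X = np.dot(rng.random_sample((2, 2)), rng.normal(size=(2, 200))).T

# Keep only the first principal component, then reconstruct in 2-D.
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)
X_approx = pca.inverse_transform(X_reduced)
print(X_reduced.shape, X_approx.shape)
```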
8,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using folium.colormap
A few examples of how to use folium.colormap in choropleths.
Let's load a GeoJSON file, and try to choropleth it.
Step2: Self-defined
You can build a choropleth using a self-defined function.
It has to output a hexadecimal color string of the form #RRGGBB or #RRGGBBAA.
Step3: StepColormap
But to help you define your colormap, we've embedded StepColormap in folium.colormap.
You can simply define the colors you want, and the index (thresholds) that correspond.
Step4: If you specify no index, colors will be set uniformly.
Step5: LinearColormap
But sometimes, you would prefer to have a continuous set of colors. This can be done by LinearColormap.
Step6: Again, you can set the index if you want something irregular.
Step7: If you want to transform a linear map into a step one, you can use the method to_step.
Step8: You can also use more sophisticated rules to create the thresholds.
Step9: And the opposite is also possible with to_linear.
Step10: Build-in
For convenience, we provide a (small) set of built-in linear colormaps, in folium.colormap.linear.
Step11: You can also use them to generate regular StepColormap.
Step12: Of course, you may need to scale the colormaps to your bounds. This is doable with .scale.
Step13: At last, if you want to check them all, simply ask for linear in the notebook.
Step14: Draw a ColorMap on a map
By the way, a ColorMap is also a Folium Element that you can draw on a map. | Python Code:
import os
import json
import folium
import pandas as pd
us_states = os.path.join('data', 'us-states.json')
US_Unemployment_Oct2012 = os.path.join('data', 'US_Unemployment_Oct2012.csv')
geo_json_data = json.load(open(us_states))
unemployment = pd.read_csv(US_Unemployment_Oct2012)
unemployment_dict = unemployment.set_index('State')['Unemployment']
Explanation: Using folium.colormap
A few examples of how to use folium.colormap in choropleths.
Let's load a GeoJSON file, and try to choropleth it.
End of explanation
def my_color_function(feature):
    """Maps low values to green and high values to red."""
    if unemployment_dict[feature['id']] > 6.5:
        return '#ff0000'
    else:
        return '#008000'
m = folium.Map([43, -100], tiles='cartodbpositron', zoom_start=4)
folium.GeoJson(
geo_json_data,
style_function=lambda feature: {
'fillColor': my_color_function(feature),
'color': 'black',
'weight': 2,
'dashArray': '5, 5'
}
).add_to(m)
m.save(os.path.join('results', 'Colormaps_0.html'))
m
Explanation: Self-defined
You can build a choropleth using a self-defined function.
It has to output a hexadecimal color string of the form #RRGGBB or #RRGGBBAA.
End of explanation
import branca.colormap as cm
step = cm.StepColormap(['green', 'yellow', 'red'],
vmin=3, vmax=10, index=[3, 4, 8, 10],
caption='step')
step
m = folium.Map([43, -100], tiles='cartodbpositron', zoom_start=4)
folium.GeoJson(
geo_json_data,
style_function=lambda feature: {
'fillColor': step(unemployment_dict[feature['id']]),
'color': 'black',
'weight': 2,
'dashArray': '5, 5'
}
).add_to(m)
m.save(os.path.join('results', 'Colormaps_1.html'))
m
Explanation: StepColormap
But to help you define your colormap, we've embedded StepColormap in folium.colormap.
You can simply define the colors you want, and the index (thresholds) that correspond.
End of explanation
cm.StepColormap(['r', 'y', 'g', 'c', 'b', 'm'])
Explanation: If you specify no index, colors will be set uniformly.
End of explanation
linear = cm.LinearColormap(['green', 'yellow', 'red'],
vmin=3, vmax=10)
linear
m = folium.Map([43, -100], tiles='cartodbpositron', zoom_start=4)
folium.GeoJson(
geo_json_data,
style_function=lambda feature: {
'fillColor': linear(unemployment_dict[feature['id']]),
'color': 'black',
'weight': 2,
'dashArray': '5, 5'
}
).add_to(m)
m.save(os.path.join('results', 'Colormaps_2.html'))
m
Explanation: LinearColormap
But sometimes, you would prefer to have a continuous set of colors. This can be done by LinearColormap.
End of explanation
cm.LinearColormap(['red', 'orange', 'yellow', 'green'],
index=[0, 0.1, 0.9, 1.0])
Explanation: Again, you can set the index if you want something irregular.
End of explanation
linear.to_step(6)
Explanation: If you want to transform a linear map into a step one, you can use the method to_step.
End of explanation
linear.to_step(
n=6,
data=[30.6, 50, 51, 52, 53, 54, 55, 60, 70, 100],
method='quantiles',
round_method='int'
)
Explanation: You can also use more sophisticated rules to create the thresholds.
End of explanation
step.to_linear()
Explanation: And the opposite is also possible with to_linear.
End of explanation
cm.linear.OrRd
Explanation: Build-in
For convenience, we provide a (small) set of built-in linear colormaps, in folium.colormap.linear.
End of explanation
cm.linear.PuBu.to_step(12)
Explanation: You can also use them to generate regular StepColormap.
End of explanation
cm.linear.YlGn.scale(3, 12)
cm.linear.RdGy.to_step(10).scale(5, 100)
Explanation: Of course, you may need to scale the colormaps to your bounds. This is doable with .scale.
End of explanation
cm.linear
Explanation: At last, if you want to check them all, simply ask for linear in the notebook.
End of explanation
m = folium.Map(tiles='cartodbpositron')
colormap = cm.linear.Set1.scale(0, 35).to_step(10)
colormap.caption = 'A colormap caption'
m.add_child(colormap)
#m.save(os.path.join('results', 'Colormaps_3.html'))
m
Explanation: Draw a ColorMap on a map
By the way, a ColorMap is also a Folium Element that you can draw on a map.
End of explanation |
8,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Fitting generalized linear mixed-effects models using variational inference
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Overview
In this colab we demonstrate how to fit a generalized linear mixed-effects model using variational inference in TensorFlow Probability.
Model family
Generalized linear mixed-effect models (GLMMs) are similar to generalized linear models (GLMs) except that they incorporate sample-specific noise into the predicted linear response. This is useful in part because it allows rarely seen features to share information with more commonly seen features.
As a generative process, a generalized linear mixed-effects model (GLMM) is characterized by:
$$ \begin{align} \text{for } & r = 1\ldots R
Step3: We'll also check whether a GPU is available.
Step5: Obtain dataset:
We load the dataset from TensorFlow Datasets and do some light preprocessing.
Step6: Specializing the GLMM family
In this section we specialize the GLMM family to the task of predicting radon levels. To do this, we first consider the fixed-effect special case of a GLMM
Step7: To make the model a little more sophisticated, including something about geography is probably even better: radon is part of the decay chain of uranium, which may be present in soil, so geography seems important to account for.
$$ \mathbb{E}[\log(\text{radon}_j)] = c + \text{floor_effect}_j + \text{county_effect}_j $$
In pseudocode, we might write:
def estimate_log_radon(floor, county)
Step8: If we fit this model, the county_effect vector would likely end up memorizing the results for counties that had only a few training samples, perhaps overfitting and generalizing poorly.
GLMMs offer a happy middle ground between the two GLMs above. We might consider fitting:
$$ \log(\text{radon}_j) \sim c + \text{floor_effect}_j + \mathcal{N}(\text{county_effect}_j, \text{county_scale}) $$
This model is the same as the first, but we have fixed our likelihood to be a normal distribution, and share the variance across all counties through the (single) variable county_scale. In pseudocode:
def estimate_log_radon(floor, county)
Step9: Specify model
Step10: Specify surrogate posterior
We now put together a surrogate family $q_{\lambda}$, where the parameters $\lambda$ are trainable. In this case, our family is independent multivariate normal distributions, one for each parameter, and $\lambda = \{(\mu_j, \sigma_j)\}$, where $j$ indexes the four parameters.
The method we use to fit the surrogate family uses tf.Variables. We also use tfp.util.TransformedVariable along with Softplus to constrain the (trainable) scale parameters to be positive.
To aid optimization, we initialize these trainable variables.
Step11: Note that this cell can be replaced with tfp.experimental.vi.build_factored_surrogate_posterior, as in:
python
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
event_shape=joint.event_shape_tensor()[
Step12: We plot the estimated mean county effects, along with the uncertainty of each mean, sorted by number of observations (largest on the left). Notice how the uncertainty is small for counties with many observations, but larger for counties that have only one or two.
Step13: Indeed, we can see this directly by plotting the log-number of observations against the estimated standard deviation; the relationship is approximately linear.
Step14: Compare with R's lme4
Step15: The following table summarizes the results.
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']
if TF_Installation == 'TF Nightly':
!pip install -q --upgrade tf-nightly
print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
!pip install -q --upgrade tensorflow
print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
Explanation: Fitting generalized linear mixed-effects models using variational inference
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Linear_Mixed_Effects_Model_Variational_Inference.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
from six.moves import urllib
import matplotlib.pyplot as plt; plt.style.use('ggplot')
import numpy as np
import pandas as pd
import seaborn as sns; sns.set_context('notebook')
import tensorflow_datasets as tfds
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
Explanation: Overview
In this colab we demonstrate how to fit a generalized linear mixed-effects model using variational inference in TensorFlow Probability.
Model family
Generalized linear mixed-effect models (GLMMs) are similar to generalized linear models (GLMs) except that they incorporate sample-specific noise into the predicted linear response. This is useful in part because it allows rarely seen features to share information with more commonly seen features.
As a generative process, a generalized linear mixed-effects model (GLMM) is characterized by:
$$ \begin{align} \text{for } & r = 1\ldots R: \hspace{2.45cm}\text{# for each random-effect group}\ &\begin{aligned} \text{for } &c = 1\ldots |C_r|: \hspace{1.3cm}\text{# for each category ("level") of group $r$}\ &\begin{aligned} \beta_{rc} &\sim \text{MultivariateNormal}(\text{loc}=0_{D_r}, \text{scale}=\Sigma_r^{1/2}) \end{aligned} \end{aligned}\ \text{for } & i = 1 \ldots N: \hspace{2.45cm}\text{# for each sample}\ &\begin{aligned} &\eta_i = \underbrace{\vphantom{\sum_{r=1}^R}x_i^\top\omega}\text{fixed-effects} + \underbrace{\sum{r=1}^R z_{r,i}^\top \beta_{r,C_r(i) }}\text{random-effects} \ &Y_i|x_i,\omega,{z{r,i} , \beta_r}_{r=1}^R \sim \text{Distribution}(\text{mean}= g^{-1}(\eta_i)) \end{aligned} \end{align} $$
where:
$$ \begin{align} R &= \text{number of random-effect groups}\ |C_r| &= \text{number of categories for group $r$}\ N &= \text{number of training samples}\ x_i,\omega &\in \mathbb{R}^{D_0}\ D_0 &= \text{number of fixed-effects}\ C_r(i) &= \text{category (under group $r$) of the $i$th sample}\ z_{r,i} &\in \mathbb{R}^{D_r}\ D_r &= \text{number of random-effects associated with group $r$}\ \Sigma_{r} &\in {S\in\mathbb{R}^{D_r \times D_r} : S \succ 0 }\ \eta_i\mapsto g^{-1}(\eta_i) &= \mu_i, \text{inverse link function}\ \text{Distribution} &=\text{some distribution parameterizable solely by its mean} \end{align} $$
In words, this says that every category of every group is associated with a sample, $\beta_{rc}$, from a multivariate normal. Although the $\beta_{rc}$ draws are always independent, they are only identically distributed within a group $r$; notice there is exactly one $\Sigma_r$ for each $r\in\{1,\ldots,R\}$.
When affinely combined with a sample's group's features, $z_{r,i}$, the result is sample-specific noise on the $i$-th predicted linear response (which is otherwise $x_i^\top\omega$).
When we estimate $\{\Sigma_r:r\in\{1,\ldots,R\}\}$ we're essentially estimating the amount of noise a random-effect group carries which would otherwise drown out the signal present in $x_i^\top\omega$.
There are a variety of options for $\text{Distribution}$ and the inverse link function, $g^{-1}$. Common choices are:
$Y_i\sim\text{Normal}(\text{mean}=\eta_i, \text{scale}=\sigma)$,
$Y_i\sim\text{Binomial}(\text{mean}=n_i \cdot \text{sigmoid}(\eta_i), \text{total_count}=n_i)$, and,
$Y_i\sim\text{Poisson}(\text{mean}=\exp(\eta_i))$.
For more possibilities, see the tfp.glm module.
Variational inference
Unfortunately, finding the maximum likelihood estimates of the parameters $\beta,\{\Sigma_r\}_r^R$ entails a non-analytical integral. To circumvent this problem, we instead:
Define a parameterized family of distributions (the "surrogate density"), denoted $q_{\lambda}$ in the appendix.
Find parameters $\lambda$ so that $q_{\lambda}$ is close to the true target density.
The family of distributions will be independent Gaussians of the proper dimensions, and by "close to the target density" we mean "minimizing the Kullback-Leibler divergence". See, for example, Section 2.2 of "Variational Inference: A Review for Statisticians" for a well-written derivation and motivation. In particular, it shows that minimizing the K-L divergence is equivalent to minimizing the negative evidence lower bound (ELBO).
Toy problem
Gelman et al.'s (2007) "radon dataset" is sometimes used to demonstrate approaches for regression. (See, e.g., the closely related PyMC3 blog post.) The radon dataset contains indoor measurements of radon taken throughout the United States. Radon is a naturally occurring radioactive gas which is toxic in high concentrations.
For this demo, let's suppose we're interested in validating the hypothesis that radon levels are higher in households containing a basement. We also suspect radon concentration is related to soil type, i.e., geography matters.
To frame this as an ML problem, we'll try to predict log-radon levels based on a linear function of the floor on which the reading was taken. We'll also use the county as a random effect, accounting for variance due to geography. In other words, we'll use a generalized linear mixed-effects model.
End of explanation
if tf.test.gpu_device_name() != '/device:GPU:0':
print("We'll just use the CPU for this run.")
else:
print('Huzzah! Found GPU: {}'.format(tf.test.gpu_device_name()))
Explanation: We'll also check whether a GPU is available.
End of explanation
def load_and_preprocess_radon_dataset(state='MN'):
  """Load the Radon dataset from TensorFlow Datasets and preprocess it.

  Following the examples in "Bayesian Data Analysis" (Gelman, 2007), we filter
  to Minnesota data and preprocess to obtain the following features:
  - `county`: Name of county in which the measurement was taken.
  - `floor`: Floor of house (0 for basement, 1 for first floor) on which the
    measurement was taken.

  The target variable is `log_radon`, the log of the Radon measurement in the
  house.
  """
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
df['county'] = df.county.astype(pd.api.types.CategoricalDtype())
df['county_code'] = df.county.cat.codes
# Radon levels are all positive, but log levels are unconstrained
df['log_radon'] = df['radon'].apply(np.log)
# Drop columns we won't use and tidy the index
columns_to_keep = ['log_radon', 'floor', 'county', 'county_code']
df = df[columns_to_keep].reset_index(drop=True)
return df
df = load_and_preprocess_radon_dataset()
df.head()
Explanation: Obtain dataset:
We load the dataset from TensorFlow Datasets and do some light preprocessing.
End of explanation
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 4))
df.groupby('floor')['log_radon'].plot(kind='density', ax=ax1);
ax1.set_xlabel('Measured log(radon)')
ax1.legend(title='Floor')
df['floor'].value_counts().plot(kind='bar', ax=ax2)
ax2.set_xlabel('Floor where radon was measured')
ax2.set_ylabel('Count')
fig.suptitle("Distribution of log radon and floors in the dataset");
Explanation: Specializing the GLMM family
In this section we specialize the GLMM family to the task of predicting radon levels. To do this, we first consider the fixed-effect special case of a GLMM: $$ \mathbb{E}[\log(\text{radon}_j)] = c + \text{floor_effect}_j $$
This model believes the log radon of observation $j$ is (in expectation) governed by the floor the $j$-th reading was taken on, plus some constant intercept. In pseudocode, we might write:
def estimate_log_radon(floor):
return intercept + floor_effect[floor]
There's a weight learned for every floor and a universal intercept term. Looking at the radon measurements from floor 0 and floor 1, this looks like it might be a good start:
End of explanation
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = df['county'].value_counts()
county_freq.plot(kind='bar', ax=ax)
ax.set_xlabel('County')
ax.set_ylabel('Number of readings');
Explanation: To make the model a little more sophisticated, including something about geography is probably even better: radon is part of the decay chain of uranium, which may be present in soil, so geography seems important to account for.
$$ \mathbb{E}[\log(\text{radon}_j)] = c + \text{floor_effect}_j + \text{county_effect}_j $$
In pseudocode, we might write:
def estimate_log_radon(floor, county):
return intercept + floor_effect[floor] + county_effect[county]
This is the same as before, except with a county-specific weight.
Given a sufficiently large training set, this is a reasonable model. However, given our data from Minnesota, we see that there is a large number of counties with only a small number of measurements. For example, 39 out of 85 counties have fewer than five observations.
This motivates sharing statistical strength between all of our observations, in a way that converges to the above model as the number of observations per county increases.
End of explanation
features = df[['county_code', 'floor']].astype(int)
labels = df[['log_radon']].astype(np.float32).values.flatten()
Explanation: If we fit this model, the county_effect vector would likely end up memorizing the results for counties that had only a few training samples, perhaps overfitting and generalizing poorly.
GLMMs offer a happy middle ground between the two GLMs above. We might consider fitting:
$$ \log(\text{radon}_j) \sim c + \text{floor_effect}_j + \mathcal{N}(\text{county_effect}_j, \text{county_scale}) $$
This model is the same as the first, but we have fixed our likelihood to be a normal distribution, and share the variance across all counties through the (single) variable county_scale. In pseudocode:
def estimate_log_radon(floor, county):
county_mean = county_effect[county]
random_effect = np.random.normal() * county_scale + county_mean
return intercept + floor_effect[floor] + random_effect
We will infer the joint distribution over county_scale, county_mean, and the random_effect given our observed data. The global county_scale allows us to share statistical strength across counties: those with many observations help pin down the variance for counties with few observations. Furthermore, as we gather more data, this model converges to the model without a pooled scale variable; even with this dataset, we come to similar conclusions about the most observed counties with either model.
Experiment
We'll now try to fit the above GLMM using variational inference in TensorFlow. First we'll split the data into features and labels.
End of explanation
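To make the pseudocode above concrete, here is a minimal NumPy sketch of the generative step. The numeric values (intercept, floor effects, county offsets, scale) are made up for illustration only, not fitted to the radon data:

```python
import numpy as np

rng = np.random.RandomState(0)

# Illustrative, made-up values -- not fitted parameters.
intercept = 1.5
floor_effect = np.array([0.0, -0.7])   # index 0: basement, 1: first floor
county_mean = rng.normal(size=5)       # one offset per (toy) county
county_scale = 0.3                     # shared across all counties

def estimate_log_radon(floor, county):
    # Sample-specific noise centered on the county's mean effect.
    random_effect = rng.normal() * county_scale + county_mean[county]
    return intercept + floor_effect[floor] + random_effect

print(estimate_log_radon(0, 2))
```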
def make_joint_distribution_coroutine(floor, county, n_counties, n_floors):
def model():
county_scale = yield tfd.HalfNormal(scale=1., name='scale_prior')
intercept = yield tfd.Normal(loc=0., scale=1., name='intercept')
floor_weight = yield tfd.Normal(loc=0., scale=1., name='floor_weight')
county_prior = yield tfd.Normal(loc=tf.zeros(n_counties),
scale=county_scale,
name='county_prior')
random_effect = tf.gather(county_prior, county, axis=-1)
fixed_effect = intercept + floor_weight * floor
linear_response = fixed_effect + random_effect
yield tfd.Normal(loc=linear_response, scale=1., name='likelihood')
return tfd.JointDistributionCoroutineAutoBatched(model)
joint = make_joint_distribution_coroutine(
features.floor.values, features.county_code.values, df.county.nunique(),
df.floor.nunique())
# Define a closure over the joint distribution
# to condition on the observed labels.
def target_log_prob_fn(*args):
return joint.log_prob(*args, likelihood=labels)
Explanation: Specify model
End of explanation
# Initialize locations and scales randomly with `tf.Variable`s and
# `tfp.util.TransformedVariable`s.
_init_loc = lambda shape=(): tf.Variable(
tf.random.uniform(shape, minval=-2., maxval=2.))
_init_scale = lambda shape=(): tfp.util.TransformedVariable(
initial_value=tf.random.uniform(shape, minval=0.01, maxval=1.),
bijector=tfb.Softplus())
n_counties = df.county.nunique()
surrogate_posterior = tfd.JointDistributionSequentialAutoBatched([
tfb.Softplus()(tfd.Normal(_init_loc(), _init_scale())), # scale_prior
tfd.Normal(_init_loc(), _init_scale()), # intercept
tfd.Normal(_init_loc(), _init_scale()), # floor_weight
tfd.Normal(_init_loc([n_counties]), _init_scale([n_counties]))]) # county_prior
Explanation: Specify surrogate posterior
We now put together a surrogate family $q_{\lambda}$, where the parameters $\lambda$ are trainable. In this case, our family is independent multivariate normal distributions, one for each parameter, and $\lambda = \{(\mu_j, \sigma_j)\}$, where $j$ indexes the four parameters.
The method we use to fit the surrogate family uses tf.Variables. We also use tfp.util.TransformedVariable along with Softplus to constrain the (trainable) scale parameters to be positive.
To aid optimization, we initialize these trainable variables.
End of explanation
optimizer = tf.optimizers.Adam(learning_rate=1e-2)
losses = tfp.vi.fit_surrogate_posterior(
target_log_prob_fn,
surrogate_posterior,
optimizer=optimizer,
num_steps=3000,
seed=42,
sample_size=2)
(scale_prior_,
intercept_,
floor_weight_,
county_weights_), _ = surrogate_posterior.sample_distributions()
print(' intercept (mean): ', intercept_.mean())
print(' floor_weight (mean): ', floor_weight_.mean())
print(' scale_prior (approx. mean): ', tf.reduce_mean(scale_prior_.sample(10000)))
fig, ax = plt.subplots(figsize=(10, 3))
ax.plot(losses, 'k-')
ax.set(xlabel="Iteration",
ylabel="Loss (ELBO)",
title="Loss during training",
ylim=0);
Explanation: Note that this cell can be replaced with tfp.experimental.vi.build_factored_surrogate_posterior, as in:
python
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
event_shape=joint.event_shape_tensor()[:-1],
constraining_bijectors=[tfb.Softplus(), None, None, None])
Results
Recall that our goal was to define a tractable parameterized family of distributions, and then select its parameters so that we end up with a tractable distribution that is close to our target distribution.
We built the surrogate distribution above, and can use tfp.vi.fit_surrogate_posterior, which accepts an optimizer and a given number of steps, to find the parameters of the surrogate model that minimize the negative ELBO (which corresponds to minimizing the Kullback-Leibler divergence between the surrogate and the target distribution).
The return value is the negative ELBO at each step, and the distributions in surrogate_posterior will have been updated with the parameters found by the optimizer.
End of explanation
county_counts = (df.groupby(by=['county', 'county_code'], observed=True)
.agg('size')
.sort_values(ascending=False)
.reset_index(name='count'))
means = county_weights_.mean()
stds = county_weights_.stddev()
fig, ax = plt.subplots(figsize=(20, 5))
for idx, row in county_counts.iterrows():
mid = means[row.county_code]
std = stds[row.county_code]
ax.vlines(idx, mid - std, mid + std, linewidth=3)
ax.plot(idx, means[row.county_code], 'ko', mfc='w', mew=2, ms=7)
ax.set(
xticks=np.arange(len(county_counts)),
xlim=(-1, len(county_counts)),
ylabel="County effect",
title=r"Estimates of county effects on log radon levels. (mean $\pm$ 1 std. dev.)",
)
ax.set_xticklabels(county_counts.county, rotation=90);
Explanation: We plot the estimated mean county effects, along with the uncertainty of each mean, sorted by number of observations (largest on the left). Notice how the uncertainty is small for counties with many observations, but larger for counties that have only one or two.
End of explanation
fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(np.log1p(county_counts['count']), stds.numpy()[county_counts.county_code], 'o')
ax.set(
ylabel='Posterior std. deviation',
xlabel='County log-count',
title='Having more observations generally\nlowers estimation uncertainty'
);
Explanation: Indeed, we can see this directly by plotting the log-number of observations against the estimated standard deviation; the relationship is approximately linear.
End of explanation
%%shell
exit # Trick to make this block not execute.
radon = read.csv('srrs2.dat', header = TRUE)
radon = radon[radon$state=='MN',]
radon$radon = ifelse(radon$activity==0., 0.1, radon$activity)
radon$log_radon = log(radon$radon)
# install.packages('lme4')
library(lme4)
fit <- lmer(log_radon ~ 1 + floor + (1 | county), data=radon)
fit
# Linear mixed model fit by REML ['lmerMod']
# Formula: log_radon ~ 1 + floor + (1 | county)
# Data: radon
# REML criterion at convergence: 2171.305
# Random effects:
# Groups Name Std.Dev.
# county (Intercept) 0.3282
# Residual 0.7556
# Number of obs: 919, groups: county, 85
# Fixed Effects:
# (Intercept) floor
# 1.462 -0.693
Explanation: Compare with R's lme4
End of explanation
print(pd.DataFrame(data=dict(intercept=[1.462, tf.reduce_mean(intercept_.mean()).numpy()],
floor=[-0.693, tf.reduce_mean(floor_weight_.mean()).numpy()],
scale=[0.3282, tf.reduce_mean(scale_prior_.sample(10000)).numpy()]),
index=['lme4', 'vi']))
Explanation: The following table summarizes the results.
End of explanation |
8,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery command-line tool
The BigQuery command-line tool is installed as part of the Cloud SDK and can be used to interact with BigQuery. When you use CLI commands in a notebook, the command must be prepended with a !.
View available commands
To view the available commands for the BigQuery command-line tool, use the help command.
Step1: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
First, name your new dataset
Step2: The following command creates a new dataset in the US using the ID defined above.
NOTE
Step3: The response should look like the following
Step4: The response should look like the following
Step5: Load data from Cloud Storage to a table
The following example demonstrates how to load a local CSV file into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
Step6: Run a query
The BigQuery command-line tool has a query command for running queries, but it is recommended to use the magic command for this purpose.
Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset. | Python Code:
!bq help
Explanation: BigQuery command-line tool
The BigQuery command-line tool is installed as part of the Cloud SDK and can be used to interact with BigQuery. When you use CLI commands in a notebook, the command must be prepended with a !.
View available commands
To view the available commands for the BigQuery command-line tool, use the help command.
End of explanation
dataset_id = "your_new_dataset"
Explanation: Create a new dataset
A dataset is contained within a specific project. Datasets are top-level containers that are used to organize and control access to your tables and views. A table or view must belong to a dataset. You need to create at least one dataset before loading data into BigQuery.
First, name your new dataset:
End of explanation
!bq --location=US mk --dataset $dataset_id
Explanation: The following command creates a new dataset in the US using the ID defined above.
NOTE: In the examples in this notebook, the dataset_id variable is referenced in the commands using both {} and $. To avoid creating and using variables, replace these interpolated variables with literal values and remove the {} and $ characters.
End of explanation
!bq ls
Explanation: The response should look like the following:
Dataset 'your-project-id:your_new_dataset' successfully created.
List datasets
The following command lists all datasets in your default project.
End of explanation
!bq \
--location=US load \
--autodetect \
--skip_leading_rows=1 \
--source_format=CSV \
{dataset_id}.us_states_local_file \
'resources/us-states.csv'
Explanation: The response should look like the following:
```
datasetId
your_new_dataset
```
Load data from a local file to a table
The following example demonstrates how to load a local CSV file into a new or existing table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Loading Data into BigQuery from a local data source in the BigQuery documentation.
End of explanation
!bq \
--location=US load \
--autodetect \
--skip_leading_rows=1 \
--source_format=CSV \
{dataset_id}.us_states_gcs \
'gs://cloud-samples-data/bigquery/us-states/us-states.csv'
Explanation: Load data from Cloud Storage to a table
The following example demonstrates how to load a CSV file from Cloud Storage into a new table. See SourceFormat in the Python client library documentation for a list of available source formats. For more information, see Introduction to loading data from Cloud Storage in the BigQuery documentation.
End of explanation
!bq rm -r -f --dataset $dataset_id
Explanation: Run a query
The BigQuery command-line tool has a query command for running queries, but it is recommended to use the magic command for this purpose.
Cleaning Up
The following code deletes the dataset created for this tutorial, including all tables in the dataset.
End of explanation |
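The query step above recommends the magic command; as a minimal hedged sketch of what a query over the table loaded earlier could look like, here is how the Standard SQL string might be composed (the table name `us_states_local_file` and the `name`/`post_abbr` columns are assumptions based on the sample us-states CSV, not something this tutorial defines):

```python
# Hypothetical Standard SQL query for the table loaded earlier; column names
# (name, post_abbr) are assumed from the sample us-states CSV.
dataset_id = "your_new_dataset"
query = (
    "SELECT name, post_abbr "
    f"FROM `{dataset_id}.us_states_local_file` "
    "ORDER BY name LIMIT 5"
)
print(query)
```

In a notebook this string would typically be run in a `%%bigquery` cell; from the shell, `bq query --use_legacy_sql=false '<query>'` is the equivalent.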
8,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read SST, Mask and Calculate global mean
In this notebook, we will carry out the following basic operations
* have a quick visualization of spatial data
* use mask array to mask out land
* calculate yearly climatology
* calculate global mean, simply.
1. Load basic libraries
Step1: 2. Read skt data
2.1 Read data
Step2: 2.2 Have a dirty look of the first month
You can fine the skt cover both of land and ocean.
Step3: 3. Read mask data
We hope only see skt over ocean. That is so-called (Sea Surface Temperature) SST. So have to mask the land part.
3.1 read data
Step4: 3.2 Have a dirty look of the mask
Step5: 3.3 Set "1" on Ocean and "nan" on the land
Step6: 4. Calculate yearly climatology in the first year (1948)
Get skt for the first year (i.e., 12 months)
Calculate yearly mean
Mask the yearly mean over ocean
Step7: 5. Simply calculate global mean
The following is the globally averaged SST value. This simple unweighted mean treats every grid cell equally; on a regular latitude-longitude grid, an area (cosine-latitude) weighting would give a more rigorous global mean. | Python Code:
%matplotlib inline
import numpy as np
from netCDF4 import Dataset # http://unidata.github.io/netcdf4-python/
import matplotlib.pyplot as plt # to generate plots
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15, 9
Explanation: Read SST, Mask and Calculate global mean
In this notebook, we will carry out the following basic operations
* have a quick visualization of spatial data
* use mask array to mask out land
* calculate yearly climatology
* calculate global mean, simply.
1. Load basic libraries
End of explanation
ncfile = 'data/skt.mon.mean.nc'
fh = Dataset(ncfile, mode='r') # file handle, open in read only mode
skt = fh.variables['skt'][:]
fh.close() # close the file
Explanation: 2. Read skt data
2.1 Read data
End of explanation
plt.imshow(skt[0]) #data for the first time-step (January 1948)
plt.title('Monthly SKT in Jan 1948 [$^oC$]')
Explanation: 2.2 Have a dirty look of the first month
You can find that the skt covers both land and ocean.
End of explanation
lmfile = 'data/lsmask.19294.nc'
lmset = Dataset(lmfile)
lsmask = lmset['lsmask'][0,:,:]# read land mask
Explanation: 3. Read mask data
We only want to see skt over the ocean, i.e., the so-called Sea Surface Temperature (SST), so we have to mask out the land.
3.1 read data
End of explanation
plt.imshow(lsmask)
plt.title('Land and Ocean Mask')
Explanation: 3.2 Have a dirty look of the mask
End of explanation
lsm = lsmask + 1
lsm[lsm<1.0] = np.nan
# now only ocean available
plt.imshow(lsm)
plt.title('Land Mask')
Explanation: 3.3 Set "1" on Ocean and "nan" on the land
End of explanation
skt_y1 = np.mean(skt[0:12,:,:], axis=0)
# masking the land, leave ocean alone
sst_y1 = skt_y1*lsm
# dirty look
plt.imshow(sst_y1)
plt.title('Yearly Climatology of SST in 1948 [$^oC$]')
Explanation: 4. Calculate yearly climatology in the first year (1948)
Get skt for the first year (i.e., 12 months)
Calculate yearly mean
Mask the yearly mean over ocean
End of explanation
sst_global = np.nanmean(sst_y1)
sst_global
Explanation: 5. Simply calculate global mean
The following is the globally averaged SST value. This simple unweighted mean treats every grid cell equally; on a regular latitude-longitude grid, an area (cosine-latitude) weighting would give a more rigorous global mean.
End of explanation |
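As a hedged aside on the caveat above: on a regular latitude-longitude grid, cells shrink toward the poles, so an area-weighted (cosine-latitude) mean is the more rigorous global average. A pure-Python sketch on a tiny made-up 3x4 field (the latitudes and SST values are illustrative, not from this dataset):

```python
import math

# Hypothetical 3x4 field: 3 latitude rows (60N, 0, 60S) x 4 longitude columns.
lats = [60.0, 0.0, -60.0]
sst = [[2.0, 2.0, 2.0, 2.0],      # high northern latitudes
       [28.0, 28.0, 28.0, 28.0],  # equator
       [2.0, 2.0, 2.0, 2.0]]      # high southern latitudes

# Weight each cell by cos(latitude), proportional to its area on the sphere.
num = sum(math.cos(math.radians(lat)) * v
          for lat, row in zip(lats, sst) for v in row)
den = sum(math.cos(math.radians(lat))
          for lat, row in zip(lats, sst) for _ in row)
weighted_mean = num / den
unweighted_mean = sum(v for row in sst for v in row) / 12
print(weighted_mean, unweighted_mean)
```

Here the equator dominates the weighted mean (about 15.0) while the naive mean (about 10.7) over-counts the polar rows.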
8,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Improving the Text Classifier
Goals
Try Random Forest on our Sentiment Data
Try Support Vector Machines on our Sentiment Data
Introduction to Hyperparameters
Introduction
There are three ways to improve a machine learning model
Improve the training data
Improve the feature extraction
Improve the learning algorithm
Typically the easiest way is to get more data or cleaner data if you can do it. If that's not possible adding more features is the next easiest. The toughest way is to try to improve the learning algorithm. Not that we have a baseline in place we will try improving the learning algorithm and adding a few features.
Support Vector Machines
Support vector machines or SVMs are a popular way to model data when you have many more features that records.
In our case we have about the same number of features as records so they might might sense to try. We actually used it in Lesson 3 but now we can try it out with Cross Validation. These take a little longer to train.
Step1: This SVM is 67% accurate.
Recall that the Naive Bayes Classifier was 65% accurate.
This does seem slightly better although there is a fair amount of variation in the folds and it probably wouldn't
hold up to a statistical test.
There are other versions of SVMs we can try, but most of the others have a fit time that is quadratic in the number of records, so they will be really slow.
Random Forests
Random forest is a really useful algorithm. It starts with a very simple and effective algorithm called a decision tree which looks at a single feature at a time and branches based on whether or not that feature is above some threshold. It then builds hundreds or thousands of these trees in parallel, each on different subsets of the data and gives each tree a single vote on which class the data is in. Random Forests take a while to train, but will probably run faster in production than an SVM.
I introduce Random Forests here because they are incredibly robust - if I only had one algorithm to use in any scenario I would probably use Random Forests or their close cousin Boosted Decision Trees.
Let's try Random Forests on our dataset - this may take a while. | Python Code:
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
df = pd.read_csv('../scikit/tweets.csv')
target = df['is_there_an_emotion_directed_at_a_brand_or_product']
text = df['tweet_text']
# We need to remove the empty rows from the text before we pass into CountVectorizer
fixed_text = text[pd.notnull(text)]
fixed_target = target[pd.notnull(text)]
# Do the feature extraction
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer() # initialize the count vectorizer
count_vect.fit(fixed_text) # set up the columns for the feature matrix
counts = count_vect.transform(fixed_text) # counts is the feature matrix
from sklearn.svm import LinearSVC
# Build a classifier using the LinearSVC algorithm
clf = LinearSVC() # initialize our classifier
clf.fit(counts, fixed_target) # fit our classifier to the training data
scores = cross_val_score(clf, counts, fixed_target, cv=10)
print(scores)
print(scores.mean())
Explanation: Improving the Text Classifier
Goals
Try Random Forest on our Sentiment Data
Try Support Vector Machines on our Sentiment Data
Introduction to Hyperparameters
Introduction
There are three ways to improve a machine learning model
Improve the training data
Improve the feature extraction
Improve the learning algorithm
Typically the easiest way is to get more data or cleaner data if you can do it. If that's not possible, adding more features is the next easiest. The toughest way is to try to improve the learning algorithm. Now that we have a baseline in place, we will try improving the learning algorithm and adding a few features.
Support Vector Machines
Support vector machines or SVMs are a popular way to model data when you have many more features than records.
In our case we have about the same number of features as records, so they might make sense to try. We actually used one in Lesson 3, but now we can try it out with Cross Validation. These take a little longer to train.
End of explanation
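The CountVectorizer feature extraction above is, at heart, token counting: fit discovers the vocabulary (the feature columns) and transform counts tokens per document. A toy standard-library sketch of the same idea, using made-up example texts rather than the real tweets:

```python
from collections import Counter

docs = ["great phone great app", "bad battery"]  # illustrative stand-ins
# Vocabulary: the columns that fit() would discover, in sorted order.
vocab = sorted({tok for d in docs for tok in d.lower().split()})
# Count matrix: one row per document, like the output of transform().
counts = [[Counter(d.lower().split())[tok] for tok in vocab] for d in docs]
print(vocab)
print(counts)
```

Real CountVectorizer tokenization is regex-based and slightly different (it drops one-character tokens, for example); this only illustrates the shape of the feature matrix.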
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=50) # n_estimators is the number of trees
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, counts, fixed_target, cv=10)
print(scores)
print(scores.mean())
Explanation: This SVM is 67% accurate.
Recall that the Naive Bayes Classifier was 65% accurate.
This does seem slightly better, although there is a fair amount of variation across the folds and it probably wouldn't hold up to a statistical test.
There are other versions of SVMs we can try, but most of the others have a fit time that is quadratic in the number of records, so they will be really slow.
Random Forests
Random forest is a really useful algorithm. It starts with a very simple and effective algorithm called a decision tree which looks at a single feature at a time and branches based on whether or not that feature is above some threshold. It then builds hundreds or thousands of these trees in parallel, each on different subsets of the data and gives each tree a single vote on which class the data is in. Random Forests take a while to train, but will probably run faster in production than an SVM.
I introduce Random Forests here because they are incredibly robust - if I only had one algorithm to use in any scenario I would probably use Random Forests or their close cousin Boosted Decision Trees.
Let's try Random Forests on our dataset - this may take a while.
End of explanation |
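The "single vote per tree" behavior described above can be sketched in a few lines of standard-library Python; the per-tree predictions here are invented purely to illustrate the majority vote:

```python
from collections import Counter

# Hypothetical class predictions from five trees for one tweet.
tree_votes = ["Positive emotion", "Positive emotion", "No emotion",
              "Positive emotion", "No emotion"]
# The forest predicts the majority class across trees.
prediction, n_votes = Counter(tree_votes).most_common(1)[0]
print(prediction, n_votes)  # Positive emotion 3
```

With many such trees trained on different subsets of the data, individual trees' mistakes tend to cancel out, which is where the robustness comes from.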
8,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measuring a Multiport Device with a 2-Port Network Analyzer
Introduction
In microwave measurements, one commonly needs to measure a n-port device with a m-port network analyzer ($m<n$ of course).
<img src="nports_with_2ports.svg"/>
This can be done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. With multiple measurements, it is then possible to reconstitute the original n-port. The first section of this example illustrates this method.
However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. Or, sometime it is not possible to have matched loads for all ports. The second part of this example presents an elegant solution to this problem, using impedance renormalization. We'll call it Tippet's technique, because it has a good ring to it.
Step1: Matched Ports
Let's assume that you have a 2-ports VNA. In order to measure a n-port network, you will need at least $p=n(n-1)/2$ measurements between the different pair of ports (total number of unique pairs of a set of n).
For example, let's assume we wants to measure a 3-ports network with a 2-ports VNA. One needs to perform at least 3 measurements
Step2: For the sake of the demonstration, we will "fake" the 3 distinct measurements by extracting 3 subsets of the original Network, i.e., 3 subnetworks
Step3: In reality of course, these three Networks comes from three measurements with distinct pair of ports, the non-used port being properly matched.
Before using the n_twoports_2_nport function, one must define the name of these subsets by setting the Network.name property, in order the function to know which corresponds to what
Step4: Now we can build the 3-ports Network from these three 2-port subnetworks
Step5: Tippet's Technique
This example demonstrates a numerical test of the technique described in "A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer" [1].
In Tippets technique, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$.
During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally, the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50$ ohm.
[1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique
Step6: Next, lets generate a random 4-port network which will be the DUT, that we are trying to measure with out 2-port network analyzer.
Step7: Now, we need to define the loads used to terminate each port when it is not being measured, note as described in [1] not more than one can be have full reflection, $|\Gamma| = 1$
Step8: Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = $n\choose 2$, for n-port DUT on a 2-port VNA.
Step9: Now to do it. Ok we loop over the port combo's and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composit network.
Step10: Results
Self-Consistency
Note that 6-measurements of 2-port subnetworks works out to 24 s-parameters, and we only need 16. This is because each reflect, s-parameter is measured three-times. As, in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices, from this you can see that each have been renormalized to the appropriate z0.
Step11: Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots answer to plots 5 and 7 of [1]
Step12: However, the renormalized measurements agree perfectly. These plots answer to plots 6 and 8 of [1]
Step13: Test For Accuracy
Making sure our composite network is the same as our DUT
Step14: Nice!. How close ?
Step16: Dang!
Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then using Tippets technique, you can leave each port wide open while not being measured. This way you dont have to buy a bunch of loads. How sweet would that be?
More Complex Simulations | Python Code:
import skrf as rf
from itertools import combinations
%matplotlib inline
from pylab import *
rf.stylely()
Explanation: Measuring a Multiport Device with a 2-Port Network Analyzer
Introduction
In microwave measurements, one commonly needs to measure an n-port device with an m-port network analyzer ($m<n$ of course).
<img src="nports_with_2ports.svg"/>
This can be done by terminating each non-measured port with a matched load, and assuming the reflected power is negligible. With multiple measurements, it is then possible to reconstitute the original n-port. The first section of this example illustrates this method.
However, in some cases this may not provide the most accurate results, or even be possible in all measurement environments. Or, sometimes it is not possible to have matched loads for all ports. The second part of this example presents an elegant solution to this problem, using impedance renormalization. We'll call it Tippet's technique, because it has a good ring to it.
End of explanation
tee = rf.data.tee
print(tee)
Explanation: Matched Ports
Let's assume that you have a 2-port VNA. In order to measure an n-port network, you will need at least $p=n(n-1)/2$ measurements between the different pairs of ports (the total number of unique pairs of a set of n).
For example, let's assume we want to measure a 3-port network with a 2-port VNA. One needs to perform at least 3 measurements: between ports 1 & 2, between ports 2 & 3 and between ports 1 & 3. We will assume these measurements are then converted into three 2-port Networks. To build the full 3-port Network, one needs to provide a list of these 3 (sub)networks to the scikit-rf builtin function n_twoports_2_nport. While the order of the measurements in the list is not important, be sure to set the Network.name properties of these subnetworks to contain the port index, for example p12 for the measurement between ports 1&2 or p23 between 2&3, etc.
Let's suppose we want to measure a tee:
End of explanation
# 2 port Networks as if one measures the tee with a 2 ports VNA
tee12 = rf.subnetwork(tee, [0, 1]) # 2 port Network btw ports 1 & 2, port 3 being matched
tee23 = rf.subnetwork(tee, [1, 2]) # 2 port Network btw ports 2 & 3, port 1 being matched
tee13 = rf.subnetwork(tee, [0, 2]) # 2 port Network btw ports 1 & 3, port 2 being matched
Explanation: For the sake of the demonstration, we will "fake" the 3 distinct measurements by extracting 3 subsets of the original Network, i.e., 3 subnetworks:
End of explanation
tee12.name = 'tee12'
tee23.name = 'tee23'
tee13.name = 'tee13'
Explanation: In reality of course, these three Networks come from three measurements with distinct pairs of ports, the unused port being properly matched.
Before using the n_twoports_2_nport function, one must define the name of these subsets by setting the Network.name property, so that the function knows which measurement corresponds to which ports:
End of explanation
ntw_list = [tee12, tee23, tee13]
tee_rebuilt = rf.n_twoports_2_nport(ntw_list, nports=3)
print(tee_rebuilt)
# this is an ideal example, both Networks are thus identical
print(tee == tee_rebuilt)
Explanation: Now we can build the 3-port Network from these three 2-port subnetworks:
End of explanation
wg = rf.wr10
wg.frequency.npoints = 101
Explanation: Tippet's Technique
This example demonstrates a numerical test of the technique described in "A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer" [1].
In Tippet's technique, several sub-networks are measured in a similar way as before, but the port terminations are not assumed to be matched. Instead, the terminations just have to be known and no more than one can be completely reflective. So, in general $|\Gamma| \ne 1$.
During measurements, each port is terminated with a consistent termination. So port 1 is always terminated with $Z_1$ when not being measured. Once measured, each sub-network is re-normalized to these port impedances. Think about that. Finally, the composite network is constructed, and may then be re-normalized to the desired system impedance, say $50$ ohm.
[1] J. C. Tippet and R. A. Speciale, “A Rigorous Technique for Measuring the Scattering Matrix of a Multiport Device with a 2-Port Network Analyzer,” IEEE Transactions on Microwave Theory and Techniques, vol. 30, no. 5, pp. 661–666, May 1982.
Outline of Tippet's Technique
Following the example given in [1], measuring a 4-port network with a 2-port network analyzer.
An outline of the technique:
Calibrate 2-port network analyzer
Get four known terminations ($Z_1, Z_2, Z_3,Z_4$). No more than one can have $|\Gamma| = 1$
Measure all combinations of 2-port subnetworks (there are 6). Each port not currently being measured must be terminated with its corresponding load.
Renormalize each subnetwork to the impedances of the loads used to terminate it when not being measured.
Build composite 4-port, renormalize to VNA impedance.
Implementation
First, we create a Media object, which is used to generate networks for testing. We will use WR-10 Rectangular waveguide.
End of explanation
dut = wg.random(n_ports = 4,name= 'dut')
dut
Explanation: Next, let's generate a random 4-port network which will be the DUT that we are trying to measure with our 2-port network analyzer.
End of explanation
loads = [wg.load(.1+.1j),
wg.load(.2-.2j),
wg.load(.3+.3j),
wg.load(.5),
]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
Explanation: Now, we need to define the loads used to terminate each port when it is not being measured. Note that, as described in [1], no more than one can have full reflection, $|\Gamma| = 1$
End of explanation
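Each load above is specified by its reflection coefficient $\Gamma$; the impedance it represents follows from the standard relation $Z = Z_0 (1+\Gamma)/(1-\Gamma)$. A standalone sketch of that conversion — the real port impedance of WR-10 waveguide is frequency dependent and handled by skrf internally, so the $Z_0 = 50$ ohms here is purely illustrative:

```python
# Gamma -> impedance conversion underlying the load definitions above.
# z0=50 ohms is an illustrative stand-in for the waveguide's port impedance.
def gamma_to_z(gamma, z0=50.0):
    return z0 * (1 + gamma) / (1 - gamma)

gammas = [0.1 + 0.1j, 0.2 - 0.2j, 0.3 + 0.3j, 0.5]  # the four loads above
impedances = [gamma_to_z(g) for g in gammas]
print(impedances)
```

Note that the fourth load ($\Gamma = 0.5$) maps to a purely real 150 ohms, while the complex reflection coefficients map to complex impedances.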
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
port_combos
Explanation: Create required measurement port combinations. There are 6 different measurements required to measure a 4-port with a 2-port VNA. In general, #measurements = ${n \choose 2}$, for an n-port DUT on a 2-port VNA.
End of explanation
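The measurement count above is just the number of unordered port pairs, so #measurements = n(n-1)/2; a quick standard-library sanity check:

```python
from math import comb

# Number of 2-port measurements needed to characterize an n-port DUT.
for n in (3, 4, 6):
    p = n * (n - 1) // 2
    assert p == comb(n, 2)  # "n choose 2"
    print(f"{n}-port DUT: {p} two-port measurements")
```

The count grows quadratically, which is why multiport measurements with a 2-port VNA get tedious quickly.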
composite = wg.match(nports = 4) # composite network, to be filled.
measured,measured_renorm = {},{} # measured subnetworks and renormalized sub-networks
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize from load z0 to 50 ohms
composite.renormalize(50)
Explanation: Now to do it. Ok we loop over the port combos and connect the loads to the right places, simulating actual measurements. Each raw subnetwork measurement is saved, along with the renormalized subnetwork. Finally, we stuff the result into the 4-port composite network.
End of explanation
measured_renorm
Explanation: Results
Self-Consistency
Note that 6 measurements of 2-port subnetworks work out to 24 s-parameters, and we only need 16. This is because each reflection s-parameter is measured three times. As in [1], we will use this redundant measurement as a check of our accuracy.
The renormalized networks are stored in a dictionary with names based on their port indices; from this you can see that each has been renormalized to the appropriate z0.
End of explanation
s11_set = rf.NS([measured[k] for k in measured if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
Explanation: Plotting all three raw measurements of $S_{11}$, we can see that they are not in agreement. These plots correspond to figures 5 and 7 of [1]
End of explanation
s11_set = rf.NS([measured_renorm[k] for k in measured_renorm if k[0]=='0'])
figure(figsize = (8,4))
subplot(121)
s11_set .plot_s_db(0,0)
subplot(122)
s11_set .plot_s_deg(0,0)
tight_layout()
Explanation: However, the renormalized measurements agree perfectly. These plots correspond to figures 6 and 8 of [1]
End of explanation
composite == dut
Explanation: Test For Accuracy
Making sure our composite network is the same as our DUT
End of explanation
sum((composite - dut).s_mag)
Explanation: Nice! How close?
End of explanation
def tippits(dut, gamma, noise=None):
    """Simulate Tippet's technique on a 4-port DUT."""
ports = arange(dut.nports)
port_combos = list(combinations(ports, 2))
loads = [wg.load(gamma) for k in ports]
# construct the impedance array, of shape FXN
z_loads = array([k.z.flatten() for k in loads]).T
composite = wg.match(nports = dut.nports) # composite network, to be filled.
# ports `a` and `b` are the ports we will connect the VNA too
for a,b in port_combos:
# port `c` and `d` are the ports which we will connect the loads too
c,d =ports[(ports!=a)& (ports!=b)]
# determine where `d` will be on four_port, after its reduced to a three_port
e = where(ports[ports!=c]==d)[0][0]
# connect loads
three_port = rf.connect(dut,c, loads[c],0)
two_port = rf.connect(three_port,e, loads[d],0)
if noise is not None:
two_port.add_noise_polar(*noise)
# save raw and renormalized 2-port subnetworks
measured['%i%i'%(a,b)] = two_port.copy()
two_port.renormalize(c_[z_loads[:,a],z_loads[:,b]])
measured_renorm['%i%i'%(a,b)] = two_port.copy()
# stuff this 2-port into the composite 4-port
for i,m in enumerate([a,b]):
for j,n in enumerate([a,b]):
composite.s[:,m,n] = two_port.s[:,i,j]
# properly copy the port impedances
composite.z0[:,a] = two_port.z0[:,0]
composite.z0[:,b] = two_port.z0[:,1]
# finally renormalize from load z0 to 50 ohms
composite.renormalize(50)
return composite
wg.frequency.npoints = 11
dut = wg.random(4)
def er(gamma, *args):
return max(abs(tippits(dut, rf.db_2_mag(gamma),*args).s_db-dut.s_db).flatten())
gammas = linspace(-40,-0.1,11)
title('Error vs $|\Gamma|$')
plot(gammas, [er(k) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
figure()
noise = (1e-5,.1)
title('Error vs $|\Gamma|$ with reasonable noise')
plot(gammas, [er(k, noise) for k in gammas])
semilogy()
xlabel('$|\Gamma|$ of Loads (dB)')
ylabel('Max Error in DUT\'s dB(S)')
Explanation: Dang!
Practical Application
This could be used in many ways. In waveguide, one could just make a measurement of a radiating open after a standard two-port calibration (like TRL). Then using Tippet's technique, you can leave each port wide open while not being measured. This way you don't have to buy a bunch of loads. How sweet would that be?
More Complex Simulations
End of explanation |
8,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Keras でのプルーニングの例
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: プルーニングを使用せずに、MNIST のモデルをトレーニングする
Step3: ベースラインのテスト精度を評価して、後で使用できるようにモデルを保存します。
Step4: Fine-tune the pre-trained model with pruning
Define the model
You will apply pruning to the whole model and see this in the model summary.
In this example, you start the model with 50% sparsity (50% zeros in weights) and end with 80% sparsity.
In the comprehensive guide, you can see how to prune some layers for model accuracy improvements.
Step5: Train the model and evaluate it against baseline
Fine tune with pruning for two epochs.
tfmot.sparsity.keras.UpdatePruningStep is required during training, and tfmot.sparsity.keras.PruningSummaries provides logs for tracking progress and debugging.
Step6: In this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
Step7: The logs show the progression of sparsity on a per-layer basis.
Step8: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Create 3x smaller models from pruning
Both tfmot.sparsity.keras.strip_pruning and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of pruning.
strip_pruning is necessary since it removes every tf.Variable that pruning only needs during training, which would otherwise add to model size during inference.
Applying a standard compression algorithm is necessary since the serialized weight matrices are the same size as they were before pruning. However, pruning makes most of the weights zeros, which is added redundancy that algorithms can utilize to further compress the model.
First, create a compressible model for TensorFlow.
Step9: Then, create a compressible model for TFLite.
Step10: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step11: Compare and see that the models are 3x smaller from pruning.
Step12: Create a 10x smaller model from combining pruning and post-training quantization
You can apply post-training quantization to the pruned model for additional benefits.
Step13: See persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step14: Evaluate the pruned and quantized model and see that the accuracy from TensorFlow persists to the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tempfile
import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
%load_ext tensorboard
Explanation: Pruning in Keras example
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/pruning/pruning_with_keras"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/pruning/pruning_with_keras.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/model_optimization/guide/pruning/pruning_with_keras.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Welcome to an end-to-end example for magnitude-based weight pruning.
Other pages
For an introduction to what pruning is and to determine if you should use it (including what's supported), see the overview page.
To quickly find the APIs you need for your use case (beyond fully pruning a model with 80% sparsity), see the comprehensive guide.
Summary
In this tutorial, you will:
Train a tf.keras model for MNIST from scratch.
Fine tune the model by applying the pruning API and see the accuracy.
Create 3x smaller TF and TFLite models from pruning.
Create a 10x smaller TFLite model from combining pruning and post-training quantization.
See persistence of accuracy from TF to TFLite.
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=4,
validation_split=0.1,
)
Explanation: Train a MNIST model without pruning
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
Explanation: Evaluate the baseline test accuracy and save the model for later use.
End of explanation
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = train_images.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs
# Define model for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=end_step)
}
model_for_pruning = prune_low_magnitude(model, **pruning_params)
# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_for_pruning.summary()
Explanation: Fine-tune the pre-trained model with pruning
Define the model
Apply pruning to the whole model and see this in the model summary.
In this example, you start the model with 50% sparsity (50% zeros in the weights) and end with 80% sparsity.
In the comprehensive guide, you can see how to prune some layers for model accuracy improvements.
End of explanation
logdir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.fit(train_images, train_labels,
batch_size=batch_size, epochs=epochs, validation_split=validation_split,
callbacks=callbacks)
Explanation: Train the model and evaluate it against the baseline
Fine-tune with pruning for two epochs.
tfmot.sparsity.keras.UpdatePruningStep is required during training, and tfmot.sparsity.keras.PruningSummaries provides logs for tracking progress and debugging.
End of explanation
_, model_for_pruning_accuracy = model_for_pruning.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', model_for_pruning_accuracy)
Explanation: In this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
End of explanation
#docs_infra: no_execute
%tensorboard --logdir={logdir}
Explanation: The logs show the progression of sparsity on a per-layer basis.
End of explanation
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)
print('Saved pruned Keras model to:', pruned_keras_file)
Explanation: For non-Colab users, you can see the results of a previous run of this notebook on TensorBoard.dev.
Create 3x smaller models from pruning
Both tfmot.sparsity.keras.strip_pruning and applying a standard compression algorithm (e.g. gzip) are necessary to see the compression benefits of pruning.
strip_pruning is necessary since it removes every tf.Variable that pruning only needs during training, which would otherwise add to model size during inference.
Applying a standard compression algorithm is necessary since the serialized weight matrices are the same size as they were before pruning. However, pruning makes most of the weights zeros, which is added redundancy that algorithms can utilize to further compress the model.
First, create a compressible model for TensorFlow.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
f.write(pruned_tflite_model)
print('Saved pruned TFLite model to:', pruned_tflite_file)
Explanation: Then, create a compressible model for TFLite.
End of explanation
def get_gzipped_model_size(file):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
Explanation: Define a helper function to actually compress the models via gzip and measure the zipped size.
End of explanation
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned Keras model: %.2f bytes" % (get_gzipped_model_size(pruned_keras_file)))
print("Size of gzipped pruned TFlite model: %.2f bytes" % (get_gzipped_model_size(pruned_tflite_file)))
Explanation: Compare and see that the models are 3x smaller from pruning.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_and_pruned_tflite_model = converter.convert()
_, quantized_and_pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_pruned_tflite_file, 'wb') as f:
f.write(quantized_and_pruned_tflite_model)
print('Saved quantized and pruned TFLite model to:', quantized_and_pruned_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_pruned_tflite_file)))
Explanation: Create a 10x smaller model by combining pruning and post-training quantization
You can apply post-training quantization to the pruned model for additional benefits.
End of explanation
import numpy as np
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
  # Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
interpreter = tf.lite.Interpreter(model_content=quantized_and_pruned_tflite_model)
interpreter.allocate_tensors()
test_accuracy = evaluate_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', test_accuracy)
print('Pruned TF test accuracy:', model_for_pruning_accuracy)
Explanation: Evaluate the pruned and quantized model and check that the accuracy from TensorFlow persists in the TFLite backend.
End of explanation |
8,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You have already seen that when you change the input value to a function, you often get a different output. For instance, consider an add_five() function that just adds five to any number and returns the result. Then add_five(7) will return an output of 12 (=7+5), and add_five(8) will return an output of 13 (=8+5). Note that no matter what the input is, the action that the function performs is always the same
Step1: Python identifies this as False, since 2 is not greater than 3.
You can also use conditions to compare the values of variables. In the next code cell, var_one has a value of 1, and var_two has a value of two. In the conditions, we check if var_one is less than 1 (which is False), and we check if var_two is greater than or equal to var_one (which is True).
Step2: For a list of common symbols you can use to construct conditions, check out the chart below.
<table style="width
Step3: In the next code cell, we call the function, where the temperature is 37°C. The message is "Normal temperature", because the temperature is not greater than 38°C (temp > 38 evaluates to False) in this case.
Step4: However, if the temperature is instead 39°C, since this is greater than 38°C, the message is updated to "Fever!".
Step5: Note that there are two levels of indentation
Step6: This evaluate_temp_with_else() function has equivalent behavior to the evaluate_temp() function.
In the next code cell, we call this new function, where the temperature is 37°C. In this case, temp > 38 evaluates to False, so the code under the "else" statement is executed, and the Normal temperature. message is returned.
Step7: As with the previous function, we indent the code blocks after the "if" and "else" statements.
"if ... elif ... else" statements
We can use "elif" (which is short for "else if") to check if multiple conditions might be true. The function below
Step8: In the code cell below, we run the code under the "elif" statement, because temp > 38 is False, and temp > 35 is True. Once this code is run, the function skips over the "else" statement and returns the message.
Step9: Finally, we try out a case where the temperature is less than 35°C. Since the conditionals in the "if" and "elif" statements both evaluate to False, the code block inside the "else" statement is executed.
Step10: Example - Calculations
In the examples so far, conditional statements were used to decide how to set the values of variables. But you can also use conditional statements to perform different calculations.
In this next example, say you live in a country with only two tax brackets. Everyone earning less than 12,000 pays 25% in taxes, and anyone earning 12,000 or more pays 30%. The function below calculates how much tax is owed.
Step11: The next code cell uses the function.
Step12: In each case, we call the get_taxes() function and use the value that is returned to set the value of a variable.
- For ana_taxes, we calculate taxes owed by a person who earns 9,000. In this case, we call the get_taxes() function with earnings set to 9000. Thus, earnings < 12000 is True, and tax_owed is set to .25 * 9000. Then we return the value of tax_owed.
- For bob_taxes, we calculate taxes owed by a person who earns 15,000. In this case, we call the get_taxes() function with earnings set to 15000. Thus, earnings < 12000 is False, and tax_owed is set to .30 * 15000. Then we return the value of tax_owed.
Before we move on to another example - remember the add_three_or_eight() function from the introduction? It accepts a number as input and adds three if the input is less than 10, and otherwise adds eight. Can you figure out how you would write this function? Once you have an answer, click on the "Show hidden code" button below to see the solution.
Step13: Example - Multiple "elif" statements
So far, you have seen "elif" used only once in a function. But there's no limit to the number of "elif" statements you can use. For instance, the next block of code calculates the dose of medication (in milliliters) to give to a child, based on weight (in kilograms).
Note
Step14: The next code cell runs the function. Make sure that the output makes sense to you!
- In this case, the "if" statement was False, and all of the "elif" statements evaluate to False, until we get to weight < 15.9, which is True, and dose is set to 5.
- Once an "elif" statement evaluates to True and the code block is run, the function skips over all remaining "elif" and "else" statements. After skipping these, all that is left is the return statement, which returns the value of dose.
- The order of the elif statements does matter here! Re-ordering the statements will return a very different result. | Python Code:
print(2 > 3)
Explanation: Introduction
You have already seen that when you change the input value to a function, you often get a different output. For instance, consider an add_five() function that just adds five to any number and returns the result. Then add_five(7) will return an output of 12 (=7+5), and add_five(8) will return an output of 13 (=8+5). Note that no matter what the input is, the action that the function performs is always the same: it always adds five.
But you might instead need a function that performs an action that depends on the input. For instance, you might need a function add_three_or_eight() that adds three if the input is less than 10, and adds eight if the input is 10 or more. Then add_three_or_eight(1) will return 4 (= 1+3), but add_three_or_eight(11) will return 19 (=11+8). In this case, the action that the function performs varies with the input.
In this lesson, you will learn how to use conditions and conditional statements to modify how your functions run.
Conditions
In programming, conditions are statements that are either True or False. There are many different ways to write conditions in Python, but some of the most common ways of writing conditions just compare two different values. For instance, you can check if 2 is greater than 3.
End of explanation
var_one = 1
var_two = 2
print(var_one < 1)
print(var_two >= var_one)
Explanation: Python identifies this as False, since 2 is not greater than 3.
You can also use conditions to compare the values of variables. In the next code cell, var_one has a value of 1, and var_two has a value of two. In the conditions, we check if var_one is less than 1 (which is False), and we check if var_two is greater than or equal to var_one (which is True).
End of explanation
def evaluate_temp(temp):
# Set an initial message
message = "Normal temperature."
# Update value of message only if temperature greater than 38
if temp > 38:
message = "Fever!"
return message
Explanation: For a list of common symbols you can use to construct conditions, check out the chart below.
<table style="width: 100%;">
<tbody>
<tr><th><b>Symbol</b></th><th><b>Meaning</b></th></tr>
<tr>
<td>==</td>
<td>equals</td>
</tr>
<tr>
<td>!=</td>
<td>does not equal</td>
</tr>
<tr>
<td><</td>
<td>less than</td>
</tr>
<tr>
<td><=</td>
<td>less than or equal to</td>
</tr>
<tr>
<td>></td>
<td>greater than</td>
</tr>
<tr>
<td>>=</td>
<td>greater than or equal to</td>
</tr>
</tbody>
</table>
Important Note: When you check two values are equal, make sure you use the == sign, and not the = sign.
- var_one==1 checks if the value of var_one is 1, but
- var_one=1 sets the value of var_one to 1.
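To make the difference concrete, here is a small sketch (standard Python, no assumptions beyond the variable name used above):

```python
var_one = 1

# '==' asks a question: is var_one equal to 2? It evaluates to True or False.
print(var_one == 2)   # False -- var_one is still 1

# '=' gives an order: set var_one to 2.
var_one = 2
print(var_one == 2)   # True -- the assignment changed the value
```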
Conditional statements
Conditional statements use conditions to modify how your function runs. They check the value of a condition, and if the condition evaluates to True, then a certain block of code is executed. (Otherwise, if the condition is False, then the code is not run.)
You will see several examples of this in the following sections.
"if" statements
The simplest type of conditional statement is an "if" statement. You can see an example of this in the evaluate_temp() function below. The function accepts a body temperature (in Celcius) as input.
- Initially, message is set to "Normal temperature".
- Then, if temp > 38 is True (e.g., the body temperature is greater than 38°C), the message is updated to "Fever!". Otherwise, if temp > 38 is False, then the message is not updated.
- Finally, message is returned by the function.
End of explanation
print(evaluate_temp(37))
Explanation: In the next code cell, we call the function, where the temperature is 37°C. The message is "Normal temperature", because the temperature is not greater than 38°C (temp > 38 evaluates to False) in this case.
End of explanation
print(evaluate_temp(39))
Explanation: However, if the temperature is instead 39°C, since this is greater than 38°C, the message is updated to "Fever!".
End of explanation
def evaluate_temp_with_else(temp):
if temp > 38:
message = "Fever!"
else:
message = "Normal temperature."
return message
Explanation: Note that there are two levels of indentation:
- The first level of indentation is because we always need to indent the code block inside a function.
- The second level of indentation is because we also need to indent the code block belonging to the "if" statement. (As you'll see, we'll also need to indent the code blocks for "elif" and "else" statements.)
Note that because the return statement is not indented under the "if" statement, it is always executed, whether temp > 38 is True or False.
"if ... else" statements
We can use "else" statements to run code if a statement is False. The code under the "if" statement is run if the statement is True, and the code under "else" is run if the statement is False.
End of explanation
print(evaluate_temp_with_else(37))
Explanation: This evaluate_temp_with_else() function has equivalent behavior to the evaluate_temp() function.
In the next code cell, we call this new function, where the temperature is 37°C. In this case, temp > 38 evaluates to False, so the code under the "else" statement is executed, and the Normal temperature. message is returned.
End of explanation
def evaluate_temp_with_elif(temp):
if temp > 38:
message = "Fever!"
elif temp > 35:
message = "Normal temperature."
else:
message = "Low temperature."
return message
Explanation: As with the previous function, we indent the code blocks after the "if" and "else" statements.
"if ... elif ... else" statements
We can use "elif" (which is short for "else if") to check if multiple conditions might be true. The function below:
- First checks if temp > 38. If this is true, then the message is set to "Fever!".
- As long as the message has not already been set, the function then checks if temp > 35. If this is true, then the message is set to "Normal temperature.".
- Then, if still no message has been set, the "else" statement ensures that the message is set to "Low temperature.".
You can think of "elif" as saying ... "okay, that previous condition (e.g., temp > 38) was false, so let's check if this new condition (e.g., temp > 35) might be true!"
End of explanation
evaluate_temp_with_elif(36)
Explanation: In the code cell below, we run the code under the "elif" statement, because temp > 38 is False, and temp > 35 is True. Once this code is run, the function skips over the "else" statement and returns the message.
End of explanation
evaluate_temp_with_elif(34)
Explanation: Finally, we try out a case where the temperature is less than 35°C. Since the conditionals in the "if" and "elif" statements both evaluate to False, the code block inside the "else" statement is executed.
End of explanation
def get_taxes(earnings):
if earnings < 12000:
tax_owed = .25 * earnings
else:
tax_owed = .30 * earnings
return tax_owed
Explanation: Example - Calculations
In the examples so far, conditional statements were used to decide how to set the values of variables. But you can also use conditional statements to perform different calculations.
In this next example, say you live in a country with only two tax brackets. Everyone earning less than 12,000 pays 25% in taxes, and anyone earning 12,000 or more pays 30%. The function below calculates how much tax is owed.
End of explanation
ana_taxes = get_taxes(9000)
bob_taxes = get_taxes(15000)
print(ana_taxes)
print(bob_taxes)
Explanation: The next code cell uses the function.
End of explanation
#$HIDE_INPUT$
def add_three_or_eight(number):
if number < 10:
result = number + 3
else:
result = number + 8
return result
Explanation: In each case, we call the get_taxes() function and use the value that is returned to set the value of a variable.
- For ana_taxes, we calculate taxes owed by a person who earns 9,000. In this case, we call the get_taxes() function with earnings set to 9000. Thus, earnings < 12000 is True, and tax_owed is set to .25 * 9000. Then we return the value of tax_owed.
- For bob_taxes, we calculate taxes owed by a person who earns 15,000. In this case, we call the get_taxes() function with earnings set to 15000. Thus, earnings < 12000 is False, and tax_owed is set to .30 * 15000. Then we return the value of tax_owed.
Before we move on to another example - remember the add_three_or_eight() function from the introduction? It accepts a number as input and adds three if the input is less than 10, and otherwise adds eight. Can you figure out how you would write this function? Once you have an answer, click on the "Show hidden code" button below to see the solution.
End of explanation
def get_dose(weight):
# Dosage is 1.25 ml for anyone under 5.2 kg
if weight < 5.2:
dose = 1.25
elif weight < 7.9:
dose = 2.5
elif weight < 10.4:
dose = 3.75
elif weight < 15.9:
dose = 5
elif weight < 21.2:
dose = 7.5
# Dosage is 10 ml for anyone 21.2 kg or over
else:
dose = 10
return dose
Explanation: Example - Multiple "elif" statements
So far, you have seen "elif" used only once in a function. But there's no limit to the number of "elif" statements you can use. For instance, the next block of code calculates the dose of medication (in milliliters) to give to a child, based on weight (in kilograms).
Note: This function should not be used as medical advice, and represents a fake medication.
End of explanation
print(get_dose(12))
Explanation: The next code cell runs the function. Make sure that the output makes sense to you!
- In this case, the "if" statement was False, and all of the "elif" statements evaluate to False, until we get to weight < 15.9, which is True, and dose is set to 5.
- Once an "elif" statement evaluates to True and the code block is run, the function skips over all remaining "elif" and "else" statements. After skipping these, all that is left is the return statement, which returns the value of dose.
- The order of the elif statements does matter here! Re-ordering the statements will return a very different result.
End of explanation |
8,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 7
Step1: Anytime you see a statement that starts with import, you'll recognize that the programmer is pulling in some sort of external functionality not previously available to Python by default. In this case, the random package provides some basic functionality for computing random numbers.
That's just one of countless examples...an infinite number that continues to nonetheless increase daily.
Python has a bunch of functionality that comes by default--no import required. Remember writing functions to compute the maximum and minimum of a list? Turns out, those already exist by default (sorry everyone)
Step2: Quite a bit of other functionality--still built-in to the default Python environment!--requires explicit import statements to unlock. Here are just a couple of examples
Step3: If you are so inclined, you can see the full Python default module index here
Step4: Dot-notation works by
specifying package_name (in this case, random)
dot
Step5: We can tweak it
Step6: You can put whatever you want after the as, and anytime you call methods from that module, you'll use the name you gave it.
Don't worry about trying to memorize all the available modules in core Python; in looking through them just now, I was amazed how many I'd never even heard of. Suffice to say, you can get by.
Especially since, once you get beyond the core modules, there's an ever-expanding universe of 3rd-party modules you can install and use. Anaconda comes prepackaged with quite a few (see the column "In Installer") and the option to manually install quite a few more.
Again, don't worry about trying to learn all these. There are simply too many. You'll come across packages as you need them. For now, we're going to focus on one specific package that is central to most modern data science
Step7: Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python
Step8: Now just call the array method using our list from before!
Step9: To reference an element in the array, just use the same notation we did for lists
Step10: You can also separate dimensions by commas
Step11: Remember, with indexing matrices
Step12: Now, let's see the same operation, this time with NumPy arrays.
Step13: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together
Step14: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$
Step15: If you recall from algebra, matrix multiplication $A \times B$ involves multipliying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator | Python Code:
import random
Explanation: Lecture 7: Vectorized Programming
CSCI 1360E: Foundations for Informatics and Analytics
Overview and Objectives
We've covered loops and lists, and how to use them to perform some basic arithmetic calculations. In this lecture, we'll see how we can use an external library to make these computations much easier and much faster.
Understand how to use import to add functionality beyond base Python
Compare and contrast NumPy arrays to built-in Python lists
Define "broadcasting" in the context of vectorized programming
Use NumPy arrays in place of explicit loops for basic arithmetic operations
Part 1: Importing modules
With all the data structures we've discussed so far--lists, sets, tuples, dictionaries, comprehensions, generators--it's hard to believe there's anything else. But oh man, is there a big huge world of Python extensions out there.
These extensions are known as modules. You've seen at least one in play in your assignments so far:
End of explanation
x = [3, 7, 2, 9, 4]
print("Maximum: {}".format(max(x)))
print("Minimum: {}".format(min(x)))
Explanation: Anytime you see a statement that starts with import, you'll recognize that the programmer is pulling in some sort of external functionality not previously available to Python by default. In this case, the random package provides some basic functionality for computing random numbers.
That's just one of countless examples...an infinite number that continues to nonetheless increase daily.
Python has a bunch of functionality that comes by default--no import required. Remember writing functions to compute the maximum and minimum of a list? Turns out, those already exist by default (sorry everyone):
End of explanation
import random # For generating random numbers, as we've seen.
import os # For interacting with the filesystem of your computer.
import re # For regular expressions. Unrelated: https://xkcd.com/1171/
import datetime # Helps immensely with determining the date and formatting it.
import math # Gives some basic math functions: trig, factorial, exponential, logarithms, etc.
import xml # Abandon all hope, ye who enter.
Explanation: Quite a bit of other functionality--still built-in to the default Python environment!--requires explicit import statements to unlock. Here are just a couple of examples:
End of explanation
import random
random.randint(0, 1)
Explanation: If you are so inclined, you can see the full Python default module index here: https://docs.python.org/3/py-modindex.html.
It's quite a bit! Made all the more mind-blowing to consider the default Python module index is bit a tiny, miniscule drop in the bucket compared to the myriad 3rd-party module ecosystem.
These packages provides methods and functions wrapped inside, which you can access via the "dot-notation":
End of explanation
import random
random.randint(0, 1)
Explanation: Dot-notation works by
specifying package_name (in this case, random)
dot: .
followed by function_name (in this case, randint, which returns a random integer between two numbers)
As a small tidbit--you can treat imported packages almost like variables, in that you can name them whatever you like, using the as keyword in the import statement.
Instead of
End of explanation
import random as r
r.randint(0, 1)
Explanation: We can tweak it
End of explanation
matrix = [[ 1, 2, 3],
[ 4, 5, 6],
[ 7, 8, 9] ]
print(matrix)
Explanation: You can put whatever you want after the as, and anytime you call methods from that module, you'll use the name you gave it.
Don't worry about trying to memorize all the available modules in core Python; in looking through them just now, I was amazed how many I'd never even heard of. Suffice to say, you can get by.
Especially since, once you get beyond the core modules, there's an ever-expanding universe of 3rd-party modules you can install and use. Anaconda comes prepackaged with quite a few (see the column "In Installer") and the option to manually install quite a few more.
Again, don't worry about trying to learn all these. There are simply too many. You'll come across packages as you need them. For now, we're going to focus on one specific package that is central to most modern data science:
NumPy, short for Numerical Python.
Part 2: Introduction to NumPy
NumPy, or Numerical Python, is an incredible library of basic functions and data structures that provide a robust foundation for computational scientists.
Put another way: if you're using Python and doing any kind of math, you'll probably use NumPy.
At this point, NumPy is so deeply embedded in so many other 3rd-party modules related to scientific computing that even if you're not making explicit use of it, at least one of the other modules you're using probably is.
NumPy's core: the ndarray
In core Python, if we wanted to represent a matrix, we would more or less have to build a "list of lists", a monstrosity along these lines:
End of explanation
import numpy
Explanation: Indexing would still work as you would expect, but looping through a matrix--say, to do matrix multiplication--would be laborious and highly inefficient.
We'll demonstrate this experimentally later, but suffice to say Python lists embody the drawbacks of using an interpreted language such as Python: they're easy to use, but oh so slow.
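To make that point concrete, here's a sketch of matrix multiplication done with nothing but nested lists — three levels of explicit loops:

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Build the result one element at a time: three nested loops.
result = [[0 for _ in range(len(B[0]))] for _ in range(len(A))]
for i in range(len(A)):          # each row of A
    for j in range(len(B[0])):   # each column of B
        for k in range(len(B)):  # dot product of row i with column j
            result[i][j] += A[i][k] * B[k][j]

print(result)  # [[19, 22], [43, 50]]
```

Every one of those multiplications runs through the Python interpreter individually, which is exactly the inefficiency NumPy avoids.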
By contrast, in NumPy, we have the ndarray structure (short for "n-dimensional array") that is a highly optimized version of Python lists, perfect for fast and efficient computations. To make use of NumPy arrays, import NumPy (it's installed by default in Anaconda, and on JupyterHub):
End of explanation
arr = numpy.array(matrix)
print(arr)
Explanation: Now just call the array method using our list from before!
End of explanation
arr[0]
arr[2][2]
Explanation: To reference an element in the array, just use the same notation we did for lists:
End of explanation
arr[2, 2]
Explanation: You can also separate dimensions by commas:
End of explanation
vector = [4.0, 15.0, 6.0, 2.0]
# To normalize this to unit length, we need to divide each element by the vector's magnitude.
# To learn it's magnitude, we need to loop through the whole vector.
# So. We need two loops!
magnitude = 0.0
for element in vector:
magnitude += element ** 2
magnitude = (magnitude ** 0.5) # square root
print("Original magnitude: {:.2f}".format(magnitude))
new_magnitude = 0.0
for index, element in enumerate(vector):
vector[index] = element / magnitude
new_magnitude += vector[index] ** 2
new_magnitude = (new_magnitude ** 0.5)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
Explanation: Remember, with indexing matrices: the first index is the row, the second index is the column.
NumPy's submodules
NumPy has an impressive array of utility modules that come along with it, optimized to use its ndarray data structure. I highly encourage you to use them, even if you're not using NumPy arrays.
1: Basic mathematical routines
All the core functions you could want; for example, all the built-in Python math routines (trig, logs, exponents, etc) all have NumPy versions. (numpy.sin, numpy.cos, numpy.log, numpy.exp, numpy.max, numpy.min)
2: Fourier transforms
If you do any signal processing using Fourier transforms (which we might, later!), NumPy has an entire sub-module full of tools for this type of analysis in numpy.fft
3: Linear algebra
We'll definitely be using this submodule later in the course. This is most of your vector and matrix linear algebra operations, from vector norms (numpy.linalg.norm) to singular value decomposition (numpy.linalg.svd) to matrix determinants (numpy.linalg.det).
4: Random numbers
NumPy has a phenomenal random number library in numpy.random. In addition to generating uniform random numbers in a certain range, you can also sample from any known parametric distribution.
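As a quick taste of that submodule, the generator API below draws from a uniform and a normal distribution in one line each (the seed and sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded generator, so results are reproducible

uniform_draws = rng.random(5)                          # 5 samples from Uniform[0, 1)
normal_draws = rng.normal(loc=0.0, scale=2.0, size=5)  # 5 samples from N(0, 2^2)

print(uniform_draws)
print(normal_draws)
```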
Part 3: Vectorized Arithmetic
"Vectorized arithmetic" refers to how NumPy allows you to efficiently perform arithmetic operations on entire NumPy arrays at once, as you would with "regular" Python variables.
For example: let's say you have a vector and you want to normalize it to be unit length; that involves dividing every element in the vector by a constant (the magnitude of the vector). With lists, you'd have to loop through them manually.
End of explanation
import numpy as np # This tends to be the "standard" convention when importing NumPy.
import numpy.linalg as nla
vector = [4.0, 15.0, 6.0, 2.0]
np_vector = np.array(vector) # Convert to NumPy array.
magnitude = nla.norm(np_vector) # Computing the magnitude: one-liner.
print("Original magnitude: {:.2f}".format(magnitude))
np_vector /= magnitude # Vectorized division!!! No loop needed!
new_magnitude = nla.norm(np_vector)
print("Normalized magnitude: {:.2f}".format(new_magnitude))
Explanation: Now, let's see the same operation, this time with NumPy arrays.
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = x + y
print(z)
Explanation: No loops needed, far fewer lines of code, and a simple intuitive operation.
Operations involving arrays on both sides of the sign will also work (though the two arrays need to be the same length).
For example, adding two vectors together:
End of explanation
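One nuance worth flagging: "same length" really means "broadcast-compatible". A scalar, for instance, is stretched across the whole array, while genuinely incompatible shapes raise an error — a small sketch:

```python
import numpy as np

x = np.array([1, 2, 3])

print(x * 10)                   # scalar is broadcast: [10 20 30]
print(x + np.array([4, 5, 6]))  # same-shape element-wise: [5 7 9]

try:
    x + np.array([1, 2])        # lengths 3 and 2 can't be broadcast together
except ValueError as err:
    print("ValueError:", err)
```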
A = np.array([ [1, 2], [3, 4] ])
B = np.array([ [5, 6], [7, 8] ])
Explanation: Works exactly as you'd expect, but no [explicit] loop needed.
This becomes particularly compelling with matrix multiplication. Say you have two matrices, $A$ and $B$:
End of explanation
A @ B
Explanation: If you recall from algebra, matrix multiplication $A \times B$ involves multiplying each row of $A$ by each column of $B$. But rather than write that code yourself, Python (as of version 3.5) gives us a dedicated matrix multiplication operator: the @ symbol!
End of explanation |
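As a sanity check of that row-times-column rule, we can compare one entry of A @ B against the dot product computed by hand:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

product = A @ B
print(product)  # [[19 22]
                #  [43 50]]

# Entry (0, 0) is row 0 of A dotted with column 0 of B: 1*5 + 2*7 = 19.
assert product[0, 0] == 1 * 5 + 2 * 7
```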
8,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Single trial linear regression analysis with the LIMO dataset
Here we explore the structure of the data contained in the
LIMO dataset.
This example replicates and extends some of the main analysis
and tools integrated in LIMO MEEG, a MATLAB toolbox originally designed
to interface with EEGLAB_.
In summary, the example
Step1: About the data
In the original LIMO experiment (see
Step2: Note that the result of the loading process is an
Step3: Visualize events
We can visualise the distribution of the face events contained in the
limo_epochs structure. Events should appear clearly grouped, as the
epochs are ordered by condition.
Step4: As it can be seen above, conditions are coded as Face/A and Face/B.
Information about the phase-coherence of the presented faces is stored in the
epochs metadata. This information can be easily accessed by calling
limo_epochs.metadata. As shown below, the epochs metadata also contains
information about the presented faces for convenience.
Step5: Now let's take a closer look at the information in the epochs
metadata.
Step6: The first column of the summary table above provides more or less the same
information as the print(limo_epochs) command we ran before. There are
1055 faces (i.e., epochs), subdivided in 2 conditions (i.e., Face A and
Face B) and, for this particular subject, there are more epochs for the
condition Face B.
In addition, we can see in the second column that the values for the
phase-coherence variable range from -1.619 to 1.642. This is because the
phase-coherence values are provided as a z-scored variable in the LIMO
dataset. Note that they have a mean of zero and a standard deviation of 1.
Visualize condition ERPs
Let's plot the ERPs evoked by Face A and Face B, to see how similar they are.
Step7: We can also compute the difference wave contrasting Face A and Face B.
Although, looking at the evoked responses above, we shouldn't expect great
differences among these face-stimuli.
Step8: As expected, no clear pattern appears when contrasting
Face A and Face B. However, we could narrow our search a little bit more.
Since this is a "visual paradigm" it might be best to look at electrodes
located over the occipital lobe, as differences between stimuli (if any)
might be easier to spot over visual areas.
Step9: We do see a difference between Face A and B, but it is pretty small.
Visualize effect of stimulus phase-coherence
Since phase-coherence
determined whether a face stimulus could be easily identified,
one could expect that faces with high phase-coherence should evoke stronger
activation patterns along occipital electrodes.
Step10: As shown above, there are some considerable differences between the
activation patterns evoked by stimuli with low vs. high phase-coherence at
the chosen electrodes.
Prepare data for linear regression analysis
Before we test the significance of these differences using linear
regression, we'll interpolate missing channels that were
dropped during preprocessing of the data.
Furthermore, we'll drop the EOG channels (marked by the "EXG" prefix)
present in the data
Step11: Define predictor variables and design matrix
To run the regression analysis,
we need to create a design matrix containing information about the
variables (i.e., predictors) we want to use for prediction of brain
activity patterns. For this purpose, we'll use the information we have in
limo_epochs.metadata
Step12: Now we can set up the linear model to be used in the analysis using
MNE-Python's func
Step13: Extract regression coefficients
The results are stored within the object reg,
which is a dictionary of evoked objects containing
multiple inferential measures for each predictor in the design matrix.
Step14: Plot model results
Now we can access and plot the results of the linear regression analysis by
calling
Step15: We can also plot the corresponding T values.
Step16: Conversely, there appears to be no (or very small) systematic effects when
comparing Face A and Face B stimuli. This is largely consistent with the
difference wave approach presented above. | Python Code:
# Authors: Jose C. Garcia Alanis <alanis.jcg@gmail.com>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets.limo import load_data
from mne.stats import linear_regression
from mne.viz import plot_events, plot_compare_evokeds
from mne import combine_evoked
print(__doc__)
# subject to use
subj = 1
Explanation: Single trial linear regression analysis with the LIMO dataset
Here we explore the structure of the data contained in the
LIMO dataset.
This example replicates and extends some of the main analysis
and tools integrated in LIMO MEEG, a MATLAB toolbox originally designed
to interface with EEGLAB_.
In summary, the example:
Fetches epoched data files for a single subject of the LIMO dataset
:footcite:Rousselet2016. If the LIMO files are not found on disk, the
fetcher mne.datasets.limo.load_data() will automatically download
the files from a remote repository.
During import, information about the data (i.e., sampling rate, number of
epochs per condition, number and name of EEG channels per subject, etc.) is
extracted from the LIMO :file:.mat files stored on disk and added to the
epochs structure as metadata.
Fits linear models on the single subject's data and visualizes inferential
measures to evaluate the significance of the estimated effects.
End of explanation
# This step can take a little while if you're loading the data for the
# first time.
limo_epochs = load_data(subject=subj)
Explanation: About the data
In the original LIMO experiment (see :footcite:RousseletEtAl2010),
participants performed a
two-alternative forced choice task, discriminating between two face stimuli.
The same two faces were used during the whole experiment,
with varying levels of noise added, making the faces more or less
discernible to the observer (see Fig 1_ in :footcite:RousseletEtAl2008
for a similar approach).
The presented faces varied across a noise-signal (or phase-coherence)
continuum spanning from 0 to 85% in increasing steps of 5%.
In other words, faces with high phase-coherence (e.g., 85%) were easy to
identify, while faces with low phase-coherence (e.g., 5%) were hard to
identify and by extension very hard to discriminate.
Load the data
We'll begin by loading the data from subject 1 of the LIMO dataset.
End of explanation
print(limo_epochs)
Explanation: Note that the result of the loading process is an
:class:mne.EpochsArray containing the data ready to interface
with MNE-Python.
End of explanation
fig = plot_events(limo_epochs.events, event_id=limo_epochs.event_id)
fig.suptitle("Distribution of events in LIMO epochs")
Explanation: Visualize events
We can visualise the distribution of the face events contained in the
limo_epochs structure. Events should appear clearly grouped, as the
epochs are ordered by condition.
End of explanation
print(limo_epochs.metadata.head())
Explanation: As can be seen above, conditions are coded as Face/A and Face/B.
Information about the phase-coherence of the presented faces is stored in the
epochs metadata. This information can be easily accessed by calling
limo_epochs.metadata. As shown below, the epochs metadata also contains
information about the presented faces for convenience.
End of explanation
# We want to include all columns in the summary table
epochs_summary = limo_epochs.metadata.describe(include='all').round(3)
print(epochs_summary)
Explanation: Now let's take a closer look at the information in the epochs
metadata.
End of explanation
# only show -250 to 500 ms
ts_args = dict(xlim=(-0.25, 0.5))
# plot evoked response for face A
limo_epochs['Face/A'].average().plot_joint(times=[0.15],
title='Evoked response: Face A',
ts_args=ts_args)
# and face B
limo_epochs['Face/B'].average().plot_joint(times=[0.15],
title='Evoked response: Face B',
ts_args=ts_args)
Explanation: The first column of the summary table above provides more or less the same
information as the print(limo_epochs) command we ran before. There are
1055 faces (i.e., epochs), subdivided in 2 conditions (i.e., Face A and
Face B) and, for this particular subject, there are more epochs for the
condition Face B.
In addition, we can see in the second column that the values for the
phase-coherence variable range from -1.619 to 1.642. This is because the
phase-coherence values are provided as a z-scored variable in the LIMO
dataset. Note that they have a mean of zero and a standard deviation of 1.
Visualize condition ERPs
Let's plot the ERPs evoked by Face A and Face B, to see how similar they are.
End of explanation
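To make the z-scoring mentioned above concrete, here is a hypothetical reconstruction of how such a standardized variable could be produced from the raw 0-85 % coherence levels (the LIMO files already ship the z-scored values, so this is purely illustrative):

```python
import numpy as np

raw_coherence = np.arange(0.0, 0.90, 0.05)  # 0 % .. 85 % in 5 % steps
z_scored = (raw_coherence - raw_coherence.mean()) / raw_coherence.std()

print(z_scored.mean())  # ~0.0 by construction
print(z_scored.std())   # 1.0 by construction
print(z_scored.min(), z_scored.max())
```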
# Face A minus Face B
difference_wave = combine_evoked([limo_epochs['Face/A'].average(),
limo_epochs['Face/B'].average()],
weights=[1, -1])
# plot difference wave
difference_wave.plot_joint(times=[0.15], title='Difference Face A - Face B')
Explanation: We can also compute the difference wave contrasting Face A and Face B.
Although, looking at the evoked responses above, we shouldn't expect great
differences among these face-stimuli.
End of explanation
# Create a dictionary containing the evoked responses
conditions = ["Face/A", "Face/B"]
evokeds = {condition: limo_epochs[condition].average()
for condition in conditions}
# concentrate analysis an occipital electrodes (e.g. B11)
pick = evokeds["Face/A"].ch_names.index('B11')
# compare evoked responses
plot_compare_evokeds(evokeds, picks=pick, ylim=dict(eeg=(-15, 7.5)))
Explanation: As expected, no clear pattern appears when contrasting
Face A and Face B. However, we could narrow our search a little bit more.
Since this is a "visual paradigm" it might be best to look at electrodes
located over the occipital lobe, as differences between stimuli (if any)
might be easier to spot over visual areas.
End of explanation
phase_coh = limo_epochs.metadata['phase-coherence']
# get levels of phase coherence
levels = sorted(phase_coh.unique())
# create labels for levels of phase coherence (i.e., 0 - 85%)
labels = ["{0:.2f}".format(i) for i in np.arange(0., 0.90, 0.05)]
# create dict of evokeds for each level of phase-coherence
evokeds = {label: limo_epochs[phase_coh == level].average()
for level, label in zip(levels, labels)}
# pick channel to plot
electrodes = ['C22', 'B11']
# create figures
for electrode in electrodes:
fig, ax = plt.subplots(figsize=(8, 4))
plot_compare_evokeds(evokeds,
axes=ax,
ylim=dict(eeg=(-20, 15)),
picks=electrode,
cmap=("Phase coherence", "magma"))
Explanation: We do see a difference between Face A and B, but it is pretty small.
Visualize effect of stimulus phase-coherence
Since phase-coherence
determined whether a face stimulus could be easily identified,
one could expect that faces with high phase-coherence should evoke stronger
activation patterns along occipital electrodes.
End of explanation
limo_epochs.interpolate_bads(reset_bads=True)
limo_epochs.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4'])
Explanation: As shown above, there are some considerable differences between the
activation patterns evoked by stimuli with low vs. high phase-coherence at
the chosen electrodes.
Prepare data for linear regression analysis
Before we test the significance of these differences using linear
regression, we'll interpolate missing channels that were
dropped during preprocessing of the data.
Furthermore, we'll drop the EOG channels (marked by the "EXG" prefix)
present in the data:
End of explanation
# name of predictors + intercept
predictor_vars = ['face a - face b', 'phase-coherence', 'intercept']
# create design matrix
design = limo_epochs.metadata[['phase-coherence', 'face']].copy()
design['face a - face b'] = np.where(design['face'] == 'A', 1, -1)
design['intercept'] = 1
design = design[predictor_vars]
Explanation: Define predictor variables and design matrix
To run the regression analysis,
we need to create a design matrix containing information about the
variables (i.e., predictors) we want to use for prediction of brain
activity patterns. For this purpose, we'll use the information we have in
limo_epochs.metadata: phase-coherence and Face A vs. Face B.
End of explanation
reg = linear_regression(limo_epochs,
design_matrix=design,
names=predictor_vars)
Explanation: Now we can set up the linear model to be used in the analysis using
MNE-Python's func:~mne.stats.linear_regression function.
End of explanation
print('predictors are:', list(reg))
print('fields are:', [field for field in getattr(reg['intercept'], '_fields')])
Explanation: Extract regression coefficients
The results are stored within the object reg,
which is a dictionary of evoked objects containing
multiple inferential measures for each predictor in the design matrix.
End of explanation
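Under the hood, this kind of mass-univariate analysis boils down to ordinary least squares solved independently at every channel/time point. A hypothetical NumPy sketch of that core step (not MNE's actual implementation — just the math it relies on):

```python
import numpy as np

rng = np.random.default_rng(0)

n_epochs = 200
X = np.column_stack([
    rng.normal(size=n_epochs),   # e.g. a face A/B contrast column
    rng.normal(size=n_epochs),   # e.g. a phase-coherence column
    np.ones(n_epochs),           # intercept
])
true_beta = np.array([0.5, -1.0, 2.0])
y = X @ true_beta + 0.1 * rng.normal(size=n_epochs)  # data at one channel/time point

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat.round(2))  # close to [0.5, -1.0, 2.0]
```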
reg['phase-coherence'].beta.plot_joint(ts_args=ts_args,
title='Effect of Phase-coherence',
times=[0.23])
Explanation: Plot model results
Now we can access and plot the results of the linear regression analysis by
calling :samp:reg['{<name of predictor>}'].{<measure of interest>} and
using the
:meth:~mne.Evoked.plot_joint method just as we would do with any other
evoked object.
Below we can see a clear effect of phase-coherence, with higher
phase-coherence (i.e., better "face visibility") having a negative effect on
the activity measured at occipital electrodes around 200 to 250 ms following
stimulus onset.
End of explanation
# use unit=False and scale=1 to keep values at their original
# scale (i.e., avoid conversion to micro-volt).
ts_args = dict(xlim=(-0.25, 0.5),
unit=False)
topomap_args = dict(scalings=dict(eeg=1),
average=0.05)
fig = reg['phase-coherence'].t_val.plot_joint(ts_args=ts_args,
topomap_args=topomap_args,
times=[0.23])
fig.axes[0].set_ylabel('T-value')
Explanation: We can also plot the corresponding T values.
End of explanation
ts_args = dict(xlim=(-0.25, 0.5))
reg['face a - face b'].beta.plot_joint(ts_args=ts_args,
title='Effect of Face A vs. Face B',
times=[0.23])
Explanation: Conversely, there appears to be no (or very small) systematic effects when
comparing Face A and Face B stimuli. This is largely consistent with the
difference wave approach presented above.
End of explanation |
8,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is part of the clifford documentation
Step1: This creates an algebra with the required signature and imports the basis blades into the current workspace. We can check our metric by squaring our grade 1 multivectors.
Step2: As expected this gives us 4 basis vectors that square to 1.0 and one that squares to -1.0, therefore confirming our metric is (4,1).
The up() function implements the mapping of vectors from standard 3D space to conformal space. We can use this to construct conformal objects to play around with.
For example a line at the origin
Step3: The tools submodule of the clifford package contains a wide array of algorithms and tools that can be useful for manipulating objects in CGA. We will use these tools to generate rotors that rotate and translate objects
Step4: In the above snippet of code we have generated rotors for translation and rotation, then combined these, then applied the combined rotor to the original line that we made.
Visualizations
The clifford package can be used alongside pyganja to render CGA objects which can be rotated interactively in a jupyter notebook
Step5: We can also interpolate the objects using the tools in clifford and can visualise the result
Step6: We can do the same for all the other geometric primitives as well
Circles
Step7: Point pairs
Step8: Planes
Step9: Spheres | Python Code:
from clifford.g3c import *
blades
Explanation: This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.
Example 1 Interpolating Conformal Objects
In this example we will look at a few of the tools provided by the clifford package for (4,1) conformal geometric algebra (CGA) and see how we can use them in a practical setting to interpolate geometric primitives.
The first step in using the package for CGA is to generate and import the algebra:
End of explanation
print('e1*e1 ', e1*e1)
print('e2*e2 ', e2*e2)
print('e3*e3 ', e3*e3)
print('e4*e4 ', e4*e4)
print('e5*e5 ', e5*e5)
Explanation: This creates an algebra with the required signature and imports the basis blades into the current workspace. We can check our metric by squaring our grade 1 multivectors.
End of explanation
line_a = ( up(0)^up(e1+e2)^einf ).normal()
print(line_a)
Explanation: As expected this gives us 4 basis vectors that square to 1.0 and one that squares to -1.0, therefore confirming our metric is (4,1).
The up() function implements the mapping of vectors from standard 3D space to conformal space. We can use this to construct conformal objects to play around with.
For example a line at the origin:
End of explanation
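clifford's up() hides the algebra, but the underlying map is easy to verify with bare coordinates. The sketch below is a hypothetical pure-NumPy illustration — using one common convention for the null basis (n_inf = e4 + e5, n_0 = (e5 - e4)/2, which need not match clifford's internal layout) — checking that embedded points square to zero under the (4,1) metric:

```python
import numpy as np

# (4,1) metric: e1..e4 square to +1, e5 squares to -1.
METRIC = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

def inner(a, b):
    return a @ METRIC @ b

N_INF = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # e4 + e5 (point at infinity)
N_0 = np.array([0.0, 0.0, 0.0, -0.5, 0.5])   # (e5 - e4) / 2 (origin)

def up_coords(x3):
    """Conformal embedding: X = x + 0.5*|x|^2 * n_inf + n_0."""
    X = np.zeros(5)
    X[:3] = x3
    return X + 0.5 * np.dot(x3, x3) * N_INF + N_0

for point in ([0.0, 0.0, 0.0], [1.0, 2.0, 3.0]):
    X = up_coords(np.array(point))
    print(point, "->", round(inner(X, X), 12))  # conformal points are null: 0.0
```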
from clifford.tools.g3 import *
from clifford.tools.g3c import *
import numpy as np
from numpy import pi
rotation_radians = pi/4
euc_vector_in_plane_m = e1
euc_vector_in_plane_n = e3
euc_translation = -5.2*e1 + 3*e2 - pi*e3
rotor_rotation = generate_rotation_rotor(rotation_radians, euc_vector_in_plane_m, euc_vector_in_plane_n)
rotor_translation = generate_translation_rotor(euc_translation)
print(rotor_rotation)
print(rotor_translation)
combined_rotor = (rotor_translation*rotor_rotation).normal()
line_b = (combined_rotor*line_a*~combined_rotor).normal()
print(line_b)
Explanation: The tools submodule of the clifford package contains a wide array of algorithms and tools that can be useful for manipulating objects in CGA. We will use these tools to generate rotors that rotate and translate objects:
End of explanation
from pyganja import GanjaScene, draw
sc = GanjaScene()
sc.add_object(line_a,color=0xFF0000, label='a')
sc.add_object(line_b,color=0x00FF00, label='b')
draw(sc, scale=0.1)
Explanation: In the above snippet of code we have generated rotors for translation and rotation, then combined these, then applied the combined rotor to the original line that we made.
Visualizations
The clifford package can be used alongside pyganja to render CGA objects which can be rotated interactively in a jupyter notebook:
End of explanation
def interpolate_objects_linearly(L1, L2, n_steps=10, color_1=np.array([255,0,0]), color_2=np.array([0,255,0])):
alpha_list = np.linspace(0, 1, num=n_steps)
intermediary_list = []
sc = GanjaScene()
for alpha in alpha_list:
intermediate_color = (alpha*color_1 + (1-alpha)*color_2).astype(np.uint8)
intermediate_object = interp_objects_root(L1, L2, alpha)
intermediary_list.append(intermediate_object)
color_string = int(
(intermediate_color[0] << 16) | (intermediate_color[1] << 8) | intermediate_color[2]
)
sc.add_object(intermediate_object, color_string)
return intermediary_list, sc
intermediary_list, finished_scene = interpolate_objects_linearly(line_a, line_b)
draw(finished_scene, scale=0.1)
Explanation: We can also interpolate the objects using the tools in clifford and can visualise the result
End of explanation
circle_a = (up(0)^up(e1)^up(e2)).normal()
circle_b = (combined_rotor*circle_a*~combined_rotor).normal()
intermediary_list, finished_scene = interpolate_objects_linearly(circle_a, circle_b)
draw(finished_scene, scale=0.1)
Explanation: We can do the same for all the other geometric primitives as well
Circles
End of explanation
point_pair_a = (up(e3)^up(e1+e2)).normal()
point_pair_b = (combined_rotor*point_pair_a*~combined_rotor).normal()
intermediary_list, finished_scene = interpolate_objects_linearly(point_pair_a, point_pair_b)
draw(finished_scene, scale=0.1)
Explanation: Point pairs
End of explanation
plane_a = (up(0)^up(e1)^up(e2)^einf).normal()
plane_b = (combined_rotor*plane_a*~combined_rotor).normal()
intermediary_list, finished_scene = interpolate_objects_linearly(plane_a, plane_b)
draw(finished_scene)
Explanation: Planes
End of explanation
sphere_a = (up(0)^up(e1)^up(e2)^up(e3)).normal()
sphere_b = (combined_rotor*sphere_a*~combined_rotor).normal()
intermediary_list, finished_scene = interpolate_objects_linearly(sphere_a, sphere_b)
draw(finished_scene, scale=0.1)
Explanation: Spheres
End of explanation |
8,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyperparameter Optimization xgboost
What the options there're for tuning?
* GridSearch
* RandomizedSearch
All right!
Xgboost has about 20 params
Step1: Modeling
Step2: Tuning hyperparameters using Bayesian optimization algorithms | Python Code:
import pandas as pd
import xgboost as xgb
import numpy as np
import seaborn as sns
from hyperopt import hp
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
%matplotlib inline
train = pd.read_csv('bike.csv')
train['datetime'] = pd.to_datetime( train['datetime'] )
train['day'] = train['datetime'].map(lambda x: x.day)
Explanation: Hyperparameter Optimization xgboost
What options are there for tuning?
* GridSearch
* RandomizedSearch
All right!
Xgboost has about 20 params:
1. base_score
2. colsample_bylevel
3. colsample_bytree
4. gamma
5. learning_rate
6. max_delta_step
7. max_depth
8. min_child_weight
9. missing
10. n_estimators
11. nthread
12. objective
13. reg_alpha
14. reg_lambda
15. scale_pos_weight
16. seed
17. silent
18. subsample
For tuning, let's use 12 of them with 5-10 possible values each, so... there are 5^12 - 10^12 possible combinations.
If you check one case every 10s, 5^12 cases need about 77 years and 10^12 about 300K years :).
This is far too long... but there's a third option - Bayesian optimization.
End of explanation
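The arithmetic behind those timescales: with v candidate values for each of k parameters, an exhaustive grid has v**k cells. A quick back-of-the-envelope check (assumed numbers: 12 parameters, 10 s per evaluation):

```python
SECONDS_PER_CASE = 10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
N_PARAMS = 12

for values_per_param in (5, 10):
    cases = values_per_param ** N_PARAMS
    years = cases * SECONDS_PER_CASE / SECONDS_PER_YEAR
    print(f"{values_per_param} values/param -> {cases:,} cases, ~{years:,.0f} years")
```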
def assing_test_samples(data, last_training_day=0.3, seed=1):
days = data.day.unique()
np.random.seed(seed)
np.random.shuffle(days)
test_days = days[: int(len(days) * last_training_day)]
data['is_test'] = data.day.isin(test_days)
def select_features(data):
columns = data.columns[ (data.dtypes == np.int64) | (data.dtypes == np.float64) | (data.dtypes == np.bool) ].values
return [feat for feat in columns if feat not in ['count', 'casual', 'registered'] and 'log' not in feat ]
def get_X_y(data, target_variable):
features = select_features(data)
X = data[features].values
y = data[target_variable].values
return X,y
def train_test_split(train, target_variable):
df_train = train[train.is_test == False]
df_test = train[train.is_test == True]
X_train, y_train = get_X_y(df_train, target_variable)
X_test, y_test = get_X_y(df_test, target_variable)
return X_train, X_test, y_train, y_test
def fit_and_predict(train, model, target_variable):
X_train, X_test, y_train, y_test = train_test_split(train, target_variable)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
return (y_test, y_pred)
def post_pred(y_pred):
y_pred[y_pred < 0] = 0
return y_pred
def rmsle(y_true, y_pred, y_pred_only_positive=True):
if y_pred_only_positive: y_pred = post_pred(y_pred)
diff = np.log(y_pred+1) - np.log(y_true+1)
mean_error = np.square(diff).mean()
return np.sqrt(mean_error)
assing_test_samples(train)
def etl_datetime(df):
df['year'] = df['datetime'].map(lambda x: x.year)
df['month'] = df['datetime'].map(lambda x: x.month)
df['hour'] = df['datetime'].map(lambda x: x.hour)
df['minute'] = df['datetime'].map(lambda x: x.minute)
df['dayofweek'] = df['datetime'].map(lambda x: x.dayofweek)
df['weekend'] = df['datetime'].map(lambda x: x.dayofweek in [5,6])
etl_datetime(train)
train['{0}_log'.format('count')] = train['count'].map(lambda x: np.log2(x) )
for name in ['registered', 'casual']:
train['{0}_log'.format(name)] = train[name].map(lambda x: np.log2(x+1) )
Explanation: Modeling
End of explanation
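A quick aside on the log2(x + 1) target transform above: it is exactly invertible via 2**y - 1, which is what the objective function below applies to the model's predictions before scoring:

```python
import numpy as np

counts = np.array([0, 1, 7, 100])
encoded = np.log2(counts + 1)   # forward transform applied to the targets
decoded = np.exp2(encoded) - 1  # inverse applied to predictions before scoring

print(decoded)  # recovers [0, 1, 7, 100]
```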
def objective(space):
model = xgb.XGBRegressor(
max_depth = int(space['max_depth']),
n_estimators = int(space['n_estimators']),
subsample = space['subsample'],
colsample_bytree = space['colsample_bytree'],
learning_rate = space['learning_rate'],
reg_alpha = space['reg_alpha']
)
X_train, X_test, y_train, y_test = train_test_split(train, 'count')
eval_set = [( X_train, y_train), ( X_test, y_test)]
(_, registered_pred) = fit_and_predict(train, model, 'registered_log')
(_, casual_pred) = fit_and_predict(train, model, 'casual_log')
y_test = train[train.is_test == True]['count']
y_pred = (np.exp2(registered_pred) - 1) + (np.exp2(casual_pred) -1)
score = rmsle(y_test, y_pred)
print("SCORE:", score)
return {'loss': score, 'status': STATUS_OK}
space ={
'max_depth': hp.quniform("x_max_depth", 2, 20, 1),
'n_estimators': hp.quniform("n_estimators", 100, 1000, 1),
'subsample': hp.uniform ('x_subsample', 0.8, 1),
'colsample_bytree': hp.uniform ('x_colsample_bytree', 0.1, 1),
'learning_rate': hp.uniform ('x_learning_rate', 0.01, 0.1),
'reg_alpha': hp.uniform ('x_reg_alpha', 0.1, 1)
}
trials = Trials()
best = fmin(fn=objective,
space=space,
algo=tpe.suggest,
max_evals=15,
trials=trials)
print(best)
Explanation: Tuning hyperparameters using Bayesian optimization algorithms
End of explanation |
8,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: can we predict which learner will be best on a dataset from dataset properties?
Step2: can we predict the best score achievable on a dataset from dataset properties? | Python Code:
# get best classifier for each dataset
import numpy as np
from tqdm import tqdm
best_method = dict()
for i,(dataset, group_data) in enumerate(tqdm(data.groupby('dataset'))):
best_method[dataset] = group_data['classifier'][np.argmax(group_data['accuracy'])]
# print(best_method)
# make new dataset combining metafeatures and best methods
y = np.empty(metafeatures.shape[0])
methods = data['classifier'].unique()
print('methods:',methods)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
# le.fit(methods)
print('metafeatures[''dataset''].shape:',metafeatures['dataset'].shape)
y_str = [best_method[ds] for ds in metafeatures['dataset'].values]
y = le.fit_transform(y_str)
metaf = metafeatures.dropna(axis=1,how='all')
metaf.fillna(value=0,axis=1,inplace=True)
print(metafeatures.shape[1]-metaf.shape[1],' features dropped due to missing values')
# print(metaf[:10])
from sklearn.preprocessing import StandardScaler, Normalizer
X = Normalizer().fit_transform(metaf.drop('dataset',axis=1).values)
print('X shape:',X.shape)
print('y shape:',y.shape)
# set up ML
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix
# dtc = DecisionTreeClassifier()
dtc = RandomForestClassifier(n_estimators=1000)
# dtc = KNeighborsClassifier(n_neighbors=1)
# dtc.fit(X_t,y_t)
cv = StratifiedShuffleSplit(n_splits=30,test_size=0.1)
print('fitting model...')
# print('mean CV score:',np.mean(cross_val_score(dtc,X,y,cv=LeaveOneOut())))
# print('mean CV score:',np.mean(cross_val_score(dtc,X,y,cv=cv)))
print('confusion matrix:')
import matplotlib.pyplot as plt
import itertools
%matplotlib inline
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y,cross_val_predict(dtc,X,y,cv=LeaveOneOut()))
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=le.classes_,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=le.classes_, normalize=True,
title='Normalized confusion matrix')
plt.show()
Explanation: can we predict which learner will be best on a dataset from dataset properties?
End of explanation
# get best classifier for each dataset
from tqdm import tqdm
best_score = dict()
# print(best_method)
# make new dataset combining metafeatures and best methods
y = np.empty(metafeatures.shape[0])
for i,(dataset, group_data) in enumerate(tqdm(data.groupby('dataset'))):
y[i] = group_data['bal_accuracy'].max()
print('metafeatures[''dataset''].shape:',metafeatures['dataset'].shape)
metaf = metafeatures.dropna(axis=1,how='all')
metaf.fillna(value=0,axis=1,inplace=True)
print(metafeatures.shape[1]-metaf.shape[1],' features dropped due to missing values')
# print(metaf[:10])
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(metaf.drop('dataset',axis=1).values)
print('X shape:',X.shape)
print('y shape:',y.shape)
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoLarsCV
from sklearn.model_selection import cross_val_score
# X_t,X_v,y_t,y_v = train_test_split(X,y)
# est = DecisionTreeClassifier()
est = RandomForestRegressor(n_estimators=100)
# est = LassoLarsCV()
# dtc.fit(X_t,y_t)
print('fitting model...')
print('mean CV score:',np.mean(cross_val_score(est,X,y,cv=5)))
Explanation: can we predict the best score achievable on a dataset from dataset properties?
End of explanation |
8,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Description
Simulation of data-aided frequency synchronization
QPSK symbols are sampled, pulse-shaped and transmitted
A uniformly distributed frequency offset is added before the frequency is estimated using the algorithm discussed in the lecture
Import
Step1: Initialization
Parameters
Step2: Function for Getting RRC Impulse Response
Step3: Simulation
Get Tx signal
Step4: Visualize Tx signal
Step5: Parameters of estimation
Step6: Loop for SNRs and perform simulation
Step7: Show Results
Step8: Correct frequency offset and show symbols | Python Code:
# importing
import sys # used by get_rrc_ir for error handling
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(28, 8) )
Explanation: Description
Simulation of data-aided frequency synchronization
QPSK symbols are sampled, pulse-shaped and transmitted
A uniformly distributed frequency offset is added before the frequency is estimated using the algorithm discussed in the lecture
Import
End of explanation
# number of symbols per sequence/packet
n_symb = 32
# constellation points for modulation scheme
constellation = np.array( [ 1+1j, -1+1j, -1-1j, +1-1j ] ) / np.sqrt(2)
# snr range for simulation
EsN0_dB_min = -15
EsN0_dB_max = 15
EsN0_dB_step = 5
EsN0_dB = np.arange( EsN0_dB_min, EsN0_dB_max + EsN0_dB_step, EsN0_dB_step)
# parameters of the filter
beta = 0.5
n_sps = 4 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2 * syms_per_filt * n_sps + 1 # length of the fir filter
# set symbol time and sample time
t_symb = 1.0
t_sample = t_symb / n_sps
Explanation: Initialization
Parameters
End of explanation
########################
# find impulse response of an RRC filter
########################
def get_rrc_ir(K, n_up, t_symbol, beta):
'''
Determines coefficients of an RRC filter
Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15
NOTE: Length of the IR has to be an odd number
IN: length of IR, upsampling factor, symbol time, roll-off factor
OUT: filter coefficients
'''
K = int(K)
if ( K%2 == 0):
print('Length of the impulse response should be an odd number')
sys.exit()
# initialize np.array
rrc = np.zeros( K )
# find sample time and initialize index vector
t_sample = t_symbol / n_up # use the n_up argument rather than the global n_sps
time_ind = np.linspace( -(K-1)/2, (K-1)/2, K)
# assign values of rrc
# if roll-off factor equals 0 use rc
if beta != 0:
# loop for times and assign according values
for t_i in time_ind:
t = (t_i)* t_sample
if t_i==0:
rrc[ int( t_i+(K-1)/2 ) ] = (1-beta+4*beta/np.pi)
elif np.abs(t)==t_symbol/(4*beta):
# apply l'Hospital
rrc[ int( t_i+(K-1)/2 ) ] = beta*np.sin(np.pi/(4*beta)*(1+beta)) - 2*beta/np.pi*np.cos(np.pi/(4*beta)*(1+beta))
else:
rrc[ int( t_i+(K-1)/2 ) ] = (4*beta*t/t_symbol * np.cos(np.pi*(1+beta)*t/t_symbol) + np.sin(np.pi*(1-beta)*t/t_symbol) ) / (np.pi*t/t_symbol*(1-(4*beta*t/t_symbol)**2) )
rrc = rrc / np.sqrt(t_symbol)
else:
for t_i in time_ind:
t = t_i * t_sample
if np.abs(t)<t_sample/20:
rrc[ int( t_i + (K-1)/2 ) ] = 1
else:
rrc[ int( t_i + (K-1)/2 ) ] = np.sin(np.pi*t/t_symbol)/(np.pi*t/t_symbol)
return rrc
Explanation: Function for Getting RRC Impulse Response
End of explanation
# find rrc response and normalize to energy 1
rrc = get_rrc_ir( K_filt, n_sps, t_symb, beta)
rrc = rrc / np.linalg.norm( rrc )
# generate random binary vector and modulate the specified modulation scheme
data = np.random.randint( 4, size=n_symb )
s = [ constellation[ d ] for d in data ]
# prepare sequence to be filtered by upsampling
s_up = np.zeros( n_symb * n_sps, dtype = complex )
s_up[ : : n_sps ] = s
# apply RRC filtering
s_Tx = np.convolve( rrc, s_up )
# vector of time samples for Tx signal
t_Tx = np.arange( len(s_Tx) ) * t_sample
t_symbol = np.arange( n_symb ) * t_symb
Explanation: Simulation
Get Tx signal
End of explanation
plt.subplot(121)
plt.plot( np.real( s_Tx ), label='Inphase' )
plt.plot( np.imag( s_Tx ), label='Quadrature' )
plt.xlabel('$t$')
plt.ylabel('$s(t)$')
plt.grid(True)
plt.legend( loc='upper right' )
plt.subplot(122)
plt.psd( s_Tx, Fs=1/t_sample);
Explanation: Visualize Tx signal
End of explanation
# vector for storing variances of estimation
var_delta_f = np.zeros( len(EsN0_dB) )
# number of trials per simulation point
N_trials_f = int( 1e3 )
# delta f max (taken in both directions, i.e. [-delta_f_max, delta_f_max])
delta_f_max = .15 / t_symb
# vector for (discrete) search for frequency estimation
numb_steps_f = 50
#numb_steps_f = int( 1e3 )
f_est_vector = np.linspace( -delta_f_max, delta_f_max, numb_steps_f )
# switch determining whether estimation shall be interpolated or used as it stands
# NOTE: Explained in the slides; start with choice "0"
interpolate = 1
Explanation: Parameters of estimation
End of explanation
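Before running the full pulse-shaped simulation below, the core of the data-aided estimator can be isolated: correlate the received symbols against frequency-shifted replicas of the known symbols and take the grid frequency that maximizes $|\sum_n a_n^* y_n e^{-j 2\pi f n T}|$. A minimal self-contained sketch (noiseless, no pulse shaping; the seed and grid resolution are illustrative choices, not taken from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
t_symb = 1.0
n_symb = 32
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

a = constellation[rng.integers(4, size=n_symb)]        # known (data-aided) symbols
n = np.arange(n_symb)
f_true = 0.06 / t_symb                                 # offset to be recovered
y = a * np.exp(1j * 2 * np.pi * f_true * n * t_symb)   # received symbols, noiseless

# grid search of the correlation metric, as in the simulation loop below
f_grid = np.linspace(-0.15, 0.15, 601) / t_symb
metric = [np.abs(np.sum(np.conj(a) * y * np.exp(-1j * 2 * np.pi * f * n * t_symb)))
          for f in f_grid]
f_hat = f_grid[int(np.argmax(metric))]
print(f_hat)  # ~0.06: the true offset lies on the grid and maximizes the metric
```

With noise, pulse shaping and matched filtering added, this is exactly the search performed inside the SNR loop of the simulation.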
# loop for SNRs
for ind_esn0, esn0 in enumerate(EsN0_dB):
# determine variance of the noise
sigma2 = 10**( -esn0/10 )
# initialize error vector
delta_f = np.zeros( N_trials_f )
# loop for trials with different f_off
for n in range( N_trials_f ):
# apply phase and frequency offset to obtain distorted Tx signal
f_off = np.random.uniform( - delta_f_max, delta_f_max )
phi_off = 0
s_Rx = np.exp( 1j * phi_off ) * np.exp( 1j * 2 * np.pi * f_off * t_Tx ) * s_Tx
# add noise
noise = np.sqrt(sigma2/2) * ( np.random.randn(len(s_Rx)) + 1j * np.random.randn(len(s_Rx)) )
# apply noise and insert asynchronity with respect to time
delta_tau = 0 # np.random.randint( 0, n_sps )
r = np.roll( s_Rx + noise, delta_tau)
# signal after MF
y_mf = np.convolve( rrc, r )
# down-sampling to symbol time
y_down = y_mf[ K_filt-1 : K_filt-1 + len(s) * n_sps : n_sps ]
# determine frequency estimation according to the approximated ML rule out of [Mengali]
f_est = 0
Gamma_f = [
np.abs(
np.sum(
np.conjugate( s ) *
np.exp( - 1j * 2 * np.pi * t_symbol * f ) *
y_down
)
)
for f in f_est_vector
]
ind_est = np.argmax( Gamma_f )
f_est = f_est_vector[ ind_est ]
# apply interpolation if activated
if interpolate == True:
# neighbored frequency samples and according values
f_est_neighborhood = f_est_vector[ ind_est-1 : ind_est + 2 ]
gamma_neighborhood = [ np.abs( np.sum( np.conjugate( s ) *
np.exp( - 1j * 2 * np.pi * t_symbol * f ) * y_down ) )
for f in f_est_neighborhood ]
# find coefficients of the quadratic by constructing a matrix-vector equation and solving with the pseudo-inverse
# NOTE: frequencies on border points are avoided
A = np.ones( (3, 3) )
try:
A[ :, 0 ] = f_est_neighborhood[ : ]**2
A[ :, 1 ] = f_est_neighborhood[ : ]
except:
continue
# solve by using pseudo-inverse
coeff_quad_interpol = np.dot( np.linalg.pinv( A ) , gamma_neighborhood )
f_est = - coeff_quad_interpol[ 1 ] / ( 2*coeff_quad_interpol[ 0 ] )
# determining deviation
delta_f[ n ] = f_est - f_off
# find mean and mse of estimation
var_delta_f[ ind_esn0 ] = np.var( delta_f )
# show progress
print('SNR: {}'.format( esn0 ) )
Explanation: Loop for SNRs and perform simulation
End of explanation
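The interpolation branch in the loop above refines the grid estimate by fitting a parabola through the metric at the best grid point and its two neighbours and taking the vertex. The same idea in a self-contained form (unit grid spacing; the example parabola is my own):

```python
def parabolic_vertex(y_left, y_mid, y_right):
    # Vertex location (in units of the grid step, relative to the middle sample)
    # of the parabola through three equally spaced samples around a discrete peak.
    return 0.5 * (y_left - y_right) / (y_left - 2.0 * y_mid + y_right)

# exact recovery for a true parabola peaking at x = 0.3
f = lambda x: -(x - 0.3) ** 2
offset = parabolic_vertex(f(-1.0), f(0.0), f(1.0))
print(offset)  # ~0.3
```

For a peaked but non-parabolic metric, the vertex is only an approximation of the true maximum, which is why the refined estimate still has a residual error above the MCRB.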
# determine modified Cramer Rao bound according to (3.2.29) in [Mengali]
# NOTE: Gives a lower bound on variance of estimation error
mcrb = 3.0 / ( 2 * np.pi**2 * n_symb**3 * 10**(EsN0_dB/10) * t_symb**2)
# plot frequency error
plt.figure()
plt.plot( EsN0_dB, var_delta_f, '-D', label='MSE', ms = 18, linewidth=2.0 )
plt.plot( EsN0_dB, mcrb, label='MCRB', linewidth=2.0 )
plt.grid(True)
plt.legend(loc='upper right')
plt.xlabel('$E_s/N_0 \; (dB)$')
plt.ylabel('$E( (\\hat{f}-f_{off})^2)$')
plt.semilogy()
Explanation: Show Results
End of explanation
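For reference, the mcrb array plotted above restates the modified Cramér-Rao bound of (3.2.29) in [Mengali], with $N$ = n_symb, $T$ = t_symb and $E_s/N_0$ on a linear scale:

$$\mathrm{MCRB}(f) = \frac{3}{2\pi^2\, N^3\, T^2\, (E_s/N_0)}$$

The $N^{-3}$ dependence reflects that longer observation windows pin down a frequency much faster than they would a phase.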
# correct frequency deviation and resample
r_corrected = r * np.exp( -1j * 2 * np.pi * f_est * t_Tx )
y_mf_corrected = np.convolve( rrc, r_corrected )
y_down_corrected = y_mf_corrected[ K_filt-1 : K_filt-1 + len(s) * n_sps : n_sps ]
# show signals
plt.plot( np.real( s_Tx ), label='Tx' )
plt.plot( np.real( y_mf ), label='After MF' )
plt.plot( np.real( y_mf_corrected ), label='MF corrected' )
plt.title('Signals in Tx and Rx')
plt.grid( True )
plt.xlabel('$t$')
plt.legend( loc='upper right' )
# show symbols
markerline, stemlines, baseline = plt.stem( np.arange(len(s)), np.real(s), use_line_collection=True, label='syms Tx')
plt.setp(markerline, 'markersize', 8, 'markerfacecolor', 'b')
markerline, stemlines, baseline = plt.stem( np.arange(len(y_down)), np.real(y_down), use_line_collection=True, label='syms Rx')
plt.setp(markerline, 'markersize', 12, 'markerfacecolor', 'r')
markerline, stemlines, baseline = plt.stem( np.arange(len(y_down_corrected)), np.real(y_down_corrected), use_line_collection=True, label='$y_{corr}(t)$')
plt.setp(markerline, 'markersize', 12, 'markerfacecolor', 'g')
plt.title('Symbols in Tx and Rx')
plt.legend(loc='upper right')
plt.grid(True)
plt.xlabel('$n$')
plt.ylabel('Re$\{I_n\}$')
Explanation: Correct frequency offset and show symbols
End of explanation |
8,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples
Step1: Try It Yourself
Go to the section 4.4. Numeric Types in the Python 3 documentation at https://docs.python.org/3.4/library/stdtypes.html.
Step2: Variables can be reassigned.
Step3: The ability to reassign variable values becomes important when iterating through groups of objects for batch processing or other purposes. In the example below, the value of b is dynamically updated every time the while loop is executed
Step4: Variable data types can be inferred, so Python does not require us to declare the data type of a variable on assignment.
Step5: is equivalent to
Step6: There are cases when we may want to declare the data type, for example to assign a different data type from the default that will be inferred. Concatenating strings provides a good example.
Step7: Above, Python has inferred the type of the variable pizzas to be an integer. Since strings can only be concatenated with other strings, our print statement generates an error. There are two ways we can resolve the error
Step8: Given the following variable assignments
Step9: There are multiple ways to create a list
Step10: We can inspect our lists
Step11: The above output for constructed_list may seem odd. Referring to the documentation, we see that the argument to the type constructor is an iterable, which according to the documentation is "An object capable of returning its members one at a time." In our constructor statement above
```
Using the type constructor
constructed_list = list('purple')
```
the word 'purple' is the object - in this case a word - that when used to construct a list returns its members (individual letters) one at a time.
Compare the outputs below
Step12: Lists in Python are
Step13: Info on additional list methods is available at https | Python Code:
# The interpreter can be used as a calculator, and can also echo or concatenate strings.
3 + 3
3 * 3
3 ** 3
3 / 2 # classic division - output is a floating point number
# Use quotes around strings
'dogs'
# + operator can be used to concatenate strings
'dogs' + "cats"
print('Hello World!')
Explanation: Examples: Variables and Data Types
The Interpreter
End of explanation
a = 5
b = 10
a + b
Explanation: Try It Yourself
Go to the section 4.4. Numeric Types in the Python 3 documentation at https://docs.python.org/3.4/library/stdtypes.html. The table in that section describes different operators - try some!
What is the difference between the different division operators (/, //, and %)?
Variables
Variables allow us to store values for later use.
End of explanation
b = 38764289.1097
a + b
Explanation: Variables can be reassigned.
End of explanation
a = 5
b = 10
while b > a:
print("b="+str(b))
b = b-1
Explanation: The ability to reassign variable values becomes important when iterating through groups of objects for batch processing or other purposes. In the example below, the value of b is dynamically updated every time the while loop is executed:
End of explanation
a = 5
type(a)
Explanation: Variable data types can be inferred, so Python does not require us to declare the data type of a variable on assignment.
End of explanation
a = int(5)
type(a)
c = 'dogs'
print(type(c))
c = str('dogs')
print(type(c))
Explanation: is equivalent to
End of explanation
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + pizzas + ' pizzas.')
Explanation: There are cases when we may want to declare the data type, for example to assign a different data type from the default that will be inferred. Concatenating strings provides a good example.
End of explanation
customer = 'Carol'
pizzas = str(2)
print(customer + ' ordered ' + pizzas + ' pizzas.')
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + str(pizzas) + ' pizzas.')
Explanation: Above, Python has inferred the type of the variable pizzas to be an integer. Since strings can only be concatenated with other strings, our print statement generates an error. There are two ways we can resolve the error:
Declare the pizzas variable as type string (str) on assignment or
Re-cast the pizzas variable as a string within the print statement.
End of explanation
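A third option, not covered above, is string formatting — for example an f-string (Python 3.6+), which converts non-string values automatically:

```python
customer = 'Carol'
pizzas = 2
message = f'{customer} ordered {pizzas} pizzas.'
print(message)  # Carol ordered 2 pizzas.
```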
# Separate list items with commas!
number_list = [1, 2, 3, 4, 5]
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
combined_list = [1, 2, 'oranges', 3.14, 'peaches', 'grapes', 99.19876]
# Nested lists - lists of lists - are allowed.
list_of_lists = [[1, 2, 3], ['oranges', 'grapes', 8], [['small list'], ['bigger', 'list', 55], ['url_1', 'url_2']]]
Explanation: Given the following variable assignments:
x = 12
y = str(14)
z = 'donuts'
Predict the output of the following:
y + z
x + y
x + int(y)
str(x) + y
Check your answers in the interpreter.
Variable Naming Rules
Variable names are case sensitive and:
Can only consist of one "word" (no spaces).
Must begin with a letter or underscore character ('_').
Can only use letters, numbers, and the underscore character.
We further recommend using variable names that are meaningful within the context of the script and the research.
Lists
https://docs.python.org/3/library/stdtypes.html?highlight=lists#list
Lists are a type of collection in Python. Lists allow us to store sequences of items that are typically but not always similar. All of the following lists are legal in Python:
End of explanation
# Create an empty list
empty_list = []
# As we did above, by using square brackets around a comma-separated sequence of items
new_list = [1, 2, 3]
# Using the type constructor
constructed_list = list('purple')
# Using a list comprehension
result_list = [i for i in range(1, 20)]
Explanation: There are multiple ways to create a list:
End of explanation
empty_list
new_list
result_list
constructed_list
Explanation: We can inspect our lists:
End of explanation
constructed_list_int = list(123)
constructed_list_str = list('123')
constructed_list_str
Explanation: The above output for constructed_list may seem odd. Referring to the documentation, we see that the argument to the type constructor is an iterable, which according to the documentation is "An object capable of returning its members one at a time." In our constructor statement above
```
Using the type constructor
constructed_list = list('purple')
```
the word 'purple' is the object - in this case a word - that when used to construct a list returns its members (individual letters) one at a time.
Compare the outputs below:
End of explanation
ordered = [3, 2, 7, 1, 19, 0]
ordered
# There is a 'sort' method for sorting list items as needed:
ordered.sort()
ordered
Explanation: Lists in Python are:
mutable - the list and list items can be changed
ordered - list items keep the same "place" in the list
Ordered here does not mean sorted. The list below is printed with the numbers in the order we added them to the list, not in numeric order:
End of explanation
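A related distinction worth knowing (an addition, not from the original lesson): the built-in sorted() returns a new sorted list and leaves the original untouched, while the .sort() method used above sorts in place and returns None:

```python
nums = [3, 2, 7, 1, 19, 0]
in_order = sorted(nums)   # new list; nums is untouched
print(nums)               # [3, 2, 7, 1, 19, 0]
print(in_order)           # [0, 1, 2, 3, 7, 19]
nums.sort()               # sorts in place and returns None
print(nums)               # [0, 1, 2, 3, 7, 19]
```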
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
string_list[0]
# We can use positions to 'slice' or select sections of a list:
string_list[3:]
string_list[:3]
string_list[1:4]
# If we don't know the position of a list item, we can use the 'index()' method to find out.
# Note that in the case of duplicate list items, this only returns the position of the first one:
string_list.index('pears')
string_list.append('oranges')
string_list
string_list.index('oranges')
Explanation: Info on additional list methods is available at https://docs.python.org/3/library/stdtypes.html?highlight=lists#mutable-sequence-types
Because lists are ordered, it is possible to access list items by referencing their positions. Note that the position of the first item in a list is 0 (zero), not 1!
End of explanation |
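One more positional trick (my addition): negative indices count from the end of the list, so -1 always refers to the last item:

```python
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
print(string_list[-1])    # pineapples
print(string_list[-2:])   # ['grapes', 'pineapples']
```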
8,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Defensive programming (2)
We have seen the basic idea that we can insert
assert statements into code, to check that the
results are what we expect, but how can we test
software more fully? Can doing this help us
avoid bugs in the first place?
One possible approach is test driven development.
Many people think this reduces the number of bugs in
software as it is written, but evidence for this in the
sciences is somewhat limited as it is not always easy
to say what the right answer should be before writing the
software. Having said that, the tests involved in test
driven development are certainly useful even if some of
them are written after the software.
We will look at a new (and quite difficult) problem,
finding the overlap between ranges of numbers. For
example, these could be the dates that different
sensors were running, and you need to find the
date ranges where all sensors recorded data before
running further analysis.
<img src="python-overlapping-ranges.svg">
Start off by imagining you have a working function range_overlap that takes
a list of tuples. Write some assert statements that would check if the answer from this
function is correct. Put these in a function. Think of different cases, including
edge cases (which may show a subtle bug).
Step1: But what if there is no overlap? What if they just touch?
Step2: What about the case of a single range?
Step3: Then write a solution - one possible one is below.
Step4: And test it...
Step5: Should we add to the tests?
Can you write version with fewer bugs. My attempt is below. | Python Code:
def test_range_overlap():
assert range_overlap([(-3.0, 5.0), (0.0, 4.5), (-1.5, 2.0)]) == (0.0, 2.0)
assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
Explanation: # Defensive programming (2)
We have seen the basic idea that we can insert
assert statements into code, to check that the
results are what we expect, but how can we test
software more fully? Can doing this help us
avoid bugs in the first place?
One possible approach is test driven development.
Many people think this reduces the number of bugs in
software as it is written, but evidence for this in the
sciences is somewhat limited as it is not always easy
to say what the right answer should be before writing the
software. Having said that, the tests involved in test
driven development are certainly useful even if some of
them are written after the software.
We will look at a new (and quite difficult) problem,
finding the overlap between ranges of numbers. For
example, these could be the dates that different
sensors were running, and you need to find the
date ranges where all sensors recorded data before
running further analysis.
<img src="python-overlapping-ranges.svg">
Start off by imagining you have a working function range_overlap that takes
a list of tuples. Write some assert statements that would check if the answer from this
function is correct. Put these in a function. Think of different cases, including
edge cases (which may show a subtle bug).
End of explanation
def test_range_overlap_no_overlap():
assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
Explanation: But what if there is no overlap? What if they just touch?
End of explanation
def test_range_overlap_one_range():
assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
Explanation: What about the case of a single range?
End of explanation
def range_overlap(ranges):
# Return common overlap among a set of [low, high] ranges.
lowest = -1000.0
highest = 1000.0
for (low, high) in ranges:
lowest = max(lowest, low)
highest = min(highest, high)
return (lowest, highest)
Explanation: Then write a solution - one possible one is below.
End of explanation
test_range_overlap()
test_range_overlap_no_overlap()
test_range_overlap_one_range()
Explanation: And test it...
End of explanation
def pairs_overlap(rangeA, rangeB):
# Check if A starts after B ends and
# A ends before B starts. If both are
# false, there is an overlap.
# We are assuming (0.0 1.0) and
# (1.0 2.0) do not overlap. If these should
# overlap swap >= for > and <= for <.
overlap = not ((rangeA[0] >= rangeB[1]) or
(rangeA[1] <= rangeB[0]))
return overlap
def find_overlap(rangeA, rangeB):
# Return the overlap between range
# A and B
if pairs_overlap(rangeA, rangeB):
low = max(rangeA[0], rangeB[0])
high = min(rangeA[1], rangeB[1])
return (low, high)
else:
return None
def range_overlap(ranges):
# Return common overlap among a set of
# [low, high] ranges.
if len(ranges) == 1:
# Special case of one range -
# overlaps with itself
return(ranges[0])
elif len(ranges) == 2:
# Just return from find_overlap
return find_overlap(ranges[0], ranges[1])
else:
# Range of A, B, C is the
# range of range(B,C) with
# A, etc. Do this by recursion...
overlap = find_overlap(ranges[-1], ranges[-2])
if overlap is not None:
# Chop off the end of ranges and
# replace with the overlap
ranges = ranges[:-2]
ranges.append(overlap)
# Now run again, with the smaller list.
return range_overlap(ranges)
else:
return None
test_range_overlap()
test_range_overlap_one_range()
test_range_overlap_no_overlap()
Explanation: Should we add to the tests?
Can you write a version with fewer bugs? My attempt is below.
End of explanation |
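One further way to harden the tests (my addition, not part of the original exercise) is a property-style check: generate random inputs and assert an invariant that any correct implementation must satisfy — here, that a returned overlap lies inside every input range. The compact range_overlap below is just one possible correct implementation, included only to keep the sketch self-contained:

```python
import random

def range_overlap(ranges):
    # One possible correct implementation; ranges that merely touch do not overlap.
    low = max(r[0] for r in ranges)
    high = min(r[1] for r in ranges)
    return (low, high) if low < high else None

random.seed(0)
for _ in range(200):
    ranges = [tuple(sorted((random.uniform(-5, 5), random.uniform(-5, 5))))
              for _ in range(random.randint(1, 4))]
    result = range_overlap(ranges)
    if result is not None:
        lo, hi = result
        # the overlap must sit inside every input range
        assert all(r[0] <= lo and hi <= r[1] for r in ranges)
```

Unlike hand-picked cases, a property-style test exercises many inputs at once, though it only checks the invariant, not the exact expected value.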
8,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planning
Chapters 10-11
This notebook serves as supporting material for topics covered in Chapter 10 - Classical Planning and Chapter 11 - Planning and Acting in the Real World from the book Artificial Intelligence: A Modern Approach.
Step1: CONTENTS
Classical Planning
- PlanningProblem
- Action
- Planning Problems
* Air cargo problem
* Spare tire problem
* Three block tower problem
* Shopping Problem
* Socks and shoes problem
* Cake problem
- Solving Planning Problems
* GraphPlan
* Linearize
* PartialOrderPlanner
<br>
Planning in the real world
- Problem
- HLA
- Planning Problems
* Job shop problem
* Double tennis problem
- Solving Planning Problems
* Hierarchical Search
* Angelic Search
PlanningProblem
PDDL stands for Planning Domain Definition Language.
The PlanningProblem class is used to represent planning problems in this module. The following attributes are essential to be able to define a problem
Step2: The init attribute is an expression that forms the initial knowledge base for the problem.
<br>
The goals attribute is an expression that indicates the goals to be reached by the problem.
<br>
Lastly, actions contains a list of Action objects that may be executed in the search space of the problem.
<br>
The goal_test method checks if the goal has been reached.
<br>
The act method acts out the given action and updates the current state.
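As a rough mental model (my own simplification, not the module's implementation): if a state and a goal are both represented as sets of ground literals, goal_test amounts to a subset check:

```python
def goal_test(goals, state):
    # every goal literal must already hold in the current state
    return goals <= state

state = {'At(C1, JFK)', 'At(C2, SFO)', 'At(P1, SFO)'}
print(goal_test({'At(C1, JFK)'}, state))  # True
print(goal_test({'At(C1, SFO)'}, state))  # False
```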
<br>
ACTION
To be able to model a planning problem properly, it is essential to be able to represent an Action. Each action we model requires at least three things
Step3: This class represents an action given the expression, the preconditions and its effects.
A list precond stores the preconditions of the action and a list effect stores its effects.
Negative preconditions and effects are input using a ~ symbol before the clause, which are internally prefixed with a Not to make it easier to work with.
For example, the negation of At(obj, loc) will be input as ~At(obj, loc) and internally represented as NotAt(obj, loc).
This equivalently creates a new clause for each negative literal, removing the hassle of maintaining two separate knowledge bases.
This greatly simplifies algorithms like GraphPlan as we will see later.
The convert method takes an input string, parses it, removes conjunctions if any and returns a list of Expr objects.
The check_precond method checks if the preconditions for that action are valid, given a kb.
The act method carries out the action on the given knowledge base.
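The same mechanics can be sketched in a self-contained way (my own simplification — literals as plain strings with ~ marking a negative effect, rather than the module's Expr objects and knowledge base):

```python
def check_precond(precond, state):
    return all(p in state for p in precond)

def act(state, precond, effect):
    if not check_precond(precond, state):
        raise Exception('Action pre-conditions not satisfied')
    new_state = set(state)
    for e in effect:
        if e.startswith('~'):
            new_state.discard(e[1:])   # negative effect: delete the literal
        else:
            new_state.add(e)           # positive effect: add the literal
    return new_state

state = {'At(Spare, Trunk)', 'At(Flat, Axle)'}
state = act(state, ['At(Spare, Trunk)'],
            ['At(Spare, Ground)', '~At(Spare, Trunk)'])
print(sorted(state))  # ['At(Flat, Axle)', 'At(Spare, Ground)']
```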
Now let's try to define a planning problem using these tools. Since we already know about the map of Romania, let's see if we can plan a trip across a simplified map of Romania.
Here is our simplified map definition
Step4: Let us add some logic propositions to complete our knowledge about travelling around the map. These are the typical symmetry and transitivity properties of connections on a map. We can now be sure that our knowledge_base understands what it truly means for two locations to be connected in the sense usually meant by humans when we use the term.
Let's also add our starting location - Sibiu to the map.
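The effect of the symmetry and transitivity axioms can be illustrated without the logic machinery, as plain graph reachability over a deliberately abbreviated (assumed) edge list:

```python
# a few one-way road segments (an illustrative subset, not the full map)
edges = {('Sibiu', 'RimnicuVilcea'), ('RimnicuVilcea', 'Pitesti'),
         ('Pitesti', 'Bucharest')}

# symmetry: if A is connected to B, then B is connected to A
sym = edges | {(b, a) for a, b in edges}

def reachable(start, goal, roads):
    # transitivity, made operational as a simple graph search
    frontier, seen = [start], {start}
    while frontier:
        here = frontier.pop()
        if here == goal:
            return True
        for a, b in roads:
            if a == here and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

print(reachable('Sibiu', 'Bucharest', sym))   # True
print(reachable('Bucharest', 'Sibiu', sym))   # True, thanks to symmetry
```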
Step5: We now have a complete knowledge base, which can be seen like this
Step6: We now define possible actions to our problem. We know that we can drive between any connected places. But, as is evident from this list of Romanian airports, we can also fly directly between Sibiu, Bucharest, and Craiova.
We can define these flight actions like this
Step7: And the drive actions like this.
Step8: Our goal is defined as
Step9: Finally, we can define a function that will tell us when we have reached our destination, Bucharest.
Step10: Thus, with all the components in place, we can define the planning problem.
Step11: PLANNING PROBLEMS
Air Cargo Problem
In the Air Cargo problem, we start with cargo at two airports, SFO and JFK. Our goal is to send each cargo to the other airport. We have two airplanes to help us accomplish the task.
The problem can be defined with three actions
Step12: At(c, a)
Step13: Before taking any actions, we will check if airCargo has reached its goal
Step14: It returns False because the goal state is not yet reached. Now, we define the sequence of actions that it should take in order to achieve the goal.
The actions are then carried out on the airCargo PlanningProblem.
The actions available to us are the following
Step15: As the airCargo has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal
Step16: It has now achieved its goal.
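A problem this small can even be solved by blind forward search. Here is a self-contained illustration (my own, not the module's planner): breadth-first search over frozenset states for a one-cargo, one-plane instance:

```python
from collections import deque
from itertools import product

cargo, planes, airports = ['C1'], ['P1'], ['SFO', 'JFK']

def actions(state):
    # each yield is (name, add_list, delete_list)
    for c, p, a in product(cargo, planes, airports):
        if f'At({c}, {a})' in state and f'At({p}, {a})' in state:
            yield f'Load({c}, {p}, {a})', {f'In({c}, {p})'}, {f'At({c}, {a})'}
        if f'In({c}, {p})' in state and f'At({p}, {a})' in state:
            yield f'Unload({c}, {p}, {a})', {f'At({c}, {a})'}, {f'In({c}, {p})'}
    for p, src, dst in product(planes, airports, airports):
        if src != dst and f'At({p}, {src})' in state:
            yield f'Fly({p}, {src}, {dst})', {f'At({p}, {dst})'}, {f'At({p}, {src})'}

def bfs_plan(init, goals):
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goals <= state:
            return plan
        for name, add, delete in actions(state):
            nxt = frozenset((state - delete) | add)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))

plan = bfs_plan({'At(C1, SFO)', 'At(P1, SFO)'}, {'At(C1, JFK)'})
print(plan)  # ['Load(C1, P1, SFO)', 'Fly(P1, SFO, JFK)', 'Unload(C1, P1, JFK)']
```

With two cargoes and two planes the state space grows quickly, which is what motivates the more informed planners (GraphPlan, partial-order planning) listed in the contents.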
The Spare Tire Problem
Let's consider the problem of changing a flat tire of a car.
The goal is to mount a spare tire onto the car's axle, given that we have a flat tire on the axle and a spare tire in the trunk.
Step17: At(obj, loc)
Step18: Before taking any actions, we will check if spare_tire has reached its goal
Step19: As we can see, it hasn't completed the goal.
We now define a possible solution that can help us reach the goal of having a spare tire mounted onto the car's axle.
The actions are then carried out on the spareTire PlanningProblem.
The actions available to us are the following
Step20: This is a valid solution.
<br>
Another possible solution is
Step21: Notice that both solutions work, which means that the problem can be solved irrespective of the order in which the Remove actions take place, as long as both Remove actions take place before the PutOn action.
We have successfully mounted a spare tire onto the axle.
Three Block Tower Problem
This problem's domain consists of a set of cube-shaped blocks sitting on a table.
The blocks can be stacked, but only one block can fit directly on top of another.
A robot arm can pick up a block and move it to another position, either on the table or on top of another block.
The arm can pick up only one block at a time, so it cannot pick up a block that has another one on it.
The goal will always be to build one or more stacks of blocks.
In our case, we consider only three blocks.
The particular configuration we will use is called the Sussman anomaly after Prof. Gerry Sussman.
Let's take a look at the definition of three_block_tower() in the module.
Step22: On(b, x)
Step23: Before taking any actions, we will check if threeBlockTower has reached its goal
Step24: As we can see, it hasn't completed the goal.
We now define a sequence of actions that can stack three blocks in the required order.
The actions are then carried out on the threeBlockTower PlanningProblem.
The actions available to us are the following
Step25: As the three_block_tower has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal.
Step26: It has now successfully achieved its goal, i.e., to build a stack of three blocks in the specified order.
The three_block_tower problem can also be defined in simpler terms using just two actions ToTable(x, y) and FromTable(x, y).
The underlying problem remains the same, however: stacking up three blocks in a certain configuration given a particular starting state.
Let's have a look at the alternative definition.
Step27: On(x, y)
Step28: Before taking any actions, we will see if simple_bw has reached its goal.
Step29: As we can see, it hasn't completed the goal.
We now define a sequence of actions that can stack three blocks in the required order.
The actions are then carried out on the simple_bw PlanningProblem.
The actions available to us are the following
Step30: As the three_block_tower has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal.
Step31: It has now successfully achieved its goal, i.e., to build a stack of three blocks in the specified order.
Shopping Problem
This problem requires us to acquire a carton of milk, a banana and a drill.
Initially, we start from home and it is known to us that milk and bananas are available in the supermarket and the hardware store sells drills.
Let's take a look at the definition of the shopping_problem in the module.
Step32: At(x)
Step33: Let's first check whether the goal state Have(Milk), Have(Banana), Have(Drill) is reached or not.
Step34: Let's look at the possible actions
Buy(x, store)
Step35: We have taken the steps required to acquire all the stuff we need.
Let's see if we have reached our goal.
Step36: It has now successfully achieved the goal.
Socks and Shoes
This is a simple problem of putting on a pair of socks and shoes.
The problem is defined in the module as given below.
Step37: LeftSockOn
Step38: Let's first check whether the goal state is reached or not.
Step39: As the goal state isn't reached, we will define a sequence of actions that might help us achieve the goal.
These actions will then be carried out on the socksShoes PlanningProblem to check if the goal state is reached.
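A property worth noting about this problem: the four actions form a partial order — each sock only has to precede its matching shoe. A small sketch counting the valid linearizations of that partial order:

```python
from itertools import permutations

actions = ['LeftSock', 'RightSock', 'LeftShoe', 'RightShoe']
# ordering constraints: each sock must go on before the matching shoe
constraints = [('LeftSock', 'LeftShoe'), ('RightSock', 'RightShoe')]

def valid(plan):
    return all(plan.index(a) < plan.index(b) for a, b in constraints)

valid_plans = [p for p in permutations(actions) if valid(p)]
print(len(valid_plans))  # 6 valid linearizations of the partial order
```

Any of the 6 orderings is a correct plan, which is exactly the flexibility a partial-order planner exploits.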
Step40: We have reached our goal.
Cake Problem
This problem requires us to reach the state of having a cake and having eaten a cake simultaneously, given a single cake.
Let's first take a look at the definition of the have_cake_and_eat_cake_too problem in the module.
Step41: Since this problem doesn't involve variables, states can be considered similar to symbols in propositional logic.
Have(Cake)
Step42: First let us check whether the goal state 'Have(Cake)' and 'Eaten(Cake)' are reached or not.
Step43: Let us look at the possible actions.
Bake(x)
Step44: Now we have made actions to bake the cake and eat the cake. Let us check if we have reached the goal.
Step45: It has now successfully achieved its goal, i.e., to have and eat the cake.
One might wonder if the order of the actions matters for this problem.
Let's see for ourselves.
Step46: It raises an exception.
Indeed, according to the problem, we cannot bake a cake if we already have one.
In planning terms, '~Have(Cake)' is a precondition to the action 'Bake(Cake)'.
Hence, this solution is invalid.
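The failing precondition can be sketched in isolation (my own minimal encoding, with ~ marking a negative precondition):

```python
def applicable(precond, state):
    # '~Lit' holds only when Lit is absent from the state
    for p in precond:
        if p.startswith('~'):
            if p[1:] in state:
                return False
        elif p not in state:
            return False
    return True

print(applicable(['~Have(Cake)'], set()))             # True: we may bake
print(applicable(['~Have(Cake)'], {'Have(Cake)'}))    # False: cannot bake again
```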
PLANNING IN THE REAL WORLD
PROBLEM
The Problem class is a wrapper for PlanningProblem with some additional functionality and data-structures to handle real-world planning problems that involve time and resource constraints.
The Problem class includes everything that the PlanningProblem class includes.
Additionally, it also includes the following attributes essential to define a real-world planning problem
Step47: HLA
To be able to model a real-world planning problem properly, it is essential to be able to represent a high-level action (HLA) that can be hierarchically reduced to primitive actions.
Step48: In addition to preconditions and effects, an object of the HLA class also stores
Step49: The states of this problem are
Step50: Before taking any actions, we will check if jobShopProblem has reached its goal.
Step51: We now define a possible solution that can help us reach the goal.
The actions are then carried out on the jobShopProblem object.
The following actions are available to us
Step52: This is a valid solution and one of many correct ways to solve this problem.
Double tennis problem
This problem is a simple case of a multiactor planning problem, where two agents act at once and can simultaneously change the current state of the problem.
A correct plan is one that, if executed by the actors, achieves the goal.
In the true multiagent setting, of course, the agents may not agree to execute any particular plan, but at least they will know what plans would work if they did agree to execute them.
<br>
In the double tennis problem, two actors A and B are playing together and can be in one of four locations
Step53: The states of this problem are
Step54: Before taking any actions, we will check if doubleTennisProblem has reached the goal.
Step55: As we can see, the goal hasn't been reached.
We now define a possible solution that can help us reach the goal of having the ball returned.
The actions will then be carried out on the doubleTennisProblem object.
The actions available to us are the following | Python Code:
from planning import *
from notebook import psource
Explanation: Planning
Chapters 10-11
This notebook serves as supporting material for topics covered in Chapter 10 - Classical Planning and Chapter 11 - Planning and Acting in the Real World from the book Artificial Intelligence: A Modern Approach.
This notebook uses implementations from the planning.py module.
See the intro notebook for instructions.
We'll start by looking at PlanningProblem and Action data types for defining problems and actions.
Then, we will see how to use them by trying to plan a trip from Sibiu to Bucharest across the familiar map of Romania, from search.ipynb
followed by some common planning problems and methods of solving them.
Let's start by importing everything from the planning module.
End of explanation
psource(PlanningProblem)
Explanation: CONTENTS
Classical Planning
- PlanningProblem
- Action
- Planning Problems
* Air cargo problem
* Spare tire problem
* Three block tower problem
* Shopping Problem
* Socks and shoes problem
* Cake problem
- Solving Planning Problems
* GraphPlan
* Linearize
* PartialOrderPlanner
<br>
Planning in the real world
- Problem
- HLA
- Planning Problems
* Job shop problem
* Double tennis problem
- Solving Planning Problems
* Hierarchical Search
* Angelic Search
PlanningProblem
PDDL stands for Planning Domain Definition Language.
The PlanningProblem class is used to represent planning problems in this module. The following attributes are essential to be able to define a problem:
* an initial state
* a set of goals
* a set of viable actions that can be executed in the search space of the problem
View the source to see how the Python code tries to realise these.
End of explanation
psource(Action)
Explanation: The init attribute is an expression that forms the initial knowledge base for the problem.
<br>
The goals attribute is an expression that indicates the goals to be reached by the problem.
<br>
Lastly, actions contains a list of Action objects that may be executed in the search space of the problem.
<br>
The goal_test method checks if the goal has been reached.
<br>
The act method acts out the given action and updates the current state.
<br>
ACTION
To be able to model a planning problem properly, it is essential to be able to represent an Action. Each action we model requires at least three things:
* preconditions that the action must meet
* the effects of executing the action
* some expression that represents the action
The module models actions using the Action class
End of explanation
from utils import *
# this imports the required expr so we can create our knowledge base
knowledge_base = [
expr("Connected(Bucharest,Pitesti)"),
expr("Connected(Pitesti,Rimnicu)"),
expr("Connected(Rimnicu,Sibiu)"),
expr("Connected(Sibiu,Fagaras)"),
expr("Connected(Fagaras,Bucharest)"),
expr("Connected(Pitesti,Craiova)"),
expr("Connected(Craiova,Rimnicu)")
]
Explanation: This class represents an action given the expression, the preconditions and its effects.
A list precond stores the preconditions of the action and a list effect stores its effects.
Negative preconditions and effects are input using a ~ symbol before the clause, which are internally prefixed with a Not to make it easier to work with.
For example, the negation of At(obj, loc) will be input as ~At(obj, loc) and internally represented as NotAt(obj, loc).
This equivalently creates a new clause for each negative literal, removing the hassle of maintaining two separate knowledge bases.
This greatly simplifies algorithms like GraphPlan as we will see later.
The convert method takes an input string, parses it, removes conjunctions if any and returns a list of Expr objects.
The check_precond method checks if the preconditions for that action are valid, given a kb.
The act method carries out the action on the given knowledge base.
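To make the precondition/effect mechanics concrete, here is a minimal, self-contained sketch of a STRIPS-style action applied to a state held as a set of ground-fact strings. This is an illustration only, not the module's Action class — the class name and the way facts are stored here are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StripsAction:
    # illustrative stand-in for the module's Action class
    name: str
    precond: frozenset   # facts that must hold before acting
    add: frozenset       # facts the action makes true
    delete: frozenset    # facts the action makes false

    def applicable(self, state):
        return self.precond <= state

    def apply(self, state):
        if not self.applicable(state):
            raise ValueError(f"preconditions of {self.name} not met")
        return (state - self.delete) | self.add

fly = StripsAction("Fly(Sibiu, Bucharest)",
                   precond=frozenset({"At(Sibiu)"}),
                   add=frozenset({"At(Bucharest)"}),
                   delete=frozenset({"At(Sibiu)"}))
print(fly.apply(frozenset({"At(Sibiu)"})))  # frozenset({'At(Bucharest)'})
```

The module's Action class works over a logic knowledge base rather than a flat set of facts, but the add/delete bookkeeping is the same idea.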
Now let's try to define a planning problem using these tools. Since we already know about the map of Romania, let's see if we can plan a trip across a simplified map of Romania.
Here is our simplified map definition:
End of explanation
knowledge_base.extend([
expr("Connected(x,y) ==> Connected(y,x)"),
expr("Connected(x,y) & Connected(y,z) ==> Connected(x,z)"),
expr("At(Sibiu)")
])
Explanation: Let us add some logic propositions to complete our knowledge about travelling around the map. These are the typical symmetry and transitivity properties of connections on a map. We can now be sure that our knowledge_base understands what it truly means for two locations to be connected in the sense usually meant by humans when we use the term.
Let's also add our starting location - Sibiu to the map.
End of explanation
knowledge_base
Explanation: We now have a complete knowledge base, which can be seen like this:
End of explanation
#Sibiu to Bucharest
precond = 'At(Sibiu)'
effect = 'At(Bucharest) & ~At(Sibiu)'
fly_s_b = Action('Fly(Sibiu, Bucharest)', precond, effect)
#Bucharest to Sibiu
precond = 'At(Bucharest)'
effect = 'At(Sibiu) & ~At(Bucharest)'
fly_b_s = Action('Fly(Bucharest, Sibiu)', precond, effect)
#Sibiu to Craiova
precond = 'At(Sibiu)'
effect = 'At(Craiova) & ~At(Sibiu)'
fly_s_c = Action('Fly(Sibiu, Craiova)', precond, effect)
#Craiova to Sibiu
precond = 'At(Craiova)'
effect = 'At(Sibiu) & ~At(Craiova)'
fly_c_s = Action('Fly(Craiova, Sibiu)', precond, effect)
#Bucharest to Craiova
precond = 'At(Bucharest)'
effect = 'At(Craiova) & ~At(Bucharest)'
fly_b_c = Action('Fly(Bucharest, Craiova)', precond, effect)
#Craiova to Bucharest
precond = 'At(Craiova)'
effect = 'At(Bucharest) & ~At(Craiova)'
fly_c_b = Action('Fly(Craiova, Bucharest)', precond, effect)
Explanation: We now define possible actions to our problem. We know that we can drive between any connected places. But, as is evident from this list of Romanian airports, we can also fly directly between Sibiu, Bucharest, and Craiova.
We can define these flight actions like this:
End of explanation
#Drive
precond = 'At(x)'
effect = 'At(y) & ~At(x)'
drive = Action('Drive(x, y)', precond, effect)
Explanation: And the drive actions like this.
End of explanation
goals = 'At(Bucharest)'
Explanation: Our goal is defined as
End of explanation
def goal_test(kb):
return kb.ask(expr('At(Bucharest)'))
Explanation: Finally, we can define a function that will tell us when we have reached our destination, Bucharest.
End of explanation
prob = PlanningProblem(knowledge_base, goals, [fly_s_b, fly_b_s, fly_s_c, fly_c_s, fly_b_c, fly_c_b, drive])
Explanation: Thus, with all the components in place, we can define the planning problem.
End of explanation
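Before handing prob to a planner, it can help to see what a solution should look like. The sketch below is a standalone illustration — it replaces the logic knowledge base with a plain adjacency map over the same connections (an assumption made for the example) and finds a driving route with breadth-first search:

```python
from collections import deque

# The same Romanian connections as in knowledge_base, as plain adjacency.
edges = [("Bucharest", "Pitesti"), ("Pitesti", "Rimnicu"), ("Rimnicu", "Sibiu"),
         ("Sibiu", "Fagaras"), ("Fagaras", "Bucharest"), ("Pitesti", "Craiova"),
         ("Craiova", "Rimnicu")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)   # Connected is symmetric

def plan_drive(start, goal):
    # breadth-first search; parent doubles as the visited set
    frontier, parent = deque([start]), {start: None}
    while frontier:
        loc = frontier.popleft()
        if loc == goal:
            path = []
            while loc is not None:
                path.append(loc)
                loc = parent[loc]
            return list(reversed(path))
        for nxt in adj[loc] - parent.keys():
            parent[nxt] = loc
            frontier.append(nxt)

print(plan_drive("Sibiu", "Bucharest"))  # ['Sibiu', 'Fagaras', 'Bucharest']
```

Each hop in the returned path corresponds to one application of the drive action defined above.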
psource(air_cargo)
Explanation: PLANNING PROBLEMS
Air Cargo Problem
In the Air Cargo problem, we start with cargo at two airports, SFO and JFK. Our goal is to send each cargo to the other airport. We have two airplanes to help us accomplish the task.
The problem can be defined with three actions: Load, Unload and Fly.
Let us look how the air_cargo problem has been defined in the module.
End of explanation
airCargo = air_cargo()
Explanation: At(c, a): The cargo 'c' is at airport 'a'.
~At(c, a): The cargo 'c' is not at airport 'a'.
In(c, p): Cargo 'c' is in plane 'p'.
~In(c, p): Cargo 'c' is not in plane 'p'.
Cargo(c): Declare 'c' as cargo.
Plane(p): Declare 'p' as plane.
Airport(a): Declare 'a' as airport.
In the initial_state, we have cargo C1, plane P1 at airport SFO and cargo C2, plane P2 at airport JFK.
Our goal state is to have cargo C1 at airport JFK and cargo C2 at airport SFO. We will discuss how to achieve this. Let us now define an object of the air_cargo problem:
End of explanation
print(airCargo.goal_test())
Explanation: Before taking any actions, we will check if airCargo has reached its goal:
End of explanation
solution = [expr("Load(C1 , P1, SFO)"),
expr("Fly(P1, SFO, JFK)"),
expr("Unload(C1, P1, JFK)"),
expr("Load(C2, P2, JFK)"),
expr("Fly(P2, JFK, SFO)"),
expr("Unload(C2, P2, SFO)")]
for action in solution:
airCargo.act(action)
Explanation: It returns False because the goal state is not yet reached. Now, we define the sequence of actions that it should take in order to achieve the goal.
The actions are then carried out on the airCargo PlanningProblem.
The actions available to us are the following: Load, Unload, Fly
Load(c, p, a): Load cargo 'c' into plane 'p' from airport 'a'.
Fly(p, f, t): Fly the plane 'p' from airport 'f' to airport 't'.
Unload(c, p, a): Unload cargo 'c' from plane 'p' to airport 'a'.
This problem can have multiple valid solutions.
One such solution is shown below.
End of explanation
print(airCargo.goal_test())
Explanation: As the airCargo has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal:
End of explanation
psource(spare_tire)
Explanation: It has now achieved its goal.
The Spare Tire Problem
Let's consider the problem of changing a flat tire of a car.
The goal is to mount a spare tire onto the car's axle, given that we have a flat tire on the axle and a spare tire in the trunk.
End of explanation
spareTire = spare_tire()
Explanation: At(obj, loc): object 'obj' is at location 'loc'.
~At(obj, loc): object 'obj' is not at location 'loc'.
Tire(t): Declare a tire of type 't'.
Let us now define an object of spare_tire problem:
End of explanation
print(spareTire.goal_test())
Explanation: Before taking any actions, we will check if spare_tire has reached its goal:
End of explanation
solution = [expr("Remove(Flat, Axle)"),
expr("Remove(Spare, Trunk)"),
expr("PutOn(Spare, Axle)")]
for action in solution:
spareTire.act(action)
print(spareTire.goal_test())
Explanation: As we can see, it hasn't completed the goal.
We now define a possible solution that can help us reach the goal of having a spare tire mounted onto the car's axle.
The actions are then carried out on the spareTire PlanningProblem.
The actions available to us are the following: Remove, PutOn
Remove(obj, loc): Remove the tire 'obj' from the location 'loc'.
PutOn(t, Axle): Attach the tire 't' on the Axle.
LeaveOvernight(): We live in a particularly bad neighborhood and all tires, flat or not, are stolen if we leave them overnight.
End of explanation
spareTire = spare_tire()
solution = [expr('Remove(Spare, Trunk)'),
expr('Remove(Flat, Axle)'),
expr('PutOn(Spare, Axle)')]
for action in solution:
spareTire.act(action)
print(spareTire.goal_test())
Explanation: This is a valid solution.
<br>
Another possible solution is
End of explanation
psource(three_block_tower)
Explanation: Notice that both solutions work, which means that the problem can be solved irrespective of the order in which the Remove actions take place, as long as both Remove actions take place before the PutOn action.
We have successfully mounted a spare tire onto the axle.
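This order-independence can be checked exhaustively. The sketch below is a standalone illustration (the positive/negative precondition sets are hand-written to follow the problem description, not taken from the module) that tries all 3! orderings of the three steps:

```python
from itertools import permutations

# each step: (positive preconds, negative preconds, add facts, delete facts)
steps = {
    "Remove(Spare, Trunk)": ({"At(Spare, Trunk)"}, set(),
                             {"At(Spare, Ground)"}, {"At(Spare, Trunk)"}),
    "Remove(Flat, Axle)":   ({"At(Flat, Axle)"}, set(),
                             {"At(Flat, Ground)"}, {"At(Flat, Axle)"}),
    "PutOn(Spare, Axle)":   ({"At(Spare, Ground)"}, {"At(Flat, Axle)"},
                             {"At(Spare, Axle)"}, {"At(Spare, Ground)"}),
}
init = {"At(Flat, Axle)", "At(Spare, Trunk)"}
goal = {"At(Spare, Axle)"}

def valid_plan(order):
    state = set(init)
    for name in order:
        pos, neg, add, delete = steps[name]
        if not (pos <= state and not (neg & state)):
            return False
        state = (state - delete) | add
    return goal <= state

valid = [order for order in permutations(steps) if valid_plan(order)]
print(len(valid))  # 2: the Remove actions may swap, but PutOn must come last
```

Exactly the two orderings we executed above survive the check, confirming that the Remove actions are unordered with respect to each other.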
Three Block Tower Problem
This problem's domain consists of a set of cube-shaped blocks sitting on a table.
The blocks can be stacked, but only one block can fit directly on top of another.
A robot arm can pick up a block and move it to another position, either on the table or on top of another block.
The arm can pick up only one block at a time, so it cannot pick up a block that has another one on it.
The goal will always be to build one or more stacks of blocks.
In our case, we consider only three blocks.
The particular configuration we will use is called the Sussman anomaly after Prof. Gerry Sussman.
Let's take a look at the definition of three_block_tower() in the module.
End of explanation
threeBlockTower = three_block_tower()
Explanation: On(b, x): The block 'b' is on 'x'. 'x' can be a table or a block.
~On(b, x): The block 'b' is not on 'x'. 'x' can be a table or a block.
Block(b): Declares 'b' as a block.
Clear(x): To indicate that there is nothing on 'x' and it is free to be moved around.
~Clear(x): To indicate that there is something on 'x' and it cannot be moved.
Let us now define an object of three_block_tower problem:
End of explanation
print(threeBlockTower.goal_test())
Explanation: Before taking any actions, we will check if threeBlockTower has reached its goal:
End of explanation
solution = [expr("MoveToTable(C, A)"),
expr("Move(B, Table, C)"),
expr("Move(A, Table, B)")]
for action in solution:
threeBlockTower.act(action)
Explanation: As we can see, it hasn't completed the goal.
We now define a sequence of actions that can stack three blocks in the required order.
The actions are then carried out on the threeBlockTower PlanningProblem.
The actions available to us are the following: MoveToTable, Move
MoveToTable(b, x): Move box 'b' stacked on 'x' to the table, given that box 'b' is clear.
Move(b, x, y): Move box 'b' stacked on 'x' to the top of 'y', given that both 'b' and 'y' are clear.
End of explanation
print(threeBlockTower.goal_test())
Explanation: As the threeBlockTower has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal.
End of explanation
psource(simple_blocks_world)
Explanation: It has now successfully achieved its goal i.e, to build a stack of three blocks in the specified order.
The three_block_tower problem can also be defined in simpler terms using just two actions ToTable(x, y) and FromTable(x, y).
The underlying problem remains the same however, stacking up three blocks in a certain configuration given a particular starting state.
Let's have a look at the alternative definition.
End of explanation
simpleBlocksWorld = simple_blocks_world()
Explanation: On(x, y): The block 'x' is on 'y'. Both 'x' and 'y' have to be blocks.
~On(x, y): The block 'x' is not on 'y'. Both 'x' and 'y' have to be blocks.
OnTable(x): The block 'x' is on the table.
~OnTable(x): The block 'x' is not on the table.
Clear(x): To indicate that there is nothing on 'x' and it is free to be moved around.
~Clear(x): To indicate that there is something on 'x' and it cannot be moved.
Let's now define a simple_blocks_world problem.
End of explanation
simpleBlocksWorld.goal_test()
Explanation: Before taking any actions, we will see if simpleBlocksWorld has reached its goal.
End of explanation
solution = [expr('ToTable(A, B)'),
expr('FromTable(B, A)'),
expr('FromTable(C, B)')]
for action in solution:
simpleBlocksWorld.act(action)
Explanation: As we can see, it hasn't completed the goal.
We now define a sequence of actions that can stack three blocks in the required order.
The actions are then carried out on the simpleBlocksWorld PlanningProblem.
The actions available to us are the following: ToTable, FromTable
ToTable(x, y): Move box 'x' stacked on 'y' to the table, given that box 'y' is clear.
FromTable(x, y): Move box 'x' from wherever it is, to the top of 'y', given that both 'x' and 'y' are clear.
End of explanation
print(simpleBlocksWorld.goal_test())
Explanation: As simpleBlocksWorld has taken all the steps it needed in order to achieve the goal, we can now check if it has achieved its goal.
End of explanation
psource(shopping_problem)
Explanation: It has now successfully achieved its goal, i.e., to build a stack of three blocks in the specified order.
Shopping Problem
This problem requires us to acquire a carton of milk, a banana and a drill.
Initially, we start from home and it is known to us that milk and bananas are available in the supermarket and the hardware store sells drills.
Let's take a look at the definition of the shopping_problem in the module.
End of explanation
shoppingProblem = shopping_problem()
Explanation: At(x): Indicates that we are currently at 'x' where 'x' can be Home, SM (supermarket) or HW (Hardware store).
~At(x): Indicates that we are currently not at 'x'.
Sells(s, x): Indicates that item 'x' can be bought from store 's'.
Have(x): Indicates that we possess the item 'x'.
End of explanation
print(shoppingProblem.goal_test())
Explanation: Let's first check whether the goal state Have(Milk), Have(Banana), Have(Drill) is reached or not.
End of explanation
solution = [expr('Go(Home, SM)'),
expr('Buy(Milk, SM)'),
expr('Buy(Banana, SM)'),
expr('Go(SM, HW)'),
expr('Buy(Drill, HW)')]
for action in solution:
shoppingProblem.act(action)
Explanation: Let's look at the possible actions
Buy(x, store): Buy an item 'x' from a 'store' given that the 'store' sells 'x'.
Go(x, y): Go to destination 'y' starting from source 'x'.
We now define a valid solution that will help us reach the goal.
The sequence of actions will then be carried out onto the shoppingProblem PlanningProblem.
End of explanation
shoppingProblem.goal_test()
Explanation: We have taken the steps required to acquire all the stuff we need.
Let's see if we have reached our goal.
End of explanation
psource(socks_and_shoes)
Explanation: It has now successfully achieved the goal.
Socks and Shoes
This is a simple problem of putting on a pair of socks and shoes.
The problem is defined in the module as given below.
End of explanation
socksShoes = socks_and_shoes()
Explanation: LeftSockOn: Indicates that we have already put on the left sock.
RightSockOn: Indicates that we have already put on the right sock.
LeftShoeOn: Indicates that we have already put on the left shoe.
RightShoeOn: Indicates that we have already put on the right shoe.
End of explanation
socksShoes.goal_test()
Explanation: Let's first check whether the goal state is reached or not.
End of explanation
solution = [expr('RightSock'),
expr('RightShoe'),
expr('LeftSock'),
expr('LeftShoe')]
for action in solution:
socksShoes.act(action)
socksShoes.goal_test()
Explanation: As the goal state isn't reached, we will define a sequence of actions that might help us achieve the goal.
These actions will then be acted upon the socksShoes PlanningProblem to check if the goal state is reached.
End of explanation
psource(have_cake_and_eat_cake_too)
Explanation: We have reached our goal.
Cake Problem
This problem requires us to reach the state of having a cake and having eaten a cake simultaneously, given a single cake.
Let's first take a look at the definition of the have_cake_and_eat_cake_too problem in the module.
End of explanation
cakeProblem = have_cake_and_eat_cake_too()
Explanation: Since this problem doesn't involve variables, states can be considered similar to symbols in propositional logic.
Have(Cake): Declares that we have a 'Cake'.
~Have(Cake): Declares that we don't have a 'Cake'.
End of explanation
print(cakeProblem.goal_test())
Explanation: First let us check whether the goal state 'Have(Cake)' and 'Eaten(Cake)' are reached or not.
End of explanation
solution = [expr("Eat(Cake)"),
expr("Bake(Cake)")]
for action in solution:
cakeProblem.act(action)
Explanation: Let us look at the possible actions.
Bake(x): To bake 'x'.
Eat(x): To eat 'x'.
We now define a valid solution that can help us reach the goal.
The sequence of actions will then be acted upon the cakeProblem PlanningProblem.
End of explanation
print(cakeProblem.goal_test())
Explanation: Now we have made actions to bake the cake and eat the cake. Let us check if we have reached the goal.
End of explanation
cakeProblem = have_cake_and_eat_cake_too()
solution = [expr('Bake(Cake)'),
expr('Eat(Cake)')]
for action in solution:
cakeProblem.act(action)
Explanation: It has now successfully achieved its goal, i.e., to have and eat the cake.
One might wonder if the order of the actions matters for this problem.
Let's see for ourselves.
End of explanation
psource(Problem)
Explanation: It raises an exception.
Indeed, according to the problem, we cannot bake a cake if we already have one.
In planning terms, '~Have(Cake)' is a precondition to the action 'Bake(Cake)'.
Hence, this solution is invalid.
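The failure can be reproduced in a few lines of standalone Python. This sketch keeps the state as a set of fact strings and mirrors the precondition checks from the problem description, not the module's internals:

```python
state = {"Have(Cake)"}          # the initial state: we have a cake

def bake(s):
    if "Have(Cake)" in s:       # Bake's precondition is ~Have(Cake)
        raise Exception("cannot Bake(Cake): Have(Cake) already holds")
    return s | {"Have(Cake)"}

def eat(s):
    if "Have(Cake)" not in s:   # Eat's precondition is Have(Cake)
        raise Exception("cannot Eat(Cake): no cake to eat")
    return (s - {"Have(Cake)"}) | {"Eaten(Cake)"}

try:
    bake(state)                 # baking first fails, just like the module's exception
except Exception as e:
    print(e)

state = bake(eat(state))        # Eat first, then Bake: both preconditions hold
print(sorted(state))            # ['Eaten(Cake)', 'Have(Cake)']
```

Swapping the order only works because eating deletes Have(Cake), which re-enables the negative precondition of Bake.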
PLANNING IN THE REAL WORLD
PROBLEM
The Problem class is a wrapper for PlanningProblem with some additional functionality and data-structures to handle real-world planning problems that involve time and resource constraints.
The Problem class includes everything that the PlanningProblem class includes.
Additionally, it also includes the following attributes essential to define a real-world planning problem:
- a list of jobs to be done
- a dictionary of resources
It also overloads the act method to call the do_action method of the HLA class,
and also includes a new method refinements that finds refinements or primitive actions for high level actions.
<br>
hierarchical_search and angelic_search are also built into the Problem class to solve such planning problems.
End of explanation
psource(HLA)
Explanation: HLA
To be able to model a real-world planning problem properly, it is essential to be able to represent a high-level action (HLA) that can be hierarchically reduced to primitive actions.
End of explanation
psource(job_shop_problem)
Explanation: In addition to preconditions and effects, an object of the HLA class also stores:
- the duration of the HLA
- the quantity of consumption of consumable resources
- the quantity of reusable resources used
- a bool completed denoting if the HLA has been completed
The class also has some useful helper methods:
- do_action: checks if required consumable and reusable resources are available and if so, executes the action.
- has_consumable_resource: checks if there exists sufficient quantity of the required consumable resource.
- has_usable_resource: checks if reusable resources are available and not already engaged.
- inorder: ensures that all the jobs that had to be executed before the current one have been successfully executed.
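A rough sketch of that resource bookkeeping is shown below. This is an assumption-laden illustration — the resource names and quantities are made up, and the real do_action also checks preconditions and job ordering — but it captures the distinction between reusable resources (only checked) and consumable ones (decremented):

```python
def do_action(available, uses=None, consumes=None):
    # reusable resources must merely be present in sufficient quantity
    for r, q in (uses or {}).items():
        if available.get(r, 0) < q:
            raise ValueError(f"reusable resource {r!r} unavailable")
    # consumable resources are checked, then spent
    for r, q in (consumes or {}).items():
        if available.get(r, 0) < q:
            raise ValueError(f"not enough {r!r} left")
        available[r] -= q

stock = {"EngineHoists": 1, "WheelStations": 1, "Inspectors": 2, "LugNuts": 500}
do_action(stock, uses={"WheelStations": 1}, consumes={"LugNuts": 20})   # like AddWheels1
do_action(stock, uses={"WheelStations": 1}, consumes={"LugNuts": 20})   # like AddWheels2
print(stock["LugNuts"])  # 460 -- lug nuts are consumed; the wheel station is not
```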
PLANNING PROBLEMS
Job-shop Problem
This is a simple problem involving the assembly of two cars simultaneously.
The problem consists of two jobs, each of the form [AddEngine, AddWheels, Inspect] to be performed on two cars with different requirements and availability of resources.
<br>
Let's look at how the job_shop_problem has been defined on the module.
End of explanation
jobShopProblem = job_shop_problem()
Explanation: The states of this problem are:
<br>
<br>
Has(x, y): Car 'x' has 'y' where 'y' can be an Engine or a Wheel.
~Has(x, y): Car 'x' does not have 'y' where 'y' can be an Engine or a Wheel.
Inspected(c): Car 'c' has been inspected.
~Inspected(c): Car 'c' has not been inspected.
In the initial state, C1 and C2 are cars and neither have an engine or wheels and haven't been inspected.
E1 and E2 are engines.
W1 and W2 are wheels.
<br>
Our goal is to have engines and wheels on both cars and to get them inspected. We will discuss how to achieve this.
<br>
Let's define an object of the job_shop_problem.
End of explanation
print(jobShopProblem.goal_test())
Explanation: Before taking any actions, we will check if jobShopProblem has reached its goal.
End of explanation
solution = [jobShopProblem.jobs[1][0],
jobShopProblem.jobs[1][1],
jobShopProblem.jobs[1][2],
jobShopProblem.jobs[0][0],
jobShopProblem.jobs[0][1],
jobShopProblem.jobs[0][2]]
for action in solution:
jobShopProblem.act(action)
print(jobShopProblem.goal_test())
Explanation: We now define a possible solution that can help us reach the goal.
The actions are then carried out on the jobShopProblem object.
The following actions are available to us:
AddEngine1: Adds an engine to the car C1. Takes 30 minutes to complete and uses an engine hoist.
AddEngine2: Adds an engine to the car C2. Takes 60 minutes to complete and uses an engine hoist.
AddWheels1: Adds wheels to car C1. Takes 30 minutes to complete. Uses a wheel station and consumes 20 lug nuts.
AddWheels2: Adds wheels to car C2. Takes 15 minutes to complete. Uses a wheel station and consumes 20 lug nuts as well.
Inspect1: Gets car C1 inspected. Requires 10 minutes of inspection by one inspector.
Inspect2: Gets car C2 inspected. Requires 10 minutes of inspection by one inspector.
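Under the simplifying assumption that the six actions of this linear plan run one after another (no overlap between the two cars), the durations quoted above give the total elapsed time directly:

```python
# durations in minutes, copied from the action descriptions above
durations = {"AddEngine1": 30, "AddWheels1": 30, "Inspect1": 10,
             "AddEngine2": 60, "AddWheels2": 15, "Inspect2": 10}
plan = ["AddEngine2", "AddWheels2", "Inspect2",
        "AddEngine1", "AddWheels1", "Inspect1"]
total = sum(durations[a] for a in plan)
print(total)  # 155 minutes end to end
```

A scheduler that let the two cars proceed in parallel (subject to the single engine hoist and wheel station) could do better; the sequential total is just an upper bound.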
End of explanation
psource(double_tennis_problem)
Explanation: This is a valid solution and one of many correct ways to solve this problem.
Double tennis problem
This problem is a simple case of a multiactor planning problem, where two agents act at once and can simultaneously change the current state of the problem.
A correct plan is one that, if executed by the actors, achieves the goal.
In the true multiagent setting, of course, the agents may not agree to execute any particular plan, but at least they will know what plans would work if they did agree to execute them.
<br>
In the double tennis problem, two actors A and B are playing together and can be in one of four locations: LeftBaseLine, RightBaseLine, LeftNet and RightNet.
The ball can be returned only if a player is in the right place.
Each action must include the actor as an argument.
<br>
Let's first look at the definition of the double_tennis_problem in the module.
End of explanation
doubleTennisProblem = double_tennis_problem()
Explanation: The states of this problem are:
Approaching(Ball, loc): The Ball is approaching the location loc.
Returned(Ball): One of the actors successfully hit the approaching ball from the correct location which caused it to return to the other side.
At(actor, loc): actor is at location loc.
~At(actor, loc): actor is not at location loc.
Let's now define an object of double_tennis_problem.
End of explanation
print(doubleTennisProblem.goal_test())
Explanation: Before taking any actions, we will check if doubleTennisProblem has reached the goal.
End of explanation
solution = [expr('Go(A, RightBaseLine, LeftBaseLine)'),
expr('Hit(A, Ball, RightBaseLine)'),
expr('Go(A, LeftNet, RightBaseLine)')]
for action in solution:
doubleTennisProblem.act(action)
doubleTennisProblem.goal_test()
Explanation: As we can see, the goal hasn't been reached.
We now define a possible solution that can help us reach the goal of having the ball returned.
The actions will then be carried out on the doubleTennisProblem object.
The actions available to us are the following:
Hit(actor, ball, loc): returns an approaching ball if actor is present at the loc that the ball is approaching.
Go(actor, to, loc): moves an actor from location loc to location to.
We notice something different in this problem though,
which is quite unlike any other problem we have seen so far.
The goal state of the problem contains a variable a.
This happens sometimes in multiagent planning problems
and it means that it doesn't matter which actor is at the LeftNet or the RightNet, as long as there is at least one actor at either LeftNet or RightNet.
End of explanation |
8,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wtp. You can initiate the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bid function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices.
The willingness to pay or sell is set randomly using uniform random distributions. The resultant list of bids is effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded.
Microeconomic Foundations
The market assumes the presence of an auctioneer which creates a book that seeks to match the bids and the asks as much as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happen as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
Step1: Example Market
In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue. | Python Code:
import random as rnd
class Supplier():
def __init__(self):
self.wta = []
# the supplier has n quantities that they can sell
# they may be willing to sell this quantity anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wta.append(p)
# return the dictionary of willingness to ask
def get_ask(self):
return self.wta
class Buyer():
def __init__(self):
self.wtp = []
# the buyer has n quantities that they can buy
# they may be willing to pay anywhere from a lower price of l
# to a higher price of u
def set_quantity(self,n,l,u):
for i in range(n):
p = rnd.uniform(l,u)
self.wtp.append(p)
# return list of willingness to pay
def get_bid(self):
return self.wtp
class Market():
count = 0
last_price = ''
b = []
s = []
def __init__(self,b,s):
# buyer list sorted in descending order
self.b = sorted(b, reverse=True)
# seller list sorted in ascending order
self.s = sorted(s, reverse=False)
# return the price at which the market clears
# assume equal numbers of sincere buyers and sellers
def get_clearing_price(self):
# buyer makes a bid, starting with the buyer which wants it most
for i in range(len(self.b)):
if (self.b[i] > self.s[i]):
self.count +=1
self.last_price = self.b[i]
return self.last_price
def get_units_cleared(self):
return self.count
Explanation: Matching Market
This simple model consists of a buyer, a supplier, and a market.
The buyer represents a group of customers whose willingness to pay for a single unit of the good is captured by a vector of prices wtp. You can initialize the buyer with a set_quantity function which randomly assigns the willingness to pay according to your specifications. You may ask for these willingness to pay quantities with a get_bid function.
The supplier is similar, but instead the supplier is willing to be paid to sell a unit of technology. The supplier for instance may have non-zero variable costs that make them unwilling to produce the good unless they receive a specified price. Similarly the supplier has a get_ask function which returns a list of desired prices.
The willingness to pay or sell are set randomly using uniform random distributions. The resultant lists of bids are effectively a demand curve. Likewise the list of asks is effectively a supply curve. A more complex determination of bids and asks is possible, for instance using time of year to vary the quantities being demanded.
Microeconomic Foundations
The market assumes the presence of an auctioneer which creates a book that seeks to match the bids and the asks as closely as possible. If the auctioneer is neutral, then it is incentive compatible for the buyer and the supplier to truthfully announce their bids and asks. The auctioneer will find a single price which clears as much of the market as possible. Clearing the market means that as many willing swaps happen as possible. You may ask the market object at what price the market clears with the get_clearing_price function. You may also ask the market how many units were exchanged with the get_units_cleared function.
Agent-Based Objects
The following section presents three objects which can be used to make an agent-based model of an efficient, two-sided market.
End of explanation
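Before running the full example market below, the clearing walk can be checked by hand on tiny lists. This is a minimal, self-contained sketch of the logic get_clearing_price implements; the bid and ask numbers are made up purely for illustration:

```python
# Minimal sketch of the market-clearing walk: bids sorted descending,
# asks sorted ascending; a trade happens while the marginal bid still
# exceeds the marginal ask. The numbers are illustrative only.
def clear(bids, asks):
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    count = 0
    last_price = None
    for b, a in zip(bids, asks):
        if b > a:
            count += 1
            last_price = b  # bid of the marginal (last matched) buyer
    return last_price, count

price, units = clear(bids=[9, 7, 4, 2], asks=[1, 3, 5, 8])
print(price, units)  # 7 2: the two highest bids (9, 7) beat the two lowest asks (1, 3)
```

Note that, like the Market class, this reports the marginal buyer's bid as the clearing price rather than a midpoint between the marginal bid and ask.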
# make a supplier and get the asks
supplier = Supplier()
supplier.set_quantity(100,0,10)
ask = supplier.get_ask()
# make a buyer and get the bids
buyer = Buyer()
buyer.set_quantity(100,0,10)
bid = buyer.get_bid()
# make a market where the buyers and suppliers can meet
# the bids and asks are a list of prices
market = Market(bid,ask)
price = market.get_clearing_price()
quantity = market.get_units_cleared()
# output the results of the market
print("Goods cleared for a price of ",price)
print("Units sold are ", quantity)
Explanation: Example Market
In the following code example we use the buyer and supplier objects to create a market. At the market a single price is announced which causes as many units of goods to be swapped as possible. The buyers and sellers stop trading when it is no longer in their own interest to continue.
End of explanation |
8,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Encoder-Decoders Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth, while high scores can expose hyper-common generations.
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores.
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_200_512_04drb/encdec_noing6_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing6_bow_200_512_04drb/encdec_noing6_bow_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
    print('\n', report_name, '\n')
    print('Encoder: \n', report['architecture']['encoder'])
    print('Decoder: \n', report['architecture']['decoder'])
Explanation: Comparing Encoder-Decoders Analysis
Model Architecture
End of explanation
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
                u'<td>{}</td>'.format('</td><td>'.join(str(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
Explanation: Perplexity on Each Dataset
End of explanation
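The perplexities shown in the table are read straight from the saved report files. As a reminder of what the number means, perplexity is the exponentiated average negative log-likelihood the model assigns to held-out tokens; here is a small sketch with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) of the
    probabilities a model assigned to the held-out tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that gives every token probability 1/4 is exactly as surprised
# as a uniform guess over 4 symbols: perplexity 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Lower perplexity means the model is less surprised by the data, which is why the bar chart compares models per dataset on this single scalar.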
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
        data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold + '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
Explanation: Generations
End of explanation
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
Explanation: BLEU Analysis
End of explanation
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth, while high scores can expose hyper-common generations.
End of explanation
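The scores above come precomputed from the report files. As a sketch of the procedure, the snippet below pairs up randomly chosen sentences and scores each pair with a simplified BLEU (clipped n-gram precisions with no brevity penalty or smoothing, so only a rough stand-in for the real metric); near-duplicate "hyper-common" outputs push the average up:

```python
import random
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate against a single reference."""
    cand = [tuple(candidate[i:i+n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i+n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    hits = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return hits / len(cand)

def simple_bleu(candidate, reference, max_n=2):
    """Geometric mean of 1..max_n clipped precisions; a rough BLEU stand-in."""
    precisions = [ngram_precision(candidate, reference, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    score = 1.0
    for p in precisions:
        score *= p
    return score ** (1.0 / max_n)

def n_pairs_bleu(sentences, n_pairs=1000, seed=0):
    """Score randomly chosen pairs of sentences against each other;
    hyper-common (near-duplicate) outputs push the average up."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        a, b = rng.sample(sentences, 2)
        total += simple_bleu(a.split(), b.split())
    return total / n_pairs

varied = ["the cat sat", "a dog ran home", "birds fly south", "rain fell all day"]
repetitive = ["the meeting is scheduled", "the meeting is scheduled",
              "the meeting is scheduled", "a different sentence here"]
print(n_pairs_bleu(varied, 200), n_pairs_bleu(repetitive, 200))
```

The repetitive list scores much higher than the varied one, which is exactly the signal the N-pairs analysis uses to flag degenerate generations.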
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores.
End of explanation |
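The alignment scores in the table are precomputed in the reports. For reference, a minimal token-level Smith-Waterman scorer looks like the sketch below; the match/mismatch/gap weights are illustrative and not necessarily those used to produce the report values:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local-alignment score between token sequences a and b.
    Returns the best local-alignment score (0 if nothing aligns)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, clamped at 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# Identical sequences score highest; unrelated ones score near zero.
s1 = "the cat sat on the mat".split()
s2 = "a cat sat on a mat".split()
print(smith_waterman(s1, s1), smith_waterman(s1, s2))  # 12 7
```

Averaging this score over sampled pairs of generations gives the kind of statistic reported above: a set of near-identical outputs aligns strongly with itself and drives the average up.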
8,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6.2 - Using a pre-trained model with Keras
In this tutorial we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.
Let's start by importing the libraries we will be using
Step1: Next, we will import the data we saved previously using the pickle library.
Step2: Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.
Step3: Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.
Step4: We also need to rewrite the sample() and generate() helper functions so that we can use them in our code
Step5: Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section).
In this case, we will test the overfitting of the model by supplying it two seeds | Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
import sys
import re
import pickle
Explanation: 6.2 - Using a pre-trained model with Keras
In this tutorial we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.
Let's start by importing the libraries we will be using:
End of explanation
pickle_file = '-basic_data.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
X = save['X']
y = save['y']
char_to_int = save['char_to_int']
int_to_char = save['int_to_char']
del save # hint to help gc free up memory
print('Training set', X.shape, y.shape)
Explanation: Next, we will import the data we saved previously using the pickle library.
End of explanation
# define the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
# model.add(Dropout(0.50))
model.add(Dense(y.shape[1], activation='softmax'))
Explanation: Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.
End of explanation
# load the parameters from the pretrained model
filename = "-basic_LSTM.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
Explanation: Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.
End of explanation
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
def generate(sentence, sample_length=50, diversity=0.35):
generated = sentence
sys.stdout.write(generated)
for i in range(sample_length):
x = np.zeros((1, X.shape[1], X.shape[2]))
for t, char in enumerate(sentence):
x[0, t, char_to_int[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = int_to_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
Explanation: We also need to rewrite the sample() and generate() helper functions so that we can use them in our code:
End of explanation
prediction_length = 500
seed_from_text = "america has shown that progress is possible. last year, income gains were larger for households at t"
seed_original = "and as people around the world began to hear the tale of the lowly colonists who overthrew an empire"
for seed in [seed_from_text, seed_original]:
generate(seed, prediction_length, .50)
print("-" * 20)
Explanation: Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section).
In this case, we will test the overfitting of the model by supplying it two seeds:
one which comes verbatim from the training text, and
one which comes from another earlier speech by Obama
If the model has not overfit our training data, we should expect it to produce reasonable results for both seeds. If it has overfit, it might produce pretty good results for something coming directly from the training set, but perform poorly on a new seed. This means that it has learned to replicate our training text, but cannot generalize to produce text based on other inputs. Since the original article was very short, however, the entire vocabulary of the model might be very limited, which is why as input we use a part of another speech given by Obama, instead of completely random text.
Since we have not trained the model for that long, we will also use a lower temperature to get the model to generate more accurate if less diverse results. Try running the code a few times with different temperature settings to generate different results.
End of explanation |
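The temperature passed to generate() controls how sharply the predicted distribution is peaked before sampling. Below is a standalone, pure-Python sketch of the same reweighting that sample() performs with NumPy (the input probabilities are made up):

```python
import math

def reweight(preds, temperature):
    """Pure-Python version of the log/temperature/renormalize trick
    used inside sample() above."""
    logits = [math.log(p) / temperature for p in preds]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

p = [0.5, 0.3, 0.2]
print(reweight(p, 0.5))  # low temperature: sharper, favours the top character
print(reweight(p, 1.0))  # temperature 1 leaves the distribution unchanged
print(reweight(p, 2.0))  # high temperature: flatter, more surprising samples
```

This is why the lower temperature (0.50) used above produces more conservative text: probability mass concentrates on the characters the model is already most confident about.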
8,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5.2 - Using convnets with small datasets
This notebook contains the code sample found in Chapter 5, Section 2 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures
Step1: As a sanity check, let's count how many pictures we have in each training split (train/validation/test)
Step2: So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of
samples from each class
Step3: Let's take a look at how the dimensions of the feature maps change with every successive layer
Step4: For our compilation step, we'll go with the RMSprop optimizer as usual. Since we ended our network with a single sigmoid unit, we will
use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to
use in various situations).
Step5: Data preprocessing
As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our
network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly
Step6: Let's take a look at the output of one of these generators
Step7: Let's fit our model to the data using the generator. We do it using the fit_generator method, the equivalent of fit for data generators
like ours. It expects as first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does.
Because the data is being generated endlessly, the generator needs to know example how many samples to draw from the generator before
declaring an epoch over. This is the role of the steps_per_epoch argument
Step8: It is good practice to always save your models after training
Step9: Let's plot the loss and accuracy of the model over the training and validation data during training
Step10: These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our
validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs then stalls, while the training loss
keeps decreasing linearly until it reaches nearly 0.
Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a
number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to
introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models
Step11: These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote
Step12: If we train a new network using this data augmentation configuration, our network will never see twice the same input. However, the inputs
that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information,
we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight
overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier
Step13: Let's train our network using data augmentation and dropout
Step14: Let's save our model -- we will be using it in the section on convnet visualization.
Step15: Let's plot our results again | Python Code:
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/Users/fchollet/Downloads/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/Users/fchollet/Downloads/cats_and_dogs_small'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)
# Directory with our validation cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)
# Directory with our validation dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
Explanation: 5.2 - Using convnets with small datasets
This notebook contains the code sample found in Chapter 5, Section 2 of Deep Learning with Python. Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.
Training a convnet from scratch on a small dataset
Having to train an image classification model using only very little data is a common situation, which you likely encounter yourself in
practice if you ever do computer vision in a professional context.
Having "few" samples can mean anywhere from a few hundreds to a few tens of thousands of images. As a practical example, we will focus on
classifying images as "dogs" or "cats", in a dataset containing 4000 pictures of cats and dogs (2000 cats, 2000 dogs). We will use 2000
pictures for training, 1000 for validation, and finally 1000 for testing.
In this section, we will review one basic strategy to tackle this problem: training a new model from scratch on what little data we have. We
will start by naively training a small convnet on our 2000 training samples, without any regularization, to set a baseline for what can be
achieved. This will get us to a classification accuracy of 71%. At that point, our main issue will be overfitting. Then we will introduce
data augmentation, a powerful technique for mitigating overfitting in computer vision. By leveraging data augmentation, we will improve
our network to reach an accuracy of 82%.
In the next section, we will review two more essential techniques for applying deep learning to small datasets: doing feature extraction
with a pre-trained network (this will get us to an accuracy of 90% to 93%), and fine-tuning a pre-trained network (this will get us to
our final accuracy of 95%). Together, these three strategies -- training a small model from scratch, doing feature extracting using a
pre-trained model, and fine-tuning a pre-trained model -- will constitute your future toolbox for tackling the problem of doing computer
vision with small datasets.
The relevance of deep learning for small-data problems
You will sometimes hear that deep learning only works when lots of data is available. This is in part a valid point: one fundamental
characteristic of deep learning is that it is able to find interesting features in the training data on its own, without any need for manual
feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where
the input samples are very high-dimensional, like images.
However, what constitutes "lots" of samples is relative -- relative to the size and depth of the network you are trying to train, for
starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundreds can
potentially suffice if the model is small and well-regularized and if the task is simple.
Because convnets learn local, translation-invariant features, they are very
data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results
despite a relative lack of data, without the need for any custom feature engineering. You will see this in action in this section.
But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model
trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes. Specifically, in the case of
computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used
to bootstrap powerful vision models out of very little data. That's what we will do in the next section.
For now, let's get started by getting our hands on the data.
Downloading the data
The cats vs. dogs dataset that we will use isn't packaged with Keras. It was made available by Kaggle.com as part of a computer vision
competition in late 2013, back when convnets weren't quite mainstream. You can download the original dataset at:
https://www.kaggle.com/c/dogs-vs-cats/data (you will need to create a Kaggle account if you don't already have one -- don't worry, the
process is painless).
The pictures are medium-resolution color JPEGs. They look like this:
Unsurprisingly, the cats vs. dogs Kaggle competition in 2013 was won by entrants who used convnets. The best entries could achieve up to
95% accuracy. In our own example, we will get fairly close to this accuracy (in the next section), even though we will be training our
models on less than 10% of the data that was available to the competitors.
This original dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543MB large (compressed). After downloading
and uncompressing it, we will create a new dataset containing three subsets: a training set with 1000 samples of each class, a validation
set with 500 samples of each class, and finally a test set with 500 samples of each class.
Here are a few lines of code to do this:
End of explanation
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
Explanation: As a sanity check, let's count how many pictures we have in each training split (train/validation/test):
End of explanation
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Explanation: So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of
samples from each class: this is a balanced binary classification problem, which means that classification accuracy will be an appropriate
measure of success.
Building our network
We've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same
general structure: our convnet will be a stack of alternated Conv2D (with relu activation) and MaxPooling2D layers.
However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one
more Conv2D + MaxPooling2D stage. This serves both to augment the capacity of the network, and to further reduce the size of the
feature maps, so that they aren't overly large when we reach the Flatten layer. Here, since we start from inputs of size 150x150 (a
somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the Flatten layer.
Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is
decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.
Since we are attacking a binary classification problem, we are ending the network with a single unit (a Dense layer of size 1) and a
sigmoid activation. This unit will encode the probability that the network is looking at one class or the other.
End of explanation
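The spatial sizes that model.summary() prints below can also be derived by hand: each 3x3 "valid" convolution trims a 1-pixel border (so size drops by 2) and each 2x2 max-pooling floors a halving. A quick sketch of that arithmetic:

```python
def feature_map_sizes(input_size, n_blocks):
    """Trace the spatial size through n_blocks of Conv2D(3x3, valid)
    followed by MaxPooling2D(2x2), as in the model above."""
    sizes = []
    size = input_size
    for _ in range(n_blocks):
        size = size - 2      # 3x3 valid convolution trims a 1-pixel border
        sizes.append(size)
        size = size // 2     # 2x2 max pooling (floor division)
        sizes.append(size)
    return sizes

print(feature_map_sizes(150, 4))  # [148, 74, 72, 36, 34, 17, 15, 7]
```

This reproduces the progression quoted in the text, from 148x148 after the first convolution down to 7x7 right before the Flatten layer.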
model.summary()
Explanation: Let's take a look at how the dimensions of the feature maps change with every successive layer:
End of explanation
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
Explanation: For our compilation step, we'll go with the RMSprop optimizer as usual. Since we ended our network with a single sigmoid unit, we will
use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to
use in various situations).
End of explanation
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
Explanation: Data preprocessing
As you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our
network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly:
Read the picture files.
Decode the JPEG content to RGB grids of pixels.
Convert these into floating point tensors.
Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).
It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image
processing helper tools, located at keras.preprocessing.image. In particular, it contains the class ImageDataGenerator which allows you to
quickly set up Python generators that can automatically turn image files on disk into batches of pre-processed tensors. This is what we
will use here.
End of explanation
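The rescale=1./255 step is simple enough to mimic by hand. A toy illustration (plain Python on a flat list of pixel values, not the Keras code path) of mapping 0-255 values into the [0, 1] interval:

```python
def rescale(pixels, factor=1.0 / 255):
    """Mimic ImageDataGenerator(rescale=1./255) on a flat list of pixel values."""
    return [p * factor for p in pixels]

print(rescale([0, 51, 102, 255]))  # values now lie in [0, 1]
```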
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
Explanation: Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape (20, 150, 150, 3)) and binary
labels (shape (20,)). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches
indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to break the iteration loop
at some point.
End of explanation
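The "loops endlessly" behaviour is easy to reproduce with an ordinary Python generator. A toy stand-in (hypothetical, unrelated to Keras internals) that wraps around its data forever:

```python
def batch_generator(data, batch_size):
    """Yield fixed-size batches forever, wrapping around the data."""
    i = 0
    while True:                      # never terminates on its own
        yield [data[(i + j) % len(data)] for j in range(batch_size)]
        i = (i + batch_size) % len(data)

gen = batch_generator(list(range(10)), 4)
first = next(gen)   # [0, 1, 2, 3]
second = next(gen)  # [4, 5, 6, 7]
# Without an explicit break (or a bounded number of next() calls),
# a for-loop over `gen` would run forever -- hence the `break` above.
```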
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
Explanation: Let's fit our model to the data using the generator. We do it using the fit_generator method, the equivalent of fit for data generators
like ours. It expects as first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does.
Because the data is being generated endlessly, the fitting process needs to know how many samples to draw from the generator before
declaring an epoch over. This is the role of the steps_per_epoch argument: after having drawn steps_per_epoch batches from the
generator, i.e. after having run for steps_per_epoch gradient descent steps, the fitting process will go to the next epoch. In our case,
batches contain 20 samples, so it will take 100 batches until we see our target of 2000 samples.
When using fit_generator, one may pass a validation_data argument, much like with the fit method. Importantly, this argument is
allowed to be a data generator itself, but it could be a tuple of Numpy arrays as well. If you pass a generator as validation_data, then
this generator is expected to yield batches of validation data endlessly, and thus you should also specify the validation_steps argument,
which tells the process how many batches to draw from the validation generator for evaluation.
End of explanation
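The steps_per_epoch and validation_steps arithmetic above can be checked directly -- samples divided by batch size (plain Python, for illustration):

```python
def steps_for(n_samples, batch_size):
    """How many batches it takes to cover n_samples once."""
    return n_samples // batch_size

print(steps_for(2000, 20))  # 100 -- training steps per epoch
print(steps_for(1000, 20))  # 50  -- validation steps
```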
model.save('cats_and_dogs_small_1.h5')
Explanation: It is good practice to always save your models after training:
End of explanation
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Let's plot the loss and accuracy of the model over the training and validation data during training:
End of explanation
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
Explanation: These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our
validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs and then stalls, while the training loss
keeps decreasing linearly until it reaches nearly 0.
Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a
number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to
introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: data
augmentation.
Using data augmentation
Overfitting is caused by having too few samples to learn from, rendering us unable to train a model able to generalize to new data.
Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand: we would never overfit. Data
augmentation takes the approach of generating more training data from existing training samples, by "augmenting" the samples via a number
of random transformations that yield believable-looking images. The goal is that at training time, our model would never see the exact same
picture twice. This helps the model get exposed to more aspects of the data and generalize better.
In Keras, this can be done by configuring a number of random transformations to be performed on the images read by our ImageDataGenerator
instance. Let's get started with an example:
End of explanation
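Among the transforms listed above, a horizontal flip is trivial to express without any library. A toy version on a nested-list "image" (for intuition only -- Keras operates on tensors):

```python
def horizontal_flip(img):
    """Flip each row left-to-right (mirror the image horizontally)."""
    return [row[::-1] for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
print(horizontal_flip(img))  # [[3, 2, 1], [6, 5, 4]]
```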
# This is a module with image preprocessing utilities
import os  # os.path.join / os.listdir are used below
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
Explanation: These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
rotation_range is a value in degrees (0-180), a range within which to randomly rotate pictures.
width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures
vertically or horizontally.
shear_range is for randomly applying shearing transformations.
zoom_range is for randomly zooming inside pictures.
horizontal_flip is for randomly flipping half of the images horizontally -- relevant when there are no assumptions of horizontal
asymmetry (e.g. real-world pictures).
fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
Let's take a look at our augmented images:
End of explanation
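The fill_mode='nearest' behaviour -- newly exposed pixels copy the nearest edge pixel -- can be sketched for a one-dimensional row shift. This is a hypothetical illustration of the idea, not the Keras implementation:

```python
def shift_right(row, k):
    """Shift a row of pixels k positions to the right, filling the
    exposed gap with the edge value (mimics fill_mode='nearest')."""
    if k == 0:
        return row[:]
    return [row[0]] * k + row[:-k]

print(shift_right([10, 20, 30, 40, 50], 2))  # [10, 10, 10, 20, 30]
```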
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
Explanation: If we train a new network using this data augmentation configuration, our network will never see the same input twice. However, the inputs
that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information,
we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight
overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier:
End of explanation
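Dropout itself is simple to sketch. A toy, pure-Python version of "inverted" dropout at training time (the real Keras layer works on tensors and handles inference mode for you -- this is only for intuition):

```python
import random

def dropout(values, rate, seed=None):
    """Zero each value with probability `rate`; scale survivors by
    1/(1-rate) so the expected sum is unchanged (inverted dropout)."""
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - rate)
    return [v * scale if rng.random() >= rate else 0.0 for v in values]

out = dropout([1.0] * 8, rate=0.5, seed=42)
# Each entry is either 0.0 (dropped) or 2.0 (kept and rescaled).
print(out)
```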
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
Explanation: Let's train our network using data augmentation and dropout:
End of explanation
model.save('cats_and_dogs_small_2.h5')
Explanation: Let's save our model -- we will be using it in the section on convnet visualization.
End of explanation
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Let's plot our results again:
End of explanation |
8,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Twitter
Step3: So above we use the LIKE statement in conjunction with the % sign. The LIKE operator is going to match a string while the % matches any string of 0 or greater length. If you know the exact match then you needn't use the % sign.
<span style="background-color
Step5: A Couple Things to Think About
When dealing with timestamps, the timestamp itself is often too precise to extract anything meaningful. Therefore, we generally have to bin them into larger time buckets, say weeks, months or even years depending on the amount of data and the type of problem. That is where we find ourselves right now.
To start, we are going to practice using Postgres to creat columns of month and year so that we can do some aggregations on them.
Step7: Now we can apply our counting of languages per month. Now that we have month and year columns, we just need to add that to our GROUP BY clause like so...
Step9: Well, it looks like limiting by 100,000 rows only gives a single month. That's not that interesting. What if we decrease the scope of time a little bit? Let's say by week of the year.
<span style="background-color
Step10: <span style="background-color
Step12: Even weeks are rather few. So let's take a look at days. Keep in mind that this next query could take a few minutes.
Step13: We can use the head method to see what this data frame looks like.
Step14: And now we can find shannon for each day...
Step15: We also want the day, month, and year columns to be one date column. We can do that by using the to_datetime method and specify the columns that contribute to the date. We will call this new column date.
Step16: Let's glimpse at what this gave us...
Step17: AND FINALLY...
...we want to plot this relationship between date and shannon.
Step18: pandas actually has matplotlib built in so that we can plot relationships. In this case, the date is going to be the x-axis and shannon will be the y-axis. pandas likes the x-axis to be the index of the data frame, so we first want to subset the data to be only the columns we want to plot, and then set the index to date. After that, we just call the plot method like so...
Step19: <span style="background-color | Python Code:
# BE SURE TO RUN THIS CELL BEFORE ANY OF THE OTHER CELLS
import psycopg2
import pandas as pd
from skbio.diversity.alpha import shannon
# query database
statement = """
SELECT *
FROM twitter.job
WHERE description LIKE '%New York City%';
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(new_york)
Explanation: Twitter: An Analysis of Linguistic Diversity
Part IV
This particular database was collecting tweets between middle of January through the middle of May. So there is a time dimension to these data. If we glance back at the first tweet notebook, we see that there is an attribute named created_at. This is a timestamp of when the tweet was published for the world to see.
Adding a time component to an analysis gives us the option to follow trends. When are hashtags popular? How quickly do they die? For our linguistic diversity analysis, it certainly begs for a modified analysis.
Quick question: Would accounting for time when calculating a shannon index on a city have any effect? Would cities stay stable throughout time in regards to their linguistic diversity? Are some cities more prone to fluctuations than others? Well, a timestamp allows us to explore these questions and more.
Today we are going to be comparing two different cities: New York City, New York and Columbia, Missouri. By now you are probably aware that the job id for Columbia is 261. But what is the job id for New York City? That is a simple query of the job table.
End of explanation
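For reference, the Shannon index that skbio computes for us can be written out by hand. This toy version is only to make the formula concrete; the base-2 logarithm here is an assumption that matches skbio's documented default:

```python
import math

def shannon_index(counts, base=2):
    """H = -sum(p_i * log_base(p_i)) over the nonzero category proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p, base) for p in props)

print(shannon_index([5, 5]))        # 1.0 -- two equally common languages
print(shannon_index([1, 1, 1, 1]))  # 2.0 -- four equally common languages
```

A single dominant category drives the index toward zero, which is why the measure captures linguistic diversity.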
# put your code here
# ------------------
# query database
statement = """
SELECT j.description, t.*
FROM twitter.job j, twitter.tweet t
WHERE j.description LIKE '%New York City%' AND j.job_id = t.job_id
LIMIT 10000;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(new_york)
Explanation: So above we use the LIKE statement in conjunction with the % sign. The LIKE operator is going to match a string while the % matches any string of 0 or greater length. If you know the exact match then you needn't use the % sign.
<span style="background-color: #FFFF00">YOUR TURN</span>
PRACTICE: Just a refresher. Query 10,000 tweets from the tweet table where the job_id corresponds to New York City. Be sure to also select the description column from the job table so that every record returned has a description saying "New York City, New York".
End of explanation
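As a rough analogy for readers more comfortable in Python, SQL's % in a LIKE pattern behaves like * in shell-style globbing. This is an analogy only -- LIKE also has _ for single characters and its own escaping rules -- and the description strings below are hypothetical examples:

```python
import fnmatch

descriptions = [
    "Job: New York City, New York",
    "Job: Columbia, Missouri",
]
# SQL:  WHERE description LIKE '%New York City%'
matches = [d for d in descriptions if fnmatch.fnmatch(d, "*New York City*")]
print(matches)  # ["Job: New York City, New York"]
```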
# query database
statement = """
SELECT t.*,
date_part('month', created_at) as month,
date_part('year', created_at) as year
FROM twitter.job j, twitter.tweet t
WHERE j.description LIKE '%New York City%' AND j.job_id = t.job_id
LIMIT 1000;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(new_york)
Explanation: A Couple Things to Think About
When dealing with timestamps, the timestamp itself is often too precise to extract anything meaningful. Therefore, we generally have to bin them into larger time buckets, say weeks, months or even years depending on the amount of data and the type of problem. That is where we find ourselves right now.
To start, we are going to practice using Postgres to create columns of month and year so that we can do some aggregations on them.
End of explanation
# query database
statement = """
SELECT DISTINCT iso_language, month, year, COUNT(*) FROM
(SELECT t.*,
date_part('month', created_at) as month,
date_part('year', created_at) as year
FROM twitter.job j, twitter.tweet t
WHERE j.description LIKE '%New York City%' AND j.job_id = t.job_id
LIMIT 100000) AS new_york
GROUP BY iso_language, month, year;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
pd.DataFrame(new_york)
Explanation: Now we can apply our counting of languages per month. Now that we have month and year columns, we just need to add that to our GROUP BY clause like so...
End of explanation
# put your code here
# ------------------
# query database
statement = """
SELECT DISTINCT iso_language, week, year, COUNT(*) FROM
(SELECT t.*,
date_part('week', created_at) as week,
date_part('year', created_at) as year
FROM twitter.job j, twitter.tweet t
WHERE j.description LIKE '%New York City%' AND j.job_id = t.job_id
LIMIT 1000000) AS new_york
GROUP BY iso_language, week, year;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
week_ny = pd.DataFrame(new_york)
Explanation: Well, it looks like limiting by 100,000 rows only gives a single month. That's not that interesting. What if we decrease the scope of time a little bit? Let's say by week of the year.
<span style="background-color: #FFFF00">YOUR TURN</span>
Count the number of languages in New York City per week of the year. Turn that into a data frame and call it week_ny. If you need some documentation on how to get the week from a timestamp field, look here (https://www.postgresql.org/docs/8.0/static/functions-datetime.html).
End of explanation
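Postgres's date_part('week', ...) returns the ISO 8601 week number, which Python's datetime module exposes as well. A quick cross-check (illustrative; the date chosen is arbitrary):

```python
from datetime import date

d = date(2017, 3, 5)       # a Sunday within the collection window
print(d.isocalendar()[1])  # 9 -- the ISO week number, matching date_part('week', ...)
```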
# put your code here
# ------------------
week_ny['count'].groupby(week_ny['week']).apply(shannon)
Explanation: <span style="background-color: #FFFF00">YOUR TURN</span>
From the week_ny data frame that you created above, now find the shannon index for each week.
End of explanation
# query database
statement = """
SELECT DISTINCT iso_language, day, month, year, COUNT(*) FROM
(SELECT t.*,
date_part('day', created_at) as day,
date_part('month', created_at) as month,
date_part('year', created_at) as year
FROM twitter.job j, twitter.tweet t
WHERE j.description LIKE '%New York City%' AND j.job_id = t.job_id
LIMIT 1000000) AS new_york
GROUP BY iso_language, day, month, year;
"""
try:
connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
# create dictionary from the rows and column names
new_york = {}
for i in list(range(len(column_names))):
new_york['{}'.format(column_names[i])] = [x[i] for x in rows]
# turn dictionary into a data frame
day_ny = pd.DataFrame(new_york)
Explanation: Even weeks are rather few. So let's take a look at days. Keep in mind that this next query could take a few minutes.
End of explanation
day_ny.head()
Explanation: We can use the head method to see what this data frame looks like.
End of explanation
date_ny = day_ny.groupby(['day','month','year'])['count'].apply(shannon).reset_index()
Explanation: And now we can find shannon for each day...
End of explanation
date_ny['date'] = pd.to_datetime(date_ny.year*10000+date_ny.month*100+date_ny.day,format='%Y%m%d')
# nicer column name
date_ny['shannon'] = date_ny['count']
Explanation: We also want the day, month, and year columns to be one date column. We can do that by using the to_datetime method and specify the columns that contribute to the date. We will call this new column date.
End of explanation
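The year*10000 + month*100 + day trick simply packs a date into a YYYYMMDD integer, which is exactly what the '%Y%m%d' format string parses. Checking the arithmetic by hand:

```python
def ymd_int(year, month, day):
    """Pack a date into the integer form YYYYMMDD."""
    return year * 10000 + month * 100 + day

print(ymd_int(2017, 3, 5))  # 20170305 -- parsed by pd.to_datetime(..., format='%Y%m%d')
```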
date_ny.head()
Explanation: Let's glimpse at what this gave us...
End of explanation
%matplotlib inline
#import matplotlib
#import numpy as np
#import matplotlib.pyplot as plt
Explanation: AND FINALLY...
...we want to plot this relationship between date and shannon.
End of explanation
date_ny[['date','shannon']].set_index('date').plot()
Explanation: pandas actually has matplotlib built in so that we can plot relationships. In this case, the date is going to be the x-axis and shannon will be the y-axis. pandas likes the x-axis to be the index of the data frame, so we first want to subset the data to be only the columns we want to plot, and then set the index to date. After that, we just call the plot method like so...
End of explanation
# put your code here
# ------------------
Explanation: <span style="background-color: #FFFF00">YOUR TURN</span>
Now, do the same for Columbia, MO. Be sure to find the day, month and year and to count the languages based on day. Finally plot your results after calculating the shannon index per day.
End of explanation |