From this we see that the index is indeed using the timing information in the file, and we can see that the dtype is datetime.
## Selecting rows and columns of data

In particular, we will select rows based on the index. Since in this example we are indexing by time, we can use human-readable date/time strings instead of integer positions. Columns can be selected by name.

We can now access the columns of the file using dictionary-like keys, like so:
```python
df['trip_distance']
```

*Source: materials/4_pandas.ipynb in hetland/python4geosciences (MIT license).*
We can equivalently access the columns of data as attributes. This means that we can use tab autocompletion to see the methods and data available in a dataframe.
```python
df.trip_distance
```
We can plot in this way, too:
```python
df['trip_distance'].plot(figsize=(14, 6))
```
## Simple data selection
One of the biggest benefits of using pandas is being able to easily reference the data in intuitive ways. For example, because we set up the index of the dataframe to be the date and time, we can pull out data using dates. In the following, we pull out all data from the first hour of the day:
```python
df['2016-05-01 00']
```
Here we further subdivide to examine the passenger count during that time period:
```python
df['passenger_count']['2016-05-01 00']
```
We can also access a range of data, for example any data rows from midnight until noon:
```python
df['2016-05-01 00':'2016-05-01 11']
```
## If you want more choice in your selection
The following, adding on minutes, does not work:
```python
df['2016-05-01 00:30']
```
However, we can get more control with .loc, which accesses combinations of specific columns and/or rows, or subsets of columns and/or rows.
```python
df.loc['2016-05-01 00:30']
```
You can also select data for more specific time periods, using the pattern df.loc[row_label, col_label]:
```python
df.loc['2016-05-01 00:30', 'passenger_count']
```
You can select more than one column:
```python
df.loc['2016-05-01 00:30', ['passenger_count', 'trip_distance']]
```
You can select a range of data:
```python
df.loc['2016-05-01 00:30':'2016-05-01 01:30', ['passenger_count', 'trip_distance']]
```
You can alternatively select data by integer position instead of by label, using iloc instead of loc. Here we select the first five rows of data for all columns:
```python
df.iloc[0:5, :]
```
## Exercise

Access the data from dataframe df for the last three hours of the day at once. Plot the tip amount (tip_amount) for this time period.

After you can make a line plot, try making a histogram of the data. Play around with the data range and the number of bins. A number of plot types are built into the plot method of a pandas dataframe via the keyword argument kind.
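As a quick sketch of the kind argument, here is a histogram drawn on invented data (the taxi dataframe itself is not available in this standalone example):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a column like tip_amount.
rng = np.random.default_rng(0)
tips = pd.Series(rng.exponential(scale=2.0, size=1000), name='tip_amount')

# `kind` selects the plot type; 'hist' also accepts a `bins` argument.
ax = tips.plot(kind='hist', bins=30, figsize=(10, 4))
ax.set_xlabel('tip amount')
```

Other values for kind include 'line', 'bar', 'box', and 'kde'.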
## Exercise
Using pandas, read in the CTD data we've used in class several times. What variable would make sense to use for your index column?
## Notes about datetimes
You can change the format of datetimes using strftime().
Compare the datetimes in our dataframe index in the first cell below with the second cell, in which we format the look of the datetimes differently. We can choose how it looks using formatting codes. You can find a comprehensive list of the formatting directives at http://strftime.org/. Note that inside the parentheses, you can write other characters that will be passed through (like the comma in the example below).
```python
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0])
df.index
df.index.strftime('%b %d, %Y %H:%M')  # note %M is minutes; %m is month
```
You can create and use datetimes using pandas. It will interpret the information you put into a string as best it can. Year-month-day ordering is a good way to write dates, since it avoids the ambiguity between American and European conventions.
After defining a pandas Timestamp, you can also change time using Timedelta.
```python
now = pd.Timestamp('October 22, 2019 1:19PM')
now
tomorrow = pd.Timedelta('1 day')
now + tomorrow
```
You can set up a range of datetimes to make your own dataframe indices with the following. The available codes for frequency are listed in a table later in this notebook.
```python
pd.date_range(start='Jan 1 2019', end='May 1 2019', freq='15T')
```
Note that you can get many different measures of your time index.
```python
df.index.minute
df.index.dayofweek
```
## Exercise

How would you change the call to strftime above so that the first index reads, for example, "the 1st of May, 2016 at the hour of 00 and the minute of 00 and the seconds of 00, which is the following day of the week: Sunday"? Use the format codes for as many of the values as possible.
## Adding a column to a dataframe

We can add data to our dataframe very easily. Below we add a column computed from an existing one.
```python
df['tip squared'] = df.tip_amount**2  # making up some numbers to save to a new column
df['tip squared'].plot()
```
## Another example: Wind data
Let's read in the wind data file that we have used before to have another data set to use. Note the parameters used to read it in properly.
```python
df2 = pd.read_table('../data/burl1h2010.txt', header=0, skiprows=[1], delim_whitespace=True,
                    parse_dates={'dates': ['#YY', 'MM', 'DD', 'hh']}, index_col=0)
df2
df2.index
```
## Plotting with pandas
You can plot with matplotlib and control many things directly from pandas. Get more info about plotting from pandas dataframes directly from:
```python
df.plot?
```
You can mix pandas and matplotlib plotting in three ways: set up a matplotlib figure and axes and pass the axes into your dataframe's plot call; start with a pandas plot and save the axes it returns; or bring the pandas data to matplotlib fully. Each is demonstrated next.
### Start from matplotlib, then input axes to pandas

To demonstrate plotting starting from matplotlib, we also make a note about column selection: you can select which data columns to plot either in the line before the plot call, or within the plot call itself.

The key part here is that you pass your pandas plot call the axes you want to plot into (here: ax=axes[0]).
```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(14, 4))
df2['WSPD']['2010-5'].plot(ax=axes[0])
df2.loc['2010-5'].plot(y='WSPD', ax=axes[1])
```
### Start with pandas, then use matplotlib commands
The important part here is that the call to pandas dataframe plotting returns an axes handle which you can save; here, it is saved as "ax".
```python
ax = df2['WSPD']['2010 11 1'].plot()
ax.set_ylabel('Wind speed')
```
### Bring pandas dataframe data to matplotlib fully
You can also use matplotlib directly by pulling the data you want to plot out of your dataframe.
```python
plt.plot(df2['WSPD'])
```
### Plot all or multiple columns at once
```python
# all
df2.plot()
```
To plot more than one but less than all columns, give a list of column names. Here are two ways to do the same thing:
```python
# multiple
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
df2[['WSPD', 'GST']].plot(ax=axes[0])
df2.plot(y=['WSPD', 'GST'], ax=axes[1])
```
### Formatting dates

You can control how datetimes look on the x axis in these plots, as demonstrated in this section. The formatting codes used in the call to DateFormatter are the same as those used earlier in this notebook for strftime.

Note that you can additionally control minor ticks in the same way.
```python
from matplotlib.dates import DateFormatter

ax = df2['WSPD'].plot(figsize=(14, 4))
ax.set_xlabel('2010')
date_form = DateFormatter("%b %d")
ax.xaxis.set_major_formatter(date_form)

# import matplotlib.dates as mdates
# # You can also control where the ticks are located, by date, with Locators
# ticklocations = mdates.MonthLocator()
# ax.xaxis.set_major_locator(ticklocations)
```
### Plotting with twin axis
You can very easily plot two variables with different y axis limits with the secondary_y keyword argument to df.plot.
```python
axleft = df2['WSPD']['2010-10'].plot(figsize=(14, 4))
axright = df2['WDIR']['2010-10'].plot(secondary_y=True, alpha=0.5)
axleft.set_ylabel('Speed [m/s]', color='blue');
axright.set_ylabel('Dir [degrees]', color='orange');
```
## Resampling

Sometimes we want our data at a different sampling frequency than we have; that is, we want to change the time between rows or observations. Changing this is called resampling. We can upsample to increase the number of data points in a given dataset (decreasing the period between points), or downsample to decrease the number of data points.

The wind data is given every hour. Here we downsample it to be once a day instead. After the resample call, a method is needed to say how the existing data should be combined over each downsampling period. We could use the max value over the 1-day period to represent each day:
```python
df2.resample('1d').max()  # now the data is daily
```
It's always important to check our results to make sure they look reasonable. Let's plot our resampled data with the original data to make sure they align well. We'll choose one variable for this check.
We can see that the daily max wind gust does indeed look like the max value for each day, though note that it is plotted at the start of the day.
```python
df2['GST']['2010-4-1':'2010-4-5'].plot()
df2.resample('1d').max()['GST']['2010-4-1':'2010-4-5'].plot()
```
We can also upsample our data, adding more rows. As before, the resample call still needs a method on the end telling pandas how to process the data. However, since in this case we are adding rows (upsampling) rather than combining them (downsampling), a function like max leaves the existing observations unchanged (it takes the max of a single row). We haven't said how to fill the new rows, so they are NaNs by default.

Here we change from having data every hour to having it every 30 minutes.
```python
df2.resample('30min').max()  # max doesn't say what to do with data in new rows
```
When upsampling, a reasonable option is to fill the new rows with data from the previous existing row:
```python
df2.resample('30min').ffill()
```
Here we upsample to have data every 15 minutes, but we interpolate to fill in the data between. This is a very useful thing to be able to do.
```python
df2.resample('15T').interpolate()
```
The codes for time period/frequency are presented here for convenience:

| Alias | Description |
|---|---|
| B | business day frequency |
| C | custom business day frequency (experimental) |
| D | calendar day frequency |
| W | weekly frequency |
| M | month end frequency |
| SM | semi-month end frequency (15th and end of month) |
| BM | business month end frequency |
| CBM | custom business month end frequency |
| MS | month start frequency |
| SMS | semi-month start frequency (1st and 15th) |
| BMS | business month start frequency |
| CBMS | custom business month start frequency |
| Q | quarter end frequency |
| BQ | business quarter end frequency |
| QS | quarter start frequency |
| BQS | business quarter start frequency |
| A | year end frequency |
| BA | business year end frequency |
| AS | year start frequency |
| BAS | business year start frequency |
| BH | business hour frequency |
| H | hourly frequency |
| T, min | minutely frequency |
| S | secondly frequency |
| L, ms | milliseconds |
| U, us | microseconds |
| N | nanoseconds |
## Exercise
We looked at NYC taxi trip distance earlier, but it was hard to tell what was going on with so much data. Resample this high resolution data to be lower resolution so that any trends in the information are easier to see. By what method do you want to do this downsampling? Plot your results.
## groupby, and the difference between groupby and resampling

groupby allows us to aggregate data across a category or value. We'll use the example of grouping across a measure of time.

Let's examine this further using a dataset of some water properties near the Flower Garden Banks in Texas. We want to find the average salinity by month across the years of data available; that is, the average salinity value for each calendar month, computed from all of the years of data. We end up with 12 data points in this case.

This is distinct from resampling: if you resample to monthly average salinity, you get a data point for each month in the time series. If there are 5 years of data in your dataset, you end up with 12*5 = 60 data points total.
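A synthetic example makes the count difference concrete (the index and values here are invented stand-ins for the salinity series used below):

```python
import numpy as np
import pandas as pd

# Five years of monthly observations on a datetime index.
idx = pd.date_range('2015-01-01', '2019-12-01', freq='MS')
s = pd.Series(np.arange(len(idx), dtype=float), index=idx)

by_month = s.groupby(s.index.month).mean()  # one value per calendar month
monthly = s.resample('1M').mean()           # one value per month in the record

print(len(by_month), len(monthly))  # 12 60
```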
In the groupby example below, we first read the data into dataframe 'df3', then we group it by month (across years, since there are many years of data). From this grouping, we decide what function we want to apply to all of the numbers we've aggregated across the months of the year. We'll use mean for this example.
```python
import numpy as np

df3 = pd.read_table('http://pong.tamu.edu/tabswebsite/daily/tabs_V_salt_all', index_col=0, parse_dates=True)
df3
ax = df3.groupby(df3.index.month).aggregate(np.mean)['Salinity'].plot(color='k', grid=True, figsize=(14, 4), marker='o')
# the x axis now shows the month of the year, which is what we aggregated over
ax.set_xlabel('Month of year')
ax.set_ylabel('Average salinity')
```
## Using httpbin.org

The examples below make requests to https://httpbin.org/delay/1, an endpoint that waits one second before responding, to compare synchronous and asynchronous HTTP clients.
```python
import requests
from datetime import datetime  # needed below for timing

def requests_get(index=None):
    response = requests.get("https://httpbin.org/delay/1")
    response.raise_for_status()
    print(f"{index} - {response.status_code} - {response.elapsed}")

requests_get()

before = datetime.now()
for index in range(0, 5):
    requests_get(index)
after = datetime.now()
print(f"total time: {after - before}")

import httpx

def httpx_get(index=None):
    response = httpx.get("https://httpbin.org/delay/1")
    response.raise_for_status()
    print(f"{index} - {response.status_code} - {response.elapsed}")

httpx_get()

before = datetime.now()
for index in range(0, 5):
    httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")

# The remaining cells use top-level await, which works inside a notebook.
async with httpx.AsyncClient() as client:
    response = await client.get('https://httpbin.org/delay/1')
    print(response)

async def async_httpx_get(index=None):
    async with httpx.AsyncClient() as client:
        response = await client.get("https://httpbin.org/delay/1")
        response.raise_for_status()
        print(f"{index} - {response.status_code} - {response.elapsed}")

await async_httpx_get()

before = datetime.now()
for index in range(0, 5):
    await async_httpx_get(index)
after = datetime.now()
print(f"total time: {after - before}")

import asyncio

many_gets = tuple(async_httpx_get(index) for index in range(0, 5))
before = datetime.now()
await asyncio.gather(*many_gets)
after = datetime.now()
print(f"total time: {after - before}")

semaphore = asyncio.Semaphore(3)

async def async_semaphore_httpx_get(index=None):
    async with semaphore:
        async with httpx.AsyncClient() as client:
            response = await client.get("https://httpbin.org/delay/1")
            response.raise_for_status()
            print(f"{index} - {response.status_code} - {response.elapsed}")

semaphore_many_gets = tuple(
    async_semaphore_httpx_get(index) for index in range(0, 10))
before = datetime.now()
await asyncio.gather(*semaphore_many_gets)
after = datetime.now()
print(f"total time: {after - before}")
```

*Source: HTTPX/HTTPX.ipynb in CLEpy/CLEpy-MotM (MIT license).*
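The semaphore pattern in the last cell can be sketched without the network: here asyncio.sleep stands in for the HTTP call (the delay and task counts are invented), so the cap on concurrency is directly observable.

```python
import asyncio

LIMIT = 3    # matches the Semaphore(3) above; value is illustrative
running = 0  # tasks currently inside the semaphore
peak = 0     # highest concurrency observed

async def limited_task(semaphore):
    global running, peak
    async with semaphore:
        running += 1
        peak = max(peak, running)
        await asyncio.sleep(0.05)  # stand-in for the 1-second HTTP call
        running -= 1

async def main():
    semaphore = asyncio.Semaphore(LIMIT)
    # Ten tasks are launched, but at most LIMIT run inside the semaphore at once.
    await asyncio.gather(*(limited_task(semaphore) for _ in range(10)))

asyncio.run(main())
print(f"peak concurrency: {peak}")  # peak concurrency: 3
```

In a notebook you could use top-level await instead of asyncio.run, as the cells above do.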
We may now define a parametrized function using JAX. This will allow us to efficiently compute gradients.

There are a number of libraries that provide common building blocks for parametrized functions (such as flax and haiku). For this case, though, we will implement our function from scratch.

Our function will be an MLP (multi-layer perceptron) with a single hidden layer and a single output layer. We initialize all parameters using a standard Gaussian $\mathcal{N}(0,1)$ distribution.
```python
import jax
import jax.numpy as jnp
import optax

initial_params = {
    'hidden': jax.random.normal(shape=[8, 32], key=jax.random.PRNGKey(0)),
    'output': jax.random.normal(shape=[32, 2], key=jax.random.PRNGKey(1)),
}

def net(x: jnp.ndarray, params: optax.Params) -> jnp.ndarray:
    x = jnp.dot(x, params['hidden'])
    x = jax.nn.relu(x)
    x = jnp.dot(x, params['output'])
    return x

def loss(params: optax.Params, batch: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray:
    y_hat = net(batch, params)
    # optax also provides a number of common loss functions.
    loss_value = optax.sigmoid_binary_cross_entropy(y_hat, labels).sum(axis=-1)
    return loss_value.mean()
```

*Source: docs/optax-101.ipynb in deepmind/optax (Apache-2.0 license).*
We will use optax.adam to compute the parameter updates from their gradients on each optimizer step.
Note that since optax optimizers are implemented using pure functions, we will need to also keep track of the optimizer state. For the Adam optimizer, this state will contain the momentum values.
```python
def fit(params: optax.Params, optimizer: optax.GradientTransformation) -> optax.Params:
    opt_state = optimizer.init(params)

    @jax.jit
    def step(params, opt_state, batch, labels):
        loss_value, grads = jax.value_and_grad(loss)(params, batch, labels)
        updates, opt_state = optimizer.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss_value

    for i, (batch, labels) in enumerate(zip(TRAINING_DATA, LABELS)):
        params, opt_state, loss_value = step(params, opt_state, batch, labels)
        if i % 100 == 0:
            print(f'step {i}, loss: {loss_value}')

    return params

# Finally, we can fit our parametrized function using the Adam optimizer
# provided by optax.
optimizer = optax.adam(learning_rate=1e-2)
params = fit(initial_params, optimizer)
```
We see that our loss appears to have converged, which should indicate that we have successfully found better parameters for our network.

## Weight Decay, Schedules and Clipping
Many research models make use of techniques such as learning rate scheduling, and gradient clipping. These may be achieved by chaining together gradient transformations such as optax.adam and optax.clip.
In the following, we will use Adam with weight decay (optax.adamw), a cosine learning rate schedule (with warmup) and also gradient clipping.
```python
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=1.0,
    warmup_steps=50,
    decay_steps=1_000,
    end_value=0.0,
)

optimizer = optax.chain(
    optax.clip(1.0),
    optax.adamw(learning_rate=schedule),
)
params = fit(initial_params, optimizer)
```
# A Shift-Reduce Parser for Arithmetic Expressions

In this notebook we implement a generic shift-reduce parser. The parse table that we use implements the following grammar for arithmetic expressions:

$$
\begin{eqnarray*}
\mathrm{expr}    & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product}    \\
                 & \mid        & \mathrm{expr}\;\;\texttt{'-'}\;\;\mathrm{product}    \\
                 & \mid        & \mathrm{product}                                     \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{product}\;\;\texttt{'*'}\;\;\mathrm{factor}  \\
                 & \mid        & \mathrm{product}\;\;\texttt{'/'}\;\;\mathrm{factor}  \\
                 & \mid        & \mathrm{factor}                                      \\[0.2cm]
\mathrm{factor}  & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'}      \\
                 & \mid        & \texttt{NUMBER}
\end{eqnarray*}
$$
## Implementing a Scanner
In order to parse, we need a scanner. We will use the same scanner that we have already used for our top down parser that has been presented in the notebook Top-Down-Parser.ipynb.
```python
import re
```

*Source: Python/Shift-Reduce-Parser-Pure.ipynb in karlstroetmann/Formal-Languages (GPL-2.0 license).*
The function tokenize scans the string s into a list of tokens using Python's regular expressions. The scanner distinguishes between
* whitespace, which is discarded,
* numbers,
* arithmetical operators and parentheses,
* all remaining characters, which are treated as lexical errors.

See below for an example.
```python
def tokenize(s):
    '''Transform the string s into a list of tokens. The string s
       is supposed to represent an arithmetic expression.
    '''
    lexSpec = r'''([ \t\n]+)      |  # blanks and tabs
                  ([1-9][0-9]*|0) |  # number
                  ([-+*/()])      |  # arithmetical operators and parentheses
                  (.)                # unrecognized character
               '''
    tokenList = re.findall(lexSpec, s, re.VERBOSE)
    result = []
    for ws, number, operator, error in tokenList:
        if ws:  # skip blanks and tabs
            continue
        elif number:
            result += [ 'NUMBER' ]
        elif operator:
            result += [ operator ]
        else:
            result += [ f'ERROR({error})']
    return result

tokenize('1 + 2 * (3 - 4)')
```
Assume a grammar $G = \langle V, T, R, S \rangle$ is given. A shift-reduce parser
is defined as a 4-Tuple
$$P = \langle Q, q_0, \texttt{action}, \texttt{goto} \rangle$$
where
- $Q$ is the set of states of the shift-reduce parser.
For the purpose of the shift-reduce-parser, states are purely abstract.
- $q_0 \in Q$ is the start state.
- $\texttt{action}$ is a function taking two arguments. The first argument is a state $q \in Q$
and the second argument is a token $t \in T$. The result of this function is an element from the set
  $$\texttt{Action} :=
    \bigl\{ \langle\texttt{shift}, q\rangle \mid q \in Q \bigr\} \cup
    \bigl\{ \langle\texttt{reduce}, r\rangle \mid r \in R \bigr\} \cup
    \bigl\{ \texttt{accept} \bigr\} \cup
    \bigl\{ \texttt{error} \bigr\}.
  $$
Here shift, reduce, accept, and error are strings that serve to
distinguish the different kinds of results returned by the function
action. Therefore the signature of the function action is given as follows:
$$\texttt{action}: Q \times T \rightarrow \texttt{Action}.$$
- goto is a function that takes a state $q \in Q$ and a syntactical variable
$v \in V$ and computes a new state. Therefore the signature of goto is as follows:
$$\texttt{goto}: Q \times V \rightarrow Q.$$
The class ShiftReduceParser maintains two tables that are implemented as dictionaries:
- mActionTable encodes the function $\texttt{action}: Q \times T \rightarrow \texttt{Action}$.
- mGotoTable encodes the function $\texttt{goto}: Q \times V \rightarrow Q$.
The constructor takes these tables as arguments and stores them in the member variables mActionTable and mGotoTable.
```python
class ShiftReduceParser():
    def __init__(self, actionTable, gotoTable):
        self.mActionTable = actionTable
        self.mGotoTable   = gotoTable
```
The method parse takes a list of tokens TL as its argument. It returns True if the token list can be parsed successfully and False otherwise. The algorithm applied here is known as shift-reduce parsing.
```python
def parse(self, TL):
    index   = 0       # points to the next token
    Symbols = []      # stack of symbols
    States  = ['s0']  # stack of states, s0 is the start state
    TL += ['EOF']
    while True:
        q = States[-1]
        t = TL[index]
        # Below, an undefined table entry is interpreted as an error entry.
        match self.mActionTable.get((q, t), 'error'):
            case 'error':
                return False
            case 'accept':
                return True
            case 'shift', s:
                Symbols += [t]
                States  += [s]
                index   += 1
            case 'reduce', rule:
                head, body = rule
                n = len(body)
                Symbols = Symbols[:-n]
                States  = States[:-n]
                Symbols = Symbols + [head]
                state   = States[-1]
                States += [ self.mGotoTable[state, head] ]

ShiftReduceParser.parse = parse
del parse

%run Parse-Table.ipynb
```
Details of the "Happy" dataset:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures
It is now time to solve the "Happy" Challenge.
## 2 - Building a model in Keras
Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.
Here is an example of a model in Keras:
```python
def model(input_shape):
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
return model
```
Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations of the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation we are just writing the latest value of the computation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above).
Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout().
Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
```python
# GRADED FUNCTION: HappyModel

def HappyModel(input_shape):
    """
    Implementation of the HappyModel.

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """
    ### START CODE HERE ###
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)
    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)
    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)
    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')
    ### END CODE HERE ###

    return model
```

*Source: deep-learnining-specialization/4. Convolutional Neural Networks/week2/Keras+-+Tutorial+-+Happy+House+v2.ipynb in diegocavalca/Studies (CC0-1.0 license).*
4. Find a reasonable threshold to say exposure is high and recode the data
```python
df['High_Exposure'] = df['Exposure'].apply(lambda x: 1 if x > 3.41 else 0)
```

*Source: class7/donow/hon_jingyi_donow_7.ipynb in ledeprogram/algorithms (GPL-3.0 license).*
5. Create a logistic regression model
```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # imports needed for this cell

lm = LogisticRegression()
x = np.asarray(dataset[['Mortality']])
y = np.asarray(dataset['Exposure'])
lm = lm.fit(x, y)
```
# Creating and Manipulating Tensors

Learning objectives:
* Initialize and assign TensorFlow Variable objects
* Create and manipulate tensors
* Refresh your knowledge of sums and products in linear algebra (reading the introduction to matrix addition and multiplication is recommended if these concepts are new to you)
* Get familiar with basic math and array operations in TensorFlow
```python
from __future__ import print_function

import tensorflow as tf
```

*Source: ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb in google/eng-edu (Apache-2.0 license).*
## Vector Addition

You can perform many standard math operations on tensors (see the TensorFlow API). The following code creates and manipulates two vectors (1-D tensors), each with exactly six elements:
```python
with tf.Graph().as_default():
    # Create a six-element vector (1-D tensor).
    primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)

    # Create another six-element vector. Each element in the vector will be
    # initialized to 1. The first argument is the shape of the tensor (more
    # on shapes below).
    ones = tf.ones([6], dtype=tf.int32)

    # Add the two vectors. The resulting tensor is a six-element vector.
    just_beyond_primes = tf.add(primes, ones)

    # Create a session to run the default graph.
    with tf.Session() as sess:
        print(just_beyond_primes.eval())
```
### Tensor Shapes

Shapes characterize the size and number of dimensions of a tensor. The shape is written as a list, where the ith element gives the size along dimension i. The length of the list indicates the rank of the tensor (that is, the number of dimensions).

For more information, see the TensorFlow documentation.

A few basic examples:
```python
with tf.Graph().as_default():
    # A scalar (0-D tensor).
    scalar = tf.zeros([])

    # A vector with 3 elements.
    vector = tf.zeros([3])

    # A matrix with 2 rows and 3 columns.
    matrix = tf.zeros([2, 3])

    with tf.Session() as sess:
        print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
        print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
        print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
```
### Broadcasting
In mathematics, only tensors of identical shape can undergo element-wise operations (such as add and equals). In TensorFlow, however, you can perform operations that would traditionally be incompatible: TensorFlow supports broadcasting (a concept borrowed from NumPy), which expands the smaller array so that it takes the same shape as the larger array. Examples of what broadcasting allows:
If an operation requires a tensor of size [6], a tensor of size [1] or [] can serve as an operand.
If an operation requires a tensor of size [4, 6], any of the following tensor sizes can serve as an operand:
[1, 6]
[6]
[]
If an operation requires a tensor of size [3, 5, 6], any of the following tensor sizes can serve as an operand:
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
NOTE: When a tensor is broadcast, its entries are conceptually copied. (They are not actually copied, for performance reasons; broadcasting was devised as a performance optimization.)
The NumPy broadcasting documentation, which is meant to be approachable, gives a detailed description of the full set of broadcasting rules.
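As a quick illustration (added here, not part of the original lesson), NumPy follows the same broadcasting semantics, so the rules above can be checked directly with hypothetical arrays:

```python
import numpy as np

# A [3, 5, 6] tensor and a [5, 6] tensor: per the rules above,
# the smaller operand is broadcast to [3, 5, 6] before the add.
a = np.zeros((3, 5, 6))
b = np.ones((5, 6))
c = a + b
print(c.shape)  # (3, 5, 6)

# A scalar (shape []) also broadcasts against any shape.
d = a + 1.0
print(d.shape)  # (3, 5, 6)
```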
The following code performs the same tensor addition as before, this time using broadcasting:
|
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
## Matrix Multiplication
In linear algebra, when you compute the product of two matrices, the number of columns in the first matrix must equal the number of rows in the second.
A 3x4 matrix can be multiplied by a 4x2 matrix, yielding a 3x2 matrix.
A 4x2 matrix cannot be multiplied by a 3x4 matrix.
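The same compatibility rule can be sketched in plain NumPy (an illustration added here, not part of the original lesson):

```python
import numpy as np

x = np.ones((3, 4))          # 3x4
y = np.ones((4, 2))          # 4x2: inner dimensions match
print((x @ y).shape)         # (3, 2)

# Reversing the operand shapes violates the rule and raises an error.
try:
    np.ones((4, 2)) @ np.ones((3, 4))
except ValueError:
    print("4x2 cannot be multiplied by 3x4")
```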
|
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
## Tensor Reshaping
Tensor addition and matrix multiplication each impose specific constraints on their operands, so TensorFlow programmers frequently need to reshape tensors.
You can use the tf.reshape method to reshape a tensor.
For example, an 8x2 tensor can be reshaped into a 2x8 or a 4x4 tensor:
|
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
You can also use tf.reshape to change the number of dimensions (the 'rank') of a tensor.
For example, the same 8x2 tensor can be reshaped into a three-dimensional 2x2x4 tensor or a one-dimensional 16-element tensor.
|
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
### Exercise 1: Reshape two tensors in order to multiply them
The following two vectors are incompatible for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
Reshape these vectors into operands compatible with matrix multiplication.
Then perform matrix multiplication on the reshaped tensors.
|
# Write your code for Task 1 here.
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
### Solution
Click below for a solution.
|
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
  # can matrix multiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
## Variables, Initialization and Assignment
So far, every operation we performed was on static values (tf.constant); calling eval() always returned the same result. TensorFlow also lets you define Variable objects, whose values can change.
When creating a variable, you can either set its initial value explicitly or use an initializer (such as a distribution):
|
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
One peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block will raise an error:
|
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
The easiest way to initialize a variable is to call global_variables_initializer. Note the use of Session.run() here, which is roughly equivalent to eval().
|
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
Once initialized, variables keep their value for the duration of the session (they must be re-initialized when a new session starts):
|
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
To change the value of a variable, use the assign op. Merely creating this op has no effect. As with initialization, you must run the assignment op (via run) before the variable's value is actually updated:
|
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
Loading, saving… there is much more to variables that this course does not cover. To learn more about these topics, see the TensorFlow documentation.
### Exercise 2: Simulate 10 rolls of two dice
Create a dice simulation that generates a two-dimensional 10x3 tensor in which:
Columns 1 and 2 each hold one roll of one die.
Column 3 holds the sum of columns 1 and 2 on the same row.
For example, the first row might contain the following values:
Column 1: 4
Column 2: 3
Column 3: 7
To complete this exercise, you may want to consult the TensorFlow documentation.
|
# Write your code for Task 2 here.
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
### Solution
Click below for a solution.
|
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
|
ml/cc/prework/fr/creating_and_manipulating_tensors.ipynb
|
google/eng-edu
|
apache-2.0
|
Fine-tuning the model using GridSearchCV
|
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn import grid_search
knn = KNeighborsClassifier()
parameters = {'n_neighbors': [1,]}
grid = grid_search.GridSearchCV(knn, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print('Best score: %0.3f' % grid.best_score_)
print('Best parameters set:')
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid.predict(X_test)
print(classification_report(y_test, predictions))
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
cross validation for SVM
|
tt7 = time()
print("cross result========")
scores = cross_validation.cross_val_score(svc, X, y, cv=5)
print(scores)
print(scores.mean())
tt8 = time()
print("time elapsed: ", tt8 - tt7)
print("\n")
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn import grid_search
svc = SVC()
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 10]}
grid = grid_search.GridSearchCV(svc, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print('Best score: %0.3f' % grid.best_score_)
print('Best parameters set:')
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = grid.predict(X_test)
print(classification_report(y_test, predictions))
pipeline = Pipeline([
    ('clf', SVC(kernel='rbf', gamma=0.01, C=1))  # rbf kernel so the gamma grid below has an effect
])
parameters = {
    'clf__gamma': (0.01, 0.03, 0.1, 0.3, 1),
    'clf__C': (0.1, 0.3, 1, 3, 10, 30),
}
pipeline_grid = grid_search.GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
pipeline_grid.fit(X_train, y_train)
print('Best score: %0.3f' % pipeline_grid.best_score_)
print('Best parameters set:')
best_parameters = pipeline_grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))
predictions = pipeline_grid.predict(X_test)
print(classification_report(y_test, predictions))
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Unsupervised Learning
|
features = ['Age', 'Specs', 'Astigmatic', 'Tear-Production-Rate']
df1 = df[features]
df1.head()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
PCA
|
# Apply PCA with the same number of dimensions as variables in the dataset
from sklearn.decomposition import PCA
pca = PCA(n_components=4) # 4 components for the 4 variables in df1
pca.fit(df1)
# Print the components and the amount of variance in the data contained in each dimension
print(pca.components_)
print(pca.explained_variance_ratio_)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(list(pca.explained_variance_ratio_),'-o')
plt.title('Explained variance ratio as function of PCA components')
plt.ylabel('Explained variance ratio')
plt.xlabel('Component')
plt.show()
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
y = df['Target-Lenses']
target = np.array(y.unique())
target
def plot_pca_scatter():
colors = ['blue', 'red', 'green']
    for i in range(len(colors)):
px = X_pca[:, 0][y == i+1]
py = X_pca[:, 1][y == i+1]
plt.scatter(px, py, c=colors[i])
plt.legend(target)
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
from sklearn.decomposition import PCA
estimator = PCA(n_components=3)
X_pca = estimator.fit_transform(X)
plot_pca_scatter() # Note that we only plot the first and second principal component
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Clustering
|
# Import clustering modules
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
# First we reduce the data to two dimensions using PCA to capture variation
pca = PCA(n_components=2)
reduced_data = pca.fit_transform(df1)
print(reduced_data[:10]) # print upto 10 elements
# Implement your clustering algorithm here, and fit it to the reduced data for visualization
# The visualizer below assumes your clustering object is named 'clusters'
# TRIED OUT 2,3,4,5,6 CLUSTERS AND CONCLUDED THAT 3 CLUSTERS ARE A SENSIBLE CHOICE BASED ON VISUAL INSPECTION, SINCE
# WE OBTAIN ONE CENTRAL CLUSTER AND TWO CLUSTERS THAT SPREAD FAR OUT IN TWO DIRECTIONS.
kmeans = KMeans(n_clusters=3)
clusters = kmeans.fit(reduced_data)
print(clusters)
# Plot the decision boundary by building a mesh grid to populate a graph.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
hx = (x_max-x_min)/1000.
hy = (y_max-y_min)/1000.
xx, yy = np.meshgrid(np.arange(x_min, x_max, hx), np.arange(y_min, y_max, hy))
# Obtain labels for each point in mesh. Use last trained model.
Z = clusters.predict(np.c_[xx.ravel(), yy.ravel()])
# Find the centroids for KMeans or the cluster means for GMM
centroids = kmeans.cluster_centers_
print('*** K MEANS CENTROIDS ***')
print(centroids)
# TRANSFORM DATA BACK TO ORIGINAL SPACE FOR ANSWERING 7
print('*** CENTROIDS TRANSFERED TO ORIGINAL SPACE ***')
print(pca.inverse_transform(centroids))
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('Clustering on the lenses dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Elbow Method
|
distortions = []
for i in range(1, 11):
km = KMeans(n_clusters=i,
init='k-means++',
n_init=10,
max_iter=300,
random_state=0)
km.fit(X)
    distortions.append(km.inertia_)
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion')
plt.tight_layout()
#plt.savefig('./figures/elbow.png', dpi=300)
plt.show()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Quantifying the quality of clustering via silhouette plots
|
import numpy as np
from matplotlib import cm
from sklearn.metrics import silhouette_samples
km = KMeans(n_clusters=3,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(i / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
# plt.savefig('./figures/silhouette.png', dpi=300)
plt.show()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Our clustering with 3 centroids is good.
Bad Clustering:
|
km = KMeans(n_clusters=4,
init='k-means++',
n_init=10,
max_iter=300,
tol=1e-04,
random_state=0)
y_km = km.fit_predict(X)
cluster_labels = np.unique(y_km)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(X, y_km, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []
for i, c in enumerate(cluster_labels):
c_silhouette_vals = silhouette_vals[y_km == c]
c_silhouette_vals.sort()
y_ax_upper += len(c_silhouette_vals)
color = cm.jet(i / n_clusters)
plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0,
edgecolor='none', color=color)
yticks.append((y_ax_lower + y_ax_upper) / 2)
y_ax_lower += len(c_silhouette_vals)
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--")
plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')
plt.tight_layout()
# plt.savefig('./figures/silhouette_bad.png', dpi=300)
plt.show()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Organizing clusters as a hierarchical tree
Performing hierarchical clustering on a distance matrix
To calculate the distance matrix as input for the hierarchical clustering algorithm, we will use the pdist function from SciPy's spatial.distance submodule:
|
labels = []
for i in range(df1.shape[0]):
    labels.append('ID_{}'.format(i))
from scipy.spatial.distance import pdist,squareform
row_dist = pd.DataFrame(squareform(pdist(df1, metric='euclidean')), columns=labels, index=labels)
row_dist[:5]
# 1. incorrect approach: Squareform distance matrix
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(row_dist, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
# 2. correct approach: Condensed distance matrix
row_clusters = linkage(pdist(df1, metric='euclidean'), method='complete')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
# 3. correct approach: Input sample matrix
row_clusters = linkage(df1.values, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
As shown in the following table, the linkage matrix consists of several rows, each representing one merge. The first and second columns denote the most dissimilar members in each cluster, the third column reports the distance between those members, and the last column gives the count of members in each cluster.
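To make the column layout concrete, here is a tiny sketch on three hypothetical one-dimensional points (an illustration, not part of the original analysis):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

pts = np.array([[0.0], [1.0], [5.0]])
Z = linkage(pdist(pts), method='complete')
# Each row is one merge: [member 1, member 2, distance, cluster size].
# First merge: points 0 and 1 at distance 1.0 (new cluster has 2 members);
# second merge: that cluster (id 3) with point 2 at distance 5.0 (3 members).
print(Z)
```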
Now that we have computed the linkage matrix, we can visualize the results in the form of a dendrogram:
|
from scipy.cluster.hierarchy import dendrogram
# make dendrogram black (part 1/2)
# from scipy.cluster.hierarchy import set_link_color_palette
# set_link_color_palette(['black'])
row_dendr = dendrogram(row_clusters,
labels=labels,
# make dendrogram black (part 2/2)
# color_threshold=np.inf
)
plt.tight_layout()
plt.ylabel('Euclidean distance')
#plt.savefig('./figures/dendrogram.png', dpi=300,
# bbox_inches='tight')
plt.show()
# plot row dendrogram
fig = plt.figure(figsize=(8,8))
axd = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='right')
# reorder data with respect to clustering
df_rowclust = df.iloc[row_dendr['leaves'][::-1]]
axd.set_xticks([])
axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# plot heatmap
axm = fig.add_axes([0.23,0.1,0.6,0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
# plt.savefig('./figures/heatmap.png', dpi=300)
plt.show()
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Applying agglomerative clustering via scikit-learn
|
from sklearn.cluster import AgglomerativeClustering
ac = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='complete')
labels = ac.fit_predict(X)
print('Cluster labels: %s' % labels)
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
from sklearn.cross_validation import train_test_split
X = df[features]
y = df['Target-Lenses']
X_train, X_test, y_train, y_test = train_test_split(X.values, y.values, test_size=0.25, random_state=42)
from sklearn import cluster
clf = cluster.KMeans(init='k-means++', n_clusters=3, random_state=5)
clf.fit(X_train)
print(clf.labels_.shape)
print(clf.labels_)
# Predict clusters on testing data
y_pred = clf.predict(X_test)
from sklearn import metrics
print("Adjusted rand score: {:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print("Homogeneity score: {:.2}".format(metrics.homogeneity_score(y_test, y_pred)))
print("Completeness score: {:.2}".format(metrics.completeness_score(y_test, y_pred)))
print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_pred))
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
|
Affinity Propagation
|
# Affinity propagation
aff = cluster.AffinityPropagation()
aff.fit(X_train)
print(aff.cluster_centers_indices_.shape)
y_pred = aff.predict(X_test)
from sklearn import metrics
print("Adjusted rand score: {:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print("Homogeneity score: {:.2}".format(metrics.homogeneity_score(y_test, y_pred)))
print("Completeness score: {:.2}".format(metrics.completeness_score(y_test, y_pred)))
print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_pred))
# MeanShift
ms = cluster.MeanShift()
ms.fit(X_train)
print(ms.cluster_centers_)
y_pred = ms.predict(X_test)
print("Adjusted rand score: {:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print("Homogeneity score: {:.2}".format(metrics.homogeneity_score(y_test, y_pred)))
print("Completeness score: {:.2}".format(metrics.completeness_score(y_test, y_pred)))
print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_pred))
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Mixture of Gaussian Models
|
from sklearn import mixture
# Define a heldout dataset to estimate covariance type
X_train_heldout, X_test_heldout, y_train_heldout, y_test_heldout = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42)
for covariance_type in ['spherical', 'tied', 'diag', 'full']:
    gm = mixture.GMM(n_components=3, covariance_type=covariance_type, random_state=42, n_init=5)
    gm.fit(X_train_heldout)
    y_pred = gm.predict(X_test_heldout)
    print("Adjusted rand score for covariance={}: {:.2}".format(
        covariance_type, metrics.adjusted_rand_score(y_test_heldout, y_pred)))
gm = mixture.GMM(n_components=3, covariance_type='tied', random_state=42)
gm.fit(X_train)
# Print test clustering and confusion matrix
y_pred = gm.predict(X_test)
print("Adjusted rand score: {:.2}".format(metrics.adjusted_rand_score(y_test, y_pred)))
print("Homogeneity score: {:.2}".format(metrics.homogeneity_score(y_test, y_pred)))
print("Completeness score: {:.2}".format(metrics.completeness_score(y_test, y_pred)))
print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_pred))
|
Miscellaneous/Lenses Data Classification.ipynb
|
Aniruddha-Tapas/Applied-Machine-Learning
|
mit
|
Matplotlib
Introduction
Matplotlib is a library for producing publication-quality figures. mpl (for short) was designed from the beginning to serve two purposes: first, to allow interactive, cross-platform control of figures and plots; and second, to make it very easy to produce static raster or vector graphics files without the need for any GUIs. Furthermore, mpl -- much like Python itself -- gives the developer complete control over the appearance of their plots, while still being very usable through a powerful defaults system.
Online Documentation
The matplotlib.org project website is the primary online resource for the library's documentation. It contains examples, FAQs, API documentation, and, most importantly, the gallery.
Gallery
Many users of Matplotlib are often faced with the question, "I want to make a figure that has X with Y in the same figure, but it needs to look like Z". Good luck getting an answer from a web search with that query! This is why the gallery is so useful, because it showcases the variety of ways one can make figures. Browse through the gallery, click on any figure that has pieces of what you want to see and the code that generated it. Soon enough, you will be like a chef, mixing and matching components to produce your masterpiece!
As always, if you have a new and interesting plot that demonstrates a feature of Matplotlib, feel free to submit a concise, well-commented version of the code for inclusion in the gallery.
Mailing Lists and StackOverflow
When you are just simply stuck, and cannot figure out how to get something to work, or just need some hints on how to get started, you will find much of the community at the matplotlib-users mailing list. This mailing list is an excellent resource of information with many friendly members who just love to help out newcomers. The number one rule to remember with this list is to be persistent. While many questions do get answered fairly quickly, some do fall through the cracks, or the one person who knows the answer isn't available. Therefore, try again with your questions rephrased, or with a plot showing your attempts so far. We love plots, so an image showing what is wrong often gets the quickest responses.
Another community resource is StackOverflow, so if you need to build up karma points, submit your questions here, and help others out too! We are also on Gitter.
Github repository
Location
Matplotlib is hosted by GitHub.
Bug Reports and feature requests
So, you think you found a bug? Or maybe you think some feature is just too difficult to use? Or missing altogether? Submit your bug reports here at Matplotlib's issue tracker. We even have a process for submitting and discussing Matplotlib Enhancement Proposals (MEPs).
Quick note on "backends" and Jupyter notebooks
Matplotlib has multiple backends. The backends allow mpl to be used on a variety of platforms with a variety of GUI toolkits (GTK, Qt, Wx, etc.), all of them written so that most of the time, you will not need to care which backend you are using.
|
import matplotlib
print(matplotlib.__version__)
print(matplotlib.get_backend())
|
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
|
BrainIntensive/OnlineBrainIntensive
|
mit
|
Normally we wouldn't need to think about this too much, but IPython/Jupyter notebooks behave a touch differently than "normal" python.
Inside of IPython, it's often easiest to use the Jupyter nbagg or notebook backend. This allows plots to be displayed and interacted with in the browser in a Jupyter notebook. Otherwise, figures will pop up in a separate GUI window.
We can do this in two ways:
The IPython %matplotlib backend_name "magic" command (or plt.ion(), which behaves similarly)
Figures will be shown automatically by IPython, even if you don't call plt.show().
matplotlib.use("backend_name")
Figures will only be shown when you call plt.show().
Here, we'll use the second method for one simple reason: it allows our code to behave the same way regardless of whether we run it inside of a Jupyter notebook or from a Python script at the command line. Feel free to use the %matplotlib magic command if you'd prefer.
One final note: You will always need to do this before you import matplotlib.pyplot as plt.
|
matplotlib.use('nbagg')
|
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
|
BrainIntensive/OnlineBrainIntensive
|
mit
|
On with the show!
Matplotlib is a large project and can seem daunting at first. However, by learning the components, it should begin to feel much smaller and more approachable.
Anatomy of a "Plot"
People use "plot" to mean many different things. Here, we'll be using a consistent terminology (mirrored by the names of the underlying classes, etc):
<img src="images/figure_axes_axis_labeled.png">
The Figure is the top-level container in this hierarchy. It is the overall window/page that everything is drawn on. You can have multiple independent figures and Figures can contain multiple Axes.
Most plotting occurs on an Axes. The axes is effectively the area that we plot data on and any ticks/labels/etc associated with it. Usually we'll set up an Axes with a call to subplot (which places Axes on a regular grid), so in most cases, Axes and Subplot are synonymous.
Each Axes has an XAxis and a YAxis. These contain the ticks, tick locations, labels, etc. In this tutorial, we'll mostly control ticks, tick labels, and data limits through other mechanisms, so we won't touch the individual Axis part of things all that much. However, it is worth mentioning here to explain where the term Axes comes from.
Getting Started
In this tutorial, we'll use the following import statements. These abbreviations are semi-standardized, and most tutorials and other scientific Python code you'll find elsewhere will use them as well.
|
import numpy as np
import matplotlib.pyplot as plt
|
resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part1-Figures_Subplots_and_layouts.ipynb
|
BrainIntensive/OnlineBrainIntensive
|
mit
|
Overview
Time-series forecasting problems are ubiquitous throughout the business world and can be posed as a supervised machine learning problem.
A common approach to creating features and labels is to use a sliding window where the features are historical entries and the label(s) represent entries in the future. As any data-scientist that works with time-series knows, this sliding window approach can be tricky to get right.
In this notebook we share a workflow to tackle time-series problems.
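The sliding-window idea can be sketched in a few lines of NumPy. This is a simplified stand-in for the helper used later in the notebook, not its actual implementation:

```python
import numpy as np

def sliding_window(series, window_size, horizon):
    """Turn one long series into (features, label) rows.

    Each row of X holds `window_size` consecutive historical values;
    y holds the value `horizon` steps after the end of that window.
    """
    X, y = [], []
    for start in range(len(series) - window_size - horizon + 1):
        end = start + window_size
        X.append(series[start:end])
        y.append(series[end + horizon - 1])
    return np.array(X), np.array(y)

series = np.arange(10)  # 0, 1, ..., 9
X, y = sliding_window(series, window_size=3, horizon=2)
print(X[0], y[0])       # [0 1 2] 4
```

Each feature row is a window of history, and its label is the observation `horizon` steps past the window's end -- getting these offsets right is exactly the tricky part mentioned above.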
Dataset
For this demo we will be using New York City real estate data obtained from nyc.gov. The data starts in 2003. The data can be loaded into BigQuery with the following code:
```python
# Read data. Data was collected from the NYC open data repository.
import pandas as pd
dfr = pd.read_csv('https://storage.googleapis.com/asl-testing/data/nyc_open_data_real_estate.csv')
# Upload to BigQuery.
PROJECT = 'YOUR-PROJECT-HERE'
DATASET = 'nyc_real_estate'
TABLE = 'residential_sales'
dfr.to_gbq('{}.{}'.format(DATASET, TABLE), PROJECT)
```
Objective
The goal of the notebook is to show how to forecast using Pandas and BigQuery. The steps achieved in this notebook are the following:
1. Building a machine learning (ML) forecasting model locally
* Create features and labels on a subsample of data
* Train a model using sklearn
2. Building and scaling out an ML model using Google BigQuery
* Create features and labels on the full dataset using BigQuery
* Train the model on the entire dataset using BigQuery ML
3. Building an advanced forecasting model using a recurrent neural network (RNN)
* Create features and labels on the full dataset using BigQuery
* Train a model using TensorFlow
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
BigQuery
Cloud storage
AI Platform
The BigQuery and Cloud Storage costs are < \$0.05 and the AI Platform training job uses approximately 0.68 ML units or ~\$0.33.
Pandas: Rolling window for time-series forecasting
We have created a Pandas solution, the create_rolling_features_label function, which automatically creates the features/label setup. This is suitable for smaller datasets and for local testing before training in the cloud. We have also created a BigQuery script that creates these rolling windows for large datasets.
Data Exploration
This notebook is self-contained so let's clone the training-data-analyst repo so we can have access to the feature and label creation functions in time_series.py and scalable_time_series.py. We'll be using the pandas_gbq package so make sure that it is installed.
|
!pip3 install pandas-gbq
%%bash
git clone https://github.com/GoogleCloudPlatform/training-data-analyst.git \
--depth 1
cd training-data-analyst/blogs/gcp_forecasting
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
After cloning the above repo we can import pandas and our custom module time_series.py.
|
%matplotlib inline
import pandas as pd
import pandas_gbq as gbq
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
import time_series
# Allows you to easily use Python variables in SQL queries.
from IPython.core.magic import register_cell_magic
from IPython import get_ipython
@register_cell_magic('with_globals')
def with_globals(line, cell):
contents = cell.format(**globals())
if 'print' in line:
print(contents)
get_ipython().run_cell(contents)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
For this demo we use the New York City real estate data described above, obtained from nyc.gov. This public dataset starts in 2003. Load it into BigQuery with the following code:
|
dfr = pd.read_csv('https://storage.googleapis.com/asl-testing/data/nyc_open_data_real_estate.csv')
# Upload to BigQuery.
PROJECT = "[your-project-id]"
DATASET = 'nyc_real_estate'
TABLE = 'residential_sales'
BUCKET = "[your-bucket]" # Used later.
gbq.to_gbq(dfr, '{}.{}'.format(DATASET, TABLE), PROJECT, if_exists='replace')
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Since we are just doing local modeling, let's just use a subsample of the data. Later we will train on all of the data in the cloud.
|
SOURCE_TABLE = TABLE
FILTER = '''residential_units = 1 AND sale_price > 10000
AND sale_date > TIMESTAMP('2010-12-31 00:00:00')'''
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
borough,
neighborhood,
building_class_category,
tax_class_at_present,
block,
lot,
ease_ment,
building_class_at_present,
address,
apartment_number,
zip_code,
residential_units,
commercial_units,
total_units,
land_square_feet,
gross_square_feet,
year_built,
tax_class_at_time_of_sale,
building_class_at_time_of_sale,
sale_price,
sale_date,
price_per_sq_ft
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
ORDER BY
sale_date
LIMIT
100
df.head()
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
neighborhood,
COUNT(*) AS cnt
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
GROUP BY
neighborhood
ORDER BY
cnt
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Most sales come from the Upper West Side, Midtown West, and the Upper East Side.
|
ax = df.set_index('neighborhood').cnt\
.tail(10)\
.plot(kind='barh');
ax.set_xlabel('total sales');
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
SOHO and Civic Center are the most expensive neighborhoods.
|
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
neighborhood,
APPROX_QUANTILES(sale_price, 100)[
OFFSET
(50)] AS median_price
FROM
{SOURCE_TABLE}
WHERE
{FILTER}
GROUP BY
neighborhood
ORDER BY
median_price
ax = df.set_index('neighborhood').median_price\
.tail(10)\
.plot(kind='barh');
ax.set_xlabel('median price')
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Build features
Let's create features for building a machine learning model:
Aggregate median sales for each week. Prices are noisy and by grouping by week, we will smooth out irregularities.
Create a rolling window to split the single long time series into smaller windows. One feature vector will contain a single window and the label will be a single observation (or a window, for multiple predictions) occurring after the window.
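Step 1 can be sketched locally with pandas on toy data (the notebook itself performs this aggregation in BigQuery):

```python
import pandas as pd
import numpy as np

# Toy daily sale prices over four weeks.
idx = pd.date_range('2016-01-01', periods=28, freq='D')
prices = pd.Series(np.arange(28, dtype=float) * 1000, index=idx)

# Aggregate to the weekly median to smooth out noisy daily prices.
weekly_median = prices.resample('W').median()
print(weekly_median.head())
```

The weekly median is far less jumpy than the raw daily prices, which is exactly why the notebook groups by week before windowing.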
|
%%with_globals
%%bigquery --project {PROJECT} df
SELECT
sale_week,
APPROX_QUANTILES(sale_price, 100)[
OFFSET
(50)] AS median_price
FROM (
SELECT
TIMESTAMP_TRUNC(sale_date, week) AS sale_week,
sale_price
FROM
{SOURCE_TABLE}
WHERE
{FILTER})
GROUP BY
sale_week
ORDER BY
sale_week
sales = pd.Series(df.median_price)
sales.index = pd.DatetimeIndex(df.sale_week.dt.date)
sales.head()
ax = sales.plot(figsize=(8,4), label='median_price')
ax = sales.rolling(10).mean().plot(ax=ax, label='10 week rolling average')
ax.legend()
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Sliding window
Let's create our features. We will use the create_rolling_features_label function that automatically creates the features/label setup.
Create the features and labels.
|
WINDOW_SIZE = 52 * 1
HORIZON = 4*6
MONTHS = 0
WEEKS = 1
LABELS_SIZE = 1
df = time_series.create_rolling_features_label(sales, window_size=WINDOW_SIZE, pred_offset=HORIZON)
df = time_series.add_date_features(df, df.index, months=MONTHS, weeks=WEEKS)
df.head()
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Let's train our model using all weekly median prices from 2003 -- 2015. Then we will test the model's performance on prices from 2016 -- 2018.
|
# Features, label.
X = df.drop('label', axis=1)
y = df['label']
# Train/test split. Splitting on time.
train_ix = time_series.is_between_dates(y.index,
end='2015-12-30')
test_ix = time_series.is_between_dates(y.index,
start='2015-12-30',
end='2018-12-30 08:00:00')
X_train, y_train = X.iloc[train_ix], y.iloc[train_ix]
X_test, y_test = X.iloc[test_ix], y.iloc[test_ix]
print(X_train.shape, X_test.shape)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Apply z-score normalization to the features before training.
|
mean = X_train.mean()
std = X_train.std()
def zscore(X):
return (X - mean) / std
X_train = zscore(X_train)
X_test = zscore(X_test)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Initial model
Baseline
Build a naive baseline model that simply predicts the mean of the training set.
|
df_baseline = y_test.to_frame(name='label')
df_baseline['pred'] = y_train.mean()
# Join mean predictions with test labels.
baseline_global_metrics = time_series.Metrics(df_baseline.pred,
df_baseline.label)
baseline_global_metrics.report("Global Baseline Model")
# Train model. Earlier experiments used a random forest and ridge
# regression; the final estimator is gradient boosting.
# cl = RandomForestRegressor(n_estimators=500, max_features='sqrt',
#                            random_state=10, criterion='mse')
# cl = Ridge(alpha=100)
cl = GradientBoostingRegressor()
cl.fit(X_train, y_train)
pred = cl.predict(X_test)
model_metrics = time_series.Metrics(y_test,
                                    pred)
model_metrics.report("Gradient Boosting Model")
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
The ML model performs about 35% better than the baseline model.
Observations:
* Linear regression does okay for this dataset (regularization helps the model generalize)
* Random forest doesn't require a lot of tuning and performs a bit better than regression
* Gradient boosting also does better than regression
Interpret results
|
# Data frame to query for plotting
df_res = pd.DataFrame({'pred': pred, 'baseline': df_baseline.pred, 'y_test': y_test})
metrics = time_series.Metrics(df_res.y_test, df_res.pred)
ax = df_res.iloc[:].plot(y=[ 'pred', 'y_test'],
style=['b-','k-'],
figsize=(10,5))
ax.set_title('rmse: {:2.2f}'.format(metrics.rmse), size=16);
ax.set_ylim(20,)
df_res.head()
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
BigQuery modeling
We have observed there is signal in our data and our smaller, local model works well. Let's scale this out to the cloud by training a BigQuery ML (BQML) model on the full dataset.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
BigQuery is automatically enabled in new projects. To activate BigQuery in a pre-existing project, go to Enable the BigQuery API.
Enter your project ID in the cell below.
Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
BigQuery > BigQuery Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
computer.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below.
Import libraries
Import supporting modules:
|
# Import BigQuery module
from google.cloud import bigquery
# Import external custom module containing SQL queries
import scalable_time_series
# Define hyperparameters
value_name = "med_sales_price"
downsample_size = 7 # 7 days into 1 week
window_size = 52
labels_size = 1
horizon = 1
# Construct a BigQuery client object.
client = bigquery.Client()
# Set dataset_id to the ID of the dataset to create.
sink_dataset_name = "temp_forecasting_dataset"
dataset_id = "{}.{}".format(client.project, sink_dataset_name)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_string(dataset_id)
# Specify the geographic location where the dataset should reside.
dataset.location = "US"
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
try:
dataset = client.create_dataset(dataset) # API request
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
except Exception as e:
print("Dataset {}.{} already exists".format(
client.project, dataset.dataset_id))
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
We need to create a date range table in BigQuery so that we can join our data to that to get the correct sequences.
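What the date-range table buys us can be sketched locally in pandas (toy data; the real work happens in the SQL generated by scalable_time_series.py): build a complete weekly calendar, then left-join the sparse sales onto it so that weeks with no sales still occupy a slot in every rolling window.

```python
import pandas as pd

# Sparse observed sales: some weeks are missing entirely.
sales = pd.DataFrame({
    'week': pd.to_datetime(['2016-01-03', '2016-01-17']),
    'median_price': [500000.0, 520000.0],
})

# Complete weekly calendar covering the full span.
calendar = pd.DataFrame(
    {'week': pd.date_range('2016-01-03', '2016-01-24', freq='W')})

# Left-join so every week is present; missing weeks become NaN.
full = calendar.merge(sales, on='week', how='left')
print(full)
```

Without the calendar join, a rolling window over the raw table would silently skip the missing weeks and misalign the sequences.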
|
# Call BigQuery and examine in dataframe
source_dataset = "nyc_real_estate"
source_table_name = "all_sales"
query_create_date_range = scalable_time_series.create_date_range(
client.project, source_dataset, source_table_name)
df = client.query(query_create_date_range + " LIMIT 100").to_dataframe()
df.head(5)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Execute query and write to BigQuery table.
|
job_config = bigquery.QueryJobConfig()
# Set the destination table
table_name = "start_end_timescale_date_range"
table_ref = client.dataset(sink_dataset_name).table(table_name)
job_config.destination = table_ref
job_config.write_disposition = "WRITE_TRUNCATE"
# Start the query, passing in the extra configuration.
query_job = client.query(
query=query_create_date_range,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Now that we have the date range table created we can create our training dataset for BQML.
|
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
query_bq_sub_sequences = scalable_time_series.bq_create_rolling_features_label(
client.project, sink_dataset_name, table_name, sales_dataset_table,
value_name, downsample_size, window_size, horizon, labels_size)
print(query_bq_sub_sequences[0:500])
%%with_globals
%%bigquery --project $PROJECT
{query_bq_sub_sequences}
LIMIT 100
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Create BigQuery dataset
Prior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need somewhere to put it. In BigQuery parlance, a Dataset is a folder for tables.
We will take advantage of BigQuery's Python Client to create the dataset.
|
bq = bigquery.Client(project = PROJECT)
dataset = bigquery.Dataset(bq.dataset("bqml_forecasting"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created")
except Exception:
print("Dataset already exists")
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Split dataset into a train and eval set.
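The split below hashes a stable key per row with FARM_FINGERPRINT, takes it mod 100, and routes buckets below 80 to training. The same idea can be mimicked locally; this sketch uses MD5 instead of FarmHash, so the resulting buckets differ from BigQuery's:

```python
import hashlib

def hash_bucket(key, num_buckets=100):
    """Deterministically map a string key to a bucket in [0, num_buckets)."""
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets

# Hypothetical keys: feature-window start date joined with label end date.
rows = ['2016-01-03|2016-07-03', '2016-01-10|2016-07-10', '2016-01-17|2016-07-17']
train = [r for r in rows if hash_bucket(r) < 80]
eval_ = [r for r in rows if hash_bucket(r) >= 80]
print(len(train), len(eval_))
```

Hash-based splitting is repeatable: the same row always lands in the same split, no matter how many times (or on how many machines) the query runs.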
|
feature_list = ["price_ago_{time}".format(time=time)
for time in range(window_size, 0, -1)]
label_list = ["price_ahead_{time}".format(time=time)
for time in range(1, labels_size + 1)]
select_list = ",".join(feature_list + label_list)
select_string = "SELECT {select_list} FROM ({query})".format(
select_list=select_list,
query=query_bq_sub_sequences)
concat_vars = []
concat_vars.append("CAST(feat_seq_start_date AS STRING)")
concat_vars.append("CAST(lab_seq_end_date AS STRING)")
farm_finger = "FARM_FINGERPRINT(CONCAT({concat_vars}))".format(
concat_vars=", ".join(concat_vars))
sampling_clause = "ABS(MOD({farm_finger}, 100))".format(
farm_finger=farm_finger)
bqml_train_query = "{select_string} WHERE {sampling_clause} < 80".format(
select_string=select_string, sampling_clause=sampling_clause)
bqml_eval_query = "{select_string} WHERE {sampling_clause} >= 80".format(
select_string=select_string, sampling_clause=sampling_clause)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Create model
To create a model:
1. Use CREATE MODEL and provide a destination table for the resulting model. Alternatively, we can use CREATE OR REPLACE MODEL, which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data.
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete.
|
%%with_globals
%%bigquery --project $PROJECT
CREATE or REPLACE MODEL bqml_forecasting.nyc_real_estate
OPTIONS(model_type = "linear_reg",
input_label_cols = ["price_ahead_1"]) AS
{bqml_train_query}
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Get training statistics
Because the query uses a CREATE MODEL statement to create a table, you do not see query results. The output is an empty string.
To get the training results we use the ML.TRAINING_INFO function.
Have a look at Step Three and Four of this tutorial to see a similar example.
|
%%bigquery --project $PROJECT
SELECT
  *
FROM
ML.TRAINING_INFO(MODEL `bqml_forecasting.nyc_real_estate`)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
'eval_loss' is reported as mean squared error, so taking the square root gives an RMSE of about 291178. Your results may vary.
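Converting BQML's reported mean squared error to RMSE is just a square root (the eval_loss value below is hypothetical, chosen to match the figure quoted above):

```python
import math

eval_loss = 84784627684.0  # hypothetical mean squared error from ML.EVALUATE
rmse = math.sqrt(eval_loss)
print(round(rmse))  # 291178
```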
|
%%with_globals
%%bigquery --project $PROJECT
#standardSQL
SELECT
  *
FROM
ML.EVALUATE(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Predict
To use our model to make predictions, we use ML.PREDICT. Let's use the nyc_real_estate model trained above to predict the median sales price on our evaluation data.
Have a look at Step Five of this tutorial to see another example.
|
%%with_globals
%%bigquery --project $PROJECT df
#standardSQL
SELECT
predicted_price_ahead_1
FROM
ML.PREDICT(MODEL `bqml_forecasting.nyc_real_estate`, ({bqml_eval_query}))
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
TensorFlow Sequence Model
If you want a more customized model, Keras or TensorFlow may be helpful. Below we create a custom LSTM sequence-to-one model that reads the input data from CSV files, then trains and evaluates.
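Before reaching for TensorFlow, it helps to pin down the shape an LSTM consumes: (batch, timesteps, features). A NumPy-only sketch of reshaping CSV-style rows (window_size prices followed by labels_size labels) into that layout -- the variable names mirror the notebook's, but this is an illustration, not the repo's actual input pipeline:

```python
import numpy as np

window_size, labels_size = 52, 1
batch = 4

# Each CSV row: window_size historical prices, then labels_size labels.
rows = np.random.rand(batch, window_size + labels_size)

features = rows[:, :window_size]
labels = rows[:, window_size:]

# LSTMs consume 3-D input: (batch, timesteps, features-per-step).
seq_input = features.reshape(batch, window_size, 1)
print(seq_input.shape, labels.shape)  # (4, 52, 1) (4, 1)
```

With a single scalar per time step, the trailing features dimension is 1; multivariate inputs (e.g. price plus sales count) would widen it.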
Create temporary BigQuery dataset
|
# Construct a BigQuery client object.
client = bigquery.Client()
# Set dataset_id to the ID of the dataset to create.
sink_dataset_name = "temp_forecasting_dataset"
dataset_id = "{}.{}".format(client.project, sink_dataset_name)
# Construct a full Dataset object to send to the API.
dataset = bigquery.Dataset.from_string(dataset_id)
# Specify the geographic location where the dataset should reside.
dataset.location = "US"
# Send the dataset to the API for creation.
# Raises google.api_core.exceptions.Conflict if the Dataset already
# exists within the project.
try:
dataset = client.create_dataset(dataset) # API request
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
except Exception:
print("Dataset {}.{} already exists".format(
client.project, dataset.dataset_id))
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Now that we have the date range table created we can create our training dataset.
|
# Call BigQuery and examine in dataframe
sales_dataset_table = source_dataset + "." + source_table_name
downsample_size = 7
query_csv_sub_seqs = scalable_time_series.csv_create_rolling_features_label(
client.project, sink_dataset_name, table_name, sales_dataset_table,
value_name, downsample_size, window_size, horizon, labels_size)
df = client.query(query_csv_sub_seqs + " LIMIT 100").to_dataframe()
df.head(20)
|
blogs/gcp_forecasting/gcp_time_series_forecasting.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|