# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
```
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to `batch_size`. Each row will be one long concatenated sequence from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword argument. This will keep 90% of the batches in the training set and the other 10% in the validation set.
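Here's a tiny standalone sketch of that input/target layout, using a made-up array of 23 "characters" instead of the book text:

```python
import numpy as np

# Hypothetical tiny example: 23 encoded characters, batch_size=2, num_steps=5
chars = np.arange(23)
batch_size, num_steps = 2, 5

slice_size = batch_size * num_steps
n_batches = len(chars) // slice_size   # 23 // 10 -> 2 full batches

# Inputs, and targets shifted one character over; drop the leftover tail
x = chars[:n_batches * slice_size]
y = chars[1:n_batches * slice_size + 1]

# Stack into batch_size rows
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))

print(x.shape)   # (2, 10)
print(x[0])      # [0 1 2 3 4 5 6 7 8 9]
print(y[0])      # [ 1  2  3  4  5  6  7  8  9 10]
```

Each target row is the matching input row shifted one step left, which is exactly what the network needs to learn next-character prediction.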
```
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of sequences per batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
    split_idx = int(n_batches*split_frac)
    train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
```
I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size` x `num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window over by `num_steps` characters. In this way we can feed batches to the network, and the cell states will carry over from one batch to the next.
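A toy version of this sliding window (the helper name `window_batches` is just for illustration):

```python
import numpy as np

def window_batches(arr, num_steps):
    """Yield consecutive windows of num_steps columns from a 2D array."""
    n_batches = arr.shape[1] // num_steps
    for b in range(n_batches):
        yield arr[:, b*num_steps:(b+1)*num_steps]

data = np.arange(24).reshape(2, 12)   # batch_size=2, 12 steps per row
windows = list(window_batches(data, num_steps=4))

print(len(windows))   # 3 windows
print(windows[0])     # columns 0-3: [[0 1 2 3], [12 13 14 15]]
print(windows[1])     # columns 4-7
```

Each yielded window picks up exactly where the previous one left off, so feeding them in order keeps the sequences contiguous across batches.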
```
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
    if sampling:
        batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_cells"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
tf.summary.histogram('softmax_w', softmax_w)
tf.summary.histogram('softmax_b', softmax_b)
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
tf.summary.histogram('predictions', preds)
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
tf.summary.scalar('cost', cost)
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
merged = tf.summary.merge_all()
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
```
## Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
```
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
```
## Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. Every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
```
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
test_writer = tf.summary.FileWriter('./logs/2/test')
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
train_writer.add_summary(summary, iteration)
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
summary, batch_loss, new_state = sess.run([model.merged, model.cost,
model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
test_writer.add_summary(summary, iteration)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
#saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character and the network predicts the next character. Then we feed that new character back in to get the next prediction, and keep going to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
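The idea, on a made-up softmax output with `top_n=2` (the same zero-and-renormalize trick used in `pick_top_n` below):

```python
import numpy as np

np.random.seed(1)
preds = np.array([0.02, 0.40, 0.05, 0.30, 0.03, 0.20])  # fake softmax output
top_n = 2

p = preds.copy()
p[np.argsort(p)[:-top_n]] = 0   # zero out everything but the top_n probabilities
p = p / np.sum(p)               # renormalize so the survivors sum to 1

print(p)                        # only indices 1 and 3 remain nonzero
c = np.random.choice(len(p), p=p)
print(c)                        # sampled index is always 1 or 3
```

Sampling from the renormalized distribution keeps some randomness while ruling out the long tail of unlikely characters.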
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
# Exploring data to calculate bunches and gaps
The first half of this notebook is the exploratory code I used to find this method in the first place. The second half is a cleaned version that can be run on its own.
## Initialize database connection
So I can load the data I'll need
```
import getpass
import psycopg2 as pg
import pandas as pd
import pandas.io.sql as sqlio
import plotly.express as px
# Enter database credentials. Requires you to paste in the user and
# password so it isn't saved in the notebook file
print("Enter database username:")
user = getpass.getpass()
print("Enter database password:")
password = getpass.getpass()
creds = {
'user': user,
'password': password,
'host': "lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com",
'dbname': "historicalTransitData"
}
# Set up connection to database
cnx = pg.connect(**creds)
cursor = cnx.cursor()
print('\nDatabase connection successful')
```
## Load route configuration and schedule data using custom classes
Allows me to build on previous work
```
# Change this one cell and run all to change to a different route/day
rid = "1"
begin = "2020/6/1 07:00:00"
end = "2020/6/2 07:00:00" # non-inclusive
# Load in the other classes I wrote (I uploaded class files to colab manually for this)
from schedule import Schedule
from route import Route
# Day is not working right. We currently have the same data for all days available, so this will still work
# I will investigate why my class isn't pulling data right later, it worked before...
day = "2020-5-20"
schedule = Schedule(rid, day, cnx)
route = Route(rid, day, cnx)
# One thing I was after there
schedule.inbound_table.head(10)
# I didn't have the stops functionality complete in my class, so I'll write it here
# Read in route configuration data
stops = pd.DataFrame(route.route_data['stop'])
directions = pd.DataFrame(route.route_data['direction'])
# Change stop arrays to just the list of numbers
for i in range(len(directions)):
directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]
# Find which stops are inbound or outbound
inbound = []
for stop_list in directions[directions['name'] == "Inbound"]['stop']:
for stop in stop_list:
if stop not in inbound:
inbound.append(stop)
outbound = []
for stop_list in directions[directions['name'] == "Outbound"]['stop']:
for stop in stop_list:
        if stop not in outbound:
outbound.append(stop)
# Label each stop as inbound or outbound
stops['direction'] = ['none'] * len(stops)
for i in range(len(stops)):
if stops.at[i, 'tag'] in inbound:
stops.at[i, 'direction'] = 'inbound'
elif stops.at[i, 'tag'] in outbound:
stops.at[i, 'direction'] = 'outbound'
# Convert from string to float
stops['lat'] = stops['lat'].astype(float)
stops['lon'] = stops['lon'].astype(float)
stops.head()
# Combine with the schedule class
# Add a boolean that is true if the stop is also in the schedule table (not all stops get scheduled)
scheduled_stops = list(schedule.inbound_table) + list(schedule.outbound_table)
stops['scheduled'] = [(tag in scheduled_stops) for tag in stops['tag']]
stops.head()
```
## Load bus data for the day
I'm directly accessing the database for this
```
# Build query to select location data
query = f"""
SELECT *
FROM locations
WHERE rid = '{rid}' AND
timestamp > '{begin}'::TIMESTAMP AND
timestamp < '{end}'::TIMESTAMP
ORDER BY id;
"""
# read the query directly into pandas
locations = sqlio.read_sql_query(query, cnx)
locations.head()
# Convert those UTC timestamps to local Pacific time (UTC-7) by subtracting 7 hours
locations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)
locations.head()
locations.tail()
```
## Method exploration: bus trips
Can we break bus location data into trips? And will it match the number of scheduled trips?
### Conclusion
After exploring with the code below, I can't use trips in the way I had hoped. There is too much inconsistency, it was actually hard to find one that showed data for the entire inbound or outbound trip. And of course the number of observed trips doesn't match the number of scheduled trips.
```
print("Number of inbound trips:", len(schedule.inbound_table))
print("Number of outbound trips:", len(schedule.outbound_table))
# Remove all rows where direction is empty (I may review this step later)
locations_cleaned = locations[~pd.isna(locations['direction'])]
# Start simple, use just 1 vid
locations['vid'].value_counts()[:5]
# trips will be stored as a list of numbers, row indices matching the above table
trips = pd.DataFrame(columns=['direction', 'vid', 'start', 'ids', 'count'])
# perform search separately for each vid
for vid in locations_cleaned['vid'].unique():
# filter to just that vid
df = locations_cleaned[locations_cleaned['vid'] == vid]
# Start with the first direction value
direction = df.at[df.index[0], 'direction']
current_trip = []
for i in df.index:
if df.at[i, 'direction'] == direction:
# same direction, add to current trip
current_trip.append(i)
else:
# changed direction, append current trip to the dataframe of trips
trips = trips.append({'direction': direction,
'vid': vid,
'start': current_trip[0],
'ids': current_trip,
'count': len(current_trip)}, ignore_index=True)
# and reset variables for the next one
current_trip = [i]
direction = df.at[i, 'direction']
# Sort trips to run chronologically
trips = trips.sort_values('start').reset_index(drop=True)
print(trips.shape)
trips.head()
```
I saw at the beginning of this exploration that there were 316 trips on the schedule. So 286 found doesn't cover all of it. Additionally some trips have few data points and will be cut short.
```
# needed to filter properly
locations_cleaned = locations_cleaned.reset_index()
# Plot a trip on the map to explore visually
# plot the route path
fig = px.line_mapbox(route.path_coords, lat=0, lon=1)
# filter to just 1 trip
filtered = [i in trips.at[102, 'ids'] for i in locations_cleaned['index']]
# plot the trip dots
fig.add_trace(px.scatter_mapbox(locations_cleaned[filtered], lat='latitude',
lon='longitude', text='timestamp').data[0])
fig.update_layout(title=f"Route {rid}", mapbox_style="stamen-terrain", mapbox_zoom=12)
fig.show()
```
## Assigning Stops
The core objective here is to figure out when buses are at each stop. Since tracking this based on bus location is so inconsistent, I'm going to try turning it around.
I will calculate which stop each bus is at for each location report, then switch viewpoints and analyze what times each stop sees a bus.
```
# I'll need an efficient method to get distance between lat/lon coordinates
# Austie's notebooks already had this:
from math import sqrt, cos, radians
def fcc_projection(loc1, loc2):
    """
    function to apply FCC recommended formulae
    for calculating distances on earth projected to a plane
    significantly faster computationally, negligible loss in accuracy
    Args:
    loc1 - a tuple of lat/lon
    loc2 - a tuple of lat/lon
    """
    lat1, lat2 = loc1[0], loc2[0]
    lon1, lon2 = loc1[1], loc2[1]
    mean_lat = (lat1+lat2)/2
    delta_lat = lat2 - lat1
    delta_lon = lon2 - lon1
    # the cosine terms in the FCC formula take angles in degrees,
    # so convert to radians before calling math.cos
    k1 = 111.13209 - 0.56605*cos(radians(2*mean_lat)) + .0012*cos(radians(4*mean_lat))
    k2 = 111.41513*cos(radians(mean_lat)) - 0.09455*cos(radians(3*mean_lat)) + 0.00012*cos(radians(5*mean_lat))
    distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)
    return distance
locations.head()
# We get data every 60 seconds, so I'll drop any that are more than
# 60 seconds old. Those would end up as duplicates otherwise.
df = locations[locations['age'] < 60].copy()
print("Old rows removed:", len(locations)-len(df), "out of", len(locations))
# Again, I'll remove rows with no direction so we are only tracking
# buses that report they are going one way or the other
df = df[~pd.isna(df['direction'])]
# Since these reports are a few seconds old, I'm going to shift the timestamps
# so that they match when the vehicle was actually at that location
def shift_timestamp(row):
""" subtracts row['age'] from row['timestamp'] """
return row['timestamp'] - pd.Timedelta(seconds=row['age'])
df['timestamp'] = df.apply(shift_timestamp, axis=1)
df.head()
# For stop comparison, I'll make lists of all inbound or outbound stops
inbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)
outbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)
inbound_stops.head()
%%time
# initialize columns for efficiency
df['closestStop'] = [0] * len(df)
df['distance'] = [0.0] * len(df)
for i in df.index:
if '_I_' in df.at[i, 'direction']:
candidates = inbound_stops.copy()
elif '_O_' in df.at[i, 'direction']:
candidates = outbound_stops.copy()
else:
# Skip row if bus is not found to be either inbound or outbound
continue
bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])
# Find closest stop within candidates
# Assume the first stop
closest = candidates.iloc[0]
distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))
# Check each stop after that
for j in range(1, len(candidates)):
# find distance to this stop
dist = fcc_projection(bus_coord, (candidates.iloc[j]['lat'],
candidates.iloc[j]['lon']))
if dist < distance:
# closer stop found, save it
closest = candidates.iloc[j]
distance = dist
# Save the tag of the closest stop and the distance to it
df.at[i, 'closestStop'] = closest['tag']
df.at[i, 'distance'] = distance
```
There are 48 stops on both the inbound and outbound routes, and 10445 location reports for the day after removing any with no direction.
So the code cell above checked 501,360 combinations of bus locations and stops.
Good candidate for optimization.
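One way to cut that cost is to broadcast the distance computation with NumPy instead of looping in Python. This is only a sketch under simplifying assumptions: the helper `nearest_stops` is hypothetical, and it fixes the FCC scale factors at one mean latitude for the whole city rather than recomputing them per bus/stop pair (a reasonable approximation at San Francisco's scale):

```python
import numpy as np

def nearest_stops(bus_lat, bus_lon, stop_lat, stop_lon, mean_lat_deg=37.76):
    """Vectorized nearest-stop search: returns (stop index, distance in km) per bus.

    Uses a flat-earth approximation with fixed km-per-degree scale factors,
    so all bus/stop pairs are compared in one broadcasted operation.
    """
    phi = np.radians(mean_lat_deg)
    k1 = 111.13209 - 0.56605*np.cos(2*phi) + 0.00120*np.cos(4*phi)   # km per deg lat
    k2 = 111.41513*np.cos(phi) - 0.09455*np.cos(3*phi) + 0.00012*np.cos(5*phi)  # km per deg lon
    # (n_buses, 1) minus (n_stops,) broadcasts to an (n_buses, n_stops) grid
    d2 = (k1*(bus_lat[:, None] - stop_lat))**2 + (k2*(bus_lon[:, None] - stop_lon))**2
    idx = d2.argmin(axis=1)
    return idx, np.sqrt(d2[np.arange(len(idx)), idx])

# Hypothetical coordinates: two buses, three stops
stop_lat = np.array([37.70, 37.75, 37.80])
stop_lon = np.array([-122.40, -122.42, -122.44])
idx, dist = nearest_stops(np.array([37.71, 37.79]), np.array([-122.40, -122.44]),
                          stop_lat, stop_lon)
print(idx)    # [0 2]
```

The inbound/outbound split would still need two calls (one per candidate list), but each call replaces the whole inner loop over stops.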
```
print(df.shape)
df.head(10)
import matplotlib.pyplot as plt
import seaborn as sns
fig, ax = plt.subplots()
fig.set_size_inches(16, 16)
# All vehicles
sns.scatterplot(x=df['timestamp'],
y=df['closestStop'].astype(str))
# Single vehicle plot
# vid = 5798
# sns.scatterplot(x=df[df['vid']==vid]['timestamp'],
# y=df[df['vid']==vid]['closestStop'].astype(str),
# hue=df[df['vid']==vid]['direction'])
# sort Y axis by the order stops should visited in
stoplist = list(inbound_stops['tag']) + list(outbound_stops['tag'])
plt.yticks(ticks=stoplist, labels=stoplist)
plt.show()
```
## Calculating when each bus is at each stop
General steps:
1. Find which stop is closest to each reported location (done above)
- This step could use improvement. Using the closest stop will be a good approximation, but we could get more accurate by taking speed, distance, or maybe heading into account.
2. Fill in times for stops in between each location report (interpolation)
- 
- Possible improvement: the current approach assumes equal distances between stops
3. Save those times for each stop, to get a list of all times each stop sees a bus
4. Analyze that list of times the stop sees a bus to get the bunches and gaps
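Step 2 in isolation: suppose a bus reported at stop index 2 at 10:00:00 and next at stop index 5 at 10:03:00. The two skipped stops get evenly spaced times, matching the equal-spacing assumption noted above:

```python
import pandas as pd

t_prev = pd.Timestamp('2020-06-01 10:00:00')
t_curr = pd.Timestamp('2020-06-01 10:03:00')
previous, current = 2, 5        # stop indices in route order

gap = current - previous        # 3 intervals between the two reports
diff = (t_curr - t_prev) / gap  # one minute per interval

interpolated = [t_prev + counter*diff for counter in range(1, gap)]
print(interpolated)
# stop 3 -> 10:01:00, stop 4 -> 10:02:00
```

This is the same `gap`/`diff`/`counter` arithmetic the full loop below applies per vehicle.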
```
# Slimming down to just 1 vehicle so I can test faster
df_full = df.copy()
df = df[df['vid'] == 5797]
# This is the first inbound group
print(df.shape)
df.head(12)
# The inbound and outbound lists have stops in the correct order
inbound[:5]
# example, index of a stop
inbound.index('4016')
# example, interpolate the time between stops at rows 0 and 1
df.at[0, 'timestamp'] + (df.at[1, 'timestamp'] - df.at[0, 'timestamp'])/2
%%time
# Initialize the data structure I will store results in
# Dict with the stop tag as a key, and a list of timestamps
stop_times = {}
for stop in inbound + outbound:
stop_times[str(stop)] = []
for vid in df_full['vid'].unique():
# Process the route one vehicle at a time
df = df_full[df_full['vid'] == vid]
# process 1st row on its own
prev_row = df.loc[df.index[0]]
stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])
# loop through the rest of the rows, comparing each to the previous one
for i, row in df[1:].iterrows():
if row['direction'] != prev_row['direction']:
# changed directions, don't compare to previous row
stop_times[str(row['closestStop'])].append(row['timestamp'])
else:
# same direction, compare to previous
if '_I_' in row['direction']: # get correct stop list
stoplist = inbound
else:
stoplist = outbound
current = stoplist.index(str(row['closestStop']))
previous = stoplist.index(str(prev_row['closestStop']))
gap = current - previous
if gap > 1: # need to interpolate
diff = (row['timestamp'] - prev_row['timestamp'])/gap
counter = 1
for stop in stoplist[previous+1:current]:
# save interpolated time
stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))
# print('interpolated time for stop', stop)
# increase counter for the next stop
# example: with 2 interpolated stops, gap would be 3
# 1st diff is 1/3, next is 2/3
counter += 1
if row['closestStop'] != prev_row['closestStop']:
# only save time if the stop has changed,
# otherwise the bus hasn't moved since last time
stop_times[str(row['closestStop'])].append(row['timestamp'])
# advance for next row
prev_row = row
# meta-analysis
total = 0
for key in stop_times.keys():
total += len(stop_times[key])
print(f"Stop {key} recorded {len(stop_times[key])} timestamps")
print(f"\n{total} total timestamps were recorded, meaning {total - len(df_full)} were interpolated")
```
## Finding bunches and gaps
Now that we have the lists of each time a bus was at each stop, we can go through those lists to check if the times were long or short enough to be bunches or gaps.
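Condensed to a single stop with made-up times, the classification works like this (assuming a hypothetical 10-minute expected headway and the same 20%/150% thresholds):

```python
import pandas as pd

headway = 10 * 60                 # expected headway in seconds
bunch_threshold = headway * 0.2   # <= 2 minutes between buses -> bunch
gap_threshold = headway * 1.5     # >= 15 minutes between buses -> gap

times = pd.to_datetime(['2020-06-01 09:00', '2020-06-01 09:01',
                        '2020-06-01 09:12', '2020-06-01 09:30'])

labels = []
for prev, curr in zip(times, times[1:]):
    diff = (curr - prev).seconds
    if diff <= bunch_threshold:
        labels.append('bunch')
    elif diff >= gap_threshold:
        labels.append('gap')
    else:
        labels.append('ok')
print(labels)   # ['bunch', 'ok', 'gap']
```

Each interval is judged only by the time since the previous bus, which is all the stop-centric viewpoint needs.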
```
# Expected headway (in minutes)
schedule.common_interval
%%time
# sort every array so that the times are in order
for stop in stop_times.keys():
stop_times[stop].sort()
%%time
# Initialize dataframe for the bunches and gaps
problems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])
counter = 0
# Set the bunch/gap thresholds (in seconds)
bunch_threshold = (schedule.common_interval * 60) * .2 # 20%
gap_threshold = (schedule.common_interval * 60) * 1.5 # 150%
for stop in stop_times.keys():
# ensure we have a time at all for this stop
if len(stop_times[stop]) == 0:
print(f"Stop {stop} had no recorded times")
continue # go to next stop in the loop
# save initial time
prev_time = stop_times[stop][0]
# loop through all others, comparing to the previous one
for time in stop_times[stop][1:]:
diff = (time - prev_time).seconds
if diff <= bunch_threshold:
# bunch found, save it
problems.at[counter] = ['bunch', prev_time, diff, stop]
counter += 1
elif diff >= gap_threshold:
problems.at[counter] = ['gap', prev_time, diff, stop]
counter += 1
prev_time = time
# type - 'bunch' or 'gap'
# time - the start time of the problem interval
# duration - the length of the interval, in seconds
# stop - the stop this interval was recorded at
problems[problems['type'] == 'gap']
problems[problems['type'] == 'bunch']
# how many is that?
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
intervals = total-len(stop_times)
print(f"Out of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps")
print(f"\t{(bunches/intervals)*100 : .2f}% bunched")
print(f"\t{(gaps/intervals)*100 : .2f}% gapped")
```
## On time percentage
We can also use the list of stop times to calculate on-time percentage. For each expected time, did we have a bus there on time?
SFMTA defines "on time" as "within four minutes late or one minute early of the scheduled arrival time"
- source: https://www.sfmta.com/reports/muni-time-performance
We don't have enough precision in our data to distinguish arrivals and departures from every specific stop, so our results may not match up exactly with SFMTA's reported results. That website reports monthly statistics, but not daily.
These approximations also do not have info on which bus is supposed to be which trip. This code does not distinguish between early or late if a scheduled stop was missed, because we do not know if the bus before or after was supposed to make that stop.
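The on-time window test itself is simple. A reduced illustration with made-up times (the real code below also has to pick which observed time to compare against each scheduled one):

```python
import pandas as pd

expected = pd.Timestamp('2020-06-01 08:30:00')
early_threshold = pd.Timedelta(minutes=1)   # up to 1 minute early is on time
late_threshold = pd.Timedelta(minutes=4)    # up to 4 minutes late is on time

def is_on_time(observed, expected=expected):
    """True if observed falls in the [expected - 1 min, expected + 4 min] window."""
    return (expected - early_threshold) <= observed <= (expected + late_threshold)

print(is_on_time(pd.Timestamp('2020-06-01 08:29:30')))   # True  (30 s early)
print(is_on_time(pd.Timestamp('2020-06-01 08:33:59')))   # True  (just under 4 min late)
print(is_on_time(pd.Timestamp('2020-06-01 08:28:00')))   # False (2 min early)
```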
```
# Remind myself what I'm working with
schedule.inbound_table.head(10)
# What times did we actually see?
stop_times['4026'][:10]
# Save schedules with timestamp data types, set date to match
example = df_full['timestamp'][0]
inbound_times = schedule.inbound_table
for col in inbound_times.columns:
inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(
lambda dt: dt.replace(year=example.year,
month=example.month,
day=example.day))
outbound_times = schedule.outbound_table
for col in outbound_times.columns:
outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(
lambda dt: dt.replace(year=example.year,
month=example.month,
day=example.day))
%%time
# This performs a sequential search to find observed times that match
# expected times. Could switch to a binary search to improve speed if needed.
def count_on_time(expected_times, observed_times):
""" Returns the number of on-time stops found """
# set up early/late thresholds (in seconds)
early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early
late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late
count = 0
for stop in expected_times.columns:
for expected in expected_times[stop]:
if pd.isna(expected):
continue # skip NaT values in the expected schedule
# for each expected time...
# find first observed time after the early threshold
found_time = None
early = expected - early_threshold
for observed in observed_times[stop]:
if observed >= early:
found_time = observed
break
# if found time is still None, then all observed times were too early
# if found_time is before the late threshold then we were on time
if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):
# found_time is within the on-time window
count += 1
return count
# count times for both inbound and outbound schedules
on_time_count = (count_on_time(inbound_times, stop_times) +
count_on_time(outbound_times, stop_times))
# get total expected count
total_expected = inbound_times.count().sum() + outbound_times.count().sum()
# and there we have it, the on-time percentage
print(f"Found {on_time_count} on time stops out of {total_expected} scheduled")
print(f"On-time percentage is {(on_time_count/total_expected)*100 : .2f}%\n")
```
## Bunches and Gaps Line Graph
```
problems.head()
index = pd.DatetimeIndex(problems['time'])
df = problems.copy()
df.index = index
df.head()
# How to select rows between a time
interval = pd.Timedelta(minutes=10)
select_start = pd.to_datetime('2020/6/8 11:00:00')
select_end = select_start + interval
df.between_time(select_start.time(), select_end.time())
import matplotlib.pyplot as plt
def draw_bunch_gap_graph(problems, interval=5):
"""
plots a graph of the bunches and gaps throughout the day
problems - the dataframe of bunches and gaps
interval - the number of minutes to bin data into
"""
# generate the DatetimeIndex needed
index = pd.DatetimeIndex(problems['time'])
df = problems.copy()
df.index = index
# lists for graph data
bunches = []
gaps = []
times = []
    # set the time interval and selection times
    interval = pd.Timedelta(minutes=interval)
    start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)
    select_start = start_date
    select_end = select_start + interval
while select_start.day == start_date.day:
# get the count of each type of problem in this time interval
count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()
# append the counts to the data list
if 'bunch' in count.index:
bunches.append(count['bunch'])
else:
bunches.append(0)
if 'gap' in count.index:
gaps.append(count['gap'])
else:
gaps.append(0)
# save the start time for the x axis
times.append(str(select_start.time())[:5])
# increment the selection window
select_start += interval
select_end += interval
# draw the graph
fig, ax = plt.subplots()
fig.set_size_inches(16, 6)
plt.plot(times, bunches, label='Bunches')
plt.plot(times, gaps, label='Gaps')
plt.title(f"Bunches and Gaps for route {rid} on {str(pd.to_datetime(begin).date())}")
ax.xaxis.set_major_locator(plt.MaxNLocator(24)) # limit number of ticks on x-axis
plt.legend()
plt.show()
draw_bunch_gap_graph(problems, interval=5)
draw_bunch_gap_graph(problems, interval=10)
```
# Cleaning code
The above code achieves what we're after: it finds the number of bunches and gaps in a given route on a given day, and also calculates the on-time percentage.
The code below can be run on its own, and does not include any of the exploratory bits.
## Run Once
```
# Used in many places
import psycopg2 as pg
import pandas as pd
# Used to enter database credentials without saving them to the notebook file
import getpass
# Used to easily read in bus location data
import pandas.io.sql as sqlio
# Only used in the schedule class definition
import numpy as np
from scipy import stats
# Used in the fcc_projection function to find distances
from math import sqrt, cos
# Used at the end, to convert the final product to JSON
import json
# Enter database credentials. Requires you to paste in the user and
# password so it isn't saved in the notebook file
print("Enter database username:")
user = getpass.getpass()
print("Enter database password:")
password = getpass.getpass()
creds = {
'user': user,
'password': password,
'host': "lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com",
'dbname': "historicalTransitData"
}
# Set up connection to database
cnx = pg.connect(**creds)
cursor = cnx.cursor()
print('\nDatabase connection successful')
# Schedule class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Schedule:
def __init__(self, route_id, date, connection):
"""
The Schedule class loads the schedule for a particular route and day,
and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the schedule for that date and route
self.route_data = load_schedule(self.route_id, self.date, connection)
# process data into a table
self.inbound_table, self.outbound_table = extract_schedule_tables(self.route_data)
# calculate the common interval values
self.mean_interval, self.common_interval = get_common_intervals(
[self.inbound_table, self.outbound_table])
def list_stops(self):
"""
returns the list of all stops used by this schedule
"""
# get stops for both inbound and outbound routes
inbound = list(self.inbound_table.columns)
outbound = list(self.outbound_table.columns)
# convert to set to ensure no duplicates,
# then back to list for the correct output type
return list(set(inbound + outbound))
def get_specific_interval(self, stop, time, inbound=True):
"""
Returns the expected interval, in minutes, for a given stop and
time of day.
Parameters:
stop (str or int)
- the stop tag/id of the bus stop to check
time (str or pandas.Timestamp)
- the time of day to check, uses pandas.to_datetime to convert
- examples that work: "6:00", "3:30pm", "15:30"
inbound (bool, optional)
- whether to check the inbound or outbound schedule
- ignored unless the given stop is in both inbound and outbound
"""
# ensure correct parameter types
stop = str(stop)
time = pd.to_datetime(time)
# check which route to use, and extract the column for the given stop
if (stop in self.inbound_table.columns and
stop in self.outbound_table.columns):
# stop exists in both, use inbound parameter to decide
if inbound:
sched = self.inbound_table[stop]
else:
sched = self.outbound_table[stop]
elif (stop in self.inbound_table.columns):
# stop is in the inbound schedule, use that
sched = self.inbound_table[stop]
elif (stop in self.outbound_table.columns):
# stop is in the outbound schedule, use that
sched = self.outbound_table[stop]
else:
# stop doesn't exist in either, throw an error
raise ValueError(f"Stop id '{stop}' doesn't exist in either inbound or outbound schedules")
# 1: convert schedule to datetime for comparison statements
# 2: drop any NaN values
# 3: convert to list since pd.Series threw errors on i indexing
sched = list(pd.to_datetime(sched).dropna())
# reset the date portion of the time parameter to
# ensure we are checking the schedule correctly
time = time.replace(year=self.date.year, month=self.date.month,
day=self.date.day)
# iterate through that list to find where the time parameter fits
for i in range(1, len(sched)):
# start at 1 and move forward,
# is the time parameter before this schedule entry?
if(time < sched[i]):
# return the difference between this entry and the previous one
return (sched[i] - sched[i-1]).seconds / 60
# can only reach this point if the time parameter is after all entries
# in the schedule, return the last available interval
return (sched[len(sched)-1] - sched[len(sched)-2]).seconds / 60
def load_schedule(route, date, connection):
"""
loads schedule data from the database and returns it
Parameters:
route (str)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT content
FROM schedules
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date >= %s::TIMESTAMP);
"""
# execute query and save the route data to a local variable
cursor.execute(query, (route, str(date), str(date)))
data = cursor.fetchone()[0]['route']
# pd.Timestamp.dayofweek returns 0 for monday and 6 for Sunday
# the actual serviceClass strings are defined by Nextbus
# these are the only 3 service classes we can currently observe,
# if others are published later then this will need to change
if(date.dayofweek <= 4):
serviceClass = 'wkd'
elif(date.dayofweek == 5):
serviceClass = 'sat'
else:
serviceClass = 'sun'
# the schedule format has two entries for each serviceClass,
# one each for inbound and outbound.
# return each entry in the data list with the correct serviceClass
return [sched for sched in data if (sched['serviceClass'] == serviceClass)]
def extract_schedule_tables(route_data):
"""
converts raw schedule data to two pandas dataframes
columns are stops, and rows are individual trips
returns inbound_df, outbound_df
"""
# assuming 2 entries, but not assuming order
if(route_data[0]['direction'] == 'Inbound'):
inbound = 0
else:
inbound = 1
# extract a list of stops to act as columns
inbound_stops = [s['tag'] for s in route_data[inbound]['header']['stop']]
# initialize dataframe
inbound_df = pd.DataFrame(columns=inbound_stops)
# extract each row from the data
if type(route_data[inbound]['tr']) == list:
# if there are multiple trips in a day, structure will be a list
i = 0
for trip in route_data[inbound]['tr']:
for stop in trip['stop']:
# '--' indicates the bus is not going to that stop on this trip
if stop['content'] != '--':
inbound_df.at[i, stop['tag']] = stop['content']
# increment for the next row
i += 1
else:
# if there is only 1 trip in a day, the object is a dict and
# must be handled slightly differently
for stop in route_data[inbound]['tr']['stop']:
if stop['content'] != '--':
inbound_df.at[0, stop['tag']] = stop['content']
# flip between 0 and 1
outbound = int(not inbound)
# repeat steps for the outbound schedule
outbound_stops = [s['tag'] for s in route_data[outbound]['header']['stop']]
outbound_df = pd.DataFrame(columns=outbound_stops)
if type(route_data[outbound]['tr']) == list:
i = 0
for trip in route_data[outbound]['tr']:
for stop in trip['stop']:
if stop['content'] != '--':
outbound_df.at[i, stop['tag']] = stop['content']
i += 1
else:
for stop in route_data[outbound]['tr']['stop']:
if stop['content'] != '--':
outbound_df.at[0, stop['tag']] = stop['content']
# return both dataframes
return inbound_df, outbound_df
def get_common_intervals(df_list):
"""
takes route schedule tables and returns both the average interval (mean)
and the most common interval (mode), measured in number of minutes
takes a list of dataframes and combines them before calculating statistics
intended to combine inbound and outbound schedules for a single route
"""
# ensure we have at least one dataframe
if len(df_list) == 0:
raise ValueError("Function requires at least one dataframe")
# append all dataframes in the array together
df = df_list[0].copy()
for i in range(1, len(df_list)):
df = df.append(df_list[i].copy())
# convert all values to datetime so we can get an interval easily
for col in df.columns:
df[col] = pd.to_datetime(df[col])
# initialize a table to hold each individual interval
intervals = pd.DataFrame(columns=df.columns)
intervals['temp'] = range(len(df))
# take each column and find the intervals in it
for col in df.columns:
prev_time = np.nan
for i in range(len(df)):
# find the first non-null value and save it to prev_time
if pd.isnull(prev_time):
prev_time = df.at[i, col]
# if the current time is not null, save the interval
elif not pd.isnull(df.at[i, col]):
intervals.at[i, col] = (df.at[i, col] - prev_time).seconds / 60
prev_time = df.at[i, col]
# this runs without adding a temp column, but the above loop runs 3x as
# fast if the rows already exist
intervals = intervals.drop('temp', axis=1)
# calculate the mean of the entire table
mean = intervals.mean().mean()
# calculate the mode of the entire table, the [0][0] at the end is
# because scipy.stats returns an entire ModeResult class
mode = stats.mode(intervals.values.flatten())[0][0]
return mean, mode
# Route class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Route:
def __init__(self, route_id, date, connection):
"""
The Route class loads the route configuration data for a particular
route, and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the route data
self.route_data, self.route_type, self.route_name = load_route(self.route_id, self.date, connection)
# extract stops info and rearrange columns to be more human readable
# note: the stop tag is what was used in the schedule data, not stopId
self.stops_table = pd.DataFrame(self.route_data['stop'])
self.stops_table = self.stops_table[['stopId', 'tag', 'title', 'lat', 'lon']]
# extract route path, list of (lat, lon) pairs
self.path_coords = extract_path(self.route_data)
# extract stops table
self.stops_table, self.inbound, self.outbound = extract_stops(self.route_data)
def load_route(route, date, connection):
"""
loads raw route data from the database
Parameters:
route (str or int)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
Returns route_data (dict), route_type (str), route_name (str)
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT route_name, route_type, content
FROM routes
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date > %s::TIMESTAMP);
"""
# execute query and return the route data
cursor.execute(query, (route, str(date), str(date)))
result = cursor.fetchone()
return result[2]['route'], result[1], result[0]
def extract_path(route_data):
"""
Extracts the list of path coordinates for a route.
The raw data stores this as an unordered list of sub-routes, so this
function deciphers the order they should go in and returns a single list.
"""
# KNOWN BUG
# this approach assumed all routes were either a line or a loop.
# routes that have multiple sub-paths meeting at a point break this,
# route 24 is a current example.
# I'm committing this now to get the rest of the code out there
# extract the list of subpaths as just (lat,lon) coordinates
# also converts from string to float (raw data has strings)
path = []
for sub_path in route_data['path']:
path.append([(float(p['lat']), float(p['lon']))
for p in sub_path['point']])
# start with the first element, remove it from path
final = path[0]
path.pop(0)
# loop until the first and last coordinates in final match
counter = len(path)
done = True
while final[0] != final[-1]:
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the last coordinate in final matches the first
# coordinate of another sub-path
if final[-1] == path[i][0]:
# match found, move it to final
# leave out the first coordinate to avoid duplicates
final = final + path[i][1:]
path.pop(i)
break # break the for loop
# protection against infinite loops, if the path never closes
counter -= 1
if counter < 0:
done = False
break
if not done:
# route did not connect in a loop, perform same steps backwards
# to get the rest of the line
for _ in range(len(path)):
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the first coordinate in final matches the last
# coordinate of another sub-path
if final[0] == path[i][-1]:
# match found, move it to final
# leave out the last coordinate to avoid duplicates
final = path[i][:-1] + final
path.pop(i)
break # break the for loop
# some routes may have un-used sub-paths
# Route 1 for example has two sub-paths that are almost identical, with the
# same start and end points
# if len(path) > 0:
# print(f"WARNING: {len(path)} unused sub-paths")
# return the final result
return final
def extract_stops(route_data):
"""
Extracts a dataframe of stops info
Returns the main stops dataframe, and a list of inbound and outbound stops
in the order they are intended to be on the route
"""
stops = pd.DataFrame(route_data['stop'])
directions = pd.DataFrame(route_data['direction'])
# Change stop arrays to just the list of numbers
for i in range(len(directions)):
directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]
# Find which stops are inbound or outbound
inbound = []
for stop_list in directions[directions['name'] == "Inbound"]['stop']:
for stop in stop_list:
if stop not in inbound:
inbound.append(stop)
outbound = []
for stop_list in directions[directions['name'] == "Outbound"]['stop']:
for stop in stop_list:
if stop not in outbound:
outbound.append(stop)
# Label each stop as inbound or outbound
stops['direction'] = ['none'] * len(stops)
for i in range(len(stops)):
if stops.at[i, 'tag'] in inbound:
stops.at[i, 'direction'] = 'inbound'
elif stops.at[i, 'tag'] in outbound:
stops.at[i, 'direction'] = 'outbound'
# Convert from string to float
stops['lat'] = stops['lat'].astype(float)
stops['lon'] = stops['lon'].astype(float)
return stops, inbound, outbound
def get_location_data(rid, begin, end, connection):
# Build query to select location data
query = f"""
SELECT *
FROM locations
WHERE rid = '{rid}' AND
timestamp > '{begin}'::TIMESTAMP AND
timestamp < '{end}'::TIMESTAMP
ORDER BY id;
"""
# read the query directly into pandas
locations = sqlio.read_sql_query(query, connection)
# Convert those UTC timestamps to local PST by subtracting 7 hours
locations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)
# return the result
return locations
# Written by Austie
def fcc_projection(loc1, loc2):
"""
function to apply FCC recommended formulae
for calculating distances on earth projected to a plane
significantly faster computationally, negligible loss in accuracy
Args:
loc1 - a tuple of lat/lon
loc2 - a tuple of lat/lon
"""
from math import radians  # math.cos expects radians; convert the mean latitude
lat1, lat2 = loc1[0], loc2[0]
lon1, lon2 = loc1[1], loc2[1]
mean_lat = radians((lat1+lat2)/2)
delta_lat = lat2 - lat1
delta_lon = lon2 - lon1
k1 = 111.13209 - 0.56605*cos(2*mean_lat) + 0.0012*cos(4*mean_lat)
k2 = 111.41513*cos(mean_lat) - 0.09455*cos(3*mean_lat) + 0.00012*cos(5*mean_lat)
distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)
return distance
def clean_locations(locations, stops):
"""
takes a dataframe of bus locations and a dataframe of stops,
returns the locations dataframe with the nearest stop added
"""
# remove old location reports that would be duplicates
df = locations[locations['age'] < 60].copy()
# remove rows with no direction value
df = df[~pd.isna(df['direction'])]
# shift timestamps according to the age column
df['timestamp'] = df.apply(shift_timestamp, axis=1)
# Make lists of all inbound or outbound stops
inbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)
outbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)
# initialize new columns for efficiency
df['closestStop'] = [0] * len(df)
df['distance'] = [0.0] * len(df)
for i in df.index:
if '_I_' in df.at[i, 'direction']:
candidates = inbound_stops
elif '_O_' in df.at[i, 'direction']:
candidates = outbound_stops
else:
# Skip row if bus is not found to be either inbound or outbound
continue
bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])
# Find closest stop within candidates
# Assume the first stop
closest = candidates.iloc[0]
distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))
# Check each stop after that
for _, row in candidates[1:].iterrows():
# find distance to this stop
dist = fcc_projection(bus_coord, (row['lat'], row['lon']))
if dist < distance:
# closer stop found, save it
closest = row
distance = dist
# Save the tag of the closest stop and the distance to it
df.at[i, 'closestStop'] = closest['tag']
df.at[i, 'distance'] = distance
return df
def shift_timestamp(row):
""" subtracts row['age'] from row['timestamp'] """
return row['timestamp'] - pd.Timedelta(seconds=row['age'])
def get_stop_times(locations, route):
"""
returns a dict, keys are stop tags and values are lists of timestamps
that describe every time a bus was seen at that stop
"""
# Initialize the data structure I will store results in
stop_times = {}
for stop in route.inbound + route.outbound:
stop_times[str(stop)] = []
for vid in locations['vid'].unique():
# Process the route one vehicle at a time
df = locations[locations['vid'] == vid]
# process 1st row on its own
prev_row = df.loc[df.index[0]]
stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])
# loop through the rest of the rows, comparing each to the previous one
for i, row in df[1:].iterrows():
if row['direction'] != prev_row['direction']:
# changed directions, don't compare to previous row
stop_times[str(row['closestStop'])].append(row['timestamp'])
else:
# same direction, compare to previous row
if '_I_' in row['direction']: # get correct stop list
stoplist = route.inbound
else:
stoplist = route.outbound
current = stoplist.index(str(row['closestStop']))
previous = stoplist.index(str(prev_row['closestStop']))
gap = current - previous
if gap > 1: # need to interpolate
diff = (row['timestamp'] - prev_row['timestamp'])/gap
counter = 1
for stop in stoplist[previous+1:current]:
# save interpolated time
stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))
# increase counter for the next stop
# example: with 2 interpolated stops, gap would be 3
# 1st diff is 1/3, next is 2/3
counter += 1
if row['closestStop'] != prev_row['closestStop']:
# only save time if the stop has changed,
# otherwise the bus hasn't moved since last time
stop_times[str(row['closestStop'])].append(row['timestamp'])
# advance for next row
prev_row = row
# Sort each list before returning
for stop in stop_times.keys():
stop_times[stop].sort()
return stop_times
def get_bunches_gaps(stop_times, schedule, bunch_threshold=.2, gap_threshold=1.5):
"""
returns a dataframe of all bunches and gaps found
default thresholds define a bunch as 20% and a gap as 150% of scheduled headway
"""
# Initialize dataframe for the bunches and gaps
problems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])
counter = 0
# Set the bunch/gap thresholds (in seconds)
bunch_threshold = (schedule.common_interval * 60) * bunch_threshold
gap_threshold = (schedule.common_interval * 60) * gap_threshold
for stop in stop_times.keys():
# ensure we have any times at all for this stop
if len(stop_times[stop]) == 0:
#print(f"Stop {stop} had no recorded times")
continue # go to next stop in the loop
# save initial time
prev_time = stop_times[stop][0]
# loop through all others, comparing to the previous one
for time in stop_times[stop][1:]:
diff = (time - prev_time).seconds
if diff <= bunch_threshold:
# bunch found, save it
problems.at[counter] = ['bunch', prev_time, diff, stop]
counter += 1
elif diff >= gap_threshold:
problems.at[counter] = ['gap', prev_time, diff, stop]
counter += 1
prev_time = time
return problems
# this uses sequential search, could speed up with binary search if needed,
# but it currently uses hardly any time in comparison to other steps
def helper_count(expected_times, observed_times):
""" Returns the number of on-time stops found """
# set up early/late thresholds (in seconds)
early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early
late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late
count = 0
for stop in expected_times.columns:
for expected in expected_times[stop]:
if pd.isna(expected):
continue # skip NaN values in the expected schedule
# for each expected time...
# find first observed time after the early threshold
found_time = None
early = expected - early_threshold
# BUG: some schedule data may have stop tags that are not in the inbound
# or outbound definitions for a route. That would throw a key error here.
# Example: stop 14148 on route 24
# current solution is to ignore those stops with the try/except statement
try:
for observed in observed_times[stop]:
if observed >= early:
found_time = observed
break
except KeyError:
continue
# if found time is still None, then all observed times were too early
# if found_time is before the late threshold then we were on time
if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):
# found_time is within the on-time window
count += 1
return count
def calculate_ontime(stop_times, schedule):
""" Returns the on-time percentage and total scheduled stops for this route """
# Save schedules with timestamp data types, set date to match
inbound_times = schedule.inbound_table
for col in inbound_times.columns:
inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
outbound_times = schedule.outbound_table
for col in outbound_times.columns:
outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
# count times for both inbound and outbound schedules
on_time_count = (helper_count(inbound_times, stop_times) +
helper_count(outbound_times, stop_times))
# get total expected count
total_expected = inbound_times.count().sum() + outbound_times.count().sum()
# return on-time percentage
return (on_time_count / total_expected), total_expected
def bunch_gap_graph(problems, interval=10):
"""
returns data for a graph of the bunches and gaps throughout the day
problems - the dataframe of bunches and gaps
interval - the number of minutes to bin data into
returns
{
"times": [time values (x)],
"bunches": [bunch counts (y1)],
"gaps": [gap counts (y2)]
}
"""
# set the time interval
interval = pd.Timedelta(minutes=interval)
# rest of code doesn't work if there are no bunches or gaps
# return the empty graph manually
if len(problems) == 0:
# generate list of times according to the interval
start = pd.Timestamp('today').replace(hour=0, minute=0, second=0)
t = start
times = []
while t.day == start.day:
times.append(str(t.time())[:5])
t += interval
return {
"times": times,
"bunches": [0] * len(times),
"gaps": [0] * len(times)
}
# generate the DatetimeIndex needed
index = pd.DatetimeIndex(problems['time'])
df = problems.copy()
df.index = index
# lists for graph data
bunches = []
gaps = []
times = []
# set selection times
start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)
select_start = start_date
select_end = select_start + interval
while select_start.day == start_date.day:
# get the count of each type of problem in this time interval
count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()
# append the counts to the data list
if 'bunch' in count.index:
bunches.append(int(count['bunch']))
else:
bunches.append(0)
if 'gap' in count.index:
gaps.append(int(count['gap']))
else:
gaps.append(0)
# save the start time for the x axis
times.append(str(select_start.time())[:5])
# increment the selection window
select_start += interval
select_end += interval
return {
"times": times,
"bunches": bunches,
"gaps": gaps
}
def create_simple_geojson(bunches, rid):
geojson = {'type': 'FeatureCollection',
'bunches': create_geojson_features(bunches, rid)}
return geojson
def create_geojson_features(df, rid):
"""
function to generate list of geojson features
for plotting vehicle locations on timestamped map
Expects a dataframe containing lat/lon, vid, timestamp
returns list of basic geojson formatted features:
{
type: Feature
geometry: {
type: Point,
coordinates:[lat, lon]
},
properties: {
route_id: rid
time: timestamp
}
}
"""
# initializing empty features list
features = []
# iterating through df to pull coords, vid, timestamp
# and format for json
for index, row in df.iterrows():
feature = {
'type': 'Feature',
'geometry': {
'type':'Point',
'coordinates':[row.lon, row.lat]
},
'properties': {
'time': row.time.__str__(),
'stop': {'stopId': row.stopId.__str__(),
'stopTitle': row.title.__str__()},
'direction': row.direction.__str__()
}
}
features.append(feature) # adding point to features list
return features
```
## Run each time to get a new report
### Timing breakdown
```
# Change this one cell to change to a different route/day
# Uses 7am to account for the UTC to PST conversion
rid = "1"
begin = "2020/6/1 07:00:00"
end = "2020/6/2 07:00:00"
%%time
# Most time in this cell is waiting on the database responses
# Load schedule and route data
schedule = Schedule(rid, begin, cnx)
route = Route(rid, begin, cnx)
# Load bus location data
locations = get_location_data(rid, begin, end, cnx)
%%time
# Apply cleaning function (this usually takes 1-2 minutes)
locations = clean_locations(locations, route.stops_table)
%%time
# Calculate all times a bus was at each stop
stop_times = get_stop_times(locations, route)
%%time
# Find all bunches and gaps
problems = get_bunches_gaps(stop_times, schedule)
%%time
# Calculate on-time percentage
on_time, total_scheduled = calculate_ontime(stop_times, schedule)
%%time
# Get the bunch/gap graph
bg_graph = bunch_gap_graph(problems, interval=10)
%%time
# Generate the geojson object
bunch_df = problems[problems.type.eq('bunch')]
bunch_df = bunch_df.merge(route.stops_table, left_on='stop', right_on='tag', how='left')
# Creating GeoJSON of bunch times / locations
geojson = create_simple_geojson(bunch_df, rid)
# Print results
print(f"--- Report for route {rid} on {str(pd.to_datetime(begin).date())} ---")
total = 0
for key in stop_times.keys():
total += len(stop_times[key])
intervals = total-len(stop_times)
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
print(f"\nOut of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps")
print(f"\t{(bunches/intervals)*100:.2f}% bunched")
print(f"\t{(gaps/intervals)*100:.2f}% gapped")
print(f"\nFound {int(on_time * total_scheduled + .5)} on-time stops out of {total_scheduled} scheduled")
print(f"On-time percentage is {on_time*100:.2f}%")
```
### All in one function
```
# Expect this cell to take 60-90 seconds to run
# Change this to change to a different route/day
# Uses 7am to account for the UTC to PST conversion
# rid = "LBUS"
# begin = "2020/6/8 07:00:00"
# end = "2020/6/9 07:00:00"
def print_report(rid, date):
"""
Prints a daily report for the given rid and date
rid : (str)
the route id to generate a report for
date : (str or pd.Datetime)
the date to generate a report for
"""
# get begin and end timestamps for the date
begin = pd.to_datetime(date).replace(hour=7)
end = begin + pd.Timedelta(days=1)
# Load schedule and route data
schedule = Schedule(rid, begin, cnx)
route = Route(rid, begin, cnx)
# Load bus location data
locations = get_location_data(rid, begin, end, cnx)
# Apply cleaning function (this usually takes 1-2 minutes)
locations = clean_locations(locations, route.stops_table)
# Calculate all times a bus was at each stop
stop_times = get_stop_times(locations, route)
# Find all bunches and gaps
problems = get_bunches_gaps(stop_times, schedule)
# Calculate on-time percentage
on_time, total_scheduled = calculate_ontime(stop_times, schedule)
# Print results
print(f"--- Report for route {rid} on {str(pd.to_datetime(begin).date())} ---")
total = 0
for key in stop_times.keys():
total += len(stop_times[key])
intervals = total-len(stop_times)
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
print(f"\nOut of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps")
print(f"\t{(bunches/intervals)*100 :.2f}% bunched")
print(f"\t{(gaps/intervals)*100 :.2f}% gapped")
print(f"\nFound {int(on_time * total_scheduled + .5)} on-time stops out of {total_scheduled} scheduled")
print(f"On-time percentage is {(on_time)*100 :.2f}%")
coverage = (total_scheduled * on_time + bunches) / total_scheduled
print(f"\nCoverage is {coverage * 100 :.2f}%")
print_report(rid='1', date='2020/6/2')
print_report(rid='5', date='2020/6/2')
```
## Generating report JSON
```
def generate_report(rid, date):
"""
Generates a daily report for the given rid and date
rid : (str)
the route id to generate a report for
date : (str or pd.Datetime)
the date to generate a report for
returns a dict of the report info
"""
# get begin and end timestamps for the date
begin = pd.to_datetime(date).replace(hour=7)
end = begin + pd.Timedelta(days=1)
# Load schedule and route data
schedule = Schedule(rid, begin, cnx)
route = Route(rid, begin, cnx)
# Load bus location data
locations = get_location_data(rid, begin, end, cnx)
# Apply cleaning function (this usually takes 1-2 minutes)
locations = clean_locations(locations, route.stops_table)
# Calculate all times a bus was at each stop
stop_times = get_stop_times(locations, route)
# Find all bunches and gaps
problems = get_bunches_gaps(stop_times, schedule)
# Calculate on-time percentage
on_time, total_scheduled = calculate_ontime(stop_times, schedule)
# Build result dict
count_times = 0
for key in stop_times.keys():
count_times += len(stop_times[key])
# Number of recorded intervals ( sum(len(each list of times)) - number of lists of times)
intervals = count_times-len(stop_times)
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
coverage = (total_scheduled * on_time + bunches) / total_scheduled
# Isolating bunches, merging with stops to assign locations to bunches
bunch_df = problems[problems.type.eq('bunch')]
bunch_df = bunch_df.merge(route.stops_table, left_on='stop', right_on='tag', how='left')
# Creating GeoJSON of bunch times / locations
geojson = create_simple_geojson(bunch_df, rid)
# int/float conversions are because the json library doesn't work with numpy types
result = {
'route_id': rid,
'route_name': route.route_name,
'route_type': route.route_type,
'date': str(pd.to_datetime(date)),
'overall_health': 0, # TODO: implement this
'num_bunches': bunches,
'num_gaps': gaps,
'total_intervals': intervals,
'on_time_percentage': float(round(on_time * 100, 2)),
'scheduled_stops': int(total_scheduled),
'coverage': float(round(coverage * 100, 2)),
# line_chart contains all data needed to generate the line chart
'line_chart': bunch_gap_graph(problems, interval=10),
# route_table is an array of all rows that should show up in the table
# it will be filled in after all reports are generated
'route_table': [
{
'route_id': rid,
'route_name': route.route_name,
'bunches': bunches,
'gaps': gaps,
'on-time': float(round(on_time * 100, 2)),
'coverage': float(round(coverage * 100, 2))
}
],
'map_data': geojson
}
return result
# Route 1 usage example
report_1 = generate_report(rid='1', date='2020/6/1')
report_1.keys()
# Route 714 usage example
report_714 = generate_report(rid='714', date='2020/6/1')
print('Route:', report_714['route_name'])
print('Bunches:', report_714['num_bunches'])
print('Gaps:', report_714['num_gaps'])
print('On-time:', report_714['on_time_percentage'])
print('Coverage:', report_714['coverage'])
```
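The coverage metric computed above folds bunched arrivals back into the on-time count in a single ratio. As a quick sanity check of that arithmetic with made-up numbers (not real route data):

```python
# Toy check of the coverage formula used in generate_report
on_time = 0.85          # hypothetical on-time fraction
total_scheduled = 200   # hypothetical number of scheduled stop visits
bunches = 12            # hypothetical number of bunched arrivals

coverage = (total_scheduled * on_time + bunches) / total_scheduled
print(round(coverage * 100, 2))  # 91.0
```

With these toy inputs, a route serving 85% of 200 scheduled stops on time, plus 12 bunches, scores 91% coverage.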
# Generating report for all routes
```
def get_active_routes(date):
"""
returns a list of all active route id's for the given date
"""
query = """
SELECT DISTINCT rid
FROM routes
WHERE begin_date <= %s ::TIMESTAMP AND
(end_date IS NULL OR end_date > %s ::TIMESTAMP);
"""
cursor.execute(query, (date, date))
return [result[0] for result in cursor.fetchall()]
%%time
# since this is not optimized yet, this takes about 20 minutes
# choose a day
date = '2020-6-1'
# get all active routes
route_ids = get_active_routes(date)
route_ids.sort()
# get the report for all routes
all_reports = []
for rid in route_ids:
try:
all_reports.append(generate_report(rid, date))
print("Generated report for route", rid)
    except Exception as e:  # in case any particular route throws an error
        print(f"Route {rid} failed: {e}")
len(all_reports)
# generate aggregate reports
# read existing reports into a dataframe to work with them easily
df = pd.DataFrame(all_reports)
# for each aggregate type
types = ['All'] + list(df['route_type'].unique())
counter = 0
for t in types:
# filter df to the routes we are adding up
if t == 'All':
filtered = df
else:
filtered = df[df['route_type'] == t]
# on-time percentage: sum([all on-time stops]) / sum([all scheduled stops])
count_on_time = (filtered['on_time_percentage'] * filtered['scheduled_stops']).sum()
on_time_perc = count_on_time / filtered['scheduled_stops'].sum()
# coverage: (sum([all on-time stops]) + sum([all bunches])) / sum([all scheduled stops])
coverage = (count_on_time + filtered['num_bunches'].sum()) / filtered['scheduled_stops'].sum()
# aggregate the graph object
# x-axis is same for all
first = filtered.index[0]
times = filtered.at[first, 'line_chart']['times']
# sum up all y-axis values
bunches = pd.Series(filtered.at[first, 'line_chart']['bunches'])
gaps = pd.Series(filtered.at[first, 'line_chart']['gaps'])
# same pattern for the geojson list
geojson = filtered.at[first, 'map_data']['bunches']
for i, report in filtered[1:].iterrows():
# pd.Series adds all values in the lists together
bunches += pd.Series(report['line_chart']['bunches'])
gaps += pd.Series(report['line_chart']['gaps'])
# lists concatenate together
geojson += report['map_data']['bunches']
# save a new report object
new_report = {
'route_id': t,
'route_name': t,
'route_type': t,
'date': all_reports[0]['date'],
'overall_health': 0, # TODO, implement. Either avg of all routes, or recalculate using the aggregate statistics
'num_bunches': int(filtered['num_bunches'].sum()),
'num_gaps': int(filtered['num_gaps'].sum()),
'total_intervals': int(filtered['total_intervals'].sum()),
'on_time_percentage': float(round(on_time_perc, 2)),
'scheduled_stops': int(filtered['scheduled_stops'].sum()),
'coverage': float(round(coverage, 2)),
'line_chart': {
'times': times,
'bunches': list(bunches),
'gaps': list(gaps)
},
'route_table': [
{
'route_id': t,
'route_name': t,
'bunches': int(filtered['num_bunches'].sum()),
'gaps': int(filtered['num_gaps'].sum()),
'on-time': float(round(on_time_perc, 2)),
'coverage': float(round(coverage, 2))
}
],
'map_data': {
# 'type': 'FeatureCollection',
# 'bunches': geojson
}
}
# put aggregate reports at the beginning of the list
all_reports.insert(counter, new_report)
counter += 1
# Add route_table rows to the aggregate report
# Set up a dict to hold each aggregate table
tables = {}
for t in types:
tables[t] = []
# Add rows from each report
for report in all_reports:
# add to the route type's table
tables[report['route_type']].append(report['route_table'][0])
# also add to all routes table
if report['route_id'] != "All":
        # guard so the "All" row isn't added twice
tables['All'].append(report['route_table'][0])
# find matching report and set the table there
for key in tables.keys():
for report in all_reports:
if report['route_id'] == key:
# override it because the new table includes the row that was already there
report['route_table'] = tables[key]
break # only 1 report needs each aggregate table
len(all_reports)
# save the all_reports object to a file so I can download it
with open(f'report_{date}_slimmed.json', 'w') as outfile:
json.dump(all_reports, outfile)
# rollback code if needed for testing: remove all aggregate reports
for i in reversed(range(len(all_reports))):
if all_reports[i]['route_id'] in types:
del(all_reports[i])
# save report in the database
query = """
INSERT INTO reports (date, report)
VALUES (%s, %s);
"""
cursor.execute(query, (date, json.dumps(all_reports)))
cnx.commit()
```
<a href="https://colab.research.google.com/github/mkhalil7625/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/Copy_of_LS_DS_214_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 1, Module 4*
---
# Logistic Regression
## Assignment 🌯
You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?
> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.
- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
- [ ] Begin with baselines for classification.
- [ ] Use scikit-learn for logistic regression.
- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)
- [ ] Get your model's test accuracy. (One time, at the end.)
- [ ] Commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Make exploratory visualizations.
- [ ] Do one-hot encoding.
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Get and plot your coefficients.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall'])
df.head()
df['Date']=pd.to_datetime(df['Date'],infer_datetime_format=True)
train = df[df['Date'].dt.year <= 2016]
val = df[df['Date'].dt.year == 2017]
test = df[df['Date'].dt.year >2017]
train['Date'].dt.year.value_counts()
#baseline (majority class)
target = 'Great'
y_train = train[target]
y_val = val[target]
y_test = test[target]
y_train.value_counts(normalize=True)
features = ['Fillings','Meat','Meat:filling','Uniformity', 'Salsa','Tortilla','Temp','Cost']
X_train = train[features]
X_val = val[features]
X_test = test[features]
X_train.head()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='mean')
X_train_imputed = imputer.fit_transform(X_train)
X_val_imputed = imputer.transform(X_val)
X_test_imputed = imputer.transform(X_test)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
X_test_scaled = scaler.transform(X_test_imputed)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
model = LogisticRegression(solver='lbfgs')
model.fit(X_train_scaled,y_train)
y_pred = model.predict(X_test_scaled)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
print('Test Accuracy', accuracy_score(y_test, y_pred))
```
```
# Datset source
# https://www.kaggle.com/moltean/fruits
# Problem Statement: Multiclass classification problem for 131 categories of fruits and vegetables
# Created using the following tutorial template
# https://keras.io/examples/vision/image_classification_from_scratch/
# import required libraries
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
# Define dataset paths and image properties
batch_size = 32
img_height = 100
img_width = 100
image_size = (img_height, img_width)
train_data_dir = 'dataset/fruits-360/Training/'
test_data_dir = 'dataset/fruits-360/Test/'
# Create a train dataset
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
train_data_dir,
seed=2,
image_size=(img_height, img_width),
batch_size=batch_size)
# Create a test dataset
test_ds = tf.keras.preprocessing.image_dataset_from_directory(
test_data_dir,
seed=2,
image_size=(img_height, img_width),
batch_size=batch_size)
# Print train data class names
train_class_names = train_ds.class_names
print(train_class_names)
# Print test data class names
test_class_names = test_ds.class_names
print(test_class_names)
# Print a few images
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(train_class_names[labels[i]])
plt.axis("off")
# Use data augmentation
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal"),
layers.experimental.preprocessing.RandomRotation(0.1),
]
)
# Plot an image after data augmentation
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
for i in range(9):
augmented_images = data_augmentation(images)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_images[0].numpy().astype("uint8"))
plt.axis("off")
# Buffered prefetching to mitigate I/O blocking
train_ds = train_ds.prefetch(buffer_size=32)
test_ds = test_ds.prefetch(buffer_size=32)
# Use this for tf 2.3 gpu
# !pip install numpy==1.19.5
# Small version of Xception network
def make_model(input_shape, num_classes):
inputs = keras.Input(shape=input_shape)
# Image augmentation block
x = data_augmentation(inputs)
# Entry block
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
previous_block_activation = x # Set aside residual
for size in [128, 256, 512, 728]:
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.SeparableConv2D(size, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
# Project residual
residual = layers.Conv2D(size, 1, strides=2, padding="same")(
previous_block_activation
)
x = layers.add([x, residual]) # Add back residual
previous_block_activation = x # Set aside next residual
x = layers.SeparableConv2D(1024, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
if num_classes == 2:
activation = "sigmoid"
units = 1
else:
activation = "softmax"
units = num_classes
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units, activation=activation)(x)
return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size + (3,), num_classes=131)
keras.utils.plot_model(model, show_shapes=True)
model.summary()
# Train the model
epochs = 5
callbacks = [
keras.callbacks.ModelCheckpoint("save_at_{epoch}.h5"),
]
model.compile(
optimizer=keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=test_ds,
)
# Test the model
import matplotlib.image as mpimg
img = keras.preprocessing.image.load_img(
"dataset/fruits-360/Test/Banana/100_100.jpg", target_size=image_size
)
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create batch axis
predictions = model.predict(img_array)
score = predictions[0]
img = mpimg.imread('dataset/fruits-360/Test/Banana/100_100.jpg')
imgplot = plt.imshow(img)
plt.title("Predicted Fruit: " + train_class_names[score.tolist().index(max(score.tolist()))])
plt.axis("off")
plt.show()
```
# Project Week 1: Creating an animated scatterplot with Python
```
#Import libraries
import pandas as pd
import imageio
import seaborn as sns
import matplotlib.pyplot as plt
```
### Step 1: Read in data
```
gdppercapitagrowth = pd.read_csv('data/gdp_per_capita_growth.csv', index_col=0)
gdppercapitagrowth
industry = pd.read_csv('data/industry_value_added_growth.csv', index_col=0)
industry
gdp = pd.read_csv('data/gdp_per_capita.csv', index_col=0)
gdp
```
### Step 2: some preliminary data exploration
```
print(gdppercapitagrowth.shape)
print(industry.shape)
print(gdp.shape)
gdppercapitagrowth.columns
industry.columns
gdp.columns
gdppercapitagrowth.index
industry.index
gdp.index
```
### Step 3: some preliminary data transformation
```
gdppercapitagrowth.columns = gdppercapitagrowth.columns.astype(int)
industry.columns = industry.columns.astype(int)
gdp.columns = gdp.columns.astype(int)
gdppercapitagrowth.index.name = 'country'
gdppercapitagrowth = gdppercapitagrowth.reset_index()
gdppercapitagrowth = gdppercapitagrowth.melt(id_vars='country', var_name='year', value_name='gdppercapitagrowth')
industry.index.name = 'country'
industry = industry.reset_index()
industry = industry.melt(id_vars='country', var_name='year', value_name='industry')
gdp.index.name = 'country'
gdp = gdp.reset_index()
gdp = gdp.melt(id_vars='country', var_name='year', value_name='gdp')
```
### Step 4: merging the dataframes
```
df = gdppercapitagrowth.merge(industry)
df
continents = pd.read_csv('data/continents.csv', index_col=0)
continents
df = pd.merge(df, continents, how = "left", on = "country" )
df
df = df.reset_index()
gdp = gdp.reset_index()
gdp
df = pd.merge(df, gdp, how = "outer", on='index')
df
# Retrieving different values from "Continent" column
df['Continent'].value_counts()
indexNames = df[df['Continent'] == "0"].index
df.drop(indexNames, inplace=True)
df = df.dropna()
df
column_a = df["gdppercapitagrowth"]
xmax = column_a.max()
xmin = column_a.min()
column_b = df["industry"]
ymax = column_b.max()
ymin = column_b.min()
df_subset = df.loc[df['year_x'] == 2000]
sns.scatterplot(x='gdppercapitagrowth', y='industry',
data=df_subset, alpha=0.6)
plt.show()
```
### Step 5: creating a version of the visualisation with outliers
```
year_list = list(range(1965, 2020))
for a in (year_list):
df_subset = df.loc[df['year_x'] == a]
sns.scatterplot(x='gdppercapitagrowth', y='industry', size='gdp', hue='gdp', data=df_subset, alpha=0.8, sizes=(60, 200), legend = False)
plt.axis((xmin, xmax, ymin, ymax))
plt.xlabel("GDP per capita growth")
plt.ylabel("Industrial value added growth")
plt.title(str(a))
plt.savefig('growth_'+str(a)+'.png', dpi=300)
plt.close()
images = []
for a in year_list:
filename = 'growth_{}.png'.format(a)
images.append(imageio.imread(filename))
imageio.mimsave('output1.gif', images, fps=0.9)
```
<img width="60%" height="60%" align="left" src="output1.gif">
### Step 6: creating a version of the visualisation without outliers
```
for a in (year_list):
df_subset = df.loc[df['year_x'] == a]
sns.scatterplot(x='gdppercapitagrowth', y='industry', size='gdp', sizes=(60, 200),
data=df_subset, alpha=0.75, hue='gdp', legend = False)
plt.axis((-15, 20, -15, 20))
plt.xlabel("GDP per capita growth")
plt.ylabel("Industrial value added growth")
plt.title(str(a))
plt.savefig('growthb_'+str(a)+'.png', dpi=300)
plt.close()
images = []
for a in year_list:
filename = 'growthb_{}.png'.format(a)
images.append(imageio.imread(filename))
imageio.mimsave('output2.gif', images, fps=0.9)
```
<img width="60%" height="60%" align="left" src="output2.gif">
### Step 7: creating a version of the visualisation with selected continents as hues
```
df['continents'] = ""
df.loc[df['Continent'] == "Asia", "continents"] = "Asia"
df.loc[df['Continent'] == "South America", "continents"] = "South America"
colors = ["#c0c0c0","#ff6600",'#0080ff']
for a in (year_list):
df_subset = df.loc[df['year_x'] == a]
g = sns.scatterplot(x='gdppercapitagrowth', y='industry', sizes=(50,1500), size="gdp", palette=colors,
data=df_subset, alpha=0.6, hue='continents', legend="brief")
h,l = g.get_legend_handles_labels()
plt.axis((-20, 20, -20, 20))
plt.xlabel("GDP per capita growth")
plt.ylabel("Industrial value added growth")
sns.despine()
plt.legend(h[0:3],l[0:3], fontsize=8, loc='lower right', bbox_to_anchor=(1, 0.1))
plt.text(0.5, -19, '*the larger the bubble, the larger the GDP per capita',
verticalalignment='bottom', horizontalalignment='center', color='black', fontsize=6)
plt.text(-17.5, 19, str(a),
verticalalignment='top', horizontalalignment='left', color='grey', fontsize=10)
plt.title("Does industrial growth drive GDP per capita growth?")
plt.savefig('growthc_'+str(a)+'.png', dpi=300)
plt.close()
images = []
for a in year_list:
filename = 'growthc_{}.png'.format(a)
images.append(imageio.imread(filename))
imageio.mimsave('output3.gif', images, fps=1.2)
```
<img width="80%" height="80%" align="left" src="output3.gif">
```
%load_ext autoreload
%autoreload 2
# reloads all modules automatically before executing code
%matplotlib notebook
from irreversible_stressstrain import StressStrain as strainmodel
import test_suite as suite
import graph_suite as plot
import numpy as np
model = strainmodel('ref/HSRS/22').get_experimental_data()
slopes = suite.get_slopes(model)
second_deriv_slopes = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
# -- we think that yield occurs where the standard deviation is decreasing AND the slopes are mostly negative
def findYieldInterval(slopes, numberofsections):
def numneg(val):
return sum((val<0).astype(int))
    # -- divide the slopes into numberofsections intervals and save the stddev of each
splitslopes = np.array_split(slopes,numberofsections)
splitseconds = np.array_split(second_deriv_slopes,numberofsections)
# -- displays the number of negative values in a range (USEFUL!!!)
for section in splitslopes:
print numneg(section), len(section)
print "-------------------------------"
for section in splitseconds:
print numneg(section), len(section)
divs = [np.std(vals) for vals in splitslopes]
# -- stddev of the whole thing
stdev = np.std(slopes)
interval = 0
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print divs, stdev
# -- the proportion of slope values in an interval that must be negative to determine that material yields
cutoff = 3./4.
while numneg(slopesect)<len(slopesect)*cutoff and numneg(secondsect)<len(secondsect)*cutoff:
interval = interval + 1
"""Guard against going out of bounds"""
if interval==len(splitslopes): break
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print
print interval
return interval
numberofsections = 15
interval_length = len(model)/numberofsections
"""
Middle of selected interval
Guard against going out of bounds
"""
yield_interval = findYieldInterval(slopes,numberofsections)
yield_index = min(yield_interval*interval_length + interval_length/2,len(model[:])-1)
yield_value = np.array(model[yield_index])[None,:]
print
print yield_value
```
## Make these estimates more reliable and robust
```
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
"""Now what if we have strain vs slope"""
strainvslope = suite.combine_data(strain,slopes)
strainvsecond = suite.combine_data(strain,second_deriv)
plot.plot2D(strainvsecond,'Strain','Slope',marker="ro")
plot.plot2D(model,'Strain','Stress',marker="ro")
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
num_intervals = 80
interval_length = len(second_deriv)/num_intervals
split_2nd_derivs = np.array_split(second_deriv,num_intervals)
print np.mean(second_deriv)
down_index = 0
for index, section in enumerate(split_2nd_derivs):
if sum(section)<np.mean(slopes):
down_index = index
break
yield_index = down_index*interval_length
print strain[yield_index], stress[yield_index]
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
plot1 = suite.combine_data(strain,first_deriv)
plot2 = suite.combine_data(strain,second_deriv)
plot.plot2D(model)
plot.plot2D(plot1)
plot.plot2D(plot2)
```
### See when standard deviation of second derivative begins to decrease
```
model = strainmodel('ref/HSRS/222').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
ave_deviation = np.std(second_deriv)
deviation_second = [np.std(val) for val in np.array_split(second_deriv,30)]
yielding = 0
for index,value in enumerate(deviation_second):
if value != 0.0 and value<ave_deviation and index!=0:
yielding = index
break
print second_deriv
#print "It seems to yield at index:", yielding
#print "These are all of the standard deviations, by section:", deviation_second, "\n"
#print "The overall standard deviation of the second derivative is:", ave_deviation
```
## The actual yield values are as follows (These are approximate):
### ref/HSRS/22: Index 106 [1.3912797535, 900.2614980977]
### ref/HSRS/222: Index 119 [0, 904.6702299]
### ref/HSRS/326: Index 150 [6.772314989, 906.275032]
### Index of max standard deviation of the curve
```
model = strainmodel('ref/HSRS/22').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
print second_deriv
chunks = 20
int_length = len(model[:])/chunks
deriv2spl = np.array_split(second_deriv,chunks)
deviation_second = [abs(np.mean(val)) for val in deriv2spl]
del deviation_second[0]
print deviation_second
print np.argmax(deviation_second)
#print "The standard deviation of all the second derivatives is", np.std(second_deriv)
```
### If our data dips, we can attempt to find local maxima
```
import numpy as np
# -- climbs a discrete dataset to find local max
def hillclimber(data, guessindex = 0):
x = data[:,0]
y = data[:,1]
curx = x[guessindex]
cury = y[guessindex]
    done = False
    while not done:
        # -- neighbors of the current guess, clamped to the array bounds
        guessleft = max(0, guessindex - 1)
        guessright = min(len(x) - 1, guessindex + 1)
        difleft = y[guessleft] - cury
        difright = y[guessright] - cury
        if difleft <= 0 and difright <= 0:
            # -- neither neighbor is higher: we are at a local maximum
            done = True
        elif difleft > difright:
            cury = y[guessleft]
            guessindex = guessleft
        else:
            cury = y[guessright]
            guessindex = guessright
    return guessindex
func = lambda x: x**2
xs = np.linspace(0.,10.,5)
ys = func(xs)
data = suite.combine_data(xs,ys)
print hillclimber(data)
```
# `scinum` example
```
from scinum import Number, Correlation, NOMINAL, UP, DOWN, ABS, REL
```
The examples below demonstrate
- [Numbers and formatting](#Numbers-and-formatting)
- [Defining uncertainties](#Defining-uncertainties)
- [Multiple uncertainties](#Multiple-uncertainties)
- [Configuration of correlations](#Configuration-of-correlations)
- [Automatic uncertainty propagation](#Automatic-uncertainty-propagation)
### Numbers and formatting
```
n = Number(1.234, 0.2)
n
```
The uncertainty definition is absolute. See the examples with [multiple uncertainties](#Multiple-uncertainties) for relative uncertainty definitions.
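For intuition, the conversion between the two conventions is one line of plain Python (this is just the underlying arithmetic, not scinum API):

```python
# Converting the absolute uncertainty of n = 1.234 +- 0.2 to a relative one
nominal = 1.234
abs_unc = 0.2                 # absolute, as defined above
rel_unc = abs_unc / nominal   # relative fraction, about 16.2%
print(round(rel_unc, 3))  # 0.162
```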
The representation of numbers (`repr`) in jupyter notebooks uses latex-style formatting. Internally, [`Number.str()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.str) is called, which - among others - accepts a `format` argument, defaulting to `"%s"` (configurable globally or per instance via [`Number.default_format`](https://scinum.readthedocs.io/en/latest/#scinum.Number.default_format)). Let's change the format for this notebook:
```
Number.default_format = "%.2f"
n
# or
n.str("%.3f")
```
### Defining uncertainties
Above, `n` is defined with a single, symmetric uncertainty. Here are some basic examples of how to access and play with it:
```
# nominal value
print(n.nominal)
print(type(n.nominal))
# get the uncertainty
print(n.get_uncertainty())
print(n.get_uncertainty(direction=UP))
print(n.get_uncertainty(direction=DOWN))
# get the nominal value, shifted by the uncertainty
print(n.get()) # nominal value
print(n.get(UP)) # up variation
print(n.get(DOWN)) # down variation
# some more advanced use-cases:
# 1. get the multiplicative factor that would scale the nominal value to the UP/DOWN varied ones
print("absolute factors:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
# 2. get the factor to obtain the uncertainty only (i.e., the relative uncertainty)
# (this is, of course, more useful in case of multiple uncertainties, see below)
print("\nrelative factors:")
print(n.get(UP, factor=True, diff=True))
print(n.get(DOWN, factor=True, diff=True))
```
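The `factor=True` values are plain ratios; a stdlib-only sketch of the same arithmetic, assuming n = 1.234 with an absolute uncertainty of 0.2:

```python
# Plain-Python version of the factor computations above
nominal, unc = 1.234, 0.2
up_factor = (nominal + unc) / nominal    # scales the nominal value to the up-varied one
diff_factor = unc / nominal              # relative uncertainty only
print(round(up_factor, 4), round(diff_factor, 4))
```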
There are also a few shorthands for the above methods:
```
# __call__ is forwarded to get()
print(n())
print(n(UP))
# u() is forwarded to get_uncertainty()
print(n.u())
print(n.u(direction=UP))
```
### Multiple uncertainties
Let's create a number that has two uncertainties: `"stat"` and `"syst"`. The `"stat"` uncertainty is asymmetric, and the `"syst"` uncertainty is relative.
```
n = Number(8848, {
"stat": (30, 20), # absolute +30-20 uncertainty
"syst": (REL, 0.5), # relative +-50% uncertainty
})
n
```
Similar to above, we can access the uncertainties and shifted values with [`get()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get) (or `__call__`) and [`get_uncertainty()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get_uncertainty) (or [`u()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.u)). But this time, we can distinguish between the combined (in quadrature) value or the particular uncertainty sources:
```
# nominal value as before
print(n.nominal)
# get all uncertainties (stored absolute internally)
print(n.uncertainties)
# get particular uncertainties
print(n.u("syst"))
print(n.u("stat"))
print(n.u("stat", direction=UP))
# get the nominal value, shifted by particular uncertainties
print(n(UP, "stat"))
print(n(DOWN, "syst"))
# compute the shifted value for both uncertainties, added in quadrature without correlation (default but configurable)
print(n(UP))
```
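The comment in the last cell mentions combination in quadrature; spelled out with the stdlib, using the numbers defined above (stat up-shift of 30, syst 50% of 8848):

```python
import math

# Default (uncorrelated) combination of the two up-shifts behind n(UP)
nominal = 8848
stat_up = 30                # absolute up-shift of "stat"
syst_up = 0.5 * nominal     # 50% relative -> absolute 4424.0
combined_up = math.sqrt(stat_up**2 + syst_up**2)
print(round(nominal + combined_up, 1))  # 13272.1
```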
As before, we can also access certain aspects of the uncertainties:
```
print("factors for particular uncertainties:")
print(n.get(UP, "stat", factor=True))
print(n.get(DOWN, "syst", factor=True))
print("\nfactors for the combined uncertainty:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
```
We can also apply some nice formatting:
```
print(n.str())
print(n.str("%.2f"))
print(n.str("%.2f", unit="m"))
print(n.str("%.2f", unit="m", force_asymmetric=True))
print(n.str("%.2f", unit="m", scientific=True))
print(n.str("%.2f", unit="m", si=True))
print(n.str("%.2f", unit="m", style="root"))
```
### Configuration of correlations
Let's assume that we have a second measurement for the quantity `n` we defined above,
```
n
```
and we measured it with the same sources of uncertainty,
```
n2 = Number(8920, {
"stat": (35, 15), # absolute +35-15 uncertainty
"syst": (REL, 0.3), # relative +-30% uncertainty
})
n2
```
Now, we want to compute the average measurement, including correct error propagation under consideration of sensible correlations. For more info on automatic uncertainty propagation, see the [subsequent section](#Automatic-uncertainty-propagation).
In this example, we want to fully correlate the *systematic* uncertainty, whereas we can treat *statistical* effects as uncorrelated. However, just writing `(n + n2) / 2` will consider equally named uncertainty sources to be 100% correlated, i.e., both `syst` and `stat` uncertainties will be simply averaged. This is the default behavior in scinum as it is not possible (nor wise) to *guesstimate* the meaning of an uncertainty from its name.
While this approach is certainly correct for `syst`, we don't achieve the correct treatment for `stat`:
```
(n + n2) / 2
```
Instead, we need to define the correlation specifically for `stat`. This can be achieved in multiple ways, but the most pythonic way is to use a [`Correlation`](https://scinum.readthedocs.io/en/latest/#correlation) object.
```
(n @ Correlation(stat=0) + n2) / 2
```
**Note** that the statistical uncertainty decreased as desired, whereas the systematic one remained the same.
`Correlation` objects have a default value that can be set as the first positional, yet optional parameter, and itself defaults to one.
Internally, the operation `n @ Correlation(stat=0)` (or `n * Correlation(stat=0)` in Python 2) is evaluated prior to the addition of `n2` and generates a so-called [`DeferredResult`](https://scinum.readthedocs.io/en/latest/#deferredresult). This object carries the information of `n` and the correlation over to the next operation, at which point the uncertainty propagation is eventually resolved. As usual, in situations where the operator precedence might seem unclear, it is recommended to use parentheses to structure the expression.
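For reference, the rule applied in these examples is standard Gaussian error propagation: for the average a = (x + y)/2 with uncertainties sigma_x, sigma_y and correlation rho, sigma_a = sqrt(sigma_x^2 + sigma_y^2 + 2*rho*sigma_x*sigma_y)/2. A stdlib sketch, using the up-variations of the stat uncertainties (30 and 35) as example inputs:

```python
import math

def avg_unc(sx, sy, rho):
    # uncertainty of (x + y) / 2 under Gaussian error propagation
    return math.sqrt(sx**2 + sy**2 + 2 * rho * sx * sy) / 2

print(avg_unc(30, 35, 1.0))             # fully correlated: 32.5
print(round(avg_unc(30, 35, 0.0), 2))   # uncorrelated, smaller as expected
```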
### Automatic uncertainty propagation
Let's continue working with the number `n` from above.
Uncertainty propagation works in a pythonic way:
```
n + 200
n / 2
n**0.5
```
In cases such as the last one, formatting makes a lot of sense ...
```
(n**0.5).str("%.2f")
```
More complex operations such as `exp`, `log`, `sin`, etc, are provided on the `ops` object, which mimics Python's `math` module. The benefit of the `ops` object is that all its operations are aware of Gaussian error propagation rules.
```
from scinum import ops
# change the default format for convenience
Number.default_format = "%.3f"
# compute the log of n
ops.log(n)
```
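Under the hood, such operations use first-order propagation, sigma_f = |f'(x)| * sigma_x; for the natural log that is sigma_x / x. A quick stdlib check with assumed values x = 8848, sigma = 30 (the stat uncertainty from earlier):

```python
import math

# First-order propagation for f(x) = log(x): sigma_f = sigma_x / x
x, sigma = 8848.0, 30.0
print(round(math.log(x), 3), round(sigma / x, 5))
```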
The propagation is actually performed simultaneously per uncertainty source.
```
m = Number(5000, {"syst": 1000})
n + m
n / m
```
As described [above](#Configuration-of-correlations), equally named uncertainty sources are assumed to be fully correlated. You can configure the correlation in operations through `Correlation` objects, or by using explicit methods on the number object.
```
# n.add(m, rho=0.5, inplace=False)
# same as
n @ Correlation(0.5) + m
```
When you set `inplace` to `True` (the default), `n` is updated inplace.
```
n.add(m, rho=0.5)
n
```
## Briefly
### __Problem Statement__
- Obtain news from Google News articles
- Summarize the articles within 60 words
- Obtain keywords from the articles
##### Importing all the necessary libraries required to run the following code
```
from gnewsclient import gnewsclient # for fetching google news
from newspaper import Article # to obtain text from news articles
from transformers import pipeline # to summarize text
import spacy # for named entity recognition
```
##### Load sshleifer/distilbart-cnn-12-6 model
```
def load_model():
model = pipeline('summarization')
return model
data = gnewsclient.NewsClient(max_results=0)
nlp = spacy.load("en_core_web_lg")
```
##### Obtain URLs and their contents
```
def getNews(topic,location):
count=0
contents=[]
titles=[]
authors=[]
urls=[]
data = gnewsclient.NewsClient(language='english',location=location,topic=topic,max_results=10)
news = data.get_news()
for item in news:
url=item['link']
article = Article(url)
try:
article.download()
article.parse()
            # the title ends with " - <publisher>": reverse the string to find
            # the last "-", then split the title from the publisher
            temp=item['title'][::-1]
            index=temp.find("-")
            temp=temp[:index-1][::-1]
urls.append(url)
contents.append(article.text)
titles.append(item['title'][:-index-1])
authors.append(temp)
count+=1
if(count==5):
break
        except Exception:
            # skip articles that fail to download or parse
            continue
return contents,titles,authors,urls
```
##### Summarize the content (minimum length 30 words, maximum 60)
```
def getNewsSummary(contents,summarizer):
summaries=[]
for content in contents:
minimum=len(content.split())
summaries.append(summarizer(content,max_length=60,min_length=min(30,minimum),do_sample=False,truncation=True)[0]['summary_text'])
return summaries
```
##### Named Entity Recognition
```
# Obtain 4 keywords from content (person,organisation or geopolitical entity)
def generateKeyword(contents):
keywords=[]
words=[]
labels=["PERSON","ORG","GPE"]
    for content in contents:
        doc = nlp(content)
        keys = []
        for ent in doc.ents:
            key = ent.text.upper()
            if key not in words and key not in keys and ent.label_ in labels:
                keys.append(key)
                for element in key.split():
                    words.append(element)
                if len(keys) == 4:
                    break
        # append even when fewer than 4 entities were found, keeping
        # keywords aligned with contents
        keywords.append(keys)
    return keywords
```
##### Displaying keywords
```
def printKeywords(keywords):
for keyword in keywords:
print(keyword)
```
##### Displaying the Summary with keywords in it highlighted
```
def printSummary(summaries,titles):
for summary,title in zip(summaries,titles):
print(title.upper(),'\n')
print(summary)
print("\n\n")
summarizer=load_model()
contents,titles,authors,urls=getNews("Sports","India")
summaries=getNewsSummary(contents,summarizer)
keywords=generateKeyword(contents)
printKeywords(keywords)
printSummary(summaries,titles)
```
<a href="https://colab.research.google.com/github/SamH3pn3r/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/Copy_of_Black_Friday.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Imports
import pandas as pd
# Url
black_friday_csv_link = "https://raw.githubusercontent.com/pierretd/datasets-1/master/BlackFriday.csv"
# Reading in csv
black_friday_df = pd.read_csv(black_friday_csv_link)
black_friday_df.head(10)
```
### The above dataset has already been loaded in for you, answer the following questions. If you get done quickly, move onto stretch goals and super stretch goals. Work on improving your model until 8:40.
#### 1) Clean the data set and drop the Null/NaN values. Rename Product_Category_1-3 columns with an actual Product.
```
black_friday_df = black_friday_df.dropna()
black_friday_df.isnull().sum()
black_friday_df = black_friday_df.rename(index = str, columns={"Product_Category_1": "Widescreen TV", "Product_Category_2": "Nintendo Wii", "Product_Category_3": "Electronic Drumset"})
black_friday_df.head()
```
#### 2) How many unique user_ids does the data set contain?
```
unique_values = black_friday_df['User_ID'].value_counts()
print(len(unique_values))
```
#### 3) How many unique age brackets are in the dataset. Which Age bracket has the most entries? Which has the least?
```
unique_ages = black_friday_df['Age'].value_counts()
len(unique_ages)
unique_ages
```
#### 4) Transform the Gender categorical variable into a numerical variable. Then transform that numerical value into a Boolean.
```
black_friday_df['Gender'] = black_friday_df['Gender'].replace("M",0)
black_friday_df['Gender'] = black_friday_df['Gender'].replace("F",1)
black_friday_df.head(10)
# transform the numerical values into Booleans: 0 (originally "M") -> False, 1 (originally "F") -> True
black_friday_df['Gender'] = black_friday_df['Gender'].replace(0, False)
black_friday_df['Gender'] = black_friday_df['Gender'].replace(1, True)
black_friday_df.head()
```
#### 5) What is the average Occupation score? What is the Standard Deviation? What is the maximum and minimum value?
```
black_friday_df['Occupation'].describe()
```
#### 6) Group Age by Gender and print out a cross tab with age as the y axis
```
pd.crosstab(black_friday_df['Age'], black_friday_df['Gender'])
```
### Stretch Goal:
#### Build a linear regression model to predict the purchase amount given the other features in the data set with scikit learn.
```
```
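A minimal sketch for the stretch goal. The tiny stand-in frame below is hypothetical so the example runs on its own; with the real data you would select numeric feature columns from `black_friday_df` and predict `Purchase`.

```python
# Sketch: linear regression to predict purchase amount (hypothetical data)
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "Occupation": [10, 16, 15, 7, 20, 9, 1, 12],
    "Marital_Status": [0, 0, 1, 1, 0, 1, 0, 1],
    "Purchase": [8370, 15200, 1422, 1057, 7969, 15227, 19215, 15854],
})
X = df[["Occupation", "Marital_Status"]]
y = df["Purchase"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
print(pred)
```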
### Super Stretch Goals:
#### Plot the actual values vs the predicted values.
```
```
#### Find a good way to measure your model's predictive power.
```
```
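One way to approach this last goal (a sketch, not the only answer): compare predictions against held-out targets using root mean squared error and the R^2 score, written here in plain NumPy so the cell stands on its own.

```python
# Two common measures of predictive power, implemented with NumPy
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error: average prediction error in the target's units
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    # R^2: fraction of target variance explained by the predictions
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)
```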
# ♠ Sales Prediction ♠
Importing necessary files
```
# Importing pandas to read file
import pandas as pd
# Reading csv file directly from url
data = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", index_col = 0)
# Display data
data
# Checking the shape of data (Rows, Column)
data.shape
```
# Display the relationship of sales to TV, Radio, Newspaper
```
# Importing seaborn for data visualization
import seaborn as sns
# for plotting within notebook
%matplotlib inline
sns.pairplot(data, x_vars=('TV','radio','newspaper'), y_vars='sales', height = 5, aspect = .8, kind='reg');
# If a warning appears here, it only notes that this seaborn API will change in a future release.
```
# Storing data separately
```
# Create a python list
feature_cols = ["TV", "radio", "newspaper"]
response_col = ["sales"]
# Storing values into X
X = data[feature_cols]
# Display X
X.head()
# Display the shape of features
X.shape
# Storing response_col in y
y = data[response_col]
# Display the sales column
y.head()
# Display the shape of response column
y.shape
```
# Splitting X and y into train and test
```
from sklearn.model_selection import train_test_split
split = .30
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42, test_size = split)
# See if the split works or not
print("X_Train: {0}".format(X_train.shape))
print("X_test: {0}".format(X_test.shape))
print("y_train: {0}".format(y_train.shape))
print("y_test: {0}".format(y_test.shape))
```
# Import Linear Regression
```
# Import model
from sklearn.linear_model import LinearRegression
# Store into a classifier
clf = LinearRegression()
# Fit the model on X_train, y_train
clf.fit(X_train, y_train)
# Prediction
y_pred = clf.predict(X_test)
```
# Accuracy
```
# Error_evaluation
import numpy as np
from sklearn import metrics
print("Error: {0}".format(np.sqrt(metrics.mean_squared_error(y_test, y_pred))))
```
# We can also reduce this error by removing the newspaper column, as newspaper spend has only a weak linear relation to sales.
```
# Feature columns
f_col = ["TV", "radio"]
r_col = ["sales"]
# Store into X and y
X = data[f_col]
y = data[r_col]
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1)
# Fit the model
clf.fit(X_train, y_train)
# Prediction
y_pred = clf.predict(X_test)
# Error evaluation
print("New Error: {0}".format(np.sqrt(metrics.mean_squared_error(y_test, y_pred))))
```
As you can see, our error decreased from 1.92 to 1.39, which is good news for our model. So if we spend more money on TV and radio instead of newspaper, sales should go up.
# Thank You
© NELOY CHANDRA BARDHAN
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
%matplotlib inline
df_id_pos = pd.read_excel('978-4-274-22101-9.xlsx', 'ID付きPOSデータ(POSデータ)')
df_id_pos.head()
```
# 5 Evaluating the Sales Floor
## 5.1 Evaluating Sales by Aggregation
```
df7 = df_id_pos[df_id_pos['日'] <= 7][['大カテゴリ名', '日']]
# Table 5.1: units sold by day and major category
table_5_1 = df7.groupby(['大カテゴリ名', '日']).size().unstack()
table_5_1 = table_5_1.fillna(0)
table_5_1['合計'] = table_5_1.sum(axis=1)
table_5_1.loc['合計'] = table_5_1.sum(axis=0)
table_5_1
# Table 5.2: coefficient of variation (std / mean) per major category
table_5_2 = table_5_1.copy()
table_5_2 = table_5_2.drop(['合計'], axis=1).drop(['合計'], axis=0)
table_5_2['変動係数'] = table_5_2.std(axis=1) / table_5_2.mean(axis=1)
table_5_2 = table_5_2.drop([i for i in range(1, 8)], axis=1)
table_5_2
# Table 5.3: drill-down of the '即席食品' (instant food) category
df7c = df_id_pos[(df_id_pos['日'] <= 7) & (df_id_pos['大カテゴリ名'] == '即席食品')][['中カテゴリ名', '日']]
table_5_3 = df7c.groupby(['中カテゴリ名', '日']).size().unstack()
table_5_3 = table_5_3.fillna(0)
table_5_3
```
## 5.2 Managing Sales-Floor Metrics
```
df_id_pos.head()
# Table 5.4: factorizing sales into its components
pd.options.display.float_format = '{:.2f}'.format
def add_sales_factorize(df, col, df0):
    sales = df0[['税抜価格']].sum().values[0]
    unique_customer = df0['顧客ID'].nunique()
    values = [
        sales,
        unique_customer,
        sales / unique_customer,
        df0['レシートNo'].nunique() / unique_customer,
        sales / df0['レシートNo'].nunique(),
        df0.groupby(['レシートNo']).size().mean(),
        sales / len(df0)
    ]
    print(values)
    df[col] = values
table_5_4 = pd.DataFrame(index=[
    'Sales (yen)',
    'Unique customers',
    'Purchase amount per customer (yen)',
    'Visit frequency (visits)',
    'Spend per visit (yen)',
    'Items per basket',
    'Price per item (yen)'
])
df_former_15 = df_id_pos[df_id_pos['日'] <= 15]
df_later_15 = df_id_pos[df_id_pos['日'] > 15]
add_sales_factorize(table_5_4, '1〜15日', df_former_15)
add_sales_factorize(table_5_4, '16〜30日', df_later_15)
table_5_4
# Table 5.5: top 20 minor categories by unit-count PI (per 1000 customers)
customers = df_id_pos['顧客ID'].nunique()
table_5_5_item = df_id_pos.groupby(['小カテゴリ名']).size() / customers * 1000
table_5_5_item = table_5_5_item.sort_values(ascending=False)
table_5_5_item = table_5_5_item.iloc[:20]
print('Top 20 by unit-count PI\n', table_5_5_item)
print('')
# Table 5.5: top 20 minor categories by sales-amount PI (per 1000 customers)
table_5_5_price = df_id_pos.groupby(['小カテゴリ名'])['税抜価格'].sum() / customers * 1000
table_5_5_price = table_5_5_price.sort_values(ascending=False)
table_5_5_price = table_5_5_price.iloc[:20]
print('Top 20 by sales-amount PI\n', table_5_5_price)
```
## 5.3 Evaluating Important Categories with ABC Analysis
### 5.3.1 How ABC Analysis Works
```
df_id_pos.head()
# Table 5.6: ABC analysis of major categories
def rank(s):
if s <= 60:
return 'A'
elif s <= 90:
return 'B'
else:
return 'C'
total = len(df_id_pos)
table_5_6 = df_id_pos.groupby(['大カテゴリ名']).size()
table_5_6 = table_5_6.sort_values(ascending=False)
table_5_6 = pd.DataFrame(table_5_6, columns = ['販売個数'])
table_5_6['構成比率'] = table_5_6['販売個数'] / len(df_id_pos) * 100
table_5_6['累積構成比率'] = table_5_6['構成比率'].cumsum()
table_5_6['ランク'] = table_5_6['累積構成比率'].map(rank)
table_5_6
# Figure 5.4: Pareto chart of major categories
graph_5_4 = table_5_6
ax = graph_5_4['構成比率'].plot.bar(color='green')
graph_5_4['累積構成比率'].plot(ax=ax)
# Figure 5.5: Pareto chart of purchase amount per customer
pd.options.display.float_format = '{:.3f}'.format
graph_5_5 = df_id_pos.groupby(['顧客ID'])['税抜価格'].sum()
graph_5_5 = graph_5_5.sort_values(ascending=False)
graph_5_5 = pd.DataFrame(graph_5_5)
graph_5_5['構成比率'] = graph_5_5['税抜価格'] / graph_5_5['税抜価格'].sum() * 100
graph_5_5['累積構成比率'] = graph_5_5['構成比率'].cumsum()
graph_5_5['順位'] = range(1, df_id_pos['顧客ID'].nunique() + 1)
graph_5_5 = graph_5_5.set_index('順位')
xticks=[1, 300, 600, 900, 1200]
ax = graph_5_5['構成比率'].plot(color='green', xticks=xticks, legend=True)
graph_5_5['累積構成比率'].plot(ax=ax.twinx(), xticks=xticks, legend=True)
```
### 5.3.2 Gini Coefficient
```
# Gini coefficient for the Figure 5.4 data
cumsum = graph_5_4['累積構成比率'] / 100
cumsum = cumsum.values.tolist()
cumsum.insert(0, 0) # the curve starts at 0 on the left
n = len(cumsum)
span = 1 / (n - 1) # n points give n - 1 intervals
s = 0
for i in range(1, n):
s += (cumsum[i] + cumsum[i - 1] ) * span / 2
gini = (s - 0.5) / 0.5
gini
```
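The same trapezoid computation can be vectorized with NumPy. This is a sketch following the cell above's convention: the input holds cumulative shares in [0, 1] with the leading 0 already inserted.

```python
# Vectorized Gini coefficient from a cumulative-share curve
import numpy as np

def gini_from_cumshare(cumshare):
    cumshare = np.asarray(cumshare, dtype=float)
    x = np.linspace(0.0, 1.0, len(cumshare))
    # trapezoid rule for the area under the concentration curve
    area = ((cumshare[1:] + cumshare[:-1]) / 2 * np.diff(x)).sum()
    return (area - 0.5) / 0.5
```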
## 5.4 Trading-Area Analysis with an Attraction Model
In the Huff model, an attraction model used for trading-area evaluation, the probability $p_{ij}$ that customer $i$ chooses store $j$ is expressed in terms of the store's floor area $S_j$ and the distance $d_{ij}$ between customer $i$ and store $j$ as
$$
p_{ij} = \frac{ \frac{S_j}{d^\lambda_{ij}} }{ \sum^m_{k=1} \frac{S_k}{d^\lambda_{ik}} }
$$
where $\lambda$ is called the travel-resistance (distance-decay) parameter.
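The formula can be sketched directly in NumPy with an explicit $\lambda$; the store areas and distances below are illustrative (the notebook code that follows fixes $\lambda = 2$ by using squared distances).

```python
# Direct sketch of the Huff choice-probability formula
import numpy as np

def huff_probabilities(S, d, lam=2.0):
    # S: floor areas of the m stores; d: distances from one customer to each store
    attraction = np.asarray(S, dtype=float) / np.asarray(d, dtype=float) ** lam
    return attraction / attraction.sum()

p = huff_probabilities([3000, 5000, 10000], [4.0, 2.0, 8.0])
print(p)
```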
```
# store coordinates and floor area
shops = pd.DataFrame([[-3, -3, 3000], [0, 3, 5000], [5, -5, 10000]],
index=['店舗A', '店舗B', '店舗C'], columns=['x', 'y', 'S'])
shops
# residential-area coordinates and population
residents = pd.DataFrame([
    [-8, 5, 1000],
    [-7, -8, 1000],
    [-3, 4, 3000],
    [3, -3, 5000],
    [7, 7, 4000],
    [7, 0, 6000]
], index=['地域1', '地域2', '地域3', '地域4', '地域5', '地域6'],
   columns=['x', 'y', 'N'])
residents
# derive expected customer counts from the ratio of store-choice probabilities
huff = pd.DataFrame(index=residents.index, columns=shops.index)
# Huff model (lambda = 2: attraction = floor area / squared distance)
for s in huff.columns:
    huff[s] = shops.loc[s].S / ((residents.x - shops.loc[s].x) ** 2 + (residents.y - shops.loc[s].y) ** 2)
# normalize per residential area so the store shares sum to 1
for s in huff.index:
    huff.loc[s] = huff.loc[s] / huff.loc[s].sum()
huff['N'] = residents.N
huff
print('Expected customer counts')
for c in shops.index:
    # area population * share of residents choosing the store
    print(c, (huff.N * huff[c]).sum())
```
## 5.5 Sales Forecasting with Regression Analysis
### 5.5.1 Simple Linear Regression
```
# regression with the daily average unit price as the explanatory variable and the daily visitor count as the target
avg_unit_price = df_id_pos.groupby('日')['税抜単価'].mean().values.reshape(-1, 1)
visits = df_id_pos.groupby('日')['顧客ID'].nunique().values
model = LinearRegression()
model.fit(avg_unit_price, visits)
print('coefficient', model.coef_)
print('intercept', model.intercept_)
print('R^2', model.score(avg_unit_price, visits)) # isn't this rather small?
plt.scatter(avg_unit_price, visits)
plt.plot(avg_unit_price, model.predict(avg_unit_price), color='red')
```
### 5.5.2 Multiple Linear Regression
```
# regression with the daily unit price and the weekday (day mod 7) as explanatory variables, and the visitor count as the target
X = pd.DataFrame()
X['税抜単価'] = df_id_pos.groupby('日')['税抜単価'].mean()
X['曜日'] = df_id_pos['日'].unique() % 7
X = X.loc[:].values
y = df_id_pos.groupby('日')['顧客ID'].nunique().values
model = LinearRegression()
model.fit(X, y)
print('coefficient', model.coef_)
print('intercept', model.intercept_)
print('R^2', model.score(X, y)) # isn't this rather small?
```
```
# !wget http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv
import tensorflow as tf
import re
import numpy as np
import pandas as pd
from tqdm import tqdm
import collections
from unidecode import unidecode
from sklearn.model_selection import train_test_split
def build_dataset(words, n_words):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3], ['SEPARATOR', 4]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK=3):
X = np.zeros((len(corpus),maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
val = dic[k] if k in dic else UNK
X[i,-1 - no]= val
return X
def cleaning(string):
string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
string = re.sub('[^A-Za-z\- ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string).strip()
return string.lower()
df = pd.read_csv('quora_duplicate_questions.tsv', delimiter='\t').dropna()
df.head()
left, right, label = df['question1'].tolist(), df['question2'].tolist(), df['is_duplicate'].tolist()
np.unique(label, return_counts = True)
for i in tqdm(range(len(left))):
left[i] = cleaning(left[i])
right[i] = cleaning(right[i])
left[i] = left[i] + ' SEPARATOR ' + right[i]
concat = ' '.join(left).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def self_attention(inputs, is_training, num_units, num_heads = 8, activation=None):
T_q = T_k = tf.shape(inputs)[1]
Q_K_V = tf.layers.dense(inputs, 3*num_units, activation)
Q, K, V = tf.split(Q_K_V, 3, -1)
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), 0)
K_ = tf.concat(tf.split(K, num_heads, axis=2), 0)
V_ = tf.concat(tf.split(V, num_heads, axis=2), 0)
align = tf.matmul(Q_, K_, transpose_b=True)
align *= tf.rsqrt(tf.to_float(K_.get_shape()[-1].value))
paddings = tf.fill(tf.shape(align), float('-inf'))
lower_tri = tf.ones([T_q, T_k])
lower_tri = tf.linalg.LinearOperatorLowerTriangular(lower_tri).to_dense()
masks = tf.tile(tf.expand_dims(lower_tri,0), [tf.shape(align)[0],1,1])
align = tf.where(tf.equal(masks, 0), paddings, align)
align = tf.nn.softmax(align)
align = tf.layers.dropout(align, 0.1, training=is_training)
x = tf.matmul(align, V_)
x = tf.concat(tf.split(x, num_heads, axis=0), 2)
x += inputs
x = layer_norm(x)
return x
def ffn(inputs, hidden_dim, activation=tf.nn.relu):
x = tf.layers.conv1d(inputs, 4* hidden_dim, 1, activation=activation)
x = tf.layers.conv1d(x, hidden_dim, 1, activation=None)
x += inputs
x = layer_norm(x)
return x
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, learning_rate, dropout, kernel_size = 5):
def cnn(x, scope):
x += position_encoding(x)
with tf.variable_scope(scope, reuse = tf.AUTO_REUSE):
for n in range(num_layers):
                    with tf.variable_scope('attn_%d' % n, reuse=tf.AUTO_REUSE):
                        x = self_attention(x, True, size_layer)
                    with tf.variable_scope('ffn_%d' % n, reuse=tf.AUTO_REUSE):
                        x = ffn(x, size_layer)
with tf.variable_scope('logits', reuse=tf.AUTO_REUSE):
return tf.layers.dense(x, 2)[:, -1]
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X)
self.logits = cnn(embedded_left, 'left')
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 4
embedded_size = 128
learning_rate = 1e-4
maxlen = 50
batch_size = 128
dropout = 0.8
from sklearn.model_selection import train_test_split
vectors = str_idx(left, dictionary, maxlen)
train_X, test_X, train_Y, test_Y = train_test_split(vectors, label, test_size = 0.2)
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,len(dictionary),learning_rate,dropout)
sess.run(tf.global_variables_initializer())
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
for i in pbar:
batch_x = train_X[i:min(i+batch_size,train_X.shape[0])]
batch_y = train_Y[i:min(i+batch_size,train_X.shape[0])]
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X : batch_x,
model.Y : batch_y})
assert not np.isnan(loss)
train_loss += loss
train_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
for i in pbar:
batch_x = test_X[i:min(i+batch_size,test_X.shape[0])]
batch_y = test_Y[i:min(i+batch_size,test_X.shape[0])]
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X : batch_x,
model.Y : batch_y})
test_loss += loss
test_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
train_loss /= (len(train_X) / batch_size)
train_acc /= (len(train_X) / batch_size)
test_loss /= (len(test_X) / batch_size)
test_acc /= (len(test_X) / batch_size)
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
```
## Plotting Installation:
The plotting package allows you to make an interactive CRN plot. Plotting requires the [Bokeh](https://docs.bokeh.org/en/latest/docs/installation.html) and [ForceAtlas2](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0098679) libraries to be installed on your machine. Bokeh is used for plotting; ForceAtlas2 is used for graph layout. To install, type the following into the console:

    conda install bokeh
    pip install fa2

Alternatively, follow the installation instructions in the links above.
## Plotting Example
The CRN plot has a default representation:
* species are circles
- orange circles are RNA species
- blue circles are complexes
- green circles are proteins
- grey circles are DNA
- there's always a purple circle that represents nothing. If your CRN doesn't have reactions that go to nothing, then it won't have any connections.
* reactions are squares
Click on a node (either reaction or species) and all arrows connected to that node will be highlighted.
Mouse over a node (either reaction or species) and a tooltip will tell you what it is.
### Create a BioCRNpyler Model
Here we model a DNA assembly with a regulated promoter in a TxTl mixture.
```
%matplotlib inline
from biocrnpyler import *
import numpy as np
txtl = CRNLab("GFP")
txtl.mixture("mixture1", extract = "TxTlExtract", mixture_volume = 1e-6, mixture_parameters = 'BasicExtract.tsv')
dna = DNAassembly("mydna",promoter=RegulatedPromoter("plac",["laci"]),rbs="UTR1",protein="GFP")
txtl.add_dna(name = "G1", promoter = "pBest", rbs = "BCD2", protein = "GFP", initial_conc = 10, volume = 1e-7)
crn1 = txtl.get_model()
print(crn1.pretty_print())
```
## Plotting a reaction graph
First we import bokeh in order to plot an interactive graph, then use the `graphPlot` function together with BioCRNpyler's `plotting.generate_networkx_graph(crn)` function to produce a graph. Mouse over the circles to see which species they identify and the squares to see reactions.
```
from bokeh.models import (Plot , Range1d)
import bokeh.plotting
import bokeh.io
bokeh.io.output_notebook() #this makes the graph appear in line with the notebook
DG, DGspec, DGrxn = generate_networkx_graph(crn1) #this creates the networkx objects
plot = Plot(plot_width=500, plot_height=500, x_range=Range1d(-500, 500), y_range=Range1d(-500, 500)) #this generates a bokeh plot
graphPlot(DG,DGspec,DGrxn,plot,layout="force",posscale=1) #now you draw the network on the plot. Layout "force" is the default.
#"posscale" scales the entire graph. This mostly just affects the size of the arrows relative to everything else
bokeh.io.show(plot) #if you don't type this the plot won't show
#mouse over nodes to get a tooltip telling you what they are
#mouse over single lines to outline them
#click on nodes to highlight all edges that touch them
#use the magnifying glass symbol to zoom in and out
#click and drag to move around
#NOTE: this function is not deterministic in how it displays the network, because it uses randomness to push the nodes around
```
### Advanced options for generate_networkx_graph
The following options are useful changing how plotting works.
To get better control over the way reactions and species text is display using the following keywords (default values shown below):
use_pretty_print=False
pp_show_rates=True
pp_show_attributes=True
pp_show_material=True
To get better control over the colors of different nodes, use the keywords to set a species' color.
The higher keywords will take precedence.
repr(species): "color"
species.name: "color"
(species.material_type, tuple(specie.attributes)): "color"
species.material_type: "color"
tuple(species.attributes): "color"
```
#this demonstrates the "circle" layout. reactions are in the middle with species on the outside.
#also, the pretty_print text display style
colordict = {
"G1":"red", #will only effect the species dna_G1 and rna_G1
"protein_GFP": "green", #will only effect the species protein_GFP
"protein": "blue" #All protein species, protein_Ribo, protein_RNAase, and protein_RNAP will be blue
#All other species will be grey by default. This will include all complexes.
}
DG, DGspec, DGrxn = generate_networkx_graph(crn1,
colordict = colordict,
use_pretty_print=True, #uses pretty print
                                            pp_show_rates=False, #this would put the reaction rates in the reaction name. It's already listed separately in the tooltip
pp_show_attributes=False,
pp_show_material = True #this lists the material of the species being displayed
)
plot = Plot(plot_width=500, plot_height=500, x_range=Range1d(-500, 500), y_range=Range1d(-500, 500))
graphPlot(DG,DGspec,DGrxn,plot,layout="circle",posscale=1)
bokeh.io.show(plot)
```
# Homework 04 - Numpy
### Exercise 1 - Terminology
Describe the following terms with your own words:
***numpy array:*** a grid-like array type provided by NumPy; all elements share one data type, which makes fast element-wise operations possible
***broadcasting:*** the rules NumPy uses to combine arrays of different (but compatible) shapes in arithmetic operations, by virtually stretching the smaller array
Answer the following questions:
***What is the difference between a Python list and a Numpy array?*** the numpy library can only handle Numpy arrays, which are much faster; also, arrays need to be symmetrical in Numpy (i.e. each row needs to have the same number of columns)
***How can you avoid using loops or list comprehensions when working with Numpy?*** Numpy can combine arrays and numbers arithmetically, while this is not possible for Python lists, where you would need loops or list comprehensions: e.g. np.array([2,4,5]) + 10 results in array([12,14,15]), while this would cause a type error with Python lists; there you would need to do: [i+10 for i in list]
***Give different examples of usages of square brackets `[]` in Python and Numpy? Describe at least two completely different ones!*** in Python: creating lists and accessing elements of a list, e.g. l[1]; in Numpy: accessing elements of an array, e.g. ar[1,1]
***Give different examples of usages of round brackets `()` in Python and Numpy? Describe at least two completely different ones! (Bonus: give a third example not covered in the lecture until now!)*** calling functions such as len() to get the length of a list or numpy array; in Numpy: creating arrays, i.e. np.array([x,y,z]); bonus: creating tuples, e.g. (1, 2)
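The elementwise behaviour described in the answers above can be demonstrated directly:

```python
# Arrays broadcast a scalar over every element; plain lists do not
import numpy as np

arr = np.array([2, 4, 5])
added = arr + 10                     # broadcasting: scalar applied elementwise
lst = [2, 4, 5]
added_list = [i + 10 for i in lst]   # plain lists need a comprehension or loop
print(added, added_list)
```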
### Exercise 2 - rotate and plot points in 2D
Plot the 5 points in 2D defined in the array `points`, then rotate the points by 90 degrees by performing a matrix multiplication with a [rotation matrix](https://en.wikipedia.org/wiki/Rotation_matrix) by using `rotation_matrix @ points` and plot the result in the same plot. The rotation angle needs to be converted to radians before it is passed to `np.cos()` and `np.sin()`, use `np.radians(90)` to do so.
```
import numpy as np
import matplotlib.pyplot as plt
points = np.array([[0, 0],
[1, 1],
[-1, -1],
[0, 1],
[0, 0.7],
]).T
angle = np.radians(90)
cosinus_thet = np.cos(angle)
sinus_thet = np.sin(angle)
rotation_matrix = np.array([[cosinus_thet, -sinus_thet],
[sinus_thet,cosinus_thet]])
points_rotated = rotation_matrix @ points
```
The result should look like this:
```
plt.plot(*points, 'o', label='original points')
plt.plot(*points_rotated, 'o', label='rotated points')
plt.gca().set_aspect('equal')
plt.legend()
```
### Exercise 3 - Flatten the curve
Copy the function `new_infections(t, k)` from last week's homework (exercise 3) and re-do the exercise using Numpy arrays instead of Python lists.
What needs to be changed in the function `new_infections(t, k)` to make this work?
```
import math
import matplotlib.pyplot as plt
import numpy as np
def new_infections(t,k,P,i0):
result = (((np.exp(-k*P*t))*k*(P**2)*(-1+(P/i0)))/(1+(np.exp(-k*P*t))*(-1+(P/i0)))**2)
return result
time=np.arange(250)
Pop=1000000
k= 3/(Pop*10)
inf=new_infections(time,k,Pop,1)
cap=[10000]*len(time)
plt.plot(time,cap,label="health system capacity")
plt.xlabel("Time")
plt.ylabel("Number of new infections")
plt.plot(time,inf,label="new infections,k=3/(10P)")
plt.legend()
```
### Exercise 4 - Mean of random numbers
Generate 100 random values between 0 and 1 (uniformly distributed) and plot them. Then calculate the mean value of the first i values for $i=1,\ldots,100$ and plot this list too.
To solve the exercise find out how to generate random values with Numpy! How did you find an answer? Which possible ways are there? List at least ***2 to 5 different ways*** to look up what a numpy function does!
Note: To solve this exercise, a list comprehension is necessary. Pure Numpy is faster, but probably not a good idea here.
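Some common ways to look up what a NumPy function does, shown here for `np.random.uniform` (the interactive variants are listed as comments since they only work in a REPL or notebook):

```python
# Several ways to inspect a NumPy function's documentation
import numpy as np

doc = np.random.uniform.__doc__   # 1) read the docstring attribute directly
# 2) help(np.random.uniform)      #    built-in interactive help
# 3) np.info(np.random.uniform)   #    NumPy's own lookup utility
# 4) np.random.uniform?           #    Jupyter/IPython question-mark syntax
# 5) the online reference at numpy.org/doc
print(doc.strip().splitlines()[0])
```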
```
import numpy as np
random_1 = np.random.uniform(0,1,100)
#np.random.random_sample((100,))
plt.plot(random_1,'o',label="100 random numbers")
plt.title("100 random numbers")
#calc mean
mean_1 = [np.mean(random_1[:i+1]) for i in np.arange(100)]
#comparing if it worked correctly:
print(mean_1[19])
print(np.mean(random_1[:20]))
#plot
plt.plot(mean_1,'o',label="Mean of x numbers",color="red")
label1 = f'Mean of all 100 numbers = {round(mean_1[99],5)}'
plt.plot(99,mean_1[99],'o',label=label1, color="blue")
label2 = f'Mean of first 50 numbers = {round(mean_1[49],5)}'
plt.plot(49,mean_1[49],'o',label=label2, color="green")
plt.title("Mean of x random numbers")
plt.xlabel("x")
plt.ylabel("Mean")
plt.legend()
#other ways to calculate random numbers
#np.random?
np.random.rand(100)
np.random.randn(100)
np.random.ranf(100)
np.random.uniform(0,1,100)
```
# VAE outlier detection on CIFAR10
## Method
The Variational Auto-Encoder ([VAE](https://arxiv.org/abs/1312.6114)) outlier detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.
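The MSE variant of this score can be sketched in a few lines; this is an illustration of the idea, not alibi-detect's internal code, and the threshold value is made up:

```python
# Per-instance MSE reconstruction error, thresholded to flag outliers
import numpy as np

def mse_outlier_score(x, x_recon):
    # mean squared error per instance, averaged over all feature axes
    return ((x - x_recon) ** 2).reshape(len(x), -1).mean(axis=1)

x = np.zeros((2, 4, 4, 3))
x_recon = np.array([0.1, 0.9]).reshape(2, 1, 1, 1) * np.ones((2, 4, 4, 3))
scores = mse_outlier_score(x, x_recon)
is_outlier = scores > 0.5  # illustrative threshold
```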
## Dataset
[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
```
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Layer, Reshape, InputLayer
from tqdm import tqdm
from alibi_detect.models.losses import elbo
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
```
## Load CIFAR10 data
```
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```
## Load or define outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can use the built-in ```fetch_detector``` function which saves the pre-trained models in a local directory ```filepath``` and loads the detector. Alternatively, you can train a detector from scratch:
```
load_outlier_detector = True
filepath = 'my_path'  # change to directory where model is downloaded

if load_outlier_detector:  # load pretrained outlier detector
    detector_type = 'outlier'
    dataset = 'cifar10'
    detector_name = 'OutlierVAE'
    od = fetch_detector(filepath, detector_type, dataset, detector_name)
    filepath = os.path.join(filepath, detector_name)
else:  # define model, initialize, train and save outlier detector
    latent_dim = 1024

    encoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(32, 32, 3)),
            Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
        ])

    decoder_net = tf.keras.Sequential(
        [
            InputLayer(input_shape=(latent_dim,)),
            Dense(4*4*128),
            Reshape(target_shape=(4, 4, 128)),
            Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
            Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
        ])

    # initialize outlier detector
    od = OutlierVAE(threshold=.015,           # threshold for outlier score
                    score_type='mse',         # use MSE of reconstruction error for outlier detection
                    encoder_net=encoder_net,  # can also pass VAE model instead
                    decoder_net=decoder_net,  # of separate encoder and decoder
                    latent_dim=latent_dim,
                    samples=2)
    # train
    od.fit(X_train,
           loss_fn=elbo,
           cov_elbo=dict(sim=.05),
           epochs=50,
           verbose=False)

    # save the trained outlier detector
    save_detector(od, filepath)
```
## Check the quality of the VAE model
```
idx = 8
X = X_train[idx].reshape(1, 32, 32, 3)
X_recon = od.vae(X)
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
plt.imshow(X_recon.numpy().reshape(32, 32, 3))
plt.axis('off')
plt.show()
```
## Check outliers on original CIFAR images
```
X = X_train[:500]
print(X.shape)
od_preds = od.predict(X,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
print(list(od_preds['data'].keys()))
```
### Plot instance level outlier scores
```
target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances
labels = ['normal', 'outlier']
plot_instance_score(od_preds, target, labels, od.threshold)
```
### Visualize predictions
```
X_recon = od.vae(X).numpy()
plot_feature_outlier_image(od_preds,
X,
X_recon=X_recon,
instance_ids=[8, 60, 100, 330], # pass a list with indices of instances to display
max_instances=5, # max nb of instances to display
outliers_only=False) # only show outlier predictions
```
## Predict outliers on perturbed CIFAR images
We perturb CIFAR images by adding random noise to patches (masks) of the image. For each of the `n_mask_sizes` mask sizes, we sample `n_masks` masks and apply them to each of the `n_imgs` images. Then we predict outliers on the masked instances:
```
# nb of predictions per image: n_masks * n_mask_sizes
n_mask_sizes = 10
n_masks = 20
n_imgs = 50
```
Define masks and get images:
```
mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]
print(mask_sizes)
img_ids = np.arange(n_imgs)
X_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)
print(X_orig.shape)
```
Calculate instance level outlier scores:
```
all_img_scores = []
for i in tqdm(range(X_orig.shape[0])):
    img_scores = np.zeros((len(mask_sizes),))
    for j, mask_size in enumerate(mask_sizes):
        # create masked instances
        X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),
                                  mask_size=mask_size,
                                  n_masks=n_masks,
                                  channels=[0, 1, 2],
                                  mask_type='normal',
                                  noise_distr=(0, 1),
                                  clip_rng=(0, 1))
        # predict outliers
        od_preds_mask = od.predict(X_mask)
        score = od_preds_mask['data']['instance_score']
        # store average score over `n_masks` for a given mask size
        img_scores[j] = np.mean(score)
    all_img_scores.append(img_scores)
```
### Visualize outlier scores vs. mask sizes
```
x_plt = [mask[0] for mask in mask_sizes]
for ais in all_img_scores:
    plt.plot(x_plt, ais)
plt.xticks(x_plt)
plt.title('Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier Score')
plt.show()
ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))
for i, ais in enumerate(all_img_scores):
    ais_np[i, :] = ais
ais_mean = np.mean(ais_np, axis=0)
plt.title('Mean Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.plot(x_plt, ais_mean)
plt.xticks(x_plt)
plt.show()
```
### Investigate an instance-level outlier
```
i = 8 # index of instance to look at
plt.plot(x_plt, all_img_scores[i])
plt.xticks(x_plt)
plt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.show()
```
Reconstruction of masked images and outlier scores per channel:
```
all_X_mask = []
X_i = X_orig[i].reshape(1, 32, 32, 3)
all_X_mask.append(X_i)
# apply masks
for j, mask_size in enumerate(mask_sizes):
    # create masked instances
    X_mask, mask = apply_mask(X_i,
                              mask_size=mask_size,
                              n_masks=1,  # just 1 for visualization purposes
                              channels=[0, 1, 2],
                              mask_type='normal',
                              noise_distr=(0, 1),
                              clip_rng=(0, 1))
    all_X_mask.append(X_mask)
all_X_mask = np.concatenate(all_X_mask, axis=0)
all_X_recon = od.vae(all_X_mask).numpy()
od_preds = od.predict(all_X_mask)
```
Visualize:
```
plot_feature_outlier_image(od_preds,
all_X_mask,
X_recon=all_X_recon,
max_instances=all_X_mask.shape[0],
n_channels=3)
```
## Predict outliers on a subset of features
The sensitivity of the outlier detector can not only be controlled via the `threshold`, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the `outlier_perc` argument in the `predict` function. It specifies the percentage of the features that are used for outlier detection, sorted in descending outlier score order.
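The idea behind `outlier_perc` can be sketched in plain numpy: sort each instance's per-feature scores in descending order and average only the top percentage. This is a simplified illustration of the mechanism, not alibi-detect's actual implementation; the function name and toy scores are made up.

```python
import numpy as np

def instance_score_top_perc(feature_scores, perc):
    # feature_scores: (n_instances, n_features) per-feature outlier scores
    n_feat = feature_scores.shape[1]
    k = max(1, int(n_feat * perc / 100))
    # sort each row descending and keep the top-k feature scores
    top = np.sort(feature_scores, axis=1)[:, ::-1][:, :k]
    return top.mean(axis=1)

scores = np.array([[0.9, 0.1, 0.1, 0.1],
                   [0.2, 0.2, 0.2, 0.2]])
print(instance_score_top_perc(scores, 25))   # top 1 of 4 features: [0.9, 0.2]
print(instance_score_top_perc(scores, 100))  # all features: [0.3, 0.2]
```

A low `outlier_perc` makes the detector more sensitive to a few badly reconstructed features; 100 averages over everything.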
```
perc_list = [20, 40, 60, 80, 100]
all_perc_scores = []
for perc in perc_list:
    od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)
    iscore = od_preds_perc['data']['instance_score']
    all_perc_scores.append(iscore)
```
Visualize outlier scores vs. mask sizes and percentage of features used:
```
x_plt = [0] + x_plt
for aps in all_perc_scores:
    plt.plot(x_plt, aps)
plt.xticks(x_plt)
plt.legend(perc_list)
plt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')
plt.xlabel('Mask Size')
plt.ylabel('Outlier Score')
plt.show()
```
## Infer outlier threshold value
Finding good threshold values can be tricky since they are typically not easy to interpret. The `infer_threshold` method helps find a sensible value. We need to pass a batch of instances `X` and specify what percentage of those we consider to be normal via `threshold_perc`.
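Conceptually this boils down to taking a percentile of the instance-level scores; a minimal sketch (the helper name and toy scores are illustrative, not the library's code):

```python
import numpy as np

def infer_threshold_sketch(instance_scores, threshold_perc):
    # scores above the threshold_perc-th percentile are treated as outliers,
    # so roughly (100 - threshold_perc)% of the batch is flagged
    return np.percentile(instance_scores, threshold_perc)

scores = np.array([0.1, 0.2, 0.3, 0.4, 10.0])
t = infer_threshold_sketch(scores, 80)
print(t, (scores > t).sum())  # 1 instance flagged as an outlier
```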
```
print('Current threshold: {}'.format(od.threshold))
od.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers
print('New threshold: {}'.format(od.threshold))
```
## Full name: Farhang Forghanparast
## R#: 321654987
## HEX: 0x132c10cb
## Title of the notebook
## Date: 9/3/2020
# Laboratory 5 Functions
Functions are simply pre-written code fragments that perform a certain task.
In older procedural languages functions and subroutines are similar, but a function returns a value whereas
a subroutine operates on data.
The difference is subtle but important.
More recent thinking has functions being able to operate on data (they always could) and the value returned may be simply an exit code.
An analogy are the functions in *MS Excel*.
To add numbers, we can use the sum(range) function and type `=sum(A1:A5)` instead of typing `=A1+A2+A3+A4+A5`
## Special Notes for ENGR 1330
1. This notebook has graphics dependencies, remember to download the three .png files included in the web directory. CoCalc (free) does not allow the notebook to do the downloads automatically, so download to your local machine, then upload to CoCalc.
2. There is a python module dependency on a separate file named `mylibrary.py`, as with the .png files download this file and upload to CoCalc to run this notebook without errors. If you forget, its easy enough to create in realtime and that will be demonstrated below.
## Calling the Function
We call a function simply by typing the name of the function or by using the dot notation.
Whether we can use the dot notation or not depends on how the function is written, whether it is part of a class, and how it is imported into a program.
Some functions expect us to pass data to them to perform their tasks.
These data are known as parameters( older terminology is arguments, or argument list) and we pass them to the function by enclosing their values in parenthesis ( ) separated by commas.
For instance, the `print()` function for displaying text on the screen is "called" by typing `print('Hello World')`, where `print` is the name of the function and the string literal `'Hello World'` is the argument.
## Program flow
A function, whether built-in or added, must be defined *before* it is called, otherwise the script will fail. Certain built-in functions "self define" upon start (such as `print()` and `type()`), and we need not worry about those functions. The diagram below illustrates the requisite flow control for functions that need to be defined before use.

An example below will illustrate. Change the cell to code and run it; you should get an error.
Then fix the indicated line (remove the leading "#" in the `import math` line) and rerun; you should get a functioning script.
```
# reset the notebook using a magic function in JupyterLab
%reset -f
# An example, run once as is then activate indicated line, run again - what happens?
x= 4.
sqrt_by_arithmetic = x**0.5
print('Using arithmetic square root of ', x, ' is ',sqrt_by_arithmetic )
import math # import the math package ## activate and rerun
sqrt_by_math = math.sqrt(x) # note the dot notation
print('Using math package square root of ', x,' is ',sqrt_by_math)
```
An alternate way to load just the sqrt() function is shown below, either way is fine.
```
# reset the notebook using a magic function in JupyterLab
%reset -f
# An example, run once as is then activate indicated line, run again - what happens?
x= 4.
sqrt_by_arithmetic = x**0.5
print('Using arithmetic square root of ', x, ' is ',sqrt_by_arithmetic )
from math import sqrt # import sqrt from the math package ## activate and rerun
sqrt_by_math = sqrt(x) # note the notation
print('Using math package square root of ', x,' is ',sqrt_by_math)
```
## Built-In in Primitive Python (Base install)
Base Python includes a set of built-in functions and types that are always available; the figure below lists those functions.

Notice all have the structure of `function_name()`, except `__import__()` which has a constructor type structure, and is not intended for routine use. We will learn about constructors later.
## Added-In using External Packages/Modules and Libraries (e.g. math)
Python is also distributed with a large number of external functions.
These functions are saved
in files known as modules.
To use the built-in codes in Python modules, we have to import
them into our programs first. We do that by using the import keyword.
There are three
ways to import:
1. Import the entire module by writing `import moduleName`. For instance, to import the random module, we write `import random`. To use the `randrange()` function in the random module, we write `random.randrange(1, 10)`;
2. Import and rename the module by writing `import random as r` (where `r` is any name of your choice). Now to use the `randrange()` function, you simply write `r.randrange(1, 10)`; and
3. Import specific functions from the module by writing `from moduleName import name1[, name2[, ... nameN]]`. For instance, to import the `randrange()` function from the random module, we write `from random import randrange`. To import multiple functions, we separate them with a comma: to import the `randrange()` and `randint()` functions, we write `from random import randrange, randint`. To use the function now, we no longer need the dot notation; just write `randrange(1, 10)`.
```
# Example 1 of import
%reset -f
import random
low = 1 ; high = 10
random.randrange(low,high) #generate random number in range low to high
# Example 2 of import
%reset -f
import random as r
low = 1 ; high = 10
r.randrange(low,high)
# Example 3 of import
%reset -f
from random import randrange
low = 1 ; high = 10
randrange(low,high)
```
The modules that come with Python are extensive and listed at
https://docs.python.org/3/py-modindex.html.
There are also other modules that can be downloaded and used
(just like user defined modules below).
In these labs we are building primitive codes to learn how to code and how to create algorithms.
For many practical cases you will want to load a well-tested package to accomplish the tasks.
That exercise is saved for the end of the document.
## User-Built
We can define our own functions in Python and reuse them throughout the program.
The syntax for defining a function is:
    def functionName(argument):
        # code detailing what the function should do
        # note the colon above and the indentation
        ...
        return [expression]
The keyword `def` tells the program that the indented code from the next line onwards is
part of the function.
The keyword `return` tells the program to return an answer from the
function.
There can be multiple return statements in a function.
Once the function executes
a return statement, the program exits the function and continues with *its* next executable
statement.
If the function does not need to return any value, you can omit the return
statement.
Functions can be pretty elaborate; they can search for things in a list, determine variable
types, open and close files, read and write to files.
To get started we will build a few really
simple mathematical functions; we will need this skill in the future anyway, especially in
scientific programming contexts.
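The early-exit behavior of `return` described above can be seen in a short sketch with multiple return statements; once a `return` executes, the rest of the function body is skipped (the function name and labels are just illustrative choices):

```python
def sign_label(x):
    if x < 0:
        return 'negative'   # function exits here for negative input
    if x == 0:
        return 'zero'
    return 'positive'       # reached only if neither return above fired

print(sign_label(-3))  # negative
print(sign_label(0))   # zero
print(sign_label(7))   # positive
```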
### User-built within a Code Block
For our first function we will code $$f(x) = x\sqrt{1 + x}$$ into a function named `dusty()`.
When you run the next cell, all it does is prototype the function (defines it), nothing happens until we use the function.
```
# the function should make the evaluation,
# store it in the local variable temp, and
# return the contents of temp
def dusty(x):
    temp = x * ((1.0 + x) ** 0.5)  # don't need the math package
    return temp

# wrapper to run the dusty function
yes = 0
while yes == 0:
    xvalue = input('enter a numeric value')
    try:
        xvalue = float(xvalue)
        yes = 1
    except:
        print('enter a bloody number! Try again \n')
# call the function, get value, write output
yvalue = dusty(xvalue)
print('f(', xvalue, ') = ', yvalue)  # and we are done
```
## Example
Create the AVERAGE function for three values and test it for these values:
- 3,4,5
- 10,100,1000
- -5,15,5
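One possible way the AVERAGE function might look (the name `average3` is just a choice):

```python
def average3(a, b, c):
    # arithmetic mean of three values
    return (a + b + c) / 3.0

print(average3(3, 4, 5))        # 4.0
print(average3(10, 100, 1000))  # 370.0
print(average3(-5, 15, 5))      # 5.0
```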
## Example
Create the FC function to convert Fahrenheit to Celsius and test it for these values:
- 32
- 15
- 100
*hint: formula: (°F − 32) × 5/9 = °C*
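Following the hinted formula, one possible FC function looks like this (a sketch; your own naming may differ):

```python
def FC(fahrenheit):
    # convert Fahrenheit to Celsius: (F - 32) * 5/9
    return (fahrenheit - 32) * 5.0 / 9.0

print(FC(32))   # 0.0
print(FC(15))   # about -9.44
print(FC(100))  # about 37.78
```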
## Exercise 1
Create the function $$f(x) = e^x - 10\cos(x) - 100$$ (i.e. use the `def` keyword)
    def name(parameters):
        # operations on parameters
        ...
        return (value, or null)
Then apply your function to the value.
Use your function to complete the table below:
| x | f(x) |
|---:|---:|
| 0.0 | |
| 1.50 | |
| 2.00 | |
| 2.25 | |
| 3.0 | |
| 4.25 | |
## Variable Scope
An important concept when defining a function is the concept of variable scope.
Variables defined inside a function are treated differently from variables defined outside.
Firstly, any variable declared within a function is only accessible within the function.
These are known as local variables.
In the `dusty()` function, the variables `x` and `temp` are local to the function.
Any variable declared outside a function in a main program is known as a program variable
and is accessible anywhere in the program.
In the example, the variables `xvalue` and `yvalue` are program variables (global to the program; if they are addressed within a function, they could be operated on.)
Generally we want to protect the program variables from the function unless the intent is to change their values.
The way the function is written in the example, the function cannot damage `xvalue` or `yvalue`.
If a local variable shares the same name as a program variable, any code inside the function
accesses the local variable, while any code outside accesses the program variable.
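A short sketch makes the shadowing rule concrete; the `temp` inside the function is a separate variable from the program-level `temp` (the names here are just for illustration):

```python
temp = 100          # program (global) variable

def shadow_demo(x):
    temp = x * 2    # local variable; shadows the program variable
    return temp

print(shadow_demo(5))  # 10
print(temp)            # still 100 -- the function did not touch it
```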
### As Separate Module/File
In this section we will invent the `neko()` function, export it to a file, so we can reuse it in later notebooks without having to retype or cut-and-paste. The `neko()` function evaluates:
$$f(x) = x\sqrt{|(1 + x)|}$$
It's the same as the `dusty()` function, except it operates on the absolute value inside the radical.
1. Create a text file named "mylibrary.txt"
2. Copy the neko() function script below into that file.

       def neko(input_argument):
           import math  # ok to import into a function
           local_variable = input_argument * math.sqrt(abs(1.0 + input_argument))
           return local_variable

3. Rename mylibrary.txt to mylibrary.py
4. Modify the wrapper script to use the neko function as an external module
```
# wrapper to run the neko function
import mylibrary
yes = 0
while yes == 0:
    xvalue = input('enter a numeric value')
    try:
        xvalue = float(xvalue)
        yes = 1
    except:
        print('enter a bloody number! Try again \n')
# call the function, get value, write output
yvalue = mylibrary.neko(xvalue)
print('f(', xvalue, ') = ', yvalue)  # and we are done
```
In JupyterHub environments, you may discover that changes you make to your external Python file are not reflected when you re-run your script; you need to restart the kernel for the changes to actually take effect. The figure below depicts the notebook/external-file relationship.

* Future version - explain absolute path
## Rudimentary Graphics
Graphing values is part of the broader field of data visualization, which has two main
goals:
1. To explore data, and
2. To communicate data.
In this subsection we will concentrate on introducing skills to start exploring data and to
produce meaningful visualizations we can use throughout the rest of this notebook.
Data visualization is a rich field of study that fills entire books.
The reason to start visualization here instead of elsewhere is that with functions plotting
is a natural activity and we have to import the matplotlib module to make the plots.
The example below is code adapted from Grus (2015) that illustrates simple generic
plots. I added a single line (label the x-axis), and corrected some transcription
errors (not the original author's mistake, just the consequence of how the API handled the
cut-and-paste), but otherwise the code is unchanged.
```
# python script to illustrate plotting
# CODE BELOW IS ADAPTED FROM:
# Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python
# (Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition.
#
from matplotlib import pyplot as plt # import the plotting library from matplotlib
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define one list for years
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3] # and another one for Gross Domestic Product (GDP)
plt.plot( years, gdp, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
# what if "^", "P", "*" for marker?
# what if "red" for color?
# what if "dashdot", '--' for linestyle?
plt.title("Nominal GDP")# add a title
plt.ylabel("Billions of $")# add a label to the x and y-axes
plt.xlabel("Year")
plt.show() # display the plot
```
Now lets put the plotting script into a function so we can make line charts of any two numeric lists
```
def plotAline(list1, list2, strx, stry, strtitle):  # plot list1 on x, list2 on y, with axis labels and title
    from matplotlib import pyplot as plt  # import the plotting library from matplotlib
    plt.plot(list1, list2, color='green', marker='o', linestyle='solid')  # create a line chart, list1 on x-axis, list2 on y-axis
    plt.title(strtitle)  # add a title
    plt.ylabel(stry)  # add a label to the y-axis
    plt.xlabel(strx)  # and to the x-axis
    plt.show()  # display the plot
    return  # null return
# wrapper
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define two lists years and gdp
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
print(type(years[0]))
print(type(gdp[0]))
plotAline(years,gdp,"Year","Billions of $","Nominal GDP")
```
## Example
Use the plotting script and create a function that draws a straight line between two points.
## Example - Let's have some fun!
Copy the wrapper script for the `plotAline()` function, and modify the copy to create a plot of
$$ x = 16\sin^3(t) $$
$$ y = 13\cos(t) - 5\cos(2t) - 2\cos(3t) - \cos(4t) $$
for t ranging over [0, 2$\pi$] (inclusive).
Label the plot and the plot axes.
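One way the computation part of this example might be set up (a sketch; the 361-point `linspace` grid is an arbitrary choice, and the commented call assumes the `plotAline()` function from the cell above has been run):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 361)  # t over [0, 2*pi] inclusive
x = 16 * np.sin(t) ** 3
y = 13 * np.cos(t) - 5 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(4 * t)

# then reuse the plotting function defined earlier, e.g.:
# plotAline(list(x), list(y), 'x', 'y', 'A Parametric Heart')
print(x.min(), x.max())  # roughly -16 .. 16
```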
## Exercise 2
Copy the wrapper script for the `plotAline()` function, and modify the copy to create a plot of
$$ y = x^2 $$
for x ranging from 0 to 9 (inclusive) in steps of 1.
Label the plot and the plot axes.
## Exercise 3
Use your function from Exercise 1.
$$f(x) = e^x - 10\cos(x) - 100$$
And make a plot where $x$ ranges from 0 to 15 in increments of 0.25. Label the plot and the plot axes.
## References
1. Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python
(Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition.
2. Call Expressions in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/03/3/Calls.html
3. Functions and Tables in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/08/Functions_and_Tables.html
4. Visualization in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/07/Visualization.html
## Audio Monitor
Monitors the audio by continuously recording and plotting the recorded audio.
You will need to start osp running in a terminal.
```
# make Jupyter use the whole width of the browser
from IPython.display import Image, display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# Pull in all the necessary python modules
import plotly.graph_objects as go
from IPython.display import Audio
from osp_control import OspControl
import os
import json
import numpy as np
import time
import ipywidgets as widgets
# Connect to OSP process
osp = OspControl() # connects to a local process
# set the number of bands and mute mic
osp.send({"method": "set", "data": {"num_bands": 11}})
osp.send_chan({"alpha": 0})
def plot_fft(fig, title, out1, update=True):
    ftrans = np.abs(np.fft.fft(out1) / len(out1))
    outlen = len(out1)
    values = np.arange(int(outlen / 2))
    period = outlen / 32000
    frequencies = values / period
    if update:
        with fig.batch_update():
            fig.data[0].y = ftrans
            fig.data[0].x = frequencies
            fig.update_layout()
        return
    fig.add_trace(go.Scatter(x=frequencies, line_color='green', opacity=1, y=ftrans))
    fig.update_layout(title=title,
                      xaxis_title='Frequency',
                      yaxis_title='Amplitude',
                      template='plotly_white')
def plot_mono(fig, title, out1, lab1, rate=48000, update=True):
    # Create x values.
    x = np.array(range(len(out1))) / rate
    if update:
        with fig.batch_update():
            fig.data[0].y = out1
            fig.data[0].x = x
            fig.update_layout()
        return
    fig.add_trace(go.Scatter(x=x,
                             y=out1,
                             name=lab1,
                             opacity=1))
    fig.update_layout(title=title,
                      # yaxis_range=[-1, 1],
                      xaxis_title='Time(sec)',
                      yaxis_title='Amplitude',
                      template='plotly_white')
def monitor(afig, ffig, interval=2):
    update = False
    while True:
        # set record file name
        filename = os.path.join(os.getcwd(), 'tmpaudio')
        osp.send_chan({'audio_rfile': filename})
        # start recording output
        osp.send_chan({"audio_record": 1})
        # record for {interval} seconds
        time.sleep(interval)
        # stop recording
        osp.send_chan({"audio_record": 0})
        # read the saved recording data
        with open(filename, "rb") as f:
            data = f.read()
        data = np.frombuffer(data, dtype=np.float32)
        data = np.reshape(data, (2, -1), 'F')
        # plot
        plot_mono(afig, '', data[0], 'out', update=update)
        plot_fft(ffig, '', data[0], update=update)
        if update == False:
            update = True
audio_fig = go.FigureWidget()
fft_fig = go.FigureWidget()
widgets.VBox([audio_fig, fft_fig])
monitor(audio_fig, fft_fig)
```
```
# This line is a comment because it starts with a pound sign (#). That
# means Python ignores it. A comment is just for a human reading the
# code. This project involves 20 small problems to give you practice
# with operators, types, and boolean logic. We'll give you directions
# (as comments) on what to do for each problem.
#
# Before you get started, please tell us who you are, putting your
# Net ID and your partner's Net ID below (or none if you're working solo)
# project: p2
# submitter: zchen697
# partner: none
#q1: what is 1+1?
# we did this one for you :)
1+1
#q2: what is 2000+20?
2000+20
#q3: what is the type of 2020?
# we did this one for you too...
type(2020)
#q4: what is the type of 2020/8?
type(2020/8)
#q5: what is the type of 2020//8?
type(2020//8)
#q6: what is the type of "2020"? Note the quotes!
type("2020")
#q7: what is the type of "True"? Note the quotes!
type("True")
#q8: what is the type of False?
type(False)
#q9: what is the type of 2020<2019?
type(2020<2019)
#q10: what is two thousand plus twenty?
# fix the below to make Python do an addition that produces 2020
2000 + 20
#q11: please fix the following to display 6 smileys
":)" * 6
#q12: please fix the following to get 42
6 * 7
#q13: what is the volume of a cube with side length of 8?
# oops, it is currently computing the area of a square (please fix)
8 ** 3
#q14: you start with 20 dollars and triple your money every decade.
# how much will you have after 4 decades?
20 * (3 ** 4)
#q15: fix the Python logic to match the English
# In English: 220 is less than 2020 and greater than 120
# In Python:
220 < 2020 and 220 > 120
#q16: change ONLY the value of x to get True for the output
x = 270
x < 2020 and x > 220
#q17: change ONLY the value of x to get True for the output
x = 2020.5
(x > 2000 and x < 2021) and (x > 2019 and x < 3000)
#q18: what???
x = 2020
# fix the following logic so we get True, not 2020.
# The correct logic should check whether x is either
# -2020 or 2020. The problem is with types: to the left
# of the or, we have a boolean, and to the right of
# the or, we have an integer. This semester, we will
# always try to have a boolean to both the left and
# right of logical operators such as or.
x == -2020 or x == 2020
#q19: we should win!
number = 36
# There are three winners, with numbers 36, 220, and 2020.
# The following should output True if number is changed
# to be any of these (but the logic is currently wrong).
#
# Please update the following expression so we get True
# if the number variable is changed to any of the winning
# options.
# Did we win?
number == 36 or number == 220 or number == 2020
#q20: what is the volume of a cylinder with radius 4 and height 2?
# you may assume PI is 3.14 if you like (a close answer is good enough)
4 * 4 * 3.14 * 2
```
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._
---
# The Python Programming Language: Functions
```
x = 1
y = 2
x + y
x
```
<br>
`add_numbers` is a function that takes two numbers and adds them together.
```
def add_numbers(x, y):
    return x + y
add_numbers(1, 2)
```
<br>
`add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell.
```
def add_numbers(x, y, z=None):
    if (z==None):
        return x+y
    else:
        return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
```
<br>
`add_numbers` updated to take an optional flag parameter.
```
def add_numbers(x, y, z=None, flag=False):
    if (flag):
        print('Flag is true!')
    if (z==None):
        return x + y
    else:
        return x + y + z
print(add_numbers(1, 2, flag=True))
```
<br>
Assign function `add_numbers` to variable `a`.
```
def add_numbers(x, y):
    return x + y
a = add_numbers
a(1,2)
```
<br>
# The Python Programming Language: Types and Sequences
<br>
Use `type` to return the object's type.
```
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
```
<br>
Tuples are an immutable data structure (cannot be altered).
```
x = (1, 'a', 2, 'b')
type(x)
```
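We can also demonstrate the immutability directly; trying to assign to an element raises a `TypeError` (a small added illustration):

```python
x = (1, 'a', 2, 'b')
try:
    x[0] = 99          # tuples do not support item assignment
    mutated = True
except TypeError as e:
    mutated = False
    print('TypeError:', e)
```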
<br>
Lists are a mutable data structure.
```
x = [1, 'a', 2, 'b']
type(x)
```
<br>
Use `append` to append an object to a list.
```
x.append(3.3)
print(x)
```
<br>
This is an example of how to loop through each item in the list.
```
for item in x:
    print(item)
```
<br>
Or using the indexing operator:
```
i=0
while( i != len(x) ):
    print(x[i])
    i = i + 1
```
<br>
Use `+` to concatenate lists.
```
[1,2] + [3,4]
```
<br>
Use `*` to repeat lists.
```
[1]*3
```
<br>
Use the `in` operator to check if something is inside a list.
```
1 in [1, 2, 3]
```
<br>
Now let's look at strings. Use bracket notation to slice a string.
```
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
```
<br>
This will return the last element of the string.
```
x[-1]
```
<br>
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
```
x[-4:-2]
```
<br>
This is a slice from the beginning of the string and stopping before the 3rd element.
```
x[:3]
```
<br>
And this is a slice starting from the 3rd element of the string and going all the way to the end.
```
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
```
<br>
`split` returns a list of all the words in a string, or a list split on a specific character.
```
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
```
<br>
Make sure you convert objects to strings before concatenating.
```
'Chris' + 2
'Chris' + str(2)
```
<br>
Dictionaries associate keys with values.
```
x = {'Christopher Brooks': 'brooksch@umich.edu', 'Bill Gates': 'billg@microsoft.com'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
```
<br>
Iterate over all of the keys:
```
for name in x:
    print(x[name])
```
<br>
Iterate over all of the values:
```
for email in x.values():
    print(email)
```
<br>
Iterate over all of the items in the list:
```
for name, email in x.items():
    print(name)
    print(email)
```
<br>
You can unpack a sequence into different variables:
```
x = ('Christopher', 'Brooks', 'brooksch@umich.edu')
fname, lname, email = x
fname
lname
```
<br>
Make sure the number of values you are unpacking matches the number of variables being assigned.
```
x = ('Christopher', 'Brooks', 'brooksch@umich.edu', 'Ann Arbor')
fname, lname, email = x
```
<br>
# The Python Programming Language: More on Strings
```
print('Chris' + 2)
print('Chris' + str(2))
```
<br>
Python has a built in method for convenient string formatting.
```
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
```
<br>
# Reading and Writing CSV files
<br>
Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.
```
import csv
%precision 2
with open('mpg.csv') as csvfile:
    mpg = list(csv.DictReader(csvfile))

mpg[:3]  # The first three dictionaries in our list.
```
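If `mpg.csv` isn't at hand, the same pattern can be tried on an in-memory string via `io.StringIO`; a small self-contained sketch (the rows here are made up, not taken from the real file):

```python
import csv
import io

fake_csv = "manufacturer,cty,hwy\naudi,18,29\naudi,21,30\n"
rows = list(csv.DictReader(io.StringIO(fake_csv)))
print(rows[0])  # each row becomes a dict keyed by the header line
```

Note that every value read this way is a string, which is why the averages below call `float` first.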
<br>
`csv.DictReader` has read in each row of our csv file as a dictionary. `len` shows that our list consists of 234 dictionaries.
```
len(mpg)
```
<br>
`keys` gives us the column names of our csv.
```
mpg[0].keys()
```
<br>
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
```
sum(float(d['cty']) for d in mpg) / len(mpg)
```
<br>
Similarly this is how to find the average hwy fuel economy across all cars.
```
sum(float(d['hwy']) for d in mpg) / len(mpg)
```
<br>
Use `set` to return the unique values for the number of cylinders the cars in our dataset have.
```
cylinders = set(d['cyl'] for d in mpg)
cylinders
```
<br>
Here's a more complex example where we are grouping the cars by number of cylinders, and finding the average cty mpg for each group.
```
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
```
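The double loop above re-scans `mpg` once per cylinder level; a single pass with `collections.defaultdict` accumulates sums and counts in one sweep. A sketch on made-up rows (the real `mpg` list would work the same way):

```python
from collections import defaultdict

rows = [{'cyl': '4', 'cty': '21'}, {'cyl': '4', 'cty': '19'}, {'cyl': '8', 'cty': '12'}]
totals = defaultdict(lambda: [0.0, 0])  # cyl -> [sum of cty, count]
for d in rows:
    totals[d['cyl']][0] += float(d['cty'])
    totals[d['cyl']][1] += 1
avg_by_cyl = sorted((c, s / n) for c, (s, n) in totals.items())
print(avg_by_cyl)
```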
<br>
Use `set` to return the unique values for the class types in our dataset.
```
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
```
<br>
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
```
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
        if d['class'] == t: # if the vehicle class matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
```
<br>
# The Python Programming Language: Dates and Times
```
import datetime as dt
import time as tm
```
<br>
`time` returns the current time in seconds since the Epoch (January 1st, 1970).
```
tm.time()
```
<br>
Convert the timestamp to datetime.
```
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
```
<br>
Handy datetime attributes:
```
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc. from a datetime
```
<br>
`timedelta` is a duration expressing the difference between two dates.
```
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
```
<br>
`date.today` returns the current local date.
```
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
```
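For turning dates into strings and back, `strftime` formats using format codes and `strptime` parses; a quick sketch:

```python
import datetime as dt

d = dt.date(2017, 5, 21)
s = d.strftime('%Y-%m-%d')                           # date -> string
parsed = dt.datetime.strptime(s, '%Y-%m-%d').date()  # string -> date
print(s)
print(parsed == d)
```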
<br>
# The Python Programming Language: Objects and map()
<br>
An example of a class in python:
```
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
```
<br>
Here's an example of mapping the `min` function between two lists.
```
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
```
<br>
Now let's iterate through the map object to see the values.
```
for item in cheapest:
print(item)
```
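Because `map` returns a lazy iterator, it can only be consumed once; wrapping it in `list` materializes all the values immediately. A short sketch:

```python
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = list(map(min, store1, store2))  # materialize the map object
print(cheapest)  # [9.0, 11.0, 12.34, 2.01]
```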
<br>
# The Python Programming Language: Lambda and List Comprehensions
<br>
Here's an example of lambda that takes in three parameters and adds the first two.
```
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
```
<br>
Let's iterate from 0 to 999 and return the even numbers.
```
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
```
<br>
Now the same thing but with list comprehension.
```
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
```
<br>
# The Python Programming Language: Numerical Python (NumPy)
```
import numpy as np
```
<br>
## Creating Arrays
Create a list and convert it to a numpy array
```
mylist = [1, 2, 3]
x = np.array(mylist)
x
```
<br>
Or just pass in a list directly
```
y = np.array([4, 5, 6])
y
```
<br>
Pass in a list of lists to create a multidimensional array.
```
m = np.array([[7, 8, 9], [10, 11, 12]])
m
```
<br>
Use the `shape` attribute to find the dimensions of the array. (rows, columns)
```
m.shape
```
<br>
`arange` returns evenly spaced values within a given interval.
```
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
```
<br>
`reshape` returns an array with the same data with a new shape.
```
n = n.reshape(3, 5) # reshape array to be 3x5
n
```
<br>
`linspace` returns evenly spaced numbers over a specified interval.
```
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
```
<br>
`resize` changes the shape and size of array in-place.
```
o.resize(3, 3)
o
```
<br>
`ones` returns a new array of given shape and type, filled with ones.
```
np.ones((3, 2))
```
<br>
`zeros` returns a new array of given shape and type, filled with zeros.
```
np.zeros((2, 3))
```
<br>
`eye` returns a 2-D array with ones on the diagonal and zeros elsewhere.
```
np.eye(3)
```
<br>
`diag` extracts a diagonal or constructs a diagonal array.
```
np.diag(y)
```
<br>
Create an array using a repeating list (or see `np.tile`)
```
np.array([1, 2, 3] * 3)
```
<br>
Repeat elements of an array using `repeat`.
```
np.repeat([1, 2, 3], 3)
```
<br>
#### Combining Arrays
```
p = np.ones([2, 3], int)
p
```
<br>
Use `vstack` to stack arrays in sequence vertically (row wise).
```
np.vstack([p, 2*p])
```
<br>
Use `hstack` to stack arrays in sequence horizontally (column wise).
```
np.hstack([p, 2*p])
```
<br>
## Operations
Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power.
```
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
```
<br>
**Dot Product:**
$ \begin{bmatrix}x_1 \ x_2 \ x_3\end{bmatrix}
\cdot
\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}
= x_1 y_1 + x_2 y_2 + x_3 y_3$
```
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
```
<br>
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
```
z = np.array([y, y**2])
z
```
<br>
The shape of array `z` is `(2,3)` before transposing.
```
z.shape
```
<br>
Use `.T` to get the transpose.
```
z.T
```
<br>
The number of rows has swapped with the number of columns.
```
z.T.shape
```
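Transposes often show up in matrix products: for a `(2, 3)` array, multiplying by its transpose with the `@` operator yields a `(2, 2)` result. A small self-contained sketch (redefining `z` so the cell stands alone):

```python
import numpy as np

z = np.array([[4, 5, 6], [16, 25, 36]])
gram = z @ z.T        # (2, 3) x (3, 2) -> (2, 2)
print(gram.shape)
print(gram.tolist())
```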
<br>
Use `.dtype` to see the data type of the elements in the array.
```
z.dtype
```
<br>
Use `.astype` to cast to a specific type.
```
z = z.astype('f')
z.dtype
```
<br>
## Math Functions
Numpy has many built in math functions that can be performed on arrays.
```
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
```
<br>
`argmax` and `argmin` return the index of the maximum and minimum values in the array.
```
a.argmax()
a.argmin()
```
<br>
## Indexing / Slicing
```
s = np.arange(13)**2
s
```
<br>
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
```
s[0], s[4], s[-1]
```
<br>
Use `:` to indicate a range. `array[start:stop]`
Leaving `start` or `stop` empty will default to the beginning/end of the array.
```
s[1:5]
```
<br>
Use negatives to count from the back.
```
s[-4:]
```
<br>
A second `:` can be used to indicate step-size. `array[start:stop:stepsize]`
Here we start at the 5th element from the end and count backwards by 2 until the beginning of the array is reached.
```
s[-5::-2]
```
<br>
Let's look at a multidimensional array.
```
r = np.arange(36)
r.resize((6, 6))
r
```
<br>
Use bracket notation to slice: `array[row, column]`
```
r[2, 2]
```
<br>
And use `:` to select a range of rows or columns
```
r[3, 3:6]
```
<br>
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
```
r[:2, :-1]
```
<br>
This is a slice of the last row, and only every other element.
```
r[-1, ::2]
```
<br>
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`)
```
r[r > 30]
```
<br>
Here we are assigning all values in the array that are greater than 30 to the value of 30.
```
r[r > 30] = 30
r
```
<br>
## Copying Data
Be careful with copying and modifying arrays in NumPy!
`r2` is a slice of `r`
```
r2 = r[:3,:3]
r2
```
<br>
Set this slice's values to zero ([:] selects the entire array)
```
r2[:] = 0
r2
```
<br>
`r` has also been changed!
```
r
```
<br>
To avoid this, use `r.copy` to create a copy that will not affect the original array
```
r_copy = r.copy()
r_copy
```
<br>
Now when r_copy is modified, r will not be changed.
```
r_copy[:] = 10
print(r_copy, '\n')
print(r)
```
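Whether two arrays share the same underlying buffer can be checked with `np.shares_memory`; a quick sketch distinguishing a view from a copy:

```python
import numpy as np

r = np.arange(36).reshape(6, 6)
view = r[:3, :3]         # slicing returns a view into r
copy = r[:3, :3].copy()  # .copy() allocates new memory
print(np.shares_memory(r, view))   # True
print(np.shares_memory(r, copy))   # False
```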
<br>
### Iterating Over Arrays
Let's create a new 4 by 3 array of random numbers 0-9.
```
test = np.random.randint(0, 10, (4,3))
test
```
<br>
Iterate by row:
```
for row in test:
print(row)
```
<br>
Iterate by index:
```
for i in range(len(test)):
print(test[i])
```
<br>
Iterate by row and index:
```
for i, row in enumerate(test):
print('row', i, 'is', row)
```
<br>
Use `zip` to iterate over multiple iterables.
```
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
```
```
from prettytable import PrettyTable
def math_expr(x):
return 230*x**4+18*x**3+9*x**2-221*x-9
#return x**3 - 0.165*x**2 + 3.993e-4
#return (x-1)**3 + .512
dig = 5
u = -1
v = 1
it = 50
#u,v,it,dig
# x**3 - 0.165*x**2 + 3.993e-4 = 0
def bisection_method(func,xl=0,xu=5,iter=10,digits = 16):
tab = PrettyTable()
acc = 0.5*10**(-digits)
if (func(xl)*func(xu) > 0) :
print("f(xl)*f(xu) > 0 ")
return
xm = (xl+xu)/2
i = 1
e_a = None
#print("Iter\txl\txu\txm\t|ε_a|%\tf(xm)")
tab.field_names = ['Iter','xl','f(xl)','xu','f(xu)','xm','f(xm)','|ε_a|%']
#print(f"{i}\t{xl:.5f}\t{xu:.5f}\t{xm:.5f}\t{e_a}\t{func(xm):.5g}")
tab.add_row([i,"%.5f" % xl,"%.5g" % func(xl),"%.5f" % xu,"%.5g" % func(xu),"%.5f" % xm,"%.5g"% func(xm),e_a])
e_a = acc+1
while(i < iter and e_a > acc):
if func(xl)*func(xm) < 0:
xu = xm
elif func(xl)*func(xm) > 0:
xl = xm
else:
break
i+=1
xmn = (xl+xu)/2
e_a = abs((xmn-xm)/xmn)
xm = xmn
#print(f"{i}\t{xl:.5f}\t{xu:.5f}\t{xm:.5f}\t{e_a*100:.5f}\t{func(xm):.5g}")
tab.add_row([i,"%.5g" % xl,"%.5g" % func(xl),"%.5g" % xu,"%.5g" % func(xu),"%.6f" % xm,"%.5g" % func(xm),"%.5f" % (e_a*100)])
print(tab)
bisection_method(math_expr,u,v,it,dig)
def derivative(func,h=1e-5):
def f_prime(x):
return (func(x+h)-func(x))/h
a = f_prime
return a
def newtonraphson_method(func,x=0,iter=10,digits = 16):
tab = PrettyTable()
acc = 0.5*10**(-digits)
i = 0
e_a = None
tab.field_names = ['Iter','x_i-1','f(x)',"f'(x)",'x_i','|ε_a|%']
#print(f"{i}\t{x:.5g}\t{e_a}\t{func(x):.5g}")
fprime = derivative(func)
e_a = acc+1
while(i < iter and e_a > acc):
i+=1
xn = x - func(x)/fprime(x)
e_a = abs((xn-x)/xn)
tab.add_row([i,"%.5g" % x,"%.5g" % func(x),"%.5g" % fprime(x),"%.6g" % xn,"%.5f" % (e_a*100)])
x = xn
#print(f"{i}\t{x:.5g}\t{e_a*100:.5g}\t{func(x):.5g}")
print(tab)
newtonraphson_method(math_expr,u,it,dig)
def secant_method(func,x0=0,x1=5,iter=10,digits = 16):
tab = PrettyTable()
acc = 0.5*10**(-digits)
i = 0
#e_a = abs((x1-x0)/x1)
tab.field_names = ['Iter','x_i-1','f(x_i-1)','x_i','f(x_i)','x_i+1','f(x_i+1)','|ε_a|%']
#print(f"{i}\t{x:.5f}\t{e_a}\t{func(x):.5g}")
#tab.add_row([i,"%.5g" % x0,"%.5g" % x1,None,"%.5g" % (e_a*100),None])
e_a = acc+1
while(i < iter and e_a > acc):
fprime = (func(x1)-func(x0))/(x1-x0)
i+=1
xn = x1 - func(x1)/fprime
e_a = abs((xn-x1)/xn)
tab.add_row([i,"%.5g" % x0,"%.5g" % func(x0),"%.5g" % x1,"%.5g" % func(x1),"%.6g" % xn,"%.5g" % func(xn),"%.5f" % (e_a*100)])
x0 = x1
x1 = xn
#print(f"{i}\t{x:.5f}\t{e_a*100:.5f}\t{func(x):.5g}")
print(tab)
secant_method(math_expr,u,v,it,dig)
def falseposition_method(func,xl=0,xu=5,iter=10,digits = 16):
tab = PrettyTable()
acc = 0.5*10**(-digits)
if (func(xl)*func(xu) > 0) :
print("f(xl)*f(xu) > 0 ")
return
xm = (xu*func(xl) - xl*func(xu))/(func(xl) - func(xu))
i = 1
e_a = None
#print("Iter\txl\txu\txm\t|ε_a|%\tf(xm)")
tab.field_names = ['Iter','xl','f(xl)','xu','f(xu)','xm','f(xm)','|ε_a|%']
#print(f"{i}\t{xl:.5f}\t{xu:.5f}\t{xm:.5f}\t{e_a}\t{func(xm):.5g}")
tab.add_row([i,"%.5f" % xl,"%.5g" % func(xl),"%.5f" % xu,"%.5g" % func(xu),"%.5f" % xm,"%.6g" % func(xm),e_a])
e_a = acc+1
while(i < iter and e_a > acc):
if func(xl)*func(xm) < 0:
xu = xm
elif func(xl)*func(xm) > 0:
xl = xm
else:
break
i+=1
xmn = (xu*func(xl) - xl*func(xu))/(func(xl) - func(xu))
e_a = abs((xmn-xm)/xmn)
xm = xmn
#print(f"{i}\t{xl:.5f}\t{xu:.5f}\t{xm:.5f}\t{e_a*100:.5f}\t{func(xm):.5g}")
tab.add_row([i,"%.5f" % xl,"%.5g" % func(xl),"%.5f" % xu,"%.5g" % func(xu),"%.5f" % xm,"%.6g" % func(xm),"%.5f" % (e_a*100)])
print(tab)
falseposition_method(math_expr,u,v,it,dig)
def math_expr(x):
#return 40*x**1.5-875*x+35000
return 230*x**4+18*x**3+9*x**2-221*x-9
#return x**3 - 0.165*x**2 + 3.993e-4
#return (x-1)**3 + .512
dig = 4
u = -1
v = 0
it = 50
#u,v,it,dig
# x**3 - 0.165*x**2 + 3.993e-4 = 0
#newtonraphson_method(math_expr,300,it,dig)
falseposition_method(math_expr,-1,0,it,dig)
```
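Stripped of the `PrettyTable` bookkeeping, the bisection logic above reduces to a few lines. Note that the first `bisection_method(math_expr, -1, 1, ...)` call exits early, since the polynomial is positive at both -1 and 1; `[0, 1]` does bracket a sign change (f(0) = -9, f(1) = 27). A minimal sketch that finds that root and checks the residual:

```python
def f(x):
    return 230 * x**4 + 18 * x**3 + 9 * x**2 - 221 * x - 9

def bisection(func, lo, hi, iters=80):
    # repeatedly halve the bracketing interval, keeping the sign change inside it
    assert func(lo) * func(hi) < 0, "interval must bracket a sign change"
    for _ in range(iters):
        mid = (lo + hi) / 2
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisection(f, 0.0, 1.0)
print(root, f(root))
```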
<font size="6"> <center> **[Open in Github](https://github.com/danielalcalde/apalis/blob/master/timings.ipynb)**</center></font>
# Timings
## Apalis vs Ray
In this notebook, the overhead of the two libraries, Ray and Apalis, is measured. The different Apalis syntaxes are compared in the next section. We conclude that for parallel workloads longer than **10ms**, Ray and Apalis perform similarly, but below that Apalis outperforms Ray.
```
import apalis
import ray
import timeit
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from collections import defaultdict
num_cpus = 16
ray.init(num_cpus=num_cpus);
class A:
def function(self, n):
cum = 0
for i in range(n):
cum += i
return cum
Ar = ray.remote(A)
obj_number = 16
objs = [A() for _ in range(obj_number)]
objs_apalis = [apalis.Handler(a) for a in objs]
objs_apalis_G = apalis.GroupHandler(objs, threads=num_cpus)
objs_ray = [Ar.remote() for _ in range(obj_number)]
```
Ray:
```
%%time
tokens = [obj.function.remote(1) for obj in objs_ray]
print(ray.get(tokens))
```
Apalis:
```
%%time
tokens = [obj.function(1) for obj in objs_apalis]
print(apalis.get(tokens))
```
All the functions that will be tested:
```
funcs = dict()
funcs["single"] = lambda x: [obj.function(x) for obj in objs]
funcs["ray"] = lambda x: ray.get([obj.function.remote(x) for obj in objs_ray])
funcs["apalis"] = lambda x: apalis.get([obj.function(x) for obj in objs_apalis])
```
Testing:
```
repeat = 50
Ns = np.asarray(np.logspace(2, 7, 20), dtype="int")
ts = defaultdict(lambda: np.zeros((len(Ns), repeat)))
for r in range(repeat):
for i, n in enumerate(Ns):
for j, (label, func) in enumerate(funcs.items()):
number = max(1, 2 * 10 ** 4 // n)
t = timeit.timeit(lambda : func(n), number=number)
ts[label][i][r] = t / number
ts_std = defaultdict(lambda: np.zeros(len(Ns)))
for label in ts:
ts_std[label] = ts[label].std(axis=1) / np.sqrt(repeat)
ts[label] = ts[label].mean(axis=1)
```
Apalis vs Ray timings:
```
plt.ylabel("Time / [s]")
plt.xlabel("Operations")
for label in ["single", "ray", "apalis"]:
plt.errorbar(Ns, ts[label], yerr=ts_std[label], fmt="--o", label=label)
plt.xscale("log")
plt.yscale("log")
plt.xticks(fontsize=14)
plt.legend()
plt.show()
```
The speedup over single-threaded execution vs. the time the function takes when run single-threaded:
```
ax = plt.gca()
plt.ylabel(r"$\frac{t_{multi}}{t_{single}}$", size=25)
plt.xlabel("$t_{single}$/[ms]", size=20)
plt.plot([],[])
for label in ["ray", "apalis"]:
error = ts_std["single"] / ts[label] + ts["single"] * ts_std[label] / ts[label] ** 2
plt.errorbar(ts["single"] * 1000 / num_cpus, ts["single"] / ts[label], yerr=error, fmt="--o", label=label)
plt.xscale("log")
plt.yscale("log")
plt.axhline(1, color="gray")
plt.axhline(num_cpus, color="gray")
ax.yaxis.set_major_formatter(mticker.ScalarFormatter())
ax.xaxis.set_major_formatter(mticker.ScalarFormatter())
ax.set_xticks([0.01, 0.1, 1, 10, 100, 1000])
ax.set_xticklabels(["0.01", "0.1", "1", "10", "100", "1000"])
ax.set_yticks([0.1, 1, 16])
ax.set_yticklabels(["0.1", "1", "16"])
plt.legend()
plt.show()
```
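The `error` expression used for the error bars is first-order error propagation for a ratio; for $r = a/b$ (here $a = t_{single}$, $b = t_{label}$) with uncertainties $\sigma_a$, $\sigma_b$ it is approximately:

```latex
r = \frac{a}{b}, \qquad
\sigma_r \approx \frac{\sigma_a}{b} + \frac{a\,\sigma_b}{b^2}
```

This sum of absolute terms is a conservative variant of the usual quadrature sum $\sigma_r = r\sqrt{(\sigma_a/a)^2 + (\sigma_b/b)^2}$ for independent errors.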
Note that the longer the task takes, the closer we come to a 16x improvement.
## Performance of the different apalis syntaxes
The `GroupHandler` handles several objects at a time. It should be used instead of `Handler` when the number of objects to parallelize is larger than the number of cores. Here we compare the case of parallelizing 32 objects on 16 cores.
```
obj_number = 32
objs = [A() for _ in range(obj_number)]
objs_apalis = [apalis.Handler(a) for a in objs]
objs_apalis_G = apalis.GroupHandler(objs, threads=num_cpus)
```
Handler:
```
%%time
tokens = [obj.function(1) for obj in objs_apalis]
print(apalis.get(tokens))
```
Group Handler:
```
%%time
tasks = [obj.function(1) for obj in objs_apalis_G]
print(objs_apalis_G.multiple_run(tasks)())
```
Faster but more cumbersome syntax:
```
%%time
tasks = [{"i": i, "mode": "run", "name": "function", "args": (1,)} for i in range(obj_number)]
print(objs_apalis_G.multiple_run(tasks)())
```
`run` executes the tasks and directly returns the outputs, avoiding the overhead of creating tokens.
```
%%time
tasks = [{"i": i, "mode": "run", "name": "function", "args": (1,)} for i in range(obj_number)]
print(objs_apalis_G.run(tasks))
```
Timing for different parallelization solutions:
```
funcsA = dict()
funcsA["single"] = lambda x: [obj.function(x) for obj in objs]
funcsA["handler"] = lambda x: apalis.get([obj.function(x) for obj in objs_apalis])
funcsA["group handler"] = lambda x: objs_apalis_G.multiple_run([obj.function(x) for obj in objs_apalis_G])()
funcsA["run"] = lambda x: objs_apalis_G.run([{"i": i, "mode": "run", "name": "function", "args": (x,)} for i in range(obj_number)])
funcsA["fast syntax"] = lambda x: objs_apalis_G.multiple_run([{"i": i, "mode": "run", "name": "function", "args": (x,)} for i in range(obj_number)])()
```
Testing:
```
repeat = 50
Ns = np.asarray(np.logspace(2, 7, 20), dtype="int")
tsA = defaultdict(lambda: np.zeros((len(Ns), repeat)))
for r in range(repeat):
for i, n in enumerate(Ns):
for j, (label, func) in enumerate(funcsA.items()):
number = max(1, 2 * 10 ** 4 // n)
t = timeit.timeit(lambda : func(n), number=number)
tsA[label][i][r] = t / number
ts_stdA = defaultdict(lambda: np.zeros(len(Ns)))
for label in tsA:
ts_stdA[label] = tsA[label].std(axis=1) / np.sqrt(repeat)
tsA[label] = tsA[label].mean(axis=1)
```
Plotting:
```
ax = plt.gca()
plt.ylabel(r"$\frac{t_{multi}}{t_{single}}$", size=25)
plt.xlabel("$t_{single}$/[ms]", size=20)
for label in ["handler", "group handler"]:
error = ts_stdA["single"] / tsA[label] + tsA["single"] * ts_stdA[label] / tsA[label] ** 2
plt.errorbar(tsA["single"] * 1000 / obj_number, tsA["single"] / tsA[label], yerr=error, fmt="--o", label=label)
plt.xscale("log")
plt.yscale("log")
plt.axhline(1, color="gray")
plt.axhline(num_cpus, color="gray")
ax.yaxis.set_major_formatter(mticker.ScalarFormatter())
ax.xaxis.set_major_formatter(mticker.ScalarFormatter())
ax.set_xticks([0.01, 0.1, 1, 10, 100, 1000])
ax.set_xticklabels(["0.01", "0.1", "1", "10", "100", "1000"])
ax.set_yticks([0.1, 1, 16])
ax.set_yticklabels(["0.1", "1", "16"])
plt.legend()
plt.show()
```
Percentage improvement of run vs the other methods:
```
plt.ylabel(r"$\frac{t_{run}}{t_{single}}$", size=25)
plt.xlabel("$t_{single}$/[ms]", size=20)
for label in ["group handler", "fast syntax"]:
error = ts_stdA["run"] / tsA[label] + tsA["run"] * ts_stdA[label] / tsA[label] ** 2
plt.errorbar(tsA["single"] * 1000 / obj_number, tsA["run"] / tsA[label], yerr=error, fmt="--o", label=label)
plt.xscale("log")
plt.axhline(1, color="gray")
plt.legend()
plt.show()
```
```
import altair as alt
import pandas as pd
from pony.orm import *
from datetime import datetime
import random
from bail.db import DB, Case, Inmate
# Connect to SQLite database using Pony
db = DB()
statuses = set(select(i.status for i in Inmate))
pretrial = set(s for s in statuses if ("Prearraignment" in s or "Pretrial" in s))
pretrial_inmates = list(select(i for i in Inmate if i.status in pretrial))
other_inmates = list(select(i for i in Inmate if not i.status in pretrial))
black_pretrial_inmates = list(select(i for i in Inmate if i.race == "African American" and i.status in pretrial))
white_pretrial_inmates = list(select(i for i in Inmate if i.race == "Caucasian" and i.status in pretrial))
other_pretrial_inmates = list(select(i for i in Inmate if
i.race != "African American" and
i.race != "Caucasian" and
i.status in pretrial))
def severity(inmate):
if any("Felony" in x.severity for c in inmate.cases for x in c.charges):
return 'felony'
else:
return 'misdemeanor'
def bail_value(inmate):
bails = [c.cash_bond for c in inmate.cases if c.cash_bond]
if len(bails) == 0:
bails = [0]
return max(bails)
def build_data(inmates):
return pd.DataFrame([{
'race': x.race,
'bail': bail_value(x),
'days_since_booking': x.days_since_booking(),
'most_recent_arrest': x.most_recent_arrest(),
'days_since_most_recent_arrest': x.days_since_most_recent_arrest(),
'severity': severity(x),
'url': x.url,
} for x in inmates])
data = build_data(pretrial_inmates)
days = data.query('severity=="felony"')['days_since_booking']
print(f"Mean days in jail for a felony: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}")
days = data.query('severity=="misdemeanor"')['days_since_booking']
print(f"Mean days in jail for a misdemeanor: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}")
# Instead try most recent arrest date...
days = data.query('severity=="felony"')['days_since_most_recent_arrest']
print(f"Mean days since arrest for a felony: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}")
days = data.query('severity=="misdemeanor"')['days_since_most_recent_arrest']
print(f"Mean days since arrest for a misdemeanor: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}")
data.query('severity=="misdemeanor"').sort_values('days_since_most_recent_arrest')
# Prefab stable colors for charts
domain = ['African American', 'American Indian or Alaskan Native', 'Asian or Pacific Islander', 'Caucasian', 'Hispanic', 'Unknown']
range_ = ['red', 'green', 'blue', 'orange', 'purple', 'grey']
race = alt.Color('race:N', scale=alt.Scale(domain=domain, range=range_))
# Facet across both felonies and misdemeanors
alt.Chart(data).mark_circle(size=40, opacity=0.85, clip=True).encode(
    y=alt.Y('days_since_booking:Q',
scale=alt.Scale(domain=(0, 365))
),
x=alt.X('bail:Q',
scale=alt.Scale(domain=(0, 1000))
),
color=race,
).facet(
row='severity:N'
)
# Misdemeanors only with trend lines
misdemeanors = data.query('severity=="misdemeanor"')
base = alt.Chart(misdemeanors).mark_circle(size=30, clip=True).encode(
    y=alt.Y('days_since_booking:Q',
            scale=alt.Scale(domain=(0, 365))
           ),
    x=alt.X('bail:Q',
            scale=alt.Scale(domain=(0, 800))
           ),
    color=race,
)
polynomial_fit = [
    base.transform_regression(
        "bail", "days_since_booking", method="poly", order=1, as_=["bail", str(thing)]
    )
    .mark_line()
    .transform_fold([str(thing)], as_=["race", "days_since_booking"])
    .encode(race)
    for thing in ['African American', 'Caucasian']
]
alt.layer(base, *polynomial_fit)
base
```
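The jail database isn't available here, but the per-group averages computed above with `query(...)` follow a standard `groupby` pattern; a sketch on made-up rows (the column names match `build_data`, the values are invented):

```python
import pandas as pd

data = pd.DataFrame([
    {'race': 'African American', 'severity': 'felony', 'days_since_booking': 40, 'bail': 500},
    {'race': 'Caucasian', 'severity': 'felony', 'days_since_booking': 25, 'bail': 300},
    {'race': 'African American', 'severity': 'misdemeanor', 'days_since_booking': 10, 'bail': 100},
    {'race': 'Caucasian', 'severity': 'misdemeanor', 'days_since_booking': 4, 'bail': 0},
])
# mean, std, and max days in jail per severity level, in one pass
summary = data.groupby('severity')['days_since_booking'].agg(['mean', 'std', 'max'])
print(summary)
```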
# Notes:
This notebook predicts the demand of the state of Victoria (without using any future data).
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tsa_utils import *
from statsmodels.tsa.stattools import pacf
from sklearn.ensemble import RandomForestRegressor
import warnings
warnings.filterwarnings("ignore")
# show float in two decimal form
plt.style.use('ggplot')
pd.set_option('display.float_format',lambda x : '%.2f' % x)
```
## 1) Load dataset
```
df = pd.read_csv("../../data/all.csv").reset_index(drop=True)
df.head(3)
df = df[df.time <= '2021-08-11 23:30:00']
df.head(3)
```
## 3) Feature Engineering
```
drop_columns = ['demand_nsw',
'demand_sa',
'demand_tas',
'spot_price_nsw',
'spot_price_sa',
'spot_price_tas',
'spot_price_vic',
'inter_gen_nsw', 'inter_gen_sa', 'inter_gen_tas', 'inter_gen_vic',]
vic = df.drop(columns=drop_columns)
vic.columns = ['time', 'demand_vic', 'period']
vic.head(3)
# Feature engineering on datetime
vic['time'] = vic.time.astype('datetime64[ns]')
vic['month'] = vic.time.dt.month
vic['day'] = vic.time.dt.day
vic['day_of_year'] = vic.time.dt.dayofyear
vic['year'] = vic.time.dt.year
vic['weekday'] = vic['time'].apply(lambda x: x.weekday())
vic['week'] = vic.time.dt.week
vic['hour'] = vic.time.dt.hour
vic.loc[vic['month'].isin([12,1,2]), 'season'] = 1
vic.loc[vic['month'].isin([3,4,5]), 'season'] = 2
vic.loc[vic['month'].isin([6,7,8]), 'season'] = 3
vic.loc[vic['month'].isin([9, 10, 11]), 'season'] = 4
vic.tail(3)
# Add fourier terms
fourier_terms = add_fourier_terms(vic.time, year_k=3, week_k=3, day_k=3)
vic = pd.concat([vic, fourier_terms], axis=1).drop(columns=['datetime'])
vic.head(3)
# Plot autocorrelation
nlags=144
plot_tsc(vic.demand_vic, lags=nlags)
# Add lag features (choosing the 5 lags with the highest partial autocorrelation among lags >= 48)
dict_pacf = dict()
list_pacf = pacf(df['demand_vic'], nlags=nlags)
for nlag in range(nlags):
if nlag >= 48:
dict_pacf[nlag] = list_pacf[nlag]
dict_pacf = {k: v for k, v in sorted(dict_pacf.items(), key=lambda item: abs(item[1]), reverse=True)}
# 5 highest pacf nlags
max_pacf_nlags = list(dict_pacf.keys())[:5]
for nlag in max_pacf_nlags:
vic['n_lag'+str(nlag)] = df.reset_index()['demand_vic'].shift(nlag)
vic_train = vic[vic["time"] <= "2020-12-31 23:30:00"]
vic_cv = vic[(vic['time'] >= "2021-01-01 00:00:00") & (vic['time'] <= "2021-06-30 23:30:00")].reset_index(drop=True)
vic_test = vic[(vic['time'] >= "2021-07-01 00:00:00") & (vic['time'] <= "2021-08-11 23:30:00")].reset_index(drop=True)
X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]
y_train = vic_train.demand_vic[nlags:]
X_cv = vic_cv.drop(columns=['demand_vic', 'time'])
y_cv = vic_cv.demand_vic
X_test = vic_test.drop(columns=['demand_vic', 'time'])
y_test = vic_test.demand_vic
X_train.head(3)
X_train.columns
```
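`add_fourier_terms` comes from the local `tsa_utils` module, which isn't shown here. Fourier seasonal features are typically built as sine/cosine pairs of the fractional position within each cycle; the sketch below is an assumption about what such a helper might do, not the actual `tsa_utils` implementation:

```python
import numpy as np
import pandas as pd

def fourier_terms_sketch(times, day_k=3):
    # hypothetical helper: sin/cos pairs over the daily cycle, day_k harmonics
    times = pd.to_datetime(times)
    frac_day = (times.dt.hour * 60 + times.dt.minute) / (24 * 60)  # position in [0, 1)
    out = pd.DataFrame(index=times.index)
    for k in range(1, day_k + 1):
        out[f'day_sin{k}'] = np.sin(2 * np.pi * k * frac_day)
        out[f'day_cos{k}'] = np.cos(2 * np.pi * k * frac_day)
    return out

ft = fourier_terms_sketch(pd.Series(pd.date_range('2021-01-01', periods=48, freq='30min')))
print(ft.shape)  # 48 half-hourly rows, 2 * day_k columns
```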
## 4) First look at Random Forest Regressor
```
rfr_clf = RandomForestRegressor(n_estimators=100)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_test)
rfr_residuals = y_test - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_test[:200], label='true value')
plt.plot(rfr_result[:200], label='predict')
plt.legend()
plt.show()
plt.figure(figsize=(20, 4))
plt.plot(rfr_residuals)
plt.show()
# Get numerical feature importances
importances = list(rfr_clf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(X_train.columns, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]
```
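The two error metrics printed above can be factored into small helpers, which keeps the CV and test sections below from repeating the formulas; a sketch with a hand-checked example:

```python
import numpy as np

def mape(y_true, y_pred):
    # mean absolute percent error, as a fraction
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    # root mean squared error
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = [100.0, 200.0]
y_pred = [110.0, 180.0]
print(mape(y_true, y_pred))  # (0.10 + 0.10) / 2 = 0.10
print(rmse(y_true, y_pred))  # sqrt((100 + 400) / 2) = sqrt(250)
```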
## 6) Predict CV and Test period demand
### 6.1) Predict CV period demand
```
X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]
y_train = vic_train.demand_vic[nlags:]
X_cv = vic_cv.drop(columns=['demand_vic', 'time'])
y_cv = vic_cv.demand_vic
rfr_clf = RandomForestRegressor(n_estimators=100)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_cv)
rfr_residuals = y_cv - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_cv)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_cv, label='true value')
plt.plot(rfr_result, label='predict')
plt.legend()
plt.show()
vic_demand_cv_rfr = pd.DataFrame({'time': vic_cv.time, 'demand_vic': vic_cv.demand_vic})
vic_demand_cv_rfr['predicted_demand_vic'] = rfr_result
vic_demand_cv_rfr.tail(3)
vic_demand_cv_rfr.to_csv('predictions/vic_demand_unknow_cv_rfr.csv', index=False, header=True)
```
### 6.2) Predict Test period demand
```
idx_test_start = 61296 # index of df(full) where test start
X_train = vic.drop(columns=['demand_vic', 'time'])[nlags:idx_test_start]
y_train = vic.demand_vic[nlags:idx_test_start]
X_test = vic_test.drop(columns=['demand_vic', 'time'])
y_test = vic_test.demand_vic
rfr_clf = RandomForestRegressor(n_estimators=100, random_state=1)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_test)
rfr_residuals = y_test - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_test, label='true value')
plt.plot(rfr_result, label='predict')
plt.legend()
plt.show()
vic_demand_test_rfr = pd.DataFrame({'time': vic_test.time, 'demand_vic': vic_test.demand_vic})
vic_demand_test_rfr['predicted_demand_vic'] = rfr_result
vic_demand_test_rfr.tail(3)
vic_demand_test_rfr.to_csv('predictions/vic_demand_unknow_test_rfr.csv', index=False, header=True)
```
# Converters for Quadratic Programs
Optimization problems in Qiskit's optimization module are represented with the `QuadraticProgram` class, which is a generic and powerful representation for optimization problems. In general, optimization algorithms are defined for a certain formulation of a quadratic program, and we need to convert our problem to the right type.
For instance, Qiskit provides several optimization algorithms that can handle Quadratic Unconstrained Binary Optimization (QUBO) problems. These are mapped to Ising Hamiltonians, for which Qiskit uses the `qiskit.aqua.operators` module, and then their ground state is approximated. For this optimization, commonly known algorithms such as VQE or QAOA can be used as the underlying routine. See the following tutorial about the [Minimum Eigen Optimizer](./03_minimum_eigen_optimizer.ipynb) for more detail. Note that other algorithms also exist that work differently, such as the `GroverOptimizer`.
To map a problem to the correct input format, the optimization module of Qiskit offers a variety of converters. In this tutorial we provide an overview of this functionality. Currently, Qiskit contains the following converters.
- `InequalityToEquality`: converts inequality constraints into equality constraints with additional slack variables.
- `IntegerToBinary`: converts integer variables into binary variables and corresponding coefficients.
- `LinearEqualityToPenalty`: converts equality constraints into additional terms of the objective function.
- `QuadraticProgramToQubo`: a wrapper for `InequalityToEquality`, `IntegerToBinary`, and `LinearEqualityToPenalty` for convenience.
## InequalityToEquality
`InequalityToEquality` converts inequality constraints into equality constraints with additional slack variables, removing inequality constraints from the `QuadraticProgram`. The upper and lower bounds of the slack variables are calculated from the difference between the left and right sides of the constraints. The signs of the slack variables depend on the constraint symbols, such as $\leq$ and $\geq$.
The following is an example of a maximization problem with two inequality constraints. Variables $x$ and $y$ are binary and variable $z$ is an integer variable.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z \leq 5.5\\
& & x+y+z \geq 2.5\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
\end{aligned}
With `QuadraticProgram`, an optimization model of the problem is written as follows.
```
from qiskit_optimization import QuadraticProgram
qp = QuadraticProgram()
qp.binary_var('x')
qp.binary_var('y')
qp.integer_var(lowerbound=0, upperbound=7, name='z')
qp.maximize(linear={'x': 2, 'y': 1, 'z': 1})
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='LE', rhs=5.5,name='xyz_leq')
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='GE', rhs=2.5,name='xyz_geq')
print(qp.export_as_lp_string())
```
Call the `convert` method of `InequalityToEquality` to perform the conversion.
```
from qiskit_optimization.converters import InequalityToEquality
ineq2eq = InequalityToEquality()
qp_eq = ineq2eq.convert(qp)
print(qp_eq.export_as_lp_string())
```
After converting, the formulation of the problem looks as follows. As we can see, the inequality constraints have been replaced with equality constraints, each with an additional integer slack variable, $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$.
Let us explain how the conversion works. For example, the lower bound of the left side of the first constraint is $0$, attained when $x=0$, $y=0$, and $z=0$. Thus, the upper bound of the additional integer variable must be $5$ so that the equality can still hold in that case. Note that we drop the fractional part of the right side in the converted formulation, since the left side of the first constraint can only take integer values. For the second constraint, we apply basically the same approach. However, since the second constraint uses $\geq$, the slack variable $xyz\_geq\text{@}int\_slack$ enters with a minus sign and has an upper bound of $6$, so that the equality can still hold in the extreme case $x=1$, $y=1$, and $z=7$.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z+xyz\_leq\text{@}int\_slack= 5\\
& & x+y+z-xyz\_geq\text{@}int\_slack= 3\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
& & xyz\_leq\text{@}int\_slack \in \{0,1,2,3,4,5\} \\
& & xyz\_geq\text{@}int\_slack \in \{0,1,2,3,4,5,6\} \\
\end{aligned}
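The slack-variable bounds described above can be reproduced with a few lines of arithmetic. This is a plain-Python sketch, not Qiskit's implementation; the variable bounds are taken from the problem definition:

```python
import math

# Bounds of the left-hand side x + y + z, with x, y in {0, 1} and z in {0, ..., 7}
lhs_min = 0 + 0 + 0
lhs_max = 1 + 1 + 7

# x + y + z <= 5.5: the slack enters with +, and the rhs is rounded down to an integer
leq_rhs = math.floor(5.5)          # 5
leq_slack_ub = leq_rhs - lhs_min   # 5, needed when x = y = z = 0

# x + y + z >= 2.5: the slack enters with -, and the rhs is rounded up to an integer
geq_rhs = math.ceil(2.5)           # 3
geq_slack_ub = lhs_max - geq_rhs   # 6, needed when x = 1, y = 1, z = 7

print(leq_slack_ub, geq_slack_ub)  # 5 6
```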
## IntegerToBinary
`IntegerToBinary` converts integer variables into binary variables with corresponding coefficients, removing integer variables from the `QuadraticProgram`. The conversion uses the bounded-coefficient encoding proposed in [arxiv:1706.01945](https://arxiv.org/abs/1706.01945) (Eq. (5)); please see the paper for details of the encoding method.
We use the output of `InequalityToEquality` as a starting point. Variables $x$ and $y$ are binary, while the variable $z$ and the slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ are integer variables. We print the problem again for reference.
```
print(qp_eq.export_as_lp_string())
```
Call the `convert` method of `IntegerToBinary` to perform the conversion.
```
from qiskit_optimization.converters import IntegerToBinary
int2bin = IntegerToBinary()
qp_eq_bin = int2bin.convert(qp_eq)
print(qp_eq_bin.export_as_lp_string())
```
After converting, the integer variable $z$ is replaced with three binary variables $z\text{@}0$, $z\text{@}1$, and $z\text{@}2$ with coefficients 1, 2, and 4, respectively, as shown above.
The slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ introduced by `InequalityToEquality` are likewise each replaced with three binary variables, with coefficients 1, 2, 2 and 1, 2, 3, respectively.
Note: essentially, these coefficients mean that the subset sums of $\{1, 2, 4\}$, $\{1, 2, 2\}$, and $\{1, 2, 3\}$ realize exactly the acceptable value ranges $\{0, \ldots, 7\}$, $\{0, \ldots, 5\}$, and $\{0, \ldots, 6\}$, so the lower and upper bounds of the original integer variables are respected.
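This coefficient pattern can be sketched in a few lines of plain Python. This is an illustration of the idea behind the encoding, not the actual `IntegerToBinary` code:

```python
import math

def bounded_coefficient_coeffs(upper_bound):
    """Coefficients whose subset sums cover exactly {0, ..., upper_bound}.

    Powers of two 1, 2, 4, ... are used as long as they fit, and the
    remainder is appended as a final coefficient.
    """
    k = int(math.floor(math.log2(upper_bound + 1)))
    coeffs = [2 ** i for i in range(k)]
    remainder = upper_bound - (2 ** k - 1)
    if remainder > 0:
        coeffs.append(remainder)
    return coeffs

for ub in (7, 5, 6):
    print(ub, bounded_coefficient_coeffs(ub))  # [1, 2, 4], [1, 2, 2], [1, 2, 3]
```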
`IntegerToBinary` also provides an `interpret` method, which translates a binary result back into the original integer representation.
## LinearEqualityToPenalty
`LinearEqualityToPenalty` converts linear equality constraints into additional quadratic penalty terms of the objective function, mapping the `QuadraticProgram` to an unconstrained form.
The input to the converter has to be a `QuadraticProgram` with only linear equality constraints. Each such constraint, e.g. $\sum_i a_i x_i = b$ where $a_i$ and $b$ are numbers and $x_i$ is a variable, is added to the objective function in the form of $M(b - \sum_i a_i x_i)^2$, where $M$ is a large number used as a penalty factor.
By default $M = 10^5$. The sign of the term depends on whether the problem type is a maximization or a minimization.
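To see why the penalty term enforces the constraint, consider a hypothetical two-variable constraint $x + y = 1$ (not part of the example above). Feasible assignments incur zero penalty; infeasible ones pay $M$:

```python
M = 1e5  # penalty factor

def penalty(x, y):
    # Quadratic penalty for the equality constraint x + y = 1
    return M * (1 - x - y) ** 2

for x in (0, 1):
    for y in (0, 1):
        print(x, y, penalty(x, y))  # 0 only when x + y == 1
```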
We use the output of `IntegerToBinary` as a starting point, where all variables are binary and all inequality constraints have been mapped to equality constraints.
We print the problem again for reference.
```
print(qp_eq_bin.export_as_lp_string())
```
Call the `convert` method of `LinearEqualityToPenalty` to perform the conversion.
```
from qiskit_optimization.converters import LinearEqualityToPenalty
lineq2penalty = LinearEqualityToPenalty()
qubo = lineq2penalty.convert(qp_eq_bin)
print(qubo.export_as_lp_string())
```
After converting, the equality constraints are added to the objective function as additional terms with the default penalty factor $M=10^5$.
The resulting problem is now a QUBO, compatible with many quantum optimization algorithms such as VQE and QAOA.
This gives the same result as before.
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
<a href="https://colab.research.google.com/github/MattBizzo/alura-quarentenadados/blob/master/QuarentenaDados_aula03.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Introduction
Hello, and welcome to the notebook for **lesson 03**! Starting with this lesson, we will analyze and discuss a dataset together with you. Because of that, it is **important to follow the discussions in the videos** to understand every step of the analyses.
In this lesson we will use a completely new dataset, one that we ourselves did not know until the moment of the analysis. You will follow the exploration and, above all, the difficulties of analyzing an unfamiliar dataset.
Let's start by importing our dataset! In this lesson we will work with IMDB 5000, a dataset containing a variety of information about movies; it is a small sample of the famous [IMDB](https://www.imdb.com/) database.
```
import pandas as pd
imdb = pd.read_csv("https://gist.githubusercontent.com/guilhermesilveira/24e271e68afe8fd257911217b88b2e07/raw/e70287fb1dcaad4215c3f3c9deda644058a616bc/movie_metadata.csv")
imdb.head()
```
As you saw, we started the lesson by getting to know the various columns for each movie, and one that drew the most attention was color. Let's find out which values this column holds!
```
imdb["color"].unique()
```
We verified that the **color** column tells whether a movie is in color or black and white. Now let's find out how many movies of each type we have:
```
imdb["color"].value_counts()
imdb["color"].value_counts(normalize=True)
```
Now we know how many color and black-and-white movies there are, and we also know the dataset contains more than 5000 movies. We did something new here: calling `value_counts()` with **normalize set to True**. This directly gives the share of each movie type (**95% of the movies are in color**).
Excellent! Now let's explore another column to find the directors with the most movies in our dataset (**remember that our dataset is a very small sample of reality**).
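On a toy Series (made-up values, not the IMDB data), the `normalize=True` flag works like this:

```python
import pandas as pd

# 19 color movies and 1 black-and-white one, mirroring the ~95% split above
colors = pd.Series(["Color"] * 19 + ["Black and White"])
print(colors.value_counts(normalize=True))  # fractions instead of raw counts
```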
```
imdb["director_name"].value_counts()
```
**Steven Spielberg and Woody Allen** are the directors with the most movies in the **IMDB 5000**.
Continuing our exploration, let's look at the number of critic reviews per movie.
```
imdb["num_critic_for_reviews"]
imdb["num_critic_for_reviews"].describe()
```
Note that the **color** and **director_name** columns are *strings*, so means, medians, and the like make no sense for them. Looking at the number of reviews, however, can be interesting, which is why we used `.describe()`.
Now we can even plot a histogram to examine the number of reviews.
```
import seaborn as sns
sns.set_style("whitegrid")
imdb["num_critic_for_reviews"].plot(kind='hist')
```
We see that few movies have more than 500 reviews. One parallel we can draw is that movies with many reviews are more popular, while movies with few reviews are not. So the histogram makes it clear that only a few movies are hugely successful. Of course we cannot claim this with confidence since, again, we are dealing with a limited amount of data, but these are interesting points to think about.
Another interesting aspect to analyze is the financial side of a movie: its budget and revenue. Let's start with the gross:
```
imdb["gross"].hist()
```
As you may have noticed, this is the first time the scales are completely different: on the **X** axis the values are so high that the scale had to be in hundreds of millions. Notice how very few movies have a **high gross**, which raises a first alert that something is off (or this dataset really contains movies that earn a lot of money).
Let's find out which movies have these astronomical grosses.
```
imdb.sort_values("gross", ascending=False).head()
```
This list includes **Avatar, Titanic, Jurassic World, and The Avengers**, which seems to make sense since we know these movies had gigantic box offices. Analyzing the data, we can confirm that the biggest grosses make sense, but we also found a problem: two duplicated rows. We could use pandas to remove them, but for now we will keep all the data (if you are curious about how to do it, see [`.drop_duplicates()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html)).
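A minimal sketch of `drop_duplicates()` on a toy DataFrame (hypothetical values, not the IMDB data):

```python
import pandas as pd

# One row is fully duplicated
movies = pd.DataFrame({
    "movie_title": ["Avatar", "Titanic", "Avatar"],
    "gross": [760_505_847, 658_672_302, 760_505_847],
})
deduped = movies.drop_duplicates()  # keeps the first of each identical row
print(len(movies), len(deduped))    # 3 2
```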
Great, the gross data seems OK. We want to start answering some questions, one of which is: do color movies gross more than black-and-white movies?
To start answering it, we need to transform the color column:
```
color_or_bw = imdb.query("color in ['Color', ' Black and White']")
color_or_bw["color_0_ou_1"] = (color_or_bw["color"]=="Color") * 1
color_or_bw["color_0_ou_1"].value_counts()
color_or_bw.head()
```
Notice that our dataframe now has a final column with values 0 and 1. With it we can build charts using the color/black-and-white information.
P.S.: In the lesson we had problems because 'Black and White' had a leading space. I am skipping those details here in the notebook, but I stress how important it is to follow this process in the video.
```
sns.scatterplot(data=color_or_bw, x="color_0_ou_1", y="gross")
```
So we plotted our data with a scatterplot! There are several ways to visualize this information, but for now this one helps us compare the results. Notice how color movies reach much higher values (which was expected), but some black-and-white movies also have very high points, which really draws attention.
Let's explore some statistics of these movies:
```
color_or_bw.groupby("color").mean()["gross"]
color_or_bw.groupby("color").mean()["imdb_score"]
color_or_bw.groupby("color").median()["imdb_score"]
```
Two of these statistics are quite interesting: the mean and median scores of black-and-white movies are higher. There are several possible explanations for this; think about some of them and share them with us!
From now on, let's investigate the movies' finances (gross and budget) more closely. We'll start by plotting and interpreting a chart of **gross** versus **budget**:
```
budget_gross= imdb[["budget", "gross"]].dropna().query("budget >0 | gross > 0")
sns.scatterplot(x="budget", y="gross", data = budget_gross)
```
To plot the data, we first removed the rows with missing gross or budget information, as well as values equal to 0, and then generated the chart.
Now let's analyze this chart together. Notice that the **budget** scale changed; it is now **1e10**. Only a handful of movies have budgets that large, and their grosses are very low. Could there be a problem in the data? Let's dig deeper!
```
imdb.sort_values("budget", ascending=False).head()
```
Sorting the data by **budget**, we notice that the top positions are Asian movies. Guilherme raised an interesting point for the investigation: countries like South Korea use currencies with three more decimal places than the dollar. So what is probably happening is that the budget values are in local currency, which is why we see such discrepant values.
Since we cannot guarantee the numbers, we will work only with American movies, ensuring that both gross and budget are in dollars. Let's start that process:
```
imdb["country"].unique()
```
See that we have movies from many countries of origin:
```
imdb = imdb.drop_duplicates()
imdb_usa = imdb.query("country == 'USA'")
imdb_usa.sort_values("budget", ascending=False).head()
```
Now we have the data for a better analysis of gross versus budget. Let's plot the chart again:
```
budget_gross = imdb_usa[["budget", "gross"]].dropna().query("budget >0 | gross > 0")
sns.scatterplot(x="budget", y="gross", data = budget_gross)
```
Interesting: there apparently is a relationship between budget and gross. The bigger the budget, the bigger the gross.
Since we are working with budget and gross, we can build a new piece of information to analyze: profit. In very simple terms, this process of building new information from what already exists in the dataset is known as [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering).
```
imdb_usa['lucro'] = imdb_usa['gross'] - imdb_usa['budget']
budget_gross = imdb_usa.query("budget >0 | gross > 0")[["budget", "lucro"]].dropna()
sns.scatterplot(x="budget", y="lucro", data = budget_gross)
```
Very good! We built our profit column (`lucro`) in the dataset and plotted budget against profit.
Notice some interesting points in this visualization. One group is the movies with high cost and a loss. That may be a real loss, but they may also be movies that have not yet had time to recover the investment (recent releases). Other interesting points are the movies with low budgets and very high profit: are they correct, or could they be errors in the dataset? It seems that spending a ton of money does not always generate absurd profits; is that really true?
This chart is very rich in information; it is worth spending some time forming hypotheses.
Since this new feature (profit) seems interesting to analyze, let's keep going! Now I want to see profit against the year of production:
```
budget_gross = imdb_usa.query("budget >0 | gross > 0")[["title_year", "lucro"]].dropna()
sns.scatterplot(x="title_year", y="lucro", data = budget_gross)
```
Look how nice this chart is. Notice how some recent points reinforce the theory that some movies may not yet have recovered the money invested (of course there are many variables to analyze, but it is a relevant clue).
Another point that stands out is the movies from the 1930s and 1940s with such high profits. Which movies are those? Well, you will answer that in Paulo's challenge; he is dying to find out!
Speaking of Paulo, he suggested an analysis of director names and the budget of their movies. Let's see whether we can conclude anything:
```
filmes_por_diretor = imdb_usa["director_name"].value_counts()
gross_director = imdb_usa[["director_name", "gross"]].set_index("director_name").join(filmes_por_diretor, on="director_name")
gross_director.columns=["dindin", "filmes_irmaos"]
gross_director = gross_director.reset_index()
gross_director.head()
sns.scatterplot(x="filmes_irmaos", y="dindin", data = gross_director)
```
This image is apparently not very conclusive, so we cannot infer much from it.
This process of generating data and visualizations and ending up inconclusive is very common in a data scientist's life; you might as well get used to it =P.
To wrap up, how about analyzing the correlations in the data? There are several ways to compute correlation, and it is a dense topic. You can read more about these metrics at this [link](https://pt.wikipedia.org/wiki/Correla%C3%A7%C3%A3o).
Let's start the correlation analysis by plotting the pairplot.
```
sns.pairplot(data = imdb_usa[["gross", "budget", "lucro", "title_year"]])
```
The pairplot shows a lot of information, and the best way to understand it is to watch the conclusions we draw about these charts in the video lesson.
Although we plotted a lot of information, we have not yet reduced the correlation to a single number to simplify the analysis. Let's do that with the help of [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html) `.corr()`.
```
imdb_usa[["gross", "budget", "lucro", "title_year"]].corr()
```
With pandas it is simple to compute the correlation, but we need to know how to interpret the results. Shall we?
Correlation is a metric that ranges from 1 to -1. When the correlation is 1, we say the variables are totally correlated (a perfect positive linear relationship): if one variable increases by 10, the other also increases by 10. When the correlation is -1, the variables are also totally correlated, but negatively (a perfect negative linear relationship): if one increases by 10, the other decreases by 10. When the correlation is 0, there is no correlation: one variable has no influence on the other.
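A tiny example (made-up numbers) shows the two extremes: `b` is a perfect positive linear function of `a`, and `c` a perfect negative one:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3, 4],
    "b": [10, 20, 30, 40],  # b = 10 * a  -> correlation +1
    "c": [4, 3, 2, 1],      # c = 5 - a   -> correlation -1
})
print(df.corr())  # Pearson correlation matrix
```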
Now that correlation is understood, let's analyze ours. Notice that `lucro` and `gross` are highly correlated: the higher the gross, the higher the profit (though the correlation is not perfect). Meanwhile, `title_year` and `lucro` have a negative correlation, but very close to zero (that is, almost no correlation). See how much we can analyze with correlation?! Think about and try to analyze the other cases too.
With this we reach the end of another #quarentenadados lesson. So, what do you think: increasingly cool and, at the same time, increasingly complex, right?
What matters is getting started and understanding what we do to analyze the data! **Keep going until the end; I guarantee it will be worth it.**
Shall we practice?
**Create your own notebook, reproduce our lesson, and solve the challenges we left for you.**
See you in the next lesson!
**P.S.: From now on we will have many challenges involving more analysis and conclusions, so there will be no "answer key". The important thing is to share your solutions with your classmates and debate your results and other people's.**
## Challenge 1 from [Thiago Gonçalves](https://twitter.com/tgcsantos)
Plot and analyze the boxplot of the score (imdb_score column) for black-and-white and color movies.
## Challenge 2 from [Guilherme Silveira](https://twitter.com/guilhermecaelum)
In the **budget vs. lucro** chart there is a point with very high cost and a loss; find out which movie it is (budget close to 2.5).
## Challenge 3 from [Guilherme Silveira](https://twitter.com/guilhermecaelum)
In the lesson we said that more recent movies may show a loss because they have not yet had time to recover the investment. Analyze this information and tell us your conclusions.
## Challenge 4 from [Paulo Silveira](https://twitter.com/paulo_caelum)
Which movies from the decade before World War II had very high profits?
## Challenge 5 from [Paulo Silveira](https://twitter.com/paulo_caelum)
In the **filmes_irmaos vs. dindin** chart there are some strange points between 15 and 20. Confirm Paulo's genius thesis that the odd one out is Woody Allen. (If he is wrong, feel free to roast him on social media, haha.)
## Challenge 6 from [Thiago Gonçalves](https://twitter.com/tgcsantos)
Analyze the pairplot in more detail; spend some time thinking about and trying to understand the charts.
## Challenge 7 from [Thiago Gonçalves](https://twitter.com/tgcsantos)
Compute the correlation only for movies released after 2000 (discard movies from before 2000) and interpret it.
## Challenge 8 from [Allan Spadini](https://twitter.com/allanspadini)
Try to find a line, whether with a ruler on your monitor (don't actually do that), with Excel/Google Sheets, or with Python, in one of the charts that looks roughly linear (for example budget/lucro or gross/lucro).
## Challenge 9 from [Thais André](https://twitter.com/thais_tandre)
Analyze and interpret the correlation of variables other than the ones covered in class (the scores are a good choice). Number of reviews per year can also be a feature.
# Don't forget to share your challenge solutions with our instructors, on Twitter or LinkedIn. Good luck!
```
```
```
import pyttsx3
import webbrowser
import smtplib
import random
import speech_recognition as sr
import wikipedia
import datetime
import wolframalpha
import os
import sys
import tkinter as tk
from tkinter import *
from datetime import datetime
import random
import re
from tkinter import messagebox
from tkinter.font import Font
from tkinter import font as tkfont # python 3
from tkinter import filedialog
import textwrap
from random import choice
from PIL import Image, ImageTk, ImageSequence
from itertools import count
from contextlib import redirect_stdout
import tkinter
import matplotlib
matplotlib.use("TkAgg")
from matplotlib.backends.backend_tkagg import (
FigureCanvasTkAgg, NavigationToolbar2Tk)
from matplotlib.backend_bases import key_press_handler
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
import numpy as np
from numpy import expand_dims
# In[2]:
import tensorflow.keras as keras
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.keras.utils import plot_model
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import *
from text_editor import Editor
# In[3]:
client = wolframalpha.Client('QAEXLK-RY9HY2PHAT')
import pyttsx3
engine = pyttsx3.init()
# engine.say("In supervised machine learning, a crazy man just runs away what is deep learning")
engine.setProperty('volume', 1)
engine.setProperty('rate', 170)
engine.setProperty('voice', 'english+f4')
# engine.setProperty('voice', voices[1].id)
engine.runAndWait()
# In[4]:
name = "crazy"
questions = []
file1 = open('/mnt/DATA/UI/questions.txt', 'r')
Lines = file1.readlines()
count = 0
for line in Lines:
# print(line)
questions.append(line[:-1])
answers = []
file1 = open('/mnt/DATA/UI/answers.txt', 'r')
Lines = file1.readlines()
count = 0
for line in Lines:
answers.append(line[:-1])
weblinks = []
file1 = open('/mnt/DATA/UI/weblinks.txt', 'r')
Lines = file1.readlines()
count = 0
for line in Lines:
weblinks.append(line[:-1])
# In[5]:
layerdic = [{"name": None, "args": [], "defs": []} for i in range(20)]
layernames = [None] * 20
folderpath = "/mnt/DATA/UI/layers/"
for i, layer in enumerate(os.listdir(folderpath)):
layernames[i] = layer[:-4]
layerpath = folderpath + layer
file1 = open(layerpath, 'r')
Lines = file1.readlines()
for ind, line in enumerate(Lines[:-2]):
if ind == 0:
layerdic[i]["name"] = line[:-2]
else:
k = 0
m = 0
for ind, char in enumerate(line[:-2]):
if char == "=":
k = ind
layerdic[i]["args"].append(line[4:k])
layerdic[i]["defs"].append(line[k + 1:-2])
# In[6]:
import sys
#
# sys.path.insert(1, '/mnt/DATA/UI/ide/crossviper-master')
# sys.path.insert(1, '/mnt/DATA/DeeplearningUI/')
# sys.path.insert(1, '/mnt/DATA/UI/labelImgmaster')
sys.path.insert(1, '/mnt/DATA/UI/ide/crossviper-master')
sys.path.insert(1, '/mnt/DATA/DeeplearningUI/')
sys.path.insert(1, '/mnt/DATA/UI/labelImgmaster')
from FlattenDialog import *
from AddDialog import *
from CodeImageDialog import *
from CompileDialog import *
from ConvDialog import *
from DenseDialog import *
from plotgraphDialog import *
from summaryDialog import *
from tokoDialog import *
from trainDialog import *
from InputDialog import *
from OutputDialog import *
from ideDialog import *
# from StartPage import *
# from Existing import *
# from NewProject import *
# from ImageClassification import *
# from ObjectDetection import *
from labelImg import main as labelmain
from reshapeimages import reshape
# In[7]:
widgetspath = '/mnt/DATA/UI/widgets/inactive/'
# buttonspath = '/mnt/DATA/UI/buttons/'
buttonspath = "/home/crazy/UI/buttons/"
bgpath = '/mnt/DATA/UI/backgrounds/'
# In[8]:
dic = [{} for i in range(20)]
graph = [[] for i in range(20)]
invgraph = [[] for i in range(20)]
nodes = []
connectionsdic = [[] for i in range(20)]
# In[9]:
def getplotimage():
plot = ImageTk.PhotoImage(Image.open("/home/crazy/UI/modelplot.png"))
return plot
open('model.py', 'w').close()
with open("model.py", "a") as myfile:
myfile.write("import tensorflow as tf\n")
myfile.write("import numpy as np\n")
myfile.write(" \n")
myfile.write("def mymodel():\n")
class Window(Frame):
def __init__(self, buttonsbg, root, master=None):
Frame.__init__(self, master)
# self.traindatagen = root.traindatagen
# self.testdatagen = root.testdatagen
# obj = next(self.traindatagen)
# print(obj[0], obj[1])
self.master = master
global layerdic, layernames, buttonspath, bgpath, widgetspath, dic, graph, invgraph, nodes, connectionsdic
self.dic = dic
self.graph = graph
self.invgraph = invgraph
self.nodes = nodes
self.connectionsdic = connectionsdic
self.layerdic = layerdic
self.layernames = layernames
self.tensorlist = []
defaultbutton = buttonsbg[0]
buttonsbgarray = buttonsbg[1]
plotbg, trainbg, codebg, buildbg = buttonsbg[2]
toko = buttonsbg[3]
code = buttonsbg[5]
self.plot = buttonsbg[4]
self.t1, self.t2, self.t3, self.t4, self.t5 = buttonsbg[6]
self._drag_data = {"x": 0, "y": 0, "item": None}
self.currenty = 400
self.currentx = 20
self.ccnt = 0
self.dcnt = 0
self.acnt = 0
self.icnt = 0
self.ocnt = 0
self.cnt = 0
self.codelines = []
self.invgraph = [[] for i in range(20)]
self.graph = [[] for i in range(20)]
self.dic = [{} for i in range(20)]
self.coonectionsdic = [[] for i in range(20)]
self.nodes = []
self.nodescnt = 0
self.objectscnt = 0
self.pathscnt = 0
self.path = []
self.buttonsarray = []
self.plottry = ImageTk.PhotoImage(Image.open("/home/crazy/UI/modelplot.png"))
self.frame1 = tk.Frame(self.master, background="black")
self.frame1.place(relwidth=1, relheight=0.17, relx=0, rely=0.83)
totalbuttons = 22
self.currentavailablebuttons = [{} for i in range(totalbuttons)]
c1 = 0
c2 = 0
pathlist = ["input", "conv", "dense", "act", "output", "flatten", "add"]
for i, bpath in enumerate(pathlist):
bg, bga = self.getwidgets("input", bpath + ".png")
self.currentavailablebuttons[i] = {"name": bpath[:-4], "bg": bg, "bga": bga, "count": 0}
self.buttonsarray.append(Button(self.frame1, image=buttonsbgarray[i], activebackground="#32CD32"
, relief=RAISED, borderwidth=4))
self.buttonsarray[i].bind("<ButtonPress-1>", lambda event, a=self.currentx, b=self.currenty,
d=bpath, bg=bg, bga=bga: self.create_token(a, b, d, bg,
bga))
if i % 2 == 0:
self.buttonsarray[i].grid(row=0, column=c1)
c1 += 1
elif i % 2 == 1:
self.buttonsarray[i].grid(row=1, column=c2)
c2 += 1
buttonnames = ["Separable Conv2D","DW Conv2D","Conv1D" ,"Conv3D","MaxPooling2D","AveragePooling2D"
,"GlobalMaxPooling2D", "GlobalAveragePooling2D","BatchNormalization","Dropout","Reshape","UpSampling2D"
,"LSTM layer","ConvLSTM2D","SpatialDropout","GlobalMaxPooling1D"]
for j in range(i + 1, totalbuttons, 1):
# print("button",j,c1,c2)
print(buttonnames[j-7])
# self.buttonsarray.append(Button(self.frame1, image=defaultbutton, activebackground="#32CD32"
# , relief=RAISED, borderwidth=4))
pixelVirtual = tk.PhotoImage(width=1, height=1)
# buttonExample1 = tk.Button(app,
# text="Increase",
# image=pixelVirtual,
# width=100,
# height=100,
# compound="c")
self.buttonsarray.append(Button(self.frame1,text = buttonnames[j-7],image = pixelVirtual,compound = "c",
activebackground="#32CD32"
, relief=RAISED, borderwidth=4,height = 60 , width = 60,bg = "white"))
if j % 2 == 0:
self.buttonsarray[j].grid(row=0, column=c1)
c1 += 1
elif j % 2 == 1:
self.buttonsarray[j].grid(row=1, column=c2)
c2 += 1
self.b8 = Button(self.frame1, image=codebg, activebackground="#32CD32", relief=RAISED, borderwidth=4)
self.b9 = Button(self.frame1, image=trainbg, activebackground="#32CD32", relief=RAISED, borderwidth=4)
self.b11 = Button(self.frame1, image=buildbg, activebackground="#32CD32", relief=RAISED, borderwidth=4)
self.b12 = Button(self.frame1, image=plotbg, activebackground="#32CD32", relief=RAISED, borderwidth=4)
self.b8.bind("<ButtonPress-1>", self.compilemodel)
self.b9.bind("<ButtonPress-1>", self.trainclicked)
self.b11.bind("<ButtonPress-1>", self.buildmodel)
self.b12.bind("<ButtonPress-1>", self.plotclicked)
self.b8.place(rely=0, relx=0.84)
self.b9.place(rely=0, relx=0.92)
self.b11.place(rely=0, relx=0.68)
self.b12.place(rely=0, relx=0.76)
self.myFont = Font(family="clearlyu pua", size=12)
self.python = Frame(self.frame1, bg="black", width=280, height=140)
self.python.place(rely=0, relx=0.433, relwidth=0.245, relheight=0.95)
entryframe = Frame(self.python, bg="red", width=455, height=40)
entryframe.grid(row=1)
enterbutton = Button(entryframe, text="ENTER")
addbutton = Button(entryframe, text="add")
self.codeentry = Entry(entryframe, font=("Calibri 12 bold"))
self.codeentry.place(relx=0, rely=0, relwidth=0.9, relheight=1)
enterbutton.place(relx=0.9, rely=0, relwidth=0.1, relheight=1)
enterbutton.bind("<Button-1>", self.insertcode)
# addbutton.place(relx = 0.9,rely = 0,relwidth = 0.1,relheight = 1)
# add.bind("<Button-1>",add)
self.scrollcanvas = Canvas(self.python, height=125, width=450, bg="black")
self.scrollcanvas.grid(row=0)
self.scrollcanvasFrame = Frame(self.scrollcanvas, bg="black")
self.scrollcanvas.create_window(0, 0, window=self.scrollcanvasFrame, anchor='nw')
yscrollbar = Scrollbar(self.python, orient=VERTICAL)
yscrollbar.config(command=self.scrollcanvas.yview)
self.scrollcanvas.config(yscrollcommand=yscrollbar.set)
yscrollbar.grid(row=0, column=1, sticky="ns")
self.scrollcanvasFrame.bind("<Configure>", lambda event: self.scrollcanvas.configure(
scrollregion=self.scrollcanvas.bbox("all")))
self.frame7 = tk.Frame(self.master, background="yellow")
self.canvas = tk.Canvas(self.frame7, bg="black")
self.canvas.tag_bind("token", "<ButtonPress-1>", self.drag_start)
self.canvas.tag_bind("token", "<ButtonRelease-1>", self.drag_stop)
self.canvas.tag_bind("token", "<B1-Motion>", self.drag)
self.canvas.bind("<Shift-Button-3>", self.clickedcanvas)
self.frame7.place(relwidth=1, relheight=0.83, relx=0, rely=0.0)
self.canvas.place(relheight=1, relwidth=1)
self.toggle = Button(self.canvas, activebackground="#32CD32", relief=RAISED, borderwidth=3, height=90)
self.toggle.place(relwidth=0.03, relx=0.5, rely=0, relheight=0.03)
# self.toggle.bind("<Double-Button-1>",self.togglebg1)
self.toko = Button(self.canvas, image=toko, activebackground="#32CD32", relief=RAISED, borderwidth=3, height=90)
self.toko.place(relwidth=0.05, relx=0.005, rely=0.85)
self.toko.bind("<Double-Button-1>", self.tokoclicked)
self.ide = Button(self.canvas, image=code, activebackground="#32CD32", relief=RAISED, borderwidth=3, height=90)
self.ide.place(relwidth=0.04, relheight=0.1, relx=0.95, rely=0.0)
self.ide.bind("<Button-1>", self.ideclicked)
self.sequence = [ImageTk.PhotoImage(img)
for img in ImageSequence.Iterator(
Image.open(
bgpath + 'ai3.gif'))]
self.backgroundimage1 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/backgrounds/bg2.jpg"))
self.backgroundimage2 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/backgrounds/background1.jpg"))
self.image = self.canvas.create_image(0,0, image=self.backgroundimage1,anchor = NW)
self.editorflag = False
# self.image = self.canvas.create_image(0, 0, image=self.sequence[0], anchor=NW)
# self.animatecount = 0
# self.animate(1)
# self.change = False
# def animate(self, counter):
# self.canvas.itemconfig(self.image, image=self.sequence[counter])
# self.master.after(38, lambda: self.animate((counter + 1) % len(self.sequence)))
# self.animatecount += 1
# if self.animatecount == 2:
# self.startspeak()
# def animatedestroy(self, counter):
# self.master.after(1,lambda : _show('Title', 'Prompting after 5 seconds'))
# self.canvas.itemconfig(self.image, image=self.backgroundimage1)
# def togglebg1(self,event):
# if self.change == True:
# self.canvas.itemconfig(self.image, image=self.sequence[0])
# self.animate(1)
# self.change = False
# else:
# self.canvas.delete(self.image)
# self.image = self.canvas.create_image(0,0, image=self.backgroundimage1,anchor = NW)
# self.change = True
# animatedestroy
def ideclicked(self, event):
# obj = Ide(self.canvas)
self.editorflag = True
self.editorobj = Editor(self.canvas)
self.editorobj.frame.mainloop()
self.updatecodecanvas("the code is saved as model.py format")
def startspeak(self):
self.updatecodecanvas("Welcome to the deep learning graphical interface")
self.updatecodecanvas("For any queries click on the Toko Button ")
self.updatecodecanvas("-----------------------------")
self.updatecodecanvas("start by adding Input layer")
engine.say("Welcome to the deep learning graphical interface")
engine.say("For any queries click on the Toko Button")
engine.runAndWait()
def updatecodecanvas(self, s):
label = Label(self.scrollcanvasFrame, text=s, bg="black", foreground="white", font="Courier 12")
label.pack()
self.codelines.append(label)
self.scrollcanvas.update_idletasks()
self.scrollcanvas.yview_moveto('1.5')
def insertcode(self, event):
s = self.codeentry.get()
self.updatecodecanvas(s)
self.codeentry.delete(0, 'end')
if s == "clear":
for i in range(len(self.codelines)):
self.codelines[i].destroy()
self.scrollcanvas.yview_moveto('0')
def getwidgets(self, s, path):
widgetsactivepath = '/mnt/DATA/UI/widgets/active/'
widgetspath = '/mnt/DATA/UI/widgets/inactive/'
image = Image.open(widgetspath + path)
bg = ImageTk.PhotoImage(image)
s = widgetsactivepath + path[:-4] + "active" + path[-4:]
image = Image.open(s)
bga = ImageTk.PhotoImage(image)
return bg, bga
def drag_start(self, event):
self._drag_data["item"] = self.canvas.find_closest(event.x, event.y)[0]
self._drag_data["x"] = event.x
self._drag_data["y"] = event.y
def drag_stop(self, event):
"""End drag of an object"""
self._drag_data["item"] = None
self._drag_data["x"] = 0
self._drag_data["y"] = 0
# i = event.widget.find_withtag('current')[0]
canvas_item_id = event.widget.find_withtag('current')[0]
n1 = self.nodes.index(canvas_item_id)
self.dic[n1]["x"] = event.x
self.dic[n1]["y"] = event.y
lines = self.coonectionsdic[n1]
print("selected ", n1, "lines ==", lines)
for j in range(len(lines)):
# print("connectionsself.dic before",self.coonectionsdic[:4])
line = lines[0]
n2 = line.get("nodeid")
lineid1 = line.get("lineid")
ind = [self.coonectionsdic[n2][i].get("nodeid") for i in range(len(self.coonectionsdic[n2]))].index(n1)
lineid2 = self.coonectionsdic[n2][ind]["lineid"]
self.canvas.delete(lineid2)
self.canvas.delete(lineid1)
self.addline(n1, n2)
self.coonectionsdic[n1].pop(0)
self.coonectionsdic[n2].pop(ind)
# print("connectionsself.dic after",self.coonectionsdic[:4])
def drag(self, event):
"""Handle dragging of an object"""
delta_x = event.x - self._drag_data["x"]
delta_y = event.y - self._drag_data["y"]
self.canvas.move(self._drag_data["item"], delta_x, delta_y)
self._drag_data["x"] = event.x
self._drag_data["y"] = event.y
def create_token(self, x, y, label, bg, bga):
# print("button ",label," is pressed")
x = self.currentx
y = self.currenty
self.nodescnt += 1
if label == "conv":
self.updatecodecanvas("conv2d object is created, double-click to initialize")
self.ccnt += 1
self.currentx += 160
name = "conv"
elif label == "dense":
self.updatecodecanvas("dense object is created, double-click to initialize")
self.currentx += 80
y = y - 50
name = "dense"
elif label == "act":
self.updatecodecanvas("activation object is created, double-click to initialize")
self.currentx += 120
y = y
name = "activation"
elif label == "input":
self.updatecodecanvas("input object is created, double-click to initialize")
self.updatecodecanvas("start adding layers")
if self.icnt >= 1:
y = self.currenty - 120 * self.icnt
x = 20
self.icnt += 1
else:
self.currentx += 120
self.icnt += 1
name = "input"
elif label == "output":
self.updatecodecanvas("output object is created, double-click to initialize")
y = self.currenty - 120 * self.ocnt
x = 1800
self.ocnt += 1
name = "output"
elif label == "flatten":
self.updatecodecanvas("flatten object is created, double-click to initialize")
self.currentx += 80
y = y - 50
name = "flatten"
elif label == "add":
self.updatecodecanvas("add object is created, double-click to initialize")
self.currentx += 110
name = "add"
token = self.canvas.create_image(x, y, image=bg, anchor=NW, tags=("token",), activeimage=bga)
self.nodes.append(token)
self.tensorlist.append(None)
self.dic[self.cnt] = {"x": x, "y": y, "name": name, "Node": None, "param": [None]}
self.canvas.tag_bind(token, '<Double-Button-1>', self.itemClicked)
self.canvas.tag_bind(token, "<Shift-Button-1>", self.connect)
self.cnt += 1
def connect(self, event):
canvas_item_id = event.widget.find_withtag('current')[0]
try:
self.path.index(canvas_item_id)
# print("Node already added")
except:
self.path.append(canvas_item_id)
# print(self.path)
def clickedcanvas(self, event):
if len(self.path) > 1:
for i in range(len(self.path) - 1):
n1 = self.nodes.index(self.path[i])
n2 = self.nodes.index(self.path[i + 1])
self.graph[n1].append(n2)
self.invgraph[n2].append(n1)
self.addline(n1, n2)
self.pathscnt += 1
print("added path")
self.path = []
print(self.path)
if self.pathscnt == self.nodescnt - 1:
self.updatecodecanvas("path is created build now")
def addline(self, n1, n2):
x1 = self.dic[n1]["x"]
y1 = self.dic[n1]["y"]
x2 = self.dic[n2]["x"]
y2 = self.dic[n2]["y"]
lineid1 = self.canvas.create_line(x1, y1, x2, y2, fill="white", width=3)
lineid2 = self.canvas.create_line(x2, y2, x1, y1, fill="white", width=3)
self.coonectionsdic[n1].append({"nodeid": n2, "lineid": lineid1})
self.coonectionsdic[n2].append({"nodeid": n1, "lineid": lineid2})
def itemClicked(self, event):
canvas_item_id = event.widget.find_withtag('current')[0]
print('Item', canvas_item_id, 'Clicked!')
ind = self.nodes.index(canvas_item_id)
self.objectscnt += 1
if self.dic[ind]["name"].startswith("conv"):
print("conv")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Conv2D(**args) is created")
ConvDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
elif self.dic[ind]["name"].startswith("dense"):
print("dense")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Dense(**args) is created")
DenseDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
elif self.dic[ind]["name"].startswith("input"):
print("input")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Input(**args) is created")
InputDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
elif self.dic[ind]["name"].startswith("output"):
print("output")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Dense(**args) is created")
OutputDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
elif self.dic[ind]["name"].startswith("add"):
print("add")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Add(**args) is created")
AddDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
# elif self.dic[ind]["name"].startswith("act"):
# print("activation")
# self.updatecodecanvas(str(canvas_item_id)+":tensorflow.keras.layers.Activation(**args) is created")
# actDialog(self.canvas,event.x,event.y,ind)
elif self.dic[ind]["name"].startswith("flatten"):
print("flatten")
self.updatecodecanvas(str(canvas_item_id) + ":tensorflow.keras.layers.Flatten(**args) is created")
FlattenDialog(self.canvas, self, event.x, event.y, ind)
dic = self.dic
if self.objectscnt >= self.nodescnt:
self.updatecodecanvas("---all objects are created.---")
self.updatecodecanvas("--- you can add paths now---")
self.updatecodecanvas("hold shift + left mouse click to add paths")
self.updatecodecanvas("hold shift + right mouse click to connect paths")
self.updatecodecanvas(" ")
# print(self.dic)
def tokoclicked(self, event):
obj = TokoDialog(self.canvas)
def plotclicked(self, event):
global getplotimage
obj = ModelplotDialog(self.canvas, getplotimage())
self.updatecodecanvas("the plot is a graphical representation of the model in image format")
def codeclicked(self, event):
# obj = ModelcodeDialog(self.canvas)
obj = Editor(self.canvas)
obj.frame.mainloop()
self.updatecodecanvas("the code is saved as model.py format")
def summaryclicked(self):
obj = ModelsummaryDialog(self.canvas)
self.updatecodecanvas("the summary shows the architecture of the model")
self.updatecodecanvas("built successful")
self.updatecodecanvas("you can train model now")
self.updatecodecanvas("remember to compile model first")
def trainclicked(self, event):
obj = Train(self.canvas, [self.t1, self.t2, self.t3, self.t4, self.t5], self.model, self)
def printdict(self, event):
print(self.nodes)
print(self.dic)
def addtocode(self, node, xlist):
name = self.dic[node]["name"]
params = self.dic[node]["param"]
print("params = ", params)
if name.startswith("conv"):
self.updatecodecanvas("x = tf.keras.layers.Conv2D(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(
" {} = tf.keras.layers.Conv2D(filters={},kernel_size={},strides={},padding= '{}',dilation_rate= {},activation= '{}')({})\n".format(
"x" + str(node), params[0], params[1], params[2], params[3], params[4], params[5], xlist))
elif name.startswith("dense"):
self.updatecodecanvas("x = tf.keras.layers.Dense(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Dense(units = {},activation= '{}',use_bias= {})({})\n".format(
"x" + str(node), params[0], params[1], params[2], xlist))
elif name.startswith("output"):
self.updatecodecanvas("x = tf.keras.layers.Dense(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Dense(units = {},activation= '{}',use_bias= {})({})\n".format(
"x" + str(node), params[0], params[1], params[2], xlist))
elif name.startswith("flatten"):
self.updatecodecanvas("x = tf.keras.layers.Flatten(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Flatten()({})\n".format("x" + str(node), xlist))
elif name.startswith("act"):
self.updatecodecanvas("x = tf.keras.layers.Activation(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Activation()({})\n".format("x" + str(node), xlist))
elif name.startswith("add"):
self.updatecodecanvas("x = tf.keras.layers.Add(**args)(x)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Add(**args)({})\n".format("x" + str(node), xlist))
elif name.startswith("input"):
self.updatecodecanvas("x = tf.keras.layers.Input(**args)")
with open("model.py", "a") as myfile:
myfile.write(" {} = tf.keras.layers.Input(shape = {})\n".format("x" + str(node), params[0]))
def rec(self, node):
if node == 0:
self.tensorlist[0] = self.dic[0]["Node"]
self.addtocode(node, 0)
return self.tensorlist[0]
else:
lis = [self.rec(i) for i in self.invgraph[node]]
xlist = ["x" + str(i) for i in self.invgraph[node]]
if len(lis) == 1:
self.tensorlist[node] = self.dic[node]["Node"](lis[0])
self.addtocode(node, xlist[0])
else:
self.tensorlist[node] = self.dic[node]["Node"](lis)
self.addtocode(node, xlist)
print(self.dic[node]["Node"], self.dic[node]["name"])
return self.tensorlist[node]
def buildmodel(self, event):
for i in range(len(self.nodes)):
if len(self.graph[i]) == 0:
end = i
self.rec(i)
self.model = tf.keras.models.Model(self.tensorlist[0], self.tensorlist[i])
self.model.summary()
tf.keras.utils.plot_model(self.model, "/home/crazy/UI/modelplot.png")
with open('modelsummary.txt', 'w') as f:
with redirect_stdout(f):
self.model.summary()
with open("model.py", "a") as myfile:
myfile.write(
" return tf.keras.models.Model(inputs = {} , outputs = {})\n\n".format("x" + str(0), "x" + str(i)))
myfile.write("\n")
myfile.write("model = mymodel()\n")
myfile.write("model.summary()")
# self.editorobj.root.destroy()
# self.editorobj = Editor(self.canvas)
# self.editorobj.frame.mainloop()
print("compiling model")
if self.editorflag == True:
self.editorobj.first_open("model.py")
self.updatecodecanvas("the code is saved as model.py format")
self.summaryclicked()
def compilemodel(self, event):
obj = CompileDialog(self.canvas,self.model)
self.updatecodecanvas("model is compiled. Start training")
class Mainclass:
def __init__(self):
# self.sroot = sroot
# self.traindatagen = sroot.traingen
# self.testdatagen = sroot.testgen
# self.traindatagen = []
# self.testdatagen = []
# sroot.controller.root.destroy()
self.window = tk.Tk()
self.window.attributes('-zoomed', True)
self.fullScreenState = False
self.window.bind("<F11>", self.toggleFullScreen)
self.window.bind("<Escape>", self.quitFullScreen)
self.window.title("DEEP-LEARNING UI")
b = self.getbuttonimages()
app = Window(b, root=self, master=self.window)
self.window.mainloop()
def getbuttonimages(self):
# ["input","conv","dense","act","output","flatten","add"]
default = self.buttonimage("later")
bg1 = self.buttonimage("input")
bg2 = self.buttonimage("conv")
bg3 = self.buttonimage("dense")
bg4 = self.buttonimage("act")
bg5 = self.buttonimage("output")
bg6 = self.buttonimage("flatten")
bg7 = self.buttonimage("add")
bg8 = self.buttonimage("embedding")
# bg9 = self.buttonimage("add")
# bg10 = self.buttonimage("add")
# bg11 = self.buttonimage("add")
# bg12 = self.buttonimage("add")
# bg13 = self.buttonimage("add")
# bg14 = self.buttonimage("add")
# bg15 = self.buttonimage("add")
# bg16 = self.buttonimage("add")
# fg1 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/plotbutton3.png"))
fg1 = self.buttonimage("plotlarge")
fg2 = self.buttonimage("train")
fg3 = self.buttonimage("compilebuttonlarge")
fg4 = self.buttonimage("buildlarge")
# fg3 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/codebutton3.png"))
# fg4 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/buildbutton3.png"))
toko = self.buttonimage("toko")
code = self.buttonimage("code")
plot = ImageTk.PhotoImage(Image.open("/home/crazy/UI/modelplot.png"))
train1 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/trainingbutton.png"))
train2 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/predbutton.png"))
train3 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/databutton.png"))
train4 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/compilebutton1.png"))
train5 = ImageTk.PhotoImage(Image.open("/mnt/DATA/UI/buttons/databutton.png"))
return [default, [bg1, bg2, bg3, bg4, bg5, bg6, bg8], [fg1, fg2, fg3, fg4], toko, plot, code,
[train1, train2, train3, train4, train5]]
def buttonimage(self, s, resize=False):
path = buttonspath +"edited/"+ s + "button2.png"
image = Image.open(path)
if resize == True:
image = image.resize((200, 140), Image.ANTIALIAS)
bg = ImageTk.PhotoImage(image)
return bg
def toggleFullScreen(self, event):
self.fullScreenState = not self.fullScreenState
self.window.attributes("-zoomed", self.fullScreenState)
def quitFullScreen(self, event):
self.fullScreenState = False
self.window.attributes("-zoomed", self.fullScreenState)
def toggleFullScreen(event):
fullScreenState = not fullScreenState
root.attributes("-zoomed", fullScreenState)
def quitFullScreen(event):
fullScreenState = False
root.attributes("-zoomed", fullScreenState)
from PIL import Image, ImageTk, ImageSequence
# sroot = Tk()
# sroot.minsize(height=1000, width=1000)
# sroot.title("Loading.....")
# sroot.configure()
# spath = "/mnt/DATA/UI/backgrounds/"
Mainclass()
# sroot.mainloop()
```
# Object Detection
*Object detection* is a form of computer vision in which a machine learning model is trained to classify individual instances of objects in an image, and indicate a *bounding box* that marks each one's location. You can think of this as a progression from *image classification* (in which the model answers the question "what is this an image of?") to building solutions where we can ask the model "what objects are in this image, and where are they?".
<p style='text-align:center'><img src='./images/object-detection.jpg' alt='A robot identifying fruit'/></p>
For example, a grocery store might use an object detection model to implement an automated checkout system that scans a conveyor belt using a camera, and can identify specific items without the need to place each item on the belt and scan them individually.
The **Custom Vision** cognitive service in Microsoft Azure provides a cloud-based solution for creating and publishing custom object detection models.
## Create a Custom Vision resource
To use the Custom Vision service, you need an Azure resource that you can use to train a model, and a resource with which you can publish it for applications to use. You can use the same resource for each of these tasks, or you can use different resources for each to allocate costs separately provided both resources are created in the same region. The resource for either (or both) tasks can be a general **Cognitive Services** resource, or a specific **Custom Vision** resource. Use the following instructions to create a new **Custom Vision** resource (or you can use an existing resource if you have one).
1. In a new browser tab, open the Azure portal at [https://portal.azure.com](https://portal.azure.com), and sign in using the Microsoft account associated with your Azure subscription.
2. Select the **+Create a resource** button, search for *custom vision*, and create a **Custom Vision** resource with the following settings:
- **Create options**: Both
- **Subscription**: *Your Azure subscription*
- **Resource group**: *Create a new resource group with a unique name*
- **Name**: *Enter a unique name*
- **Training location**: *Choose any available region*
- **Training pricing tier**: F0
- **Prediction location**: *The same as the training location*
- **Prediction pricing tier**: F0
> **Note**: If you already have an F0 custom vision service in your subscription, select **S0** for this one.
3. Wait for the resource to be created.
## Create a Custom Vision project
To train an object detection model, you need to create a Custom Vision project based on your training resource. To do this, you'll use the Custom Vision portal.
1. In a new browser tab, open the Custom Vision portal at [https://customvision.ai](https://customvision.ai), and sign in using the Microsoft account associated with your Azure subscription.
2. Create a new project with the following settings:
- **Name**: Grocery Detection
- **Description**: Object detection for groceries.
- **Resource**: *The Custom Vision resource you created previously*
- **Project Types**: Object Detection
- **Domains**: General
3. Wait for the project to be created and opened in the browser.
## Add and tag images
To train an object detection model, you need to upload images that contain the classes you want the model to identify, and tag them to indicate bounding boxes for each object instance.
1. Download and extract the training images from https://aka.ms/fruit-objects. The extracted folder contains a collection of images of fruit.
2. In the Custom Vision portal, in your object detection project, select **Add images** and upload all of the images in the extracted folder.
3. After the images have been uploaded, select the first one to open it.
4. Hold the mouse over any object in the image until an automatically detected region is displayed like the image below. Then select the object, and if necessary resize the region to surround it.
<p style='text-align:center'><img src='./images/object-region.jpg' alt='The default region for an object'/></p>
Alternatively, you can simply drag around the object to create a region.
5. When the region surrounds the object, add a new tag with the appropriate object type (*apple*, *banana*, or *orange*) as shown here:
<p style='text-align:center'><img src='./images/object-tag.jpg' alt='A tagged object in an image'/></p>
6. Select and tag each other object in the image, resizing the regions and adding new tags as required.
<p style='text-align:center'><img src='./images/object-tags.jpg' alt='Two tagged objects in an image'/></p>
7. Use the **>** link on the right to go to the next image, and tag its objects. Then just keep working through the entire image collection, tagging each apple, banana, and orange.
8. When you have finished tagging the last image, close the **Image Detail** editor and on the **Training Images** page, under **Tags**, select **Tagged** to see all of your tagged images:
<p style='text-align:center'><img src='./images/tagged-images.jpg' alt='Tagged images in a project'/></p>
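Under the hood, each tagged region is stored as *normalized* coordinates - left, top, width, and height expressed as fractions of the image dimensions - which is also how the prediction SDK returns bounding boxes later in this exercise. A minimal sketch of converting such a region back to pixels (the helper name is our own, not part of the service):

```python
def to_pixels(box, img_w, img_h):
    """Convert a normalized (left, top, width, height) region to pixel coordinates."""
    left, top, width, height = box
    return (round(left * img_w), round(top * img_h),
            round(width * img_w), round(height * img_h))

# e.g. a region covering the middle quarter of a 640x480 image
print(to_pixels((0.25, 0.25, 0.5, 0.5), 640, 480))  # → (160, 120, 320, 240)
```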
## Train and test a model
Now that you've tagged the images in your project, you're ready to train a model.
1. In the Custom Vision project, click **Train** to train an object detection model using the tagged images. Select the **Quick Training** option.
2. Wait for training to complete (it might take ten minutes or so), and then review the *Precision*, *Recall*, and *mAP* performance metrics - these measure the prediction accuracy of the object detection model, and should all be high.
3. At the top right of the page, click **Quick Test**, and then in the **Image URL** box, enter `https://aka.ms/apple-orange` and view the prediction that is generated. Then close the **Quick Test** window.
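*Precision* and *recall* come from counting true positives (correct detections), false positives (spurious detections), and false negatives (missed objects); *mAP* then averages precision over recall levels and classes. A quick sketch of the first two metrics (our own helper, not something the portal exposes):

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of predicted boxes that are correct.
    Recall: fraction of ground-truth objects that were found."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g. 8 correct detections, 2 false positives, 2 missed objects
p, r = precision_recall(8, 2, 2)
print(p, r)  # → 0.8 0.8
```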
## Publish and consume the object detection model
Now you're ready to publish your trained model and use it from a client application.
1. At the top left of the **Performance** page, click **🗸 Publish** to publish the trained model with the following settings:
- **Model name**: detect-produce
- **Prediction Resource**: *Your custom vision **prediction** resource*.
2. After publishing, click the *settings* (⚙) icon at the top right of the **Performance** page to view the project settings. Then, under **General** (on the left), copy the **Project Id** and paste it into the code cell below replacing **YOUR_PROJECT_ID**.
> (*If you used a **Cognitive Services** resource instead of creating a **Custom Vision** resource at the beginning of this exercise, you can copy its key and endpoint from the right side of the project settings, paste them into the code cell below, and run it to see the results. Otherwise, continue with the steps below to get the key and endpoint for your Custom Vision prediction resource.*)
3. At the top left of the **Project Settings** page, click the *Projects Gallery* (👁) icon to return to the Custom Vision portal home page, where your project is now listed.
4. On the Custom Vision portal home page, at the top right, click the *settings* (⚙) icon to view the settings for your Custom Vision service. Then, under **Resources**, expand your *prediction* resource (<u>not</u> the training resource) and copy its **Key** and **Endpoint** values to the code cell below, replacing **YOUR_KEY** and **YOUR_ENDPOINT**.
5. Run the code cell below by clicking its green <span style="color:green">▷</span> button (at the top left of the cell) to set the variables to your project ID, key, and endpoint values.
```
project_id = 'YOUR_PROJECT_ID' # Replace with your project ID
cv_key = 'YOUR_KEY' # Replace with your prediction resource primary key
cv_endpoint = 'YOUR_ENDPOINT' # Replace with your prediction resource endpoint
model_name = 'detect-produce' # this must match the model name you set when publishing your model iteration exactly (including case)!
print('Ready to predict using model {} in project {}'.format(model_name, project_id))
```
Client applications can use the details above to connect to and use your Custom Vision object detection model.
Run the following code cell, which uses your model to detect individual produce items in an image.
> **Note**: Don't worry too much about the details of the code. It uses the Python SDK for the Custom Vision service to submit an image to your model and retrieve predictions for detected objects. Each prediction consists of a class name (*apple*, *banana*, or *orange*) and *bounding box* coordinates that indicate where in the image the predicted object has been detected. The code then uses this information to draw a labelled box around each object on the image.
```
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
from matplotlib import pyplot as plt
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import os
%matplotlib inline
# Load a test image and get its dimensions
test_img_file = os.path.join('data', 'object-detection', 'produce.jpg')
test_img = Image.open(test_img_file)
test_img_h, test_img_w, test_img_ch = np.array(test_img).shape
# Get a prediction client for the object detection model
credentials = ApiKeyCredentials(in_headers={"Prediction-key": cv_key})
predictor = CustomVisionPredictionClient(endpoint=cv_endpoint, credentials=credentials)
print('Detecting objects in {} using model {} in project {}...'.format(test_img_file, model_name, project_id))
# Detect objects in the test image
with open(test_img_file, mode="rb") as test_data:
results = predictor.detect_image(project_id, model_name, test_data)
# Create a figure to display the results
fig = plt.figure(figsize=(8, 8))
plt.axis('off')
# Display the image with boxes around each detected object
draw = ImageDraw.Draw(test_img)
lineWidth = int(np.array(test_img).shape[1]/100)
object_colors = {
"apple": "lightgreen",
"banana": "yellow",
"orange": "orange"
}
for prediction in results.predictions:
color = 'white' # default for 'other' object tags
if (prediction.probability*100) > 50:
if prediction.tag_name in object_colors:
color = object_colors[prediction.tag_name]
left = prediction.bounding_box.left * test_img_w
top = prediction.bounding_box.top * test_img_h
height = prediction.bounding_box.height * test_img_h
width = prediction.bounding_box.width * test_img_w
points = ((left,top), (left+width,top), (left+width,top+height), (left,top+height),(left,top))
draw.line(points, fill=color, width=lineWidth)
plt.annotate(prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100),(left,top), backgroundcolor=color)
plt.imshow(test_img)
```
View the resulting predictions, which show the objects detected and the probability for each prediction.
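The cell above only draws a labelled box when a prediction's probability exceeds 50%; everything else falls through to the default color. The same filter as a standalone sketch, with made-up `(tag, probability)` tuples standing in for the SDK's prediction objects:

```python
# Hypothetical predictions shaped like the SDK results: (tag_name, probability)
predictions = [("apple", 0.97), ("banana", 0.32), ("orange", 0.81)]

# Keep only detections above the 50% threshold used in the cell above
confident = [(tag, p) for tag, p in predictions if p * 100 > 50]
print(confident)  # → [('apple', 0.97), ('orange', 0.81)]
```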
# Classifier
Classifiers used to classify a song's genre given its lyrics.
## Loading and Processing Dataset
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV
filename = '../data/dataset.csv'
df = pd.read_csv(filename, header=None, names=["artist", "song", "genre", "tokens"])
def to_list(x):
return x[1:-1].split(',')
df['tokens'] = df['tokens'].apply(to_list)
from ast import literal_eval
from sklearn.feature_extraction.text import TfidfVectorizer
data = df.values
def unison_shuffled_copies(a, b):
assert a.shape[0] == b.shape[0]
p = np.random.permutation(a.shape[0])
return a[p], b[p]
def dummy_fun(x):
return x
tfidf = TfidfVectorizer(
analyzer='word',
tokenizer=dummy_fun,
preprocessor=dummy_fun,
token_pattern=None)
X = tfidf.fit_transform(data[:,3])
Y = data[:, 2]
X, Y = unison_shuffled_copies(X, Y)
X_train = X[:4800]
X_valid = X[4800:5500]
X_test = X[5500:]
Y_train = Y[:4800]
Y_valid = Y[4800:5500]
Y_test = Y[5500:]
```
## Naive Bayes
```
from sklearn.naive_bayes import MultinomialNB
alpha_params = [0, 0.2, 0.6, 1.0, 2.0, 5.0, 10.0]
best_alpha, best_score = alpha_params[0], 0
scores = []
for alpha in alpha_params:
nb_model = MultinomialNB(alpha=alpha).fit(X_train, Y_train)
predicted = nb_model.predict(X_valid)
score = accuracy_score(predicted, Y_valid)
if score > best_score:
best_score = score
best_alpha = alpha
scores.append(score)
print(f"Alpha: {alpha}, Validation Accuracy: {score}")
plt.plot(alpha_params, scores)
plt.ylabel('Accuracy')
plt.title('Multinomial NB: Alpha & Accuracy')
plt.xlabel('Alpha')
plt.savefig('nb.png')
plt.show()
nb_model = MultinomialNB(alpha=best_alpha).fit(X_train, Y_train)
predicted = nb_model.predict(X_test)
score = accuracy_score(predicted, Y_test)
print(f"Naive Bayes Test Accuracy: {score}")
```
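The `alpha` value swept above is the additive (Laplace/Lidstone) smoothing parameter: it adds a pseudo-count to every word so that words unseen in a class don't force a zero probability. A sketch of the per-word estimate `MultinomialNB` uses (note that `alpha=0` disables smoothing, which scikit-learn warns about):

```python
def smoothed_prob(word_count, total_count, vocab_size, alpha):
    """Additive smoothing as in MultinomialNB:
    P(w|c) = (count(w, c) + alpha) / (total(c) + alpha * |V|)."""
    return (word_count + alpha) / (total_count + alpha * vocab_size)

# An unseen word (count 0) still gets a non-zero probability when alpha > 0
print(smoothed_prob(0, 100, 50, 1.0))  # → ~0.00667
print(smoothed_prob(0, 100, 50, 0.0))  # → 0.0
```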
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
c_params = [0.1, 0.2, 0.4, 0.7, 1.0, 2.0, 3.5, 5.0, 10.0, 20.0]
best_c, best_score = c_params[0], 0
scores = []
for c in c_params:
lr_model = LogisticRegression(C = c).fit(X_train, Y_train)
predicted = lr_model.predict(X_valid)
score = accuracy_score(predicted, Y_valid)
if score > best_score:
best_score = score
best_c = c
scores.append(score)
print(f"C: {c}, Validation Accuracy: {score}")
plt.plot(c_params, scores)
plt.ylabel('Accuracy')
plt.title('Logistic Regression: Regularization (C) & Accuracy')
plt.xlabel('C')
plt.savefig('lr.png')
plt.show()
lr_model = LogisticRegression(C=best_c).fit(X_train, Y_train)
predicted = lr_model.predict(X_test)
score = accuracy_score(predicted, Y_test)
print(f"Logistic Regression Test Accuracy: {score}")
```
## Support Vector Machine
```
from sklearn.svm import SVC
c_params = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 5.0, 10.0]
best_c, best_score = c_params[0], 0
scores = []
for c in c_params:
svc = SVC(kernel='linear', C=c).fit(X_train, Y_train)
predicted = svc.predict(X_valid)
score = accuracy_score(predicted, Y_valid)
if score > best_score:
best_score = score
best_c = c
scores.append(score)
print(f"C: {c}, Validation Accuracy: {score}")
plt.plot(c_params, scores)
plt.ylabel('Accuracy')
plt.title('SVM: Penalty (C) & Accuracy')
plt.xlabel('C')
plt.savefig('svm.png')
plt.show()
svm_model = SVC(kernel='linear', C=best_c).fit(X_train, Y_train)
predicted = svm_model.predict(X_test)
score = accuracy_score(predicted, Y_test)
print(f"SVM Test Accuracy: {score}")
```
## Random Forests
```
from sklearn.ensemble import RandomForestClassifier
parameters = [{"n_estimators": [10, 20, 40, 60], "max_depth": [5, 10, 20, 35, 60], "min_samples_split": [2, 4, 8]}]
clf = GridSearchCV(RandomForestClassifier(), parameters, cv=5)
clf.fit(X_train, Y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
clf = RandomForestClassifier(max_depth = 60, n_estimators= 60, min_samples_split = 4).fit(X_train, Y_train)
predicted = clf.predict(X_test)
score = accuracy_score(predicted, Y_test)
print(f"Random Forest Test Accuracy: {score}")
```
# Kaggle Dataset
```
from numpy import genfromtxt
kaggle_data = genfromtxt('clean_data.csv', delimiter=',', dtype=None)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
Y = kaggle_data[:,0]
count_vect = CountVectorizer()
X = count_vect.fit_transform(kaggle_data[:,1])
tfidf_transformer = TfidfTransformer()
X = tfidf_transformer.fit_transform(X)
X_train = X[:250000]
Y_train = Y[:250000]
X_test = X[250000:]
Y_test = Y[250000:]
```
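One caveat with the manual slice above: taking the first 250000 rows for training assumes the CSV rows are already shuffled. As a hedged alternative (the toy arrays below stand in for the tf-idf matrix `X` and labels `Y`), scikit-learn's `train_test_split` shuffles — and can stratify by label — before splitting:

```
import numpy as np
from sklearn.model_selection import train_test_split
# toy stand-ins for the notebook's tf-idf features and labels
X = np.arange(20).reshape(10, 2)
Y = np.array([0] * 5 + [1] * 5)
# shuffle=True is the default; stratify keeps the label ratio in both splits
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, shuffle=True, stratify=Y, random_state=0)
print(X_train.shape, X_test.shape)
```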
# Facial Keypoint Detection
This project will be all about defining and training a convolutional neural network to perform facial keypoint detection, and using computer vision techniques to transform images of faces. The first step in any challenge like this will be to load and visualize the data you'll be working with.
Let's take a look at some examples of images and corresponding facial keypoints.
<img src='images/key_pts_example.png' width=50% height=50%/>
Facial keypoints (also called facial landmarks) are the small magenta dots shown on each of the faces in the image above. In each training and test image, there is a single face and **68 keypoints, with coordinates (x, y), for that face**. These keypoints mark important areas of the face: the eyes, corners of the mouth, the nose, etc. These keypoints are relevant for a variety of tasks, such as face filters, emotion recognition, pose recognition, and so on. Here they are, numbered, and you can see that specific ranges of points match different portions of the face.
<img src='images/landmarks_numbered.jpg' width=30% height=30%/>
---
## Load and Visualize Data
The first step in working with any dataset is to become familiar with your data; you'll need to load in the images of faces and their keypoints and visualize them! This set of image data has been extracted from the [YouTube Faces Dataset](https://www.cs.tau.ac.il/~wolf/ytfaces/), which includes videos of people sourced from YouTube. These videos have been fed through some processing steps and turned into sets of image frames containing one face and the associated keypoints.
#### Training and Testing Data
This facial keypoints dataset consists of 5770 color images. All of these images are separated into either a training or a test set of data.
* 3462 of these images are training images, for you to use as you create a model to predict keypoints.
* 2308 are test images, which will be used to test the accuracy of your model.
The information about the images and keypoints in this dataset is summarized in CSV files, which we can read in using `pandas`. Let's read the training CSV and get the annotations in an (N, 2) array, where N is the number of keypoints and 2 is the dimension of the keypoint coordinates (x, y).
---
#### Additional setup for cloud environments (ignore if your Jupyter instance runs on-premises)
First, before we do anything, we have to load in our image data. This data is stored in a zip file; in the cell below, we access it by its URL and unzip the data into a /data/ directory that is separate from the workspace home directory.
```
# -- DO NOT CHANGE THIS CELL -- #
!mkdir /data
!wget -P /data/ https://s3.amazonaws.com/video.udacity-data.com/topher/2018/May/5aea1b91_train-test-data/train-test-data.zip
!unzip -n /data/train-test-data.zip -d /data
# import the required libraries
import glob
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
key_pts_frame = pd.read_csv('data/training_frames_keypoints.csv')
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].to_numpy()
key_pts = key_pts.astype('float').reshape(-1, 2)
print('Image name: ', image_name)
print('Landmarks shape: ', key_pts.shape)
print('First 4 key pts: {}'.format(key_pts[:4]))
# print out some stats about the data
print('Number of images: ', key_pts_frame.shape[0])
```
## Look at some images
Below is a function `show_keypoints` that takes in an image and keypoints and displays them. As you look at this data, **note that these images are not all of the same size**, and neither are the faces! To eventually train a neural network on these images, we'll need to standardize their shape.
```
def show_keypoints(image, key_pts):
"""Show image with keypoints"""
plt.imshow(image)
plt.scatter(key_pts[:, 0], key_pts[:, 1], s=20, marker='.', c='m')
# Display a few different types of images by changing the index n
# select an image by index in our data frame
n = 0
image_name = key_pts_frame.iloc[n, 0]
key_pts = key_pts_frame.iloc[n, 1:].to_numpy()
key_pts = key_pts.astype('float').reshape(-1, 2)
plt.figure(figsize=(5, 5))
show_keypoints(mpimg.imread(os.path.join('data/training/', image_name)), key_pts)
plt.show()
```
## Dataset class and Transformations
To prepare our data for training, we'll be using PyTorch's Dataset class. Much of this code is a modified version of what can be found in the [PyTorch data loading tutorial](http://pytorch.org/tutorials/beginner/data_loading_tutorial.html).
#### Dataset class
``torch.utils.data.Dataset`` is an abstract class representing a
dataset. This class will allow us to load batches of image/keypoint data, and uniformly apply transformations to our data, such as rescaling and normalizing images for training a neural network.
Your custom dataset should inherit ``Dataset`` and override the following
methods:
- ``__len__`` so that ``len(dataset)`` returns the size of the dataset.
- ``__getitem__`` to support the indexing such that ``dataset[i]`` can
be used to get the i-th sample of image/keypoint data.
Let's create a dataset class for our face keypoints dataset. We will
read the CSV file in ``__init__`` but leave the reading of images to
``__getitem__``. This is memory efficient because all the images are not
stored in the memory at once but read as required.
A sample of our dataset will be a dictionary
``{'image': image, 'keypoints': key_pts}``. Our dataset will take an
optional argument ``transform`` so that any required processing can be
applied on the sample. We will see the usefulness of ``transform`` in the
next section.
```
from torch.utils.data import Dataset, DataLoader
class FacialKeypointsDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.key_pts_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.key_pts_frame)
def __getitem__(self, idx):
image_name = os.path.join(self.root_dir,
self.key_pts_frame.iloc[idx, 0])
image = mpimg.imread(image_name)
# if image has an alpha color channel, get rid of it
if(image.shape[2] == 4):
image = image[:,:,0:3]
key_pts = self.key_pts_frame.iloc[idx, 1:].to_numpy()
key_pts = key_pts.astype('float').reshape(-1, 2)
sample = {'image': image, 'keypoints': key_pts}
if self.transform:
sample = self.transform(sample)
return sample
```
Now that we've defined this class, let's instantiate the dataset and display some images.
```
# Construct the dataset
face_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/')
# print some stats about the dataset
print('Length of dataset: ', len(face_dataset))
# Display a few of the images from the dataset
num_to_display = 3
for i in range(num_to_display):
# define the size of images
fig = plt.figure(figsize=(20,10))
# randomly select a sample
rand_i = np.random.randint(0, len(face_dataset))
sample = face_dataset[rand_i]
# print the shape of the image and keypoints
print(i, sample['image'].shape, sample['keypoints'].shape)
ax = plt.subplot(1, num_to_display, i + 1)
ax.set_title('Sample #{}'.format(i))
# Using the same display function, defined earlier
show_keypoints(sample['image'], sample['keypoints'])
```
## Transforms
Now, the images above are not all the same size, and neural networks often expect standardized inputs: a fixed size, a normalized range for color values and keypoint coordinates, and (for PyTorch) conversion from numpy arrays to Tensors.
Therefore, we will need to write some pre-processing code.
Let's create four transforms:
- ``Normalize``: to convert a color image to grayscale values with a range of [0,1] and normalize the keypoints to be in a range of about [-1, 1]
- ``Rescale``: to rescale an image to a desired size.
- ``RandomCrop``: to crop an image randomly.
- ``ToTensor``: to convert numpy images to torch images.
We will write them as callable classes instead of simple functions so
that the parameters of a transform need not be passed every time it's
called. For this, we just need to implement the ``__call__`` method and
(if we require parameters to be passed in) the ``__init__`` method.
We can then use a transform like this:
tx = Transform(params)
transformed_sample = tx(sample)
Observe below how these transforms are generally applied to both the image and its keypoints.
```
import torch
from torchvision import transforms, utils
# transforms
class Normalize(object):
"""Convert a color image to grayscale, scale the color range to [0, 1], and normalize the keypoints to roughly [-1, 1]."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
image_copy = np.copy(image)
key_pts_copy = np.copy(key_pts)
# convert image to grayscale
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
# scale color range from [0, 255] to [0, 1]
image_copy= image_copy/255.0
# scale keypoints to be centered around 0 with a range of [-1, 1]
# mean = 100, std = 50, so, pts should be (pts - 100)/50
key_pts_copy = (key_pts_copy - 100)/50.0
return {'image': image_copy, 'keypoints': key_pts_copy}
class Rescale(object):
"""Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = cv2.resize(image, (new_w, new_h))
# scale the pts, too
key_pts = key_pts * [new_w / w, new_h / h]
return {'image': img, 'keypoints': key_pts}
class RandomCrop(object):
"""Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
"""
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
key_pts = key_pts - [left, top]
return {'image': image, 'keypoints': key_pts}
class ToTensor(object):
"""Convert ndarrays in sample to Tensors."""
def __call__(self, sample):
image, key_pts = sample['image'], sample['keypoints']
# if image has no grayscale color channel, add one
if(len(image.shape) == 2):
# add that third color dim
image = image.reshape(image.shape[0], image.shape[1], 1)
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'keypoints': torch.from_numpy(key_pts)}
```
## Test out the transforms
Let's test these transforms out to make sure they behave as expected. As you look at each transform, note that, in this case, **order does matter**. For example, you cannot crop an image to a size larger than the original image (and the original images vary in size!), but if you first rescale the original image, you can then crop it to any size smaller than the rescaled size.
```
# test out some of these transforms
rescale = Rescale(100)
crop = RandomCrop(50)
composed = transforms.Compose([Rescale(250),
RandomCrop(224)])
# apply the transforms to a sample image
test_num = 500
sample = face_dataset[test_num]
fig = plt.figure()
for i, tx in enumerate([rescale, crop, composed]):
transformed_sample = tx(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tx).__name__)
show_keypoints(transformed_sample['image'], transformed_sample['keypoints'])
plt.show()
```
## Create the transformed dataset
Apply the transforms in order to get grayscale images of the same shape. Verify that your transform works by printing out the shape of the resulting data (printing out a few examples should show you a consistent tensor size).
```
# define the data transform
# order matters! i.e. rescaling should come before a smaller crop
data_transform = transforms.Compose([Rescale(250),
RandomCrop(224),
Normalize(),
ToTensor()])
# create the transformed dataset
transformed_dataset = FacialKeypointsDataset(csv_file='data/training_frames_keypoints.csv',
root_dir='data/training/',
transform=data_transform)
# print some stats about the transformed data
print('Number of images: ', len(transformed_dataset))
# make sure the sample tensors are the expected size
for i in range(5):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['keypoints'].size())
```
## Data Iteration and Batching
Right now, we are iterating over this data using a ``for`` loop, but we are missing out on a lot of PyTorch's dataset capabilities, specifically the abilities to:
- Batch the data
- Shuffle the data
- Load the data in parallel using ``multiprocessing`` workers.
``torch.utils.data.DataLoader`` is an iterator which provides all these
features, and we'll see this in use in the *next* notebook, Notebook 2, when we load data in batches to train a neural network!
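As a minimal sketch of those features (using a toy stand-in dataset here rather than the facial keypoints data — the real usage comes in Notebook 2), a `DataLoader` can batch and shuffle any `Dataset` that returns sample dicts like ours:

```
import torch
from torch.utils.data import Dataset, DataLoader

# a tiny stand-in dataset returning sample dicts shaped like ours
class ToyDataset(Dataset):
    def __len__(self):
        return 12
    def __getitem__(self, idx):
        return {'image': torch.zeros(1, 8, 8), 'keypoints': torch.zeros(68, 2)}

# batch and shuffle; num_workers > 0 would load samples in parallel workers
loader = DataLoader(ToyDataset(), batch_size=4, shuffle=True, num_workers=0)
batch = next(iter(loader))
# the default collate function stacks each dict entry along a new batch dim
print(batch['image'].shape, batch['keypoints'].shape)
```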
---
## Ready to Train!
Now that you've seen how to load and transform our data, you're ready to build a neural network to train on this data.
In the next notebook, you'll be tasked with creating a CNN for facial keypoint detection.
```
import nbformat as nbf
import textwrap
nb = nbf.v4.new_notebook()
#this creates a button that toggles the visibility of code cells in the html export
source_1 = textwrap.dedent("""
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
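#illustrative aside (not part of the original report script): nbformat can
#serialize an assembled notebook to disk with nbf.write; the filename
#'nbf_demo.ipynb' is an assumption, demonstrated on a throwaway one-cell notebook
_demo = nbf.v4.new_notebook()
_demo.cells.append(nbf.v4.new_markdown_cell("demo cell"))
with open('nbf_demo.ipynb', 'w') as _fp:
    nbf.write(_demo, _fp)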
#import libraries
source_1 = textwrap.dedent("""\
#import libraries
import warnings
warnings.filterwarnings("ignore",category=FutureWarning)
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
import scipy.io as spio
import scipy.signal
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from collections import namedtuple
import math
import re
import pandas as pd
import os
import glob
from os.path import expanduser
import datetime
import statistics
from plotly import __version__
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#define functions
source_1 = textwrap.dedent("""\
def color_negative_red(val):
color = 'red' if val > 110 else 'black'
return 'color: %s' % color
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
class Keypoint:
tag = ""
parent = ['']
child = ['']
point = None
def __init__(self,tag=None,parent=None,child=None,point=None):
if tag is not None:
self.tag = tag
if parent is not None:
self.parent = parent
if child is not None:
self.child = child
if point is not None:
self.point = point
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
class Skeleton:
keypoints = [Keypoint() for i in range(17)]
tag2id = {
"shoulderCenter" : 0,
"head" : 1,
"shoulderLeft" : 2,
"elbowLeft" : 3,
"handLeft" : 4,
"shoulderRight" : 5,
"elbowRight" : 6,
"handRight" : 7,
"hipCenter" : 8,
"hipLeft" : 9,
"kneeLeft" : 10,
"ankleLeft" : 11,
"footLeft" : 12,
"hipRight" : 13,
"kneeRight" : 14,
"ankleRight" : 15,
"footRight" : 16,
}
keypoints[tag2id["shoulderCenter"]] = Keypoint("shoulderCenter",[''],['head','shoulderLeft','shoulderRight','hipCenter'])
keypoints[tag2id["head"]] = Keypoint("head",['shoulderCenter'],[''])
keypoints[tag2id["shoulderLeft"]] = Keypoint("shoulderLeft",['shoulderCenter'],['elbowLeft'])
keypoints[tag2id["elbowLeft"]] = Keypoint("elbowLeft",['shoulderLeft'],['handLeft'])
keypoints[tag2id["handLeft"]] = Keypoint("handLeft",['elbowLeft'],[''])
keypoints[tag2id["shoulderRight"]] = Keypoint("shoulderRight",['shoulderCenter'],['elbowRight'])
keypoints[tag2id["elbowRight"]] = Keypoint("elbowRight",['shoulderRight'],['handRight'])
keypoints[tag2id["handRight"]] = Keypoint("handRight",['elbowRight'],[''])
keypoints[tag2id["hipCenter"]] = Keypoint("hipCenter",['shoulderCenter'],['hipLeft','hipRight'])
keypoints[tag2id["hipLeft"]] = Keypoint("hipLeft",['shoulderCenter'],['kneeLeft'])
keypoints[tag2id["kneeLeft"]] = Keypoint("kneeLeft",['hipLeft'],['ankleLeft'])
keypoints[tag2id["ankleLeft"]] = Keypoint("ankleLeft",['kneeLeft'],['footLeft'])
keypoints[tag2id["footLeft"]] = Keypoint("footLeft",['ankleLeft'],[''])
keypoints[tag2id["hipRight"]] = Keypoint("hipRight",['shoulderCenter'],['kneeRight'])
keypoints[tag2id["kneeRight"]] = Keypoint("kneeRight",['hipRight'],['ankleRight'])
keypoints[tag2id["ankleRight"]] = Keypoint("ankleRight",['kneeRight'],['footRight'])
keypoints[tag2id["footRight"]] = Keypoint("footRight",['ankleRight'],[''])
def __init__(self,keyp_map=None):
if keyp_map is not None:
for i in range(len(keyp_map)):
tag = list(keyp_map.keys())[i]
self.keypoints[self.tag2id[tag]].point = keyp_map[tag]
def getKeypoint(self,keyp_tag):
return self.keypoints[self.tag2id[keyp_tag]].point
def getChild(self,keyp_tag):
return self.keypoints[self.tag2id[keyp_tag]].child
def getParent(self,keyp_tag):
return self.keypoints[self.tag2id[keyp_tag]].parent
def getTransformation(self):
sagittal = None
coronal = None
transverse = None
T = np.eye(4,4)
if self.getKeypoint("shoulderLeft") is not None:
if self.getKeypoint("shoulderRight") is not None:
sagittal = self.getKeypoint("shoulderLeft")[0]-self.getKeypoint("shoulderRight")[0]
sagittal = sagittal/np.linalg.norm(sagittal)
if self.getKeypoint("shoulderCenter") is not None:
if self.getKeypoint("hipLeft") is not None:
if self.getKeypoint("hipRight") is not None:
transverse = self.getKeypoint("shoulderCenter")[0]-0.5*(self.getKeypoint("hipLeft")[0]+self.getKeypoint("hipRight")[0])
transverse = transverse/np.linalg.norm(transverse)
if self.getKeypoint("shoulderCenter") is not None:
pSC = self.getKeypoint("shoulderCenter")[0]
if sagittal is not None:
if transverse is not None:
coronal = np.cross(sagittal,transverse)
T[0,0]=coronal[0]
T[1,0]=coronal[1]
T[2,0]=coronal[2]
T[0,1]=sagittal[0]
T[1,1]=sagittal[1]
T[2,1]=sagittal[2]
T[0,2]=transverse[0]
T[1,2]=transverse[1]
T[2,2]=transverse[2]
T[0,3]=pSC[0]
T[1,3]=pSC[1]
T[2,3]=pSC[2]
T[3,3]=1
return T
def show(self):
for i in range(len(self.keypoints)):
k = self.keypoints[i]
print("keypoint[", k.tag, "]", "=", k.point)
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
class Exercise:
name = ""
typee = ""
metrics = []
class Tug(Exercise):
name = "tug"
typee = "test"
metrics = ["ROM_0","ROM_1","ROM_2","ROM_3","ROM_4","ROM_5","step_0"]
result = []
month_res = {
0: [],
1: [],
2: [],
3: [],
4: [],
5: [],
6: [],
7: [],
8: [],
9: [],
10: [],
11: []
}
def __init__(self,month,result):
self.result = result
self.month_res[month] = result
def getResult(self,month):
return self.month_res[month]
class Abduction(Exercise):
name = "abduction"
typee = "rehabilitation"
metrics = ["ROM_0"]
result = []
month_res = {
0: [],
1: [],
2: [],
3: [],
4: [],
5: [],
6: [],
7: [],
8: [],
9: [],
10: [],
11: []
}
def __init__(self,name,monthi,result):
self.name = name
self.result = result
self.month_res[monthi] = result
def getResult(self,month):
return self.month_res[month]
class Internal_Rotation(Exercise):
name = "internal_rotation"
typee = "rehabilitation"
metrics = ["ROM_0"]
result = []
month_res = {
0: [],
1: [],
2: [],
3: [],
4: [],
5: [],
6: [],
7: [],
8: [],
9: [],
10: [],
11: []
}
def __init__(self,name,monthi,result):
self.name = name
self.result = result
self.month_res[monthi] = result
def getResult(self,month):
return self.month_res[month]
class External_Rotation(Exercise):
name = "external_rotation"
typee = "rehabilitation"
metrics = ["ROM_0"]
result = []
month_res = {
0: [],
1: [],
2: [],
3: [],
4: [],
5: [],
6: [],
7: [],
8: [],
9: [],
10: [],
11: []
}
def __init__(self,name,monthi,result):
self.name = name
self.result = result
self.month_res[monthi] = result
def getResult(self,month):
return self.month_res[month]
class Reaching(Exercise):
name = "reaching"
typee = "rehabilitation"
metrics = ["EP_0"]
result = []
month_res = {
0: [],
1: [],
2: [],
3: [],
4: [],
5: [],
6: [],
7: [],
8: [],
9: [],
10: [],
11: []
}
def __init__(self,name,monthi,result):
self.name = name
self.result = result
self.month_res[monthi] = result
def getResult(self,month):
return self.month_res[month]
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
class Metric:
name = ''
def __init__(self,name):
self.name = name
class Rom(Metric):
name = "ROM"
tagjoint = ""
refjoint = ""
refdir = []
tagplane = ""
def __init__(self,tagjoint,refjoint,refdir,tagplane):
self.tagjoint = tagjoint
self.refjoint = refjoint
self.refdir = refdir
self.tagplane = tagplane
def compute(self,skeleton):
#joint ref and child
tj = skeleton.getKeypoint(self.tagjoint)
tagchild = skeleton.getChild(self.tagjoint)[0]
cj = skeleton.getKeypoint(tagchild)
xrj = []
yrj = []
zrj = []
if self.refjoint != "":
rj = skeleton.getKeypoint(self.refjoint)
xrj = rj[:,0]
yrj = rj[:,1]
zrj = rj[:,2]
#compute metric
x=tj[:,0]
y=tj[:,1]
z=tj[:,2]
xchild=cj[:,0]
ychild=cj[:,1]
zchild=cj[:,2]
#plane over which we want to evaluate the metric
plane = np.zeros(3)
if self.tagplane == "coronal":
plane[0] = 1.0
if self.tagplane == "sagittal":
plane[1] = 1.0
if self.tagplane == "transverse":
plane[2] = 1.0
#project v1 on the right plane
invT = np.linalg.inv(skeleton.getTransformation())
cosRom = []
for i in range(len(x)):
temp_ref = np.array([x[i],y[i],z[i],1])
temp_child = np.array([xchild[i],ychild[i],zchild[i],1])
transf_ref = np.inner(invT,temp_ref)
transf_child = np.inner(invT,temp_child)
vprocess = transf_child-transf_ref
vprocess = np.delete(vprocess,3)
dist = np.dot(vprocess,np.transpose(plane))
vprocess = vprocess-dist*plane
n1 = np.linalg.norm(vprocess)
if(n1>0):
vprocess = vprocess/n1
if len(xrj)>0:
temp_refjoint = np.array([xrj[i],yrj[i],zrj[i],1])
transf_refjoint = np.inner(invT,temp_refjoint)
vecref = transf_ref - transf_refjoint
ref = np.delete(vecref,3)
else:
n2 = np.linalg.norm(self.refdir)
if(n2>0):
self.refdir = self.refdir/n2
ref = self.refdir
dotprod = np.dot(vprocess,np.transpose(ref))
cosRom.append(dotprod)
rom_value = np.arccos(cosRom)
result = rom_value *(180/math.pi)
return result
class Step(Metric):
name = "step"
num = []
den = []
tstart = 0.0
tend = 0.0
steplen = []
nsteps = 0
cadence = 0.0
speed = 0.0
ex_time = 0.0
filtered_distfeet = []
strikes = []
def __init__(self,num,den,tstart,tend):
self.num = num
self.den = den
self.tstart = tstart
self.tend = tend
def compute(self,skeleton):
alj = skeleton.getKeypoint("ankleLeft")
arj = skeleton.getKeypoint("ankleRight")
sagittal = None
if skeleton.getKeypoint("shoulderLeft") is not None:
if skeleton.getKeypoint("shoulderRight") is not None:
sagittal = skeleton.getKeypoint("shoulderLeft")[0]-skeleton.getKeypoint("shoulderRight")[0]
sagittal = sagittal/np.linalg.norm(sagittal)
distfeet = []
for i in range(len(alj)):
v = alj[i,:]-arj[i,:]
dist = np.dot(v,np.transpose(sagittal))
v = v-dist*sagittal
distfeet.append(np.linalg.norm(v))
self.filtered_distfeet = scipy.signal.filtfilt(self.num,self.den,distfeet)
self.strikes,_ = scipy.signal.find_peaks(self.filtered_distfeet,height=0.05)
filtered_distfeet_np = np.array(self.filtered_distfeet)
slen = filtered_distfeet_np[self.strikes]
self.steplen = statistics.mean(slen)
self.nsteps = len(self.strikes)
self.cadence = self.nsteps/(self.tend-self.tstart)
self.speed = self.steplen*self.cadence
self.ex_time = self.tend-self.tstart
class EndPoint(Metric):
name = "EP"
tagjoint = ""
refdir = []
tagplane = ""
target = []
trajectories = []
tstart = 0.0
tend = 0.0
speed = 0.0
ex_time = 0.0
def __init__(self,tagjoint,refdir,tagplane,target,tstart,tend):
self.tagjoint = tagjoint
self.refdir = refdir
self.tagplane = tagplane
self.target = target
self.tstart = tstart
self.tend = tend
def compute(self,skeleton):
self.ex_time = self.tend-self.tstart
tj = skeleton.getKeypoint(self.tagjoint)
x = tj[:,0]
y = tj[:,1]
z = tj[:,2]
plane = np.zeros(3)
if self.tagplane == "coronal":
plane[0] = 1.0
if self.tagplane == "sagittal":
plane[1] = 1.0
if self.tagplane == "transverse":
plane[2] = 1.0
invT = np.linalg.inv(skeleton.getTransformation())
self.trajectories = np.zeros([len(x),3])
for i in range(len(x)):
temp_jnt = np.array([x[i],y[i],z[i],1])
transf_jnt = np.inner(invT,temp_jnt)
v = np.delete(transf_jnt,3)
dist = np.dot(v,np.transpose(plane))
v = v-dist*plane
self.trajectories[i,0]=v[0]
self.trajectories[i,1]=v[1]
self.trajectories[i,2]=v[2]
vel = np.zeros([len(x),3])
vel[:,0] = np.gradient(self.trajectories[:,0])/self.ex_time
vel[:,1] = np.gradient(self.trajectories[:,1])/self.ex_time
vel[:,2] = np.gradient(self.trajectories[:,2])/self.ex_time
self.speed = 0.0
for i in range(len(x)):
self.speed = self.speed + np.linalg.norm([vel[i,0],vel[i,1],vel[i,2]])
self.speed = self.speed/len(x)
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
def loadmat(filename):
'''
this function should be called instead of direct spio.loadmat
as it cures the problem of not properly recovering python dictionaries
from mat files. It calls the function check keys to cure all entries
which are still mat-objects
'''
data = spio.loadmat(filename, struct_as_record=False, squeeze_me=True)
return _check_keys(data)
def _check_keys(dict):
'''
checks if entries in dictionary are mat-objects. If yes
todict is called to change them to nested dictionaries
'''
for key in dict:
if isinstance(dict[key], spio.matlab.mio5_params.mat_struct):
dict[key] = _todict(dict[key])
return dict
def _todict(matobj):
'''
A recursive function which constructs from matobjects nested dictionaries
'''
dict = {}
for strg in matobj._fieldnames:
elem = matobj.__dict__[strg]
if isinstance(elem, spio.matlab.mio5_params.mat_struct):
dict[strg] = _todict(elem)
else:
dict[strg] = elem
return dict
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#print personal data
text = """
# Personal data """
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""
#load file
home = expanduser("~")
pth = home + '/.local/share/yarp/contexts/motionAnalyzer'
files = glob.glob(os.path.join(pth, '*.mat'))
lastfile = max(files, key=os.path.getctime)
print(lastfile)
#print personal data
i = [pos for pos, char in enumerate(lastfile) if char == "-"]
i1 = i[-3]
i2 = i[-2]
name = lastfile[i1+1:i2]
surname = ""
age = ""
personaldata = []
personaldata.append(name)
personaldata.append(surname)
personaldata.append(age)
table = pd.DataFrame(personaldata)
table.rename(index={0:"Name",1:"Surname",2:"Age"}, columns={0:"Patient"}, inplace=True)
display(table) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#we load all the files with the same user name
source_1 = textwrap.dedent("""
data = []
ctime = []
filename = []
tagex = []
files.sort(key=os.path.getctime)
for fi in files:
i = [pos for pos, char in enumerate(fi) if char == "-"]
i1 = i[-3]
i2 = i[-2]
i3 = i[-1]
namei = fi[i1+1:i2]
if namei == name:
filename.append(fi)
data.append(loadmat(fi)) #data.append(scipy.io.loadmat(fi))
tagex.append(fi[i2+1:i3])
ctime.append(os.path.getctime(fi)) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#define keypoints
source_1 = textwrap.dedent("""\
time = []
month = []
exercises = []
ex_names = []
#count how many exercises of the same type were performed in each month
countexmonth = {
"tug" : [0,0,0,0,0,0,0,0,0,0,0,0],
"abduction_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"internal_rotation_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"external_rotation_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"reaching_left" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
for i in range(len(data)):
datai = data[i]
time.append(datai['Time_samples'])
monthi = datetime.date.fromtimestamp(ctime[i]).month-1
month.append(monthi)
shoulderCenter = datai['Keypoints']['shoulderCenter']
head = datai['Keypoints']['head']
shoulderLeft = datai['Keypoints']['shoulderLeft']
shoulderRight = datai['Keypoints']['shoulderRight']
elbowLeft = datai['Keypoints']['elbowLeft']
handLeft = datai['Keypoints']['handLeft']
elbowRight = datai['Keypoints']['elbowRight']
handRight = datai['Keypoints']['handRight']
hipLeft = datai['Keypoints']['hipLeft']
hipRight = datai['Keypoints']['hipRight']
ankleLeft = datai['Keypoints']['ankleLeft']
ankleRight = datai['Keypoints']['ankleRight']
kneeLeft = datai['Keypoints']['kneeLeft']
kneeRight = datai['Keypoints']['kneeRight']
footLeft = datai['Keypoints']['footLeft']
footRight = datai['Keypoints']['footRight']
hipCenter = datai['Keypoints']['hipCenter']
key_pam = {
"shoulderCenter" : shoulderCenter,
"head" : head,
"shoulderLeft" : shoulderLeft,
"shoulderRight" : shoulderRight,
"elbowLeft" : elbowLeft,
"handLeft" : handLeft,
"elbowRight" : elbowRight,
"handRight" : handRight,
"hipLeft" : hipLeft,
"hipRight" : hipRight,
"ankleLeft" : ankleLeft,
"ankleRight" : ankleRight,
"kneeLeft" : kneeLeft,
"kneeRight" : kneeRight,
"footLeft" : footLeft,
"footRight" : footRight,
"hipCenter" : hipCenter
}
s=Skeleton(key_pam)
#s.show()
exname = datai["Exercise"]["name"]
exname = re.sub(r'[^\w]','',exname)
ex_names.append(exname)
result_singleexercise = []
allmet = datai["Exercise"]["metrics"]
metrics = list(allmet.keys())
for j in range(len(metrics)):
metname = metrics[j]
if "ROM" in metname:
tagjoint = allmet[metname]["tag_joint"]
tagjoint = re.sub(r'[^\w]', '',tagjoint)
refjoint = allmet[metname]["ref_joint"]
if type(refjoint) is np.ndarray:
refjoint = ""
else:
refjoint = re.sub(r'[^\w]', '',refjoint)
refdir = allmet[metname]["ref_dir"]
tagplane = allmet[metname]["tag_plane"]
tagplane = re.sub(r'[^\w]', '',tagplane)
rom = Rom(tagjoint,refjoint,refdir,tagplane)
result_singleexercise.append((rom,rom.compute(s)))
if "step" in metname:
num = allmet[metname]["num"]
den = allmet[metname]["den"]
tstart = allmet[metname]["tstart"]
tend = allmet[metname]["tend"]
step = Step(num,den,tstart,tend)
step.compute(s)
stepmet = [step.steplen,step.nsteps,step.cadence,step.speed,step.ex_time,
step.filtered_distfeet,step.strikes]
result_singleexercise.append((step,stepmet))
if "EP" in metname:
tagjoint = allmet[metname]["tag_joint"]
tagjoint = re.sub(r'[^\w]', '',tagjoint)
refdir = allmet[metname]["ref_dir"]
tagplane = allmet[metname]["tag_plane"]
tagplane = re.sub(r'[^\w]', '',tagplane)
target = allmet[metname]["target"]
tstart = allmet[metname]["tstart"]
tend = allmet[metname]["tend"]
ep = EndPoint(tagjoint,refdir,tagplane,target,tstart,tend)
ep.compute(s)
result_singleexercise.append((ep,ep.trajectories))
if exname == "tug":
ex = Tug(monthi,result_singleexercise)
if "abduction" in exname:
ex = Abduction(exname,monthi,result_singleexercise)
if "internal_rotation" in exname:
ex = Internal_Rotation(exname,monthi,result_singleexercise)
if "external_rotation" in exname:
ex = External_Rotation(exname,monthi,result_singleexercise)
if "reaching" in exname:
ex = Reaching(exname,monthi,result_singleexercise)
countexmonth[exname][monthi] = 1 + countexmonth[exname][monthi]
exercises.append(ex)
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#print information about the performed movement and the evaluated metric
text = """
# Daily session report
The patient did the following exercise: """
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""
print exname.encode('ascii') """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
text = """
on: """
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""
now = datetime.datetime.now()
print now """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#plot the metrics for each session
text = """
The following shows the metric trend during the exercise:"""
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""\
lastsess_time = time[-1]
lastsess_result = exercises[-1].result
lastsess_res_step = []
%matplotlib inline
for i in range(len(lastsess_result)):
lastsess_met,lastsess_resi = lastsess_result[i]
lastsess_metname = lastsess_met.name
################
# ROM #
################
if lastsess_metname == "ROM":
lastsess_metjoint = lastsess_met.tagjoint
trace1 = go.Scatter(
x=lastsess_time,y=lastsess_resi,
mode='lines',
line=dict(
color='blue',
width=3
),
name='Real trend'
)
data = [trace1]
layout = dict(
width=750,
height=600,
autosize=False,
title='Range of Motion '+lastsess_metjoint,
font=dict(family='Courier New, monospace', size=22, color='black'),
xaxis=dict(
title='time [s]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
),
yaxis=dict(
title='ROM [degrees]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
################
# STEP #
################
if lastsess_metname == "step":
dist=lastsess_resi[5]
strikes=lastsess_resi[6]
trace1 = go.Scatter(
x=lastsess_time,y=dist,
mode='lines',
line=dict(
color='blue',
width=3
),
name='Feet distance'
)
trace2 = go.Scatter(
x=lastsess_time[strikes],y=dist[strikes],
mode='markers',
marker=dict(
color='red',
size=10
),
name='Heel strikes'
)
data = [trace1,trace2]
layout = dict(
width=750,
height=600,
autosize=False,
title='Feet distance',
font=dict(family='Courier New, monospace', size=22, color='black'),
xaxis=dict(
title='time [s]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
),
yaxis=dict(
title='dist [m]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
)
)
fig = dict(data=data, layout=layout)
iplot(fig)
lastsess_res_step.append(lastsess_resi)
tablestep = pd.DataFrame(lastsess_res_step[0][0:5])
tablestep.rename(index={0:"Step length [m]",1:"Number of steps",2:"Cadence [steps/s]",
3:"Speed [m/s]",4:"Execution time [s]"},
columns={0:"Gait analysis"}, inplace=True)
display(tablestep)
################
# EP #
################
if lastsess_metname == "EP":
target = lastsess_met.target
trace1 = go.Scatter3d(
x=lastsess_resi[:,0], y=lastsess_resi[:,1], z=lastsess_resi[:,2],
mode = 'lines',
line=dict(
color='blue',
width=3
),
name = 'Trajectories'
)
trace2 = go.Scatter3d(
x=[target[0]], y=[target[1]], z=[target[2]],
mode = 'markers',
marker=dict(
color='red',
size=5
),
name = 'Target to reach'
)
data = [trace1, trace2]
layout = dict(
margin=dict(
l=0,
r=0,
b=0
),
title='End-point trajectories',
font=dict(family='Courier New, monospace', size=22, color='black'),
scene=dict(
xaxis=dict(
title='x [cm]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
),
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
yaxis=dict(
title='y [cm]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
),
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
zaxis=dict(
title='z [cm]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
),
gridcolor='rgb(255, 255, 255)',
zerolinecolor='rgb(255, 255, 255)',
showbackground=True,
backgroundcolor='rgb(230, 230,230)'
),
camera=dict(
up=dict(
x=0,
y=0,
z=1
),
eye=dict(
x=-1.7428,
y=1.0707,
z=0.7100,
)
),
aspectratio = dict( x=1, y=1, z=0.7 ),
aspectmode = 'manual'
),
)
fig = dict(data=data, layout=layout)
iplot(fig)
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
if exname == 'tug':
table_tug = pd.DataFrame([['Normal mobility'],['Good mobility'],['Gait aids'],['Falling risk']],
index=['< 10 s','< 20 s','< 30 s','>= 30 s'],
columns=['TUG Table'])
display(table_tug)
time_score = lastsess_res_step[0][4]
print "The test has been done in",round(time_score,2),"s"
if time_score < 10:
evaluation = 'Normal mobility'
print "The evaluation is \033[1;30;42m",evaluation
elif time_score < 20:
evaluation = 'Good mobility, no need for gait aids'
print "The evaluation is \033[1;30;42m",evaluation
elif time_score < 30:
evaluation = 'Need for gait aids'
print "The evaluation is \033[1;30;43m",evaluation
elif time_score >= 30:
evaluation = 'Falling risk'
print "The evaluation is \033[1;30;41m",evaluation
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
#report about patient's improvements
text = """
# Patient progress"""
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
text = """
The exercises performed by the patient during the months under analysis are grouped as following:"""
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""\
labels = ["reaching_left","abduction_left","internal-rotation_left","external-rotation_left","timed-up-and-go"]
values = [tagex.count("reaching_left"), tagex.count("abduction_left"), tagex.count("internal_rotation_left"),
tagex.count("external_rotation_left"), tagex.count("tug")]
colors = ['#FEBFB3', '#E1396C', '#96D38C', '#D0F9B1']
trace = go.Pie(labels=labels, values=values,
#hoverinfo='label+percent', textinfo='value',
textfont=dict(size=20),
marker=dict(colors=colors,
line=dict(color='#000000', width=2)),
hoverinfo="label+percent+value",
hole=0.3
)
layout = go.Layout(
title="Performed exercises",
)
data = [trace]
fig = go.Figure(data=data,layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
text = """
The trends of the patient metrics, grouped by month, are shown here: """
markdown_cell = nbf.v4.new_markdown_cell(text)
nb.cells.append(markdown_cell)
source_1 = textwrap.dedent("""\
keyp2rommax = {
"shoulderCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"head" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footRight" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
keyp2rommin = {
"shoulderCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"head" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footRight" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
countrommonth = {
"shoulderCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"head" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footRight" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
keyp2rommax_avg = {
"shoulderCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"head" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footRight" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
keyp2rommin_avg = {
"shoulderCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"head" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"shoulderRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"elbowRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"handRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipCenter" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footLeft" : [0,0,0,0,0,0,0,0,0,0,0,0],
"hipRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"kneeRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"ankleRight" : [0,0,0,0,0,0,0,0,0,0,0,0],
"footRight" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
stepmonth = {
"steplen" : [0,0,0,0,0,0,0,0,0,0,0,0],
"numsteps" : [0,0,0,0,0,0,0,0,0,0,0,0],
"cadence" : [0,0,0,0,0,0,0,0,0,0,0,0],
"speed" : [0,0,0,0,0,0,0,0,0,0,0,0],
"time" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
countstepmonth = [0,0,0,0,0,0,0,0,0,0,0,0]
endpointmonth = {
"time" : [0,0,0,0,0,0,0,0,0,0,0,0],
"speed" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
countendpointmonth = [0,0,0,0,0,0,0,0,0,0,0,0]
for i in range(len(exercises)):
exnamei = exercises[i].name
for monthi in range(12):
if countexmonth[exnamei][monthi] != 0:
res_exi = exercises[i].getResult(monthi)
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
single_metric_name = single_metric.name
if single_metric_name == "ROM":
maxromj = max(result_single_metric)
minromj = min(result_single_metric)
tagjoint = single_metric.tagjoint
keyp2rommax[tagjoint][monthi] = maxromj + keyp2rommax[tagjoint][monthi]
keyp2rommin[tagjoint][monthi] = minromj + keyp2rommin[tagjoint][monthi]
countrommonth[tagjoint][monthi] = 1 + countrommonth[tagjoint][monthi]
if single_metric_name == "step":
stepmonth["steplen"][monthi] = single_metric.steplen + stepmonth["steplen"][monthi]
stepmonth["numsteps"][monthi] = single_metric.nsteps + stepmonth["numsteps"][monthi]
stepmonth["cadence"][monthi] = single_metric.cadence + stepmonth["cadence"][monthi]
stepmonth["speed"][monthi] = single_metric.speed + stepmonth["speed"][monthi]
stepmonth["time"][monthi] = single_metric.ex_time + stepmonth["time"][monthi]
countstepmonth[monthi] = 1 + countstepmonth[monthi]
if single_metric_name == "EP":
endpointmonth["time"][monthi] = single_metric.ex_time + endpointmonth["time"][monthi]
endpointmonth["speed"][monthi] = single_metric.speed + endpointmonth["speed"][monthi]
countendpointmonth[monthi] = 1 + countendpointmonth[monthi]
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
counted_exmonth = {
"tug" : [0,0,0,0,0,0,0,0,0,0,0,0],
"abduction_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"internal_rotation_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"external_rotation_left" : [0,0,0,0,0,0,0,0,0,0,0,0],
"reaching_left" : [0,0,0,0,0,0,0,0,0,0,0,0]
}
for i in range(len(exercises)):
exnamei = exercises[i].name
for monthi in range(12):
if countexmonth[exnamei][monthi] != 0:
if counted_exmonth[exnamei][monthi] < 1:
res_exi = exercises[i].getResult(monthi)
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
single_metric_name = single_metric.name
if single_metric_name == "ROM":
tagjoint = single_metric.tagjoint
if countrommonth[tagjoint][monthi] != 0:
keyp2rommax_avg[tagjoint][monthi] = keyp2rommax[tagjoint][monthi]/countrommonth[tagjoint][monthi]
keyp2rommin_avg[tagjoint][monthi] = keyp2rommin[tagjoint][monthi]/countrommonth[tagjoint][monthi]
if single_metric_name == "step":
if countstepmonth[monthi] != 0:
stepmonth["steplen"][monthi] = stepmonth["steplen"][monthi]/countstepmonth[monthi]
stepmonth["numsteps"][monthi] = stepmonth["numsteps"][monthi]/countstepmonth[monthi]
stepmonth["cadence"][monthi] = stepmonth["cadence"][monthi]/countstepmonth[monthi]
stepmonth["speed"][monthi] = stepmonth["speed"][monthi]/countstepmonth[monthi]
stepmonth["time"][monthi] = stepmonth["time"][monthi]/countstepmonth[monthi]
if single_metric_name == "EP":
if countendpointmonth[monthi] != 0:
endpointmonth["time"][monthi] = endpointmonth["time"][monthi]/countendpointmonth[monthi]
endpointmonth["speed"][monthi] = endpointmonth["speed"][monthi]/countendpointmonth[monthi]
counted_exmonth[exnamei][monthi] = 1
""")
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
title = "Results for Timed Up and Go"
msg1 = HTML('<font size=5><b>'+title+'</b></font>')
msg2 = HTML('<font size=5 color=gray>'+title+'</font>')
if sum(counted_exmonth['tug'])>0:
display(msg1);
else:
display(msg2); """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
allmonths = [1,2,3,4,5,6,7,8,9,10,11,12]
counted_ex = {
"tug" : 0,
"abduction_left" : 0,
"internal_rotation_left" : 0,
"external_rotation_left" : 0,
"reaching_left" : 0
}
for i in range(len(exercises)):
exnamei = exercises[i].name
res_exi = exercises[i].result
if counted_ex[exnamei] < 1:
#############################
# Results for TUG #
#############################
if exnamei == "tug":
counted_ex[exnamei] = 1
step_month_table = pd.DataFrame.from_dict(stepmonth,orient='index',
columns=['Jan','Feb','Mar','Apr','May','Jun',
'Jul','Aug','Sep','Oct','Nov','Dec'])
step_month_table.rename(index={"numsteps":"Number steps","speed":"Speed [m/s]",
"steplen":"Step length [m]","cadence":"Cadence [steps/s]",
"time":"Execution time [s]"}, inplace=True)
display(step_month_table)
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
if single_metric.name == "ROM":
tagjoint = single_metric.tagjoint
if np.sum(keyp2rommax_avg[tagjoint]) > 0.0:
trace1 = go.Bar(
x=allmonths,
y=keyp2rommax_avg[tagjoint],
name='Maximum value reached',
marker=dict(
color='rgb(0,0,255)'
)
)
trace2 = go.Bar(
x=allmonths,
y=keyp2rommin_avg[tagjoint],
name='Minimum value reached',
marker=dict(
color='rgb(255,0,0)'
)
)
layout = go.Layout(
title='Range of Motion Parameters',
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='ROM ' + tagjoint + ' [degrees]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1,trace2]
fig = go.Figure(data=data, layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
title = "Results for Abduction"
msg1 = HTML('<font size=5><b>'+title+'</b></font>')
msg2 = HTML('<font size=5 color=gray>'+title+'</font>')
if sum(counted_exmonth['abduction_left'])>0:
display(msg1);
else:
display(msg2); """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
for i in range(len(exercises)):
exnamei = exercises[i].name
res_exi = exercises[i].result
if counted_ex[exnamei] < 1:
###################################
# Results for Abduction #
###################################
if "abduction" in exnamei:
counted_ex[exnamei] = 1
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
if single_metric.name == "ROM":
tagjoint = single_metric.tagjoint
if np.sum(keyp2rommax_avg[tagjoint]) > 0.0:
trace1 = go.Bar(
x=allmonths,
y=keyp2rommax_avg[tagjoint],
name='Maximum value reached',
marker=dict(
color='rgb(0,0,255)'
)
)
trace2 = go.Bar(
x=allmonths,
y=keyp2rommin_avg[tagjoint],
name='Minimum value reached',
marker=dict(
color='rgb(255,0,0)'
)
)
layout = go.Layout(
title='Range of Motion Parameters',
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='ROM ' + tagjoint + ' [degrees]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1,trace2]
fig = go.Figure(data=data, layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
title = "Results for Internal Rotation"
msg1 = HTML('<font size=5><b>'+title+'</b></font>')
msg2 = HTML('<font size=5 color=gray>'+title+'</font>')
if sum(counted_exmonth['internal_rotation_left'])>0:
display(msg1);
else:
display(msg2); """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
for i in range(len(exercises)):
exnamei = exercises[i].name
res_exi = exercises[i].result
if counted_ex[exnamei] < 1:
###################################
# Results for Internal #
###################################
if "internal_rotation" in exnamei:
counted_ex[exnamei] = 1
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
if single_metric.name == "ROM":
tagjoint = single_metric.tagjoint
if np.sum(keyp2rommax_avg[tagjoint]) > 0.0:
trace1 = go.Bar(
x=allmonths,
y=keyp2rommax_avg[tagjoint],
name='Maximum value reached',
marker=dict(
color='rgb(0,0,255)'
)
)
trace2 = go.Bar(
x=allmonths,
y=keyp2rommin_avg[tagjoint],
name='Minimum value reached',
marker=dict(
color='rgb(255,0,0)'
)
)
layout = go.Layout(
title='Range of Motion Parameters',
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='ROM ' + tagjoint + ' [degrees]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1,trace2]
fig = go.Figure(data=data, layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
title = "Results for External Rotation"
msg1 = HTML('<font size=5><b>'+title+'</b></font>')
msg2 = HTML('<font size=5 color=gray>'+title+'</font>')
if sum(counted_exmonth['external_rotation_left'])>0:
display(msg1);
else:
display(msg2); """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
for i in range(len(exercises)):
exnamei = exercises[i].name
res_exi = exercises[i].result
if counted_ex[exnamei] < 1:
###################################
# Results for External #
###################################
if "external_rotation" in exnamei:
counted_ex[exnamei] = 1
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
if single_metric.name == "ROM":
tagjoint = single_metric.tagjoint
if np.sum(keyp2rommax_avg[tagjoint]) > 0.0:
trace1 = go.Bar(
x=allmonths,
y=keyp2rommax_avg[tagjoint],
name='Maximum value reached',
marker=dict(
color='rgb(0,0,255)'
)
)
trace2 = go.Bar(
x=allmonths,
y=keyp2rommin_avg[tagjoint],
name='Minimum value reached',
marker=dict(
color='rgb(255,0,0)'
)
)
layout = go.Layout(
title='Range of Motion Parameters',
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='ROM ' + tagjoint + ' [degrees]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1,trace2]
fig = go.Figure(data=data, layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
title = "Results for Reaching"
msg1 = HTML('<font size=5><b>'+title+'</b></font>')
msg2 = HTML('<font size=5 color=gray>'+title+'</font>')
if sum(counted_exmonth['reaching_left'])>0:
display(msg1);
else:
display(msg2); """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
source_1 = textwrap.dedent("""\
for i in range(len(exercises)):
exnamei = exercises[i].name
res_exi = exercises[i].result
if counted_ex[exnamei] < 1:
###################################
# Results for Reaching #
###################################
if "reaching" in exnamei:
counted_ex[exnamei] = 1
for m in range(len(res_exi)):
single_metric,result_single_metric = res_exi[m]
if single_metric.name == "EP":
trace1 = go.Bar(
x=allmonths,
y=endpointmonth["time"],
name='Time',
marker=dict(
color='rgb(0,0,255)'
)
)
layout = go.Layout(
title='Reaching parameters',
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='Execution time [s]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1]
fig = go.Figure(data=data, layout=layout)
iplot(fig)
trace1 = go.Bar(
x=allmonths,
y=endpointmonth["speed"],
name='Speed',
marker=dict(
color='rgb(0,0,255)'
)
)
layout = go.Layout(
font=dict(family='Courier New, monospace', size=18, color='black'),
xaxis=dict(
title='Month',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
yaxis=dict(
title='Speed [m/s]',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
),
tickfont=dict(
family='Courier New, monospace',
size=14,
color='#7f7f7f'
)
),
legend=dict(
bgcolor='rgba(255, 255, 255, 0)',
bordercolor='rgba(255, 255, 255, 0)'
),
barmode='group',
bargap=0.1,
bargroupgap=0.0
)
data=[trace1]
fig = go.Figure(data=data, layout=layout)
iplot(fig) """)
code_cell = nbf.v4.new_code_cell(source=source_1)
nb.cells.append(code_cell)
nbf.write(nb, 'report-eng.ipynb')
```
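The generator above assembles the report cell by cell with `nbf.v4.new_markdown_cell` / `nbf.v4.new_code_cell` and serializes it with `nbf.write`. On disk, a notebook is plain JSON; the sketch below mirrors that structure using only the standard library (the cell contents are illustrative, not taken from the report):

```
import json

# A minimal notebook document, mirroring what nbformat v4 writes to disk.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": "# Daily session report"},
        {"cell_type": "code", "metadata": {}, "outputs": [],
         "execution_count": None,
         "source": "print('hello')"},
    ],
}

# Serialize and read back, as nbf.write / nbf.read would.
serialized = json.dumps(notebook, indent=1)
roundtrip = json.loads(serialized)
```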
# Essential: Static file management with SourceLoader
Data pipelines usually interact with external systems such as SQL databases. Using relative paths to find such files is error-prone, because the resolved path depends on which file loads them; absolute paths, on the other hand, are too restrictive, since a path that works in your current environment will break in any other. Combining `Env` with `SourceLoader` provides a clean approach to managing static files.
```
from pathlib import Path
import pandas as pd
from sklearn import datasets
from IPython.display import display, Markdown
from ploomber import DAG, SourceLoader, with_env
from ploomber.tasks import PythonCallable, NotebookRunner, SQLUpload, SQLScript
from ploomber.products import File, SQLiteRelation
from ploomber.clients import SQLAlchemyClient
from ploomber.executors import Serial
# initialize a temporary directory
import tempfile
import os
tmp_dir = Path(tempfile.mkdtemp())
tmp_dir_static = tmp_dir / 'static'
tmp_dir_static.mkdir()
os.chdir(str(tmp_dir))
report_py = """
# static/report.py
# +
# This file is in jupytext light format
import seaborn as sns
import pandas as pd
# -
# + tags=['parameters']
# papermill will add the parameters below this cell
upstream = None
product = None
# -
# +
path = upstream['raw']
df = pd.read_parquet(path)
# -
# ## AGE distribution
# +
_ = sns.distplot(df.AGE)
# -
# ## Price distribution
# +
_ = sns.distplot(df.price)
# -
"""
clean_table_sql = """
-- static/clean_table.sql
DROP TABLE IF EXISTS {{product}};
CREATE TABLE {{product}}
AS SELECT * FROM {{upstream["raw_table"]}}
WHERE AGE < 100
"""
env_yaml = """
_module: '{{here}}'
path:
data: '{{here}}/data/'
static: '{{here}}/static/'
"""
(tmp_dir_static / 'report.py').write_text(report_py)
(tmp_dir_static / 'clean_table.sql').write_text(clean_table_sql)
(tmp_dir / 'env.yaml').write_text(env_yaml)
def display_file(file, syntax):
s = """
```{}
{}
```
""".format(syntax, file)
return display(Markdown(s))
```
Our working environment has an `env.yaml` file with a `static/` folder holding a SQL and a Python script.
```
! tree $tmp_dir
```
### Content of `env.yaml`
```
display_file(env_yaml, 'yaml')
```
### Content of `static/report.py`
```
display_file(report_py, 'python')
```
### Content of `static/clean_table.sql`
```
display_file(clean_table_sql, 'sql')
```
### Pipeline declaration
```
def _get_data(product):
data = datasets.load_boston()
df = pd.DataFrame(data.data)
df.columns = data.feature_names
df['price'] = data.target
df.to_parquet(str(product))
@with_env
def make(env):
# NOTE: passing the executor parameter is only required for testing purposes, can be removed
dag = DAG(executor=Serial(build_in_subprocess=False))
client = SQLAlchemyClient('sqlite:///my_db.db')
dag.clients[SQLUpload] = client
dag.clients[SQLiteRelation] = client
dag.clients[SQLScript] = client
# initialize SourceLoader in our static directory
loader = SourceLoader(path=env.path.static)
get_data = PythonCallable(_get_data,
product=File(tmp_dir / 'raw.parquet'),
dag=dag,
name='raw')
# if we do not pass a name, the filename will be used as default
report = NotebookRunner(loader['report.py'],
product=File(tmp_dir / 'report.html'),
dag=dag,
kernelspec_name='python3')
raw_table = SQLUpload(source='{{upstream["raw"]}}',
product=SQLiteRelation(('raw', 'table')),
dag=dag,
name='raw_table')
# same here, no need to pass a name
clean_table = SQLScript(loader['clean_table.sql'],
product=SQLiteRelation(('clean', 'table')),
dag=dag)
get_data >> report
get_data >> raw_table >> clean_table
return dag
dag = make()
```
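The `>>` calls above only record edges between tasks; when the DAG builds, tasks run in dependency order. Below is a minimal stdlib-only sketch of that ordering (Kahn's algorithm), not Ploomber's actual scheduler:

```
from collections import defaultdict, deque

def topological_order(edges):
    """Return tasks in an order where every dependency precedes its dependents."""
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for upstream, downstream in edges:
        graph[upstream].append(downstream)
        indegree[downstream] += 1
        nodes.update((upstream, downstream))
    # Start from tasks with no dependencies
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(nodes):
        raise ValueError('cycle detected')
    return order

# Edges mirroring the pipeline above: raw >> report, raw >> raw_table >> clean_table
edges = [('raw', 'report'), ('raw', 'raw_table'), ('raw_table', 'clean_table')]
```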
### Pipeline status
```
# Using SourceLoader automatically adds a 'Location' column which points to the source code location
dag.status()
dag.build()
```
## Advanced jinja2 features
`SourceLoader` initializes a proper jinja2 environment, so you can use features such as [macros](https://jinja.palletsprojects.com/en/2.11.x/templates/#macros), which are very useful for maximizing SQL code reusability.
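For instance, a macro can factor out a query fragment shared by several SQL scripts. Below is a minimal sketch using a plain `jinja2.Environment`; templates loaded through `SourceLoader` accept the same syntax (the table and column names are illustrative):

```
from jinja2 import Environment

# A macro that factors out a filter reused by several queries
template_source = """
{%- macro recent(table, days) -%}
SELECT * FROM {{ table }} WHERE created_at > NOW() - INTERVAL '{{ days }} days'
{%- endmacro -%}
{{ recent('events', 7) }}
"""

env = Environment()
rendered = env.from_string(template_source).render().strip()
```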
```
import shutil
shutil.rmtree(str(tmp_dir))
```
## Simulation Procedures
## The simulation process
We simulate paired scDNA and scRNA data following the procedure illustrated in the supplement (Figure S1). The simulation principle is to generate scRNA and scDNA data coherently from the same ground-truth genetic copy number profile and clonality, while also allowing sequencing-platform-specific noise to be added.
```
import pandas as pd
import sys
import os
# The Simulation.py module is stored in '~/CCNMF/SimulationCode/'
sys.path.append(os.path.expanduser('~/CCNMF/SimulationCode/'))
import Simulation as st
```
Specifically, we estimated the transition probability matrix as follows: we downloaded the [TCGA](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) genetic copy number (GCN) data from [cBioPortal](https://www.cbioportal.org/) for 171 triple-negative breast cancer (basal) samples with paired bulk RNA-seq and DNA-seq data. In the `ProbMatrix` below, both the rows and the columns correspond to copy numbers 1 through 5.
```
ProbMatrix = [[0.42, 0.5, 0.08, 0, 0],
[0.02, 0.52, 0.46, 0, 0],
[0, 0, 0.5, 0.5, 0],
[0, 0, 0.01, 0.4, 0.59],
[0, 0, 0, 0.01, 0.99]]
```
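Each row of `ProbMatrix` is the distribution of the next copy number given the current one, so every row must sum to 1. The sketch below shows how such a matrix can be sanity-checked and sampled from; the function name is illustrative and not part of `Simulation.py`:

```
import random

# Transition matrix from the text: rows and columns are copy numbers 1..5
prob_matrix = [[0.42, 0.5, 0.08, 0, 0],
               [0.02, 0.52, 0.46, 0, 0],
               [0, 0, 0.5, 0.5, 0],
               [0, 0, 0.01, 0.4, 0.59],
               [0, 0, 0, 0.01, 0.99]]

# Sanity check: every row must be a valid probability distribution
for row in prob_matrix:
    assert abs(sum(row) - 1.0) < 1e-9

def sample_next_copy_number(current_cn, rng=random.random):
    """Draw the next copy number (1-5) given the current one."""
    r = rng()
    cumulative = 0.0
    for next_cn, p in enumerate(prob_matrix[current_cn - 1], start=1):
        cumulative += p
        if r < cumulative:
            return next_cn
    return len(prob_matrix)  # guard against rounding when r is close to 1
```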
The various configurations for the simulated data are set below; the details of each parameter are given in the annotations.
```
Paramaters = {'Ncluster' : [3], # The number of clusters, 2 or 3
'Topology' : ['linear'], # The clonal structure of simulated data: 'linear' or 'bifurcate'
'C1Percent' : [0.5, 0.5], # The each cluster percentage if the data has 2 clusters
'C2Percent':[0.2, 0.4, 0.4], # The each cluster percentage if the data has 3 clusters
'Percentage' : [0.1, 0.2, 0.3, 0.4, 0.5], # The simulated copy number fraction in each cluster on various cases
'Outlier': [0.5], # The simulated outlier percentages in each cluster on various cases
'Dropout': [0.5]} # The simulated dropout percentages in each cluster on various cases
```
Simulate the genetic copy number profile for the specified clonal structure; `nGenes` is the number of genes and `nCells` is the number of cells.
```
Configure = st.GeneticCN(Paramaters, nGenes = 200, nCells = 100)
```
We simulate the scDNA data based on their associated clonal copy number profiles and transition probability matrix.
```
DNAmatrix = st.Simulate_DNA(ProbMatrix, Configure)
```
Simulate the scRNA data based on their associated clonal copy number profiles.
```
RNAmatrix = st.Simulate_RNA(Configure)
```
The procedure above simulates various copy number fractions per cluster for a linear structure with 3 clusters, keeping the outlier percentage and the dropout percentage at their default value of 0.5.
To simulate another configuration, such as a bifurcate structure with 3 clusters across various dropout percentages, keep 'Percentage' and 'Outlier' at their defaults and vary 'Dropout'. For example:
Paramaters = {'Ncluster' : [3], 'Topology' : ['bifurcate'], 'C1Percent' : [0.5, 0.5], 'C2Percent' : [0.2, 0.4, 0.4], 'Percentage' : [0.5], 'Outlier' : [0.5], 'Dropout' : [0.1, 0.2, 0.3, 0.4, 0.5]}
Finally, save each paired dataset as a '.csv' file.
```
DNA1 = DNAmatrix[1]
RNA1 = RNAmatrix[1]
DNA1 = pd.DataFrame(DNA1)
RNA1 = pd.DataFrame(RNA1)
DNA1.to_csv('DNA1.csv', index = 0)
RNA1.to_csv('RNA1.csv', index = 0)
```
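If several configuration cases were simulated, the same save pattern extends to a loop. A sketch under the assumption that `DNAmatrix` and `RNAmatrix` are indexable collections of matrices, as returned by the calls above (the toy matrices here are stand-ins, not real simulator output):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the simulated outputs; in the real workflow these
# come from st.Simulate_DNA(...) and st.Simulate_RNA(...).
DNAmatrix = {1: np.zeros((4, 3)), 2: np.ones((4, 3))}
RNAmatrix = {1: np.zeros((4, 3)), 2: np.ones((4, 3))}

# Save each DNA/RNA pair under a numbered filename.
for key in DNAmatrix:
    pd.DataFrame(DNAmatrix[key]).to_csv('DNA%d.csv' % key, index=False)
    pd.DataFrame(RNAmatrix[key]).to_csv('RNA%d.csv' % key, index=False)
```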
## Compile a training set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# Create an astroNN compiler instance
compiler_aspcap_train = H5Compiler()
compiler_aspcap_train.teff_low = 4000        # Effective temperature lower bound
compiler_aspcap_train.teff_high = 5500       # Effective temperature upper bound
compiler_aspcap_train.vscattercut = 1        # Velocity scatter upper bound
compiler_aspcap_train.starflagcut = True     # Require STARFLAG == 0
compiler_aspcap_train.aspcapflagcut = True   # Require ASPCAPFLAG == 0
compiler_aspcap_train.ironlow = -10000.      # [Fe/H] lower bound (effectively no cut)
compiler_aspcap_train.continuum = False      # Use ASPCAP normalization
compiler_aspcap_train.SNR_low = 200          # SNR lower bound
compiler_aspcap_train.SNR_high = 99999       # SNR upper bound
compiler_aspcap_train.filename = 'aspcap_norm_train'
# Compile a .h5 dataset with the .compile() method
compiler_aspcap_train.compile()
```
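Conceptually, the settings above are just row cuts on a stellar catalog. The following sketch shows the equivalent pandas filter on a toy table; the column names (`teff`, `snr`, `vscatter`, `fe_h`) are illustrative, not the real APOGEE field names or the astroNN internals:

```python
import pandas as pd

# Toy catalog; columns and values are made up for illustration.
catalog = pd.DataFrame({
    'teff':     [3900, 4500, 5200, 5000],
    'snr':      [250, 150, 300, 220],
    'vscatter': [0.2, 0.5, 1.5, 0.1],
    'fe_h':     [-0.1, 0.3, -2.0, 0.0],
})

# The same kind of cuts the compiler settings express.
mask = ((catalog['teff'] >= 4000) & (catalog['teff'] <= 5500) &
        (catalog['snr'] >= 200) & (catalog['snr'] < 99999) &
        (catalog['vscatter'] < 1) &
        (catalog['fe_h'] > -10000.))
train_set = catalog[mask]
print(len(train_set))  # 1
```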
## Compile a testing set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# Create an astroNN compiler instance
compiler_aspcap_test = H5Compiler()
compiler_aspcap_test.teff_low = 4000        # Effective temperature lower bound
compiler_aspcap_test.teff_high = 5500       # Effective temperature upper bound
compiler_aspcap_test.vscattercut = 1        # Velocity scatter upper bound
compiler_aspcap_test.starflagcut = True     # Require STARFLAG == 0
compiler_aspcap_test.aspcapflagcut = True   # Require ASPCAPFLAG == 0
compiler_aspcap_test.ironlow = -10000.      # [Fe/H] lower bound (effectively no cut)
compiler_aspcap_test.continuum = False      # Use ASPCAP normalization
compiler_aspcap_test.SNR_low = 100          # SNR lower bound
compiler_aspcap_test.SNR_high = 200         # SNR upper bound
compiler_aspcap_test.filename = 'aspcap_norm_test'
# Compile a .h5 dataset with the .compile() method
compiler_aspcap_test.compile()
```
## Train a NN with ASPCAP normalization
```
import numpy as np
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored
loader = H5Loader('aspcap_norm_train')  # ASPCAP-normalized training set
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = ApogeeBCNNCensored()
bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper
bcnn.max_epochs = 60 # default max epochs used in the paper
bcnn.autosave = True
bcnn.folder_name = 'aspcapStar_BCNNCensored'
bcnn.train(x, y, labels_err=y_err)
```
## Test the NN with ASPCAP normalization
```
import numpy as np
import pandas as pd
from astropy.stats import mad_std as mad
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored, load_folder
loader = H5Loader('aspcap_norm_test')  # ASPCAP-normalized testing set
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = load_folder('aspcapStar_BCNNCensored')
pred, pred_error = bcnn.test(x, y)
residue = (pred - y)
bias = np.ma.median(np.ma.array(residue, mask=[y == -9999.]), axis=0)
scatter = mad(np.ma.array(residue, mask=[y == -9999.]), axis=0)
d = {'Name': bcnn.targetname, 'Bias': [f'{bias_single:.{3}f}' for bias_single in bias], 'Scatter': [f'{scatter_single:.{3}f}' for scatter_single in scatter]}
df = pd.DataFrame(data=d)
df
```
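The bias and scatter reported above are the median residual and a robust scatter (the median absolute deviation scaled by about 1.4826 to match a Gaussian sigma), with the -9999. sentinel values masked out. A self-contained illustration of the same computation on toy residuals:

```python
import numpy as np

# Toy labels and predictions; -9999. marks a missing label, as above.
y    = np.array([5000., 5100., -9999., 4900.])
pred = np.array([5010., 5120., 0., 4905.])

mask = (y == -9999.)
residue = np.ma.array(pred - y, mask=mask)

bias = np.ma.median(residue)                                 # median residual
scatter = 1.4826 * np.ma.median(np.abs(residue - bias))      # MAD scaled to a sigma

print(bias)  # 10.0
```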
## What is SPySort?
SPySort is a spike sorting package written entirely in Python. It takes advantage of Numpy, Scipy, Matplotlib, Pandas and Scikit-learn. Below, you can find a brief how-to-use tutorial.
### Load the data
To begin with, we have to load our raw data. This can be done either by using the **import_data** module of SPySort or by using a custom loading function. In the case where the user would like to use the SPySort's methods to load its data he or she has to take into account the fact that the raw data must be in plain binary format. SPySort does not yet support reading other formats such as HDF5. Here we examine the case where the user decides to use the SPySort's methods. In the demonstration case below we have already the data available on our machine. If you would like to get the data you can use the following commands before performing any further analysis.
#### Download the data
```
import os
from urllib.request import urlretrieve
import matplotlib.pyplot as plt
%matplotlib inline
data_names = ['Locust_' + str(i) + '.dat.gz' for i in range(1, 5)]
data_src = ['http://xtof.disque.math.cnrs.fr/data/'+ n for n in data_names]
[urlretrieve(data_src[i], data_names[i]) for i in range(4)]
[os.system('gunzip ' + n) for n in data_names]
data_files_names = ['Locust_' + str(i) + '.dat' for i in range(1, 5)]
```
#### Load the data
Once we have downloaded the data, we load them using the **import_data** module. It provides the method **read_data(filenames, frequency)**, which takes the filenames and the sampling frequency as input and loads all the raw data.
Next, we can print the five-number summary of each row (recording channel) using the method **fiveNumbers()**. The first column contains the minimal value; the second, the first quartile; the third, the median; the fourth, the third quartile; and the fifth, the maximal value. These five numbers let us verify that the data are suitable for further statistical analysis. We can also check whether some processing, such as a division by the standard deviation (SD), has already been applied to our raw data by calling the method **checkStdDiv()**. Finally, we can obtain the discretization step amplitude by calling the method **discretStepAmpl()**.
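The five-number summary itself is just the 0/25/50/75/100th percentiles of each channel. A minimal NumPy equivalent of what the five-number method reports (an illustrative sketch, not SPySort's implementation):

```python
import numpy as np

# One row per recording channel.
data = np.array([[0., 1., 2., 3., 4., 5., 6., 7., 8.],
                 [1., 1., 1., 5., 5., 5., 9., 9., 9.]])

# min, first quartile, median, third quartile, max for each channel
five = np.percentile(data, [0, 25, 50, 75, 100], axis=1).T
print(five)
```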
```
import numpy as np
from spysort.ReadData import import_data
# Data analysis parameter
freq = 1.5e4 # Sampling frequency
win = np.array([1., 1., 1., 1., 1.])/5. # Boxcar filter
# read_data instance
r_data = import_data.read_data(data_files_names, freq) # Read raw data
print(r_data.fiveNumbers())      # Print the five-number summary
print(r_data.checkStdDiv())      # Check if our data have been divided by the SD
print(r_data.discretStepAmpl())  # Print the discretization step amplitude
```
### Preprocessing the data
Once we have loaded the raw data and checked that the data range (maximum - minimum) is similar (close to 20) on the four recording sites, and that the inter-quartile ranges in the five-number summaries are also similar, we can proceed to the renormalization of our data.
The renormalization can be done by calling the method **renormalization()** of the **import_data** module.
As the following code snippet shows, the **timeseries** attribute is not one-dimensional: it contains the renormalized data in its first dimension and the corresponding timestamps in its second. Finally, we can plot the renormalized raw data for visual inspection.
The goal of the renormalization is to scale the raw data so that the noise SD is approximately 1. Since it is not straightforward to estimate the noise SD on data where both signal (i.e., spikes) and noise are present, we use a robust statistic, the MAD, in place of the SD.
```
# Data normalization using MAD
r_data.timeseries[0] = r_data.renormalization()
r_data.plotData(r_data.data[0:300]) # Plot normalized data
```
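Concretely, the MAD-based renormalization divides each centered trace by 1.4826 x median(|x - median(x)|), which matches the SD on Gaussian noise while being insensitive to the spikes. A sketch of the computation (illustrative, not SPySort's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 2.0, size=100_000)  # "noise" with true SD = 2
trace[::1000] += 50.0                        # sparse large "spikes"

med = np.median(trace)
mad = 1.4826 * np.median(np.abs(trace - med))  # robust SD estimate, ~2.0 here
normalized = (trace - med) / mad

# After scaling, the noise SD (and the MAD) of the trace is close to 1,
# even though the spikes would have inflated a plain SD estimate.
print(mad)
```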
Now we have renormalized our data, but how do we know that the MAD does its job? To find out, we can compute the Q-Q plots of the whole traces normalized with the MAD against the traces normalized with the "classical" SD. This is implemented in SPySort's **checkMad()** method.
```
r_data.checkMad()
```
### Detect peaks
After the normalization of the raw data, we apply a threshold method in order to detect peaks (possible spike events). Before detecting any peaks we filter the data slightly using a "box" filter of length 5: each data point of the original trace is replaced by the average of itself and its four nearest neighbors. We then scale the filtered traces so that the MAD is one on each recording site, and keep only the parts of the signal that are above a threshold. The threshold is an argument and can easily be set by the user.
The peak detection is done by creating a **spike_detection** instance and then calling the methods **filtering(threshold, window)** and **peaks(filtered_data, minimalDist, notZero, kind)**. The method **filtering()** takes a threshold value and the filtering window as arguments. The method **peaks()** takes the filtered data, the minimal distance between two successive peaks, the smallest value above which the absolute value of the derivative is considered non-zero, and the kind of detection method. There are two choices for the detection method: the aggregate method, where the data of all channels are summed and the detection is performed on the aggregated data, and the classical method, where a peak detection is performed on each signal separately.
```
from spysort.Events import spikes
# Create a spike_detection instance
s = spikes.spike_detection(r_data.timeseries)
# Filter the data using a boxcar window of length 5 and a threshold value of 4
filtered = s.filtering(4.0, win)
# Detect peaks over the filtered data
sp0 = s.peaks(filtered, kind='aggregate')
```
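Under the hood, this detection stage amounts to: convolve with a boxcar, rescale by the MAD, threshold, and keep local maxima. A minimal single-channel NumPy sketch of that pipeline (illustrative only; SPySort's `filtering`/`peaks` handle multiple channels, minimal peak distances, and edge cases):

```python
import numpy as np

def detect_peaks(trace, win_len=5, threshold=4.0):
    """Boxcar-filter a single trace, MAD-normalize it, and return the
    indices of local maxima that exceed the threshold."""
    win = np.ones(win_len) / win_len
    filt = np.convolve(trace, win, mode='same')
    mad = 1.4826 * np.median(np.abs(filt - np.median(filt)))
    filt = filt / mad
    above = filt > threshold
    # local maximum: larger than the left neighbor, at least the right one
    local_max = np.r_[False, (filt[1:-1] > filt[:-2]) & (filt[1:-1] >= filt[2:]), False]
    return np.flatnonzero(above & local_max)

rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 2000)
trace[500:505] += 30.0  # one artificial spike
print(detect_peaks(trace))
```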
### Split the data
Depending on the nature of our raw data we can accelerate the spike sorting by splitting the data into two or more sets. In the present example we split the data into two subsets, an early and a later one. In sp0E (early) the number of detected events is 908; in sp0L (late) it is 887. We can then plot the new filtered data sets and the detected peaks by calling the methods **plotFilteredData(data, filteredData, threshold)** and **plotPeaks(data, positions)**, respectively.
```
# We split our peak positions into two subsets (left and right)
sp0E = sp0[sp0 <= r_data.data_len/2.]
sp0L = sp0[sp0 > r_data.data_len/2.]
s.plotFilteredData(s.data, filtered, 4.) # Plot the data
s.plotPeaks(s.data, sp0E) # Plot the peaks of the left subset
```
### Making cuts
After detecting our spikes, we must make our cuts in order to create our events sample. The obvious first question is: how long should our cuts be? The pragmatic way to get an answer is:
+ Make long cuts, e.g. 50 sampling points on both sides of the detected event's time.
+ Compute robust estimates of the "central" event (with the median) and of the dispersion of the sample around this central event (with the MAD).
+ Plot the two together and check where the MAD trace reaches the background noise level (at 1, since we have normalized the data).
+ Having the central event lets us see whether it significantly outlasts the region where the MAD is above the background noise level.
Clearly, cutting beyond the time at which the MAD returns to the noise level should not bring any useful information as far as classifying the spikes is concerned. In this context, we use the method **mkEvents()** to build all the events and then plot the MAD and the median using the method **plotMadMedian()**.
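At its core, making cuts means slicing a window of `before` samples before and `after` samples after each detected position out of every channel. A stripped-down sketch of the idea behind `mkEvents()` (illustrative only; the real method offers more options, such as alternative positions and data):

```python
import numpy as np

def make_cuts(data, positions, before=14, after=30):
    """Return an array of events, one row per detected position, each row the
    concatenated [pos-before, pos+after) windows from every channel."""
    n_channels, n_samples = data.shape
    events = []
    for pos in positions:
        if pos - before < 0 or pos + after > n_samples:
            continue  # skip events too close to the edges
        cut = data[:, pos - before: pos + after]  # (n_channels, before + after)
        events.append(cut.ravel())
    return np.array(events)

data = np.arange(4 * 1000, dtype=float).reshape(4, 1000)  # 4 toy channels
evts = make_cuts(data, [100, 500, 995])  # the last position is too close to the edge
print(evts.shape)  # (2, 176)
```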
```
from spysort.Events import events
evts = events.build_events(r_data.data, sp0E, win, before=14, after=30)
evtsE = evts.mkEvents(otherPos=True, x=np.asarray(r_data.data), pos=sp0E, before=50, after=50) # Make spike events
evts.plotMadMedian(evtsE) # Plot mad and median of the events
```
### Events
Once we are satisfied with our spike detection, at least in a provisory way, and that we have decided on the length of our cuts, we proceed by making cuts around the detected events using the **mkEvents()** method once again.
### Noise
Getting an estimate of the noise statistical properties is an essential ingredient to build respectable goodness of fit tests. In our approach "noise events" are essentially anything that is not an "event".
We could think that keeping a cut length on each side would be enough. That would indeed be the case if all events started from and returned to zero within a cut, but this is not the case with the cut parameters we chose previously (that will become clear soon). You might wonder why we chose such a short cut length, then: simply to avoid having to deal with too many superposed events, which are the really bothersome events for anyone wanting to do proper sorting. To obtain our noise events we use the method **mkNoise()**.
```
evtsE = evts.mkEvents() # Make spike events
noise = evts.mkNoise() # Make noise events
evts.plotEvents(evtsE) # Plot events
```
### Getting "clean" events
Our spike sorting has two main stages: the first consists of estimating a model, and the second of using this model to classify the data. Our model is going to be built out of reasonably "clean" events. Here, by clean we mean events that are not due to the nearly simultaneous firing of two or more neurons, where simultaneity is defined on the time scale of one of our cuts. When the model is subsequently used to classify data, events that are not "clean" are going to be decomposed into their (putative) constituents; that is, superpositions are going to be looked for and accounted for.
To eliminate the most obvious superpositions we take a rather brute-force approach, looking at the sides of the central peak of our median event and checking that individual events are not too large there, i.e. do not exhibit extra peaks.
### Dimension reduction
```
from spysort.Events import clusters
# Create a clusters instance
c = clusters.pca_clustering(r_data.timeseries[0], sp0E, win, thr=8, before=14, after=30)
print(c.pcaVariance(10))  # Print the variance of the PCs
c.plotMeanPca() # Plot the mean +- PC
c.plotPcaProjections() # Plot the projections of the PCs on the data
```
### Clustering
```
CSize = 10 # Cluster size
kmeans_clusters = c.KMeans(CSize) # K-means clustering
# gmmClusters = c.GMM(10, 'diag') # GMM clustering
c.plotClusters(kmeans_clusters) # Plot the clusters
```
### Spike "peeling"
```
from spysort.Events import alignment
# align_events(data, positions, goodEvts, clusters, CSize,
#              win=[], before=14, after=30, thr=3)
align = alignment.align_events(r_data.data, sp0E, c.goodEvts, kmeans_clusters, CSize, win=win)
centers = {"Cluster " + str(i): align.mk_center_dictionary(sp0E[c.goodEvts][np.array(kmeans_clusters) == i])
for i in range(CSize)}
# Apply the classification and alignment to every detected event
round0 = [align.classify_and_align_evt(sp0[i], centers) for i in range(len(sp0))]
print(len([x[1] for x in round0 if x[0] == '?']))
# Apply it
pred0 = align.predict_data(round0, centers)
data1 = np.array(r_data.timeseries[0]) - pred0
data_filtered = np.apply_along_axis(lambda x: np.convolve(x, np.array([1, 1, 1])/3.), 1, data1)
# `mad` below is a (scaled) median-absolute-deviation function, e.g.:
# mad = lambda x: 1.4826 * np.median(np.abs(x - np.median(x)))
data_filtered = (data_filtered.transpose() / np.apply_along_axis(mad, 1, data_filtered)).transpose()
data_filtered[data_filtered < 4.] = 0
print(data_filtered[0, :].shape)
sp1 = s.peaks(data_filtered[0, :], kind='simple')
print(len(sp1))
round1 = [align.classify_and_align_evt(sp1[i], centers, otherData=True,
x=data1) for i in range(len(sp1))]
print(len([x[1] for x in round1 if x[0] == '?']))
pred1 = align.predict_data(round1, centers)
data2 = data1 - pred1
```
```
import os; os.chdir('../')
from tqdm import tqdm
import pandas as pd
import numpy as np
from sklearn.neighbors import BallTree
%matplotlib inline
from urbansim_templates import modelmanager as mm
from urbansim_templates.models import MNLDiscreteChoiceStep
from urbansim.utils import misc
from scripts import datasources, models
import orca
```
### Load data
```
chts_persons = pd.read_csv('/home/mgardner/data/chts-orig/data/Deliv_PER.csv', low_memory=False)
chts_persons_lookup = pd.read_csv('/home/mgardner/data/chts-orig/data/LookUp_PER.csv')
chts_persons = pd.merge(
chts_persons.set_index(['SAMPN','PERNO']),
chts_persons_lookup.set_index(['SAMPN','PERNO']),
left_index=True, right_index=True,
suffixes=('_persons', '_lookup')).reset_index()
chts_homes = pd.read_csv('/home/mgardner/data/chts-orig/data/LookUp_Home.csv')
chts_persons = pd.merge(chts_persons, chts_homes, on='SAMPN')
# SF Bay Area only!
chts_persons = chts_persons[chts_persons['HCTFIP'].isin([1, 13, 41, 55, 75, 81, 85, 95, 97])].reset_index()
jobs = pd.read_csv('/home/mgardner/data/jobs_w_occup.csv')
buildings = pd.read_hdf('./data/bayarea_ual.h5', 'buildings')
parcels = pd.read_hdf('./data/bayarea_ual.h5', 'parcels')
```
### Get job coords
```
buildings = pd.merge(buildings, parcels[['x', 'y']], left_on='parcel_id', right_index=True)
jobs = pd.merge(jobs, buildings[['x', 'y']], left_on='building_id', right_index=True)
jobs.rename(columns={'x': 'lng', 'y': 'lat'}, inplace=True)
```
### Assign jobs a node ID
```
# load the network nodes
nodes = pd.read_csv('~/data/bay_area_full_strongly_nodes.csv')
nodes = nodes.set_index('osmid')
assert nodes.index.is_unique
# haversine requires data in the form [lat, lng], with inputs/outputs in radians
nodes_rad = np.deg2rad(nodes[['y', 'x']])
persons_rad = np.deg2rad(chts_persons[['WYCORD_lookup', 'WXCORD_lookup']])
jobs_rad = np.deg2rad(jobs[['lat', 'lng']])
# build the tree for fast nearest-neighbor search
tree = BallTree(nodes_rad, metric='haversine')
# query the tree for nearest node to each home
idx = tree.query(jobs_rad, return_distance=False)
jobs['node_id'] = nodes.iloc[idx[:,0]].index
jobs.to_csv('/home/mgardner/data/jobs_w_occup_and_node.csv', index=False)
```
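The nearest-node pattern used here is general: convert degrees to radians, build a BallTree with the haversine metric, and query. A tiny self-contained example with made-up coordinates (not the Bay Area network data):

```python
import numpy as np
from sklearn.neighbors import BallTree

# Two "network nodes" and one query point, in [lat, lng] degrees.
nodes_deg = np.array([[37.77, -122.42],   # node 0: near San Francisco
                      [34.05, -118.24]])  # node 1: near Los Angeles
query_deg = np.array([[37.80, -122.27]])  # a point close to node 0

# haversine expects [lat, lng] in radians
tree = BallTree(np.deg2rad(nodes_deg), metric='haversine')
dist, idx = tree.query(np.deg2rad(query_deg), return_distance=True)

print(idx[0][0])          # 0 -> the San Francisco node
print(dist[0][0] * 6371)  # great-circle distance in km
```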
### Assign CHTS persons a job ID
```
dists = []
no_job_info = []
no_work_coords = []
# new column in CHTS persons to store job_id
chts_persons.loc[:, 'job_id'] = None
# prepare jobs table
jobs.loc[:, 'taken'] = False
jobs.loc[:, 'x'] = jobs_rad['lng']
jobs.loc[:, 'y'] = jobs_rad['lat']
for i, person in tqdm(chts_persons.iterrows(), total=len(chts_persons)):
# only assign a job ID for employed persons with a fixed work location
if (person['EMPLY'] == 1) & (person['WLOC'] == 1):
        # skip person if neither CHTS industry nor occupation is known
if (person['INDUS'] > 96) & (person['OCCUP'] > 96):
no_job_info.append(i)
continue
# skip person if no work location
elif pd.isnull(person[['WYCORD_lookup', 'WXCORD_lookup']]).any():
no_work_coords.append(i)
continue
# if CHTS industry is unknown, match jobs based on occupation only
elif person['INDUS'] > 96:
potential_jobs = jobs[
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# if occupation is unknown, match jobs based on industry only
elif person['OCCUP'] > 96:
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['taken'] == False)]
elif (person['INDUS'] < 97) & (person['OCCUP'] < 97):
# define potential jobs based on industry and occupation
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# if no such jobs exist, define jobs by industry
if len(potential_jobs) == 0:
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['taken'] == False)]
# if no such jobs exist, define jobs by occupation
if len(potential_jobs) == 0:
potential_jobs = jobs[
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# otherwise, continue
if len(potential_jobs) == 0:
continue
# build the tree of potential jobs for fast nearest-neighbor search
tree = BallTree(potential_jobs[['y','x']], metric='haversine')
# query the tree for nearest job to each workplace
dist, idx = tree.query(persons_rad.iloc[i].values.reshape(1,-1), return_distance=True)
# save results
job = potential_jobs.iloc[idx[0][0]]
dists.append(dist[0][0])
chts_persons.loc[i, 'job_id'] = job['job_id']
jobs.loc[jobs['job_id'] == job['job_id'], 'taken'] = True
chts_persons.to_csv('./data/chts_persons_with_job_id.csv', index=False)
# convert dists from radians to kilometers
dists = [dist * 6371 for dist in dists]
pd.Series(dists).plot(kind='hist', bins=20000, xlim=(0, 20), density=True)
print('Assigned job IDs to {0}% of workers with a fixed work location.'.format(
np.round(chts_persons.job_id.count() / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
print('{0}% had no industry/occupation info.'.format(
np.round(len(no_job_info) / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
print('{0}% had no work coordinates.'.format(
np.round(len(no_work_coords) / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
```
# Unsupervised Learning in R
> clustering and dimensionality reduction in R from a machine learning perspective
- author: Victor Omondi
- toc: true
- comments: true
- categories: [unsupervised-learning, machine-learning, r]
- image: images/ield.png
# Overview
Many times in machine learning, the goal is to find patterns in data without trying to make predictions. This is called unsupervised learning. One common use case of unsupervised learning is grouping consumers based on demographics and purchasing history to deploy targeted marketing campaigns. Another example is wanting to describe the unmeasured factors that most influence crime differences between cities. We will cover basic introduction to clustering and dimensionality reduction in R from a machine learning perspective, so that we can get from data to insights as quickly as possible.
# Libraries
```
library(readr)
library(ggplot2)
library(dplyr)
```
# Unsupervised learning in R
The k-means algorithm is one common approach to clustering. We will explore how the algorithm works under the hood, implement k-means clustering in R, visualize and interpret the results, and select the number of clusters when it's not known ahead of time. We'll have applied k-means clustering to a fun "real-world" dataset!
### Types of machine learning
- Unsupervised learning
- Finding structure in unlabeled data
- Supervised learning
- Making predictions based on labeled data
- Predictions like regression or classification
- Reinforcement learning
### Unsupervised learning - clustering
- Finding homogeneous subgroups within larger group
- _People have features such as income, education attainment, and gender_
### Unsupervised learning - dimensionality reduction
- Finding homogeneous subgroups within a larger group: clustering
- Finding patterns in the features of the data: dimensionality reduction
- Dimensionality reduction is used for:
  - Finding patterns in the features of the data
  - Visualization of high-dimensional data
  - Pre-processing before supervised learning
### Challenges and benefits
- No single goal of analysis
- Requires more creativity
- Much more unlabeled data available than cleanly labeled data
## Introduction to k-means clustering
### k-means clustering algorithm
- Breaks observations into a pre-defined number of clusters
### k-means in R
- One observation per row, one feature per column
- k-means has a random component
- Run algorithm multiple times to improve odds of the best model
```
x <- as.matrix(x_df <- read_csv("datasets/x.csv"))
class(x)
head(x)
```
### k-means clustering
We have created some two-dimensional data and stored it in a variable called `x`.
```
x_df %>%
ggplot(aes(x=V1, y=V2)) +
geom_point()
```
The scatter plot on the above is a visual representation of the data.
We will create a k-means model of the `x` data using 3 clusters, then look at the structure of the resulting model using the `summary()` function.
```
# Create the k-means model: km.out
km.out_x <- kmeans(x, centers=3, nstart=20)
# Inspect the result
summary(km.out_x)
```
### Results of kmeans()
The `kmeans()` function produces several outputs. One is the output of modeling, the cluster membership. We will access the `cluster` component directly. This is useful anytime we need the cluster membership for each observation of the data used to build the clustering model. This cluster membership might be used to help communicate the results of k-means modeling.
`k-means` models also have a print method to give a human friendly output of basic modeling results. This is available by using `print()` or simply typing the name of the model.
```
# Print the cluster membership component of the model
km.out_x$cluster
# Print the km.out object
km.out_x
```
### Visualizing and interpreting results of kmeans()
One of the more intuitive ways to interpret the results of k-means models is by plotting the data as a scatter plot and using color to label the samples' cluster membership. We will use the standard `plot()` function to accomplish this.
To create a scatter plot, we can pass data with two features (i.e. columns) to `plot()` with an extra argument `col = km.out$cluster`, which sets the color of each point in the scatter plot according to its cluster membership.
```
# Scatter plot of x
plot(x, main="k-means with 3 clusters", col=km.out_x$cluster, xlab="", ylab="")
```
## How k-means works and practical matters
### Objectives
- Explain how k-means algorithm is implemented visually
- **Model selection**: determining number of clusters
### Model selection
- Recall k-means has a random component
- Best outcome is based on total within cluster sum of squares:
- For each cluster
- For each observation in the cluster
- Determine squared distance from observation to cluster center
- Sum all of them together
- Running algorithm multiple times helps find the global minimum total within cluster sum of squares
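In symbols, the quantity being minimized is the total within-cluster sum of squares (standard k-means notation, added here for reference):

```latex
W = \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2,
\qquad
\mu_k = \frac{1}{\lvert C_k \rvert} \sum_{x_i \in C_k} x_i
```

where $C_k$ is the set of observations assigned to cluster $k$ and $\mu_k$ is its center; this is the quantity `kmeans()` reports as `tot.withinss`.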
### Handling random algorithms
`kmeans()` randomly initializes the centers of clusters. This random initialization can result in assigning observations to different cluster labels. It can also result in finding different local minima for the k-means algorithm. We will demonstrate both effects.
At the top of each plot, the measure of model quality—total within cluster sum of squares error—will be plotted. Look for the model(s) with the lowest error to find models with the better model results.
Because `kmeans()` initializes observations to random clusters, it is important to set the random number generator seed for reproducibility.
```
# Set up 2 x 3 plotting grid
par(mfrow = c(2, 3))
# Set seed
set.seed(1)
for(i in 1:6) {
# Run kmeans() on x with three clusters and one start
km.out <- kmeans(x, centers=3, nstart=1)
# Plot clusters
plot(x, col = km.out$cluster,
main = km.out$tot.withinss,
xlab = "", ylab = "")
}
```
Because of the random initialization of the k-means algorithm, there's quite some variation in cluster assignments among the six models.
### Selecting number of clusters
The k-means algorithm assumes the number of clusters as part of the input. If you know the number of clusters in advance (e.g. due to certain business constraints) this makes setting the number of clusters easy. However, if you do not know the number of clusters and need to determine it, you will need to run the algorithm multiple times, each time with a different number of clusters. From this, we can observe how a measure of model quality changes with the number of clusters.
We will run `kmeans()` multiple times to see how model quality changes as the number of clusters changes. Plots displaying this information help to determine the number of clusters and are often referred to as scree plots.
The ideal plot will have an elbow where the quality measure improves more slowly as the number of clusters increases. This indicates that the quality of the model is no longer improving substantially as the model complexity (i.e. number of clusters) increases. In other words, the elbow indicates the number of clusters inherent in the data.
```
# Initialize total within sum of squares error: wss
wss <- 0
# For 1 to 15 cluster centers
for (i in 1:15) {
km.out <- kmeans(x, centers = i, nstart=20)
# Save total within sum of squares to wss variable
wss[i] <- km.out$tot.withinss
}
# Plot total within sum of squares vs. number of clusters
plot(1:15, wss, type = "b",
xlab = "Number of Clusters",
ylab = "Within groups sum of squares")
# Set k equal to the number of clusters corresponding to the elbow location
k <- 2
```
Looking at the scree plot, it looks like there are inherently 2 or 3 clusters in the data.
## Introduction to the Pokemon data
```
pokemon <- read_csv("datasets//Pokemon.csv")
head(pokemon)
```
### Data challenges
- Selecting the variables to cluster upon
- Scaling the data
- Determining the number of clusters
- Often no clean "elbow" in scree plot
- This will be a core part.
- Visualize the results for interpretation
### Practical matters: working with real data
Dealing with real data is often more challenging than dealing with synthetic data. Synthetic data helps with learning new concepts and techniques, but here we will deal with data that is closer to the type of real data we might find in professional or academic pursuits.
The first challenge with the Pokemon data is that there is no pre-determined number of clusters. We will determine the appropriate number of clusters, keeping in mind that in real data the elbow in the scree plot might be less sharp than in synthetic data. We'll use our judgment to determine the number of clusters.
We'll be plotting the outcomes of the clustering on two dimensions, or features, of the data.
An additional note: We'll utilize the `iter.max` argument to `kmeans()`. `kmeans()` is an iterative algorithm, repeating over and over until some stopping criterion is reached. The default number of iterations for `kmeans()` is 10, which is not enough for the algorithm to converge and reach its stopping criterion, so we'll set the number of iterations to 50 to overcome this issue.
```
head(pokemon <- pokemon %>%
select(HitPoints:Speed))
# Initialize total within sum of squares error: wss
wss <- 0
# Look over 1 to 15 possible clusters
for (i in 1:15) {
# Fit the model: km.out
km.out <- kmeans(pokemon, centers = i, nstart = 20, iter.max = 50)
# Save the within cluster sum of squares
wss[i] <- km.out$tot.withinss
}
# Produce a scree plot
plot(1:15, wss, type = "b",
xlab = "Number of Clusters",
ylab = "Within groups sum of squares")
# Select number of clusters
k <- 2
# Build model with k clusters: km.out
km.out <- kmeans(pokemon, centers = k, nstart = 20, iter.max = 50)
# View the resulting model
km.out
# Plot of Defense vs. Speed by cluster membership
plot(pokemon[, c("Defense", "Speed")],
col = km.out$cluster,
main = paste("k-means clustering of Pokemon with", k, "clusters"),
xlab = "Defense", ylab = "Speed")
```
## Review of k-means clustering
### Chapter review
- Unsupervised vs. supervised learning
- How to create k-means cluster model in R
- How k-means algorithm works
- Model selection
- Application to "real" (and hopefully fun) dataset
# Hierarchical clustering
Hierarchical clustering is another popular method for clustering. We will go over how it works, how to use it, and how it compares to k-means clustering.
## Introduction to hierarchical clustering
### Hierarchical clustering
- Number of clusters is not known ahead of time
- Two kinds: bottom-up and top-down, we will focus on bottom-up
### Hierarchical clustering in R
```
# Load the data first, then compute pairwise distances and cluster
head(x <- as.matrix(x_df <- read_csv("datasets//x2.csv")))
class(x)
dist_matrix <- dist(x)
hclust(dist_matrix)
```
### Hierarchical clustering with results
We will create hierarchical clustering model using the `hclust()` function.
```
# Create hierarchical clustering model: hclust.out
hclust.out <- hclust(dist(x))
# Inspect the result
summary(hclust.out)
```
## Selecting number of clusters
### Dendrogram
- Tree shaped structure used to interpret hierarchical clustering models
```
plot(hclust.out)
abline(h=6, col="red")
abline(h=3.5, col="red")
abline(h=4.5, col="red")
abline(h=6.9, col="red")
abline(h=9, col="red")
```
If you cut the tree at a height of 6.9, you're left with 3 branches representing 3 distinct clusters.
### Tree "cutting" in R
`cutree()` is the R function that cuts a hierarchical model. The `h` and `k` arguments to `cutree()` allow you to cut the tree based on a certain height `h` or a certain number of clusters `k`.
```
cutree(hclust.out, h=6)
cutree(hclust.out, k=2)
# Cut by height
cutree(hclust.out, h=7)
# Cut by number of clusters
cutree(hclust.out, k=3)
```
The output of each `cutree()` call represents the cluster assignments for each observation in the original dataset.
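As a cross-check outside R, the same tree-cutting operations can be sketched in Python with SciPy (an aside, not part of the course code): `fcluster` with the `maxclust` and `distance` criteria plays the role of `cutree()`'s `k` and `h` arguments.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated 2-D blobs, 20 points each
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])

# Complete-linkage tree, like hclust(dist(x), method = "complete")
tree = linkage(x, method="complete")

# cutree(hclust.out, k = 2): cut to a fixed number of clusters
labels_k = fcluster(tree, t=2, criterion="maxclust")

# cutree(hclust.out, h = 3): cut the tree at a fixed height
labels_h = fcluster(tree, t=3.0, criterion="distance")
```

Both cuts recover the two blobs here, since the between-blob merge happens far above height 3.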
## Clustering linkage and practical matters
### Linking clusters in hierarchical clustering
- How is distance between clusters determined? Rules?
- Four methods to determine which clusters should be linked
- **Complete**: pairwise similarity between all observations in cluster 1 and cluster 2, and uses ***largest of similarities***
- **Single**: same as above but uses ***smallest of similarities***
- **Average**: same as above but uses ***average of similarities***
- **Centroid**: finds centroid of cluster 1 and centroid of cluster 2, and uses ***similarity between two centroids***
### Linkage in R
```
hclust.complete <- hclust(dist(x), method = "complete")
hclust.average <- hclust(dist(x), method = "average")
hclust.single <- hclust(dist(x), method = "single")
plot(hclust.complete, main="Complete")
plot(hclust.average, main="Average")
plot(hclust.single, main="Single")
```
Whether you want balanced or unbalanced trees for your hierarchical clustering model depends on the context of the problem you're trying to solve. Balanced trees are essential if you want an even number of observations assigned to each cluster. If you want to detect outliers, on the other hand, an unbalanced tree is more desirable, because pruning an unbalanced tree can result in most observations assigned to one cluster and only a few observations assigned to other clusters.
### Practical matters
- Data on different scales can cause undesirable results in clustering methods
- Solution is to scale data so that features have same mean and standard deviation
- Subtract mean of a feature from all observations
- Divide each feature by the standard deviation of the feature
- Normalized features have a mean of zero and a standard deviation of one
```
colMeans(x)
apply(x, 2, sd)
scaled_x <- scale(x)
colMeans(scaled_x)
apply(scaled_x, 2, sd)
```
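For reference, `scale()`'s behavior can be reproduced outside R; a NumPy sketch (an aside, not part of the course code) applies the same two steps listed above. Note that R's `sd()` uses the n−1 denominator, hence `ddof=1`.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=50.0, scale=10.0, size=(200, 3))  # features on an arbitrary scale

# R's scale(): subtract each column mean, divide by each column sd
scaled_x = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

print(scaled_x.mean(axis=0))         # ~0 for every column
print(scaled_x.std(axis=0, ddof=1))  # 1 for every column
```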
### Linkage methods
We will produce hierarchical clustering models using different linkages and plot the dendrogram for each, observing the overall structure of the trees.
### Practical matters: scaling
Clustering real data may require scaling the features if they have different distributions. We will go back to working with "real" data, the `pokemon` dataset. We will observe the distribution (mean and standard deviation) of each feature, scale the data accordingly, then produce a hierarchical clustering model using the complete linkage method.
```
# View column means
colMeans(pokemon)
# View column standard deviations
apply(pokemon, 2, sd)
# Scale the data
pokemon.scaled <- scale(pokemon)
# Create hierarchical clustering model: hclust.pokemon
hclust.pokemon <- hclust(dist(pokemon.scaled), method="complete")
```
### Comparing kmeans() and hclust()
Comparing k-means and hierarchical clustering, we'll see the two methods produce different cluster memberships. This is because the two algorithms make different assumptions about how the data is generated. In a more advanced course, we could choose to use one model over another based on the quality of the models' assumptions, but for now, it's enough to observe that they are different.
We will compare results from the two models on the `pokemon` dataset to see how they differ.
```
# Apply cutree() to hclust.pokemon: cut.pokemon
cut.pokemon <- cutree(hclust.pokemon, k=3)
# Compare methods
table(km.out$cluster, cut.pokemon)
```
# Dimensionality reduction with PCA
Principal component analysis, or PCA, is a common approach to dimensionality reduction. We'll explore exactly what PCA does, visualize the results of PCA with biplots and scree plots, and deal with practical issues such as centering and scaling the data before performing PCA.
## Introduction to PCA
### Two methods of clustering
- Two methods of clustering - finding groups of homogeneous items
- Next up, dimensionality reduction
- Find structure in features
- Aid in visualization
### Dimensionality reduction
- A popular method is principal component analysis (PCA)
- Three goals when finding lower dimensional representation of features:
- Find linear combination of variables to create principal components
- Maintain most variance in the data
- Principal components are uncorrelated (i.e. orthogonal to each other)
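The third goal above can be checked numerically: the scores on the principal components have a diagonal covariance matrix. A small NumPy aside (not part of the R course code) demonstrates this on correlated synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Three strongly correlated features built from one shared signal
base = rng.normal(size=(300, 1))
x = np.hstack([base + 0.1 * rng.normal(size=(300, 1)) for _ in range(3)])

xc = x - x.mean(axis=0)            # center, as prcomp(center = TRUE) does
u, s, vt = np.linalg.svd(xc, full_matrices=False)
scores = xc @ vt.T                 # principal component scores (prcomp's $x)

cov = np.cov(scores, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
print(np.abs(off_diag).max())      # ~0: the components are uncorrelated
```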
```
head(iris)
```
### PCA in R
```
summary(
pr.iris <- prcomp(x=iris[-5], scale=F, center=T)
)
```
### PCA using prcomp()
We will create a PCA model and observe the diagnostic results.
We have loaded the Pokemon data, which has four dimensions, and placed it in a variable called `pokemon`. We'll create a PCA model of the data, then inspect the resulting model using the `summary()` function.
```
# Perform scaled PCA: pr.out
pr.out <- prcomp(pokemon, scale=T)
# Inspect model output
summary(pr.out)
```
The first 3 principal components describe around 77% of the variance.
### Additional results of PCA
PCA models in R produce additional diagnostic and output components:
- `center`: the column means used to center the data, or `FALSE` if the data weren't centered
- `scale`: the column standard deviations used to scale the data, or `FALSE` if the data weren't scaled
- `rotation`: the directions of the principal component vectors in terms of the original features/variables. This information allows you to define new data in terms of the original principal components
- `x`: the value of each observation in the original dataset projected onto the principal components
You can access these the same as other model components. For example, use `pr.out$rotation` to access the rotation component
```
head(pr.out$x)
pr.out$center
pr.out$scale
```
## Visualizing and interpreting PCA results
### Biplots in R
```
biplot(pr.iris)
```
Petal.Width and Petal.Length are correlated in the original dataset.
### Scree plots in R
```
# Getting proportion of variance for a scree plot
pr.var <- pr.iris$sdev^2
pve <- pr.var/sum(pr.var)
# Plot variance explained for each principal component
plot(pve, main="variance explained for each principal component",
xlab="Principal component", ylab="Proportion of Variance Explained", ylim=c(0,1), type="b")
```
### Interpreting biplots (1)
The `biplot()` function plots both the principal component loadings and the mapping of the observations to their first two principal component values. We will interpret the `biplot()` visualization.
Using the `biplot()` of the `pr.out` model, which two original variables have approximately the same loadings in the first two principal components?
```
biplot(pr.out)
```
`Attack` and `HitPoints` have approximately the same loadings in the first two principal components.
### Variance explained
The second common plot type for understanding PCA models is a scree plot. A scree plot shows the variance explained as the number of principal components increases. Sometimes the cumulative variance explained is plotted as well.
We will prepare data from the `pr.out` model for use in a scree plot. Preparing the data for plotting is required because there is not a built-in function in R to create this type of plot.
```
# Variability of each principal component: pr.var
pr.var <- pr.out$sdev^2
# Variance explained by each principal component: pve
pve <-pr.var / sum(pr.var)
```
### Visualize variance explained
Now we will create a scree plot showing the proportion of variance explained by each principal component, as well as the cumulative proportion of variance explained.
These plots can help to determine the number of principal components to retain. One way to determine the number of principal components to retain is by looking for an elbow in the scree plot showing that as the number of principal components increases, the rate at which variance is explained decreases substantially. In the absence of a clear elbow, we can use the scree plot as a guide for setting a threshold.
```
# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component",
ylab = "Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component",
ylab = "Cumulative Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
```
When the number of principal components is equal to the number of original features in the data, the cumulative proportion of variance explained is 1.
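This fact is easy to verify numerically; a NumPy aside (not part of the R course code) computes the proportion of variance explained from the singular values and checks that keeping all components explains everything.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(100, 5))
xc = x - x.mean(axis=0)

# Proportion of variance explained by each component, from the singular values
s = np.linalg.svd(xc, compute_uv=False)
pve = s**2 / np.sum(s**2)

print(np.cumsum(pve))  # the last entry is 1 when all 5 components are kept
```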
### Practical issues with PCA
- Scaling the data
- Missing values:
- Drop observations with missing values
- Impute / estimate missing values
- Categorical data:
- Do not use categorical data features
- Encode categorical features as numbers
### mtcars dataset
```
head(mtcars)
```
### Scaling
```
# Means and standard deviations vary a lot
round(colMeans(mtcars), 2)
round(apply(mtcars, 2, sd), 2)
biplot(prcomp(mtcars, center=T, scale=F))
biplot(prcomp(mtcars, scale=T, center=T))
```
### Practical issues: scaling
Scaling data before doing PCA changes the results of the PCA modeling. Here, we will perform PCA with and without scaling, then visualize the results using biplots.
Sometimes scaling is appropriate when the variances of the variables are substantially different. This is commonly the case when variables have different units of measurement, for example, degrees Fahrenheit (temperature) and miles (distance). Making the decision to use scaling is an important step in performing a principal component analysis.
```
head(pokemon <- read_csv("datasets//Pokemon.csv"))
head(pokemon <- pokemon%>%
select(Total, HitPoints, Attack, Defense, Speed))
# Mean of each variable
colMeans(pokemon)
# Standard deviation of each variable
apply(pokemon, 2, sd)
# PCA model with scaling: pr.with.scaling
pr.with.scaling <- prcomp(pokemon, center=T, scale=T)
# PCA model without scaling: pr.without.scaling
pr.without.scaling <- prcomp(pokemon, center=T, scale=F)
# Create biplots of both for comparison
biplot(pr.with.scaling)
biplot(pr.without.scaling)
```
The new Total column contains much more variation, on average, than the other four columns, so it has a disproportionate effect on the PCA model when scaling is not performed. After scaling the data, there's a much more even distribution of the loading vectors.
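The effect described above can be reproduced on synthetic data; a NumPy aside (not the course's `pokemon` data) gives one column a much larger variance and shows that, without scaling, the first principal component absorbs essentially all the variance.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# One dominant-variance column (playing the role of Total) plus three unit-variance columns
x = np.column_stack([rng.normal(0, 100.0, n),
                     rng.normal(0, 1.0, n),
                     rng.normal(0, 1.0, n),
                     rng.normal(0, 1.0, n)])

def pve_first_pc(data):
    """Proportion of variance explained by the first principal component."""
    dc = data - data.mean(axis=0)
    s = np.linalg.svd(dc, compute_uv=False)
    return (s**2 / np.sum(s**2))[0]

print(pve_first_pc(x))                            # ~1: PC1 is just the big column
z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)  # standardize first
print(pve_first_pc(z))                            # ~0.25: variance now spread out
```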
# Exploring Wisconsin breast cancer data
## Introduction to the case study
- Human breast mass data:
- Ten features measured for each cell nucleus
- Summary information is provided for each group of cells
- Includes diagnosis: benign (not cancerous) and malignant (cancerous)
### Analysis
- Download data and prepare data for modeling
- Exploratory data analysis (# observations, # features, etc.)
- Perform PCA and interpret results
- Complete two types of clustering
- Understand and compare the two types
- Combine PCA and clustering
### Preparing the data
```
url <- "datasets//WisconsinCancer.csv"
# Download the data: wisc.df
wisc.df <- read.csv(url)
# Convert the features of the data: wisc.data
wisc.data <- as.matrix(wisc.df[3:32])
# Set the row names of wisc.data
row.names(wisc.data) <- wisc.df$id
# Create diagnosis vector
diagnosis <- as.numeric(wisc.df$diagnosis == "M")
```
### Exploratory data analysis
- How many observations are in this dataset?
```
dim(wisc.data)
names(wisc.df)
```
### Performing PCA
The next step is to perform PCA on `wisc.data`.
It's important to check if the data need to be scaled before performing PCA. Two common reasons for scaling data:
- The input variables use different units of measurement.
- The input variables have significantly different variances.
```
# Check column means and standard deviations
colMeans(wisc.data)
apply(wisc.data, 2, sd)
# Execute PCA, scaling if appropriate: wisc.pr
wisc.pr <- prcomp(wisc.data, center=T, scale=T)
# Look at summary of results
summary(wisc.pr)
```
### Interpreting PCA results
Now we'll use some visualizations to better understand the PCA model.
```
# Create a biplot of wisc.pr
biplot(wisc.pr)
# Scatter plot observations by components 1 and 2
plot(wisc.pr$x[, c(1, 2)], col = (diagnosis + 1),
xlab = "PC1", ylab = "PC2")
# Repeat for components 1 and 3
plot(wisc.pr$x[, c(1,3)], col = (diagnosis + 1),
xlab = "PC1", ylab = "PC3")
# Do additional data exploration of your choosing below (optional)
plot(wisc.pr$x[, c(2,3)], col = (diagnosis + 1),
xlab = "PC2", ylab = "PC3")
```
Because principal component 2 explains more variance in the original data than principal component 3, you can see that the first plot has a cleaner cut separating the two subgroups.
### Variance explained
We will produce scree plots showing the proportion of variance explained as the number of principal components increases. The data from PCA must be prepared for these plots, as there is not a built-in function in R to create them directly from the PCA model.
```
# Set up 1 x 2 plotting grid
par(mfrow = c(1, 2))
# Calculate variability of each component
pr.var <- wisc.pr$sdev^2
# Variance explained by each principal component: pve
pve <- pr.var/sum(pr.var)
# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component",
ylab = "Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component",
ylab = "Cumulative Proportion of Variance Explained",
ylim = c(0, 1), type = "b")
```
### Next steps
- Complete hierarchical clustering
- Complete k-means clustering
- Combine PCA and clustering
- Contrast results of hierarchical clustering with diagnosis
- Compare hierarchical and k-means clustering results
- PCA as a pre-processing step for clustering
### Hierarchical clustering of case data
We will do hierarchical clustering of the observations. This type of clustering does not assume in advance the number of natural groups that exist in the data.
As part of the preparation for hierarchical clustering, the distances between all pairs of observations are computed. Furthermore, there are different ways to link clusters together, with single, complete, and average being the most common linkage methods.
```
# Scale the wisc.data data: data.scaled
data.scaled <- scale(wisc.data)
# Calculate the (Euclidean) distances: data.dist
data.dist <- dist(data.scaled)
# Create a hierarchical clustering model: wisc.hclust
wisc.hclust <- hclust(data.dist, method="complete")
```
### Results of hierarchical clustering
```
plot(wisc.hclust)
```
### Selecting number of clusters
We will compare the outputs from our hierarchical clustering model to the actual diagnoses. Normally when performing unsupervised learning like this, a target variable isn't available. We do have it with this dataset, however, so it can be used to check the performance of the clustering model.
When performing supervised learning—that is, when we're trying to predict some target variable of interest and that target variable is available in the original data—using clustering to create new features may or may not improve the performance of the final model.
```
# Cut tree so that it has 4 clusters: wisc.hclust.clusters
wisc.hclust.clusters <- cutree(wisc.hclust, k=4)
# Compare cluster membership to actual diagnoses
table(wisc.hclust.clusters, diagnosis)
```
Four clusters were picked after some exploration. Before moving on, we may want to explore how different numbers of clusters affect the ability of the hierarchical clustering to separate the different diagnoses.
### k-means clustering and comparing results
There are two main types of clustering: hierarchical and k-means.
We will create a k-means clustering model on the Wisconsin breast cancer data and compare the results to the actual diagnoses and the results of our hierarchical clustering model.
```
# Create a k-means model on wisc.data: wisc.km
wisc.km <- kmeans(scale(wisc.data), centers=2, nstart=20)
# Compare k-means to actual diagnoses
table(wisc.km$cluster, diagnosis)
# Compare k-means to hierarchical clustering
table(wisc.km$cluster, wisc.hclust.clusters)
```
Looking at the second table you generated, it looks like clusters 1, 2, and 4 from the hierarchical clustering model can be interpreted as the cluster 1 equivalent from the k-means algorithm, and cluster 3 can be interpreted as the cluster 2 equivalent.
### Clustering on PCA results
We will put together several steps used earlier and, in doing so, we will experience some of the creativity that is typical in unsupervised learning.
The PCA model required significantly fewer features to describe 80% and 95% of the variability of the data. In addition to normalizing data and potentially avoiding overfitting, PCA also uncorrelates the variables, sometimes improving the performance of other modeling techniques.
Let's see if PCA improves or degrades the performance of hierarchical clustering.
```
# Create a hierarchical clustering model: wisc.pr.hclust
wisc.pr.hclust <- hclust(dist(wisc.pr$x[, 1:7]), method = "complete")
# Cut model into 4 clusters: wisc.pr.hclust.clusters
wisc.pr.hclust.clusters <- cutree(wisc.pr.hclust, k=4)
# Compare to actual diagnoses
table(wisc.pr.hclust.clusters, diagnosis)
table(wisc.hclust.clusters, diagnosis)
# Compare to k-means and hierarchical
table(wisc.km$cluster, diagnosis)
```
# Semantic Text Summarization
Here we use a semantic method to understand the text while keeping the standards of extractive summarization. The task is implemented using several pre-defined models, namely **BERT, BART, T5, XLNet and GPT2**, for summarizing the articles. It is also compared with a classical method, i.e. **summarization based on word frequencies**.
```
## installation
!pip install transformers --upgrade
!pip install bert-extractive-summarizer
!pip install neuralcoref
!python -m spacy download en_core_web_md
from transformers import pipeline
from summarizer import Summarizer, TransformerSummarizer
import pprint
pp = pprint.PrettyPrinter(indent=14)
## documentation for summarizer: https://huggingface.co/transformers/main_classes/pipelines.html#summarizationpipeline
# summarize with BART
summarizer_bart = pipeline(task='summarization', model="bart-large-cnn")
#summarize with BERT
summarizer_bert = Summarizer()
# summarize with T5
summarizer_t5 = pipeline(task='summarization', model="t5-large") # options: 't5-small', 't5-base', 't5-large', 't5-3b', 't5-11b'
#for T5 you can chose the size of the model. Everything above t5-base is very slow, even on GPU or TPU.
# summarize with XLNet
summarizer_xlnet = TransformerSummarizer(transformer_type="XLNet",transformer_model_key="xlnet-base-cased")
# summarize with GPT2
summarizer_gpt2 = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
data = '''
For the actual assembly of an module, the Material list of a complete module is displayed in order to make the necessary materials physically available. Also CAD model of the assembly and 2-D construction models can be viewed or printed out in order to be able to later on
to carry out individual steps.
Necessary steps: The material list, 3D model and 2D drawings of a complete assembly must be available.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60, ratio = 0.1)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# a review on another data
data = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance.
The associated thickening of the material determines the viscosity and thus the quality of the end product.
Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components.
By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated.
By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# Text - Summarization using word frequencies
# importing libraries
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize, sent_tokenize
import bs4 as BeautifulSoup
import urllib.request
#fetching the content from the URL
fetched_data = urllib.request.urlopen('https://en.wikipedia.org/wiki/20th_century')
article_read = fetched_data.read()
#parsing the URL content and storing in a variable
article_parsed = BeautifulSoup.BeautifulSoup(article_read,'html.parser')
#returning <p> tags
paragraphs = article_parsed.find_all('p')
article_content = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance. The associated thickening of the material determines the viscosity and thus the quality of the end product. Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components. By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated. By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
#looping through the paragraphs and adding them to the variable
#for p in paragraphs:
# article_content += p.text
#print(article_content)
def _create_dictionary_table(text_string) -> dict:
#removing stop words
stop_words = set(stopwords.words("english"))
words = word_tokenize(text_string)
#reducing words to their root form
stem = PorterStemmer()
#creating dictionary for the word frequency table
frequency_table = dict()
for wd in words:
wd = stem.stem(wd)
if wd in stop_words:
continue
if wd in frequency_table:
frequency_table[wd] += 1
else:
frequency_table[wd] = 1
return frequency_table
def _calculate_sentence_scores(sentences, frequency_table) -> dict:
#algorithm for scoring a sentence by its words
sentence_weight = dict()
for sentence in sentences:
sentence_wordcount = (len(word_tokenize(sentence)))
sentence_wordcount_without_stop_words = 0
for word_weight in frequency_table:
if word_weight in sentence.lower():
sentence_wordcount_without_stop_words += 1
if sentence[:7] in sentence_weight:
sentence_weight[sentence[:7]] += frequency_table[word_weight]
else:
sentence_weight[sentence[:7]] = frequency_table[word_weight]
sentence_weight[sentence[:7]] = sentence_weight[sentence[:7]] / sentence_wordcount_without_stop_words
return sentence_weight
def _calculate_average_score(sentence_weight) -> int:
#calculating the average score for the sentences
sum_values = 0
for entry in sentence_weight:
sum_values += sentence_weight[entry]
#getting sentence average value from source text
average_score = (sum_values / len(sentence_weight))
return average_score
def _get_article_summary(sentences, sentence_weight, threshold):
sentence_counter = 0
article_summary = ''
for sentence in sentences:
if sentence[:7] in sentence_weight and sentence_weight[sentence[:7]] >= (threshold):
article_summary += " " + sentence
sentence_counter += 1
return article_summary
def _run_article_summary(article):
#creating a dictionary for the word frequency table
frequency_table = _create_dictionary_table(article)
#tokenizing the sentences
sentences = sent_tokenize(article)
#algorithm for scoring a sentence by its words
sentence_scores = _calculate_sentence_scores(sentences, frequency_table)
#getting the threshold
threshold = _calculate_average_score(sentence_scores)
#producing the summary
article_summary = _get_article_summary(sentences, sentence_scores, 1.1 * threshold)
return article_summary
if __name__ == '__main__':
summary_results = _run_article_summary(article_content)
print(summary_results)
# Text - Summarization using GenSim
from gensim.summarization.summarizer import summarize
print(summarize(data))
```
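The frequency-based scorer above pulls in NLTK and its runtime downloads; the same idea can be sketched with no external dependencies (the regex tokenization, tiny stop-word list, and function name here are simplifications introduced for illustration, not NLTK's):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "is", "are", "to", "and", "in", "by", "for", "with"}

def summarize_by_frequency(text, keep_ratio=1.0):
    # Naive sentence split, standing in for nltk's sent_tokenize
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS)
    # Score each sentence by the average frequency of its non-stop words
    scores = {}
    for s in sentences:
        toks = [w for w in re.findall(r"[a-z]+", s.lower()) if w in freq]
        if toks:
            scores[s] = sum(freq[w] for w in toks) / len(toks)
    if not scores:
        return ""
    # Keep sentences scoring at or above (keep_ratio x average score)
    threshold = keep_ratio * sum(scores.values()) / len(scores)
    return " ".join(s for s in sentences if scores.get(s, 0) >= threshold)

text = "Cats chase mice. Cats love cats and cats. Dogs bark loudly."
print(summarize_by_frequency(text))  # keeps only the cat-heavy sentence
```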
## Case Study 1 DS7333
```
import sys
print(sys.path)
```
## Business Understanding
The objective behind this case study is to build a linear regression model using L1 (LASSO) or L2 (Ridge) regularization to predict the critical temperature. The team was given two files which contain the data, and from this data we must attempt to predict the critical temperature.
Our overall goals are to predict critical temperature and to describe which variable carries the most importance.
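The L2 (Ridge) objective mentioned above has a closed-form solution; a NumPy sketch on synthetic data (an aside, not the case-study pipeline; the names here are hypothetical) shows how the penalty shrinks the coefficients. L1 (LASSO) has no closed form and is usually fit iteratively, e.g. with coordinate descent.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Closed-form ridge solution: (X'X + lam*I)^-1 X'y (data assumed centered)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
X = X - X.mean(axis=0)
true_beta = np.array([3.0, -2.0, 0.5, 0.0])
y = X @ true_beta + rng.normal(scale=0.5, size=200)
y = y - y.mean()

beta_ols = ridge_coefficients(X, y, lam=0.0)      # lam = 0 recovers ordinary least squares
beta_ridge = ridge_coefficients(X, y, lam=100.0)  # larger lam shrinks coefficients toward zero
print(np.linalg.norm(beta_ols), np.linalg.norm(beta_ridge))
```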
## Data Evaluation/ Engineering
```
#First we'll import all our necessary libraries and add additional ones as needed as the project grows
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
from ml_metrics import rmse
import matplotlib.pyplot as plt
from sklearn import datasets
from pandas_profiling import ProfileReport
from sklearn.preprocessing import StandardScaler, normalize
from sklearn.preprocessing import MinMaxScaler
from sklearn import linear_model
from sklearn.linear_model import Ridge
from sklearn import preprocessing
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from matplotlib import pyplot
#Next we'll bring in our data
dat1 = pd.read_csv("/Users/lijjumathew/Code/SMU/MSDS-QTW/case_study1/data/unique_m.csv")
dat2 = pd.read_csv("/Users/lijjumathew/Code/SMU/MSDS-QTW/case_study1/data/train.csv")
#Drop critical temp
dat1 = dat1.drop(['critical_temp'], axis=1) # assign the result: drop() is not in-place by default
#Merge the data
frames = [dat1,dat2]
result = pd.concat(frames)
#Drop material
del result['material']
result.info()
result.describe()
pd.set_option('display.max_rows', None)
#Check out the data, start to make some decisions on columns and missing data
#Compute percentages of each columns missing data
percent_missing = result.isnull().sum()/len(result)
#Put percents into dfReduced
missing_value_result = pd.DataFrame({'column_name': result.columns,
'percent_missing': percent_missing})
#Sort it and show the results
missing_value_result.sort_values('percent_missing', inplace=True)
missing_value_result.round(6)
```
Based on the analysis above, we are lucky and have no missing values to impute/handle. Additionally, we will check for NaN's in the combined dataframe.
```
result.isnull().values.any()
result= result.fillna(0)
```
This line handles filling the NaN's with 0's.
```
print(len(result))
result.drop_duplicates(keep = False, inplace = True)
print(len(result))
```
In a separate check for duplicate entries, we can see that we have no duplicates.
In the cell below we will examine the data for multicollinearity, and remove variables above a threshold of 1000.
```
vif_info = pd.DataFrame()
vif_info['VIF'] = [variance_inflation_factor(result.values, i) for i in range(result.shape[1])]
vif_info['Column'] = result.columns
vif_info.sort_values('VIF', ascending=False)
result.describe()
del result['wtd_mean_fie']
del result['wtd_gmean_fie']
del result['mean_fie']
del result['gmean_fie']
del result['wtd_mean_atomic_radius']
del result['wtd_gmean_atomic_radius']
del result['mean_atomic_radius']
del result['entropy_fie']
del result['gmean_atomic_radius']
del result['wtd_gmean_Valence']
del result['mean_Valence']
del result['gmean_Valence']
del result['entropy_Valence']
del result['wtd_gmean_atomic_mass']
del result['mean_atomic_mass']
del result['wtd_entropy_atomic_radius']
del result['gmean_atomic_mass']
del result['wtd_entropy_Valence']
del result['entropy_atomic_mass']
del result['std_atomic_radius']
del result['std_fie']
del result['wtd_std_fie']
del result['wtd_std_atomic_radius']
del result['wtd_mean_ElectronAffinity']
del result['std_ThermalConductivity']
del result['wtd_entropy_fie']
del result['range_fie']
del result['wtd_entropy_atomic_mass']
del result['range_ThermalConductivity']
del result['entropy_Density']
del result['wtd_mean_FusionHeat']
```
Based on the results above, this data is ready for analysis. It is free of duplicates, NaN's, and missing values.
With those items stated, we should also note that we are solely interested in predicting our outcome variable, the critical temperature.
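As a side note on the VIF screen used above: a predictor's VIF is 1/(1 − R²) from regressing that predictor on all the others. A NumPy sketch on synthetic data (an illustration, not the case-study features) makes the threshold concrete.

```python
import numpy as np

def vif(X, j):
    """VIF of column j: regress it on the other columns; VIF = 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # include an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r_squared = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r_squared)

rng = np.random.default_rng(6)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = a + b + rng.normal(scale=0.01, size=500)  # near-perfect linear combination of a and b
d = rng.normal(size=500)                      # independent of everything else
X = np.column_stack([a, b, c, d])

print([round(vif(X, j), 1) for j in range(4)])  # huge VIFs for a, b, c; ~1 for d
```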
### EDA
The purpose of the plot above was to investigate the distribution of values of our outcome variable. Based on this histogram we can see that the data is heavily right skewed. This may be worth log-transforming to fit our normality assumption.
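A histogram of the target, with the skew quantified numerically, can be sketched as follows (the file path and column selection in the commented usage are assumptions about this case study's data):

```python
import numpy as np

def skewness(a):
    """Sample skewness (third standardized moment); > 0 indicates right skew."""
    a = np.asarray(a, dtype=float)
    centered = a - a.mean()
    return (centered**3).mean() / (centered**2).mean() ** 1.5

# Usage against the case-study data would look something like:
# temps = pd.read_csv("train.csv")["critical_temp"]
# temps.hist(bins=50); print(skewness(temps))

# Demonstration on synthetic data
rng = np.random.default_rng(7)
print(skewness(rng.lognormal(size=10_000)))  # positive: right-skewed
print(skewness(rng.normal(size=10_000)))     # near zero: symmetric
```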
## Modeling Preparations
The following evaluation metrics will be used for the regression task. -Mean Absolute Error (MAE) -Root Mean Squared Error (RMSE) -Mean Absolute Percentage Error (MAPE) and R^2
Mean absolute error is being used because it is an intuitive metric that simply looks at the absolute difference between the data and the model's predictions. This metric is a good first pass at evaluating a model's performance: it's simple and quick. The mean absolute error, however, does not indicate under- or over-performance in the model. The residuals all contribute proportionally, so larger errors will contribute significantly to the metric. A small MAE is a good indicator of good prediction, and a large MAE indicates the model may need more work. An MAE of 0 is ideal, but not really achievable. The MAE is fairly robust to outliers since it uses the absolute value. For this model a lower MAE is "better." This metric is appropriate since we are concerned with the model's ability to predict critical temperatures.
Root mean squared error is the standard deviation of the residuals (commonly known as prediction errors). We use these residuals to see how far from the line the points are. RMSE is a good metric to tell us how concentrated the data is around the line of best fit. A differentiating aspect is that the underlying residuals can be positive or negative, as the predicted value under- or over-estimates the actual value, unlike the MAE and MAPE, which use absolute values. Additionally, we can use the RMSE as a good measure of the spread of the y values about the predicted y values.
Mean absolute percentage error is being used as the percentage equivalent of MAE. Similarly to the MAE, MAPE measures how far our model's predictions are from their corresponding targets on average. MAPE is easy to interpret because it is expressed in percentages. As downsides, MAPE is undefined wherever the data contains zero values, it can grow very large when the actual values themselves are small, and it is biased toward predictions that are systematically less than the actual values. We will use MAPE as a more interpretable MAE.
We will interpret the MAPE in terms of percentages; for example, it lets us state that "our model's predictions are on average xx% off from the actual value." This metric is appropriate since we are concerned with the model's ability to predict critical temperatures, and expressing error as a percentage further improves our ability to interpret the model's performance.
Finally, R^2, the coefficient of determination, will be used: it is the proportion of the variation in the dependent variable that is predictable from the independent variables. It is not the strongest metric on its own, but it is easy to interpret since it lives on a 0-100% scale.
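To make the four metrics concrete, here is a minimal sketch on hand-computable toy values (sklearn provides MAE, MSE, and R^2 directly; MAPE is computed from its formula, keeping in mind it is undefined when an actual value is zero):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy actual/predicted values standing in for critical temperatures
y_true = np.array([10.0, 25.0, 40.0, 80.0])
y_pred = np.array([12.0, 22.0, 43.0, 75.0])

mae = mean_absolute_error(y_true, y_pred)            # mean of |error|
rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # sqrt of mean squared error
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # percentage form of MAE
r2 = r2_score(y_true, y_pred)                        # proportion of variance explained

print('MAE: %.4f  RMSE: %.4f  MAPE: %.4f%%  R2: %.4f' % (mae, rmse, mape, r2))
```

With these values the MAPE reads directly as "the predictions are on average about 11% off from the actual value," which is the interpretation used throughout this notebook.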
## Model Building & Evaluations
### Start with Linear Regression Model
```
# Lets scale the data
x = result.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
result = pd.DataFrame(x_scaled, columns=result.columns) # keep the original column names
# Create training and testing sets (cross-validation not needed)
train_set = result.sample(frac=0.8, random_state=100)
test_set = result.drop(train_set.index) # the 20% of rows not sampled into the training set
print(train_set.shape[0])
print(test_set.shape[0])
# Get the training and testing row indices for later use
train_index = train_set.index.values.astype(int)
test_index = test_set.index.values.astype(int)
# Converting the training and testing datasets back to matrix-formats
X_train = train_set.iloc[:, :-1].values # feature columns only; the target sits in the last column
Y_train = train_set.iloc[:, -1].values  # target only
X_test = test_set.iloc[:, :-1].values   # ""
Y_test = test_set.iloc[:, -1].values    # ""
# Fit a linear regression to the training data
reg = LinearRegression().fit(X_train, Y_train) # features are already min-max scaled; the deprecated normalize option isn't needed
print(reg.score(X_train, Y_train))
print(reg.coef_)
print(reg.intercept_)
print(reg.get_params())
# Find the variable with the largest "normalized" coefficient value
print('The positive(max) coef-value is {}'.format(max(reg.coef_))) # Positive Max
#print('The abs(max) coef-value is {}'.format(max(reg.coef_, key=abs))) # ABS Max
max_var = max(reg.coef_) # Positive Max
#max_var = max(reg.coef_, key=abs) # ABS Max
var_index = reg.coef_.tolist().index(max_var)
print('The variable associated with this coef-value is {}'.format(result.columns[var_index]))
importance = reg.coef_
# summarize feature importance
for i, v in enumerate(importance):
    print('Feature: %0d, Score: %.5f' % (i, v))
# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
Y_pred = reg.predict(X_test)
orig_mae = mean_absolute_error(Y_test,Y_pred)
orig_mse = mean_squared_error(Y_test,Y_pred)
orig_rmse_val = rmse(Y_test,Y_pred)
orig_r2 = r2_score(Y_test,Y_pred)
print("MAE: %.30f"%orig_mae)
print("MSE: %.30f"%orig_mse)
print("RMSE: %.30f"%orig_rmse_val)
print("R2: %.30f"%orig_r2)
Y_pred_tr = reg.predict(X_train)
orig_mae_tr = mean_absolute_error(Y_train,Y_pred_tr)
orig_mse_tr = mean_squared_error(Y_train,Y_pred_tr)
orig_rmse_val_tr = rmse(Y_train,Y_pred_tr)
orig_r2_tr = r2_score(Y_train,Y_pred_tr)
print("MAE: %.30f"%orig_mae_tr)
print("MSE: %.30f"%orig_mse_tr)
print("RMSE: %.30f"%orig_rmse_val_tr)
print("R2: %.30f"%orig_r2_tr)
res_frame = pd.DataFrame({'data':'original',
'imputation':'none',
'mae': orig_mae,
'mse': orig_mse,
'rmse':orig_rmse_val,
'R2':orig_r2,
'mae_diff':np.nan,
'mse_diff':np.nan,
'rmse_diff':np.nan,
'R2_diff':np.nan}, index=[0])
res_frame
```
## LASSO Regularization
```
l1_mod = linear_model.Lasso(alpha=0.001).fit(X_train, Y_train) # data already scaled; normalize removed in newer sklearn
print(l1_mod.score(X_train, Y_train))
print(l1_mod.coef_)
print(l1_mod.intercept_)
print(l1_mod.get_params())
Y_pred2 = l1_mod.predict(X_test)
orig_mae2 = mean_absolute_error(Y_test,Y_pred2)
orig_mse2 = mean_squared_error(Y_test,Y_pred2)
orig_rmse_val2 = rmse(Y_test,Y_pred2)
orig_r22 = r2_score(Y_test,Y_pred2)
print("MAE: %.5f"%orig_mae2)
print("MSE: %.5f"%orig_mse2)
print("RMSE: %.5f"%orig_rmse_val2)
print("R2: %.5f"%orig_r22)
Y_pred2_tr = l1_mod.predict(X_train)
orig_mae2_tr = mean_absolute_error(Y_train,Y_pred2_tr)
orig_mse2_tr = mean_squared_error(Y_train,Y_pred2_tr)
orig_rmse_val2_tr = rmse(Y_train,Y_pred2_tr)
orig_r22_tr = r2_score(Y_train,Y_pred2_tr)
print("MAE: %.5f"%orig_mae2_tr)
print("MSE: %.5f"%orig_mse2_tr)
print("RMSE: %.5f"%orig_rmse_val2_tr)
print("R2: %.5f"%orig_r22_tr)
# Find the variable with the largest "normalized" coefficient value
print('The positive(max) coef-value is {}'.format(max(l1_mod.coef_))) # Positive Max
#print('The abs(max) coef-value is {}'.format(max(l1_mod.coef_, key=abs))) # ABS Max
max_var = max(l1_mod.coef_) # Positive Max
#max_var = max(reg.coef_, key=abs) # ABS Max
var_index = l1_mod.coef_.tolist().index(max_var)
print('The variable associated with this coef-value is {}'.format(result.columns[var_index]))
importance = l1_mod.coef_
# summarize feature importance
for i, v in enumerate(importance):
    print('Feature: %0d, Score: %.5f' % (i, v))
# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
res_frame2 = pd.DataFrame({'data':'lasso',
'imputation':'none',
'mae': orig_mae2,
'mse': orig_mse2,
'rmse':orig_rmse_val2,
'R2':orig_r22,
'mae_diff':orig_mae2-orig_mae,
'mse_diff':orig_mse2-orig_mse,
'rmse_diff':orig_rmse_val2-orig_rmse_val,
'R2_diff':orig_r22-orig_r2}, index=[0])
res_frame = pd.concat([res_frame, res_frame2])
res_frame
```
## Ridge Regularization
```
l2_mod = Ridge(alpha=1.0).fit(X_train, Y_train) # data already scaled; normalize removed in newer sklearn
print(l2_mod.score(X_train, Y_train))
print(l2_mod.coef_)
print(l2_mod.intercept_)
print(l2_mod.get_params())
Y_pred3 = l2_mod.predict(X_test)
orig_mae3 = mean_absolute_error(Y_test,Y_pred3)
orig_mse3 = mean_squared_error(Y_test,Y_pred3)
orig_rmse_val3 = rmse(Y_test,Y_pred3)
orig_r23 = r2_score(Y_test,Y_pred3)
print("MAE: %.5f"%orig_mae3)
print("MSE: %.5f"%orig_mse3)
print("RMSE: %.5f"%orig_rmse_val3)
print("R2: %.5f"%orig_r23)
Y_pred3_tr = l2_mod.predict(X_train)
orig_mae3_tr = mean_absolute_error(Y_train,Y_pred3_tr)
orig_mse3_tr = mean_squared_error(Y_train,Y_pred3_tr)
orig_rmse_val3_tr = rmse(Y_train,Y_pred3_tr)
orig_r23_tr = r2_score(Y_train,Y_pred3_tr)
print("MAE: %.5f"%orig_mae3_tr)
print("MSE: %.5f"%orig_mse3_tr)
print("RMSE: %.5f"%orig_rmse_val3_tr)
print("R2: %.5f"%orig_r23_tr)
# Find the variable with the largest "normalized" coefficient value
print('The positive(max) coef-value is {}'.format(max(l2_mod.coef_))) # Positive Max
print('The abs(max) coef-value is {}'.format(max(l2_mod.coef_, key=abs))) # ABS Max
max_var = max(l2_mod.coef_) # Positive Max
#max_var = max(reg.coef_, key=abs) # ABS Max
var_index = l2_mod.coef_.tolist().index(max_var)
print('The variable associated with this coef-value is {}'.format(result.columns[var_index]))
importance = l2_mod.coef_
# summarize feature importance
for i, v in enumerate(importance):
    print('Feature: %0d, Score: %.5f' % (i, v))
# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
res_frame3 = pd.DataFrame({'data':'ridge',
'imputation':'none',
'mae': orig_mae3,
'mse': orig_mse3,
'rmse':orig_rmse_val3,
'R2':orig_r23,
'mae_diff':orig_mae3-orig_mae,
'mse_diff':orig_mse3-orig_mse,
'rmse_diff':orig_rmse_val3-orig_rmse_val,
'R2_diff':orig_r23-orig_r2}, index=[0])
res_frame = pd.concat([res_frame, res_frame3])
res_frame
result.describe()
```
## Model Interpretability & Explainability
## Case Conclusions
# Python Data Model
Most of the content of this notebook has been extracted from the book "Fluent Python" by Luciano Ramalho (O'Reilly, 2015):
http://shop.oreilly.com/product/0636920032519.do
>One of the best qualities of Python is its consistency.
>After working with Python for a while, you are able to start making informed, correct guesses about features that are new to you.
However, if you learned another object-oriented language before Python, you may have found it strange to use `len(collection)` instead of `collection.len()`.
This apparent oddity is the tip of an iceberg that, when properly understood, is the key to everything we call **Pythonic**.
>The iceberg is called the **Python data model**, and it describes the API that you can use to make your own objects play well with the most idiomatic language features.
> You can think of the data model as a description of Python as a framework. It formalises the interfaces of the building blocks of the language itself, such as sequences, iterators, functions, classes, context managers, and so on.
While coding with any framework, you spend a lot of time implementing methods that are called by the framework. The same happens when you leverage the Python data model.
The Python interpreter invokes **special methods** to perform basic object operations, often triggered by **special syntax**.
The special method names are always written with leading and trailing double underscores (i.e., `__getitem__`).
For example, the syntax `obj[key]` is supported by the `__getitem__` special method.
In order to evaluate `my_collection[key]`, the interpreter calls `my_collection.__getitem__(key)`.
The special method names allow your objects to implement, support, and interact with basic language constructs such as:
- Iteration
- Collections
- Attribute access
- Operator overloading
- Function and method invocation
- Object creation and destruction
- String representation and formatting
- Managed contexts (i.e., with blocks)
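Before the card-deck example, here is a throwaway illustration (a toy class, not from the book) of how the interpreter routes built-in syntax to special methods:

```python
class Squares:
    """First n square numbers, exposed through the data model."""
    def __init__(self, n):
        self._data = [i * i for i in range(n)]

    def __len__(self):                 # backs len(squares)
        return len(self._data)

    def __getitem__(self, position):   # backs squares[i], slicing, and iteration
        return self._data[position]

squares = Squares(5)
print(len(squares))      # the interpreter calls squares.__len__()  -> 5
print(squares[3])        # the interpreter calls squares.__getitem__(3)  -> 9
print(list(squares))     # iteration also works, via __getitem__  -> [0, 1, 4, 9, 16]
```

Note that we never call `__len__` or `__getitem__` directly; the interpreter does, which is exactly the "framework" relationship described above.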
## A Pythonic Card Deck
```
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                                        for rank in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]
```
The first thing to note is the use of `collections.namedtuple` to construct a simple class to represent individual cards.
Since Python 2.6, `namedtuple` can be used to build classes of objects that are just bundles of attributes with no custom methods, like a database record.
In the example, we use it to provide a nice representation for the cards in the deck:
```
beer_card = Card('7', 'diamonds')
beer_card
```
But the point of this example is the `FrenchDeck` class.
It’s short, but it packs a punch.
##### Length of a Deck
First, like any standard Python collection, a deck responds to the `len()` function by returning the number of cards in it:
```
deck = FrenchDeck()
len(deck)
```
##### Reading Specific Cards
Reading specific cards from the deck, say the first or the last, should be as easy as `deck[0]` or `deck[-1]`, and this is what the `__getitem__` method provides:
```
deck[0]
deck[-1]
```
##### Picking Random Card
Should we create a method to pick a random card? **No need**.
Python already has a function to get a random item from a sequence: `random.choice`. We can just use it on a deck instance:
```
from random import choice
choice(deck)
choice(deck)
choice(deck)
```
### First Impressions:
We’ve just seen two advantages of using special methods to leverage the Python data model:
- The users of your classes don’t have to memorize arbitrary method names for standard operations
(“How to get the number of items? Is it .size(), .length(), or what?”).
- It’s easier to benefit from the rich Python standard library and avoid reinventing the wheel, like the `random.choice` function.
**... but it gets better ...**
Because our `__getitem__` delegates to the `[]` operator of `self._cards`, our deck automatically supports **slicing**.
Here’s how we look at the top three cards from a brand new deck, and then pick just the aces by starting on index 12 and skipping 13 cards at a time:
```
deck[:3]
deck[12::13]
```
Just by implementing the `__getitem__` special method, our deck is also **iterable**:
```
for card in deck:
print(card)
```
... also in **reverse order**:
```python
for card in reversed(deck):
print(card)
...
```
## To know more...
To have a more complete overview of the Magic (_dunder_) methods, please have a look at the [Extra - Magic Methods and Operator Overloading](Extra - Magic Methods and Operator Overloading.ipynb) notebook.
---
## Exercise: Emulating Numeric Types
```python
from math import hypot
class Vector:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)

    def __abs__(self):
        return hypot(self.x, self.y)

    def __bool__(self):
        return bool(abs(self))

    def __add__(self, other):
        x = self.x + other.x
        y = self.y + other.y
        return Vector(x, y)

    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)
```
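A quick interactive check of the exercise class (the class is redefined here so the snippet runs standalone; outputs worked out by hand):

```python
from math import hypot

# Redefinition of the Vector class from the exercise above, so this cell runs on its own
class Vector:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
    def __repr__(self):
        return 'Vector(%r, %r)' % (self.x, self.y)
    def __abs__(self):
        return hypot(self.x, self.y)
    def __bool__(self):
        return bool(abs(self))
    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)
    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)

print(Vector(2, 4) + Vector(2, 1))   # __add__  -> Vector(4, 5)
print(abs(Vector(3, 4)))             # __abs__  -> 5.0
print(Vector(3, 4) * 3)              # __mul__  -> Vector(9, 12)
print(bool(Vector(0, 0)))            # __bool__ -> False (zero magnitude)
```

Each line of output is produced by the interpreter dispatching the `+`, `abs()`, `*`, and `bool()` syntax to the corresponding special method.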
_Lambda School Data Science — Tree Ensembles_
# Decision Trees — with ipywidgets!
### Notebook requirements
- [ipywidgets](https://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html): works in Jupyter but [doesn't work on Google Colab](https://github.com/googlecolab/colabtools/issues/60#issuecomment-462529981)
- [mlxtend.plotting.plot_decision_regions](http://rasbt.github.io/mlxtend/user_guide/plotting/plot_decision_regions/): `pip install mlxtend`
## Regressing a wave
```
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Example from http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html
def make_data():
    import numpy as np
    rng = np.random.RandomState(1)
    X = np.sort(5 * rng.rand(80, 1), axis=0)
    y = np.sin(X).ravel()
    y[::5] += 2 * (0.5 - rng.rand(16))
    return X, y
X, y = make_data()
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42)
plt.scatter(X_train, y_train)
plt.scatter(X_test, y_test);
from sklearn.tree import DecisionTreeRegressor
def regress_wave(max_depth):
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(X_train, y_train)
    print('Train R^2 score:', tree.score(X_train, y_train))
    print('Test R^2 score:', tree.score(X_test, y_test))
    plt.scatter(X_train, y_train)
    plt.scatter(X_test, y_test)
    plt.step(X, tree.predict(X))
    plt.show()
from ipywidgets import interact
interact(regress_wave, max_depth=(1,8,1));
```
## Classifying a curve
```
import numpy as np
curve_X = np.random.rand(1000, 2)
curve_y = np.square(curve_X[:,0]) + np.square(curve_X[:,1]) < 1.0
curve_y = curve_y.astype(int)
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
lr = LogisticRegression(solver='lbfgs')
lr.fit(curve_X, curve_y)
plot_decision_regions(curve_X, curve_y, lr, legend=False)
plt.axis((0,1,0,1));
from sklearn.tree import DecisionTreeClassifier
def classify_curve(max_depth):
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(curve_X, curve_y)
    plot_decision_regions(curve_X, curve_y, tree, legend=False)
    plt.axis((0,1,0,1))
    plt.show()
interact(classify_curve, max_depth=(1,8,1));
```
## Titanic survival, by age & fare
```
import seaborn as sns
from sklearn.impute import SimpleImputer
titanic = sns.load_dataset('titanic')
imputer = SimpleImputer()
titanic_X = imputer.fit_transform(titanic[['age', 'fare']])
titanic_y = titanic['survived'].values
from sklearn.linear_model import LogisticRegression
from mlxtend.plotting import plot_decision_regions
lr = LogisticRegression(solver='lbfgs')
lr.fit(titanic_X, titanic_y)
plot_decision_regions(titanic_X, titanic_y, lr, legend=False);
plt.axis((0,75,0,175));
def classify_titanic(max_depth):
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(titanic_X, titanic_y)
    plot_decision_regions(titanic_X, titanic_y, tree, legend=False)
    plt.axis((0,75,0,175))
    plt.show()
interact(classify_titanic, max_depth=(1,8,1));
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-Network" data-toc-modified-id="Load-Network-1"><span class="toc-item-num">1 </span>Load Network</a></span></li><li><span><a href="#Explore-Directions" data-toc-modified-id="Explore-Directions-2"><span class="toc-item-num">2 </span>Explore Directions</a></span><ul class="toc-item"><li><span><a href="#Interactive" data-toc-modified-id="Interactive-2.1"><span class="toc-item-num">2.1 </span>Interactive</a></span></li></ul></li></ul></div>
```
is_stylegan_v1 = False
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from datetime import datetime
from tqdm import tqdm
# ffmpeg installation location, for creating videos
plt.rcParams['animation.ffmpeg_path'] = str('/usr/bin/ffmpeg')
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
from IPython.display import display
from ipywidgets import Button
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
%load_ext autoreload
%autoreload 2
# StyleGAN2 Repo
sys.path.append('/tf/notebooks/stylegan2')
# StyleGAN Utils
from stylegan_utils import load_network, gen_image_fun, synth_image_fun, create_video
# v1 override
if is_stylegan_v1:
    from stylegan_utils import load_network_v1 as load_network
    from stylegan_utils import gen_image_fun_v1 as gen_image_fun
    from stylegan_utils import synth_image_fun_v1 as synth_image_fun
import run_projector
import projector
import training.dataset
import training.misc
# Data Science Utils
sys.path.append(os.path.join(*[os.pardir]*3, 'data-science-learning'))
from ds_utils import generative_utils
res_dir = Path.home() / 'Documents/generated_data/stylegan'
```
# Load Network
```
MODELS_DIR = Path.home() / 'Documents/models/stylegan2'
MODEL_NAME = 'original_ffhq'
SNAPSHOT_NAME = 'stylegan2-ffhq-config-f'
Gs, Gs_kwargs, noise_vars = load_network(str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl')
Z_SIZE = Gs.input_shape[1]
IMG_SIZE = Gs.output_shape[2:]
IMG_SIZE
img = gen_image_fun(Gs, np.random.randn(1, Z_SIZE), Gs_kwargs, noise_vars)
plt.imshow(img)
```
# Explore Directions
```
def plot_direction_grid(dlatent, direction, coeffs):
    fig, ax = plt.subplots(1, len(coeffs), figsize=(15, 10), dpi=100)
    for i, coeff in enumerate(coeffs):
        new_latent = (dlatent.copy() + coeff*direction)
        ax[i].imshow(synth_image_fun(Gs, new_latent, Gs_kwargs, randomize_noise=False))
        ax[i].set_title(f'Coeff: {coeff:0.1f}')
        ax[i].axis('off')
    plt.show()
# load learned direction
direction = np.load('/tf/media/datasets/stylegan/learned_directions.npy')
nb_latents = 5
# generate dlatents from mapping network
dlatents = Gs.components.mapping.run(np.random.randn(nb_latents, Z_SIZE), None, truncation_psi=1.) # randn: the mapping network expects standard-normal latents
for i in range(nb_latents):
    plot_direction_grid(dlatents[i:i+1], direction, np.linspace(-2, 2, 5))
```
## Interactive
```
# Setup plot image
dpi = 100
fig, ax = plt.subplots(dpi=dpi, figsize=(7, 7))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0)
plt.axis('off')
im = ax.imshow(gen_image_fun(Gs, np.random.randn(1, Z_SIZE),Gs_kwargs, noise_vars, truncation_psi=1))
#prevent any output for this cell
plt.close()
# fetch attributes names
directions_dir = MODELS_DIR / MODEL_NAME / 'directions' / 'set01'
attributes = [e.stem for e in directions_dir.glob('*.npy')]
# get names or projected images
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / '20200215_192547'
entries = [p.name for p in data_dir.glob("*") if p.is_dir()]
entries.remove('tfrecords')
# set target latent to play with
dlatents = Gs.components.mapping.run(np.random.randn(1, Z_SIZE), None, truncation_psi=0.5) # randn: the mapping network expects standard-normal latents
target_latent = dlatents[0:1]
#target_latent = np.array([np.load("/out_4/image_latents2000.npy")])
%matplotlib inline
@interact
def i_direction(attribute=attributes,
                entry=entries,
                coeff=(-10., 10.)):
    direction = np.load(directions_dir / f'{attribute}.npy')
    target_latent = np.array([np.load(data_dir / entry / "image_latents1000.npy")])
    new_latent_vector = target_latent.copy() + coeff*direction
    im.set_data(synth_image_fun(Gs, new_latent_vector, Gs_kwargs, True))
    ax.set_title('Coeff: %0.1f' % coeff)
    display(fig)
```
```
import numpy as np
# to load mat files (from matlab) we need to import a special function
from scipy.io import loadmat
import matplotlib.pyplot as plt
# We will also need operating system tools via the os package; this lets us interact with the filesystem
import os
#Define the paths to the relevant data
raster_directory = './raster_data/'
object_label_file = './object_labels.npy'
loc_label_file = './loc_labels.npy'
#Let's load data from a single neuron, and the two label files
raster_array = np.load(raster_directory + 'raster1.npy')
object_labels = np.load(object_label_file)
loc_labels = np.load(loc_label_file)
print(raster_array.shape, object_labels.shape, loc_labels.shape)
def get_neuron_dat(raster_dir, neuron_ind):
    raster_arr = np.load(raster_dir + 'raster' + str(neuron_ind) + '.npy')
    return raster_arr
print(raster_array[0:10, 0:10], object_labels[0:10], loc_labels[0:10])
```
The object labels and location labels are quite long and visualizing the first 10 labels does not tell us what all the unique objects and locations are. To do so we can use a numpy function called np.unique(). What are the unique objects and labels?
```
print(np.unique(object_labels), np.unique(loc_labels))
```
There are 60 trials of each object, each 1000 ms long. These trials correspond to IT neural responses to objects presented at 3 different locations in the visual scene (20 trials each). There are 7 objects, for a total of 420 trials. Let's visualize all these trials.
```
#Plotting a raster
#Stimulus turns on at 500ms
plt.figure(figsize=(10,10))
plt.imshow(raster_array, cmap='gray_r', aspect='auto')
plt.ylabel('Trial')
plt.xlabel('Time (ms)')
plt.title('Raster for IT Neuron 1')
#Show when the stimulus comes on
plt.axvline(x=500, color = 'b')
plt.show()
#Let's put this into a function
def plot_raster(raster_array, title=None):
    plt.figure(figsize=(10,10))
    plt.imshow(raster_array, cmap='gray_r', aspect='auto')
    plt.ylabel('Trial')
    plt.xlabel('Time (ms)')
    plt.title(title)
    # Show when the stimulus comes on
    plt.axvline(x=500, color='b')
    plt.show()
```
A common problem in dataset analysis is to compute basic information about your data (like how many trials of each category exist..). For this data calculate how many times each object appears and how many times each location is presented for each object. For example, for the "car" calculate how many cars there are and how many times there is 'car' and 'lower', 'car' and 'upper' etc.
```
unique_objects = np.unique(object_labels)
unique_locations = np.unique(loc_labels)
#Here we show 3 different ways to do the exact same thing
print('Method 1 implementation ...')
for obj in unique_objects:
    for loc in unique_locations:
        # Method 1: generator expression over paired labels
        n_times = sum(object_labels[i] == obj and loc_labels[i] == loc for i in range(len(object_labels)))
        print("Number of times", obj, " and ", loc, ": ", n_times)
print('Method 2 implementation ...')
for obj in unique_objects:
    for loc in unique_locations:
        # Method 2: explicit counter loop
        n_times = 0
        for i in range(len(object_labels)):
            if object_labels[i] == obj and loc_labels[i] == loc:
                n_times += 1
        print("Number of times", obj, " and ", loc, ": ", n_times)
print('Method 3 implementation ...')
for obj in unique_objects:
    for loc in unique_locations:
        # Method 3: boolean-mask indexing
        n_times = sum(loc_labels[object_labels == obj] == loc)
        print("Number of times", obj, " and ", loc, ": ", n_times)
#Now plot the raster for just the trials with the car
indices_car = np.where(object_labels == 'car')
raster_car = raster_array[indices_car]
plot_raster(raster_car, 'Raster for Neuron 1 for Car Objects')
```
Now we would like to compute the summed spikes for each trial of a "car" object being presentation. This will give us 60 spike counts corresponding to each of the trials. After this we want to compute the mean and standard deviation of this array of summed spike counts.
```
#Mean firing rate for each trial
sum_spikes = raster_car[:,500:1000].sum(axis=1)
mean_spikes = sum_spikes.mean()
std_spikes = sum_spikes.std()
print(sum_spikes, mean_spikes, std_spikes)
```
Now we are going to plot a tuning curve. What this means, is for each object (or location) we would like to calculate the mean number of spikes fired for that object. We then want to bar plot this for the 7 objects (and then the 3 locations). We will write a function to do this that takes in a label list and the data array and calculates a tuning curve for those labels.
```
def calc_tuning_curve(data, labels):
    tuning_curve_means = []
    tuning_curve_std = []
    for label in np.unique(labels):
        label_arr = data[labels == label]
        sum_spike = label_arr.sum(axis=1)
        tuning_curve_means.append(sum_spike.mean())
        tuning_curve_std.append(sum_spike.std())
    return [tuning_curve_means, tuning_curve_std]
tuning_curve_object = calc_tuning_curve(raster_array[:,500:], object_labels)
tuning_curve_loc = calc_tuning_curve(raster_array[:,500:], loc_labels)
print(tuning_curve_object, tuning_curve_loc)
def plot_tuning_curve(tuning_curve, labels, title=None):
    y = tuning_curve[0]
    x = list(range(len(y)))
    error = tuning_curve[1]
    plt.figure()
    plt.errorbar(x, y, error)
    plt.title(title)
    plt.xticks(np.arange(len(y)), labels)
    plt.ylabel('Mean number of spikes fired to category')
    plt.xlabel('Category')
    plt.show()
plot_tuning_curve(tuning_curve_object, unique_objects, title = 'Object Tuning')
plot_tuning_curve(tuning_curve_loc, unique_locations, title = 'Location Tuning')
```
# Importing the libraries
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
# Importing the datasets
```
dataset = pd.read_csv("train_ctrUa4K.csv")
dataset2 = pd.read_csv("test_lAUu6dG.csv")
dataset = dataset.drop(['Loan_ID'], axis = 1)
dataset2 = dataset2.drop(['Loan_ID'], axis = 1)
dataset.shape
dataset2.shape
```
# Analysing the Training dataset
```
dataset.head()
dataset.dtypes
dataset.count()
dataset.isna().sum()
dataset["Gender"].isnull().sum()
dataset.info()
dataset["Gender"].value_counts()
dataset["Education"].value_counts()
dataset["Self_Employed"].value_counts()
dataset["Property_Area"].value_counts()
dataset["Loan_Status"].value_counts()
```
# Visualising the datasets
```
categorical_columns = ['Gender', 'Married',
'Dependents', 'Education', 'Self_Employed', 'Property_Area','Credit_History','Loan_Amount_Term']
fig,axes = plt.subplots(4,2,figsize=(12,15))
for idx, cat_col in enumerate(categorical_columns):
    row, col = idx//2, idx%2
    sns.countplot(x=cat_col, data=dataset, hue='Loan_Status', ax=axes[row, col])
plt.scatter(dataset['ApplicantIncome'],dataset['CoapplicantIncome'])
sns.violinplot(x=dataset['ApplicantIncome'], y=dataset['Gender']) # Variable plot
sns.despine()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.bar(dataset['Loan_Status'],dataset['CoapplicantIncome'],color = "yellow")
plt.show()
sns.heatmap(dataset.corr(numeric_only=True), annot=True) # numeric_only avoids errors on the string columns
fig, ax = plt.subplots()
ax.hist(dataset["Loan_Status"],color = "purple")
ax.set_title('Loan approval counts')
ax.set_xlabel('Loan status')
ax.set_ylabel('Frequency')
```
# Taking Care of Missing Values
```
dataset["Gender"].fillna("Male", inplace = True)
dataset["Married"].fillna("No", inplace = True)
dataset["Education"].fillna("Graduate", inplace = True)
dataset["Self_Employed"].fillna("No", inplace = True)
dataset["Property_Area"].fillna("Urban", inplace = True)
dataset.isnull().sum()
dataset2["Gender"].fillna("Male", inplace = True)
dataset2["Married"].fillna("No", inplace = True)
dataset2["Education"].fillna("Graduate", inplace = True)
dataset2["Self_Employed"].fillna("No", inplace = True)
dataset2["Property_Area"].fillna("Urban", inplace = True)
```
# Encoding the categorical variables
```
train_df_encoded = pd.get_dummies(dataset,drop_first=True)
train_df_encoded.head()
train_df_encoded.shape
test_df_encoded = pd.get_dummies(dataset2,drop_first=True)
test_df_encoded.head()
test_df_encoded.shape
```
# Splitting the dependent and independent variables
```
X = train_df_encoded.drop(columns='Loan_Status_Y').values
y = train_df_encoded['Loan_Status_Y'].values
X.shape
X_test_run = test_df_encoded.values
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X[:,0:4] = sc.fit_transform(X[:,0:4])
X_test_run[:,0:4] = sc.transform(X_test_run[:,0:4]) # reuse the scaler fitted on the training features; never refit on test data
```
# Splitting in to train and test
```
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,stratify =y,random_state =42)
print(X_train)
```
# Taking Care of numerical missing values
```
from sklearn.impute import SimpleImputer
imp = SimpleImputer(strategy='mean')
imp_train = imp.fit(X_train)
X_train = imp_train.transform(X_train)
X_test_imp = imp_train.transform(X_test)
X_test_run[0]
X_test_run= imp_train.transform(X_test_run)
```
# Testing different Classification Models
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
log_classifier = LogisticRegression()
log_classifier.fit(X_train, y_train)
y_pred = log_classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## K-Nearest Neighbors
```
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='flag'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## SVM
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='gist_rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Kernel SVM
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax,); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='flag'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Random Forest
```
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='gist_rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
# Predicting the Test Dataset
## Selecting Logistic Regression Based on accuracy_score and F1 score
```
log_classifier.predict(X_test_run)
```
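Accuracy and F1 are the two numbers driving the model choice above. As a reminder of what they measure, here is a minimal pure-Python sketch that computes both from a 2×2 confusion matrix; the counts below are made-up illustrative values, not results from this notebook.

```python
# Accuracy and F1 computed by hand from a 2x2 confusion matrix,
# to show what accuracy_score and f1_score report above.
def metrics_from_cm(tn, fp, fn, tp):
    # Accuracy: fraction of all predictions that are correct.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    # Precision and recall, guarded against division by zero.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, f1

# Hypothetical counts, not taken from any model above.
acc, f1 = metrics_from_cm(tn=50, fp=10, fn=5, tp=35)
print(acc, f1)
```

Because F1 balances precision and recall, it can disagree with accuracy on imbalanced classes, which is why both were checked before selecting a model.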
---
```
%matplotlib inline
```
Failed Model Fits
=================
Example of model fit failures and how to debug them.
```
# Import the FOOOFGroup object
from fooof import FOOOFGroup
# Import simulation code to create test power spectra
from fooof.sim.gen import gen_group_power_spectra
# Import FitError, which we will use to help debug model fit errors
from fooof.core.errors import FitError
```
Model Fit Failures
------------------
The power spectrum model is not guaranteed to fit - sometimes the fit procedure can fail.
Model fit failures are rare, and they typically only happen on spectra that are
particularly noisy, and/or are some kind of outlier for which the fitting procedure
fails to find a good model solution.
In general, model fit failures should lead to a clean exit, meaning that
a failed model fit does not lead to a code error. The failed fit will be encoded in
the results as a null model, and the code can continue onwards.
In this example, we will look at what happens when model fits fail.
```
# Simulate some example power spectra to use for the example
freqs, powers = gen_group_power_spectra(25, [1, 50], [1, 1], [10, 0.25, 3],
nlvs=0.1, freq_res=0.25)
# Initialize a FOOOFGroup object, with some desired settings
fg = FOOOFGroup(min_peak_height=0.1, max_n_peaks=6)
# Fit power spectra
fg.fit(freqs, powers)
```
If there are failed fits, these are stored as null models.
Let's check if there were any null models, from model failures, in the models
that we have fit so far. To do so, the :class:`~fooof.FOOOFGroup` object has some
attributes that provide information on any null model fits.
These attributes are:
- ``n_null_`` : the number of model results that are null
- ``null_inds_`` : the indices of any null model results
```
# Check for failed model fits
print('Number of Null models : \t', fg.n_null_)
print('Indices of Null models : \t', fg.null_inds_)
```
Inducing Model Fit Failures
~~~~~~~~~~~~~~~~~~~~~~~~~~~
So far, we have no model failures (as is typical).
For this example, to induce some model fit failures, we will use a trick: changing the number of
iterations the model uses to fit parameters (`_maxfev`), making it much more likely to fail.
Note that in normal usage, you would likely never want to change the value of `_maxfev`,
and this here is a 'hack' of the code in order to induce reproducible failure modes
in simulated data.
```
# Hack the object to induce model failures
fg._maxfev = 50
# Try fitting again
fg.fit(freqs, powers)
```
As we can see, there are now some model fit failures! Note that, as above, a message is
printed for each model fit failure when in verbose mode.
```
# Check how many model fits failed
print('Number of Null models : \t', fg.n_null_)
print('Indices of Null models : \t', fg.null_inds_)
```
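With failures present, one option before any debugging is simply to exclude the null fits from downstream analysis. The bookkeeping is plain Python; in this sketch, `null_inds` and `results` are hypothetical stand-ins for `fg.null_inds_` and the per-spectrum results, not values from the run above.

```python
# Drop failed (null) fits by index before any downstream analysis.
null_inds = [2, 7]                         # hypothetical failed-fit indices (stand-in for fg.null_inds_)
results = [f"fit_{i}" for i in range(10)]  # hypothetical per-spectrum results

# Indices of models that fit successfully.
good_inds = [i for i in range(len(results)) if i not in set(null_inds)]
good_results = [results[i] for i in good_inds]
print(len(good_results))  # 8 successful fits remain
```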
Debug Mode
----------
There are multiple possible reasons why a model fit failure can occur, or at least
multiple possible steps in the algorithm at which the fit failure can occur.
If you have a small number of fit failures, you can likely just exclude them.
However, if you have multiple fit failures, and/or you want to investigate why the
model is failing, you can use the debug mode to get a bit more information about
where the model is failing.
The debug mode will stop the FOOOF object from catching and continuing past model
fit errors, allowing you to see where the error is happening and get more
information about why it is failing.
Note that here we will run the fitting in a try / except to catch the error and
print it out, without the error actually being raised (for website purposes).
If you just want to see the error, you can run the fit call without the try/except.
```
# Set FOOOFGroup into debug mode
fg.set_debug_mode(True)
# Refit in debug mode, in which failed fits will raise an error
try:
fg.fit(freqs, powers)
except FitError as fooof_error:
print(fooof_error)
```
Debugging Model Fit Errors
~~~~~~~~~~~~~~~~~~~~~~~~~~
This debug mode should indicate in which step the model is failing, which might indicate
what aspects of the data to look into, and/or which settings to try and tweak.
Also, all known model fit failures should be caught by the object, and not raise an
error (when not in debug mode). If you are finding examples in which the model is failing
to fit, and raising an error (outside of debug mode), then this might be an unanticipated
issue with the model fit.
If you are unsure about why or how the model is failing to fit, consider
opening an `issue <https://github.com/fooof-tools/fooof/issues>`_ on the project
repository, and we will try to look into what seems to be happening.
---
# Part 2 Experiment with my dataset
## Prepare the model and load my dataset
```
# Install torch in Google Colab
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
print(torch.__version__)
print(torch.cuda.is_available())
#initialization
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
# Device configuration
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 100
learning_rate = 0.001
# MNIST dataset
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
len(mnist_trainset)
print(mnist_trainset)
train_dataset = torchvision.datasets.MNIST(root='./data/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='./data/',
train=False,
transform=transforms.ToTensor())
# Mount Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
# Load my test dataset
import pandas as pd
from matplotlib.pyplot import imshow
import numpy as np
from PIL import Image
from torch.utils.data.dataset import Dataset
from torchvision import transforms
class SimpleDataset(Dataset):
def __init__(self, data_path, csv_name, transform = None ):
"""
Args:
data_path (string): path to the folder where images and csv files are located
csv_name (string): name of the CSV label file
transform: pytorch transforms for transforms and tensor conversion
"""
# Set path
self.data_path = data_path
# Transforms
self.transform = transform
# Read the csv file
self.data_info = pd.read_csv(data_path + csv_name, header=None)
# First column contains the image paths
self.image_arr = np.asarray(self.data_info.iloc[:, 0])
# Second column is the labels
self.label_arr = np.asarray(self.data_info.iloc[:, 1])
# Calculate len
self.data_len = len(self.data_info.index)
def __getitem__(self, index):
# Get image name from the pandas df
single_image_name = self.image_arr[index]
# Open image
img_as_img = Image.open(self.data_path + single_image_name)
if self.transform is not None:
img_as_img = self.transform(img_as_img)
# Get label(class) of the image based on the cropped pandas column
single_image_label = self.label_arr[index]
#convert to tensor to be consistent with MNIST dataset
single_image_label = torch.LongTensor( [ single_image_label ] )[0]
return (img_as_img, single_image_label)
def __len__(self):
return self.data_len
#mydata = SimpleDataset( "gdrive/My Drive/myDataSet/", "label.csv")
mydataT = SimpleDataset( "gdrive/My Drive/myDataSet/", "label.csv", transform=transforms.ToTensor())
#testc = mydataT[1]
#testT = testc[0]
#imgt = Image.fromarray((testT.numpy()[0] * 255).astype("uint8"))
#display(imgt)
# Load my test set to build my test loader
my_test_loader = torch.utils.data.DataLoader(dataset=mydataT,
batch_size=batch_size,
shuffle=False)
# Convolutional neural network (two convolutional layers)
class ConvNet(nn.Module):
def __init__(self, num_classes=10):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(16),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.fc = nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.layer1(x)
out = self.layer2(out)
out = out.reshape(out.size(0), -1)
out = self.fc(out)
return out
model = ConvNet(num_classes).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
## 1.1 Experiment on a 100-image training set
```
# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 50
learning_rate = 0.001
# Data loader
# Load the MNIST training set
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
if i > 1:
break
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 1 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Test the model on myDataSet
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in my_test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the 100-image-trained model on my dataset: {} %'.format(100 * correct / total))
```
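The `if i > 1: break` pattern above caps each epoch at the first two batches (2 × 50 = 100 images). The same idea reads a little more clearly with `itertools.islice`; this framework-free sketch uses a stand-in loader, so the names are illustrative only.

```python
from itertools import islice

def first_n_batches(loader, n):
    # Yield only the first n batches from any iterable data loader.
    return islice(loader, n)

# Stand-in "loader": ten batches of 50 items each.
fake_loader = (list(range(b * 50, (b + 1) * 50)) for b in range(10))
seen = sum(len(batch) for batch in first_n_batches(fake_loader, 2))
print(seen)  # 100 images, matching the 100-image experiment
```

The same helper would cover the 250, 500, and 1000-image experiments below by changing `n`, instead of editing the break condition each time.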
## 1.2 Experiment on a 250-image training set
```
num_epochs = 5
num_classes = 10
batch_size = 50
learning_rate = 0.001
# Data loader
# Load the MNIST training set
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
if i > 4:
break
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 1 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Test the model on myDataSet
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in my_test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the 250-image-trained model on my dataset: {} %'.format(100 * correct / total))
```
## 1.3 Experiment on a 500-image training set
```
# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 50
learning_rate = 0.001
# Data loader
# Load the MNIST training set
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
if i > 9:
break
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 1 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Test the model on myDataSet
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in my_test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the 500-image-trained model on my dataset: {} %'.format(100 * correct / total))
```
## 1.4 Experiment on a 1000-image training set
```
# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 50
learning_rate = 0.001
# Data loader
# Load the MNIST training set
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
if i > 19:
break
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 1 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Test the model on myDataSet
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in my_test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the 1000-image-trained model on my dataset: {} %'.format(100 * correct / total))
```
## 1.5 Experiment on the full 60,000-image training set
```
# Hyper parameters
num_epochs = 5
num_classes = 10
batch_size = 100
learning_rate = 0.001
# Data loader
# Load the MNIST training set
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
# Test the model on myDataSet
model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():
correct = 0
total = 0
for images, labels in my_test_loader:
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Test Accuracy of the 60,000-image-trained model on my dataset: {} %'.format(100 * correct / total))
```
---
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
for device in ['cpu', 'cuda']:
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to start with. Make sure you are only training the classifier and that the parameters of the features part are frozen.
```
# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
loss = criterion(logps, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
```
---
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>Data Analysis with Python</font></h1>
# House Sales in King County, USA
This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.
<b>id</b> : A notation for a house
<b> date</b>: Date house was sold
<b>price</b>: Price is prediction target
<b>bedrooms</b>: Number of bedrooms
<b>bathrooms</b>: Number of bathrooms
<b>sqft_living</b>: Square footage of the home
<b>sqft_lot</b>: Square footage of the lot
<b>floors</b> :Total floors (levels) in house
<b>waterfront</b> :House which has a view to a waterfront
<b>view</b>: Has been viewed
<b>condition</b> :How good the condition is overall
<b>grade</b>: overall grade given to the housing unit, based on King County grading system
<b>sqft_above</b> : Square footage of house apart from basement
<b>sqft_basement</b>: Square footage of the basement
<b>yr_built</b> : Built Year
<b>yr_renovated</b> : Year when house was renovated
<b>zipcode</b>: Zip code
<b>lat</b>: Latitude coordinate
<b>long</b>: Longitude coordinate
<b>sqft_living15</b> : Living room area in 2015 (implies some renovations). This might or might not have affected the lot size area
<b>sqft_lot15</b> : Lot size area in 2015 (implies some renovations)
You will require the following libraries:
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
from sklearn.linear_model import LinearRegression
%matplotlib inline
```
# Module 1: Importing Data Sets
Load the csv:
```
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
```
We use the method <code>head</code> to display the first 5 rows of the dataframe.
```
df.head()
```
### Question 1
Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image.
```
df.dtypes
```
We use the method describe to obtain a statistical summary of the dataframe.
```
df.describe()
```
# Module 2: Data Wrangling
### Question 2
Drop the columns <code>"id"</code> and <code>"Unnamed: 0"</code> from axis 1 using the method <code>drop()</code>, then use the method <code>describe()</code> to obtain a statistical summary of the data. Take a screenshot and submit it, make sure the <code>inplace</code> parameter is set to <code>True</code>
```
df.drop(df[['id']], axis=1, inplace=True)
df.drop(df[['Unnamed: 0']], axis=1, inplace=True)
df.describe()
```
We can see we have missing values for the columns <code> bedrooms</code> and <code> bathrooms </code>
```
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
```
We can replace the missing values of the column <code>'bedrooms'</code> with the mean of the column <code>'bedrooms' </code> using the method <code>replace()</code>. Don't forget to set the <code>inplace</code> parameter to <code>True</code>
```
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
```
We also replace the missing values of the column <code>'bathrooms'</code> with the mean of the column <code>'bathrooms' </code> using the method <code>replace()</code>. Don't forget to set the <code> inplace </code> parameter top <code> True </code>
```
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
```
# Module 3: Exploratory Data Analysis
### Question 3
Use the method <code>value_counts</code> to count the number of houses with unique floor values, use the method <code>.to_frame()</code> to convert it to a dataframe.
```
df['floors'].value_counts().to_frame()
```
### Question 4
Use the function <code>boxplot</code> in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.
```
sns.boxplot(x="waterfront", y="price", data=df)
```
### Question 5
Use the function <code>regplot</code> in the seaborn library to determine if the feature <code>sqft_above</code> is negatively or positively correlated with price.
```
sns.regplot(x="sqft_above", y= "price", data=df)
plt.ylim(0,)
```
We can use the Pandas method <code>corr()</code> to find the feature other than price that is most correlated with price.
```
df.corr()['price'].sort_values()
```
# Module 4: Model Development
We can fit a linear regression model using the longitude feature <code>'long'</code> and calculate the R^2.
```
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X,Y)
lm.score(X, Y)
```
### Question 6
Fit a linear regression model to predict the <code>'price'</code> using the feature <code>'sqft_living'</code> then calculate the R^2. Take a screenshot of your code and the value of the R^2.
```
X1 = df[['sqft_living']]
Y1 = df['price']
lm1 = LinearRegression()
lm1.fit(X1, Y1)
print('R^2: ', lm1.score(X1, Y1))
```
### Question 7
Fit a linear regression model to predict the <code>'price'</code> using the list of features:
```
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
```
Then calculate the R^2. Take a screenshot of your code.
```
X2 = df[features]
Y2 = df['price']
lm2 = LinearRegression()
lm2.fit(X2, Y2)
print('R^2: ',lm2.score(X2, Y2))
```
### This will help with Question 8
Create a list of tuples; the first element of each tuple contains the name of the estimator:
<code>'scale'</code>
<code>'polynomial'</code>
<code>'model'</code>
The second element of each tuple contains the model constructor:
<code>StandardScaler()</code>
<code>PolynomialFeatures(include_bias=False)</code>
<code>LinearRegression()</code>
```
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
```
### Question 8
Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list <code>features</code>, and calculate the R^2.
```
X3 = df[["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]]
Y3 = df['price']
pipe = Pipeline(Input)
pipe.fit(X3, Y3)
print('R^2: ',pipe.score(X3,Y3))
```
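A pipeline simply chains each step's `fit_transform` and fits the final estimator on the result. A pure-Python sketch of that contract, using hypothetical stand-in steps rather than the scikit-learn classes:

```python
# Minimal sketch of the Pipeline contract: chain fit_transform through the
# intermediate steps, then fit the final estimator. Toy classes, not sklearn.
class MiniPipeline:
    def __init__(self, steps):
        self.steps = steps  # list of (name, estimator) tuples, like Input above
    def fit(self, X, y):
        for name, step in self.steps[:-1]:
            X = step.fit_transform(X)
        self.steps[-1][1].fit(X, y)
        return self

class AddOne:                 # stand-in transformer
    def fit_transform(self, X):
        return [x + 1 for x in X]

class MeanModel:              # stand-in estimator
    def fit(self, X, y):
        self.coef_ = sum(y) / len(y)
        return self

pipe = MiniPipeline([('shift', AddOne()), ('model', MeanModel())])
pipe.fit([1, 2, 3], [10, 20, 30])
print(pipe.steps[-1][1].coef_)  # → 20.0
```

This is why a single `pipe.fit(X3, Y3)` call covers scaling, polynomial expansion, and regression in one step.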
# Module 5: Model Evaluation and Refinement
Import the necessary modules:
```
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
```
We will split the data into training and testing sets:
```
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples:", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
```
### Question 9
Create and fit a Ridge regression object using the training data, set the regularization parameter to 0.1, and calculate the R^2 using the test data.
```
from sklearn.linear_model import Ridge
RidgeModel = Ridge(alpha = 0.1)
RidgeModel.fit(x_train,y_train)
print('R^2 Test: ', RidgeModel.score(x_test, y_test))
```
### Question 10
Perform a second order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, set the regularization parameter to 0.1, and calculate the R^2 using the test data. Take a screenshot of your code and the R^2.
```
pr=PolynomialFeatures(degree=2)
x_train_pr=pr.fit_transform(x_train)
x_test_pr=pr.transform(x_test)  # transform only: reuse the fit from the training data
RidgeModel=Ridge(alpha=0.1)
RidgeModel.fit(x_train_pr, y_train)
print("Test R^2: ",RidgeModel.score(x_test_pr, y_test))
```
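As a sanity check on the transform above: with `degree=2` and `include_bias=False`, `PolynomialFeatures` expands n input columns into the n linear terms plus all squares and pairwise products. A quick count for the 11-feature list (assuming the default interaction settings):

```python
from math import comb

# Number of monomials of degree 1..degree in n variables
# (C(n + degree, degree) counts degree 0..degree; subtract the bias column).
def n_poly_features(n, degree=2):
    return comb(n + degree, degree) - 1

print(n_poly_features(11))  # the 11-feature list above → 77
```

So the Ridge model in Question 10 is fit on 77 columns, which is why regularization helps here.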
<p>Once you complete your notebook you will have to share it. Select the icon on the top right as marked in red in the image below; a dialogue box should open, then select the option all content excluding sensitive code cells.</p>
<p><img width="600" src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/save_notebook.png" alt="share notebook" style="display: block; margin-left: auto; margin-right: auto;"/></p>
<p></p>
<p>You can then share the notebook via a URL by scrolling down as shown in the following image:</p>
<p style="text-align: center;"><img width="600" src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/url_notebook.png" alt="HTML" style="display: block; margin-left: auto; margin-right: auto;" /></p>
<p> </p>
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
| github_jupyter |
# **First run Ocean parcels on SSC fieldset**
```
%matplotlib inline
import numpy as np
import xarray as xr
import os
from matplotlib import pyplot as plt, animation, rc
import matplotlib.colors as mcolors
from datetime import datetime, timedelta
from dateutil.parser import parse
from scipy.io import loadmat
from cartopy import crs, feature
from parcels import FieldSet, Field, VectorField, ParticleSet, JITParticle, ErrorCode, AdvectionRK4, AdvectionRK4_3D
rc('animation', html='html5')
```
## Functions
#### Path prefix
```
def make_prefix(date, path, res='h'):
"""Construct path prefix for local SalishSeaCast results given date object and paths dict
e.g., /results2/SalishSea/nowcast-green.201905/daymonthyear/SalishSea_1h_yyyymmdd_yyyymmdd
"""
datestr = '_'.join(np.repeat(date.strftime('%Y%m%d'), 2))
folder = date.strftime("%d%b%y").lower()
prefix = os.path.join(path, f'{folder}/SalishSea_1{res}_{datestr}')
return prefix
```
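A quick check of the path this helper builds, using a self-contained reimplementation (plain string join instead of `np.repeat`) so it can run without the NEMO file tree:

```python
import os
from datetime import datetime

# Self-contained copy of make_prefix for checking the path it produces.
def make_prefix(date, path, res='h'):
    datestr = '_'.join([date.strftime('%Y%m%d')] * 2)
    folder = date.strftime('%d%b%y').lower()
    return os.path.join(path, f'{folder}/SalishSea_1{res}_{datestr}')

prefix = make_prefix(datetime(2018, 1, 17), '/results2/SalishSea/nowcast-green.201905/')
print(prefix)
# → /results2/SalishSea/nowcast-green.201905/17jan18/SalishSea_1h_20180117_20180117
```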
#### Scatter_Colors
```
colors=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
def scatter_particles(ax, N,colors, nmin, nmax,yvar):
scatter=[]
#N is the number of stations
#Color is a list of strings picking the desired colors
#nmin is t0, nmax is tmax
#yvar is the y coordinate
starts = np.arange(0,N*n,n)
    ends = np.arange(n,N*n+1,n)  # exclusive slice ends, so all n particles per station are plotted
if N < len(colors):
colors = colors[0:N]
elif N > len(colors):
con = 0
while N > len(colors):
colors.append(colors[con])
con+=1
if nmin==nmax:
for i in range(N):
scatter.append(ax.scatter(ds.lon[starts[i]:ends[i], nmin], yvar[starts[i]:ends[i], nmin],c=colors[i],s=5))
else:
for i in range(N):
scatter.append(ax.scatter(ds.lon[starts[i]:ends[i], nmin:nmax], yvar[starts[i]:ends[i], nmin:nmax],c=colors[i],s=5))
return scatter
```
#### Useful Kernel
```
def DeleteParticle(particle, fieldset, time):
"""Delete particle from OceanParcels simulation to avoid run failure
"""
print(f'Particle {particle.id} lost !! [{particle.lon}, {particle.lat}, {particle.depth}, {particle.time}]')
particle.delete()
```
## Load drifters and definitions
```
# Define paths
paths = {
'NEMO': '/results2/SalishSea/nowcast-green.201905/',
'coords': '/ocean/jvalenti/MOAD/grid/coordinates_seagrid_SalishSea201702.nc',
'mask': '/ocean/jvalenti/MOAD/grid/mesh_mask201702.nc',
'out': '/home/jvalenti/MOAD/analysis-jose/notebooks/results',
'anim': '/home/jvalenti/MOAD/animations'
}
# Duration and timestep [s]
length = 20
duration = timedelta(days=length)
dt = 90 #toggle between - or + to pick backwards or forwards
N = 6 # number of deploying locations
n = 100 # 1000 # number of particles per location
# Define Gaussian point cloud in the horizontal
r = 3000 # radius of particle cloud [m]
deg2m = 111000 * np.cos(50 * np.pi / 180)
var = (r / (deg2m * 3))**2
x_offset, y_offset = np.random.multivariate_normal([0, 0], [[var, 0], [0, var]], [n,N]).T
# Set a uniform distribution in depth, from dmin to dmax
dmin = 0.
dmax = 5.
zvals = dmin + np.random.random_sample([n,N]).T*(dmax-dmin)
```
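A sanity check on the point-cloud parameters above: with variance `(r / (3·deg2m))²` per component, roughly 99% of sampled offsets should fall within `r` metres of the centre (the 2-D analogue of the three-sigma rule gives 1 − exp(−9/2) ≈ 0.989):

```python
import numpy as np

# Recreate the cloud parameters and check the radius statistics.
np.random.seed(0)
r = 3000                                        # radius of particle cloud [m]
deg2m = 111000 * np.cos(50 * np.pi / 180)       # metres per degree at ~50N
var = (r / (deg2m * 3))**2
offsets = np.random.multivariate_normal([0, 0], [[var, 0], [0, var]], 100000)
dist_m = np.hypot(offsets[:, 0], offsets[:, 1]) * deg2m
print(round((dist_m < r).mean(), 3))            # ≈ 0.989
```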
## Simulation
```
start = datetime(2018, 1, 17)
daterange = [start+timedelta(days=i) for i in range(length)]
# Build filenames
Ulist, Vlist, Wlist = [], [], []
for day in range(duration.days):
path_NEMO = make_prefix(start + timedelta(days=day), paths['NEMO'])
print (path_NEMO)
Ulist.append(path_NEMO + '_grid_U.nc')
Vlist.append(path_NEMO + '_grid_V.nc')
Wlist.append(path_NEMO + '_grid_W.nc')
# Load NEMO forcing : note, depth aware but no vertical advection, particles stay at their original depth
filenames = {
'U': {'lon': paths['coords'], 'lat': paths['coords'], 'depth': Wlist[0], 'data': Ulist},
'V': {'lon': paths['coords'], 'lat': paths['coords'], 'depth': Wlist[0], 'data': Vlist},
'W': {'lon': paths['coords'], 'lat': paths['coords'], 'depth': Wlist[0], 'data': Wlist},
}
variables = {'U': 'vozocrtx', 'V': 'vomecrty','W': 'vovecrtz'}
dimensions = {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw','time': 'time_counter'}
#bring salish sea results into field_set
field_set = FieldSet.from_nemo(filenames, variables, dimensions, allow_time_extrapolation=True)
```
### Change name for each run!!
```
# Set output file name. Maybe change for each run
fn = f'Long-neutral-MP' + '_'.join(d.strftime('%Y%m%d')+'_1n' for d in [start, start+duration]) + '.nc'
outfile = os.path.join(paths['out'], fn)
print(outfile)
```
### Set particle location
```
lon = np.zeros([N,n])
lat = np.zeros([N,n])
# Execute run
clon, clat = [-123.901172,-125.155849,-123.207648,-122.427508,-123.399769,-123.277731], [49.186308,49.975326,49.305448,47.622403,48.399420,49.11602] # choose horizontal centre of the particle cloud
for i in range(N):
lon[i,:]=(clon[i] + x_offset[i,:])
lat[i,:]=(clat[i] + y_offset[i,:])
z = zvals
# pset = ParticleSet.from_list(field_set, JITParticle, lon=lon, lat=lat, depth=z, time=start+timedelta(hours=2))
# #pset.computeTimeChunk(allow_time_extrapolation=1)
# pset.execute(
# pset.Kernel(AdvectionRK4_3D), runtime=duration, dt=dt,
# output_file=pset.ParticleFile(name=outfile, outputdt=timedelta(hours=1)),
# recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle},
# )
coords = xr.open_dataset(paths['coords'], decode_times=False)
mask = xr.open_dataset(paths['mask'])
ds = xr.open_dataset(outfile)
fig, ax = plt.subplots(figsize=(19, 8))
ax.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:],colors='k',linewidths=0.1)
ax.contourf(coords.nav_lon, coords.nav_lat, mask.tmask[0, 0, ...], levels=[-0.01, 0.01], colors='lightgray')
ax.contour(coords.nav_lon, coords.nav_lat, mask.tmask[0, 0, ...], levels=[-0.01, 0.01], colors='k')
ax.set_aspect(1/1)
nmin, nmax = 0, -1
scatter_particles(ax, N,colors, nmin, nmax, ds.lat)
ax.scatter(clon,clat,c='g', marker='*', linewidths=1)
plt.ylabel('Latitude')
plt.xlabel('Longitude')
fig, ax = plt.subplots(figsize=(10, 10))
scatter_particles(ax, N,colors, nmin, nmax, -ds.z)
ax.grid()
plt.ylabel('Depth [m]')
plt.xlabel('Longitude')
fps = 1
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(xlim=(-126,-122),ylim=(-250,10))
ss = scatter_particles(ax, N,colors, 0, 0, -ds.z)
plt.ylabel('Depth [m]',fontsize=16)
plt.xlabel('Longitude',fontsize=16)
plt.grid()
def update(frames):
global ss
for scat in ss:
scat.remove()
ss = scatter_particles(ax, N,colors, frames, frames, -ds.z)
return ss
anim = animation.FuncAnimation(fig, update, frames=np.arange(0,len(ds.lon[0,:]),fps),repeat=True)
f = paths["anim"]+"/depth_neutral.mp4"
FFwriter = animation.FFMpegWriter()
anim.save(f, writer = FFwriter)
fps = 1
fig = plt.figure(figsize=(19, 8))
ax = plt.axes(xlim=(-127,-121),ylim=(46.8,51.2))
ax.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:],colors='k',linewidths=0.1)
ax.contourf(coords.nav_lon, coords.nav_lat, mask.tmask[0, 0, ...], levels=[-0.01, 0.01], colors='lightgray')
ax.contour(coords.nav_lon, coords.nav_lat, mask.tmask[0, 0, ...], levels=[-0.01, 0.01], colors='k')
ax.grid()
ax.set_aspect(1/1)
ax.scatter(clon,clat,c='g', marker='*', linewidths=2)
plt.ylabel('Latitude',fontsize=16)
plt.xlabel('Longitude',fontsize=16)
ss = scatter_particles(ax, N,colors, 0,0, ds.lat)
def update(frames):
global ss
for scat in ss:
scat.remove()
ss = scatter_particles(ax, N,colors, frames,frames, ds.lat)
return ss
anim = animation.FuncAnimation(fig, update, frames=np.arange(0,len(ds.lon[0,:]),fps))
f=paths["anim"]+"/neutral.mp4"
FFwriter = animation.FFMpegWriter()
anim.save(f, writer = FFwriter)
colors=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
def scatterHD_particles(ax, N,colors, nmin, nmax,yvar):
scatter=[]
#N is the number of stations
#Color is a list of strings picking the desired colors
#nmin is t0, nmax is tmax
#yvar is the y coordinate
starts = np.arange(0,N*n,n)
    ends = np.arange(n,N*n+1,n)  # exclusive slice ends, so all n particles per station are plotted
if N < len(colors):
colors = colors[0:N]
elif N > len(colors):
con = 0
while N > len(colors):
colors.append(colors[con])
con+=1
if nmin==nmax:
for i in range(N):
scatter.append(ax.scatter(ds.lon[starts[i]:ends[i], nmin], yvar[starts[i]:ends[i], nmin],c=colors[i],s=5,transform=crs.PlateCarree(),zorder=1))
else:
for i in range(N):
scatter.append(ax.scatter(ds.lon[starts[i]:ends[i], nmin:nmax], yvar[starts[i]:ends[i], nmin:nmax],c=colors[i],s=5,transform=crs.PlateCarree(),zorder=1))
return scatter
fps = 1
# Make map
fig, ax = plt.subplots(figsize=(19, 8), subplot_kw={'projection': crs.Mercator()})
ax.set_extent([-127, -121, 47, 51], crs=crs.PlateCarree())
ax.add_feature(feature.GSHHSFeature('intermediate', edgecolor='k', facecolor='lightgray'),zorder=2)
ax.add_feature(feature.RIVERS, edgecolor='k',zorder=5)
gl = ax.gridlines(
linestyle=':', color='k', draw_labels=True,
xlocs=range(-126, -121), ylocs=range(47, 52),zorder=5)
gl.top_labels, gl.right_labels = False, False
ss=scatterHD_particles(ax, N,colors, 0, 0,ds.lat)
ax.scatter(clon,clat,c='g', marker='*', linewidth=2,transform=crs.PlateCarree(),zorder=4)
ax.text(-0.1, 0.55, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes, fontsize=14)
ax.text(0.5, -0.1, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes, fontsize=14)
def animate(frames):
global ss
for scat in ss:
scat.remove()
ss=scatterHD_particles(ax, N,colors, frames, frames,ds.lat)
anim = animation.FuncAnimation(fig, animate, frames=np.arange(0,len(ds.lon[0,:]),fps))
f = paths["anim"]+"/neutral_HD.mp4"
FFwriter = animation.FFMpegWriter()
anim.save(f, writer = FFwriter)
```
| github_jupyter |
```
#cell-width control
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
```
# Imports
```
#packages
import numpy
import tensorflow as tf
from tensorflow.core.example import example_pb2
#utils
import os
import random
import pickle
import struct
import time
from generators import *
#keras
import keras
from keras.preprocessing import text, sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Model, Sequential
from keras.models import load_model
from keras.layers import Dense, Dropout, Activation, Concatenate, Dot, Embedding, LSTM, Conv1D, MaxPooling1D, Input, Lambda
#callbacks
from keras.callbacks import TensorBoard, ModelCheckpoint, Callback
```
# Seeding
```
sd = 7
from numpy.random import seed
seed(sd)
from tensorflow import set_random_seed
set_random_seed(sd)
```
# CPU usage
```
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
# Global parameters
```
# Embedding
max_features = 400000
maxlen_text = 400
maxlen_summ = 80
embedding_size = 100 #128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 32
epochs = 20
```
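For reference, the sequence length each LSTM sees follows from these settings: a `valid` convolution with stride 1 shortens a length-L input to L − kernel_size + 1, and max pooling then divides that by pool_size (floor):

```python
# Sequence length after Conv1D (padding='valid', stride 1) + MaxPooling1D.
def downstream_len(L, kernel_size=5, pool_size=4):
    return (L - kernel_size + 1) // pool_size

print(downstream_len(400))  # text branch    → 99 steps of 64 filters
print(downstream_len(80))   # summary branch → 19 steps of 64 filters
```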
# Load data
```
#data_dir = '/mnt/disks/500gb/experimental-data-mini/experimental-data-mini/generator-dist-1to1/1to1/'
data_dir = '/media/oala/4TB/experimental-data/experiment-1_nonconform-models/generator-dist/1to1/'
#processing_dir = '/mnt/disks/500gb/stats-and-meta-data/400000/'
processing_dir = '/media/oala/4TB/experimental-data/stats-and-meta-data/400000/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
with open(processing_dir+'tokenizer.pickle', 'rb') as handle: tokenizer = pickle.load(handle)
embedding_matrix = numpy.load(processing_dir+'embedding_matrix.npy')
#the p_n constant
c = 80000
```
# Model
```
#2way input
text_input = Input(shape=(maxlen_text,embedding_size), dtype='float32')
summ_input = Input(shape=(maxlen_summ,embedding_size), dtype='float32')
#2way dropout
text_route = Dropout(0.25)(text_input)
summ_route = Dropout(0.25)(summ_input)
#2way conv
text_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(text_route)
summ_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(summ_route)
#2way max pool
text_route = MaxPooling1D(pool_size=pool_size)(text_route)
summ_route = MaxPooling1D(pool_size=pool_size)(summ_route)
#2way lstm
text_route = LSTM(lstm_output_size)(text_route)
summ_route = LSTM(lstm_output_size)(summ_route)
#get dot of both routes
merged = Dot(axes=1,normalize=True)([text_route, summ_route])
#negate results
#merged = Lambda(lambda x: -1*x)(merged)
#add p_n constant
#merged = Lambda(lambda x: x + c)(merged)
#output
output = Dense(1, activation='sigmoid')(merged)
#define model
model = Model(inputs=[text_input, summ_input], outputs=[output])
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
# Train model
```
#callbacks
class BatchHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
self.accs = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
self.accs.append(logs.get('acc'))
history = BatchHistory()
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=batch_size, write_graph=True, write_grads=True)
modelcheckpoint = ModelCheckpoint('best.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='min', period=1)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': True,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':None}
#generators
training_generator = ContAllGenerator(partition['train'], labels, **params)
validation_generator = ContAllGenerator(partition['validation'], labels, **params)
# Train model on dataset
model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=6,
epochs=epochs,
callbacks=[tensorboard, modelcheckpoint, history])
with open('losses.pickle', 'wb') as handle: pickle.dump(history.losses, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('accs.pickle', 'wb') as handle: pickle.dump(history.accs, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
| github_jupyter |
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/97-cassava-leaf-effnetb3-scl-cce-512x512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
model = Model(inputs=inputs, outputs=base_model.output)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output', dtype='float32')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy', dtype='float32')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd', dtype='float32')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=False)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
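The `order='F'` reshape in the TTA branch above relies on `predict` returning the repeated dataset's outputs in blocks of `test_size`; a small NumPy demo (toy numbers) of why Fortran order regroups each image's TTA passes onto axis 1:

```python
import numpy as np

test_size, tta_steps, n_classes = 3, 2, 2
# Predictions arrive in blocks of test_size per TTA pass:
# [img0_pass0, img1_pass0, img2_pass0, img0_pass1, img1_pass1, img2_pass1]
rows = [[10 * i + t, 0] for t in range(tta_steps) for i in range(test_size)]
preds = np.array(rows, dtype=float)
# order='F' makes axis 1 index the TTA passes of one image
grouped = preds.reshape(test_size, tta_steps, n_classes, order='F')
print(grouped[0, :, 0])  # → [0. 1.] — both passes of image 0
```

Averaging over axis 1 then gives one prediction per image, as in the notebook.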
| github_jupyter |
```
#import libraries
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
import warnings
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
warnings.filterwarnings('ignore')
#import data
train_df = pd.read_csv("../Dataset/train.csv")
test_df = pd.read_csv("../Dataset/test.csv")
```
# Data inspection
```
train_df.head()
test_df.head()
#shape of dataset
print("shape of dataframe:", train_df.shape)
#data types
print("\nVariable types:\n", train_df.dtypes.value_counts(),train_df.dtypes.value_counts().plot.pie())
plt.draw()
#shape of dataset
print("shape of dataframe:", test_df.shape)
#data types
print("\nVariable types:\n", test_df.dtypes.value_counts(),test_df.dtypes.value_counts().plot.pie())
plt.draw()
train_df.info()
test_df.info()
```
# Data Cleaning
```
train_df = train_df.drop(['Name', 'referral_id'], axis=1) # dropping unnecessary columns
train_df
test_df = test_df.drop(['Name', 'referral_id'], axis=1)
test_df
```
# Checking Null Values
```
#ratio of null values
train_df.isnull().sum()/train_df.shape[0]*100
#ratio of null values
test_df.isnull().sum()/test_df.shape[0]*100
```
# Treating Null Values
```
train_df['region_category'] = train_df['region_category'].fillna(train_df['region_category'].mode()[0])
train_df['preferred_offer_types'] = train_df['preferred_offer_types'].fillna(train_df['preferred_offer_types'].mode()[0])
train_df['points_in_wallet'] = train_df['points_in_wallet'].fillna(train_df['points_in_wallet'].mean())
test_df['region_category'] = test_df['region_category'].fillna(test_df['region_category'].mode()[0])
test_df['preferred_offer_types'] = test_df['preferred_offer_types'].fillna(test_df['preferred_offer_types'].mode()[0])
test_df['points_in_wallet'] = test_df['points_in_wallet'].fillna(test_df['points_in_wallet'].mean())
#checking null values is present or not
print(train_df.info())
print('-'*50)
print(test_df.info())
```
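The same imputation in plain Python, mirroring the mode/mean treatment above on toy columns (hypothetical values, not the churn data):

```python
from statistics import mean, mode

# fillna with the mode (categorical) or mean (numeric), sketched by hand.
region = ['City', 'Town', None, 'City']
points = [600.0, 700.0, None, 800.0]

region_filled = [r if r is not None else mode([x for x in region if x is not None])
                 for r in region]
points_filled = [p if p is not None else mean([x for x in points if x is not None])
                 for p in points]
print(region_filled)  # → ['City', 'Town', 'City', 'City']
print(points_filled)  # → [600.0, 700.0, 700.0, 800.0]
```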
# DATA VISUALIZATION
```
sb.distplot(train_df.age) #age is uniformly distributed
train_df["gender"].value_counts()
train_df[train_df.gender == "Unknown"]
train_df.drop(train_df.index[train_df['gender'] == 'Unknown'],axis = 0, inplace = True)
sb.countplot(train_df.gender) #male and female are equally distributed
# checking the distribution of outcomes
sb.countplot(x = 'churn_risk_score', data = train_df)
sb.countplot(train_df.gender,hue=train_df.churn_risk_score)
cat = []
con = []
for i in train_df.columns:
if (train_df[i].dtypes == "object"):
cat.append(i)
else:
con.append(i)
```
# Data Preprocessing
```
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
# Note: fitting the encoder separately on train and test can assign different
# codes to the same category; fitting on train and reusing it for test is safer.
for i in cat:
    train_df[i]=label_encoder.fit_transform(train_df[i])
for i in cat:
    test_df[i]=label_encoder.fit_transform(test_df[i])
convert = ["avg_time_spent","avg_transaction_value","points_in_wallet"]
for k in convert:
train_df[k] = train_df[k].astype(int)
for k in convert:
test_df[k] = test_df[k].astype(int)
train = train_df.drop(['points_in_wallet'],axis = 1)
test = test_df.drop(['points_in_wallet'],axis = 1)
X= train.drop(columns = ['churn_risk_score'], axis=1)
y= train[['churn_risk_score']]
from sklearn.preprocessing import StandardScaler
X_scaled=StandardScaler().fit_transform(X)
```
### Checking Variance
```
train_df.columns
# checking variance
from statsmodels.stats.outliers_influence import variance_inflation_factor
variables = train_df[['customer_id', 'age', 'gender', 'security_no', 'region_category',
'membership_category', 'joining_date', 'joined_through_referral',
'preferred_offer_types', 'medium_of_operation', 'internet_option',
'last_visit_time', 'days_since_last_login', 'avg_time_spent',
'avg_transaction_value', 'avg_frequency_login_days', 'points_in_wallet',
'used_special_discount', 'offer_application_preference',
'past_complaint', 'complaint_status', 'feedback', 'churn_risk_score']]
vif = pd.DataFrame()
vif['VIF'] = [variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])]
vif['Features'] = variables.columns
vif
```
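The VIF for feature j is 1/(1 − R²_j), where R²_j comes from regressing feature j on all the others. A NumPy sketch of that formula on synthetic data with one nearly collinear column (not the churn features):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
x3 = x1 + 0.05 * rng.normal(size=200)   # almost a copy of x1
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    # Regress column j on the remaining columns (with intercept),
    # then convert the R^2 to a variance inflation factor.
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

print([round(vif(X, j), 1) for j in range(3)])  # x1 and x3 inflate, x2 stays near 1
```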
# model building
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=0)
pd.DataFrame(X_scaled).describe()
```
#### Decision Tree Model
```
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
```
#### Using GridSearchCV for finding best parameters
```
gs=GridSearchCV(clf,
{
'max_depth':range(3,10),
'min_samples_split':range(10,100,10)
},
cv=5,
n_jobs=2,
scoring='neg_mean_squared_error',
verbose=2
)
gs.fit(X_train,y_train)
clf=gs.best_estimator_
clf.fit(X_train,y_train)
print(gs.best_params_)
```
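Conceptually, `GridSearchCV` just scores every combination in the grid via cross-validation and keeps the best. A toy pure-Python sketch of that exhaustive search, with a made-up loss standing in for a real model fit:

```python
from itertools import product

# Same grid as above; toy_loss is a hypothetical score, not a real model.
grid = {'max_depth': range(3, 10), 'min_samples_split': range(10, 100, 10)}

def toy_loss(max_depth, min_samples_split):
    return (max_depth - 6) ** 2 + (min_samples_split - 40) ** 2 / 100

best = min(product(grid['max_depth'], grid['min_samples_split']),
           key=lambda p: toy_loss(*p))
print(best)  # → (6, 40)
```

The real search does the same loop, but scores each candidate with 5-fold cross-validation, which is why it fits depth × split × cv = 7 × 9 × 5 = 315 trees here.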
#### Making Predictions
```
pred = clf.predict(X_test)
```
#### Checking Accuracy
```
score = clf.score(X_test, y_test)
score
```
0.570867740625423
```
from sklearn import tree
s = plt.figure(figsize=(20,10))
tree.plot_tree(clf, max_depth=3, filled=True, fontsize=10)
plt.title("Decision Tree for Customer Churn Data", fontsize=30)
plt.show()
```
#### Random Forest
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(min_samples_split=20)
```
##### Randomized Grid Search CV
```
gs=RandomizedSearchCV(rf,
{
'criterion':('gini', 'entropy'),
'n_estimators':np.linspace(50,200,5).astype(int)
#'min_samples_split':range(5,105,20)
},
cv=5,
n_jobs=2,
scoring='neg_mean_squared_error',
)
gs.fit(X_train,y_train)
rf=gs.best_estimator_
rf.fit(X_train,y_train)
print(gs.best_params_)
```
#### Prediction
```
pred1 = rf.predict(X_test)
```
#### Checking Accuracy
```
score1 = rf.score(X_test, y_test)
score1
rf.score(X_train,y_train)
```
0.6302964667659402
#### Gradient Boosting Classifier
```
from sklearn.ensemble import GradientBoostingClassifier
gb = GradientBoostingClassifier(n_estimators=500,max_depth=5)
gb.fit(X_train,y_train)
```
#### Prediction
```
pred2 = gb.predict(X_test)
```
#### Checking Accuracy
```
score2 = gb.score(X_test, y_test)
score2
```
0.6376066062000812
#### AdaBoostClassifier
```
from sklearn.ensemble import AdaBoostClassifier
ad = AdaBoostClassifier()
```
##### Randomized Grid Search CV
```
gs=RandomizedSearchCV(ad,
{
'learning_rate':np.logspace(-2,2,5),
'n_estimators':np.linspace(20,1020,5).astype(int),
'algorithm':('SAMME', 'SAMME.R')
},
cv=2,
n_jobs=2,
scoring='neg_mean_squared_error',
verbose=2
)
gs.fit(X_train,y_train)
ad=gs.best_estimator_
ad.fit(X_train,y_train)
print(gs.best_params_)
```
#### Prediction
```
pred3 = ad.predict(X_test)
```
#### Checking Accuracy
```
score3 = ad.score(X_test, y_test)
score3
```
GRADIENT BOOSTING IS THE BEST PREDICTOR AMONG THESE MODELS
| github_jupyter |
# Quantum Machine Learning and TTN
Let's look at the Tree Tensor Network as a model for quantum machine learning.
## What you will learn
1. TTN model
2. Optimization
## Install Blueqat
```
!pip install blueqat
```
The model we are going to build is called TTN. The quantum circuit is as follows.
<img src="../tutorial-ja/img/253_img.png" width="25%">
It has a shape that takes on tree structure.
This circuit uses arbitrary one-qubit rotation gates (a combination of $Rz$ and $Ry$ gates) and two-qubit gates ($CX$ gates).
More details are as follows.
<img src="../tutorial-ja/img/253_img_2.png" width="35%">
```
from blueqat import Circuit
import matplotlib.pyplot as plt
import numpy as np
import time
%matplotlib inline
```
Configure hyperparameters and other settings.
```
np.random.seed(45)
# Number of steps of optimization
nsteps = 2000
# Number of parameters of the quantum circuit to be optimized
nparams = 18
# Fineness of numerical differentiation
h = 0.01
# Learning rate
e = 0.01
# Initial parameter
param_init = [np.random.rand()*np.pi*2 for i in range(nparams)]
# list for containing results
arr = []
#1: train, 2: prediction
mode = 1
```
We create the tree-structure model.
Set up the input to the quantum circuit and its target label, then start learning.
This time, the input data is selected by an argument.
```
def TTN_Z(a, ran, mode=1):
# Input circuit
init = [Circuit(4).x[0,1], Circuit(4).x[2,3], Circuit(4).x[0], Circuit(4).x[1], Circuit(4).x[2], Circuit(4).x[0,2]]
# Target label
target = [1,1,-1,-1,-1,1]
# Circuit construction
u = init[ran]
u.rz(a[0])[0].ry(a[1])[0].rz(a[2])[0]
u.rz(a[3])[1].ry(a[4])[1].rz(a[5])[1]
u.rz(a[6])[2].ry(a[7])[2].rz(a[8])[2]
u.rz(a[9])[3].ry(a[10])[3].rz(a[11])[3]
u.cx[0,1].cx[2,3]
u.rz(a[12])[1].ry(a[13])[1].rz(a[14])[1]
u.rz(a[15])[3].ry(a[16])[3].rz(a[17])[3]
u.cx[1,3]
# Calculate expectation value from state vector
full = u.run()
expt = sum(np.abs(full[:8])**2)-sum(np.abs(full[8:])**2)
if(mode ==1):
# return error between label and prediction
return (expt - target[ran])**2
else:
return expt
```
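Inside `TTN_Z`, the line `expt = sum(np.abs(full[:8])**2) - sum(np.abs(full[8:])**2)` computes a $Z$ expectation from the state vector: for a 4-qubit state (16 amplitudes), the first 8 amplitudes correspond to basis states where the measured qubit is $|0\rangle$ and the last 8 to $|1\rangle$, so the difference of their total probabilities gives the expectation value. A minimal NumPy sketch with a hypothetical state vector:

```python
import numpy as np

# Hypothetical 4-qubit state vector (16 amplitudes), normalized by construction
state = np.zeros(16, dtype=complex)
state[0] = np.sqrt(0.7)   # weight 0.7 on a basis state with the measured qubit |0>
state[8] = np.sqrt(0.3)   # weight 0.3 on a basis state with the measured qubit |1>

# <Z> = P(qubit is 0) - P(qubit is 1)
expt = np.sum(np.abs(state[:8])**2) - np.sum(np.abs(state[8:])**2)
print(round(expt, 6))  # -> 0.4
```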
Stochastic gradient descent (SGD) is used for learning.
At the start of each step, the input data is randomly selected from 6 ways (0 to 5), and the gradient is calculated and the parameters are updated.
In each step, the gradient is computed and the parameters are updated using only one data point, but by repeating this process with randomly chosen inputs, the circuit eventually learns to minimize the loss function over all the data.
```
start = time.time()
param = param_init.copy()
for i in range(nsteps):
it = np.random.randint(0,6)
loss = TTN_Z(param, it, mode)
arr.append(loss)
param_new = [0 for i in range(nparams)]
for j in range(nparams):
_param = param.copy()
_param[j] += h
param_new[j] = param[j] - e*(TTN_Z(_param, it, mode) - loss)/h
param = param_new
plt.plot(arr)
plt.show()
print(time.time() - start)
```
It converged well.
Let's check it out.
```
target = [1,1,-1,-1,-1,1]
preds = []
for i in range(6):
pred = TTN_Z(param, i, mode=2)
preds.append(pred)
print("Prediction :", pred, " Target :", target[i])
```
From the above, we were able to learn a quantum circuit using the TTN model.
```
# Importing needed libraries
import datetime
import os
import pandas as pd
# Fetching the data from official site of Ministry of Health and Family Welfare | Government of India
try:
url = "https://www.mohfw.gov.in/"
dfs = pd.read_html(url)
for i in range(len(dfs)):
df = dfs[i]
if (len(df.columns) == 6):
cols_match = sum(df.columns==['S. No.', 'Name of State / UT', 'Active Cases*',
'Cured/Discharged/Migrated*', 'Deaths**', 'Total Confirmed cases*'])
if (cols_match == 6):
now = datetime.datetime.now()
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
df.to_csv("/home/caesar/covid_19.csv")
break
except:
df = pd.read_csv("/home/caesar/covid_19.csv")
df = df.drop(columns=["Unnamed: 0"])
now = datetime.datetime.fromtimestamp(os.path.getmtime("/home/caesar/covid_19.csv"))
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
df
# Preprocessing the column data to remove any special characters in between, because we cannot rely on the source completely.
for col in ['Active Cases*', 'Cured/Discharged/Migrated*', 'Deaths**', 'Total Confirmed cases*' ]:
if df[col].dtypes=='O':
        df[col] = df[col].str.replace(r'\W', '', regex=True)
# Fetching out the values from the DataFrame
m = 35
states = df["Name of State / UT"][:m]
active_cases = df["Active Cases*"][:m].astype(int)
confirmed_cases = df["Total Confirmed cases*"][:m].astype(int)
casualties = df["Deaths**"][:m].astype(int)
# Plotting the bar graph!
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
max_cases = max(active_cases)
total_active_cases = int(df["Active Cases*"][36])
total_death_casualties = df["Deaths**"][36]
total_cured_cases = df["Cured/Discharged/Migrated*"][36]
total_confirmed_cases = df["Total Confirmed cases*"][36]
barWidth = 0.4
r1 = np.arange(len(active_cases))
r2 = [x + barWidth for x in r1]
plt.figure(figsize=(16, 8))
plt.bar(r1, active_cases, color="royalblue", width=barWidth, edgecolor="white", label="Active Cases")
plt.bar(r2, casualties, color="orchid", width=barWidth, edgecolor="white", label="Death Casualties")
plt.xlabel("States / UT", fontweight="bold", fontsize=16)
plt.xticks([r + barWidth - 0.19 for r in range(len(active_cases))], states, rotation="78", fontsize=14)
plt.ylabel("Number of COVID-19 cases", fontweight="bold", fontsize=16)
plt.legend(fontsize=16, loc="upper right")
plt.text(-1, max_cases, "Total active cases: " + str(total_active_cases), fontsize=16)
plt.text(-1, max_cases - 2000, "Total death casualties: " + str(total_death_casualties), fontsize=16)
plt.text(-1, max_cases - 4000, "Total cured cases: " + str(total_cured_cases), fontsize=16)
plt.title("Data updated at: " + dt_string, loc="center")
plt.show()
# A more visualistic comparison!
mortality_rate = str(round(int(total_death_casualties) / int(total_confirmed_cases), 2))  # deaths / confirmed cases
sizes=[total_confirmed_cases, total_active_cases]
names=["Total Confirmed Cases", "Total Active Cases"]
plt.pie(sizes, explode=(0, 0.1), labels=names, autopct='%1.1f%%', shadow=True, startangle=90)
plt.text(-1, 1.2, "Mortality Rate: " + mortality_rate, fontsize=16)
plt.show()
# In case you need a more fancy donut-like graph!
from palettable.colorbrewer.qualitative import Pastel1_7
sizes=[total_confirmed_cases, total_active_cases]
names=["Total Confirmed Cases", "Total Active Cases"]
my_circle=plt.Circle((0,0), 0.7, color='white')
plt.pie(sizes, labels=names, colors=Pastel1_7.hex_colors, autopct='%1.1f%%', explode=(0, 0.1))
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.text(-1, 1.2, "Mortality Rate: " + mortality_rate, fontsize=16)
plt.show()
```
# Artificial Intelligence Nanodegree
## Machine Translation Project
In this notebook, sections that end with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!
## Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
- **Preprocess** - You'll convert text to sequences of integers.
- **Models** - You'll create models that accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
- **Prediction** - Run the model on English text.
```
%load_ext autoreload
%aimport helper, tests
%autoreload 1
import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
```
### Verify access to the GPU
The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU".
- If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace.
- If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps.
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
## Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from [WMT](http://www.statmt.org/). However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.
### Load Data
The data is located in `data/small_vocab_en` and `data/small_vocab_fr`. The `small_vocab_en` file contains English sentences with their French translations in the `small_vocab_fr` file. Load the English and French data from these files by running the cell below.
```
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded')
```
### Files
Each line in `small_vocab_en` contains an English sentence with the respective translation in each line of `small_vocab_fr`. View the first two lines from each file.
```
for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i]))
```
From looking at the sentences, you can see they have been preprocessed already. The punctuation has been delimited using spaces, and all the text has been converted to lowercase. This should save you some time, but the text requires more preprocessing.
### Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
```
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
```
For comparison, _Alice's Adventures in Wonderland_ contains 2,766 unique words out of a total of 15,500 words.
## Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:
1. Tokenize the words into ids
2. Add padding to make all the sequences the same length.
Time to start preprocessing the data...
### Tokenize (IMPLEMENTATION)
For a neural network to predict on text data, it first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be number(s).
We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character level models that generate text predictions for each character. A word level model uses word ids that generate text predictions for each word. Word level models tend to learn better, since they are lower in complexity, so we'll use those.
Turn each sentence into a sequence of words ids using Keras's [`Tokenizer`](https://keras.io/preprocessing/text/#tokenizer) function. Use this function to tokenize `english_sentences` and `french_sentences` in the cell below.
Running the cell will run `tokenize` on sample data and show output for debugging.
```
def tokenize(x):
"""
Tokenize x
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
"""
# TODO: Implement
#x_tk = Tokenizer(char_level=True)
x_tk = Tokenizer(num_words=None, char_level=False)
x_tk.fit_on_texts(x)
return x_tk.texts_to_sequences(x), x_tk
#return None, None
tests.test_tokenize(tokenize)
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent))
```
### Padding (IMPLEMENTATION)
When batching the sequences of word ids together, each sequence needs to be the same length. Since sentences vary in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the **end** of each sequence using Keras's [`pad_sequences`](https://keras.io/preprocessing/sequence/#pad_sequences) function.
```
def pad(x, length=None):
"""
Pad x
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, use length of longest sequence in x.
:return: Padded numpy array of sequences
"""
# TODO: Implement
#return None
if length is None:
length = max([len(sentence) for sentence in x])
#return pad_sequences(x, maxlen=length, padding='post')
return pad_sequences(x, maxlen=length, padding='post', truncating='post')
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(np.array(token_sent)))
print(' Output: {}'.format(pad_sent))
```
### Preprocess Pipeline
Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the `preprocess` function.
```
def preprocess(x, y):
"""
Preprocess x and y
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
"""
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x)
preprocess_y = pad(preprocess_y)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)
print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size)
```
## Models
In this section, you will experiment with various neural network architectures.
You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is an RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN
After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.
### Ids Back to Text
The neural network will be translating the input to word ids, which isn't the final form we want. We want the French translation. The function `logits_to_text` will bridge the gap between the logits from the neural network and the French translation. You'll be using this function to better understand the output of the neural network.
```
def logits_to_text(logits, tokenizer):
"""
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
"""
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>'
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.')
```
### Model 1: RNN (IMPLEMENTATION)

A basic RNN model is a good baseline for sequence data. In this model, you'll build an RNN that translates English to French.
```
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Build the layers
learning_rate = 0.01
input_seq = Input(input_shape[1:])
rnn = GRU(output_sequence_length, return_sequences=True)(input_seq)
#logits = TimeDistributed(Dense(french_vocab_size))(rnn)
logits = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(rnn)
#model = None
#model = Model(input_seq, Activation('softmax')(logits))
model = Model(inputs=input_seq, outputs=logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
max_french_sequence_length,
english_vocab_size,
french_vocab_size)
simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 2: Embedding (IMPLEMENTATION)

You've turned the words into ids, but there's a better representation of a word, called a word embedding. An embedding is a vector representation of a word that is close to similar words in n-dimensional space, where n is the size of the embedding vectors.
In this model, you'll create an RNN model using embedding.
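Conceptually, an embedding layer is just a trainable lookup table: each word id selects a row of a matrix. A minimal NumPy sketch with a hypothetical 5-word vocabulary and 3-dimensional embeddings:

```python
import numpy as np

# Hypothetical embedding matrix: one row per word id, 3 dimensions per word
embedding = np.array([
    [0.0, 0.0, 0.0],   # id 0: padding
    [0.1, 0.9, 0.2],   # id 1
    [0.8, 0.1, 0.4],   # id 2
    [0.2, 0.7, 0.3],   # id 3
    [0.5, 0.5, 0.5],   # id 4
])

sentence_ids = np.array([1, 3, 2, 0])   # a padded sentence of word ids
vectors = embedding[sentence_ids]       # lookup by row index
print(vectors.shape)  # -> (4, 3)
```

In Keras, the `Embedding` layer does this lookup and learns the matrix entries during training.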
```
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
learning_rate = 0.01
input_seq = Input(input_shape[1:])
embedding_layer = Embedding(input_dim=english_vocab_size, output_dim=output_sequence_length, mask_zero=False)(input_seq)
rnn = GRU(output_sequence_length, return_sequences=True)(embedding_layer)
logits = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(rnn)
model = Model(inputs=input_seq, outputs=logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
#return None
tests.test_embed_model(embed_model)
# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
# TODO: Train the neural network
embed_rnn_model = embed_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
embed_rnn_model.summary()
embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 3: Bidirectional RNNs (IMPLEMENTATION)

One restriction of an RNN is that it can't see the future input, only the past. This is where bidirectional recurrent neural networks come in: by processing the sequence in both directions, each output can draw on both past and future context.
```
from keras.layers import Bidirectional
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
#Config Hyperparameters
learning_rate = 0.01
#Create Model
inputs = Input(shape=input_shape[1:])
hidden_layer = Bidirectional(GRU(output_sequence_length, return_sequences=True))(inputs)
outputs = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(hidden_layer)
#Create Model from parameters defined above
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_bd_model(bd_model)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# TODO: Train and Print prediction(s)
# Train the neural network
bd_rnn_model = bd_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
bd_rnn_model.summary()
bd_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(bd_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 4: Encoder-Decoder (OPTIONAL)
Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.
Create an encoder-decoder model in the cell below.
```
from keras.layers import RepeatVector
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# OPTIONAL: Implement
#Config Hyperparameters
learning_rate = 0.01
latent_dim = 128
#Config Encoder
encoder_inputs = Input(shape=input_shape[1:])
encoder_gru = GRU(output_sequence_length)(encoder_inputs)
encoder_outputs = Dense(latent_dim, activation='relu')(encoder_gru)
#Config Decoder
decoder_inputs = RepeatVector(output_sequence_length)(encoder_outputs)
decoder_gru = GRU(latent_dim, return_sequences=True)(decoder_inputs)
output_layer = TimeDistributed(Dense(french_vocab_size, activation='softmax'))
outputs = output_layer(decoder_gru)
#Create Model from parameters defined above
model = Model(inputs=encoder_inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_encdec_model(encdec_model)
# OPTIONAL: Train and Print prediction(s)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
encdec_rnn_model = encdec_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
encdec_rnn_model.summary()
encdec_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(encdec_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 5: Custom (IMPLEMENTATION)
Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional RNN into one model.
```
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
#Config Hyperparameters
learning_rate = 0.01
latent_dim = 128
#Config Model
inputs = Input(shape=input_shape[1:])
embedding_layer = Embedding(input_dim=english_vocab_size, output_dim=output_sequence_length, mask_zero=False)(inputs)
#bd_layer = Bidirectional(GRU(output_sequence_length))(embedding_layer)
bd_layer = Bidirectional(GRU(256))(embedding_layer)
encoding_layer = Dense(latent_dim, activation='relu')(bd_layer)
decoding_layer = RepeatVector(output_sequence_length)(encoding_layer)
#output_layer = Bidirectional(GRU(latent_dim, return_sequences=True))(decoding_layer)
output_layer = Bidirectional(GRU(256, return_sequences=True))(decoding_layer)
outputs = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(output_layer)
#Create Model from parameters defined above
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_model_final(model_final)
print('Final Model Loaded')
# TODO: Train the final model
```
## Prediction (IMPLEMENTATION)
```
def final_predictions(x, y, x_tk, y_tk):
"""
Gets predictions using the final model
:param x: Preprocessed English data
:param y: Preprocessed French data
:param x_tk: English tokenizer
:param y_tk: French tokenizer
"""
# TODO: Train neural network using model_final
#model = None
#model = model_final(x.shape, y.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)
model = model_final(x.shape, y.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)
model.summary()
#model.fit(x, y, batch_size=1024, epochs=10, validation_split=0.2)
model.fit(x, y, batch_size=512, epochs=10, validation_split=0.2)
## DON'T EDIT ANYTHING BELOW THIS LINE
y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
y_id_to_word[0] = '<PAD>'
sentence = 'he saw a old yellow truck'
sentence = [x_tk.word_index[word] for word in sentence.split()]
sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
sentences = np.array([sentence[0], x[0]])
predictions = model.predict(sentences, len(sentences))
print('Sample 1:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
print('Il a vu un vieux camion jaune')
print('Sample 2:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))
final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
```
## Submission
When you're ready to submit, complete the following steps:
1. Review the [rubric](https://review.udacity.com/#!/rubrics/1004/view) to ensure your submission meets all requirements to pass
2. Generate an HTML version of this notebook
- Run the next cell to attempt automatic generation (this is the recommended method in Workspaces)
- Navigate to **FILE -> Download as -> HTML (.html)**
- Manually generate a copy using `nbconvert` from your shell terminal
```
$ pip install nbconvert
$ python -m nbconvert machine_translation.ipynb
```
3. Submit the project
- If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right)
- Otherwise, add the following files into a zip archive and submit them
- `helper.py`
- `machine_translation.ipynb`
- `machine_translation.html`
- You can export the notebook by navigating to **File -> Download as -> HTML (.html)**.
### Generate the html
**Save your notebook before running the next cell to generate the HTML output.** Then submit your project.
```
# Save before you run this cell!
!!jupyter nbconvert *.ipynb
```
## Optional Enhancements
This project focuses on learning various network architectures for machine translation, but we don't evaluate the models according to best practices by splitting the data into separate test & training sets -- so the model accuracy is overstated. Use the [`sklearn.model_selection.train_test_split()`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function to create separate training & test datasets, then retrain each of the models using only the training set and evaluate the prediction accuracy using the hold out test set. Does the "best" model change?
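A minimal sketch of such a split, using stand-in arrays in place of the notebook's preprocessed sentence arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-ins for preproc_english_sentences / preproc_french_sentences
x = np.arange(100).reshape(50, 2)
y = np.arange(50)

# Hold out 20% of the sentence pairs for evaluation only
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42)
print(x_train.shape, x_test.shape)  # -> (40, 2) (10, 2)
```

Each model would then be fit on `x_train`/`y_train` and its accuracy reported on the held-out `x_test`/`y_test`.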
```
%matplotlib inline
import numpy as np
import pandas as pd
import math
from scipy import stats
import pickle
from causality.analysis.dataframe import CausalDataFrame
from sklearn.linear_model import LinearRegression
import datetime
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['font.sans-serif'] = "Gotham"
matplotlib.rcParams['font.family'] = "sans-serif"
import plotly
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
```
Open the data from past notebooks and restrict it to the years common across the data structures (>1999).
```
with open('VariableData/money_data.pickle', 'rb') as f:
income_data, housing_data, rent_data = pickle.load(f)
with open('VariableData/demographic_data.pickle', 'rb') as f:
demographic_data = pickle.load(f)
with open('VariableData/endowment.pickle', 'rb') as f:
endowment = pickle.load(f)
with open('VariableData/expander.pickle', 'rb') as f:
expander = pickle.load(f)
endowment = endowment[endowment['FY'] > 1997].reset_index()
endowment.drop('index', axis=1, inplace=True)
demographic_data = demographic_data[demographic_data['year'] > 1999].reset_index()
demographic_data.drop('index', axis=1, inplace=True)
income_data = income_data[income_data['year'] > 1999].reset_index()
income_data.drop('index', axis=1, inplace=True)
housing_data = housing_data[housing_data['year'] > 1999].reset_index()
housing_data.drop('index', axis=1, inplace=True)
rent_data = rent_data[rent_data['year'] > 1999].reset_index()
rent_data.drop('index', axis=1, inplace=True)
```
Read in the data on Harvard owned land and Cambridge's property records. Restrict the Harvard data to Cambridge, MA.
```
harvard_land = pd.read_excel("Spreadsheets/2018_building_reference_list.xlsx", header=3)
harvard_land = harvard_land[harvard_land['City'] == 'Cambridge']
cambridge_property = pd.read_excel("Spreadsheets/cambridge_properties.xlsx")
```
Restrict the Cambridge data to Harvard properties, and only use relevant columns.
```
cambridge_property = cambridge_property[cambridge_property['Owner_Name'].isin(['PRESIDENT & FELLOWS OF HARVARD COLLEGE', 'PRESIDENT & FELLOW OF HARVARD COLLEGE'])]
cambridge_property = cambridge_property[['Address', 'PropertyClass', 'LandArea', 'BuildingValue', 'LandValue', 'AssessedValue', 'SalePrice', 'SaleDate', 'Owner_Name']]
```
Fix the time data.
```
cambridge_property['SaleDate'] = pd.to_datetime(cambridge_property['SaleDate'], infer_datetime_format=True)
clean_property = cambridge_property.drop_duplicates(subset=['Address'])
clean_property.head()
type(clean_property['SaleDate'])
```
Only look at properties purchased after 2000.
```
recent_property = clean_property[clean_property['SaleDate'] > datetime.date(2000, 1, 1)]
property_numbers = recent_property[['LandArea', 'AssessedValue', 'SalePrice']]
num_recent = recent_property['Address'].count()
sum_properties = property_numbers.sum()
sum_properties
full_property_numbers = clean_property[['LandArea', 'AssessedValue', 'SalePrice']]
sum_full = full_property_numbers.sum()
delta_property = sum_properties / sum_full
delta_property
```
What can be gathered from above?
Since the year 2000, Harvard has increased its presence in Cambridge by about 3%, corresponding to about 2% of its overall assessed value, an increase of 281,219 square feet and \$115,226,500. Although the assessed value increase is so high, Harvard only paid \$57,548,900 for the property at their times of purchase.
To make some adjustments for inflation:
Note that the inflation rate since 2000 is ~37.8% (https://data.bls.gov/timeseries/CUUR0000SA0L1E?output_view=pct_12mths).
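To make the adjustment concrete, here is a minimal sketch of the scaling applied in the cells below (the 37.8% figure is the BLS number cited above; the function name is mine):

```python
# Scale a historical sale price into today's dollars using cumulative inflation.
def adjust_for_inflation(sale_price, cumulative_inflation_pct):
    return sale_price * (1 + cumulative_inflation_pct / 100)

print(adjust_for_inflation(100_000, 37.8))
```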
```
inflation_data = pd.read_excel("Spreadsheets/inflation.xlsx", header=11)
inflation_data = inflation_data[['Year', 'Jan']]
inflation_data['Year'] = pd.to_datetime(inflation_data['Year'], format='%Y')
inflation_data['CumulativeInflation'] = inflation_data['Jan'].cumsum()
inflation_data.rename(columns={'Year' : 'SaleDate'}, inplace=True)
recent_property['SaleDate'] = recent_property['SaleDate'].dt.year
inflation_data['SaleDate'] = inflation_data['SaleDate'].dt.year
recent_property = pd.merge(recent_property, inflation_data, how="left", on=['SaleDate'])
recent_property = recent_property.drop('Jan', axis=1)
recent_property['TodaySale'] = (1 + (recent_property['CumulativeInflation'] / 100)) * recent_property['SalePrice']
today_sale_sum = recent_property['TodaySale'].sum()
today_sale_sum
sum_properties['AssessedValue'] - today_sale_sum
```
Hence, adjusted for inflation, the sale price of the property Harvard has acquired since 2000 is \$65,929,240.
The difference between this value and the assessed value of the property (in 2018) is \$49,297,260, showing that Harvard's property has appreciated in value well beyond (roughly twice) what inflation would account for, a clearly advantageous dynamic for Harvard.
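As a sanity check, the difference quoted above can be re-derived directly from the two totals (values copied from this analysis):

```python
# Figures quoted above, in dollars.
assessed_increase = 115_226_500   # increase in assessed value since 2000
adjusted_sale_sum = 65_929_240    # inflation-adjusted sum of sale prices
print(assessed_increase - adjusted_sale_sum)  # -> 49297260
```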
```
sorted_df = recent_property.sort_values(by=['SaleDate'])
sorted_df = sorted_df.reset_index().drop('index', axis=1)
sorted_df['CumLand'] = sorted_df['LandArea'].cumsum()
sorted_df['CumValue'] = sorted_df['AssessedValue'].cumsum()
sorted_df
```
Graph the results.
```
def fitter(x, y, regr_x):
"""
Use linear regression to make a best fit line for a set of data.
Args:
x (numpy array): The independent variable.
y (numpy array): The dependent variable.
regr_x (numpy array): The array used to extrapolate the regression.
"""
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
return (slope * regr_x + intercept)
years = sorted_df['SaleDate'].to_numpy()
cum_land = sorted_df['CumLand'].to_numpy()
cum_value = sorted_df['CumValue'].to_numpy()
regr = np.arange(2000, 2012)
line0 = fitter(years, cum_land, regr)
trace0 = go.Scatter(
x = years,
y = cum_land,
mode = 'markers',
name='Harvard Land\n In Cambridge',
marker=go.Marker(color='#601014')
)
fit0 = go.Scatter(
x = regr,
y = line0,
mode='lines',
marker=go.Marker(color='#D2232A'),
name='Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = "The Change In Harvard's Land in Cambridge Since 2000",
font = dict(family='Gotham', size=18),
yaxis=dict(
title='Land Accumulated Since 2000 (Sq. Feet)'
),
xaxis=dict(
title='Year')
)
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename="land_changes")
graph2_df = pd.DataFrame(list(zip(regr, line0)))
graph2_df.to_csv('graph2.csv')
def grapher(x, y, city, title, ytitle, xtitle, filename):
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
fit = slope * x + intercept
trace0 = go.Scatter(
x = x,
y = y,
mode = 'markers',
name=city,
marker=go.Marker(color='#D2232A')
)
fit0 = go.Scatter(
x = x,
y = fit,
mode='lines',
marker=go.Marker(color='#AC1D23'),
name='Linear Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = title,
font = dict(family='Gotham', size=12),
yaxis=dict(
title=ytitle
),
xaxis=dict(
title=xtitle)
)
fig = go.Figure(data=data, layout=layout)
return iplot(fig, filename=filename)
len(line0)
```
Restrict the demographic, rent, and housing data to years before 2011 so the series line up with the land data.
```
demographic_data = demographic_data[demographic_data['year'] < 2011]
rent_data = rent_data[rent_data['year'] < 2011]
housing_data = housing_data[housing_data['year'] < 2011]
x = cum_land
y = pd.to_numeric(demographic_data['c_black']).to_numpy()
z1 = pd.to_numeric(rent_data['cambridge']).to_numpy()
z2 = pd.to_numeric(housing_data['cambridge']).to_numpy()
endow_black = grapher(x, y, "Cambridge", "The Correlation Between Harvard Land Change and Black Population", "Black Population of Cambridge", "Land Change (Sq. Feet)", "land_black")
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
causal_land_black = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', color="#D2232A")
fig = causal_land_black.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Black Population", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/black_land.svg', format='svg', dpi=2400, bbox_inches='tight')
z2
graph9_df = pd.DataFrame(X)
graph9_df.to_csv('graph9.csv')
y = pd.to_numeric(rent_data['cambridge']).to_numpy()
z1 = pd.to_numeric(housing_data['cambridge']).to_numpy()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1})
causal_land_rent = X.zplot(x='x', y='y', z=['z1'], z_types={'z1': 'c'}, kind='line', color="#D2232A")
fig = causal_land_rent.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Rent", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/rent_land.svg', format='svg', dpi=1200, bbox_inches='tight')
```
| github_jupyter |
```
from __future__ import division
import git_access,api_access,git2repo
import json
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import networkx as nx
import re
import git2data
import social_interaction
access_token = '--'
repo_owner = 'jankotek'
source_type = 'github_repo'
git_url = 'git://github.com/jankotek/jankotek-mapdb.git'
api_base_url = 'http://api.github.com'
repo_name = 'jankotek-mapdb'
url_type = 'issues'
url_details = 'comments'
access_project = api_access.git_api_access(access_token,repo_owner,source_type,git_url,api_base_url,repo_name)
issue_comment = access_project.get_comments(url_type = url_type,url_details = url_details)
issue = access_project.get_issues(url_type = url_type,url_details = '')
events = access_project.get_events(url_type = url_type,url_details = 'events')
git_repo = git2repo.git2repo(git_url, repo_name)
repo = git_repo.clone_repo()
obj = repo.get('95693814f4c733ece3563b51c317c89203f1ff59')
print(obj)
commits = git_repo.get_commits()
committed_files = git_repo.get_committed_files()
issue_df = pd.DataFrame(issue, columns = ['Issue_number','user_logon','author_type','Desc','title'])
commit_df = pd.DataFrame(commits, columns=['commit_number', 'message', 'parent'])
events_df = pd.DataFrame(events, columns=['event_type', 'issue_number', 'commit_number'])
import re
issue_commit_dict = {}
for i in range(commit_df.shape[0]):
_commit_number = commit_df.loc[i,'commit_number']
_commit_message = commit_df.loc[i,'message']
_commit_parent = commit_df.loc[i,'parent']
res = re.search("#[0-9]+$", _commit_message)
if res is not None:
_issue_id = res.group(0)[1:]
issue_commit_dict[_commit_number] = _issue_id
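# A quick, self-contained sanity check (with hypothetical commit messages) of
# the "#<issue-number>" suffix pattern used above:
import re
print(re.search("#[0-9]+$", "fix NPE, closes #42").group(0)[1:])  # -> 42
print(re.search("#[0-9]+$", "refactor parser"))  # -> None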
links = events_df.dropna()
links.reset_index(inplace=True)
for i in range(links.shape[0]):
if links.loc[i,'commit_number'] in issue_commit_dict.keys():
continue
else:
issue_commit_dict[links.loc[i,'commit_number']] = links.loc[i,'issue_number']
issue_commit_temp = []
commit_df['issues'] = pd.Series([None]*commit_df.shape[0])
issue_df['commits'] = pd.Series([None]*issue_df.shape[0])
for i in range(commit_df.shape[0]):
_commit_number = commit_df.loc[i,'commit_number']
_commit_message = commit_df.loc[i,'message']
_commit_parent = commit_df.loc[i,'parent']
res = re.search("#[0-9]+$", _commit_message)
if res is not None:
_issue_id = res.group(0)[1:]
issue_commit_temp.append([_commit_number,np.int64(_issue_id)])
issue_commit_list_1 = np.array(issue_commit_temp)
links = events_df.dropna()
links.reset_index(inplace=True)
issue_commit_temp = []
for i in range(links.shape[0]):
if links.loc[i,'commit_number'] in issue_commit_list_1[:,0]:
continue
else:
issue_commit_temp.append([links.loc[i,'commit_number'],links.loc[i,'issue_number']])
issue_commit_list_2 = np.array(issue_commit_temp)
issue_commit_list = np.append(issue_commit_list_1,issue_commit_list_2, axis = 0)
df = pd.DataFrame(issue_commit_list, columns = ['commit_id','issues']).drop_duplicates()
df = df.drop_duplicates()
df_unique_issues = df.issues.unique()
for i in df_unique_issues:
i = np.int64(i)
commits = df[df['issues'] == i]['commit_id']
x = issue_df['Issue_number'] == i
j = x[x == True].index.values
if len(j) != 1:
continue
issue_df.at[j[0],'commits'] = commits.values
df_unique_commits = df.commit_id.unique()
for i in df_unique_commits:
issues = df[df['commit_id'] == i]['issues']
x = commit_df['commit_number'] == i
j = x[x == True].index.values
if len(j) != 1:
continue
commit_df.at[j[0],'issues'] = issues.values
commit_df
git_data = git2data.git2data(access_token,repo_owner,source_type,git_url,api_base_url,repo_name)
git_data.get_api_data()
git_data.get_commit_data()
committed_files_data = git_data.get_committed_files()
committed_files_df = pd.DataFrame(committed_files_data, columns = ['commit_id','file_id','file_mode','file_path'])
issue_data,commit_data = git_data.create_link()
issue_data.to_pickle('rails_issue.pkl')
commit_data.to_pickle('rails_commit.pkl')
committed_files_df.to_pickle('rails_committed_file.pkl')
type(committed_files_df.loc[1,'file_id'])
git_data.create_data()
si = social_interaction.create_social_inteaction_graph('rspec-rails')
x = si.get_user_node_degree()
x
import magician_package
x = magician_package.learner('pitsE',1,'C:\\Users\\suvod\\AI4SE\\magician_package\\magician_package\\data\\preprocessed\\')
import magician_package
data_path = '/home/suvo/AI4SE/pypi/magician_package/magician_package/data/preprocessed/'
data_file = 'pitsD'
result = magician_package.learner(data_file,'1',data_path)
```
| github_jupyter |
# `aiterutils` tutorial
A functional programming toolkit for manipulation of asynchronous iterators in python >3.5
It has two types of operations:
1. Iterator functions
* `au.map(fn, aiter): aiter`
* `au.each(fn, aiter): coroutine`
* `au.filter(fn, aiter): aiter`
* `au.merge([aiter...]): aiter`
* `au.bifurcate(fn, aiter): (aiter, aiter)`
* `au.branch([fn...], aiter): (aiter...)`
2. Dynamic streams
* `au.Stream(aiter)`
Dynamic streams can have their underlying object methods invoked implicitly:
```
capitals = au.Stream(['a','b','c']).upper()
```
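As an aside, the iterator functions above can be understood as thin async-generator wrappers. Here is a minimal, library-free sketch of the behaviour of `au.map` (an illustration, not aiterutils' actual implementation; it uses `asyncio.run`, which needs Python 3.7+):

```python
import asyncio

async def amap(fn, aiter):
    # apply fn to each item of an async iterator, yielding results lazily
    async for item in aiter:
        yield fn(item)

async def numbers():
    for i in range(3):
        yield i

async def main():
    return [x async for x in amap(lambda v: v * 2, numbers())]

print(asyncio.run(main()))  # -> [0, 2, 4]
```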
```
# if executing this notebook, downgrade tornado
# as tornado does not play nice with asyncio
# ! pip install tornado==4.5.3
```
## 1. Setup
```
import functools as ft
import asyncio
import aiterutils as au
async def symbol_stream(symbols, interval):
for symbol in symbols:
yield symbol
await asyncio.sleep(interval)
smiley = lambda: symbol_stream(["🙂"]*5, 1)
sunny = lambda: symbol_stream(["☀️"]*5, 0.5)
clown = lambda: symbol_stream(["🤡"]*5, 0.2)
team = lambda: au.merge([smiley(), sunny(), clown()])
single_line_print = ft.partial(print, end='')
loop = asyncio.get_event_loop()
```
## 2. Iterator functions
### `au.map(fn, aiter): aiter`
```
stream = au.map(lambda s: s + '!', team())
app = au.println(stream, end='')
loop.run_until_complete(app)
```
### `au.each(fn, aiter): coroutine`
```
import functools as ft
app = au.each(single_line_print, team())
loop.run_until_complete(app)
```
### `au.filter(fn, aiter): aiter`
```
stream = au.filter(lambda s: s == '☀️', team())
app = au.println(stream, end="")
loop.run_until_complete(app)
```
### `au.merge([aiter...]): aiter`
```
stream = au.merge([smiley(), sunny()])
app = au.println(stream, end="")
loop.run_until_complete(app)
```
### `au.bifurcate(fn, aiter): (aiter, aiter)`
```
smile_stream, other_stream = au.bifurcate(lambda s: s == '🙂', team())
loop.run_until_complete(au.println(smile_stream, end=""))
loop.run_until_complete(au.println(other_stream, end=""))
```
### `au.branch([fn...], aiter): (aiter...)`
```
filters = [
lambda s: s == '🙂',
lambda s: s == '☀️',
lambda s: s == '🤡'
]
smile_stream, sun_stream, clown_stream = au.branch(filters, team())
loop.run_until_complete(au.println(smile_stream, end=''))
loop.run_until_complete(au.println(sun_stream, end=''))
loop.run_until_complete(au.println(clown_stream, end=''))
```
## 3. Dynamic Streams
### 3.1 map and each methods
```
# dynamic streams have regular map and each methods
app = (au.Stream(team())
.map(lambda s: s+'!')
.each(single_line_print))
loop.run_until_complete(app)
```
### 3.2 operator overloading
```
# dynamic streams can have underlying object methods called implicitly
stream = au.Stream(symbol_stream([{"key-1" : 1, "key-2" : 2}]*10, 0.5))
value2 = stream['key-2'] + 3 / 5
app = au.println(value2, end='')
loop.run_until_complete(app)
```
### 3.3 dynamic method invocation
```
stream = au.Stream(symbol_stream([{"key-1" : 1, "key-2" : 2}]*5, 0.5))
keys = stream.keys()
key = keys.map(lambda key: list(key))
key = key[-1]
key = key.upper().encode('utf-8')
app = au.println(key, end='')
loop.run_until_complete(app)
```
| github_jupyter |
```
# Solve a linear system using 3 different approaches: Gaussian elimination, inverse, and PLU factorization, and measure the times.
import numpy as np
from scipy.linalg import solve_triangular
from numpy.linalg import inv
from scipy.linalg import lu
import time
def gaussElim(Matrix):
    # Forward-eliminate the augmented matrix [A | B] into upper-triangular
    # form, then back-substitute with solve_triangular.
    num_rows, num_cols = Matrix.shape
    for i in range(num_rows):
        Matrix[i] = Matrix[i]/Matrix[i,i]
        for j in range(i+1, num_rows):
            Matrix[j] = Matrix[j] - Matrix[j,i]*Matrix[i]
    A = Matrix[:, :num_rows]
    B = Matrix[:, num_rows:]
    X1 = solve_triangular(A, B, unit_diagonal=True)
    return X1
def pluTrans(upper,lower,perm,b):
RHS1 = perm.T.dot(b)
Y = solve_triangular(lower, RHS1, lower=True)
X = solve_triangular(upper, Y)
return(X)
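# Self-contained sanity check of the PLU solve path above (same steps as
# pluTrans), on a tiny 2x2 system:
import numpy as np
from scipy.linalg import lu, solve_triangular
_A_demo = np.array([[4., 3.], [6., 3.]])
_b_demo = np.array([[10.], [12.]])
_perm, _low, _upp = lu(_A_demo)
_Y_demo = solve_triangular(_low, _perm.T.dot(_b_demo), lower=True)
_x_demo = solve_triangular(_upp, _Y_demo)
print(np.allclose(_A_demo.dot(_x_demo), _b_demo))  # -> True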
A_ORIG = np.array([[20, 37, 38, 23, 19, 31, 35, 29, 21, 30],
[39, 13, 12, 29, 12, 16, 39, 36, 18, 18],
[38, 42, 12, 39, 17, 22, 43, 38, 29, 24],
[26, 23, 17, 42, 23, 18, 22, 35, 38, 24],
[24, 21, 42, 19, 32, 12, 26, 24, 39, 22],
[36, 18, 41, 14, 26, 19, 44, 22, 20, 27],
[26, 36, 21, 42, 42, 33, 22, 34, 13, 21],
[15, 25, 28, 22, 44, 44, 25, 18, 15, 35],
[39, 43, 19, 27, 14, 34, 16, 25, 37, 13],
[25, 35, 31, 18, 43, 20, 37, 14, 26, 14]])
b1 = np.array([[447],
[310],
[313],
[486],
[396],
[317],
[320],
[342],
[445],
[487]])
b2 = np.array([[223],
[498],
[416],
[370],
[260],
[295],
[417],
[290],
[341],
[220]])
b3 = np.array([[443],
[350],
[763],
[556],
[641],
[347],
[749],
[563],
[735],
[248]])
# 1- Convert A to an Upper triangular matrix & then solve for X using solve triangular
tic1 = time.perf_counter()
augMatrix1 = np.column_stack((A_ORIG,b1)).astype(float)
X1 = gaussElim(augMatrix1)
augMatrix2 = np.column_stack((A_ORIG,b2)).astype(float)
X2 = gaussElim(augMatrix2)
augMatrix3 = np.column_stack((A_ORIG,b3)).astype(float)
X3 = gaussElim(augMatrix3)
toc1 = time.perf_counter()
X = np.column_stack((X1,X2,X3))
print(X)
print("\nGaussian Elimination took %s seconds\n" % (toc1-tic1))
# 2- get inv of A & then find B
tic1 = time.perf_counter()
Ainv = inv(A_ORIG)
X1 = Ainv.dot(b1)
X2 = Ainv.dot(b2)
X3 = Ainv.dot(b3)
toc1 = time.perf_counter()
X = np.column_stack((X1,X2,X3))
print(X)
print("\nInverse method took %s seconds\n" % (toc1-tic1))
#3 - A = LU factorization
tic1 = time.perf_counter()
plu = lu(A_ORIG, permute_l=False, overwrite_a=False, check_finite=True)
perm = plu[0]
lower =plu[1]
upper = plu[2]
X1 = pluTrans(upper,lower,perm,b1)
X2 = pluTrans(upper,lower,perm,b2)
X3 = pluTrans(upper,lower,perm,b3)
toc1 = time.perf_counter()
X = np.column_stack((X1,X2,X3))
print(X)
print("\nPLU factorization took %s seconds\n" % (toc1-tic1))
#RREF form
import numpy as np
import sympy as sp
from sympy import init_printing
init_printing()
M = np.rint(np.random.randn(20,30)*100)
M1 =sp.Matrix(M)
K = M1.rref()
K
# A = QR factorization
import numpy as np
from scipy import linalg
M = np.rint(np.random.randn(20,20)*100)
q,r = linalg.qr(M)
print(q)
print(r)
col1 = q[:, 0:1]
k = np.square(col1)
print(sum(k))  # columns of Q are orthonormal, so this should sum to ~1
# diagonalize a matrix & get power
import numpy as np
from scipy import linalg as LA
from numpy.linalg import matrix_power
import sympy as sp
from sympy import init_printing
init_printing()
M = np.rint(np.random.randn(50,50))
M = M.dot(M.T)
eig = LA.eigvals(M)
diagMatrix = np.diag(eig)
diagMat = sp.Matrix(diagMatrix)
powerMatrix = matrix_power(diagMatrix,9)
powermat = sp.Matrix(powerMatrix)
powermat
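# For a diagonal matrix, matrix_power is simply an elementwise power of the
# diagonal entries (self-contained illustration):
import numpy as np
from numpy.linalg import matrix_power
_d_demo = np.diag([2.0, 3.0])
print(np.allclose(matrix_power(_d_demo, 9), np.diag([2.0**9, 3.0**9])))  # -> True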
# Markov matrix & steady state
import numpy as np
from scipy import linalg as LA
from numpy.linalg import matrix_power
import sympy as sp
k = np.array([72, 32, 34])
markov = np.zeros((3,3))
for i in range(0,3):
m = np.random.rand(1,3)
m = m/np.sum(m)
markov[i] = m
for i in range (1,100000):
base = matrix_power(markov, i)
base_1 = matrix_power( markov,i+1)
    if np.allclose(base, base_1):  # elementwise float equality is fragile
print("steady state reached for i = %s" % i)
break
print(markov)
print(base_1)
print(k.dot(base_1))
#DFT & FFT
import numpy as np
from scipy import linalg as LA
from scipy.linalg import dft
from scipy.fftpack import fft
import sympy as sp
from sympy import init_printing
import time
init_printing()
matrix = np.random.randint(0 , 10000, 2500).reshape(50,50)
tic1 = time.perf_counter()
dft = dft(50).dot(matrix)
toc1 = time.perf_counter()
print("\n DFT took %s seconds\n" % (toc1-tic1))
dft1 = sp.Matrix(dft)
tic1 = time.perf_counter()
fft = fft(matrix)
toc1 = time.perf_counter()
print("\n FFT took %s seconds\n" % (toc1-tic1))
fft1 = sp.Matrix(fft)
fft1
# check similarity
"""
1) Two matrices A & B will be similar only when they have the same eigen values.
If they have different eigen values , there will be no solution to the equation PA = PB.
We will verify this through a 10* 10 matrix
2) Given a 50 by 50 matrix , how to find a similar matrix ( non diagonal)
Get any invertible 50 by 50 matrix & multiply Pinv. A. P
The resultant matrix should have same eigen values & similar to A
3) Also we can find the similar matrix by multiplying Eigen values with V*VT where V is the column of any orthonormal basis
"""
import numpy as np
from scipy import linalg as LA
from numpy.linalg import inv
np.set_printoptions(suppress=True)
A = np.random.randint(1,100,2500).reshape(50,50)
A = A.dot(A.T)
P = np.random.randn(50,50)
P_inv = inv(P)
eig1 = LA.eigvals(A)
eig1 = np.sort(eig1)
B1 = P_inv.dot(A)
B = B1.dot(P)
eig2 = LA.eigvals(B)
eig2 = np.sort(eig2)
print(np.real(eig1))
print(np.real(eig2))
# jordan normal form
from scipy import linalg as LA
import numpy as np
import sympy as sp
from sympy import Matrix
from sympy import init_printing
init_printing()
C = np.eye(50,50)
for i in range (25,50):
C[i] = C[i]*2
for i in range(0,48):
k = np.random.randint(0,2)
if k == 0:
C[i,i+1] =1
P = np.random.rand(50,50)
p_inv = inv(P)
JMatrix = p_inv.dot(C).dot(P)
eigC, eigV =LA.eig(C)
eigJ, eigK = LA.eig(JMatrix)
print(eigC)
print(eigJ)
# Singular value decomposition
from scipy import linalg as LA
import numpy as np
from sympy import Matrix
from sympy import init_printing
init_printing(num_columns = 40)
matrix = np.random.randint(1,10,1200).reshape(30,40)
U, Sigma, VT = LA.svd(matrix)
Sigma = np.diag(Sigma)
k = np.zeros((30,10))
Sigma = np.column_stack((Sigma,k))
matrix = sp.Matrix(matrix)
product = U.dot(Sigma).dot(VT)
product = np.round(product , decimals =2)
product = sp.Matrix(product)
matrix == product
# pseudo-inverse
from scipy import linalg as LA
import numpy as np
from sympy import Matrix
from sympy import init_printing
init_printing(num_columns = 40, use_latex = True)
matrix = np.random.randint(1,10,1200).reshape(30,40)
matrixT = matrix.T
psuInv1 = LA.pinv(matrix)
psuInv2 = LA.pinv(matrixT)
psuInv1s= sp.Matrix(psuInv1)
psuInv2s= sp.Matrix(psuInv2)
psuInv2s
```
| github_jupyter |
# Data Scraping:
> # An Introduction to Beautifulsoup
***
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
# Problems we need to solve
- Parsing pages
- Getting source data hidden behind Javascript
- Automatic pagination
- Automatic login
- Connecting to API endpoints
```
import urllib2
from bs4 import BeautifulSoup
```
- For ordinary scraping jobs, urllib2 together with BeautifulSoup is enough.
- This is especially true for pages whose URL changes in a regular way as you page through: you only need to handle the regular URLs.
- A simple example is scraping posts about a given keyword from the Tianya forum.
- On Tianya, the first page of posts about smog (雾霾) is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
- The second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
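Because those URLs only differ in `nextid`, the page sequence can be generated directly (a sketch; the keyword is left in its raw form, as in the URLs above):

```python
# Build the first few Tianya search-result URLs for the keyword 雾霾.
base = 'http://bbs.tianya.cn/list.jsp?item=free&nextid={}&order=8&k=雾霾'
urls = [base.format(page) for page in range(3)]
print(urls[1])
```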
# Beautiful Soup
> Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:
- Beautiful Soup provides a few simple methods. It doesn't take much code to write an application
- Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding.
- Beautiful Soup sits on top of popular Python parsers like lxml and html5lib.
# Install beautifulsoup4
### open your terminal/cmd
> $ pip install beautifulsoup4
# The first crawler
Beautifulsoup Quick Start
http://www.crummy.com/software/BeautifulSoup/bs4/doc/

```
url = 'file:///Users/chengjun/GitHub/cjc/data/test.html'
# http://computational-class.github.io/cjc/data/test.html
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content, 'html.parser')
soup
```
# html.parser
Beautiful Soup supports the html.parser included in Python’s standard library
# lxml
but it also supports a number of third-party Python parsers. One is the lxml parser `lxml`. Depending on your setup, you might install lxml with one of these commands:
> $ apt-get install python-lxml
> $ easy_install lxml
> $ pip install lxml
# html5lib
Another alternative is the pure-Python html5lib parser `html5lib`, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:
> $ apt-get install python-html5lib
> $ easy_install html5lib
> $ pip install html5lib
```
print(soup.prettify())
```
- html
- head
- title
- body
- p (class = 'title', 'story' )
- a (class = 'sister')
- href/id
# The select method
```
soup.select('.sister')
soup.select('.story')
soup.select('#link2')
soup.select('#link2')[0]['href']
```
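The selector syntax above maps onto CSS: `.name` matches by class and `#name` by id. A self-contained illustration (the HTML snippet here is made up, standing in for test.html):

```python
from bs4 import BeautifulSoup

html = '<p><a class="sister" id="link2" href="/lacie">Lacie</a></p>'
demo = BeautifulSoup(html, 'html.parser')
print(demo.select('.sister')[0].text)    # -> Lacie
print(demo.select('#link2')[0]['href'])  # -> /lacie
```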
# The find_all method
```
soup('p')
soup.find_all('p')
[i.text for i in soup('p')]
for i in soup('p'):
print i.text
for tag in soup.find_all(True):
print(tag.name)
soup('head') # or soup.head
soup('body') # or soup.body
soup('title') # or soup.title
soup('p')
soup.p
soup.title.name
soup.title.string
soup.title.text
# the text method is recommended
soup.title.parent.name
soup.p
soup.p['class']
soup.find_all('p', {'class': 'title'})
soup.find_all('p', class_= 'title')
soup.find_all('p', {'class': 'story'})
soup.find_all('p', {'class': 'story'})[0].find_all('a')
soup.a
soup('a')
soup.find(id="link3")
soup.find_all('a')
soup.find_all('a', {'class': 'sister'}) # compare with soup.find_all('a')
soup.find_all('a', {'class': 'sister'})[0]
soup.find_all('a', {'class': 'sister'})[0].text
soup.find_all('a', {'class': 'sister'})[0]['href']
soup.find_all('a', {'class': 'sister'})[0]['id']
soup.find_all(["a", "b"])
print(soup.get_text())
```
***
***
# Data Scraping:
> # Scraping WeChat public-account article content from a URL
***
***
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
```
from IPython.display import display_html, HTML
HTML('<iframe src=http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\
width=500 height=500></iframe>')
# the webpage we would like to crawl
```
# View the page source
# Inspect
```
url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd"
content = urllib2.urlopen(url).read() # fetch the page's HTML
soup = BeautifulSoup(content, 'html.parser')
title = soup.select("div .title #id1 img ")
for i in title:
print i.text
print soup.find('h2', {'class': 'rich_media_title'}).text
print soup.find('div', {'class': 'rich_media_meta_list'})
print soup.find('em').text
article = soup.find('div', {'class': 'rich_media_content'}).text
print article
rmml = soup.find('div', {'class': 'rich_media_meta_list'})
date = rmml.find(id = 'post-date').text
rmc = soup.find('div', {'class': 'rich_media_content'})
content = rmc.get_text()
print title
print date
print content
```
# Try to scrape the read count and like count
```
url = 'http://mp.weixin.qq.com/s?src=3×tamp=1463191394&ver=1&signature=cV3qMIBeBTS6i8cJJOPu98j-H9veEPx0Y0BekUE7F6*sal9nkYG*w*FwDiaySIfR4XZL-XFbo2TFzrMxEniDETDYIMRuKmisV8xjfOcCjEWmPkYfK57G*cffYv4JxuM*RUtN8LUIg*n6Kd0AKB8--w=='
content = urllib2.urlopen(url).read() # fetch the page's HTML
soup = BeautifulSoup(content, 'html.parser')
print soup
soup.find(id='sg_likeNum3')
```
# Homework:
- Scrape the content of the latest issue of the Fudan New Media (复旦新媒体) WeChat public account
| github_jupyter |
# Import the necessary libraries
We need to import a library called [captcha](https://github.com/lepture/captcha/) to generate CAPTCHA images.
The CAPTCHA characters we generate consist of digits and uppercase letters.
```sh
pip install captcha numpy matplotlib tensorflow-gpu pydot tqdm
```
```
from captcha.image import ImageCaptcha
import matplotlib.pyplot as plt
import numpy as np
import random
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import string
characters = string.digits + string.ascii_uppercase
print(characters)
width, height, n_len, n_class = 128, 64, 4, len(characters) + 1
```
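The `+ 1` above reserves an extra class index for the CTC blank label. A quick check of the alphabet size (digits plus uppercase letters):

```python
import string

characters = string.digits + string.ascii_uppercase
print(len(characters), len(characters) + 1)  # -> 36 37
```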
# Keep TensorFlow from allocating all of the GPU memory
```
import tensorflow as tf
import tensorflow.keras.backend as K
config = tf.ConfigProto()
config.gpu_options.allow_growth=True
sess = tf.Session(config=config)
K.set_session(sess)
```
# Define the CTC loss
```
import tensorflow.keras.backend as K
def ctc_lambda_func(args):
y_pred, labels, input_length, label_length = args
return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
```
# Define the network architecture
```
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
input_tensor = Input((height, width, 3))
x = input_tensor
for i, n_cnn in enumerate([2, 2, 2, 2, 2]):
for j in range(n_cnn):
x = Conv2D(32*2**min(i, 3), kernel_size=3, padding='same', kernel_initializer='he_uniform')(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(2 if i < 3 else (2, 1))(x)
x = Permute((2, 1, 3))(x)
x = TimeDistributed(Flatten())(x)
rnn_size = 128
x = Bidirectional(CuDNNGRU(rnn_size, return_sequences=True))(x)
x = Bidirectional(CuDNNGRU(rnn_size, return_sequences=True))(x)
x = Dense(n_class, activation='softmax')(x)
base_model = Model(inputs=input_tensor, outputs=x)
labels = Input(name='the_labels', shape=[n_len], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')
loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([x, labels, input_length, label_length])
model = Model(inputs=[input_tensor, labels, input_length, label_length], outputs=loss_out)
```
# Visualize the network architecture
```
from tensorflow.keras.utils import plot_model
from IPython.display import Image
plot_model(model, to_file='ctc.png', show_shapes=True)
Image('ctc.png')
base_model.summary()
```
# Define the data generator
```
from tensorflow.keras.utils import Sequence
class CaptchaSequence(Sequence):
def __init__(self, characters, batch_size, steps, n_len=4, width=128, height=64,
input_length=16, label_length=4):
self.characters = characters
self.batch_size = batch_size
self.steps = steps
self.n_len = n_len
self.width = width
self.height = height
self.input_length = input_length
self.label_length = label_length
self.n_class = len(characters)
self.generator = ImageCaptcha(width=width, height=height)
def __len__(self):
return self.steps
def __getitem__(self, idx):
X = np.zeros((self.batch_size, self.height, self.width, 3), dtype=np.float32)
y = np.zeros((self.batch_size, self.n_len), dtype=np.uint8)
input_length = np.ones(self.batch_size)*self.input_length
label_length = np.ones(self.batch_size)*self.label_length
for i in range(self.batch_size):
random_str = ''.join([random.choice(self.characters) for j in range(self.n_len)])
X[i] = np.array(self.generator.generate_image(random_str)) / 255.0
y[i] = [self.characters.find(x) for x in random_str]
return [X, y, input_length, label_length], np.ones(self.batch_size)
```
# Test the generator
```
data = CaptchaSequence(characters, batch_size=1, steps=1)
[X_test, y_test, input_length, label_length], _ = data[0]
plt.imshow(X_test[0])
plt.title(''.join([characters[x] for x in y_test[0]]))
print(input_length, label_length)
```
# Accuracy callback
```
from tqdm import tqdm
def evaluate(model, batch_size=128, steps=20):
batch_acc = 0
valid_data = CaptchaSequence(characters, batch_size, steps)
for [X_test, y_test, _, _], _ in valid_data:
y_pred = base_model.predict(X_test)
shape = y_pred.shape
out = K.get_value(K.ctc_decode(y_pred, input_length=np.ones(shape[0])*shape[1])[0][0])[:, :4]
if out.shape[1] == 4:
batch_acc += (y_test == out).all(axis=1).mean()
return batch_acc / steps
from tensorflow.keras.callbacks import Callback
class Evaluate(Callback):
def __init__(self):
self.accs = []
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
acc = evaluate(base_model)
logs['val_acc'] = acc
self.accs.append(acc)
print(f'\nacc: {acc*100:.4f}')
```
# Train the model
```
from tensorflow.keras.callbacks import EarlyStopping, CSVLogger, ModelCheckpoint
from tensorflow.keras.optimizers import *
train_data = CaptchaSequence(characters, batch_size=128, steps=1000)
valid_data = CaptchaSequence(characters, batch_size=128, steps=100)
callbacks = [EarlyStopping(patience=5), Evaluate(),
CSVLogger('ctc.csv'), ModelCheckpoint('ctc_best.h5', save_best_only=True)]
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=Adam(1e-3, amsgrad=True))
model.fit_generator(train_data, epochs=100, validation_data=valid_data, workers=4, use_multiprocessing=True,
callbacks=callbacks)
```
### Load the best weights and train a little longer
```
model.load_weights('ctc_best.h5')
callbacks = [EarlyStopping(patience=5), Evaluate(),
CSVLogger('ctc.csv', append=True), ModelCheckpoint('ctc_best.h5', save_best_only=True)]
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=Adam(1e-4, amsgrad=True))
model.fit_generator(train_data, epochs=100, validation_data=valid_data, workers=4, use_multiprocessing=True,
callbacks=callbacks)
model.load_weights('ctc_best.h5')
```
# Test the model
```
characters2 = characters + ' '
[X_test, y_test, _, _], _ = data[0]
y_pred = base_model.predict(X_test)
out = K.get_value(K.ctc_decode(y_pred, input_length=np.ones(y_pred.shape[0])*y_pred.shape[1], )[0][0])[:, :4]
out = ''.join([characters[x] for x in out[0]])
y_true = ''.join([characters[x] for x in y_test[0]])
plt.imshow(X_test[0])
plt.title('pred:' + str(out) + '\ntrue: ' + str(y_true))
argmax = np.argmax(y_pred, axis=2)[0]
list(zip(argmax, ''.join([characters2[x] for x in argmax])))
```
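`K.ctc_decode`, used in the evaluation above, greedily collapses repeated symbols and drops the blank label. A plain-Python sketch of that collapse rule, assuming the blank is the last class index (`n_class - 1`):

```python
def greedy_collapse(argmax_seq, blank):
    # Collapse consecutive repeats, then drop the blank symbol.
    out, prev = [], None
    for idx in argmax_seq:
        if idx != prev and idx != blank:
            out.append(idx)
        prev = idx
    return out

print(greedy_collapse([3, 3, 36, 7, 7, 36, 36, 3], blank=36))  # -> [3, 7, 3]
```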
# Compute the model's overall accuracy
```
evaluate(base_model)
```
# Save the model
```
base_model.save('ctc.h5', include_optimizer=False)
```
# Plot the training curves
```sh
pip install pandas
```
```
import pandas as pd
df = pd.read_csv('ctc.csv')
df[['loss', 'val_loss']].plot()
df[['loss', 'val_loss']].plot(logy=True)
df['val_acc'].plot()
```
| github_jupyter |
```
!unzip "/content/drive/MyDrive/archive.zip" -d archive
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
from tensorflow import keras
from keras.layers import Input, Lambda, Dense, Flatten
from keras.models import Model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
import numpy as np
from glob import glob
import matplotlib.pyplot as plt
IMAGE_SIZE = [224, 224]
train_path = '/content/archive/chest_xray/train'
valid_path = '/content/archive/chest_xray/test'
vgg = VGG16(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
for layer in vgg.layers:
layer.trainable = False
folders = glob('/content/archive/chest_xray/train/*')
x = Flatten()(vgg.output)
prediction = Dense(len(folders), activation='softmax')(x)
# create a model object
model = Model(inputs=vgg.input, outputs=prediction)
# view the structure of the model
model.summary()
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
# Make sure you provide the same target size as initialized for the image size
training_set = train_datagen.flow_from_directory('/content/archive/chest_xray/train',
target_size = (224, 224),
batch_size = 10,
class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('/content/archive/chest_xray/test',
target_size = (224, 224),
batch_size = 10,
class_mode = 'categorical')
r = model.fit_generator(
training_set,
validation_data=test_set,
epochs=30,steps_per_epoch=len(training_set),
validation_steps=len(test_set)
)
import numpy as np
model.save('chest_xray.h5')
img=image.load_img('/content/archive/chest_xray/val/NORMAL/NORMAL2-IM-1437-0001.jpeg',target_size=(224,224))
x=image.img_to_array(img) #convert an image into array
x=np.expand_dims(x, axis=0)
img_data=preprocess_input(x)
classes=model.predict(img_data)
result=classes[0][0]
if result>0.5:
print("Result is Normal")
else:
print("Person is Affected By PNEUMONIA")
```
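Note that the hard-coded `result > 0.5` check assumes the first softmax output corresponds to the NORMAL class. Keras assigns class indices alphabetically from the subdirectory names, which you can confirm via `training_set.class_indices`. A sketch of mapping a softmax vector back to its label — the probabilities and index mapping here are hypothetical:

```python
# Hypothetical mapping, as flow_from_directory would build it (alphabetical order).
class_indices = {'NORMAL': 0, 'PNEUMONIA': 1}
index_to_label = {i: name for name, i in class_indices.items()}

probs = [0.2, 0.8]  # hypothetical softmax output for one image
predicted = index_to_label[max(range(len(probs)), key=probs.__getitem__)]
print(predicted)  # -> PNEUMONIA
```

Reading the label off the argmax this way keeps the code correct even if the class ordering changes.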
| github_jupyter |
# Library
```
import numpy as np
import torch
import torch.nn as nn
from utils import *
from dataset import TossingDataset
from torch.utils.data import DataLoader
```
# Model
```
class NaiveMLP(nn.Module):
def __init__(self, in_traj_num, pre_traj_num):
super(NaiveMLP, self).__init__()
self.hidden_dim = 128
self.fc_1 = nn.Sequential(
nn.Linear(in_traj_num * 2, self.hidden_dim),
nn.ReLU(inplace=True),
nn.Linear(self.hidden_dim, self.hidden_dim),
nn.ReLU(inplace=True),
)
self.fc_out = nn.Linear(self.hidden_dim, pre_traj_num * 2)
def forward(self, x):
x = self.fc_1(x)
x = self.fc_out(x)
return x
def train_model(model, train_loader, test_loader, num_epochs, optimizer, scheduler, criterion):
# Training the Model
min_test_dif = float('inf')
epoch_loss = []
for epoch in range(num_epochs):
batch_loss = []
for i, data in enumerate(train_loader):
# get the inputs
inputs = data['current_locs_gt']
locs_gt = data['future_locs_gt']
inputs = inputs.cuda()
locs_gt = locs_gt.cuda()
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
            outputs = model(inputs)  # use the model passed in, not the global net
loss = criterion(outputs, locs_gt, inputs)
loss.backward()
optimizer.step()
batch_loss.append(loss.item())
# Results every epoch
cur_epoch_loss = sum(batch_loss) / len(batch_loss)
# Scheduler
scheduler.step(cur_epoch_loss)
# Test the network
train_dif = test_model(model, train_loader)
test_dif = test_model(model, test_loader)
# Print the result
print('Epoch: %d Train Loss: %.03f Train Dif: %.03f Test Dif: %.03f'
% (epoch, cur_epoch_loss, train_dif, test_dif))
epoch_loss.append(cur_epoch_loss)
if min_test_dif > test_dif:
min_test_dif = test_dif
print('Best')
return epoch_loss
def test_model(model, test_loader):
# Test the Model
model.eval()
batch_loss = []
for i, data in enumerate(test_loader):
# get the inputs
inputs = data['current_locs_gt']
locs_gt = data['future_locs_gt']
inputs = inputs.cuda()
locs_gt = locs_gt.cuda()
        outputs = model(inputs)  # use the model passed in, not the global net
loss = get_mean_distance(locs_gt, outputs)
batch_loss.append(loss.item())
# Results every epoch
cur_epoch_loss = sum(batch_loss) / len(batch_loss)
model.train()
return cur_epoch_loss
def get_mean_distance(locs_a, locs_b):
vector_len = locs_a.shape[1]
x_a = locs_a[:, :vector_len // 2]
y_a = locs_a[:, vector_len // 2:]
x_b = locs_b[:, :vector_len // 2]
y_b = locs_b[:, vector_len // 2:]
dif_x = (x_a - x_b) ** 2
dif_y = (y_a - y_b) ** 2
dif = dif_x + dif_y
return torch.mean(torch.sqrt(dif))
#################### Hyperparameters ####################
num_epochs = 50000
learning_rate = 0.001
weight_decay = 0
in_frames_num = 3
pre_frames_num = 15
factor = 0.95
patience = 40
batch_size = 16
#################### Hyperparameters ####################
net = NaiveMLP(in_traj_num=3, pre_traj_num=15).cuda()
criterion = PhysicalRegularization()
train_set = TossingDataset(
'./dataset/r1_k0.2/train',
in_traj_num=3,
pre_traj_num=15,
sample_num=32
)
test_set = TossingDataset(
'./dataset/r1_k0.2/test',
in_traj_num=3,
pre_traj_num=15
)
print(len(train_set), len(test_set))
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_set, batch_size=len(test_set), shuffle=False)
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate, weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer,
mode='min',
factor=factor,
patience=patience,
verbose=True,
threshold=1e-3
)
train_loss = train_model(
net,
train_loader,
test_loader,
num_epochs,
optimizer,
scheduler,
criterion
)
```
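As a quick sanity check of `get_mean_distance`, which treats each row as `[x_1..x_n, y_1..y_n]`, here is the same computation in NumPy on hypothetical coordinates (one point pair at distance 5, one identical pair):

```python
import numpy as np

def mean_distance(locs_a, locs_b):
    # rows are [x_1..x_n, y_1..y_n]; split the halves, then average Euclidean distances
    n = locs_a.shape[1] // 2
    dx = locs_a[:, :n] - locs_b[:, :n]
    dy = locs_a[:, n:] - locs_b[:, n:]
    return np.mean(np.sqrt(dx ** 2 + dy ** 2))

a = np.array([[0.0, 0.0, 0.0, 0.0]])  # two points: (0, 0) and (0, 0)
b = np.array([[3.0, 0.0, 4.0, 0.0]])  # two points: (3, 4) and (0, 0)
print(mean_distance(a, b))  # -> 2.5, the mean of 5.0 and 0.0
```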
| github_jupyter |
<center>
<img src="https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Module%202/images/IDSNlogo.png" width="300" alt="cognitiveclass.ai logo" />
</center>
# Data Wrangling
Estimated time needed: **30** minutes
## Objectives
After completing this lab you will be able to:
* Handle missing values
* Correct data format
* Standardize and normalize data
<h2>Table of Contents</h2>
<div class="alert alert-block alert-info" style="margin-top: 20px">
<ul>
<li><a href="https://#identify_handle_missing_values">Identify and handle missing values</a>
<ul>
<li><a href="https://#identify_missing_values">Identify missing values</a></li>
<li><a href="https://#deal_missing_values">Deal with missing values</a></li>
<li><a href="https://#correct_data_format">Correct data format</a></li>
</ul>
</li>
<li><a href="https://#data_standardization">Data standardization</a></li>
<li><a href="https://#data_normalization">Data normalization (centering/scaling)</a></li>
<li><a href="https://#binning">Binning</a></li>
<li><a href="https://#indicator">Indicator variable</a></li>
</ul>
</div>
<hr>
<h2>What is the purpose of data wrangling?</h2>
Data wrangling is the process of converting data from the initial format to a format that may be better for analysis.
<h3>What is the fuel consumption (L/100k) rate for the diesel car?</h3>
<h3>Import data</h3>
<p>
You can find the "Automobile Dataset" from the following link: <a href="https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01">https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data</a>.
We will be using this dataset throughout this course.
</p>
<h4>Import pandas</h4>
```
import pandas as pd
import matplotlib.pylab as plt
```
<h2>Reading the dataset from the URL and adding the related headers</h2>
First, we assign the URL of the dataset to "filename".
This dataset was hosted on IBM Cloud object. Click <a href="https://cocl.us/corsera_da0101en_notebook_bottom?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01">HERE</a> for free storage.
```
filename = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/auto.csv"
```
Then, we create a Python list <b>headers</b> containing the names of the headers.
```
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
```
Use the Pandas method <b>read_csv()</b> to load the data from the web address. Set the parameter "names" equal to the Python list "headers".
```
df = pd.read_csv(filename, names = headers)
```
Use the method <b>head()</b> to display the first five rows of the dataframe.
```
# To see what the data set looks like, we'll use the head() method.
df.head()
```
As we can see, several question marks appeared in the dataframe; those are missing values which may hinder our further analysis.
<div>So, how do we identify all those missing values and deal with them?</div>
<b>How to work with missing data?</b>
Steps for working with missing data:
<ol>
<li>Identify missing data</li>
<li>Deal with missing data</li>
<li>Correct data format</li>
</ol>
<h2 id="identify_handle_missing_values">Identify and handle missing values</h2>
<h3 id="identify_missing_values">Identify missing values</h3>
<h4>Convert "?" to NaN</h4>
In the car dataset, missing data comes with the question mark "?".
We replace "?" with NaN (Not a Number), Python's default missing value marker for reasons of computational speed and convenience. Here we use the function:
<pre>.replace(A, B, inplace = True) </pre>
to replace A by B.
```
import numpy as np
# replace "?" to NaN
df.replace("?", np.nan, inplace = True)
df.head(5)
```
<h4>Evaluating for Missing Data</h4>
The missing values have now been converted to NaN. We can use the following functions to identify them. There are two methods to detect missing data:
<ol>
<li><b>.isnull()</b></li>
<li><b>.notnull()</b></li>
</ol>
The output is a boolean value indicating whether the value that is passed into the argument is in fact missing data.
```
missing_data = df.isnull()
missing_data.head(5)
```
"True" means the value is a missing value while "False" means the value is not a missing value.
<h4>Count missing values in each column</h4>
<p>
Using a for loop in Python, we can quickly figure out the number of missing values in each column. As mentioned above, "True" represents a missing value and "False" means the value is present in the dataset. In the body of the for loop the method ".value_counts()" counts the number of "True" values.
</p>
```
for column in missing_data.columns.values.tolist():
print(column)
print (missing_data[column].value_counts())
print("")
```
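A more compact alternative to the loop above is `df.isnull().sum()`, which produces the same per-column counts in one line (assuming the "?" entries have already been replaced with NaN). A small self-contained demonstration on a toy dataframe:

```python
import numpy as np
import pandas as pd

demo = pd.DataFrame({'bore': [2.68, np.nan, 3.19],
                     'price': [13495.0, 16500.0, np.nan]})
print(demo.isnull().sum())  # per-column count of missing values
```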
Based on the summary above, each column has 205 rows of data and seven of the columns contain missing data:
<ol>
<li>"normalized-losses": 41 missing data</li>
<li>"num-of-doors": 2 missing data</li>
<li>"bore": 4 missing data</li>
<li>"stroke" : 4 missing data</li>
<li>"horsepower": 2 missing data</li>
<li>"peak-rpm": 2 missing data</li>
<li>"price": 4 missing data</li>
</ol>
<h3 id="deal_missing_values">Deal with missing data</h3>
<b>How to deal with missing data?</b>
<ol>
<li>Drop data<br>
a. Drop the whole row<br>
b. Drop the whole column
</li>
<li>Replace data<br>
a. Replace it by mean<br>
b. Replace it by frequency<br>
c. Replace it based on other functions
</li>
</ol>
Whole columns should be dropped only if most entries in the column are empty. In our dataset, none of the columns are empty enough to drop entirely.
We have some freedom in choosing which method to replace data; however, some methods may seem more reasonable than others. We will apply each method to many different columns:
<b>Replace by mean:</b>
<ul>
<li>"normalized-losses": 41 missing data, replace them with mean</li>
<li>"stroke": 4 missing data, replace them with mean</li>
<li>"bore": 4 missing data, replace them with mean</li>
<li>"horsepower": 2 missing data, replace them with mean</li>
<li>"peak-rpm": 2 missing data, replace them with mean</li>
</ul>
<b>Replace by frequency:</b>
<ul>
<li>"num-of-doors": 2 missing data, replace them with "four".
<ul>
<li>Reason: 84% of sedans have four doors. Since four doors is the most frequent value, it is most likely to occur</li>
</ul>
</li>
</ul>
<b>Drop the whole row:</b>
<ul>
<li>"price": 4 missing data, simply delete the whole row
<ul>
<li>Reason: price is what we want to predict. Any data entry without price data cannot be used for prediction; therefore any row now without price data is not useful to us</li>
</ul>
</li>
</ul>
<h4>Calculate the mean value for the "normalized-losses" column </h4>
```
avg_norm_loss = df["normalized-losses"].astype("float").mean(axis=0)
print("Average of normalized-losses:", avg_norm_loss)
```
<h4>Replace "NaN" with mean value in "normalized-losses" column</h4>
```
df["normalized-losses"].replace(np.nan, avg_norm_loss, inplace=True)
```
<h4>Calculate the mean value for the "bore" column</h4>
```
avg_bore=df['bore'].astype('float').mean(axis=0)
print("Average of bore:", avg_bore)
```
<h4>Replace "NaN" with the mean value in the "bore" column</h4>
```
df["bore"].replace(np.nan, avg_bore, inplace=True)
```
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #1: </h1>
<b>Based on the example above, replace NaN in "stroke" column with the mean value.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
avg_stroke=df['stroke'].astype('float').mean(axis=0)
print("Average of stroke:", avg_stroke)
df["stroke"].replace(np.nan, avg_stroke, inplace=True)
```
<details><summary>Click here for the solution</summary>
```python
#Calculate the mean value for "stroke" column
avg_stroke = df["stroke"].astype("float").mean(axis = 0)
print("Average of stroke:", avg_stroke)
# replace NaN by mean value in "stroke" column
df["stroke"].replace(np.nan, avg_stroke, inplace = True)
```
</details>
<h4>Calculate the mean value for the "horsepower" column</h4>
```
avg_horsepower = df['horsepower'].astype('float').mean(axis=0)
print("Average horsepower:", avg_horsepower)
```
<h4>Replace "NaN" with the mean value in the "horsepower" column</h4>
```
df['horsepower'].replace(np.nan, avg_horsepower, inplace=True)
```
<h4>Calculate the mean value for "peak-rpm" column</h4>
```
avg_peakrpm=df['peak-rpm'].astype('float').mean(axis=0)
print("Average peak rpm:", avg_peakrpm)
```
<h4>Replace "NaN" with the mean value in the "peak-rpm" column</h4>
```
df['peak-rpm'].replace(np.nan, avg_peakrpm, inplace=True)
```
To see which values are present in a particular column, we can use the ".value_counts()" method:
```
df['num-of-doors'].value_counts()
```
We can see that four doors is the most common type. We can also use the ".idxmax()" method to calculate the most common type automatically:
```
df['num-of-doors'].value_counts().idxmax()
```
The replacement procedure is very similar to what we have seen previously:
```
#replace the missing 'num-of-doors' values by the most frequent
df["num-of-doors"].replace(np.nan, "four", inplace=True)
```
Finally, let's drop all rows that do not have price data:
```
# simply drop whole row with NaN in "price" column
df.dropna(subset=["price"], axis=0, inplace=True)
# reset index, because we dropped rows
df.reset_index(drop=True, inplace=True)
df.head()
```
<b>Good!</b> Now, we have a dataset with no missing values.
<h3 id="correct_data_format">Correct data format</h3>
<b>We are almost there!</b>
<p>The last step in data cleaning is checking and making sure that all data is in the correct format (int, float, text or other).</p>
In Pandas, we use:
<p><b>.dtypes</b> to check the data type</p>
<p><b>.astype()</b> to change the data type</p>
<h4>Let's list the data types for each column</h4>
```
df.dtypes
```
<p>As we can see above, some columns are not of the correct data type. Numerical variables should have type 'float' or 'int', and variables with strings such as categories should have type 'object'. For example, 'bore' and 'stroke' variables are numerical values that describe the engines, so we should expect them to be of the type 'float' or 'int'; however, they are shown as type 'object'. We have to convert data types into a proper format for each column using the "astype()" method.</p>
<h4>Convert data types to proper format</h4>
```
df[["bore", "stroke"]] = df[["bore", "stroke"]].astype("float")
df[["normalized-losses"]] = df[["normalized-losses"]].astype("int")
df[["price"]] = df[["price"]].astype("float")
df[["peak-rpm"]] = df[["peak-rpm"]].astype("float")
```
<h4>Let us list the columns after the conversion</h4>
```
df.dtypes
```
<b>Wonderful!</b>
Now we have finally obtained the cleaned dataset with no missing values with all data in its proper format.
<h2 id="data_standardization">Data Standardization</h2>
<p>
Data is usually collected from different agencies in different formats.
(Data standardization is also a term for a particular type of data normalization where we subtract the mean and divide by the standard deviation.)
</p>
<b>What is standardization?</b>
<p>Standardization is the process of transforming data into a common format, allowing the researcher to make meaningful comparisons.
</p>
<b>Example</b>
<p>Transform mpg to L/100km:</p>
<p>In our dataset, the fuel consumption columns "city-mpg" and "highway-mpg" are represented by mpg (miles per gallon) unit. Assume we are developing an application in a country that accepts the fuel consumption with L/100km standard.</p>
<p>We will need to apply <b>data transformation</b> to transform mpg into L/100km.</p>
<p>The formula for unit conversion is:</p>
L/100km = 235 / mpg
<p>We can do many mathematical operations directly in Pandas.</p>
```
df.head()
# Convert mpg to L/100km by mathematical operation (235 divided by mpg)
df['city-L/100km'] = 235/df["city-mpg"]
# check your transformed data
df.head()
```
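As a quick sanity check of the conversion formula in plain Python (the input values here are illustrative):

```python
def mpg_to_l_per_100km(mpg):
    # L/100km = 235 / mpg, per the formula above
    return 235.0 / mpg

print(mpg_to_l_per_100km(23.5))  # -> 10.0
print(mpg_to_l_per_100km(47.0))  # -> 5.0
```

Note the relationship is reciprocal: a *higher* mpg means a *lower* L/100km figure.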
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #2: </h1>
<b>According to the example above, transform mpg to L/100km in the column of "highway-mpg" and change the name of column to "highway-L/100km".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
# transform mpg to L/100km by mathematical operation (235 divided by mpg)
df["highway-mpg"] = 235/df["highway-mpg"]
# rename column name from "highway-mpg" to "highway-L/100km"
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
# check your transformed data
df.head()
```
<details><summary>Click here for the solution</summary>
```python
# transform mpg to L/100km by mathematical operation (235 divided by mpg)
df["highway-mpg"] = 235/df["highway-mpg"]
# rename column name from "highway-mpg" to "highway-L/100km"
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
# check your transformed data
df.head()
```
</details>
<h2 id="data_normalization">Data Normalization</h2>
<b>Why normalization?</b>
<p>Normalization is the process of transforming values of several variables into a similar range. Typical normalizations include scaling the variable so the variable average is 0, scaling the variable so the variance is 1, or scaling the variable so the variable values range from 0 to 1.
</p>
<b>Example</b>
<p>To demonstrate normalization, let's say we want to scale the columns "length", "width" and "height".</p>
<p><b>Target:</b> would like to normalize those variables so their value ranges from 0 to 1</p>
<p><b>Approach:</b> replace original value by (original value)/(maximum value)</p>
```
# replace (original value) by (original value)/(maximum value)
df['length'] = df['length']/df['length'].max()
df['width'] = df['width']/df['width'].max()
```
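The same max-scaling rule in plain Python, to make the arithmetic concrete (values illustrative):

```python
def scale_by_max(values):
    # simple feature scaling: each value divided by the column maximum
    m = max(values)
    return [v / m for v in values]

print(scale_by_max([10.0, 20.0, 40.0]))  # -> [0.25, 0.5, 1.0]
```

After scaling, the largest value is always exactly 1 and everything else falls in (0, 1] for positive data.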
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #3: </h1>
<b>According to the example above, normalize the column "height".</b>
</div>
```
# Write your code below and press Shift+Enter to execute
df['height'] = df['height']/df['height'].max()
# show the scaled columns
df[["length","width","height"]].head()
```
<details><summary>Click here for the solution</summary>
```python
df['height'] = df['height']/df['height'].max()
# show the scaled columns
df[["length","width","height"]].head()
```
</details>
Here we can see we've normalized "length", "width" and "height" in the range of \[0,1].
<h2 id="binning">Binning</h2>
<b>Why binning?</b>
<p>
Binning is a process of transforming continuous numerical variables into discrete categorical 'bins' for grouped analysis.
</p>
<b>Example: </b>
<p>In our dataset, "horsepower" is a real-valued variable ranging from 48 to 288 and it has 59 unique values. What if we only care about the price difference between cars with high horsepower, medium horsepower, and low horsepower (3 types)? Can we rearrange them into three 'bins' to simplify analysis? </p>
<p>We will use the pandas method 'cut' to segment the 'horsepower' column into 3 bins.</p>
<h3>Example of Binning Data In Pandas</h3>
Convert data to correct format:
```
df["horsepower"]=df["horsepower"].astype(int, copy=True)
```
Let's plot the histogram of horsepower to see what the distribution of horsepower looks like.
```
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
plt.pyplot.hist(df["horsepower"])
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
```
<p>We would like 3 bins of equal width, so we use numpy's <code>linspace(start_value, end_value, numbers_generated)</code> function.</p>
<p>Since we want to include the minimum value of horsepower, we want to set start_value = min(df["horsepower"]).</p>
<p>Since we want to include the maximum value of horsepower, we want to set end_value = max(df["horsepower"]).</p>
<p>Since we are building 3 bins of equal length, there should be 4 dividers, so numbers_generated = 4.</p>
We build a bin array with a minimum value to a maximum value by using the bandwidth calculated above. The values will determine when one bin ends and another begins.
```
bins = np.linspace(min(df["horsepower"]), max(df["horsepower"]), 4)
bins
```
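Given the stated horsepower range of 48 to 288, each bin has width (288 − 48) / 3 = 80, so the four dividers come out as 48, 128, 208 and 288. A self-contained check:

```python
import numpy as np

# four evenly spaced dividers over the stated horsepower range
dividers = np.linspace(48, 288, 4)
print(dividers)  # 48, 128, 208, 288
```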
We set group names:
```
group_names = ['Low', 'Medium', 'High']
```
We apply the function "cut" to determine which bin each value of `df['horsepower']` belongs to.
```
df['horsepower-binned'] = pd.cut(df['horsepower'], bins, labels=group_names, include_lowest=True )
df[['horsepower','horsepower-binned']].head(20)
```
Let's see the number of vehicles in each bin:
```
df["horsepower-binned"].value_counts()
```
Let's plot the distribution of each bin:
```
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
pyplot.bar(group_names, df["horsepower-binned"].value_counts())
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
```
<p>
Look at the dataframe above carefully. You will find that the last column provides the bins for "horsepower" based on 3 categories ("Low", "Medium" and "High").
</p>
<p>
We successfully narrowed down the intervals from 59 to 3!
</p>
<h3>Bins Visualization</h3>
Normally, a histogram is used to visualize the distribution of bins we created above.
```
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
# draw histogram of attribute "horsepower" with bins = 3
plt.pyplot.hist(df["horsepower"], bins = 3)
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
```
The plot above shows the binning result for the attribute "horsepower".
<h2 id="indicator">Indicator Variable (or Dummy Variable)</h2>
<b>What is an indicator variable?</b>
<p>
An indicator variable (or dummy variable) is a numerical variable used to label categories. They are called 'dummies' because the numbers themselves don't have inherent meaning.
</p>
<b>Why we use indicator variables?</b>
<p>
We use indicator variables so we can use categorical variables for regression analysis in the later modules.
</p>
<b>Example</b>
<p>
We see the column "fuel-type" has two unique values: "gas" or "diesel". Regression doesn't understand words, only numbers. To use this attribute in regression analysis, we convert "fuel-type" to indicator variables.
</p>
<p>
We will use pandas' method 'get_dummies' to assign numerical values to different categories of fuel type.
</p>
```
df.columns
```
Get the indicator variables and assign it to data frame "dummy_variable\_1":
```
dummy_variable_1 = pd.get_dummies(df["fuel-type"])
dummy_variable_1.head()
```
Change the column names for clarity:
```
dummy_variable_1.rename(columns={'gas':'fuel-type-gas', 'diesel':'fuel-type-diesel'}, inplace=True)
dummy_variable_1.head()
```
In the dataframe, column 'fuel-type' has values for 'gas' and 'diesel' as 0s and 1s now.
```
# merge data frame "df" and "dummy_variable_1"
df = pd.concat([df, dummy_variable_1], axis=1)
# drop original column "fuel-type" from "df"
df.drop("fuel-type", axis = 1, inplace=True)
df.head()
```
The last two columns are now the indicator variable representation of the fuel-type variable. They're all 0s and 1s now.
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #4: </h1>
<b>Similar to before, create an indicator variable for the column "aspiration"</b>
</div>
```
# Write your code below and press Shift+Enter to execute
# get indicator variables of aspiration and assign it to data frame "dummy_variable_2"
dummy_variable_2 = pd.get_dummies(df['aspiration'])
# change column names for clarity
dummy_variable_2.rename(columns={'std':'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
# show first 5 instances of data frame "dummy_variable_1"
dummy_variable_2.head()
```
<details><summary>Click here for the solution</summary>
```python
# get indicator variables of aspiration and assign it to data frame "dummy_variable_2"
dummy_variable_2 = pd.get_dummies(df['aspiration'])
# change column names for clarity
dummy_variable_2.rename(columns={'std':'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
# show first 5 instances of data frame "dummy_variable_1"
dummy_variable_2.head()
```
</details>
<div class="alert alert-danger alertdanger" style="margin-top: 20px">
<h1> Question #5: </h1>
<b>Merge the new dataframe to the original dataframe, then drop the column 'aspiration'.</b>
</div>
```
# Write your code below and press Shift+Enter to execute
# merge the new dataframe to the original dataframe
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
```
<details><summary>Click here for the solution</summary>
```python
# merge the new dataframe to the original dataframe
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
```
</details>
Save the new csv:
```
df.to_csv('clean_df.csv')
```
### Thank you for completing this lab!
## Author
<a href="https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01" target="_blank">Joseph Santarcangelo</a>
### Other Contributors
<a href="https://www.linkedin.com/in/mahdi-noorian-58219234/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01" target="_blank">Mahdi Noorian PhD</a>
Bahare Talayian
Eric Xiao
Steven Dong
Parizad
Hima Vasudevan
<a href="https://www.linkedin.com/in/fiorellawever/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01" target="_blank">Fiorella Wenver</a>
<a href="https://www.linkedin.com/in/yi-leng-yao-84451275/" target="_blank">Yi Yao</a>.
## Change Log
| Date (YYYY-MM-DD) | Version | Changed By | Change Description |
| ----------------- | ------- | ---------- | ----------------------------------- |
| 2020-10-30 | 2.2 | Lakshmi | Changed URL of csv |
| 2020-09-09 | 2.1 | Lakshmi | Updated Indicator Variables section |
| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |
<hr>
## <h3 align="center"> © IBM Corporation 2020. All rights reserved. </h3>
| github_jupyter |
```
import numpy
import matplotlib.pyplot as plt
```
## Plot up tanker fuel capacity by vessel length
```
# SuezMax: 5986.7 m3 (4,025/130 m3 for HFO/diesel)
# Aframax: 2,984 m3 (2,822/162 for HFO/diesel)
# Handymax as 1,956 m3 (1,826/130 m3 for HFO/diesel)
# Small Tanker: 740 m3 (687/53 for HFO/diesel)
# SuezMax (281 m)
# Aframax (201-250 m)
# Handymax (182 m)
# Small Tanker (< 150 m)
tanker_size_classes = [
'SuezMax (251-300 m)',
'Aframax (201-250 m)',
'Handymax (151-200 m)',
'Small Tanker (< 150 m)'
]
fuel_hfo_to_diesel = [
4025/130, 2822/162, 1826/130, 687/53
]
capacity = numpy.array([
    740000, 1956000, 2984000, 5986700,  # 5,986.7 m3 = 5,986,700 liters
])
length = numpy.array([108.5, 182, 247.24, 281])
coef3 = numpy.polyfit(length, capacity, 3)
fit = (
coef3[3] +
coef3[2]*length +
coef3[1]*numpy.power(length,2) +
coef3[0]*numpy.power(length,3)
)
numpy.array(fit.tolist())
test_length = range(50, 320, 25)
fit = (
coef3[3] +
coef3[2]*test_length +
coef3[1]*numpy.power(test_length,2) +
coef3[0]*numpy.power(test_length,3)
)
## plot fit
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.scatter(length[:], capacity[:],50)
ax1.plot(test_length, fit)
plt.xlabel('tanker length (m)',fontsize=12)
plt.ylabel('tanker fuel capacity (liters)',fontsize=12)
plt.show()
```
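A note on the design: expanding the polynomial by hand with `coef3[3] + coef3[2]*x + …` is error-prone; `numpy.polyval` evaluates the coefficient array from `polyfit` directly (highest degree first). A small self-contained check on an exact line (illustrative data):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 5.0])   # exactly y = 2x + 1
coef = np.polyfit(x, y, 1)      # approximately [2.0, 1.0]
print(np.polyval(coef, 3.0))    # approximately 7.0
```

The same call works for the cubic fits above: `numpy.polyval(coef3, test_length)`.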
## plot up tank barge oil capacity
from values in our [MMSI spreadsheet](https://docs.google.com/spreadsheets/d/1dlT0JydkFG43LorqgtHle5IN6caRYjf_3qLrUYqANDY/edit#gid=591561201)
```
harley_cargo_legths = [
65.3796, 73.4568,
73.4568,73.4568,
74.676, 83.8962,
85.0392, 86.868,
90.678, 108.5088,
114.3, 128.4732,
128.7018, 128.7018,
130.5306, 130.5306,
130.5306
]
harley_cargo_capacity = [
2544952.93, 5159066.51,
5195793.20, 5069714.13,
3973955.05, 3461689.27,
6197112.22, 7685258.62,
4465075.16, 5609803.16,
12652106.22, 12791381.46,
13315412.50, 13315412.50,
13041790.71, 13009356.75,
13042744.65
]
harley_volume_per_hold = [
254495.293, 573229.6122,
577310.3556, 563301.57,
397395.505, 247263.5193,
476700.94, 548947.0443,
744179.1933, 560980.316,
903721.8729, 913670.1043,
1109617.708, 1109617.708,
931556.4793, 929239.7679,
931624.6179
]
C = numpy.polyfit(
harley_cargo_legths,
harley_cargo_capacity,
3
)
test_length = range(65,135,5)
harley_cargo_fit = (
C[3] + C[2]*test_length +
C[1]*numpy.power(test_length,2) +
C[0]*numpy.power(test_length,3)
)
## plot fit
fig2 = plt.figure()
ax1 = fig2.add_subplot(111)
ax1.scatter(harley_cargo_legths[:], harley_cargo_capacity[:],50)
ax1.plot(test_length, harley_cargo_fit)
plt.xlabel('cargo tank length (m)',fontsize=12)
plt.ylabel('cargo tank capacity (liters)',fontsize=12)
plt.show()
## plot volume per hold
fig3 = plt.figure()
ax1 = fig3.add_subplot(111)
ax1.scatter(harley_cargo_legths[:], harley_volume_per_hold[:],50)
plt.xlabel('cargo tank length (m)',fontsize=12)
plt.ylabel('cargo volume per hold (liters)',fontsize=12)
plt.show()
test_length
```
### plot up tug barge fuel capacity
from values in our [MMSI spreadsheet](https://docs.google.com/spreadsheets/d/1dlT0JydkFG43LorqgtHle5IN6caRYjf_3qLrUYqANDY/edit#gid=591561201)
```
tug_length = [
33.92424, 33.92424,
33.92424, 32.06496,
38.34384, 41.4528,
41.45
]
tug_fuel_capacity = [
101383.00, 383776.22,
378541.00, 302832.80,
545099.04, 567811.50,
300000.00
]
## plot fit
fig4 = plt.figure()
ax1 = fig4.add_subplot(111)
ax1.scatter(tug_length[:], tug_fuel_capacity[:],50)
plt.xlabel('tug length (m)',fontsize=12)
plt.ylabel('tug fuel capacity (liters)',fontsize=12)
plt.show()
```
# Convolutional Neural Networks
## Project: Write an Algorithm for a Dog Identification App
---
In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.
---
### Why We're Here
In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!
### The Road Ahead
We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.
* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write your Algorithm
* [Step 6](#step6): Test Your Algorithm
---
<a id='step0'></a>
## Step 0: Import Datasets
Make sure that you've downloaded the required human and dog datasets:
* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dogImages`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the home directory, at location `/lfw`.
*Note: If you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folder.*
In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays `human_files` and `dog_files`.
```
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
```
<a id='step1'></a>
## Step 1: Detect Humans
In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images.
OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
```
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.
In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box.
### Write a Human Face Detector
We can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
```
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
```
### (IMPLEMENTATION) Assess the Human Face Detector
__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- What percentage of the first 100 images in `human_files` have a detected human face?
- What percentage of the first 100 images in `dog_files` have a detected human face?
Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.
__Answer:__
(You can print out your results and/or write your percentages in this cell)
```
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
def face_detector_accuracy(face_array):
    face_detected = 0
    for face in face_array:
        if face_detector(face):
            face_detected += 1
    return face_detected / len(face_array) * 100

print('Percentage of human faces detected in human_files_short:', face_detector_accuracy(human_files_short))
print('Percentage of human faces detected in dog_files_short:', face_detector_accuracy(dog_files_short))
```
We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
```
---
<a id='step2'></a>
## Step 2: Detect Dogs
In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images.
### Obtain Pre-trained VGG-16 Model
The code cell below downloads the VGG-16 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
```
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()
```
Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.
### (IMPLEMENTATION) Making Predictions with a Pre-trained Model
In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.
Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html).
```
from PIL import Image
import torchvision.transforms as transforms
# Set PIL to be tolerant of image files that are truncated.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to
    predicted ImageNet class for image at specified path

    Args:
        img_path: path to an image

    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    ## Load and pre-process an image from the given img_path
    ## Return the *index* of the predicted class for that image
    image = Image.open(img_path)
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    image_transforms = transforms.Compose([transforms.Resize(256),
                                           transforms.CenterCrop(224),
                                           transforms.ToTensor(),
                                           transforms.Normalize(mean, std)])
    image_tensor = image_transforms(image)
    image_tensor.unsqueeze_(0)  # add the batch dimension
    if use_cuda:
        image_tensor = image_tensor.cuda()
    VGG16.eval()  # inference mode, so dropout is disabled
    with torch.no_grad():
        output = VGG16(image_tensor)
    _, classes = torch.max(output, dim=1)
    return classes.item()  # predicted class index
```
### (IMPLEMENTATION) Write a Dog Detector
While looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).
Use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
```
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    class_dog = VGG16_predict(img_path)
    return 151 <= class_dog <= 268  # true/false
```
### (IMPLEMENTATION) Assess the Dog Detector
__Question 2:__ Use the code cell below to test the performance of your `dog_detector` function.
- What percentage of the images in `human_files_short` have a detected dog?
- What percentage of the images in `dog_files_short` have a detected dog?
__Answer:__
```
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
dog_percentage_human = 0
dog_percentage_dog = 0
for i in tqdm(range(100)):
    dog_percentage_human += int(dog_detector(human_files_short[i]))
    dog_percentage_dog += int(dog_detector(dog_files_short[i]))
print ('Dog Percentage in Human dataset:{} % \t Dog Percentage in Dog Dataset:{} %'.format(dog_percentage_human,dog_percentage_dog))
```
We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
```
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
resnet50 = models.resnet50(pretrained=True)
if use_cuda:
    resnet50.cuda()

def resnet50_predict(img_path):
    image = Image.open(img_path)
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]
    image_transforms = transforms.Compose([transforms.Resize(256),
                                           transforms.CenterCrop(224),
                                           transforms.ToTensor(),
                                           transforms.Normalize(mean, std)])
    image_tensor = image_transforms(image)
    image_tensor.unsqueeze_(0)
    if use_cuda:
        image_tensor = image_tensor.cuda()
    resnet50.eval()
    output = resnet50(image_tensor)
    _, classes = torch.max(output, dim=1)
    return classes.item()

def resnet50_dog_detector(image_path):
    class_idx = resnet50_predict(image_path)
    return 151 <= class_idx <= 268

from tqdm import tqdm

# larger sample: first 1000 images from each dataset
human_files_long = human_files[:1000]
dog_files_long = dog_files[:1000]
dog_detections_human = 0
dog_detections_dog = 0
for i in tqdm(range(1000)):
    dog_detections_human += int(resnet50_dog_detector(human_files_long[i]))
    dog_detections_dog += int(resnet50_dog_detector(dog_files_long[i]))
print('Dog Percentage in Human dataset: {}% \t Dog Percentage in Dog Dataset: {}%'.format(dog_detections_human / 10, dog_detections_dog / 10))

# original sample size: first 100 images from each dataset
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
dog_percentage_human = 0
dog_percentage_dog = 0
for i in tqdm(range(100)):
    dog_percentage_human += int(resnet50_dog_detector(human_files_short[i]))
    dog_percentage_dog += int(resnet50_dog_detector(dog_files_short[i]))
print('Dog Percentage in Human dataset: {}% \t Dog Percentage in Dog Dataset: {}%'.format(dog_percentage_human, dog_percentage_dog))
```
---
<a id='step3'></a>
## Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.
We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.
Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">
It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).
Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">
Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">
We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.
Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). You may find [this documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!
```
import numpy as np
from glob import glob
from tqdm import tqdm
import cv2
def get_train_set_info():
    dog_files_train = np.array(glob("/data/dog_images/train/*/*"))
    mean = np.array([0., 0., 0.])
    std = np.array([0., 0., 0.])
    for i in tqdm(range(len(dog_files_train))):
        image = cv2.imread(dog_files_train[i])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = image / 255.0
        mean[0] += np.mean(image[:, :, 0])
        mean[1] += np.mean(image[:, :, 1])
        mean[2] += np.mean(image[:, :, 2])
        std[0] += np.std(image[:, :, 0])
        std[1] += np.std(image[:, :, 1])
        std[2] += np.std(image[:, :, 2])
    mean = mean / len(dog_files_train)
    std = std / len(dog_files_train)
    return mean, std

# hardcoded values (presumably from a prior run of get_train_set_info),
# so the notebook doesn't rescan the whole training set on every execution
mean_train_set, std_train_set = [0.487, 0.467, 0.397], [0.235, 0.23, 0.23]
import torch
from torchvision import datasets,transforms
from torch.utils.data import DataLoader
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
train_dir ='/data/dog_images/train'
valid_dir = '/data/dog_images/valid'
test_dir = '/data/dog_images/test'
train_transforms = transforms.Compose([transforms.RandomResizedCrop(256),
transforms.ColorJitter(saturation=0.2),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean_train_set,std_train_set)])
valid_test_transforms= transforms.Compose([transforms.Resize(300),
transforms.CenterCrop(256),
transforms.ToTensor(),
transforms.Normalize(mean_train_set,std_train_set)])
train_dataset = datasets.ImageFolder(train_dir,transform=train_transforms)
valid_dataset = datasets.ImageFolder(valid_dir,transform=valid_test_transforms)
test_dataset = datasets.ImageFolder(test_dir,transform=valid_test_transforms)
trainloader = DataLoader(train_dataset,batch_size=32,shuffle=True)
validloader = DataLoader(valid_dataset,batch_size=32,shuffle=False)
testloader = DataLoader(test_dataset,batch_size=32,shuffle=False)
loaders_scratch={}
loaders_scratch['train'] = trainloader
loaders_scratch['valid'] = validloader
loaders_scratch['test'] = testloader
use_cuda = torch.cuda.is_available()
```
**Question 3:** Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
- Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?
**Answer**:
### (IMPLEMENTATION) Model Architecture
Create a CNN to classify dog breed. Use the template in the code cell below.
```
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN

    def forward(self, x):
        ## Define forward behavior
        return x

#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()
```
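For readers who want a concrete starting point, here is one plausible way the template could be filled in. This is a sketch, not the required answer: the layer widths, dropout rate, and the assumption of 256x256 inputs (matching the loaders above) are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetSketch(nn.Module):
    """Illustrative 3-conv-block CNN for 133 dog breeds (256x256 RGB input)."""
    def __init__(self, n_classes=133):
        super(NetSketch, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)           # halves H and W each time
        self.fc1 = nn.Linear(64 * 32 * 32, 512)  # 256 -> 128 -> 64 -> 32 after 3 pools
        self.fc2 = nn.Linear(512, n_classes)
        self.dropout = nn.Dropout(0.3)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = x.view(x.size(0), -1)                # flatten to (batch, 64*32*32)
        x = self.dropout(F.relu(self.fc1(x)))
        return self.fc2(x)                       # raw scores; loss applies softmax
```

The output is left as raw class scores because `nn.CrossEntropyLoss` applies log-softmax internally.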
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.
__Answer:__
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch`, and the optimizer as `optimizer_scratch` below.
```
import torch.optim as optim
### TODO: select loss function
criterion_scratch = None
### TODO: select optimizer
optimizer_scratch = None
```
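A common choice for this kind of multi-class problem is cross-entropy loss with SGD. The sketch below uses a dummy linear model named `example_model` (our stand-in, not part of the project) so it runs on its own; the learning rate and momentum are illustrative assumptions, not prescribed values.

```python
import torch.nn as nn
import torch.optim as optim

# `example_model` stands in for model_scratch so this cell is self-contained
example_model = nn.Linear(10, 133)

criterion_example = nn.CrossEntropyLoss()  # standard loss for multi-class classification
optimizer_example = optim.SGD(example_model.parameters(), lr=0.01, momentum=0.9)
```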
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_scratch.pt'`.
```
# the following import is required for training to be robust to truncated images
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf

    for epoch in range(1, n_epochs + 1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0

        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## find the loss and update the model parameters accordingly
            ## record the average training loss, using something like
            ## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))

        ######################
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            ## update the average validation loss

        # print training/validation statistics
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch,
            train_loss,
            valid_loss
            ))

        ## TODO: save the model if validation loss has decreased

    # return trained model
    return model

# train the model
model_scratch = train(100, loaders_scratch, model_scratch, optimizer_scratch,
                      criterion_scratch, use_cuda, 'model_scratch.pt')

# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
```
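The TODOs inside the training loop above amount to one standard gradient step per batch. Here is a minimal, self-contained sketch of that step; the tiny linear model and random batch are placeholders, not the dog data:

```python
import torch
import torch.nn as nn
import torch.optim as optim

def train_step(model, data, target, optimizer, criterion):
    """One gradient update; returns the batch loss as a float."""
    optimizer.zero_grad()             # clear gradients from the previous batch
    output = model(data)              # forward pass
    loss = criterion(output, target)  # compute the batch loss
    loss.backward()                   # backpropagate
    optimizer.step()                  # update parameters
    return loss.item()

# placeholder model and batch, standing in for model_scratch and a loader batch
model = nn.Linear(20, 133)
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
data = torch.randn(8, 20)
target = torch.randint(0, 133, (8,))

batch_loss = train_step(model, data, target, optimizer, criterion)
```

In the validation loop one would instead wrap the forward pass in `torch.no_grad()` and only accumulate the running-average loss shown in the template's comments.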
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
```
def test(loaders, model, criterion, use_cuda):
    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)

    print('Test Loss: {:.6f}\n'.format(test_loss))
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))
# call test function
test(loaders_scratch, model_scratch, criterion_scratch, use_cuda)
```
---
<a id='step4'></a>
## Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.
### (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset
Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively).
If you like, **you are welcome to use the same data loaders from the previous step**, when you created a CNN from scratch.
```
## TODO: Specify data loaders
```
### (IMPLEMENTATION) Model Architecture
Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable `model_transfer`.
```
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
if use_cuda:
    model_transfer = model_transfer.cuda()
```
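One common transfer-learning pattern, sketched here under the assumption of a ResNet-style backbone that ends in an `fc` layer: freeze the pretrained weights and replace only the final layer with a fresh 133-way classifier. The helper name `build_transfer_model` is ours, not part of the template.

```python
import torch.nn as nn

def build_transfer_model(backbone, n_classes=133):
    """Freeze a backbone that ends in an `fc` layer and attach a new head."""
    for param in backbone.parameters():
        param.requires_grad = False  # keep the pretrained features fixed
    # the replacement layer is created unfrozen, so only the head trains
    backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)
    return backbone

# e.g., in the cell above one might write:
# model_transfer = build_transfer_model(models.resnet50(pretrained=True))
```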
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.
__Answer:__
### (IMPLEMENTATION) Specify Loss Function and Optimizer
Use the next code cell to specify a [loss function](http://pytorch.org/docs/master/nn.html#loss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html). Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.
```
criterion_transfer = None
optimizer_transfer = None
```
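As with the scratch model, cross-entropy is a natural loss here. The one transfer-specific wrinkle, sketched below with an illustrative stand-in model (not the real `model_transfer`), is passing only the still-trainable, unfrozen parameters to the optimizer:

```python
import torch.nn as nn
import torch.optim as optim

# stand-in for model_transfer: a "frozen backbone" followed by a trainable head
example_transfer = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 133))
for p in example_transfer[0].parameters():
    p.requires_grad = False  # pretend this layer is the frozen pretrained backbone

criterion_transfer_example = nn.CrossEntropyLoss()
# pass only the parameters that still require gradients to the optimizer
trainable_params = [p for p in example_transfer.parameters() if p.requires_grad]
optimizer_transfer_example = optim.SGD(trainable_params, lr=0.001, momentum=0.9)
```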
### (IMPLEMENTATION) Train and Validate the Model
Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at filepath `'model_transfer.pt'`.
```
# train the model
# model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
#model_transfer.load_state_dict(torch.load('model_transfer.pt'))
```
### (IMPLEMENTATION) Test the Model
Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
```
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
```
### (IMPLEMENTATION) Predict Dog Breed with the Model
Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc) that is predicted by your model.
```
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in data_transfer['train'].classes]
def predict_breed_transfer(img_path):
    # load the image and return the predicted breed
    return None
```
---
<a id='step5'></a>
## Step 5: Write your Algorithm
Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,
- if a __dog__ is detected in the image, return the predicted breed.
- if a __human__ is detected in the image, return the resembling dog breed.
- if __neither__ is detected in the image, provide output that indicates an error.
You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 4 to predict dog breed.
Some sample output for our algorithm is provided below, but feel free to design your own user experience!

### (IMPLEMENTATION) Write your Algorithm
```
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
    ## handle cases for a human face, dog, and neither
```
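The control flow the template asks for can be sketched as follows. The detectors and predictor are taken as parameters so the sketch runs on its own; the notebook version would call `dog_detector`, `face_detector`, and the Step 4 predictor directly:

```python
def run_app_sketch(img_path, is_dog, is_human, predict_breed):
    """Route an image: dog -> breed, human -> look-alike breed, neither -> error."""
    if is_dog(img_path):
        return 'Hello, dog! You look like a {}.'.format(predict_breed(img_path))
    if is_human(img_path):
        return 'Hello, human! You resemble a {}.'.format(predict_breed(img_path))
    return 'Error: neither a dog nor a human was detected in the image.'
```

Checking for a dog first means an image containing both a dog and a person is reported as a dog, which matches the ordering of the three cases above.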
---
<a id='step6'></a>
## Step 6: Test Your Algorithm
In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that _you_ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?
### (IMPLEMENTATION) Test Your Algorithm on Sample Images!
Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.
__Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.
__Answer:__ (Three possible points for improvement)
```
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
```
## Appendix 1: Optional Refresher on the Unix Environment
### A1.1) A Quick Unix Overview
In Jupyter, many of the same Unix commands we use to navigate in the regular terminal can be used. (However, this is not true when we write standalone code outside Jupyter.) As a quick refresher, try each of the following:
```
ls
pwd
cd 2017oct04
```
We're in a new folder now, so issue commands in the next two cells to look at the folder content and list your current path:
```
ls
pwd
```
Now test out a few more things. In the blank cells below, try the following and discuss in your group what each does:
```
ls M52*fit
ls M52-001*fit
ls *V*
```
What does the asterisk symbol * do?
**Answer:** It is a placeholder (wildcard) that matches any sequence of characters in a file or folder name.
Now, return to where you started, by moving up a directory:
(one directory up from where you are is denoted with `..`, while the current directory is denoted with `.`)
```
cd ..
```
### A1.2) A few more helpful commands
#### `mkdir` to *make* a new *dir*ectory:
`mkdir new_project_name`
#### `cp` to copy a file:
`cp existingfile newfilename`
or
`cp existingfile newlocation`
#### `mv` to move or rename a file:
`mv old_filename_oldlocation old_filename_newlocation`
or
`mv old_filename_oldlocation new_filename_oldlocation`
#### `rm` to *PERMANENTLY* delete (remove) a file... (use with caution):
`rm file_I_will_never_see_again`
#### In the six cells below:
(1) Make a new directory, called `temporary`
(2) Go into that new directory
(3) Move the file test_file.txt from the original directory above (`../test_file.txt`) into your current location using the `.`
(4) Create a copy of test_file.txt with a new, different filename of your choice.
(5) Delete the original test_file.txt
(6) Go back up into the original location where this notebook is located.
```
# Make a new directory, "temporary"
# Move into temporary
# Move the test_file.txt into this current location
# Create a copy of the test_file.txt, name the copy however you like
# Delete the original test_file.txt
# Change directories to original location of notebook.
```
If all went according to plan, the following command should show three directories, a zip file, a .png file, this notebook, and the Lab6 notebook:
```
ls
```
And the following command should show the contents of the `temporary` folder: only your new text file (the copy of test_file.txt, whose original is now gone forever):
```
ls ./temporary/
```
## Appendix 2: Optional Refresher on Conditional Statements and Iteration
### A2.1) Conditional Statements
The use of tests or _conditions_ to evaluate variables, values, etc., is a fundamental programming tool. Try executing each of the cells below:
```
2 < 5
3 > 7
x = 11
x > 10
2 * x < x
3.14 <= 3.14 # <= means less than or equal to; >= means greater than or equal to
42 == 42
3e8 != 3e9 # != means "not equal to"
type(True)
```
You see that conditions are either `True` or `False` (with no quotes!). These are the only possible Boolean values (named after the 19th-century mathematician George Boole). In Python, the name Boolean is shortened to the type `bool`. It is the type of the results of true-false conditions or tests.
Now try executing the following two cells at least twice over, with inputs 50 and then 80.
```
temperature = float(input('What is the temperature in Fahrenheit? '))
if temperature > 70:
print('Wear shorts.')
else:
print('Wear long pants.')
```
The four lines in the previous cell are an if-else statement. There are two indented blocks: one comes right after the `if` heading in line 1 and is executed when the condition in the `if` heading is _true_. This is followed by an `else:` in line 3, followed by another indented block that is only executed when the original condition is _false_. In an if-else statement, exactly one of the two indented blocks is executed.
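As an added sketch (not one of the original cells), the same one-branch-runs guarantee extends to chains of conditions written with `elif`:

```python
def clothing(temperature):
    # Exactly one branch runs for any input temperature.
    if temperature > 70:
        return 'Wear shorts.'
    elif temperature > 40:
        return 'Wear long pants.'
    else:
        return 'Wear long pants and a coat.'

print(clothing(80))  # Wear shorts.
print(clothing(55))  # Wear long pants.
print(clothing(20))  # Wear long pants and a coat.
```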
### A2.2) Iteration
Another important component in our arsenal of programming tools is iteration. Iteration means performing an operation repeatedly. We can execute a very simple example at the command line. Let's make a list of objects as follows:
```
names = ['Henrietta', 'Annie', 'Jocelyn', 'Vera']
for n in names:
print('There are ' + str(len(n)) + ' letters in ' + n)
```
This is an example of a for loop. The way a for loop works is as follows. We start with a list of objects -- in this example a list of strings, but it could be anything -- and then we write `for variable in list:`, followed by an indented block of code. The code inside the block is executed once for every item in the list, and each time through, the variable is set equal to the corresponding list item. In this example, the list `names` had four strings in it, so the print statement inside the loop was executed four times. The first time, `n` was set equal to `'Henrietta'`; then `n` was set equal to `'Annie'`, then `'Jocelyn'`, then `'Vera'`.
One of the most common types of loop is where you want to loop over numbers: 0, 1, 2, 3, .... To handle loops of this sort, Python provides a built-in function called `range` that constructs a sequence of numbers to iterate over. The call `range(n)` produces the numbers from 0 to n-1 (in Python 3 it returns a lazy `range` object rather than a list). For example:
```
for i in range(5):
print(i)
```
There are also other ways of iterating, which may be more convenient depending on what you're trying to do. A very common one is the while loop, which does exactly what it sounds like it should: it loops until some condition is met. For example:
```
i = 0 # This starts the initial value off at zero
while i < 11:
print(i)
    i = i + 3 # This adds three to the value of i, then control goes back to the while line to check whether the condition still holds
```
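The same sequence can also be produced with a `for` loop: `range` accepts optional start and step arguments, so the while loop above has a direct `for` equivalent (a small added sketch):

```python
# Prints 0, 3, 6, 9 -- the same values as the while loop above.
# range(start, stop, step) counts from start up to (but not including) stop.
for i in range(0, 11, 3):
    print(i)
```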
| github_jupyter |
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
from malaya_speech.train.model import hubert, ctc
from malaya_speech.train.model.conformer.model import Model as ConformerModel
import malaya_speech
import tensorflow as tf
import numpy as np
import json
from glob import glob
import string
unique_vocab = [''] + list(
string.ascii_lowercase + string.digits
) + [' ']
len(unique_vocab)
X = tf.compat.v1.placeholder(tf.float32, [None, None], name = 'X_placeholder')
X_len = tf.compat.v1.placeholder(tf.int32, [None], name = 'X_len_placeholder')
training = True
class Encoder:
def __init__(self, config):
self.config = config
self.encoder = ConformerModel(**self.config)
def __call__(self, x, input_mask, training = True):
return self.encoder(x, training = training)
config_conformer = malaya_speech.config.conformer_base_encoder_config
config_conformer['subsampling']['type'] = 'none'
config_conformer['dropout'] = 0.0
encoder = Encoder(config_conformer)
cfg = hubert.HuBERTConfig(
extractor_mode='layer_norm',
dropout=0.0,
attention_dropout=0.0,
encoder_layerdrop=0.0,
dropout_input=0.0,
dropout_features=0.0,
final_dim=256,
)
model = hubert.Model(cfg, encoder, ['pad', 'eos', 'unk'] + [str(i) for i in range(100)])
r = model(X, padding_mask = X_len, features_only = True, mask = False)
logits = tf.layers.dense(r['x'], len(unique_vocab) + 1)
seq_lens = tf.reduce_sum(
tf.cast(tf.logical_not(r['padding_mask']), tf.int32), axis = 1
)
logits = tf.transpose(logits, [1, 0, 2])
logits = tf.identity(logits, name = 'logits')
seq_lens = tf.identity(seq_lens, name = 'seq_lens')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_list)
saver.restore(sess, 'hubert-conformer-base-ctc-char/model.ckpt-2000000')
saver = tf.train.Saver()
saver.save(sess, 'output-hubert-conformer-base-ctc/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'placeholder' in n.name
or 'logits' in n.name
or 'seq_lens' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
            "Export directory doesn't exist. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('output-hubert-conformer-base-ctc', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
files = [
'speech/record/savewav_2020-11-26_22-36-06_294832.wav',
'speech/record/savewav_2020-11-26_22-40-56_929661.wav',
'speech/record/675.wav',
'speech/record/664.wav',
'speech/example-speaker/husein-zolkepli.wav',
'speech/example-speaker/mas-aisyah.wav',
'speech/example-speaker/khalil-nooh.wav',
'speech/example-speaker/shafiqah-idayu.wav',
'speech/khutbah/wadi-annuar.wav',
]
ys = [malaya_speech.load(f)[0] for f in files]
padded, lens = malaya_speech.padding.sequence_1d(ys, return_len = True)
g = load_graph('output-hubert-conformer-base-ctc/frozen_model.pb')
input_nodes = [
'X_placeholder',
'X_len_placeholder',
]
output_nodes = [
'logits',
'seq_lens',
]
inputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in input_nodes}
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
test_sess = tf.Session(graph = g)
r = test_sess.run(outputs['logits'], feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
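# --- Optional sketch (added for illustration; not part of the original
# notebook): a greedy CTC decode of the logits fetched above. Assumes `r`
# has shape [time, batch, vocab + 1], with the final index as the CTC blank
# and `unique_vocab` mapping the remaining indices to characters.
def ctc_greedy_decode(logits, vocab):
    blank = logits.shape[-1] - 1
    best = logits.argmax(axis=-1)  # [time, batch]
    texts = []
    for b in range(best.shape[1]):
        prev, chars = None, []
        for t in best[:, b]:
            # collapse repeated predictions, then drop blanks
            if t != prev and t != blank:
                chars.append(vocab[t])
            prev = t
        texts.append(''.join(chars))
    return texts
# e.g. ctc_greedy_decode(r, unique_vocab)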
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-10, fallback_max=10)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'output-hubert-conformer-base-ctc/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
input_nodes,
output_nodes, transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('output-hubert-conformer-base-ctc/frozen_model.pb.quantized')
!tar -czvf output-hubert-conformer-base-ctc-v2.tar.gz output-hubert-conformer-base-ctc
b2_application_key_id = os.environ['b2_application_key_id']
b2_application_key = os.environ['b2_application_key']
from b2sdk.v1 import *
info = InMemoryAccountInfo()
b2_api = B2Api(info)
application_key_id = b2_application_key_id
application_key = b2_application_key
b2_api.authorize_account("production", application_key_id, application_key)
file_info = {'how': 'good-file'}
b2_bucket = b2_api.get_bucket_by_name('malaya-speech-model')
key = 'output-hubert-conformer-base-ctc-v2.tar.gz'
outPutname = "pretrained/output-hubert-conformer-base-ctc-v2.tar.gz"
b2_bucket.upload_local_file(
local_file=key,
file_name=outPutname,
file_infos=file_info,
)
file = 'output-hubert-conformer-base-ctc/frozen_model.pb'
outPutname = 'speech-to-text-ctc-v2/hubert-conformer/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
file = 'output-hubert-conformer-base-ctc/frozen_model.pb.quantized'
outPutname = 'speech-to-text-ctc-v2/hubert-conformer-quantized/model.pb'
b2_bucket.upload_local_file(
local_file=file,
file_name=outPutname,
file_infos=file_info,
)
!rm -rf output-hubert-conformer-base-ctc output-hubert-conformer-base-ctc-v2.tar.gz
```
| github_jupyter |
# Class Project
## GenePattern Single Cell Analysis Workshop
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Log in to GenePattern with your credentials.
</div>
```
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://cloud.genepattern.org/gp", "", ""))
```
For this project, we will use data from the Human Cell Atlas, specifically the project [Single cell transcriptome analysis of human pancreas reveals transcriptional signatures of aging and somatic mutation patterns](https://data.humancellatlas.org/explore/projects/cddab57b-6868-4be4-806f-395ed9dd635a/m/expression-matrices). This project, along with a few others, is available on the Human Cell Atlas Data Browser, and you can experiment with finding other available studies on its [Explore Data Page](https://data.humancellatlas.org/explore/projects?filter=%5B%7B%22facetName%22%3A%22genusSpecies%22%2C%22terms%22%3A%5B%22Homo+sapiens%22%5D%7D%2C%7B%22facetName%22%3A%22organ%22%2C%22terms%22%3A%5B%22pancreas%22%5D%7D%5D); the link above is pre-filtered for human pancreas datasets to make it easy to find the experiment we will use.
From [the project page](https://data.humancellatlas.org/explore/projects/cddab57b-6868-4be4-806f-395ed9dd635a/m/expression-matrices), we can obtain the link to the file which contains the RNA-Seq data in the MTX format:
https://data.humancellatlas.org/project-assets/project-matrices/cddab57b-6868-4be4-806f-395ed9dd635a.homo_sapiens.mtx.zip
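Before handing the matrix to Seurat, it may help to know that MTX is just the sparse MatrixMarket text format, which Python can also read via `scipy.io.mmread`. A self-contained sketch with a tiny inline matrix (an illustration, not a step in this workflow):

```python
from io import StringIO
from scipy.io import mmread

# A minimal MatrixMarket (MTX) file: a sparse 3x3 matrix with two nonzeros,
# listed as "row column value" triples (1-indexed).
mtx_text = """%%MatrixMarket matrix coordinate integer general
3 3 2
1 1 5
3 2 7
"""
m = mmread(StringIO(mtx_text)).tocsr()
print(m.shape)           # (3, 3)
print(m[0, 0], m[2, 1])  # 5 7
```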
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
<ol>
<li>Drag or Copy/Paste the link to the <code>mtx.zip</code> file (above) into the <code>url*</code> parameter below.</li>
<li>Click 'Run'</li>
</ol>
</div>
```
import os
import shutil
import urllib.request
import subprocess
import rpy2
%load_ext nbtools.r_support
@genepattern.build_ui(description="Setup the R and Python environments for the rest of this notebook. Downloads the example dataset to the notebook server.",
parameters={"url":{"name":"url"},
"output_var": {"name":"folder","default":"folder","hide":"true"}
})
def download_MTX_zip(url):
%load_ext rpy2.ipython
print("Retrieving input data...")
base_name = 'temp_data.mtx.zip'
os.makedirs('data/input_data/unzipped_data', exist_ok=True)
urllib.request.urlretrieve(url, f'data/input_data/{base_name}')
subprocess.run(["unzip", f"data/input_data/"+f"{base_name}","-d",f"data/input_data/unzipped_data/"])
folder = os.listdir("data/input_data/unzipped_data/")[0]
folder = os.path.join("data/input_data/unzipped_data",folder)
    # cache the last downloaded copy if it's a repeat (just in case)
unzipPath = "data/input_data/unzipped_data/downloaded_MTX_folder"
if os.path.exists(unzipPath):
if os.path.exists(unzipPath + "_PREVIOUS"):
shutil.rmtree(unzipPath + "_PREVIOUS")
print("Saving old copy of the data as "+unzipPath + "_PREVIOUS")
os.rename(unzipPath, unzipPath + "_PREVIOUS");
os.rename(folder,"data/input_data/unzipped_data/downloaded_MTX_folder")
folder = 'data/input_data/unzipped_data/downloaded_MTX_folder'
print(f'Data unzipped to: {folder}')
print("Done.")
return folder
```
Now we have a copy of the project data as a zipped MTX file on our GenePattern Notebook server. We need to unzip it so that Seurat can read the MTX file.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Input here the directory where your data resides. If you used default parameters, it should be in `data/input_data/unzipped_data/downloaded_MTX_folder`
</div>
```
%%r_build_ui { "name": "Setup Seurat Objects", "parameters": { "data_dir":{"name":"data_dir",},"output_var": { "hide": "True" } } }
setupR <- function(data_dir){
write("Loading libraries...", stdout())
suppressMessages(library(Seurat))
suppressMessages(library(scater))
# Load the dataset
write(c("Reading data from",data_dir), stdout())
suppressMessages(pbmc.data <- Read10X(data.dir = data_dir))
# Initialize the Seurat object with the raw (non-normalized data).
    write("Loading data into Seurat...", stdout())
pbmc <- CreateSeuratObject(counts = pbmc.data, project = "pbmc3k", min.cells = 3, min.features = 200)
write("Done with this step.", stdout())
return(pbmc)
}
suppressMessages(pbmc <- setupR(data_dir))
```
As in the example earlier today, we will want to filter the dataset based on the mitochondrial QC metrics. To make this sort of filtering easy we will add the metrics directly into the Seurat object.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click `Run` to add mitochondrial QC metrics.
</div>
```
%%r_build_ui { "name": "Add Mitochondrial QC Metrics", "parameters": { "column_name": { "type": "string", "default":"percent.mt" },"pattern": { "type": "string", "default":"MT-" }, "output_var": { "hide": "True" } } }
set_mito_qc <- function(colName, pat) {
write("Calculating the frequency of mitochondrial genes...", stdout())
pattern <- paste("^", trimws(pat, which = "both"), sep="")
# The [[ operator can add columns to object metadata. This is a great place to stash QC stats
pbmc[[colName]] <- PercentageFeatureSet(pbmc, pattern = pattern)
write("Done!", stdout())
return(pbmc)
}
suppressMessages(pbmc <- set_mito_qc(column_name, pattern))
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Plot the three features (nFeature_RNA, nCount_RNA, and percent.mt) to decide which filters to use in the next step.
</div>
```
%%r_build_ui -w 800 { "width": 10, "height": 300, "name": "Triple Violin Plot", "parameters": { "first_feature": { "type": "string", "default":"nFeature_RNA" }, "second_feature":{ "type": "string", "default":"nCount_RNA"}, "third_feature": { "type": "string", "default":"percent.mt" }, "output_var":{"hide":"True"} } }
# Visualize QC metrics as a violin plot
#VlnPlot(pbmc, features = c(first_feature, second_feature, third_feature), ncol = 3)
tripleViolin <- function(first, second, third){
feats <- c(first, second, third)
plot(VlnPlot(pbmc, features = feats, ncol = 3, combine=TRUE), fig.height=5, fig.width=15)
return("")
}
tripleViolin(first_feature, second_feature, third_feature)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Based on the three violin plots above, input which filters to apply to the data.
</div>
```
%%r_build_ui { "name": "Subset Data", "parameters": { "min_n_features": { "type": "number", "default":"200" },"max_n_features": { "type": "number", "default":"9000" },"max_percent_mitochondrial": { "type": "number", "default":"50" }, "output_var": { "hide": "True" } } }
my_subset <- function(min_n_features, max_n_features, max_percent_mitochondrial){
# print(pbmc)
pbmc <- subset(pbmc, subset = nFeature_RNA > min_n_features & nFeature_RNA < max_n_features & percent.mt < max_percent_mitochondrial)
# print(pbmc)
write('filtering done!', stdout())
return(pbmc)
}
pbmc <- my_subset(min_n_features, max_n_features, max_percent_mitochondrial)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click run to apply a log-normalization and scaling of the data.
</div>
```
%%r_build_ui { "name": "Normalize", "parameters": { "method": { "type": "string", "default":"LogNormalize" },"scale_factor": { "type": "number", "default":"10000" }, "output_var": { "hide": "True" } } }
norm_pbmc <- function(meth, scale){
write("Normalizing data...", stdout())
invisible(pbmc <- NormalizeData(pbmc, normalization.method = meth, scale.factor = scale, verbose = F))
write('Normalization done!', stdout())
return(pbmc)
}
pbmc <- norm_pbmc(method, scale_factor)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click run to show genes which are highly variable in this experiment. Take note of these genes as they may be helpful in downstream analyses.
</div>
```
%%r_build_ui { "name": "Feature Selection", "parameters": { "method": { "type": "string", "default":"vst","hide":"True" },"num_features": { "type": "number", "default":"2000" }, "num_to_label":{"type": "number", "default": "10", "description": "label the top N features in the plot."}, "output_var": { "hide": "True" } } }
#%%R -w 800 -h 450
feat_sel_plot <- function(meth, nFeat, nLabel){
write("Identifying variable features...", stdout())
invisible(capture.output(pbmc <- FindVariableFeatures(pbmc, selection.method = meth, nfeatures = nFeat,
verbose=F)))
write("Done!", stdout())
# Identify the 10 most highly variable genes
top10 <- head(VariableFeatures(pbmc), nLabel)
# plot variable features with and without labels
invisible(capture.output(plot1 <- VariableFeaturePlot(pbmc)))
invisible(capture.output(plot2 <- LabelPoints(plot = plot1, points = top10, repel = TRUE)))
print(plot2)
#plot(CombinePlots(plots = list(plot1, plot2)))
return(pbmc)
}
pbmc <- feat_sel_plot(method, num_features, num_to_label)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click run to compute PCA.
</div>
```
%%r_build_ui {"name": "Compute PCA", "parameters": {"output_var":{"hide": "True"}}}
myscale <- function(pbmc){
write("Scaling data...", stdout())
all.genes <- rownames(pbmc)
invisible(capture.output(pbmc <- ScaleData(pbmc, features = all.genes, verbose = F)))
write('Done scaling data!', stdout())
feats <- VariableFeatures(object = pbmc, verbose = F)
pbmc <- RunPCA(pbmc, features = feats, nfeatures.print=5, verbose=F)
write('Done computing PCA. Elbow plot below.', stdout())
plot(ElbowPlot(pbmc))
return(pbmc)
}
pbmc <- myscale(pbmc)
```
Now that the PCA results have been added to the Seurat object, we want to save it again so that we can run the SeuratClustering module on the GenePattern server.
#### Why do we run Seurat Clustering on the GenePattern server instead of in this Notebook directly?
*expand this cell to see the hidden answer*
Performing UMAP or TSNE takes more memory and CPU cycles than the simpler analyses we have been doing directly in this notebook. By sending these to the GenePattern module, we can use the cluster behind GenePattern which has machines with more CPU and memory than our local notebook server for the few seconds or minutes the analysis requires, and not the hours that the notebook might be in use, reducing our compute costs while making the analysis run faster.
#### Save and upload the Seurat object
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click 'Run' to save the Seurat object as an RDS file.
</div>
```
%%r_build_ui {"name":"Save preprocessed dataset", "parameters": { "file_name": {"default":"Seurat_preprocessed.rds"}, "output_var": {"hide": "True"} }}
save_it <- function(fileName){
write('Saving file to the notebook workspace. This may take a while (this file may be about 1 GB in size)...', stdout())
saveRDS(pbmc, file = fileName)
print("Saved file!")
return(pbmc)
}
save_it(file_name)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click 'Run' to upload the RDS file to the GenePattern server.
</div>
<div class="well well-sm">
<b>Note:</b> This can potentially take a long time, as the RDS file is nearly 1 GB in size. A pre-generated file is available below if this takes too long or fails.
</div>
```
@nbtools.build_ui(name="Upload file to GenePattern Server", parameters={
"file": {
"name": "File to upload:",
"default":"Seurat_preprocessed.rds"
},
"output_var": {
"name": "results",
"description": "",
"default": "quantification_source",
"hide": True
}
})
def load_file(file):
import genepattern
uio = nbtools.UIOutput()
display(uio)
size = os.path.getsize(file)
print(f'This file size is {round(size/1e6)} MB, it may take a while to upload.')
uio.status = "Uploading..."
uploaded_file = genepattern.session.get(0).upload_file(file_name=os.path.basename(file),file_path=file)
uio.status = "Uploaded!"
display(nbtools.UIOutput(files=[uploaded_file.get_url()]))
return()
```
### Pre-generated RDS file
*Expand this cell to access a pre-generated rds file if the save & upload is taking too long*
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
If the above steps are taking too long (save+upload), feel free to use this file for illustrative purposes. You will need to run the cell below.
</div>
```
display(nbtools.UIOutput(text='Feel free to use this pre-generated RDS file to save time and/or disk space.',
files=['https://datasets.genepattern.org/data/module_support_files/SeuratClustering/Seurat_preprocessed_prebaked.rds']))
```
## Seurat Clustering
Now we will cluster our pancreas data using UMAP.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
<ol>
<li>Select your RDS file (or the pre-baked one) from the `input seurat rds file*` dropdown.</li>
<li>Select `umap` for the `reduction` parameter.</li>
<li>Leave all the other parameters at their defaults (or experiment with them!).</li>
<li>Click 'Run' to start the analysis.</li>
</ol>
</div>
```
seuratclustering_task = gp.GPTask(genepattern.session.get(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00408')
seuratclustering_job_spec = seuratclustering_task.make_job_spec()
seuratclustering_job_spec.set_parameter("input.seurat.rds.file", "")
seuratclustering_job_spec.set_parameter("output.filename", "<input.seurat.rds.file_basename>.clustered")
seuratclustering_job_spec.set_parameter("maximum_dimension", "10")
seuratclustering_job_spec.set_parameter("resolution", "0.5")
seuratclustering_job_spec.set_parameter("reduction", "umap")
seuratclustering_job_spec.set_parameter("job.memory", "2 Gb")
seuratclustering_job_spec.set_parameter("job.queue", "gp-cloud-default")
seuratclustering_job_spec.set_parameter("job.cpuCount", "1")
seuratclustering_job_spec.set_parameter("job.walltime", "02:00:00")
genepattern.display(seuratclustering_task)
```
# Visualize Seurat results
#### Pre-baked results
If your SeuratClustering module is taking too long, you can use our pre-generated results instead of waiting for your analysis to complete.
*Expand this cell to access pre-generated SeuratClustering results instead of the results from your analysis*
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
The cell below (Job #199267) can be used to access the pre-baked results.
- Note: if you wish to use it, make sure you run it (e.g., click on the play symbol ⏯ next to it)
</div>
```
job199267 = gp.GPJob(genepattern.session.get(0), 199267)
genepattern.display(job199267)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Download the CSV file.
</div>
## Download files from GPServer
Since we ran the clustering on the GenePattern server instead of the notebook, now we will bring them back down to the notebook server so that we can visualize them within the notebook. We need to download both the CSV and the RDS files.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
To download the CSV file, choose from the dropdown menu the CSV file that the SeuratClustering module outputs (it should end with `*.clustered.csv`)
- Alternatively, choose the CSV file which includes the term `prebaked` in it.
</div>
<div class="well well-sm">
<b>Note:</b> This takes ~ 1 second.
</div>
```
import os
DownloadJobResultFile_description = "Download file from a GenePattern module job result."
DownloadJobResultFile_parameters = {"file": {"type": "file", "kinds": ["csv", "rds"]}, "output_var": {"hide": True}}
def DownloadJobResultFile(file):
# extract job number and file name from url
job_num = file.split("/")[-2]
remote_file_name = file.split("/")[-1]
# get the job based on the job number passed as the second argument
job = gp.GPJob(genepattern.get_session(0), job_num)
# fetch a specific file from the job
remote_file = job.get_file(remote_file_name)
uio = nbtools.UIOutput(text=file)
display(uio)
uio.status = "Downloading..."
File_Name = os.path.basename(file)
response = remote_file.open()
CHUNK = 16 * 1024
with open(File_Name, 'wb') as f:
while True:
chunk = response.read(CHUNK)
if not chunk:
break
f.write(chunk)
uio.status = "Downloaded!"
print(File_Name)
display(nbtools.UIOutput(files=[File_Name]))
genepattern.GPUIBuilder(DownloadJobResultFile, collapse=False,
name='Download File From GenePattern Server',
description=DownloadJobResultFile_description,
parameters=DownloadJobResultFile_parameters)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
To download the RDS file, choose from the dropdown menu the RDS file that the SeuratClustering module outputs (it should end with `*.clustered.rds`)
- Alternatively, choose the RDS file which includes the term `prebaked` in it.
</div>
<div class="well well-sm">
<b>Note:</b> This takes about 30 seconds on a <i>high speed</i> connection, please be patient.
</div>
```
genepattern.GPUIBuilder(DownloadJobResultFile, collapse=False,
name='Download File From GenePattern Server',
description=DownloadJobResultFile_description,
parameters=DownloadJobResultFile_parameters)
```
## Load data into notebook
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click 'Run' to load the results you just downloaded.
- Note: make sure you type the names of the files downloaded in the two steps above.
</div>
```
%%r_build_ui {"name":"Load dataset with clustering", "parameters": {"RDS_url":{"name":"RDS file","default":"Seurat_preprocessed.clustered.rds.rds",'type':"file"}, "CSV_url":{"name":"CSV_file","type":"file","default":"Seurat_preprocessed.clustered.rds.csv"}, "output_var": {"hide": "True"} }}
load_markers <- function(CSV_url) {
write("Loading cluster markers into notebook...", stdout())
markers <- read.csv(CSV_url)
write("Done!", stdout())
return(markers)
}
load_it <- function(RDS_url){
write("Loading clustering results into notebook...", stdout())
pbmc <- readRDS(file = RDS_url)
write("Loaded file!", stdout())
return(pbmc)
}
suppressWarnings(markers <- load_markers(CSV_url))
pbmc <- load_it(RDS_url)
```
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Click 'Run' to plot the results of the umap clustering.
</div>
```
%%r_build_ui {"name": "Visualize clusters", "parameters": {"output_var": {"hide": "True"}}}
library(Seurat)
do_dim_plot <- function() {
plot(DimPlot(pbmc, reduction = "umap"))
return("")
}
do_dim_plot()
```
We want to see which genes are markers for the clusters we see above. Let's look at the markers.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
<ol>
<li>Select one of the clusters (e.g. 1,2,3).</li>
<li>Click 'Run' to show the top markers for a cluster.</li>
</ol>
</div>
```
%%r_build_ui {"name": "Print top cluster markers", "parameters": {"cluster_number": {}, "output_var": {"hide": "True"}}}
print_top_markers <- function(cluster_number) {
return(head(markers[markers$cluster==cluster_number,], n = 5))
}
print_top_markers(cluster_number)
```
And now let's look at the gene expression for one of these markers.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
<ol>
<li>Enter the gene name for a cluster marker (e.g., PDIA2, CLPS).</li>
<li>Click 'Run' to show a violin plot of its gene expression.</li>
<li>Repeat as desired for several marker genes.</li>
</ol>
</div>
```
%%r_build_ui {"name": "Violin plot of gene expression", "parameters": {"gene": {}, "output_var": {"hide": "True"}}}
do_violin <- function(gene) {
plot(VlnPlot(pbmc, features = c(gene), slot = "counts", log = TRUE))
return("")
}
do_violin(gene)
```
And now let's look at the gene expression for this marker across all clusters.
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
<ol>
<li>Enter the gene name for a cluster marker.</li>
<li>Click 'Run'.</li>
</ol>
</div>
```
%%r_build_ui {"name": "Expression of gene across all clusters", "parameters": {"gene": {}, "output_var": {"hide": "True"}}}
do_umap_gene <- function(gene) {
plot(FeaturePlot(pbmc, features = c(gene)))
return("")
}
do_umap_gene(gene)
```
<div class="well well-sm">
Finally, take a look at a few different genes; some suggestions are MMP7 and MMP2.
- Crawford, Howard C et al. “Matrix metalloproteinase-7 is expressed by pancreatic cancer precursors and regulates acinar-to-ductal metaplasia in exocrine pancreas.” The Journal of clinical investigation vol. 109,11 (2002): 1437-44. doi:10.1172/JCI15051
- Jakubowska, et al. “Expressions of Matrix Metalloproteinases 2, 7, and 9 in Carcinogenesis of Pancreatic Ductal Adenocarcinoma.” Disease Markers, Hindawi, 26 June 2016, www.hindawi.com/journals/dm/2016/9895721/.
</div>
| github_jupyter |
```
import gym
import random
import numpy as np
import time
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from gym.envs.registration import registry, register
from IPython.display import clear_output
try:
register(
id='FrozenLakeNoSlip-v0',
entry_point='gym.envs.toy_text:FrozenLakeEnv',
kwargs={'map_name' : '4x4', 'is_slippery' : False},
max_episode_steps=100,
reward_threshold=0.70, # optimum = 0.74
)
except:
pass
env_name = 'FrozenLakeNoSlip-v0'
env = gym.make(env_name)
print(env.observation_space)
print(env.action_space)
class Agent():
def __init__(self, env):
        self.is_discrete = type(env.action_space) == gym.spaces.discrete.Discrete
if self.is_discrete:
self.action_size = env.action_space.n
print("action size", self.action_size)
else:
self.action_low = env.action_space.low
self.action_high = env.action_space.high
self.action_shape = env.action_space.shape
print("action range", self.action_low, self.action_high)
def get_action(self, state):
if self.is_discrete:
action = random.choice(range(self.action_size))
else:
action = np.random.uniform(self.action_low,
self.action_high,
self.action_shape)
# pole_angle = state[2]
# action = 0 if pole_angle<0 else 1
return action
class QNAgent(Agent):
def __init__(self, env, discount_rate=0.97, learning_rate=0.01):
super().__init__(env)
self.state_size = env.observation_space.n
print("state size", self.state_size)
self.eps = 1.0
self.discount_rate = discount_rate
self.learning_rate = learning_rate
self.build_model()
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
def build_model(self):
tf.reset_default_graph()
self.state_in = tf.placeholder(tf.int32, shape=[1])
self.action_in = tf.placeholder(tf.int32, shape=[1])
self.target_in = tf.placeholder(tf.float32, shape=[1])
self.state = tf.one_hot(self.state_in, depth=self.state_size)
self.action = tf.one_hot(self.action_in, depth=self.action_size)
self.q_state = tf.layers.dense(self.state, units=self.action_size, name='q_table')
self.q_action = tf.reduce_sum(tf.multiply(self.q_state, self.action), axis=1)
self.loss = tf.reduce_sum(tf.square(self.target_in - self.q_action))
self.optimizer = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss)
def get_action(self, state):
q_state = self.sess.run(self.q_state, feed_dict={self.state_in: [state]})
action_greedy = np.argmax(q_state)
action_random = super().get_action(state)
return action_random if random.random() < self.eps else action_greedy
def train(self, experience):
state, action, next_state, reward, done = [[exp] for exp in experience]
q_next = self.sess.run(self.q_state, feed_dict={self.state_in: next_state})
q_next[done] = np.zeros([self.action_size])
q_target = reward + self.discount_rate * np.max(q_next)
feed = {self.state_in: state, self.action_in: action, self.target_in: q_target}
self.sess.run(self.optimizer, feed_dict=feed)
if experience[4]:
self.eps = self.eps * 0.99
def __del__(self):
self.sess.close()
agent = QNAgent(env)
total_reward = 0
for ep in range(100):
state = env.reset()
done = False
while not done:
action = agent.get_action(state)
next_state, reward, done, info = env.step(action)
agent.train((state, action, next_state, reward, done))
state = next_state
total_reward += reward
print("s", state, "a", action)
print("Episode: {}, Total Reward: {}, eps: {}".format(ep, total_reward, agent.eps))
env.render()
with tf.variable_scope('q_table', reuse=True):
weights = agent.sess.run(tf.get_variable("kernel"))
print(weights)
time.sleep(0.05)
clear_output(wait=True)
env.close()
```
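The target computed in `train` above is the one-step Bellman backup: reward plus the discounted maximum Q-value of the next state, with the bootstrap term zeroed on terminal transitions. A minimal sketch of the same update rule — the rewards and Q-values below are made up, purely for illustration:

```python
# One-step Q-learning target, mirroring the update in QNAgent.train.
# The rewards and Q-values below are hypothetical, for illustration only.
def q_learning_target(reward, q_next, done, discount_rate=0.97):
    # On a terminal transition the bootstrap term is dropped,
    # just like q_next[done] = zeros in the agent code.
    bootstrap = 0.0 if done else max(q_next)
    return reward + discount_rate * bootstrap

target_mid = q_learning_target(0.0, [0.1, 0.5, 0.2, 0.0], done=False)  # 0.97 * 0.5
target_end = q_learning_target(1.0, [0.9, 0.3, 0.0, 0.0], done=True)   # reward only
```

The dense layer in the agent then regresses `q_action` toward this target via the squared loss.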
<h1> Logistic Regression using Spark ML </h1>
Set up bucket
```
import os
BUCKET='cloud-training-demos-ml' # CHANGE ME
os.environ['BUCKET'] = BUCKET
# Create spark session
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print spark
print sc
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
```
<h2> Read dataset </h2>
```
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th; you may have to change this to find a shard that has training data
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
```
<h2> Clean up </h2>
```
trainquery = """
SELECT
f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print traindata.head(2) # if this is empty, try changing the shard you are using.
traindata.describe().show()
```
Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
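The Spark SQL query below applies this rule with `IS NOT NULL`. The same cleanup, sketched in plain Python over a few hypothetical rows (the values are made up), just to make the rule explicit:

```python
# Hypothetical flight rows; None stands in for a missing delay value,
# as happens for canceled or diverted flights.
rows = [
    {'DEP_DELAY': 5.0,  'ARR_DELAY': 12.0},
    {'DEP_DELAY': None, 'ARR_DELAY': None},   # canceled
    {'DEP_DELAY': -3.0, 'ARR_DELAY': None},   # diverted
]
clean = [r for r in rows
         if r['DEP_DELAY'] is not None and r['ARR_DELAY'] is not None]
```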
<h2> Logistic regression </h2>
```
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.dep_delay IS NOT NULL AND
f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.CANCELLED == '0.00' AND
f.DIVERTED == '0.00'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print lrmodel.weights,lrmodel.intercept
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.clearThreshold()
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
```
<h2> Predict with the model </h2>
First save the model
```
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print '{} saved'.format(MODEL_FILE)
lrmodel = 0
print lrmodel
```
Now retrieve the model
```
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print lrmodel.predict([36.0,12.0,594.0])
print lrmodel.predict([8.0,4.0,594.0])
```
<h2> Examine the model behavior </h2>
For dep_delay=20 and taxiout=10, how does the distance affect prediction?
```
lrmodel.clearThreshold() # to make the model produce probabilities
print lrmodel.predict([20, 10, 500])
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
```
<h2> Evaluate model </h2>
Evaluate on the test data
```
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # you may have to change this to find a shard that has test data
flights = spark.read\
.schema(schema)\
.csv(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print testquery
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
cancel = labelpred.filter(lambda (label, pred): pred < 0.7)
nocancel = labelpred.filter(lambda (label, pred): pred >= 0.7)
corr_cancel = cancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print 'All flights:'
print eval(labelpred)
# keep only those examples near the decision threshold
print 'Flights near decision threshold:'
labelpred = labelpred.filter(lambda (label, pred): pred > 0.65 and pred < 0.75)
print eval(labelpred)
```
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
import pandas as pd
import warnings
import altair as alt
from urllib import request
import json
# fetch & enable a Spanish timeFormat locale.
with request.urlopen('https://raw.githubusercontent.com/d3/d3-time-format/master/locale/es-ES.json') as f:
es_time_format = json.load(f)
alt.renderers.set_embed_options(timeFormatLocale=es_time_format)
#warnings.filterwarnings('ignore')
df = pd.read_csv('https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv')
country = 'Mexico'
region = 'Mexico City'
sub_df = df[(df['country_region']== country) & (df['sub_region_1']==region)]
sub_df.loc[:,'date'] = pd.to_datetime(sub_df.loc[:,'date'])
# Change date here
sub_df = sub_df[(sub_df['date'] > '2020-02-15') & (sub_df['date'] < '2020-11-17')]
sub_df = sub_df.sort_values('date', ascending=True)
%run urban_theme.py
retail_recretation = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('retail_and_recreation_percent_change_from_baseline:Q', title = " "),
).properties(
title = "Lugares de ocio",
width = 450,
height = 250
)
grocery_pharmacy = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('grocery_and_pharmacy_percent_change_from_baseline:Q', title = " "),
).properties(
title = "Mercados y farmacias",
width = 450,
height = 250
)
parks = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('parks_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Parques y playas",
width = 450,
height = 250
)
transit = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('transit_stations_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Transporte público",
width = 450,
height = 250
)
workplaces = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('workplaces_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Lugares de trabajo",
width = 450,
height = 250
)
residential = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('residential_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Residenciales",
width = 450,
height = 250
)
par1 = retail_recretation | grocery_pharmacy | parks
par2 = transit | workplaces | residential
mobility = par1 & par2
# Title of the concatenated graph
mobility_1 = mobility.properties(
title={
"text":["Movilidad en CDMX"],
"subtitle": ["Datos del 15 de febrero al 15 de noviembre de 2020.", " "],
}
)
#Add footer
alt.concat(mobility_1).properties(
title=alt.TitleParams(
['Fuente: Elaboración propia con datos del COVID-19 Community Mobility Reports de Google.', 'Jay Ballesteros (@jballesterosc_)'],
baseline='bottom',
orient='bottom',
anchor='end',
fontWeight='normal',
fontSize=12
)
)
```
```
%cd ~/NetBeansProjects/ExpLosion/
from notebooks.common_imports import *
from gui.output_utils import *
from gui.user_code import pairwise_significance_exp_ids
query = {'expansions__decode_handler': 'SignifiedOnlyFeatureHandler',
'expansions__vectors__dimensionality': 100,
'expansions__vectors__rep': 0,
'expansions__vectors__unlabelled': 'turian'}
ids = Experiment.objects.filter(**query).values_list('id', flat=True)
print('ids are', ids)
df = dataframe_from_exp_ids(ids, {'Algorithm':'expansions__vectors__algorithm',
'Composer':'expansions__vectors__composer',
'Features': 'document_features_tr'})
ids = list(ids.values_list('id', flat=True))
for eid in ids + [1]:
exp = Experiment.objects.get(id=eid)
mean, low, high, _ = get_ci(eid)
print('%s & %.2f$\pm$%.2f \\\\'%(exp.expansions.vectors.composer, mean, (high-low)/2))
pairwise_significance_exp_ids(zip(ids, [1]*len(ids)), name_format=['expansions__vectors__composer'])
df.head()
def f1(x):
return '%1.2f' % x
# ddf = df.drop('folds', axis=1).groupby(['Composer', 'k']).agg([np.mean, np.std])
# ddf.columns = ddf.columns.droplevel(0)#.reset_index()
# ddf['Accuracy'] = ddf['mean'].map(f1) + "$\pm$" + ddf['std'].map(f1)
# ddf = ddf.drop(['mean', 'std'], axis=1).reset_index()
# print(ddf.pivot_table(values='Accuracy', index='k',
# columns='Composer', aggfunc=lambda x: x).to_latex(escape=False))
ddf = df.drop(['folds', 'Algorithm'], axis=1).groupby(['Composer', 'Features']).agg('mean').reset_index() # no need to drop unwanted columns
res = ddf.pivot_table(values='Accuracy', index='Composer', columns='Features')
print(res.to_latex(float_format=f1, na_rep='N/A'))
res.T
del res.index.name
del res.columns.name
for c in res.columns:
print(res[[c]].to_latex(float_format=f1, na_rep='N/A'))
res[[c]]
```
# Compare to word2vec qualitatively
```
from discoutils.thesaurus_loader import Vectors
from discoutils.tokens import DocumentFeature
v1 = Vectors.from_tsv('../FeatureExtractionToolkit/socher_vectors/turian_unigrams.h5')
v1.init_sims(n_neighbors=25)
v2 = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-15perc.unigr.strings.rep0')
v2.init_sims(n_neighbors=25)
def compare_neighbours(vectors, names, words=[], n_neighbours=5):
if not words:
words = random.sample([x for x in vectors[0].keys() if not x.count('_')], 10)
words_clean = [DocumentFeature.from_string(w).tokens[0].text for w in words]
data = []
for w, w_clearn in zip(words, words_clean):
this_row = []
for v in vectors:
neigh = v.get_nearest_neighbours(w)
# remove neigh with the same PoS (for turian)
new_neigh = []
for n, _ in neigh:
n1 = DocumentFeature.from_string(n).tokens[0].text
# print(n, n1)
if n1 not in new_neigh:
if n1 != w_clearn:
new_neigh.append(n1)
# print(new_neigh)
if neigh:
this_row.append(', '.join(n for n in new_neigh[:n_neighbours]))
else:
this_row.append(None)
data.append(this_row)
return pd.DataFrame(data, index=words_clean, columns=names)
# bunch of random words contained in both
words = 'andrade/N giant/J seize/V fundamental/J affidavit/N claim/V sikh/N rest/V israel/N arrow/N preventative/J torrential/J'.split()
df = compare_neighbours([v1, v2], ['turian', 'w2v'], words, n_neighbours=5)
df.to_csv('turian_vs_w2v.csv')
df
print(pd.DataFrame(df.turian).to_latex())
print(pd.DataFrame(df['w2v']).to_latex())
```
# How many words of each PoS type are there in turian's unigrams
Code below largely copied from `socher_vectors.py`
```
from scipy.io import loadmat
mat = loadmat('../FeatureExtractionToolkit/socher_vectors/vars.normalized.100.mat')
words = [w[0] for w in mat['words'].ravel()]
import nltk
from nltk import WordNetLemmatizer
import string
from collections import defaultdict
lmtzr = WordNetLemmatizer()
clean_to_dirty = defaultdict(list) # canonical -> [non-canonical]
dirty_to_clean = dict() # non-canonical -> canonical
to_keep = set() # which non-canonical forms forms we will keep
# todo this can be done based on frequency or something
for w in words:
if set(w).intersection(set(string.punctuation).union(set('0123456789'))):
# not a real word- contains digits or punctuation
continue
lemma = lmtzr.lemmatize(w.lower())
clean_to_dirty[lemma].append(w)
dirty_to_clean[w] = lemma
# decide which of possibly many non-canonical forms with the same lemma to keep
# prefer shorter and lowercased non-canonical forms
for lemma, dirty_list in clean_to_dirty.items():
if len(dirty_list) > 1:
best_lemma = min(dirty_list, key=lambda w: (len(w), not w.islower()))
else:
best_lemma = dirty_list[0]
to_keep.add(best_lemma)
pos_tagged = [nltk.pos_tag([w]) for w in to_keep]
from collections import defaultdict
pos_coarsification_map = defaultdict(lambda: "UNK")
pos_coarsification_map.update({"JJ": "J",
"JJN": "J",
"JJS": "J",
"JJR": "J",
"VB": "V",
"VBD": "V",
"VBG": "V",
"VBN": "V",
"VBP": "V",
"VBZ": "V",
"NN": "N",
"NNS": "N",
"NNP": "N",
"NPS": "N",
"NP": "N",
"RB": "RB",
"RBR": "RB",
"RBS": "RB",
"DT": "DET",
"WDT": "DET",
"IN": "CONJ",
"CC": "CONJ",
"PRP": "PRON",
"PRP$": "PRON",
"WP": "PRON",
"WP$": "PRON",
".": "PUNCT",
":": "PUNCT",
"": "PUNCT",
"'": "PUNCT",
"\"": "PUNCT",
"-LRB-": "PUNCT",
"-RRB-": "PUNCT"})
pos_tags = [pos_coarsification_map[x[0][1]] for x in pos_tagged]
from collections import Counter
Counter(pos_tags)
```
# Let's look at the embeddings
```
from sklearn.manifold import TSNE
from sklearn.preprocessing import normalize
sns.set_style('white')
def draw_tsne_embeddings(v):
# pairs of words from Mikolov et al (2013)- Distributed word reprs and their compositionality
# ignored some pairs that do not have a vectors
words = 'china/N beijing/N russia/N moscow/N japan/N tokyo/N turkey/N ankara/N france/N \
paris/N italy/N rome/N greece/N athens/N germany/N berlin/N portugal/N lisbon/N spain/N madrid/N'.split()
mat = np.vstack([v.get_vector(w).A for w in words])
reduced = TSNE(init='pca').fit_transform(normalize(mat))
# ax = plt.fig
plt.scatter(reduced[:, 0], reduced[:, 1]);
# point labels
for i, txt in enumerate(words):
plt.annotate(txt, (reduced[i, 0], reduced[i, 1]), fontsize=20);
# lines between country-capital pairs
for i in range(len(words)):
if i %2 != 0:
continue
plt.plot([reduced[i, 0], reduced[i+1, 0]],
[reduced[i, 1], reduced[i+1, 1]], alpha=0.5, color='black')
# remove all junk from plot
sns.despine(left=True, bottom=True)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-100perc.unigr.strings.rep0') # look very good
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-w2v-wiki.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-gigaw-100perc.unigr.strings.rep0') # ok
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-w2v-gigaw.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/socher_vectors/turian_unigrams.h5') # terrible
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-turian.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/glove/vectors.miro.h5') # terrible
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-glove-wiki.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
```
# Loops
Looping `for` a `while`
## `for` loops
```
for i in [0, 1, 2]:
print("i is", i)
for i in range(0, 3):
print("i is", i)
for x in [10, 15, 2020]:
print("x is", x)
```
```python
for i in ...:
    print("Congratulations")
```
How can this be executed 10 times? A whole range of solutions is possible...
```
for i in range(10):
    print("Congratulations!")
```
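A few more solutions from that range of possibilities — any ten-element iterable works, and so does a `while` loop with an explicit counter:

```python
# Three equivalent ways to run a block ten times.
count_a = 0
for i in range(10):          # the usual way
    count_a += 1

count_b = 0
for i in [0] * 10:           # any 10-element iterable works
    count_b += 1

count_c = 0
i = 0
while i < 10:                # while loop with an explicit counter
    count_c += 1
    i += 1
```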
## `for` fun(ctions)
```
def fun1():
for i in range(0, 3):
print("i is", i)
return
def fun2():
for i in range(0, 3):
print("i is", i)
return
```
## More `for`
```
def fun1B():
for i in range(1, 6):
if i % 2 == 0:
print("i is", i)
return
fun1B()
def fun2B():
for i in range(1, 6):
if i % 2 == 0:
print("i is", i)
return
fun2B()
def fun3B():
for i in range(1,6):
if i % 2 == 0:
print("i is", i)
return
fun3B()
```
```python
def fun3B():
for i in range(1,6):
if i % 2 == 0:
print("i is", i)
return
```

## Hmmm
Iterative solutions
Repeating blocks until a condition is reached: loops!
Compute the factorial of a number
```asm
00 read r1
01 setn r13 1
02 jeqzn r1 6
03 mul r13 r13 r1
04 addn r1 -1
05 jumpn 02
06 write r13
07 halt
```
- `02` until `r1` equals 0 ...
- `03` multiply `r13` by `r1` and overwrite `r13` with the result
- `04` decrease `r1` by 1
- `05` repeat
### Thinking Hmmm in Python
```python
def fac(x): # 00 read r1
result = 1 # 01 setn r13 1
while x != 0: # 02 jeqzn r1 6
result *= x # 03 mul r13 r13 r1
x -= 1 # 04 addn r1 -1
# 05 jumpn 02
return result # 06 write r13
# 07 halt
```
We now have the advantages of *explicit* loops and self-contained functions!
### Recursive version
```
def fac(x):
    """factorial, recursive version
    """
if x == 0:
return 1
else:
return x * fac(x - 1)
```
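A quick sanity check (a small addition, not part of the original slides) that this recursive version agrees with the iterative one shown earlier:

```python
def fac_iter(x):
    result = 1
    while x != 0:
        result *= x
        x -= 1
    return result

def fac_rec(x):
    return 1 if x == 0 else x * fac_rec(x - 1)

# both versions agree on small inputs
checks = [fac_iter(n) == fac_rec(n) for n in range(8)]
```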
```asm
00 read r1
01 setn r15 42
02 call r14 5
03 jump 21
04 nop
05 jnezn r1 8
06 setn r13 1
07 jumpr r14
08 storer r1 r15
09 addn r15 1
10 storer r14 r15
11 addn r15 1
12 addn r1 -1
13 call r14 5
14 addn r15 -1
15 loadr r14 r15
16 addn r15 -1
17 loadr r1 r15
18 mul r13 r13 r1
19 jumpr r14
20 nop
21 write r13
22 halt
```
## Iterative design
Loops! Variables!
`for`
```python
for x in [40, 41, 42]:
print(x)
```
`while`
```python
x = 42
while x > 0:
print(x)
x -= 1
```
Variables
```python
x = 41
x += 1 # addn r1 1
```
### Variables
Vary!
```python
age = 41
```
```python
age = age + 1
```
Which is the same as ...
```python
age += 1
```
### In short
```python
hwToGo = 7
hwToGo = hwToGo - 1
```
```python
hwToGo -= 1
```
```python
total = 21000000
total = total * 2
```
```python
total *= 2
```
```python
u235 = 84000000000000000;
u235 = u235 / 2
```
```python
u235 /= 2
```
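The shorthand operators in action — note that `/=` performs true division in Python 3, so the result is a `float`:

```python
total = 21000000
total *= 2          # total = total * 2

hw_to_go = 7
hw_to_go -= 1       # hw_to_go = hw_to_go - 1

u235 = 84000000000000000
u235 /= 2           # true division: the result is a float
```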
## `for`!
```python
for x in [2, 4, 6, 8]:
print("x is", x)
print("Done!")
```
### Step by step
1. assign each element to `x`
```python
for x in [2, 4, 6, 8]:
```
2. the BODY or BLOCK uses the value of `x`
3. and continue the loop with the next element
```python
print("x is", x)
```
4. code after the loop is only executed once the loop has finished!
```python
print("Done!")
```
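The four numbered steps above can be made visible by replacing the prints with appends, so the order of execution is recorded:

```python
seen = []
for x in [2, 4, 6, 8]:       # step 1: assign each element to x
    seen.append(x)           # steps 2-3: body runs, loop continues
seen.append('Done!')         # step 4: only after the loop finishes
```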
### Factorial with `for`
```
def fac(n):
result = 1 # not the result yet!
for i in range(1, n + 1): # range start at 0, add one!
result *= i # result = result * i
return result # return the result
fac(5)
```
## Quiz
```python
x = 0
for i in range(4):
x += 10
print(x)
```
What is printed for `x`?
### Solution
`40`
```python
S = "time to think this over! "
result = ""
for i in range(len(S)):
if S[i - 1] == " ":
result += S[i]
print(result)
```
What is printed for `result`?
### Solution
`'tttto'`
| `result` | `S[i - 1]` | `S[i]` | `i` |
|----------|------------|--------|-----|
| `'t'` | `' '` | `'t'` | `0` |
| | `'t'` | `'i'` | `1` |
| | `'i'` | `'m'` | `2` |
| | `'m'` | `'e'` | `3` |
| | `'e'` | `' '` | `4` |
| `'tt'` | `' '` | `'t'` | `5` |
| | `'t'` | `'o'` | `6` |
| | `'o'` | `' '` | `7` |
| `'ttt'` | `' '` | `'t'` | `8` |
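The trace can be verified by simply running the snippet; at `i = 0`, `S[i - 1]` reads `S[-1]`, the final space, which is why the first `'t'` makes it into the result:

```python
S = "time to think this over! "
result = ""
for i in range(len(S)):
    if S[i - 1] == " ":      # at i == 0 this reads S[-1], the last char
        result += S[i]
```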
## Two kinds of `for`
Elements versus index
### By element
```python
L = [3, 15, 17, 7]
for x in L:
print(x)
```
```python
S = "a fine loop"
for c in S:
print(c)
```
### By index
```python
L = [3, 15, 17, 7]
for i in range(len(L)):
print(L[i])
```
```python
S = "a fine loop"
for i in range(len(S)):
print(S[i])
```
Note: printing inside a loop is not very common, but it is very useful for debugging!
### Which of the two?
Elements: simpler
Indices: more flexible
Think of the "time to think this over! " example: inside the loop, the index made it possible to look "back" in the string with `S[i - 1]`!
### Or ... both?
Index and element? With the built-in function [`enumerate`](https://docs.python.org/3/library/functions.html#enumerate)
```
L = [3, 15, 17, 7]
for i, x in enumerate(L):
print("index", i, "element", x)
```
Perhaps you have already come across `enumerate`? The chance is certainly high if you search the web, which is why we briefly pause on it here.
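`enumerate` also accepts an optional `start` argument, which is handy when you want the index count to begin at 1:

```python
L = [3, 15, 17, 7]
pairs = list(enumerate(L, start=1))   # index counting starts at 1
```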
### A small detour
Welcome to the course Advanced Python!
```
a, b = [1, 2]
print(a)
print(b)
```
This technique is called "tuple unpacking". Perhaps you have heard the term "tuple" before (or seen it in an error message!). A tuple is a type we will not cover further, but think of it as a list with a fixed length (a list has no fixed length; you can add elements to it and remove elements from it).
A tuple behaves much like a list and is often applied "invisibly" by Python, for example
```
x = 3, 4
type(x)
x
```
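Like a list, a tuple can be indexed and iterated over, but its contents are fixed — assigning to an element raises a `TypeError`:

```python
x = 3, 4                 # Python packs this into a tuple
first = x[0]             # indexing works like a list
length = len(x)
try:
    x[0] = 99            # tuples are immutable
    mutated = True
except TypeError:
    mutated = False
```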
### Question
```python
a = 1
b = 2
```
Which operation(s) are needed to swap the values, so that `a` holds the value of `b` and `b` the value of `a`?
Suppose you have a cup of coffee and a cup of water; how can you swap the contents of the cups?
```
a, b = b, a
print(a)
print(b)
```
Python makes it easy for you: no third variable is needed for a swap!
### Unpacking `LoL`s
`enumerate` returns a `LoL`!
Although, technically it is a `LoT`, a List of Tuples ...
```
W = [[5, "jan"], [12, "feb"], [15, "mar"]]
for temp, month in W:
print("month", month, "avg temp", temp)
```
An alternative approach
```python
for x in W:
temp, month = x
print("month", month, "avg temp", temp)
```
## Extreme loops
```python
guess = 42
print("It keeps on")
while guess == 42:
print("going and")
print("Phew! I'm done!")
```
What does this loop do?
## `while` loops
A loop until a certain **condition** is reached.
Tests?
- `42 == 42`
- `guess > 42`
- ...
### Escaping
```
import random
guess = 0 # starting value, not the final or desired value!
while guess != 42: # test to see if we keep looping
print('Help! Let me out!')
guess = random.choice([41, 42, 43]) # watch out for infinite loops!
print('At last!') # after the loop ends
```
### Simulation
```python
def guess(hidden):
    """The computer guesses a secret number
    """
    comp_guess = choice(list(range(100))) # [0, ..., 99]
    print("I chose", comp_guess)          # print the guess
    time.sleep(0.5)                       # pause for half a second
    if comp_guess == hidden:              # base case, at last...
        print("Found it!")                # the computer is happy :)
        return 1                          # attempt
    else:                                 # recursive case
        return 1 + guess(hidden)          # next attempt!
```
```
from random import choice
def escape(hidden):
guess = 0
count = 0
while guess != hidden:
guess = choice(range(100))
count += 1
return count
LC = [escape(42) for i in range(1000)]
```
Look at the first 10 results
```
LC[:10]
```
The fastest guess
```
min(LC)
```
The least luck
```
max(LC)
```
Average number of attempts
```
sum(LC)/len(LC)
```
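The simulated average should land close to the theoretical value: each guess succeeds with probability 1/100, so the number of attempts follows a geometric distribution with mean 100. A small check (with a fixed seed, so the run is reproducible):

```python
from random import Random

p = 1 / 100
expected_attempts = 1 / p          # mean of a geometric distribution

rng = Random(0)                    # fixed seed for reproducibility

def escape_seeded(hidden):
    count = 0
    guess = None
    while guess != hidden:
        guess = rng.choice(range(100))
        count += 1
    return count

avg = sum(escape_seeded(42) for _ in range(1000)) / 1000
```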
```
import numpy as np
import pandas as pd
import gc
import matplotlib.pyplot as plt
import seaborn as sns
```
#### <font color='darkblue'> Doing these exercises, we had to be very careful about managing the RAM of our personal computers, since we ran this code with only 8 GB of RAM. So we deleted each dataframe as soon as we no longer needed it, and we also used the gc.collect() function. For every exercise, we loaded only the part we needed from the main dataset.</font>
### <font color='purple'>Question 1:</font>
#### <font color='purple'>A marketing funnel describes your customer’s journey with your e-commerce site. It may involve different stages, from when someone learns about your business and visits your website for the first time through to the purchasing stage; marketing funnels map routes to conversion and beyond. Suppose your funnel involves just three simple steps: 1) view, 2) cart, 3) purchase. Which is the rate of complete funnels?</font>
```
chunks_oct = pd.read_csv('2019-Oct.csv',chunksize=1000000, header='infer',usecols=["event_type",
"product_id","user_id"])
chunks_nov = pd.read_csv('2019-Nov.csv',chunksize=1000000, header='infer',usecols=["event_type",
"product_id","user_id"])
dataset=pd.DataFrame()
for chunk in chunks_oct:
chunk = chunk[chunk["event_type"]!="view"]
dataset= pd.concat([dataset,chunk])
for chunk in chunks_nov:
chunk = chunk[chunk["event_type"]!="view"]
dataset= pd.concat([dataset,chunk])
# rate of complete funnels :
# In order to have a complete funnel, we need a certain user to execute a "purchase" action after having executed
# a "cart" action. The "view" action has obviously happened before, so there is no need to consider it. This is why we will
# work with a database containing only the "purchase" and "cart" actions. We will count one complete funnel everytime a
# user performs a "purchase" action on a product, but only if the same user previously performed a "cart" action on the same
# product.
x=dataset.groupby(["user_id","product_id"]) # we group the actions of the same user on the same product
complete_funnels=0
for name,group in x:
cart=0
purchase=0
for i in range(group.shape[0]):
event = group.iloc[i]["event_type"]
if event == "cart":
cart=cart+1
elif event == "purchase" and cart>purchase: #we accept a "purchase" only if a "cart" event has happened before
purchase=purchase+1
complete_funnels=complete_funnels+purchase
complete_funnels
```
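The counting rule can be illustrated on a toy event sequence (the events below are made up): a "purchase" only completes a funnel when it is backed by a prior, not-yet-matched "cart" for the same user/product pair.

```python
# Hypothetical event stream for a single (user, product) pair;
# the same rule as in the grouped loop above.
events = ['cart', 'purchase', 'purchase', 'cart', 'cart', 'purchase']

cart = purchase = 0
for event in events:
    if event == 'cart':
        cart += 1
    elif event == 'purchase' and cart > purchase:
        purchase += 1   # only count a purchase backed by a prior cart

complete_funnels = purchase
```

Here the second consecutive "purchase" is not counted, because it has no unmatched "cart" before it.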
### <font color='purple'>Question 1:</font>
#### <font color='purple'>What’s the operation users repeat most on average within a session? Produce a plot that shows the average number of times users perform each operation (view, remove_from_cart, etc.).</font>
```
dataset=pd.DataFrame()
df_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=[ "event_type","user_session"])
gc.collect()
df_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=[ "event_type","user_session"])
dataset= pd.concat([df_oct,df_nov])
dataset.info()
n_session = dataset['user_session'].nunique() # we calculate the total number of sessions
actions = dataset['event_type'].value_counts() # we calculate the number of each action
avg_actions = actions/n_session
avg_actions.values
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(5.7, 6.27)
ax = sns.barplot(avg_actions.index,avg_actions.values)
ax.set(xlabel=' Operations', ylabel='Avg. performs')
plt.title(" Average number of times users perform each operation in a session")
plt.show()
```
#### <font color='blue'>As expected, the "view" actions are way more common than the "cart" and "purchase" actions.</font>
### <font color='purple'>Question 1:</font>
#### <font color='purple'>How many times, on average, a user views a product before adding it to the cart?</font>
```
dataset=pd.DataFrame()
df_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=[ "event_type"])
df_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=[ "event_type"])
dataset= pd.concat([df_oct,df_nov])
actions['view']/actions['cart']
```
### <font color='purple'>Question 1:</font>
#### <font color='purple'>What’s the probability that products added once to the cart are effectively bought?</font>
```
# We need to calculate in which percentage a "cart" action actually becomes a "complete funnel". Again, we consider
# that the "view" event has obviously happened before.
print("Percentage probability that products added once to the cart are effectively bought:")
complete_funnels/actions['cart']*100
```
### <font color='purple'>Question 1:</font>
#### <font color='purple'>What’s the average time an item stays in the cart before being removed?</font>
```
# For this question, we will work with a database which doesn't contain the "view" actions. Since there is no
# "remove_from_cart" event_type, we will calculate the average time difference between the first "cart" action of a user
# on a product and the first "purchase" action, since purchasing the item removes it from the cart
dataset=pd.DataFrame()
chunks_oct = pd.read_csv('2019-Oct.csv',chunksize=1000000, header='infer',usecols=["event_time", "event_type",
"product_id","user_id"])
chunks_nov = pd.read_csv('2019-Nov.csv',chunksize=1000000, header='infer',usecols=["event_time", "event_type",
"product_id","user_id"])
for chunk in chunks_oct:
chunk = chunk[chunk["event_type"]!="view"]
chunk["event_time"]= pd.to_datetime(chunk["event_time"])
dataset= pd.concat([dataset,chunk])
for chunk in chunks_nov:
chunk = chunk[chunk["event_type"]!="view"]
chunk["event_time"]= pd.to_datetime(chunk["event_time"])
dataset= pd.concat([dataset,chunk])
z=dataset.groupby(["user_id","product_id"]) # we group the actions of the same user on the same product
tot_time=list() # the "time passed between the first "cart" and a "purchase"
for name,group in z:
added= False # this condition will turn True when the product is added for the first time by the user
bought = False # this condition will turn True when the product is bought by the user
for i in range(group.shape[0]):
event = group.iloc[i]["event_type"]
if event == "cart" and not added:
add_time= group.iloc[i]["event_time"]
added=True # the product has been added to the cart
elif event=="purchase" and added and not bought:
buy_time= group.iloc[i]["event_time"]
bought= True # the product has been bought
elif bought and added:
break # we already found the first "cart" and the purchase time for this user and this product, so we can go to the next product/user
if bought and added:
tot_time.append(buy_time-add_time)
tot_time=np.array(tot_time)
counter=0
y=pd.to_timedelta(0)
while len(tot_time)>=1000:
x=tot_time[:1000]
tot_time=tot_time[1000:] # [1001:] would silently drop one element per chunk
y=y+x.mean()
counter=counter+1
average_time=y/counter
average_time
```
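The per-group loop above can also be sketched with two groupby aggregations (hypothetical toy events, not the real data). Note one caveat: unlike the loop, this version does not check that the cart event comes first, so negative deltas would need to be filtered out on real data.

```python
import pandas as pd

events = pd.DataFrame({
    "user_id":    [1, 1, 1],
    "product_id": [10, 10, 10],
    "event_type": ["cart", "purchase", "purchase"],
    "event_time": pd.to_datetime(
        ["2019-10-01 10:00", "2019-10-01 12:00", "2019-10-02 09:00"]),
})

# First time each (user, product) pair was added to the cart / purchased
first_cart = (events[events.event_type == "cart"]
              .groupby(["user_id", "product_id"]).event_time.min())
first_buy = (events[events.event_type == "purchase"]
             .groupby(["user_id", "product_id"]).event_time.min())

# Pairs with no purchase (or no cart) drop out via dropna()
deltas = (first_buy - first_cart).dropna()
print(deltas.mean())  # 2 hours for this toy example
```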
#### <font color='blue'>On average, when a user adds a product to the cart, it takes about eight and a half hours before they decide to actually buy it.</font>
### <font color='purple'>Question 1:</font>
#### <font color='purple'>How much time passes on average between the first view time and a purchase/addition to cart?</font>
```
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
dataset=pd.DataFrame()
gc.collect()
print("Free the memory")
```
#### <font color='darkblue'>For this exercise, we had to find a way to reduce the dataset so that the code could run with just the 8 GB of RAM of our personal computers. We decided to focus our analysis only on products with a high price. The reason is that we are looking for the average time between the first view and a purchase/addition to cart. If a user sees a low-priced product and is interested in it, they probably add or purchase it as soon as they see it; if they aren't interested, they simply won't add it. So for low-priced products, the average time between the first view and a purchase/add to cart is not very meaningful. When a user sees a high-priced product, on the other hand, even if they are interested they will need some time to think it over. Understanding how much time users need to decide whether to buy a high-priced product is interesting for the company, because during that window it could try to convince them with, for example, a small discount.</font>
```
chunks_oct = pd.read_csv('2019-Oct.csv',chunksize=1000000, header='infer',usecols=["event_time", "event_type","price",
"product_id","user_id"])
chunks_nov = pd.read_csv('2019-Nov.csv',chunksize=1000000, header='infer',usecols=["event_time", "event_type","price",
"product_id","user_id"])
for chunk in chunks_oct:
chunk = chunk[chunk["price"]>500]
chunk["event_time"]= pd.to_datetime(chunk["event_time"])
dataset= pd.concat([dataset,chunk])
for chunk in chunks_nov:
chunk = chunk[chunk["price"]>500]
chunk["event_time"]= pd.to_datetime(chunk["event_time"])
dataset= pd.concat([dataset,chunk])
dataset.info()
z=dataset.groupby(["product_id","user_id"]) # we group the actions of the same user on the same product
tot_time=list() # a list of the "time passed between the first view and a purchase/addition to cart"
for name,group in z:
seen= False # this condition will turn True when the product is seen for the first time by the user
added = False # this condition will turn True when the product is bought or added to the cart by the user
for i in range(group.shape[0]):
event = group.iloc[i]["event_type"]
if event == "view" and not seen:
view_time= group.iloc[i]["event_time"]
seen=True # the product has been seen
elif event!="view" and seen and not added:
add_time= group.iloc[i]["event_time"]
added= True # the product has been bought/added to the cart
elif seen and added:
break # we already found the first view time and the cart/purchase time for this user and this product, so we can go to the next product/user
if seen and added:
tot_time.append(add_time-view_time)
tot_time=np.array(tot_time)
counter=0
y=pd.to_timedelta(0)
while len(tot_time)>=1000:
x=tot_time[:1000]
tot_time=tot_time[1000:] # [1001:] would silently drop one element per chunk
y=y+x.mean()
counter=counter+1
average_time=y/counter
average_time
```
#### <font color='darkblue'>As expected, when a user sees an expensive product they are interested in, they need some time to think about the purchase: almost 3 days.</font>
```
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
```
### <font color='purple'>Question 2:</font>
##### <font color='purple'>What are the categories of the most trending products overall? For each month visualize this information through a plot showing the number of sold products per category.</font>
```
import pandas as pd
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
## reading data for October 2019:
df_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_type","category_code"])
dataset_oct=df_oct[df_oct['category_code'].notna()] # eliminating missing data
dataset_oct=dataset_oct[dataset_oct.event_type=='purchase'] # we are looking for the "purchase" actions
dataset_oct.info() # we check our work
# extracting category section from category_code data
dataset_oct_cat=dataset_oct.copy() # work on a copy so the original frame is untouched
dataset_oct_cat['category_code']=dataset_oct_cat['category_code'].str.split('.').str[0]
dataset_oct.info()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(13.7, 6.27)
x=dataset_oct_cat.groupby('category_code').category_code.count().nlargest(10) # we are looking for the top 10
ax = sns.barplot(x.index,x.values)
ax.set(xlabel=' Categories for October', ylabel='# purchases')
plt.title(" Top Categories By Purchases For October")
plt.show()
# reading data for November 2019
df_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_type","category_code"])
dataset_nov=df_nov[df_nov['category_code'].notna()] # eliminating missing data
dataset_nov=dataset_nov[dataset_nov.event_type=='purchase'] # we are looking for the "purchase" actions
dataset_nov.info() # we check the data
# extracting category section from category_code data
dataset_nov_cat=dataset_nov.copy() # work on a copy so the original frame is untouched
dataset_nov_cat['category_code']=dataset_nov_cat['category_code'].str.split('.').str[0]
# November
# We look for the top 10 categories by sales
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(13.7, 6.27)
x=dataset_nov_cat.groupby('category_code').category_code.count().nlargest(10)
ax = sns.barplot(x.index,x.values)
ax.set(xlabel=' Categories for November', ylabel='# purchases')
plt.title(" Top Categories By Purchases For November")
plt.show()
```
#### <font color='blue'>In both months, "Electronics" is the top category by purchases, by a wide margin.</font>
```
import gc
gc.collect()
```
### <font color='purple'>Question 2:</font>
##### <font color='purple'>Plot the most visited subcategories for each month.</font>
```
dataset_oct1=df_oct[df_oct['category_code'].notna()] # eliminating missing data
oct_view=dataset_oct1[dataset_oct1.event_type=='view'] # extracting data with event_type= view
# we will extract the subcategory from the "category_code" column. We consider as "subcategory" the last part of the
# "category_column" string.
oct_view_subcat=oct_view.copy() # work on a copy so the original frame is untouched
oct_view_subcat['category_code']=oct_view_subcat['category_code'].str.split('.').str[-1]
## October
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(13.7, 6.27)
x=oct_view_subcat.groupby('category_code').category_code.count().nlargest(10)
ax = sns.barplot(x.index,x.values)
ax.set(xlabel='subcategories for October', ylabel='# rate')
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.title(" Most Visited Subcategories For October")
plt.show()
import gc
gc.collect()
print("Free the memory")
dataset_nov1=df_nov[df_nov['category_code'].notna()]
nov_view=dataset_nov1[dataset_nov1.event_type=='view'] # extracting data with event_type= view
nov_view_subcat=nov_view.copy() # work on a copy so the original frame is untouched
nov_view_subcat['category_code']=nov_view_subcat['category_code'].str.split('.').str[-1]
# November
# We look for the top 10 subcategories by views
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(13.7, 6.27)
x=nov_view_subcat.groupby('category_code').category_code.count().nlargest(10) # temp variable
ax = sns.barplot(x.index,x.values)
ax.set(xlabel=' subcategories for November ', ylabel='# rate')
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.title(" Most Visited Subcategories For November")
plt.show()
```
#### <font color='blue'>In both months, "Smartphone" is the most visited subcategory by a wide margin. "Smartphone" is a subcategory of "Electronics", which is the most visited category</font>
```
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
```
### <font color='purple'>Question 2:</font>
##### <font color='purple'>What are the 10 most sold products per category in each month?</font>
```
## OCTOBER 2019 :
col_list1=["product_id"] # we add the product_id column to the previously loaded data
a= pd.read_csv('2019-Oct.csv', sep=',',usecols=col_list1) # temporary variable, only needed for the concat
df_oct_m=pd.concat([df_oct,a],axis=1) # concatenate the extra column
data_oct=df_oct_m[df_oct_m['category_code'].notna()]
data_oct=data_oct[data_oct.event_type=='purchase']
data_oct_cat=data_oct.copy() # work on a copy to avoid mutating the original frame
data_oct_cat['category_code']=data_oct_cat['category_code'].str.split('.').str[0]
categories_oct = data_oct_cat.groupby('category_code')
for name, group in categories_oct:
x= group.groupby("product_id").product_id.count().nlargest(10)
print(name)
print(x)
print("")
## November 2019:
b= pd.read_csv('2019-Nov.csv', sep=',',usecols=col_list1) # we add the product_id column to the previously loaded data
df_nov_m=pd.concat([df_nov,b],axis=1) # concatenate the extracted column (using "b"; "a" holds October's product ids)
data_nov=df_nov_m[df_nov_m['category_code'].notna()] # eliminating missing values
data_nov=data_nov[data_nov.event_type=='purchase']
data_nov_cat=data_nov.copy() # work on a copy to avoid mutating the original frame
data_nov_cat['category_code']=data_nov_cat['category_code'].str.split('.').str[0]
categories_nov = data_nov_cat.groupby('category_code')
for name, group in categories_nov:
x= group.groupby("product_id").product_id.count().nlargest(10)
print(name)
print(x)
print("")
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
```
### <font color='purple'>Question 3:</font>
##### <font color='purple'>For each category, what’s the brand whose prices are higher on average?</font>
```
# We look for the average price of every brand
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
chunks_oct = pd.read_csv('2019-Oct.csv',chunksize=1000000, header='infer',usecols=["event_type","category_code","brand","price"])
chunks_nov = pd.read_csv('2019-Nov.csv',chunksize=1000000, header='infer',usecols=["event_type","category_code","brand","price"])
for chunk in chunks_oct:
chunk=chunk[chunk["category_code"].notna()]
chunk = chunk[chunk["event_type"]=="purchase"]
chunk['category_code'] = chunk['category_code'].str.split('.').str[0]
dataset= pd.concat([dataset,chunk])
for chunk in chunks_nov:
chunk=chunk[chunk["category_code"].notna()]
chunk = chunk[chunk["event_type"]=="purchase"]
chunk['category_code'] = chunk['category_code'].str.split('.').str[0]
dataset= pd.concat([dataset,chunk])
brand_group=dataset.groupby(['category_code','brand']).agg(m_price=('price','mean'))
brand_agg=brand_group['m_price'].groupby('category_code',group_keys=False)
brand_agg.nlargest(1) # for each category, we show the brand with the highest average price
```
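The groupby-then-`nlargest(1)` pattern used above can be verified on a hypothetical three-row frame:

```python
import pandas as pd

df = pd.DataFrame({
    "category_code": ["electronics", "electronics", "apparel"],
    "brand":         ["x", "y", "z"],
    "price":         [100.0, 300.0, 50.0],
})

mean_price = df.groupby(["category_code", "brand"]).price.mean()
# For each category, keep only the brand with the highest average price
top = mean_price.groupby(level="category_code", group_keys=False).nlargest(1)
print(top)  # apparel/z 50.0, electronics/y 300.0
```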
### <font color='purple'>Question 3:</font>
##### <font color='purple'>Write a function that asks the user for a category and returns a plot indicating the average price of the products sold by each brand in that category.</font>
```
# We ask the user for a category. We output the average prices of the brands belonging to that category
def category_prices(name=None):
    if name is None:
        name = input() # read input at call time; a default of input() would run once, at definition time
    return brand_agg.get_group(name)
category_prices()
```
### <font color='purple'>Question 3:</font>
##### <font color='purple'>Find, for each category, the brand with the highest average price. Return all the results in ascending order by price.</font>
```
brand_agg.nlargest(1).sort_values()
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
```
### <font color='purple'>Question 4:</font>
##### <font color='purple'>How much does each brand earn per month?</font>
```
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
df_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_type","brand","price"])
df_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_type","brand","price"])
#Categorizing
dataset_oct = df_oct[df_oct["event_type"]=="purchase"]
dataset_nov = df_nov[df_nov["event_type"]=="purchase"]
print("October revenues")
dataset_oct.groupby('brand').price.sum() # we calculate the revenue of each brand
print("November revenues")
dataset_nov.groupby('brand').price.sum() # we calculate the revenue of each brand
```
### <font color='purple'>Question 4:</font>
##### <font color='purple'>Write a function that given the name of a brand in input returns, for each month, its profit.</font>
```
def profit_brand(Name=None):
    if Name is None:
        Name = input() # read input at call time; a default of input() would run once, at definition time
    oct_profit=dataset_oct[dataset_oct.brand==Name].price.sum() # the brand's revenue for October
    nov_profit=dataset_nov[dataset_nov.brand==Name].price.sum() # the brand's revenue for November
    return oct_profit, nov_profit
profit_brand()
```
### <font color='purple'>Question 4:</font>
##### <font color='purple'>Is the average price of products of different brands significantly different?</font>
```
# We consider the average price of all the products by the same brand as the "brand price".
# We calculate the difference between the maximum and the minimum brand price, and the average between the brand prices.
dataset=pd.concat([dataset_oct,dataset_nov])
avg_brand_price=dataset.groupby(["brand"]).price.mean()
ranges=avg_brand_price.max()-avg_brand_price.min()
avg_brand_price.mean()
print("The difference between the maximum and the minimum brand price is: ", ranges)
print("The average between the brand prices is: ",avg_brand_price.mean())
```
### <font color='purple'>Question 4:</font>
##### <font color='purple'>Using the function you just created, find the top 3 brands that have suffered the biggest losses in earnings between one month and the next, specifying both the loss percentage and the 2 months.</font>
```
# We are working only with the October and November datasets, so the loss is to be understood as occurring
# between these two months
print("Top 3 brands that have suffered the biggest losses in earnings between October 2019 and November 2019:")
(((dataset_nov.groupby('brand').price.sum()-dataset_oct.groupby('brand').price.sum())/dataset_oct.groupby('brand')
.price.sum() )*100 ) .sort_values(ascending=True).head(3)
```
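A sketch of how the same loss ranking could be computed through a `profit_brand`-style function, on hypothetical toy revenues (the names and numbers below are made up for illustration):

```python
import pandas as pd

# Hypothetical monthly purchase tables
oct_df = pd.DataFrame({"brand": ["a", "a", "b"], "price": [100.0, 100.0, 50.0]})
nov_df = pd.DataFrame({"brand": ["a", "b"],      "price": [100.0, 75.0]})

def profit_brand(name):
    """Return (October revenue, November revenue) for one brand."""
    return (oct_df[oct_df.brand == name].price.sum(),
            nov_df[nov_df.brand == name].price.sum())

# Percentage change per brand; the most negative values are the biggest losses
changes = {b: (profit_brand(b)[1] - profit_brand(b)[0]) / profit_brand(b)[0] * 100
           for b in oct_df.brand.unique()}
losses = sorted(changes.items(), key=lambda kv: kv[1])
print(losses)  # brand "a" lost 50%, brand "b" gained 50%
```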
### <font color='purple'>Question 5:</font>
##### <font color='purple'>In what part of the day is your store most visited? Knowing which days of the week or even which hours of the day shoppers are likely to visit your online store and make a purchase may help you improve your strategies. Create a plot that for each day of the week shows the hourly average of visitors your store has.</font>
```
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
# we load data of October 2019 just for the 2 columns we need
df_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_time", "event_type"])
# we load data of November 2019 just for the 2 columns we need
df_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_time", "event_type"])
dataset= pd.concat([df_oct,df_nov]) # concatenate data
# we are looking for the "view" actions
dataset_view= dataset[dataset.event_type=='view']
# we change the "event_time" column to datetime format
dataset_view["event_time"]= pd.to_datetime(dataset_view["event_time"])
# We show the top 3 hours for "views" on the site
dataset_view.groupby([dataset_view.event_time.dt.hour , dataset_view.event_type]).event_type.count().nlargest(3)
dataset_purchase= dataset[dataset.event_type=='purchase'] # observing just "purchase" event
# we change the "event_time" column to datetime format
dataset_purchase["event_time"]= pd.to_datetime(dataset_purchase["event_time"])
# We show the top 3 hours for "purchases" on the site
dataset_purchase.groupby([dataset_purchase.event_time.dt.hour , dataset_purchase.event_type]).event_type.count().nlargest(3)
```
#### <font color='blue'>We can see that the afternoon hours are the ones with the most visitors on the site, but the favourite hours for purchasing products are the morning hours</font>
```
gc.collect()
# now we count the views for each day of the week
tot_views_daily=dataset_view.groupby([dataset_view.event_time.dt.dayofweek,dataset_view.event_type]).event_type.count()
avg_views_daily=tot_views_daily/24
avg_views_daily.rename({0:"Monday", 1:"Tuesday", 2:"Wednesday", 3:"Thursday", 4:"Friday", 5:"Saturday", 6:"Sunday"}, inplace=True)
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(14.7, 5.27)
ax = sns.barplot(avg_views_daily.index,avg_views_daily.values)
ax.set(xlabel=' Days of the week ', ylabel='Average Hourly Views')
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.title(" Average Hourly Views Per Day Of The Week")
plt.show()
```
#### <font color='blue'>As expected, Friday, Saturday and Sunday are the days with the most visitors on the site</font>
### <font color='purple'>Question 6:</font>
##### <font color='purple'>The conversion rate of a product is given by the number of times a product has been bought over the number of times it has been visited. What's the conversion rate of your online store? Find the overall conversion rate of your store.</font>
```
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
dataset_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_type"])
dataset_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_type"])
dataset= pd.concat([dataset_oct,dataset_nov])
# free up some RAM
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
# We need to calculate the ratio between how many times the products are
# bought and how many times they are viewed
tot_views= dataset["event_type"][dataset["event_type"]=="view"].count() # total number of views
tot_purchases= dataset["event_type"][dataset["event_type"]=="purchase"].count() # total number of purchases
conversion_rate= tot_purchases/tot_views # conversion rate of the online store
print("conversion_rate =",conversion_rate)
```
### <font color='purple'>Question 6:</font>
#### <font color='purple'>Plot the number of purchases of each category and show the conversion rate of each category in decreasing order.</font>
```
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
dataset_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_type","category_code"])
dataset_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_type","category_code"])
dataset_oct['category_code'] = dataset_oct['category_code'].str.split('.').str[0] # we extract the category
# the category is the first part of the "category_code" string
dataset_nov['category_code'] = dataset_nov['category_code'].str.split('.').str[0]
dataset=pd.concat([dataset_oct,dataset_nov])
views_dataset=dataset[dataset["event_type"]=="view"]
purchase_dataset=dataset[dataset["event_type"]=="purchase"]
total_purchased_cat=purchase_dataset.groupby("category_code").category_code.count().sort_values(ascending=False)
total_purchased_cat.values
sns.set(style="darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(13.7, 6.27)
ax = sns.barplot( total_purchased_cat.index , total_purchased_cat.values )
ax.set(ylabel='# purchases', xlabel='Category')
ax.set_xticklabels(ax.get_xticklabels(), rotation=45)
plt.title(" Number Of Purchases Per Category")
plt.show()
```
#### <font color='blue'> We can see that the "electronics" category is the one with the highest number of purchases by a wide margin. This was expected, since "electronics" is also the most visited category</font>
```
# we show the conversion rate of each category in decreasing order
(purchase_dataset.groupby("category_code").event_type.count()/views_dataset.groupby("category_code").event_type.count()).sort_values(ascending=False)
```
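The per-category conversion ratio above (purchases over views, aligned on category) can be checked on a tiny hypothetical frame:

```python
import pandas as pd

events = pd.DataFrame({
    "event_type":    ["view", "view", "purchase", "view", "purchase"],
    "category_code": ["electronics", "electronics", "electronics",
                      "apparel", "apparel"],
})

views = events[events.event_type == "view"].groupby("category_code").size()
buys = events[events.event_type == "purchase"].groupby("category_code").size()
# Division aligns the two Series on their category index
conversion = (buys / views).sort_values(ascending=False)
print(conversion)  # apparel 1.0, electronics 0.5
```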
#### <font color='blue'> The "electronics" category also has the best conversion rate. At the second place we find "medicine". This makes sense because medicines are something users look for when they actually need them, so there is a good chance that a user who is looking for medicines on the site will also buy a medicine. </font>
### <font color='purple'>Question 7:</font>
#### <font color='purple'>The Pareto principle states that for many outcomes roughly 80% of consequences come from 20% of the causes. Also known as the 80/20 rule, in e-commerce it simply means that most of your business, around 80%, likely comes from about 20% of your customers. Prove that the Pareto principle applies to your store.</font>
```
dataset=pd.DataFrame()
dataset_oct=pd.DataFrame()
dataset_nov=pd.DataFrame()
gc.collect()
print("Free the memory")
dataset_oct = pd.read_csv('2019-Oct.csv', header='infer',usecols=["event_type","price","user_id"])
dataset_nov = pd.read_csv('2019-Nov.csv', header='infer',usecols=["event_type","price","user_id"])
dataset_oct = dataset_oct[dataset_oct["event_type"]=="purchase"]
dataset_nov = dataset_nov[dataset_nov["event_type"]=="purchase"]
dataset= pd.concat([dataset_oct,dataset_nov])
# We want to prove that the top 20% of our customers are responsible for at least the 80% of our total revenues
# We calculate the total revenues
tot_revenues=dataset["price"].sum()
# We calculate the total number of customers
customers=dataset["user_id"].nunique()
# We calculate the revenues coming from the top 20% of customers
top20_revenues = dataset.groupby("user_id").price.sum().sort_values(ascending=False).head(customers//5).sum()
print("Percentage of the total revenues coming from the top 20% customers:")
(top20_revenues/tot_revenues)*100
```
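The top-20% revenue-share computation above can be sanity-checked on a hypothetical five-customer store, where one "whale" dominates spending:

```python
import pandas as pd

# Hypothetical purchases: one heavy spender and four small customers
purchases = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 4, 5],
    "price":   [80.0, 10.0, 4.0, 3.0, 2.0, 1.0],
})

per_user = purchases.groupby("user_id").price.sum().sort_values(ascending=False)
n_top = max(1, len(per_user) // 5)          # top 20% of customers (at least one)
top_share = per_user.head(n_top).sum() * 100 / per_user.sum()
print(top_share)  # 90.0 -> the top 20% generate 90% of revenue here
```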
# Here I will deal mostly with extracting data from PDF files
### In accompanying scripts, I convert SAS and multi-sheet Excel files into CSV files for easy use in pandas, using sas7bdat and pandas itself
```
# Necessary imports
import csv
import matplotlib.pyplot as plt
import numpy as np
import os
import openpyxl
import pandas as pd
import pdfplumber
import PyPDF2
import re
import tabula
#import sas7bdat
from openpyxl import load_workbook
from pathlib import Path
from copy import deepcopy
from tabula import read_pdf
```
##### Possibly handy functions
```
def normalize_whitespace(s, replace):
if isinstance(s, str):
return re.sub(r'\s+', replace, s)
else:
return s
def convert_multisheet(filepath, csv_path=''):
prefix = Path(filepath).stem
wb = load_workbook(filepath)
sheets = wb.sheetnames  # get_sheet_names() is deprecated in recent openpyxl
for sheet in sheets[1:]:
df = wb[sheet]
with open(f'{csv_path}{prefix}{sheet}.csv', 'w', newline='') as csvfile:
c = csv.writer(csvfile)
data =[]
for row in df.rows:
data.append([cell.value for cell in row])
c.writerows(data)
print(f'{prefix}{sheet}.csv file written')
```
##### Probably useful definitions
```
boroughs_ = {
'Manhattan': ['1','MN', 100, 12],
'Brooklyn': ['3', 'BK', 300, 18],
'Bronx': ['2', 'BX', 200, 12],
'Queens': ['4', 'QN', 400, 14],
'Staten Island': ['5', 'SI', 500, 3]
}
borough_labels = {'Manhattan': [],
'Brooklyn': [],
'Bronx': [],
'Queens': [],
'Staten Island': []}
for borough, abbr in boroughs_.items():
for n in range(1, abbr[3]+1): # abbr[3] is the number of community districts (abbr[2] is the 100-based code)
if n < 10:
n = f'0{n}'
borough_labels[borough].append(f'{abbr[1]}{n}')
```
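The label construction can be checked on a single borough; a self-contained sketch using Staten Island's 3 community districts, with f-string zero-padding standing in for the manual `n < 10` branch:

```python
# Staten Island: borough code "SI", 3 community districts
abbr, n_districts = "SI", 3
labels = [f"{abbr}{n:02d}" for n in range(1, n_districts + 1)]
print(labels)  # ['SI01', 'SI02', 'SI03']
```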
# First Collection Round
#### All the years for which specific files are available and their corresponding file naming formats
```
CHP_years = [2018, 2015] # {year}_CHP_all_data.csv, {year}_Cause_of_premature_death_data.csv
CHS_years = range(2002, 2018) # chs{year}_public.csv
CD_years = range(1999, 2015) # cd{year}.csv
CDall_years = range(1999, 2015) # cdall_{2digityear}.csv
Pov_years = range(2005, 2017) # NYCgov_Poverty_Measure_Data__{year}_.csv
Child_years = range(2007, 2018) # AnnualReport{year}.pdf
YRBSS_years = range(2003, 2019, 2) # All archived in file sadc_2017_district.dat
```
### Youth Risk Behavior Surveillance System Data
```
# Borough names get cut off
boroughs = {
'State': 'StatenIsland',
'Queen': 'Queens',
'Manha': 'Manhattan',
'Brook': 'Brooklyn'
}
for year in YRBSS_years:
with open('sadc_2017_district.dat', 'r') as source:
data = ''
for line in source:
# It's a national survey, so we extract the NYC-related data
# Metadata specified in which years NYC participated
if 'Borough' in line and f' {year} ' in line:
data += line
with open('tmp', 'w') as tmp:
tmp.write(data)
# File is fixed-width formatted
df = pd.read_fwf('tmp', header=None)
# These columns pertain to state, city, and other classifying data I'm not
# interested in
df.drop(columns=[0, 1, 2, 4, 5], inplace=True)
# Replacing cut-off borough names mentioned above
df[3].replace(to_replace=boroughs, inplace=True)
# Nan values are represented as periods...
df.replace(to_replace='.', value=np.nan, inplace=True)
df.to_csv(f'data/youthriskbehavior{year}.csv')
```
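The two replace steps above (restoring cut-off borough names, then mapping the survey's "." marker to NaN) can be demonstrated on a small hand-made column:

```python
import numpy as np
import pandas as pd

# The fixed-width parse truncates the borough column, so names come back cut off
col = pd.Series(["State", "Queen", "Manha", "Brook", "."])
fixed = col.replace({"State": "StatenIsland", "Queen": "Queens",
                     "Manha": "Manhattan", "Brook": "Brooklyn"})
fixed = fixed.replace(".", np.nan)  # the survey encodes missing values as "."
print(fixed.tolist())
```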
## Find all the PDF files
```
allfiles = os.listdir('data/NYCDataSources')
pdffiles = [filename for filename in allfiles if filename.endswith('.pdf')]
pdffiles
```
### Investigate first the Infant Mortality PDF
```
file = pdffiles[0]
# Obtained from looking at the pdf
# Tried using multi-indexing for the columns, but tabula
# could not properly parse the headings
columns = ['CommunityDistrict', 'Neighborhoods',
'2013-2015 InfantMortalityRate',
'2013-2015 NeonatalMortalityRate',
'2014-2016 InfantMortalityRate',
'2014-2016 NeonatalMortalityRate',
'2015-2017 InfantMortalityRate',
'2015-2017 NeonatalMortalityRate']
# For this, I manually determined the area and column boundaries
df = read_pdf(file, lattice=True, guess=False, area=[126, 55, 727, 548],
columns=[111, 247, 297, 348, 399, 449, 499],
pandas_options={'header': None, 'names': columns})
df.dropna(inplace=True)
df.to_csv(f'data/{file[:-3]}csv', index=False)
```
### Investigate next the Child Abuse and Neglect Report
```
file = pdffiles[-1]
# Again tried using multi-indexing for the columns,
# but again resorted to manually specifying column names
columns = ['CommunityDistrict',
'2015 AbuseNeglectCases',
'2015 AbuseNeglectChildren',
'2015 AbuseNeglectIndication',
'2016 AbuseNeglectCases',
'2016 AbuseNeglectChildren',
'2016 AbuseNeglectIndication',
'2017 AbuseNeglectCases',
'2017 AbuseNeglectChildren',
'2017 AbuseNeglectIndication',
'2018 AbuseNeglectCases',
'2018 AbuseNeglectChildren',
'2018 AbuseNeglectIndication']
# And manually determining the area needed
df = read_pdf(file, pages='all', guess=False, area=[111, 18, 459, 765],
pandas_options={'header': None})
# drop "rank" columns
df.drop(columns=[2,6,10,14], inplace=True)
# tabula put some NaN rows in there just for the fun of it
df.dropna(inplace=True)
df.columns = columns
df.to_csv(f'data/{file[:-3]}csv', index=False)
```
### Investigate the "AnnualReports" -- Child Welfare and Child Services (Foster Care) Reports
This was an attempt at automating the creation of the column names. Only later did I find out that the vast majority of the boroughs and years were missing many (but not all) of the columns that were straight zeroes down the line...
```
file = pdffiles[5]
# Each row of headers ia actually read in as a two-row df
df1 = read_pdf(file, pages=19, guess=False, stream=True,
area=[56, 249, 80, 726],
columns=[278, 312, 344, 376, 414.8, 446, 483, 517, 552, 586, 622, 657, 695],
pandas_options={'header': None})
df2 = read_pdf(file, pages=19, guess=False, stream=True,
area=[104, 249, 126, 731],
columns=[278, 312, 344, 376, 414.8, 446, 483, 517, 552, 586, 622, 657, 695],
pandas_options={'header': None})
df3 = read_pdf(file, pages=19, guess=False, stream=True,
area=[154, 249, 176, 342],
columns=[279.57, 313],
pandas_options={'header': None})
df4 = pd.concat([df1, df2, df3], axis=1, ignore_index=True)
# Here is where the top and bottom rows are combined to create a whole df
def combine_labels(col):
'''
Takes in the top and bottom row of the df and ensures the new
column name is of the format 'FosterCareLength({X}yrs-<{Y}yrs)'
'''
if col[0][-1] != '-':
col[0] += '-'
new_label = f'FosterCareLength({col[0]}{col[1]})'
return re.sub(r'\s+', '', new_label)
CWlabels = df4.apply(lambda col: combine_labels(col), axis=0).values.tolist()
CWlabels.append('TotalReunifications')
CWlabels.insert(26, 'FosterCareLength(13yrs-<13.5yrs)') # Missing in this particular ds
FClabels = ['TotalPlacements', 'PlacementWithin36mos', '%PlacementsWithin36mos']
# Because of the lack of consistency, I opted to keep the high-frequency (shorter-term)
# labels and just make sure to get a total for all <18yrs
CWlabels_std = CWlabels[:10]
def get_areas(num_words, words, labels):
'''
Must be used in conjunction with pdfplumber. pdfplumber's,
word objects have coordinates, which will be useful with tabula's
read_pdf function.
'''
# Initializations
start = -1
areas = []
# Run through each word in succession, checking if it matches
# one of the borough's labels
while start < num_words - 1:
start += 1
top = None
left = None
bottom = None
right = None
df_label = ''
word = words[start]
for first in labels:
# When we find a match, note the top and left coordinates
# (as the label will be the index)
if word['text'] == first:
top = word['top']
left = word['x0']
prev_word = words[start-1]
found = False
# Run through each borough label in reverse order, to
# catch the first instance of the highest label for the
# borough
for last in reversed(labels):
# Start from word following the one we matched previously
# to reduce redundancy
for end in range(start+1, num_words):
word2 = words[end]
# When the word is found note the bottom and right
# coordinates
if word2['text'] == last:
bottom = word2['bottom']
prev_line = words[end-1]
right = prev_line['x1']
# Also include some tagging for easier manipulation
# when reading and formatting the dataframes
if 'Total' in prev_word['text']:
df_label = 'total'
elif r'%' in prev_line['text']:
df_label = 'FC'
# So we can break the outside loop
found = True
# So we don't repeat reading sections
start = end
break
# Breaking the outside loop
if found:
break
try:
# Give some wiggle room by adding and subtracting from coordinates
area = [float(top)-0.5, float(left)-2, float(bottom)+0.5, float(right)+5]
except:
# To help debug
print(top, left, bottom, right, word, prev_word, word2, prev_line)
raise
areas.append([[df_label, word['text'], word2['text']], area])
# Break the final for loop to return to our controlled while loop
# at the appropriate 'start' value
break
return areas
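# Illustrative shape check (hypothetical coordinates): each entry of `areas`
# pairs a [df_label, first_word, last_word] tag with a tabula-style
# [top, left, bottom, right] box in PDF points
example_area = [['total', 'Adoption', 'Total'], [99.5, 48.0, 180.5, 505.0]]
tags, box = example_area
assert len(box) == 4 and box[0] < box[2] and box[1] < box[3]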
def get_tables(year):
'''
Uses get_areas to return the page numbers, labels, and related areas
for each borough.
'''
# Initializations
borough_dfs = {'Manhattan': {},
'Brooklyn': {},
'Bronx': {},
'Queens': {},
'Staten Island': {}}
file = f'AnnualReport{year}.pdf'
i = 0
with pdfplumber.open(file) as pdf:
# Loop through the pages, keep track of page number as it
# will be used by tabula--which starts numeration at 1
for page in pdf.pages:
i += 1
# Gets all the text from a page as a single string -- easy to check
# for inclusion of substrings
text = page.extract_text()
# Gets all the 'words' as 'word objects' which includes the
# text of the word, and the upper, lower, left, and right coordinates
# for use in get_areas
words = page.extract_words()
num_words = len(words)
for borough, labels in borough_labels.items():
if any(label in text for label in labels):
areas = get_areas(num_words, words, labels)
borough_dfs[borough][str(i)] = areas
return borough_dfs
# Initialize dict for misfits (to be able to debug after-the-fact)
strays = {}
# Initialize dict we will actually use to create the larger df
year_dfs = {}
for year in range(2007, 2018):
borough_dfs = get_tables(year)
year_dfs[str(year)] = borough_dfs
# Take a peek
year_dfs
```
##### The 2017 right-bounding coordinates for pages 17 and 18 can't be correct. Let's change them to 721.
```
latest = year_dfs['2017']
for borough, dfs in latest.items():
try:
info = dfs['18'][0]
page = 18
except:
info = dfs['17'][0]
page = 17
area = info[1]
area[3] = 721
def get_borough_df(file, borough, df_pages, year, strays):
'''
Function to get a unified borough-wide df from each year.
Adds to strays if necessary.
'''
# Initialization
to_merge = []
first = True
k = 0
# dfs is a dictionary with keys that are the page numbers
# and values that are the objects returned by get_areas
for i, info in df_pages.items():
page = int(i)
# Just to keep track of same borough appearing twice on one page
j = 0
for each in info:
j += 1
k += 1
df_label = each[0][0]
begin = each[0][1]
end = each[0][2]
area = each[1]
# tabula doing its work
df = read_pdf(file, pages=page, guess=False, area=area,
pandas_options={'header': None, 'index_col': 0})
try:
n_cols = df.shape[1] - 1
except:
print(f'could not READ {borough}{year}_{i}{j}')
# For the dfs labeled 'total', I am only interested in the last column,
# the rest are sparse. If parents/family are unable to regain custody
# of the child after a few years, it is unlikely they will ever do so.
if df_label == 'total':
try:
df = df.iloc[:, -1]
srs = df.rename(f'{year}TotalReunifications')
except:
print(f'could not RENAME TotalReunifications {borough}{year}_{i}{j}')
strays[borough+str(year)+'_'+i+str(j)] = df
continue
else:
to_merge.append(srs)
continue
# For the label 'FC', these are children returning to foster care--with
# a focus on those returning within the last 3 years
elif df_label == 'FC' or (year != '2007' and k == 1):
try:
df.drop(columns=[1], inplace=True)
df.columns = [f'{year}{FC}' for FC in FClabels]
except:
print(f'could not RENAME FC {borough}{year}_{i}{j}')
strays[borough+str(year)+'_'+i+str(j)] = df
continue
else:
to_merge.append(df)
continue
elif not first:
print(f'DROPPED {borough}{year}_{i}{j}')
strays[borough+str(year)+'_'+i+str(j)] = df
continue
elif df[2].isnull().sum() > 1:
try:
df[2] = df.apply(lambda row: [
n for n in row[1].split(' ') if n.isnumeric()], axis=1)
except:
print(f'could not COPY infant data {borough}{year}_{i}{j}')
strays[borough+str(year)+'_'+i+str(j)] = df
continue
if end == borough_labels[borough][-1]:
first = False
df.drop(columns=[1], inplace=True)
try:
df = df.iloc[:, :10]
df.columns = [f'{year}{CW}' for CW in CWlabels_std]
except:
print(f'could not RENAME {borough}{year}_{i}{j}')
strays[borough+str(year)+'_'+i+str(j)] = df
continue
else:
to_merge.append(df)
try:
df = pd.concat(to_merge, axis=1, join='outer', sort=True)
except:
print(f'unable to MERGE {year} {borough}')
strays[borough+str(year)] = to_merge
return None, strays
return df, strays
# Initialization
y_to_merge = []
# year_dfs is a dict with years as keys and objects returned by get_tables
# as values
for year, borough_dfs in year_dfs.items():
file = f'AnnualReport{year}.pdf'
b_to_merge = []
# Those objects are also a dict with keys as borough names and values
# as dicts with keys of page numbers and values of df info as returned
# by get_areas
for borough, df_pages in borough_dfs.items():
b_df, strays = get_borough_df(file, borough, df_pages, year, strays)
b_to_merge.append(b_df)
try:
df = pd.concat(b_to_merge, axis=0, join='outer', sort=True)
except:
print(f'unable to merge ALL of {year}')
strays[str(year)] = b_to_merge
else:
y_to_merge.append(df)
df = pd.concat(y_to_merge, axis=1, join='outer', sort=True)
df.to_csv(f'data/ChildWelfare.csv', index=True)
```
#### Everything that got dropped was supposed to.
#### First I'll look into why the 2012 merge failed
```
bad_year = strays['2012']
for df in bad_year:
print(df.index, df.columns)
print(len(df.columns))
```
Seems we have repeats in Manhattan
```
badMN = deepcopy(bad_year[0])
badMN
```
Ah, it must have been a dataset split across two pages
```
FC = badMN.columns[:3]
CWtot = badMN.columns[-1]
fc_df = badMN[FC]
cwtot_df = badMN[CWtot]
badMN.drop(columns=FC, inplace=True)
badMN.drop(columns=CWtot, inplace=True)
badMN
badMN_bot = badMN.loc[badMN.index[-4:]].dropna(axis=1)
badMN_top = badMN.loc[badMN.index[:-4]].dropna(axis=1)
goodMN = pd.concat([badMN_top, badMN_bot], join='outer', sort=True)
goodMN
goodMN_full = pd.concat([fc_df, goodMN, cwtot_df], axis=1, join='outer', sort=True)
bad_year[0] = goodMN_full
for df in bad_year:
print(df.index, df.columns)
print(len(df.columns))
ChildWelfare2012 = pd.concat(bad_year, join='outer', sort=True)
CWminus2012 = pd.read_csv('data/ChildWelfare.csv', index_col=0)
full_df = pd.concat([CWminus2012, ChildWelfare2012],
axis=1, join='outer', sort=True)
full_df.drop(index=['BK 11'], inplace=True)
full_df.to_csv('data/ChildWelfare.csv', index=True)
full_df
```
# Second Collection Round
```
directory = 'data/NYCDataSourcesRound2'
savedir = 'data/raw_csvs/'
allfiles_2 = os.listdir(directory)
# pdffiles = [filename for filename in allfiles if filename.endswith('.pdf')]
csvfiles = [filename for filename in allfiles_2 if filename.endswith('.csv')]
excelfiles = [filename for filename in allfiles_2 if filename.endswith('.xls') or filename.endswith('.xlsx')]
print('CSV Files:')
print(csvfiles)
print('\nExcel Files:')
print(excelfiles)
fourthgrade_stats = csvfiles[0:2]
dfs = []
for file in fourthgrade_stats:
path = f'{directory}/{file}'
df = pd.read_csv(os.path.join(directory, file), index_col=2)
pre = df.iloc[0,0]
df.drop(columns=df.columns[:2], inplace=True)
df.columns = [pre+col for col in df.columns]
dfs.append(df)
df = pd.concat(dfs, axis=1, sort=True)
df.to_csv(savedir+'proficiency_4thgrade.csv')
excelfiles
df = pd.read_excel(os.path.join(directory, excelfiles[3]),
header=None, skiprows=4, nrows=1, squeeze=True)
types = df.values[0]
print(set(types))
include = []
for count in range(len(types)):
if types[count] == 'Community District' or types[count] == 'PUMA':
include.append(count)
include.insert(0,0)
df = pd.read_excel(os.path.join(directory, excelfiles[3]), usecols=include, header=None,
skiprows=[0,1,2,3,4,5,7,8,9,11,12,13,14,15,16,17,18,19], nrows=116)
def get_col_names_year(df, col_names=None):
    # Avoid a mutable default argument: a shared list would accumulate names across calls
    if col_names is None:
        col_names = ['GEO LABEL']
for col in range(1, len(df.columns)):
name = ' '.join([df.iloc[0, col], str(df.iloc[1, col])])
col_names.append(name)
df.columns = col_names
df.drop(index=[0,1], inplace=True)
get_col_names_year(df)
comm_dist = [('Community District' in label) for label in df['GEO LABEL']]
CD = [('CD' in label) for label in df['GEO LABEL']]
df1 = df[comm_dist]
df2 = df[CD]
dfs = [df1, df2]
boroughs_['Richmond'] = boroughs_['Staten Island']
boroughs_['Staten'] = boroughs_['Staten Island']
def get_CD_nums(label, boroughs=boroughs_):
parts = label.split()
borough_code = boroughs[parts[0]][2]
nums = [int(part) for part in parts if part.isnumeric()]
if len(nums) == 1:
new_label = str(nums[0] + borough_code)
else:
new_label = ' '.join([str(num + borough_code) for num in nums])
return new_label
def set_df_index(df):
df.dropna(axis=1, how='all', inplace=True)
df['GEO LABEL'] = df['GEO LABEL'].apply(get_CD_nums)
df['Community District'] = df['GEO LABEL'].values
df.set_index('GEO LABEL', inplace=True)
for df in dfs:
set_df_index(df)
df = pd.concat(dfs, axis=1, join='outer', sort=True)
df.to_csv(savedir+'d2g1.csv')
def dict_or_int(value, initialized=False):
    '''
    Build or prune the row/column skip lists. `value` is either an int or a
    {sheet: int} dict; the first call initializes range lists, and later calls
    remove the given row/column from them (so those rows/columns get read).
    '''
if not initialized:
if isinstance(value, int):
output = list(range(value))
else:
output = {}
for sheet, num in value.items():
output[sheet] = list(range(num))
return output
else:
if isinstance(value, int):
if isinstance(initialized, list):
initialized.remove(value)
else:
for sheet in initialized.keys():
initialized[sheet].remove(value)
else:
if isinstance(initialized, list):
output = {}
for sheet, num in value.items():
initial = deepcopy(initialized)
initial.remove(num)
output[sheet] = initial
return output
else:
for sheet in initialized.keys():
initialized[sheet].remove(value[sheet])
return initialized
def read_d2g_new(filepath, name_row, year_row, startrow, geolabel_col, startcol):
skiprows = dict_or_int(startrow)
skiprows = dict_or_int(name_row, initialized=skiprows)
skiprows = dict_or_int(year_row, initialized=skiprows)
dropcols = dict_or_int(startcol)
dropcols = dict_or_int(geolabel_col, initialized=dropcols)
dfs = {}
try:
for sheet, rows in skiprows.items():
df = pd.read_excel(filepath, sheet_name=sheet, header=None, skiprows=rows)
dfs[sheet] = df
except:
for sheet in ['CD', 'Puma']:
df = pd.read_excel(filepath, sheet_name=sheet, header=None, skiprows=skiprows)
dfs[sheet] = df
try:
for sheet, cols in dropcols.items():
dfs[sheet].drop(columns=cols, inplace=True)
except:
for df in dfs.values():
df.drop(columns=dropcols, inplace=True)
for df in dfs.values():
get_col_names_year(df, col_names=['GEO LABEL'])
set_df_index(df)
df = pd.concat(dfs.values(), axis=1, join='outer', sort=True)
return df
file = os.path.join(directory, excelfiles[0])
df = read_d2g_new(file, 8, 13, 17, 1, {'CD': 6, 'Puma':5})
df.to_csv(savedir +'d2g2.csv')
file = os.path.join(directory, excelfiles[8])
df = read_d2g_new(file, {'CD': 6, 'Puma': 7}, {'CD': 10, 'Puma': 11},
{'CD': 14, 'Puma':15}, 2, 7)
df.to_csv(savedir + 'd2ghealth.csv')
csvfiles
df = pd.read_csv(os.path.join(directory, csvfiles[10]))
df
```
| github_jupyter |
```
# Setup directories
from pathlib import Path
basedir = Path().absolute()
libdir = basedir.parent.parent.parent
# Other imports
import pandas as pd
import numpy as np
from datetime import datetime
from ioos_qc.plotting import bokeh_plot_collected_results
from bokeh import plotting
from bokeh.io import output_notebook
# Install QC library
#!pip install git+git://github.com/ioos/ioos_qc.git
# # Alternative installation (install specific branch):
# !pip uninstall -y ioos_qc
# !pip install git+git://github.com/ioos/ioos_qc.git@new_configs
# # Alternative installation (run with local updates):
# !pip uninstall -y ioos_qc
# import sys
# sys.path.append(str(libdir))
```
## Configuration
```
erddap_server = 'https://ferret.pmel.noaa.gov/pmel/erddap'
dataset_id = 'sd1055'
```
## Get data from ERDDAP as an xarray object
```
from erddapy import ERDDAP
e = ERDDAP(
server=erddap_server,
protocol='tabledap',
)
e.response = 'csv'
e.dataset_id = dataset_id
ds = e.to_xarray()
ds
```
## Generate a QC configuration for each variable
```
# Dataset level metadata to drive climatology extraction
min_t = str(ds.time.min().dt.floor("D").dt.strftime("%Y-%m-%d").data)
max_t = str(ds.time.max().dt.ceil("D").dt.strftime("%Y-%m-%d").data)
min_x = float(ds.longitude.min().data)
min_y = float(ds.latitude.min().data)
max_x = float(ds.longitude.max().data)
max_y = float(ds.latitude.max().data)
bbox = [min_x, min_y, max_x, max_y]
# Configure how each variable's config will be generated
default_config = {
"bbox": bbox,
"start_time": min_t,
"end_time": max_t,
"tests": {
"spike_test": {
"suspect_threshold": "1",
"fail_threshold": "2"
},
"gross_range_test": {
"suspect_min": "min - std * 2",
"suspect_max": "max + std / 2",
"fail_min": "mean / std",
"fail_max": "mean * std"
}
}
}
# For any variable name or standard_name you can define a custom config
custom_config = {
'air_temperature': {
"variable": "air"
},
'air_pressure': {
"variable": "pres"
},
'relative_humidity': {
"variable": "rhum"
},
'sea_water_temperature': {
"variable": "temperature"
},
'sea_water_practical_salinity': {
"variable": "salinity"
},
'eastward_wind': {
"variable": "uwnd"
},
'northward_wind': {
"variable": "vwnd"
}
}
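# Toy sketch of the override mechanics used below: dict.update merges the
# custom entries over the defaults. Note that .copy() is shallow, so nested
# dicts (like "tests") stay shared. (Toy dicts, not the real configs.)
_defaults = {'variable': None, 'bbox': [0, 0, 1, 1]}
_merged = dict(_defaults)
_merged.update({'variable': 'air'})
assert _merged['variable'] == 'air' and _defaults['variable'] is None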
# Generate climatology configs
from ioos_qc.config_creator import CreatorConfig, QcConfigCreator, QcVariableConfig, QC_CONFIG_CREATOR_SCHEMA
creator_config = {
"datasets": [
{
"name": "ocean_atlas",
"file_path": "../../../resources/ocean_atlas.nc",
"variables": {
"o2": "o_an",
"salinity": "s_an",
"temperature": "t_an"
},
"3d": "depth"
},
{
"name": "narr",
"file_path": "../../../resources/narr.nc",
"variables": {
"air": "air",
"pres": "slp",
"rhum": "rhum",
"uwnd": "uwnd",
"vwnd": "vwnd"
}
}
]
}
cc = CreatorConfig(creator_config)
qccc = QcConfigCreator(cc)
# Break down variable by standard name
def not_stddev(v):
return v and not v.endswith(' SD')
#air_temp_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='air_temperature')
#pressure_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='air_pressure')
# humidity_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='relative_humidity')
# water_temp_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='sea_water_temperature')
# salinity_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='sea_water_practical_salinity')
# uwind_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='eastward_wind')
# vwind_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='northward_wind')
# all_vars = [air_temp_vars, pressure_vars, humidity_vars, water_temp_vars, salinity_vars, uwind_vars, vwind_vars]
# all_vars
air_temp = ['air_temperature']
pressure = ['air_pressure']
humidity = ['relative_humidity']
water_temp = ['sea_water_temperature']
salt = ['sea_water_practical_salinity']
u = ['eastward_wind']
v = ['northward_wind']
run_tests = air_temp + pressure + humidity + water_temp + salt + u + v
final_config = {}
for v in ds:
da = ds[v]
# Don't run tests for unknown variables
if 'standard_name' not in da.attrs or da.attrs['standard_name'] not in run_tests:
continue
# The standard names are identical for the mean and the stddev
# so ignore the stddev version of the variable
if v.endswith('_STDDEV'):
continue
config = default_config.copy()
min_t = str(da.time.min().dt.floor("D").dt.strftime("%Y-%m-%d").data)
max_t = str(da.time.max().dt.ceil("D").dt.strftime("%Y-%m-%d").data)
min_x = float(da.longitude.min().data)
min_y = float(da.latitude.min().data)
max_x = float(da.longitude.max().data)
max_y = float(da.latitude.max().data)
bbox = [min_x, min_y, max_x, max_y]
config["bbox"] = bbox
config["start_time"] = min_t
config["end_time"] = max_t
# Allow custom overrides on a variable name basis
if v in custom_config:
config.update(custom_config[v])
# Allow custom overrides on a standard_name name basis
if da.attrs['standard_name'] in custom_config:
config.update(custom_config[da.attrs['standard_name']])
# Generate the ioos_qc Config object
qc_var = QcVariableConfig(config)
qc_config = qccc.create_config(qc_var)
# Strip off the variable that create_config added
qc_config = list(qc_config.values())[0]
# Add it to the final config
final_config[v] = qc_config
final_config
from ioos_qc.config import Config
from ioos_qc.streams import XarrayStream
from ioos_qc.stores import NetcdfStore
from ioos_qc.results import collect_results
c = Config(final_config)
xs = XarrayStream(ds, time='time', lat='latitude', lon='longitude')
qc_results = xs.run(c)
list_results = collect_results(qc_results, how='list')
list_results
# output_notebook()
# plot = bokeh_plot_collected_results(list_results)
# plotting.show(plot)
```
| github_jupyter |
# Overview
This notebook implements Fourier series simulations.
# Dependencies
```
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
```
# 1.3: Periodicity: Definitions, Examples, and Things to Come
A function $f(t)$ is said to be periodic with period $T$ if there exists a number $T > 0$ such that $f(t + T) = f(t)$ for all $t$.
```
# Example: The basic trigonometric functions are periodic.
def period_one_sine(x):
return np.sin(2*np.pi*x)
arange = np.arange(0, 5, 0.01)
sin_out = period_one_sine(arange)
plt.figure(figsize=(20, 4))
plt.plot(arange, sin_out)
plt.axhline(y=0, linestyle='--', color='black')
for i in range(0, 6):
plt.axvline(x=i, linestyle='--', color='black')
```
Note that the sine function above has period one. For a general sinusoid of the form $f(t) = \sin(\omega t + \phi)$ the period is given by $\frac{2\pi}{\omega}$. Note $\phi$ is a phase shift.
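As a quick sanity check (standard library only, with arbitrary illustrative values for $\omega$ and $\phi$), we can verify that $f$ repeats after $T = 2\pi/\omega$:

```python
import math

omega, phi = 3.0, 0.7          # arbitrary illustrative values
T = 2 * math.pi / omega        # the claimed period

f = lambda t: math.sin(omega * t + phi)
for t in [0.0, 0.5, 1.3]:
    assert abs(f(t + T) - f(t)) < 1e-12
```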
## Combining sinusoids
Note that summing sinusoids leads to a periodic function whose period is the least common multiple of the periods of the sinusoids being summed.
```
def sinosoid(x):
return np.sin(2*np.pi*x) + np.cos(4*2*np.pi*x)
sin_out = sinosoid(arange)
plt.figure(figsize=(20, 4))
plt.plot(arange, sin_out)
plt.axhline(y=0, linestyle='--', color='black')
for i in range(0, 6):
plt.axvline(x=i, linestyle='--', color='black')
```
Note that the combination of sinusoids can be arbitrarily complex and the above statement still holds.
## Aperiodic sinusoids
A sum of sinusoids is only periodic if the ratios of the periods being combined are rational numbers.
```
def sinosoid(x):
return np.sin(2*np.pi*x) + np.sin(np.sqrt(2)*2*np.pi*x)
sin_out = sinosoid(arange)
plt.figure(figsize=(20, 4))
plt.plot(arange, sin_out)
plt.axhline(y=0, linestyle='--', color='black')
```
# Audio
We can "hear" the audio interpretation of sinusoids, since fundamentally sound is simply pressure waves passing through air.
```
from IPython.display import Audio
sr = 22050 # sample rate
concert_a = lambda x: np.cos(2*np.pi*440*x)
sin_out = concert_a(np.arange(0, 4, 1/sr))  # sample at the playback rate so the pitch is correct
len(sin_out)
Audio(sin_out, rate=sr)
```
# Fourier coefficients
Let's use the theory of Fourier series to approximate a few periodic functions. We'll start with the switch function.
```
def fourier_series(period, N):
"""Calculate the Fourier series coefficients up to the Nth harmonic"""
result = []
T = len(period)
t = np.arange(T)
for n in range(N+1):
an = 2/T*(period * np.cos(2*np.pi*n*t/T)).sum()
bn = 2/T*(period * np.sin(2*np.pi*n*t/T)).sum()
result.append((an, bn))
return np.array(result)
Fs = 10000
func1 = lambda t: (abs((t%1)-0.25) < 0.25).astype(float) - (abs((t%1)-0.75) < 0.25).astype(float)
func2 = lambda t: t % 1
func3 = lambda t: (abs((t%1)-0.5) < 0.25).astype(float) + 8*(abs((t%1)-0.5)) * (abs((t%1)-0.5)<0.25)
func4 = lambda t: ((t%1)-0.5)**2
t = np.arange(-1.5, 2, 1/Fs)
plt.figure(figsize=(20, 8))
plt.subplot(221); plt.plot(t, func1(t))
plt.subplot(222); plt.plot(t, func2(t))
plt.subplot(223); plt.plot(t, func3(t))
plt.subplot(224); plt.plot(t, func4(t))
plt.figure(figsize=(20, 9))
t_period = np.arange(0, 1, 1/Fs)
F1 = fourier_series(func1(t_period), 20)
plt.subplot(421); plt.stem(F1[:,0])
plt.subplot(422); plt.stem(F1[:,1])
F2 = fourier_series(func2(t_period), 20)
plt.subplot(423); plt.stem(F2[:,0])
plt.subplot(424); plt.stem(F2[:,1])
F3 = fourier_series(func3(t_period), 20)
plt.subplot(425); plt.stem(F3[:,0])
plt.subplot(426); plt.stem(F3[:,1])
F4 = fourier_series(func4(t_period), 20)
plt.subplot(427); plt.stem(F4[:,0])
plt.subplot(428); plt.stem(F4[:,1])
def reconstruct(P, anbn):
result = 0
t = np.arange(P)
for n, (a, b) in enumerate(anbn):
if n == 0:
a = a/2
result = result + a*np.cos(2*np.pi*n*t/P) + b * np.sin(2*np.pi*n*t/P)
return result
plt.figure(figsize=(20, 8))
F = fourier_series(func1(t_period), 100)
plt.plot(t_period, func1(t_period), label='Original', lw=2)
plt.plot(t_period, reconstruct(len(t_period), F[:20,:]), label='Reconstructed with 20 Harmonics');
plt.plot(t_period, reconstruct(len(t_period), F[:100,:]), label='Reconstructed with 100 Harmonics');
```
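As a sanity check on the coefficient formulas above (a plain-Python sketch, no NumPy), sampling $f(t) = \cos(2\pi t)$ over one period should give $a_1 \approx 1$ and every other coefficient $\approx 0$:

```python
import math

# Sample f(t) = cos(2*pi*t) over one period and apply the same Riemann-sum
# coefficient formulas as fourier_series above.
T = 1000
samples = [math.cos(2 * math.pi * t / T) for t in range(T)]

def coeff(n):
    an = 2 / T * sum(s * math.cos(2 * math.pi * n * t / T) for t, s in enumerate(samples))
    bn = 2 / T * sum(s * math.sin(2 * math.pi * n * t / T) for t, s in enumerate(samples))
    return an, bn

a1, b1 = coeff(1)
a2, b2 = coeff(2)
assert abs(a1 - 1) < 1e-6 and abs(b1) < 1e-6
assert abs(a2) < 1e-6 and abs(b2) < 1e-6
```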
| github_jupyter |
# INPUT / OUTPUT
## open file handle function
`open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)`
- file : filename (or file path)
- mode :
- r : read mode (default)
- w : write mode, truncating the file first
- x : open for exclusive creation, failing if the file already exists
- a : open for writing, appending to the end of the file if it exists
- b : binary mode
- t : text mode (default)
- \+ : open for updating (reading and writing)
- So the default mode is 'rt'
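As a quick illustration of that default (the file path below is just a temporary stand-in):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'mode-demo.txt')
with open(path, 'w') as f:
    f.write('hello')

with open(path) as f:       # no mode given, so 'rt' is used
    mode_seen = f.mode      # text mode is implicit in the reported mode
    content = f.read()
os.remove(path)
assert mode_seen == 'r' and content == 'hello'
```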
```
# read text mode
f1 = open('../README.md', 'rt', encoding='utf-8')
first_line = f1.readline() ## Read 1 line, stop at \n char
print(first_line)
f1.close() # ensure to close file
# write mode
import random
import os
def check_file_exists(filename):
    if os.path.isfile(filename):
        f_stat = os.stat(filename)
        print(f'{filename} exists with {f_stat.st_size} bytes')
    else:
        print(f'{filename} does not exist')
random_file3 = f'/tmp/python-write-{random.randint(100, 999)}'
check_file_exists(random_file3)
f3 = open(random_file3, 'w')
w1 = f3.write("Hello world")
w2 = f3.write("\nTiếng Việt")
print(w1, w2) # w1, w2 are the number of characters (text mode) or bytes (binary mode) written to the file
f3.close() # ensure to close file
check_file_exists(random_file3) # 4 extra bytes: the two accented chars (ế, ệ) take 3 bytes each in UTF-8
# read binary mode
def read_line_by_line_binary(filename):
f2 = open(filename, 'rb')
# read line by line
for line in f2:
print(line)
f2.close() # ensure to close file
read_line_by_line_binary(random_file3)
# read with update
f4 = open(random_file3, 'rt+')
first_line = f4.readline()
print(first_line)
f4.write("\nNew line")
f4.close()
read_line_by_line_binary(random_file3) # note: after reading to EOF, the 'r+' write landed at the end of the file
# seek
f5 = open(random_file3, 'a+')
print(f5.tell()) # tell : current position
f5.seek(6)
print(f5.tell()) # tell : current position
read1 = f5.read(5)
print(f5.tell()) # tell : current position, so read change position too
print(read1)
# write on current pos
f5.write(" ! Python !")
print(f5.tell()) # tell : current position
f5.close()
read_line_by_line_binary(random_file3)
# so in UNIX you cannot *insert* bytes in the middle of a file, only overwrite in place
# (imagine the file as an array of bytes: inserting would mean shifting all the bytes to the right)
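# Sketch: to truly insert text mid-file you read, splice in memory, and rewrite.
# (Temporary demo file; the names here are illustrative.)
import os, tempfile
demo = os.path.join(tempfile.gettempdir(), 'python-insert-demo')
with open(demo, 'w') as fd:
    fd.write('HelloWorld')
with open(demo, 'r+') as fd:
    data = fd.read()
    fd.seek(0)
    fd.write(data[:5] + ', ' + data[5:])
with open(demo) as fd:
    spliced = fd.read()
os.remove(demo)
print(spliced)  # Hello, World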
# append mode
f6 = open(random_file3, 'a')
f6.write("\nHehehehe")
f6.close()
read_line_by_line_binary(random_file3)
# with (context manager)
# it automatically closes the file (on finish or on exception => the safe way)
with open(random_file3, 'rt') as f7:
for line in f7:
print(line, end='') # file already have \n char
```
| github_jupyter |
<div style="width: 100%; overflow: hidden;">
<div style="width: 150px; float: left;"> <img src="data/D4Sci_logo_ball.png" alt="Data For Science, Inc" align="left" border="0"> </div>
<div style="float: left; margin-left: 10px;"> <h1>NLP with Deep Learning for Everyone</h1>
<h1>Foundations of NLP</h1>
<p>Bruno Gonçalves<br/>
<a href="http://www.data4sci.com/">www.data4sci.com</a><br/>
@bgoncalves, @data4sci</p></div>
</div>
```
import warnings
warnings.filterwarnings('ignore')
import os
import gzip
from collections import Counter
from pprint import pprint
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import string
import nltk
from nltk.corpus import stopwords
from nltk.text import TextCollection
from nltk.collocations import BigramCollocationFinder
from nltk.metrics.association import BigramAssocMeasures
import sklearn
from sklearn.manifold import TSNE
from tqdm import tqdm
tqdm.pandas()
import watermark
%load_ext watermark
%matplotlib inline
```
We start by printing out the versions of the libraries we're using for future reference
```
%watermark -n -v -m -g -iv
```
Load default figure style
```
plt.style.use('./d4sci.mplstyle')
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
```
# One-Hot Encoding
```
text = """Mary had a little lamb, little lamb,
little lamb. 'Mary' had a little lamb
whose fleece was white as snow.
And everywhere that Mary went
Mary went, MARY went. Everywhere
that mary went,
The lamb was sure to go"""
```
The first step is to tokenize the text. NLTK provides us with a convenient tokenizer function we can use. It is also smart enough to be able to handle different languages
```
tokens = nltk.word_tokenize(text, 'english')
pprint(tokens)
```
You'll note that NLTK includes apostrophes at the beginning of words and returns all punctuation marks.
```
print(tokens[5])
print(tokens[12])
print(tokens[13])
```
We wrap it into a utility function to handle these
```
def tokenize(text, preserve_case=True):
punctuation = set(string.punctuation)
text_words = []
for word in nltk.word_tokenize(text):
# Remove any punctuation characters present in the beginning of the word
while len(word) > 0 and word[0] in punctuation:
word = word[1:]
# Remove any punctuation characters present in the end of the word
while len(word) > 0 and word[-1] in punctuation:
word = word[:-1]
if len(word) > 0:
if preserve_case:
text_words.append(word)
else:
text_words.append(word.lower())
return text_words
text_words = tokenize(text, False)
text_words
```
We can get a quick one-hot encoded version using pandas:
```
one_hot = pd.get_dummies(text_words)
```
Which provides us with a DataFrame where each column corresponds to an individual unique word and each row to a word in our text.
```
temp = one_hot.astype('str')
temp[temp=='0'] = ""
temp = pd.DataFrame(temp)
temp
```
From here we can easily generate a mapping between the words and their numerical ids
```
word_dict = dict(zip(one_hot.columns, np.arange(one_hot.shape[1])))
word_dict
```
Allowing us to easily transform from words to ids and back
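For illustration, inverting the mapping gives the round trip (a toy `word_dict` stands in here for the one built from the one-hot columns):

```python
word_dict = {'a': 0, 'had': 1, 'lamb': 2, 'little': 3, 'mary': 4}  # toy mapping
id_dict = {i: w for w, i in word_dict.items()}                     # inverse mapping

ids = [word_dict[w] for w in ['mary', 'had', 'a', 'little', 'lamb']]
assert [id_dict[i] for i in ids] == ['mary', 'had', 'a', 'little', 'lamb']
```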
# Bag of Words
From the one-hot encoded representation we can easily obtain the bag of words version of our document
```
pd.DataFrame(one_hot.sum(), columns=['Count'])
```
A more general representation would be to use a dictionary mapping each word to the number of times it occurs. This has the added advantage of being a dense representation that doesn't waste any space
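A `collections.Counter` gives exactly this dense dictionary representation (a toy token list stands in here for `text_words`):

```python
from collections import Counter

tokens = ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb']  # stand-in tokens
bag = Counter(tokens)
bag.most_common(2)  # [('little', 2), ('lamb', 2)]
```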
# Stemming
```
words = ['playing', 'loved', 'ran', 'river', 'friendships', 'misunderstanding', 'trouble', 'troubling']
stemmers = {
'LancasterStemmer' : nltk.stem.LancasterStemmer(),
'PorterStemmer' : nltk.stem.PorterStemmer(),
'RegexpStemmer' : nltk.stem.RegexpStemmer('ing$|s$|e$|able$'),
'SnowballStemmer' : nltk.stem.SnowballStemmer('english')
}
matrix = []
for word in words:
row = []
for stemmer in stemmers:
stem = stemmers[stemmer]
row.append(stem.stem(word))
matrix.append(row)
comparison = pd.DataFrame(matrix, index=words, columns=stemmers.keys())
comparison
```
# Lemmatization
```
wordnet = nltk.stem.WordNetLemmatizer()
results_n = [wordnet.lemmatize(word, 'n') for word in words]
results_v = [wordnet.lemmatize(word, 'v') for word in words]
comparison['WordNetLemmatizer Noun'] = results_n
comparison['WordNetLemmatizer Verb'] = results_v
comparison
```
# Stopwords
NLTK provides stopwords for 23 different languages
```
os.listdir('/Users/bgoncalves/nltk_data/corpora/stopwords/')
```
And we can easily use them to filter out meaningless words
```
stop_words = set(stopwords.words('english'))
tokens = tokenize(text)
filtered_sentence = [word if word.lower() not in stop_words else "" for word in tokens]
pd.DataFrame((zip(tokens, filtered_sentence)), columns=['Original', 'Filtered']).set_index('Original')
```
# N-Grams
```
def get_ngrams(text, length):
from nltk.util import ngrams
n_grams = ngrams(tokenize(text), length)
return [ ' '.join(grams) for grams in n_grams]
get_ngrams(text.lower(), 2)
```
# Collocations
```
bigrams = BigramCollocationFinder.from_words(tokenize(text, False))
scored = bigrams.score_ngrams(BigramAssocMeasures.likelihood_ratio)
scored
```
# TF/IDF
NLTK has the TextCollection object that allows us to easily compute tf-idf scores from a given corpus. We generate a small corpus by splitting our text by sentence
```
corpus = text.split('.')
```
We have 4 documents in our corpus
```
len(corpus)
```
NLTK expects the corpus to be tokenized so we do that now
```
corpus = [tokenize(doc, preserve_case=False) for doc in corpus]
corpus
```
We initialize the TextCollection object with our corpus
```
nlp = TextCollection(corpus)
```
This object provides us with a great deal of functionality, like total frequency counts
```
nlp.plot()
```
Individual tf/idf scores, etc
```
nlp.tf_idf('Mary', corpus[3])
```
To get the full TF/IDF scores for our corpus, we do
```
TFIDF = []
for doc in corpus:
current = {}
for token in doc:
current[token] = nlp.tf_idf(token, doc)
TFIDF.append(current)
TFIDF
```
<div style="width: 100%; overflow: hidden;">
<img src="data/D4Sci_logo_full.png" alt="Data For Science, Inc" align="center" border="0" width=300px>
</div>
| github_jupyter |
Early stopping of model simulations
===================
For certain distance functions and certain models it is possible to calculate the
distance on-the-fly while the model is running. This is e.g. possible if the distance is calculated as a cumulative sum and the model is a stochastic process. For example, Markov Jump Processes belong to this class. However, we want to keep things simple here and only demonstrate how to use the pyABC interface in such cases. So don't expect a sophisticated (or even useful) model implementation here.
In this example we'll use in particular the `IntegratedModel` and `ModelResult` classes for integrated simulation and accepting/rejecting a parameter.
Let's start with the necessary imports:
```
# install if not done yet
!pip install pyabc --quiet
%matplotlib inline
import pyabc
from pyabc import (ABCSMC,
RV, Distribution,
IntegratedModel, ModelResult,
MedianEpsilon,
LocalTransition,
NoDistance)
from pyabc.sampler import SingleCoreSampler
import matplotlib.pyplot as plt
import os
import tempfile
import pandas as pd
import numpy as np
pyabc.settings.set_figure_params('pyabc') # for beautified plots
db_path = ("sqlite:///" +
os.path.join(tempfile.gettempdir(), "test.db"))
```
We define here a (very) simple stochastic process, purely for demonstrative reasons.
First, we fix the number of steps *n_steps* to 30.
```
n_steps = 30
```
We then define our process as follows:
$$
x(t+1) = x(t) + s \xi,
$$
in which $\xi \sim U(0, 1)$ denotes a random variable uniformly distributed
in $[0, 1]$, and $s$ is the step size, $s = $ step_size.
The function `simulate` implements this stochastic process:
```
def simulate(step_size):
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
trajectory[t] = trajectory[t-1] + xi * step_size
return trajectory
```
We take as distance function between two such generated trajectories
the sum of the absolute values of the pointwise differences.
```
def distance(trajectory_1, trajectory_2):
return np.absolute(trajectory_1 - trajectory_2).sum()
```
Let's run the simulation and plot the trajectories to get a better
idea of the so generated data.
We set the ground truth step size *gt_step_size* to
```
gt_step_size = 5
```
This will be used to generate the data which will be subject to inference later on.
```
gt_trajectory = simulate(gt_step_size)
trajectory_2 = simulate(2)
dist_1_2 = distance(gt_trajectory, trajectory_2)
plt.plot(gt_trajectory,
         label="Step size = {} (Ground Truth)".format(gt_step_size))
plt.plot(trajectory_2,
         label="Step size = 2")
plt.legend();
plt.title("Distance={:.2f}".format(dist_1_2));
```
As you might have noted already, we could calculate the distance on the fly:
after each step in the stochastic process, we increment the cumulative sum.
This should save time in the ABC-SMC run later on.
Let's start with the code first and explain it afterwards.
```
class MyStochasticProcess(IntegratedModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.n_early_stopped = 0
def integrated_simulate(self, pars, eps):
cumsum = 0
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
next_val = trajectory[t-1] + xi * pars["step_size"]
cumsum += abs(next_val - gt_trajectory[t])
trajectory[t] = next_val
if cumsum > eps:
self.n_early_stopped += 1
return ModelResult(accepted=False)
return ModelResult(accepted=True,
distance=cumsum,
sum_stat={"trajectory": trajectory})
```
Our `MyStochasticProcess` class is a subclass of `pyabc.model.IntegratedModel`.
The `__init__` method is not strictly necessary; here, we only use it to keep
track of how often early stopping actually happened.
More interesting is the `integrated_simulate` method. This is where the actual
work happens.
As already said, we calculate the cumulative sum on the fly.
In each simulation step, we update the cumulative sum.
Note that *gt_trajectory* is actually a global variable here.
If *cumsum > eps* at some step of the simulation, we return immediately,
indicating that the parameter was not accepted
by returning `ModelResult(accepted=False)`.
If *cumsum* never exceeds *eps*, the parameter is accepted. In this case,
we return an accepted result together with the calculated distance and the trajectory.
Note that, while it is mandatory to return the distance, returning the trajectory is optional. If it is returned, it is stored in the database.
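The accept/reject logic of `integrated_simulate` can also be illustrated framework-free. Below is a stdlib-only sketch (not the pyABC API) that mirrors the loop above with a toy ground-truth trajectory; all names here are illustrative:

```python
import random

# Simulate the process, accumulate the L1 distance to gt_trajectory, and
# abort as soon as the running sum exceeds the acceptance threshold eps.
def early_stopping_distance(step_size, eps, gt_trajectory, seed=0):
    rng = random.Random(seed)
    x, cumsum = 0.0, 0.0
    for t in range(1, len(gt_trajectory)):
        x += rng.uniform(0, 1) * step_size
        cumsum += abs(x - gt_trajectory[t])
        if cumsum > eps:
            return False, cumsum   # early stop: parameter rejected
    return True, cumsum            # full simulation: parameter accepted

# toy "ground truth": the mean path for step size 5 (increments average 2.5)
gt = [2.5 * t for t in range(30)]
print(early_stopping_distance(step_size=5, eps=1e9, gt_trajectory=gt)[0])   # True
print(early_stopping_distance(step_size=50, eps=10.0, gt_trajectory=gt)[0])
```

A very loose `eps` always accepts, while an implausible step size with a tight `eps` is rejected after only a few simulated steps, which is exactly where the speedup comes from.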
We define a uniform prior on the interval $[0, 10]$ for the step size:
```
prior = Distribution(step_size=RV("uniform", 0, 10))
```
and create an instance of our integrated model `MyStochasticProcess`:
```
model = MyStochasticProcess()
```
We then configure the ABC-SMC run.
As the distance function is calculated within `MyStochasticProcess`, we pass
a `NoDistance` object to the `distance_function` parameter.
As sampler, we use the `SingleCoreSampler` here. We do so to correctly keep track of `MyStochasticProcess.n_early_stopped`. Otherwise, the counter gets incremented in subprocesses and we don't see anything here.
Of course, you could also use the `MyStochasticProcess` model in a multi-core or
distributed setting.
Importantly, we pre-specify the initial acceptance threshold to a given value, here to 300. Otherwise, pyABC will try to automatically determine it by drawing samples from the prior and evaluating the distance function.
However, we do not have a distance function here, so this approach would break down.
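For intuition, this is roughly what the automatic calibration would do if a distance were available: draw parameters from the prior, simulate, and take the median distance to the data. The stdlib-only sketch below uses a toy ground truth and illustrative names; pyABC's actual implementation differs:

```python
import random
from statistics import median

def calibrate_initial_epsilon(n_samples=100, seed=0):
    """Median prior-predictive distance to a toy ground-truth trajectory."""
    rng = random.Random(seed)
    gt = [2.5 * t for t in range(30)]            # stand-in for gt_trajectory

    def simulate(step_size):
        x, traj = 0.0, [0.0]
        for _ in range(29):
            x += rng.uniform(0, 1) * step_size
            traj.append(x)
        return traj

    def dist(a, b):
        return sum(abs(u - v) for u, v in zip(a, b))

    draws = [rng.uniform(0, 10) for _ in range(n_samples)]   # prior U(0, 10)
    return median(dist(simulate(s), gt) for s in draws)

print(calibrate_initial_epsilon())
```

For this prior and toy data, the resulting median is typically of the same order of magnitude as the 300 we pre-specify above.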
```
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=NoDistance(),
sampler=SingleCoreSampler(),
population_size=30,
transitions=LocalTransition(k_fraction=.2),
eps=MedianEpsilon(300, median_multiplier=0.7))
```
We then indicate that we want to start a new ABC-SMC run:
```
abc.new(db_path)
```
We do not need to pass any data here. However, we could additionally pass
a dictionary `{"trajectory": gt_trajectory}` to the `new` method, purely for
storage purposes; the data would be ignored during the ABC-SMC run.
Then, let's start the sampling
```
h = abc.run(minimum_epsilon=40, max_nr_populations=3)
```
and check how often the early stopping was used:
```
model.n_early_stopped
```
Quite a lot actually.
Lastly, we estimate KDEs of the different populations to inspect our results
and plot everything (the vertical dashed line marks the ground-truth step size).
```
from pyabc.visualization import plot_kde_1d
fig, ax = plt.subplots()
for t in range(h.max_t+1):
particles = h.get_distribution(m=0, t=t)
plot_kde_1d(*particles, "step_size",
label="t={}".format(t), ax=ax,
xmin=0, xmax=10, numx=300)
ax.axvline(gt_step_size, color="k", linestyle="dashed");
```
That's it. You should be able to see how the distribution
contracts around the true parameter.
| github_jupyter |
### This notebook covers how to get statistics on videos returned for a list of search terms on YouTube with the use of YouTube Data API v3.
First, go to the [Google Developer Console](http://console.developers.google.com/) and enable YouTube Data API v3 by clicking the "+ ENABLE APIS AND SERVICES" button and searching for YouTube Data API v3. Next, get your unique developer key from the Credentials settings. Then run the following in the command line to install the necessary client: `pip install --upgrade google-api-python-client`
There are only two things you need to modify in this notebook: 1) paste your developer key in the code below, and 2) change the search terms in the `keywords` list to whatever you want.
<img src="https://static.wixstatic.com/media/1ea3da_6e02db1850d845ec9e4325ee8c56eb12~mv2.png/v1/fill/w_1035,h_338,al_c,q_80,usm_0.66_1.00_0.01/1ea3da_6e02db1850d845ec9e4325ee8c56eb12~mv2.webp">
```
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.tools import argparser
import pandas as pd
import numpy as np
import pprint
import matplotlib.pyplot as plt
DEVELOPER_KEY = "" #paste key here in between ""
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
#input list of search terms of interest
keywords = ["bts","blackpink","twice","one direction"]
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,developerKey=DEVELOPER_KEY)
max_results=50
order="relevance"
token=None
location=None
location_radius=None
title = []
channelId = []
channelTitle = []
categoryId = []
videoId = []
pubDate = []
viewCount = []
likeCount = []
dislikeCount = []
commentCount = []
favoriteCount = []
category = []
tags = []
videos = []
keyword = []
for q in keywords:
search_response = youtube.search().list(
q=q,
type="video",
pageToken=token,
order = order,
part="id,snippet",
maxResults=max_results,
location=location,
locationRadius=location_radius).execute()
for search_result in search_response.get("items", []):
keyword.append(q)
if search_result["id"]["kind"] == "youtube#video":
title.append(search_result['snippet']['title'])
videoId.append(search_result['id']['videoId'])
response = youtube.videos().list(
part='statistics, snippet',
id=search_result['id']['videoId']).execute()
channelId.append(response['items'][0]['snippet']['channelId'])
channelTitle.append(response['items'][0]['snippet']['channelTitle'])
pubDate.append(response['items'][0]['snippet']['publishedAt'])
categoryId.append(response['items'][0]['snippet']['categoryId'])
favoriteCount.append(response['items'][0]['statistics']['favoriteCount'])
viewCount.append(response['items'][0]['statistics']['viewCount'])
            try:
                likeCount.append(response['items'][0]['statistics']['likeCount'])
            except KeyError:
                likeCount.append("NaN")
            try:
                dislikeCount.append(response['items'][0]['statistics']['dislikeCount'])
            except KeyError:
                dislikeCount.append("NaN")
if 'commentCount' in response['items'][0]['statistics'].keys():
commentCount.append(response['items'][0]['statistics']['commentCount'])
else:
commentCount.append(0)
if 'tags' in response['items'][0]['snippet'].keys():
tags.append(response['items'][0]['snippet']['tags'])
else:
tags.append("No Tags")
youtube_dict = {
    'pubDate': pubDate, 'tags': tags, 'channelId': channelId,
    'channelTitle': channelTitle, 'categoryId': categoryId, 'title': title,
    'videoId': videoId, 'viewCount': viewCount, 'likeCount': likeCount,
    'dislikeCount': dislikeCount, 'favoriteCount': favoriteCount,
    'commentCount': commentCount, 'keyword': keyword}
df = pd.DataFrame(youtube_dict)
df.head()
df.shape
df['pubDate'] = pd.to_datetime(df.pubDate)
df['publishedDate'] = df['pubDate'].dt.strftime('%d/%m/%Y')
#rearranging order of columns
df1 = df[['keyword','publishedDate','title','viewCount','channelTitle','commentCount','likeCount','dislikeCount','tags','favoriteCount','videoId','channelId','categoryId']]
df1.columns = ['keyword','publishedDate','Title','viewCount','channelTitle','commentCount','likeCount','dislikeCount','tags','favoriteCount','videoId','channelId','categoryId']
df1.head()
df1.to_csv("youtube_bands4.csv") #download to local drive as a csv file
data = pd.read_csv("youtube_bands4.csv")
data.dtypes
list(data)
data = data.drop('Unnamed: 0', axis=1)
list(data)
data[data.isna().any(axis=1)] #see how many rows has missing data
numeric_dtype = ['viewCount','commentCount','likeCount','dislikeCount','favoriteCount']
for i in numeric_dtype:
data[i] = data[i]/1000000 #converting to millions
data
#sorting the data by likeCount in descending order
data.sort_values("likeCount", axis = 0, ascending = False,
inplace = True, na_position ='last')
focus1 = data.head(10) #select top 10 results by likeCount
focus1
focus1.shape #check number of rows and columns
fig, ax = plt.subplots()
ax.barh(range(focus1.shape[0]),focus1['likeCount'])
ax.set_yticks(range(focus1.shape[0]))
ax.set_yticklabels(focus1['Title'])
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('likeCount in Millions')
plt.show()
#sorting the data by viewCount in descending order
data.sort_values("viewCount", axis = 0, ascending = False,
inplace = True, na_position ='last')
focus1 = data.head(10) #select top 10 results by viewCount
fig, ax = plt.subplots()
ax.barh(range(focus1.shape[0]),focus1['viewCount'])
ax.set_yticks(range(focus1.shape[0]))
ax.set_yticklabels(focus1['Title'])
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('viewCount in Millions')
plt.show()
```
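The loop above only fetches the first page of results (`token=None`), so at most `max_results=50` videos per keyword. To collect more, you can follow the `nextPageToken` field of each response. The sketch below uses a stubbed `fetch_page` function standing in for `youtube.search().list(...).execute()`; the stub and its fake pages are purely illustrative, while the `items`/`nextPageToken` field names follow the YouTube Data API v3 response format:

```python
# Walk up to max_pages of search results by chaining nextPageToken values.
def collect_video_ids(fetch_page, max_pages=3):
    ids, token = [], None
    for _ in range(max_pages):
        response = fetch_page(token)
        ids += [item["id"]["videoId"]
                for item in response.get("items", [])
                if item["id"]["kind"] == "youtube#video"]
        token = response.get("nextPageToken")
        if token is None:
            break
    return ids

# usage with a fake two-page API, for illustration:
pages = {None: {"items": [{"id": {"kind": "youtube#video", "videoId": "a"}}],
                "nextPageToken": "p2"},
         "p2": {"items": [{"id": {"kind": "youtube#video", "videoId": "b"}}]}}
print(collect_video_ids(pages.get))  # ['a', 'b']
```

In the real notebook, `fetch_page` would be a small wrapper that passes `pageToken=token` into `youtube.search().list(...)` and calls `.execute()`.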
#### References:
https://medium.com/greyatom/youtube-data-in-python-6147160c5833
https://medium.com/@RareLoot/extract-youtube-video-statistics-based-on-a-search-query-308afd786bfe
https://developers.google.com/youtube/v3/getting-started
| github_jupyter |
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os.path
import sys
import time
import tensorflow as tf
TRAIN_FILE = 'train.tfrecords'
def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'image_raw': tf.FixedLenFeature([],tf.string),
'age': tf.FixedLenFeature([], tf.int64),
'gender':tf.FixedLenFeature([], tf.int64)
})
# Convert from a scalar string tensor (whose single string has
# length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
# [mnist.IMAGE_PIXELS].
image = tf.image.decode_jpeg(features['image_raw'], channels=3)
image = tf.image.resize_images(image,[128,128])
#image = tf.cast(image,tf.uint8)
# image.set_shape([mnist.IMAGE_PIXELS])
# OPTIONAL: Could reshape into a 28x28 image and apply distortions
# here. Since we are not applying any distortions in this
# example, and the next step expects the image to be flattened
# into a vector, we don't bother.
# Convert from [0, 255] -> [-0.5, 0.5] floats.
#image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
# Convert label from a scalar uint8 tensor to an int32 scalar.
age = features['age']#tf.cast(features['age'], tf.int32)
gender = tf.cast(features['gender'], tf.int32)
return image, age, gender
def inputs(train, batch_size, num_epochs):
"""Reads input data num_epochs times.
Args:
train: Selects between the training (True) and validation (False) data.
batch_size: Number of examples per returned batch.
num_epochs: Number of times to read the input data, or 0/None to
train forever.
Returns:
A tuple (images, labels), where:
* images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
in the range [-0.5, 0.5].
* labels is an int32 tensor with shape [batch_size] with the true label,
a number in the range [0, mnist.NUM_CLASSES).
  Note that a tf.train.QueueRunner is added to the graph, which
must be run using e.g. tf.train.start_queue_runners().
"""
if not num_epochs: num_epochs = None
# filename = os.path.join(FLAGS.train_dir,
# TRAIN_FILE if train else VALIDATION_FILE)
with tf.name_scope('input'):
filename_queue = tf.train.string_input_producer(
[TRAIN_FILE], num_epochs=num_epochs)
# Even when reading in multiple threads, share the filename
# queue.
image, age, gender = read_and_decode(filename_queue)
# Shuffle the examples and collect them into batch_size batches.
# (Internally uses a RandomShuffleQueue.)
# We run this in two threads to avoid being a bottleneck.
images, sparse_labels = tf.train.shuffle_batch(
[image, age], batch_size=batch_size, num_threads=2,
capacity=1000 + 3 * batch_size,
# Ensures a minimum amount of shuffling of examples.
min_after_dequeue=1000)
return images, sparse_labels
def run_training():
"""Train MNIST for a number of steps."""
# Tell TensorFlow that the model will be built into the default Graph.
with tf.Graph().as_default():
# Input images and labels.
images, labels = inputs(train=True, batch_size=16,
num_epochs=1)
# Build a Graph that computes predictions from the inference model.
#logits = mnist.inference(images,
# FLAGS.hidden1,
# FLAGS.hidden2)
# Add to the Graph the loss calculation.
#loss = mnist.loss(logits, labels)
# Add to the Graph operations that train the model.
#train_op = mnist.training(loss, FLAGS.learning_rate)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (the trained variables and the
# epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
step = 0
while not coord.should_stop():
start_time = time.time()
# Run one step of the model. The return values are
# the activations from the `train_op` (which is
# discarded) and the `loss` op. To inspect the values
# of your ops or variables, you may include them in
# the list passed to sess.run() and the value tensors
# will be returned in the tuple from the call.
#_, loss_value = sess.run([train_op, loss])
        # sess.run on the tensor directly returns the ndarray
        # (sess.run([images]) would return a one-element list without a .shape)
        images_val = sess.run(images)
        print(images_val.shape)
duration = time.time() - start_time
# Print an overview fairly often.
#if step % 100 == 0:
# print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value,
# duration))
step += 1
except tf.errors.OutOfRangeError:
print('Done training for %d epochs, %d steps.' % (1, step))
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
images, labels = inputs(train=True, batch_size=16,
num_epochs=1)
# Build a Graph that computes predictions from the inference model.
#logits = mnist.inference(images,
# FLAGS.hidden1,
# FLAGS.hidden2)
# Add to the Graph the loss calculation.
#loss = mnist.loss(logits, labels)
# Add to the Graph operations that train the model.
#train_op = mnist.training(loss, FLAGS.learning_rate)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (the trained variables and the
# epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
images_val,age_val = sess.run([images,labels])
images_val[0].shape
import matplotlib.pyplot as plt
plt.imshow(images_val[6]/255.0)
plt.show()
sess.close()
```
| github_jupyter |
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
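The 80/20 idea can be sketched by splitting indices with the standard library. This is just an illustration (FashionMNIST already ships its own test split); the function name is mine:

```python
import random

# Shuffle the index range and carve off valid_frac of it for validation.
def holdout_split(n, valid_frac=0.2, seed=0):
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    n_valid = int(n * valid_frac)
    return indices[n_valid:], indices[:n_valid]   # train, validation

train_idx, valid_idx = holdout_split(60000)
print(len(train_idx), len(valid_idx))  # 48000 12000
```

In PyTorch, such index lists could feed a `torch.utils.data.SubsetRandomSampler` passed to the `DataLoader`.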
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`; try it yourself. It compares the single element in each row of `top_class` with each element in `labels`, which returns 64 True/False boolean values for each row.
```
equals = top_class == labels.view(*top_class.shape)
equals.shape
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it were that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
```
>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
## TODO: Implement the validation pass and print out the validation accuracy
with torch.no_grad():
# validation pass here
for images, labels in testloader:
#capture the loss
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
#plot validation vs training loss
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
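The checkpointing part of early-stopping can be sketched framework-agnostically: remember the epoch with the lowest validation loss seen so far and fire a save callback when it improves. In PyTorch the callback would typically wrap `torch.save(model.state_dict(), path)`; the class and names below are illustrative:

```python
# Track the best (lowest) validation loss and invoke a checkpoint callback
# whenever a new best is observed.
class BestModelTracker:
    def __init__(self):
        self.best_loss = float("inf")
        self.best_epoch = None

    def update(self, epoch, val_loss, save_checkpoint=lambda: None):
        if val_loss < self.best_loss:
            self.best_loss, self.best_epoch = val_loss, epoch
            save_checkpoint()   # e.g. torch.save(model.state_dict(), path)
            return True         # improved
        return False

tracker = BestModelTracker()
for epoch, loss in enumerate([0.52, 0.44, 0.41, 0.45, 0.48]):
    tracker.update(epoch, loss)
print(tracker.best_epoch, tracker.best_loss)  # 2 0.41
```

After training, you would reload the checkpoint saved at `best_epoch` instead of the final weights.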
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop units during training. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
```python
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
## TODO: Define your model with dropout added
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
#Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
#output so no dropout
x = F.log_softmax(self.fc4(x), dim=1)
return x
#training setup
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
#reset the gradient so it doesn't accumulate
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
        # Turn off gradients for validation; saves memory and computation
        with torch.no_grad():
            # set the model to evaluation mode (disables dropout)
            model.eval()
for images, labels in testloader:
log_ps = model(images)
test_loss += criterion(log_ps, labels)
ps = torch.exp(log_ps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
model.train()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(train_losses[-1]),
"Test Loss: {:.3f}.. ".format(test_losses[-1]),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
| github_jupyter |
```
import plotly
plotly.offline.init_notebook_mode(connected=True)
from scipy.optimize import minimize
def objective_function_test(parameters):
a, b = parameters
return float(a + b)
minimize(objective_function_test, (8, 8), bounds=((0, None), (0, None)))
import numpy as np
import ccal
np.random.seed(seed=ccal.RANDOM_SEED)
from statsmodels.sandbox.distributions.extras import ACSkewT_gen
def objective_function(parameters, y_pdf, model, x):
degree_of_freedom, shape, location, scale = parameters
    # sum of absolute pointwise errors
    return np.absolute(
        y_pdf - model.pdf(x, degree_of_freedom, shape, loc=location, scale=scale)
    ).sum()
location = 0
scale = 1
y = np.random.normal(loc=location, scale=scale, size=128)
over = (y.max() - y.min()) * 0
x = np.linspace(y.min() - over, y.max() + over, num=y.size)
model = ACSkewT_gen()
fit_result = model.fit(y, loc=location, scale=scale)
y_fit_pdf = model.pdf(
x, fit_result[0], fit_result[1], loc=fit_result[2], scale=fit_result[3]
)
for method in (
"Nelder-Mead",
"Powell",
"CG",
# "BFGS",
# "Newton-CG",
# "L-BFGS-B",
# "TNC",
# "COBYLA",
# "SLSQP",
# "trust-constr",
# "dogleg",
# "trust-ncg",
# "trust-exact",
# "trust-krylov",
):
print(method)
minimize_result = minimize(
objective_function,
(1, 0, location, scale),
args=(y, model, x),
method=method,
bounds=((0, None), (0, 24), (-8, 8), (0, None)),
)
ccal.plot_and_save(
{
"layout": {},
"data": [
{
"type": "histogram",
"name": "Y Distribution",
"histnorm": "probability density",
"x": y,
},
{
"type": "scatter",
"name": "Y PDF (parameters from fit)",
"x": x,
"y": y_fit_pdf,
},
{
"type": "scatter",
"name": "Y PDF (parameters from minimize)",
"x": x,
"y": model.pdf(
x,
np.e ** minimize_result.x[0],
minimize_result.x[1],
loc=minimize_result.x[2],
scale=minimize_result.x[3],
),
},
],
},
None,
)
```
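A detail worth flagging when writing sum-of-absolute-error objectives such as `objective_function` above: Python's `**` is right-associative, so `x ** 2 ** 0.5` parses as `x ** (2 ** 0.5)` rather than `(x ** 2) ** 0.5` (i.e. `abs(x)`), and negative residuals then silently become complex numbers (NaNs for NumPy arrays). A quick check:

```python
# ** binds right-to-left: x ** 2 ** 0.5 means x ** (2 ** 0.5)
print(((-3) ** 2) ** 0.5)        # 3.0, i.e. abs(-3)
print(type((-3) ** 2 ** 0.5))    # <class 'complex'>
```

Using `np.absolute(...)` (or explicit parentheses) avoids the surprise entirely.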
| github_jupyter |
<a href="https://colab.research.google.com/github/bakkiaraj/AIML_CEP_2021/blob/main/Sentiment_Analysis_TA_session_Dec_11.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
We will work with the IMDB dataset, which contains movie reviews from IMDB. Each review is labeled 1 (positive) or 0 (negative), derived from the rating users provided together with their reviews.\
This dataset is available [here](https://www.kaggle.com/columbine/imdb-dataset-sentiment-analysis-in-csv-format/download).
Code referred from [here](https://www.kaggle.com/arunmohan003/sentiment-analysis-using-lstm-pytorch/notebook)
#Importing Libraries and Data
```
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim.lr_scheduler as lr_scheduler
from torch.utils.data import TensorDataset, DataLoader
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from collections import Counter
import string
import re
import seaborn as sns
from tqdm import tqdm
import matplotlib.pyplot as plt
SEED = 1234
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
## upload data files to this colab notebook and read them using the respective paths
df_train = pd.read_csv('/content/Train.csv')
df_val = pd.read_csv('/content/Valid.csv')
df_test = pd.read_csv('/content/Test.csv')
df_train.head()
x_train, y_train = df_train['text'].values, df_train['label'].values
x_val, y_val = df_val['text'].values, df_val['label'].values
x_test, y_test = df_test['text'].values, df_test['label'].values
print(f'shape of train data is {x_train.shape}')
print(f'shape of val data is {x_val.shape}')
print(f'shape of test data is {x_test.shape}')
#plot of positive and negative class count in training set
dd = pd.Series(y_train).value_counts()
sns.barplot(x=np.array(['negative','positive']),y=dd.values)
plt.show()
```
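Seeding NumPy and PyTorch as above makes shuffling and weight initialization reproducible across runs. A minimal sanity check of the seeding idea, using NumPy alone (a sketch, not part of the original notebook):

```python
import numpy as np

# re-seeding with the same value must reproduce the exact same draws
np.random.seed(1234)
first = np.random.rand(3)
np.random.seed(1234)
second = np.random.rand(3)
assert np.allclose(first, second)
```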
#Pre-Processing Data
```
def preprocess_string(s):
""" preprocessing string to remove special characters, white spaces and digits """
s = re.sub(r"[^\w\s]", '', s) # Remove all non-word characters (everything except numbers and letters)
s = re.sub(r"\s+", '', s) # Replace all runs of whitespaces with no space
s = re.sub(r"\d", '', s) # replace digits with no space
return s
def create_corpus(x_train):
""" creates dictionary of 1000 most frequent words in the training set and assigns token number to the words, returns dictionay (named corpus)"""
word_list = []
stop_words = set(stopwords.words('english'))
for sent in x_train:
for word in sent.lower().split():
word = preprocess_string(word)
if word not in stop_words and word != '':
word_list.append(word)
word_count = Counter(word_list)
# sorting on the basis of most common words
top_words = sorted(word_count, key=word_count.get, reverse=True)[:1000]
# creating a dict
corpus = {w:i+1 for i,w in enumerate(top_words)}
return corpus
def preprocess(x, y, corpus):
""" encodes reviews according to created corpus dictionary"""
x_new = []
for sent in x:
x_new.append([corpus[preprocess_string(word)] for word in sent.lower().split() if preprocess_string(word) in corpus.keys()])
return np.array(x_new), np.array(y)
corpus = create_corpus(x_train)
print(f'Length of vocabulary is {len(corpus)}')
x_train, y_train = preprocess(x_train, y_train, corpus)
x_val, y_val = preprocess(x_val, y_val, corpus)
x_test, y_test = preprocess(x_test, y_test, corpus)
#analysis of word count in reviews
rev_len = [len(i) for i in x_train]
pd.Series(rev_len).hist()
plt.show()
pd.Series(rev_len).describe()
```
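The `create_corpus` logic above (count the non-stop-words, keep the most frequent, number them from 1) can be traced on a toy example. The hard-coded stop-word set below is a stand-in for NLTK's, and the sentences are illustrative only:

```python
from collections import Counter

toy_sents = ["the movie was great great fun", "the plot was thin"]
toy_stops = {"the", "was"}  # stand-in for stopwords.words('english')

words = [w for s in toy_sents for w in s.lower().split() if w not in toy_stops]
counts = Counter(words)
top = sorted(counts, key=counts.get, reverse=True)[:3]  # keep the 3 most frequent
toy_corpus = {w: i + 1 for i, w in enumerate(top)}  # ids start at 1; 0 is reserved for padding

assert toy_corpus["great"] == 1  # the most frequent word gets the smallest id
```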
From the above data, the maximum review length is 653 words and 75% of the reviews are shorter than 85 words. Since reviews longer than 300 words are rare (see the histogram), we will cap the review length at 300.
```
def padding_(sentences, seq_len):
""" to tackle variable length of sequences: this function prepads reviews with 0 for reviews whose length is less than seq_len, and truncates reviews with length greater than
seq_len by removing words after seq_len in review"""
features = np.zeros((len(sentences), seq_len),dtype=int)
for ii, review in enumerate(sentences):
diff = seq_len - len(review)
if diff > 0:
features[ii,diff:] = np.array(review)
else:
features[ii] = np.array(review[:seq_len])
return features
# maximum review length (300)
x_train_pad = padding_(x_train,300)
x_val_pad = padding_(x_val,300)
x_test_pad = padding_(x_test, 300)
is_cuda = torch.cuda.is_available()
# use GPU if available
if is_cuda:
device = torch.device("cuda")
print("GPU is available")
else:
device = torch.device("cpu")
print("GPU not available, CPU used")
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(x_train_pad), torch.from_numpy(y_train))
valid_data = TensorDataset(torch.from_numpy(x_val_pad), torch.from_numpy(y_val))
batch_size = 50
# dataloaders
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
# one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)
print('Sample input size: ', sample_x.size())
print('Sample input: \n', sample_x)
print('Sample output: \n', sample_y)
```
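The pre-padding scheme used above (zeros on the left for short reviews, truncation at `seq_len` for long ones) in a self-contained sketch:

```python
import numpy as np

def prepad(sentences, seq_len):
    # left-pad short sequences with 0, truncate long ones to seq_len
    out = np.zeros((len(sentences), seq_len), dtype=int)
    for i, review in enumerate(sentences):
        diff = seq_len - len(review)
        if diff > 0:
            out[i, diff:] = review
        else:
            out[i] = review[:seq_len]
    return out

padded = prepad([[5, 6], [1, 2, 3, 4, 5, 6]], 4)
assert padded.tolist() == [[0, 0, 5, 6], [1, 2, 3, 4]]
```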
<img src="https://lh3.googleusercontent.com/proxy/JeOqyqJyqifJLvX8Wet6hHNIOZ_wui2xfIkYJsK6fuE13cNJlZxxqe6vZcEe__kIagkOFolHZtyZ150yayUHpBkekTAwdMUg1MNrmVFbd1eumvusUs1zLALKLB5AA3fK" alt="RNN LSTM GRU" width="700" height="250">
Image Credit- [Article](http://dprogrammer.org/rnn-lstm-gru)
#RNN
```
class RNN(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim,output_dim):
super(RNN,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.rnn = nn.RNN(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(self.hidden_dim, output_dim)
self.sig = nn.Sigmoid()
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
rnn_out, hidden = self.rnn(embeds, hidden)
rnn_out = rnn_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(rnn_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
return h0
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = RNN(no_layers,vocab_size,hidden_dim,embedding_dim,output_dim)
model.to(device)
print(model)
lr=0.001 #learning rate
criterion = nn.BCELoss() #Binary Cross Entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
def acc(pred,label):
""" function to calculate accuracy """
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for these many consecutive epochs
cnt = 0 #counter for checking patience level
prev_epoch_acc = 0.0 #initialize previous validation accuracy for the early stopping condition
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader: #training in batches
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
inputs, labels = inputs.to(device), labels.to(device)
val_h = model.init_hidden(batch_size)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
def predict_text(text):
word_seq = np.array([corpus[preprocess_string(word)] for word in text.split()
if preprocess_string(word) in corpus.keys()])
word_seq = np.expand_dims(word_seq,axis=0)
pad = torch.from_numpy(padding_(word_seq,300))
input = pad.to(device)
batch_size = 1
h = model.init_hidden(batch_size)
#h = tuple([each.data for each in h])
output, h = model(input, h)
return(output.item())
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
```
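The early-stopping bookkeeping in the training loop above (reset the counter when validation accuracy improves, stop after `patience` stagnant epochs) can be isolated and checked on its own. This is a sketch of the same logic, not the notebook's code:

```python
def stop_epoch(val_accs, patience):
    # return the 1-based epoch at which training stops, or None if it never does
    cnt, prev = 0, 0.0
    for epoch, acc in enumerate(val_accs, start=1):
        cnt = 0 if acc > prev else cnt + 1
        prev = acc
        if cnt == patience:
            return epoch
    return None

assert stop_epoch([0.6, 0.7, 0.69, 0.68, 0.67], patience=3) == 5
assert stop_epoch([0.6, 0.7, 0.8], patience=3) is None
```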
#GRU
```
class GRU(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim,output_dim=1):
super(GRU,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.gru = nn.GRU(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True)
self.dropout = nn.Dropout(0.3)
self.fc = nn.Linear(self.hidden_dim, output_dim)
self.sig = nn.Sigmoid()
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
gru_out, hidden = self.gru(embeds, hidden)
gru_out = gru_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(gru_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
return h0
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = GRU(no_layers,vocab_size,hidden_dim,embedding_dim)
#moving to gpu
model.to(device)
print(model)
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# function to calculate accuracy
def acc(pred,label):
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for these many consecutive epochs
cnt = 0 #counter for checking patience level
prev_epoch_acc = 0.0 #initialize previous validation accuracy for the early stopping condition
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
inputs, labels = inputs.to(device), labels.to(device)
val_h = model.init_hidden(batch_size)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
```
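The last cell above converts the sigmoid output into a label plus a class-conditional probability (for a negative call, the reported probability is `1 - p`). The helper below is an illustrative sketch of that step:

```python
def interpret(prob):
    # threshold at 0.5 and report the probability of the predicted class
    status = "positive" if prob > 0.5 else "negative"
    return status, prob if status == "positive" else 1 - prob

assert interpret(0.75) == ("positive", 0.75)
assert interpret(0.25) == ("negative", 0.75)
```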
#LSTM
```
class LSTM(nn.Module):
def __init__(self,no_layers,vocab_size,hidden_dim,embedding_dim,output_dim=1):
super(LSTM,self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.no_layers = no_layers
self.vocab_size = vocab_size
self.embedding = nn.Embedding(vocab_size, embedding_dim) #embedding layer
self.lstm = nn.LSTM(input_size=embedding_dim,hidden_size=self.hidden_dim,
num_layers=no_layers, batch_first=True) #lstm layer
self.dropout = nn.Dropout(0.3) # dropout layer
self.fc = nn.Linear(self.hidden_dim, output_dim) #fully connected layer
self.sig = nn.Sigmoid() #sigmoid activation
def forward(self,x,hidden):
batch_size = x.size(0)
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
out = self.dropout(lstm_out)
out = self.fc(out)
sig_out = self.sig(out)
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1]
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state and cell state for LSTM '''
h0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
c0 = torch.zeros((self.no_layers,batch_size,self.hidden_dim)).to(device)
hidden = (h0,c0)
return hidden
no_layers = 2
vocab_size = len(corpus) + 1 #extra 1 for padding
embedding_dim = 64
output_dim = 1
hidden_dim = 256
model = LSTM(no_layers,vocab_size,hidden_dim,embedding_dim)
model.to(device)
print(model)
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# function to calculate accuracy
def acc(pred,label):
pred = torch.round(pred.squeeze())
return torch.sum(pred == label.squeeze()).item()
patience_early_stopping = 3 #training will stop if model performance does not improve for these many consecutive epochs
cnt = 0 #counter for checking patience level
prev_epoch_acc = 0.0 #initialize previous validation accuracy for the early stopping condition
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode = 'max', factor = 0.2, patience = 1)
epochs = 10
for epoch in range(epochs):
train_acc = 0.0
model.train()
for inputs, labels in train_loader:
inputs, labels = inputs.to(device), labels.to(device)
h = model.init_hidden(batch_size)
model.zero_grad()
output,h = model(inputs,h)
loss = criterion(output.squeeze(), labels.float())
loss.backward()
accuracy = acc(output,labels)
train_acc += accuracy
optimizer.step()
val_acc = 0.0
model.eval()
for inputs, labels in valid_loader:
val_h = model.init_hidden(batch_size)
inputs, labels = inputs.to(device), labels.to(device)
output, val_h = model(inputs, val_h)
accuracy = acc(output,labels)
val_acc += accuracy
epoch_train_acc = train_acc/len(train_loader.dataset)
epoch_val_acc = val_acc/len(valid_loader.dataset)
scheduler.step(epoch_val_acc)
if epoch_val_acc > prev_epoch_acc: #check if val accuracy for current epoch has improved compared to previous epoch
cnt = 0 #if accuracy improves, reset counter to 0
else: #otherwise increment current counter
cnt += 1
prev_epoch_acc = epoch_val_acc
print(f'Epoch {epoch+1}')
print(f'train_accuracy : {epoch_train_acc*100} val_accuracy : {epoch_val_acc*100}')
if cnt == patience_early_stopping:
print(f"early stopping as test accuracy did not improve for {patience_early_stopping} consecutive epochs")
break
index = 30
print(df_test['text'][index])
print('='*70)
print(f'Actual sentiment is : {df_test["label"][index]}')
print('='*70)
prob = predict_text(df_test['text'][index])
status = "positive" if prob > 0.5 else "negative"
prob = (1 - prob) if status == "negative" else prob
print(f'Predicted sentiment is {status} with a probability of {prob}')
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 10)
df = pd.read_csv('data/final_cohort.csv')
df.head()
```
## Make Timeline in Python
```
names = df['knumber']
dates = df['Date']
dates
min(dates)
max(dates)
dates = [datetime.strptime(x, "%m/%d/%Y") for x in dates]
levels = np.tile([-5, 5, -3, 3, -1, 1],
int(np.ceil(len(dates)/6)))[:len(dates)]
fig, ax = plt.subplots(figsize=(8.8, 4), constrained_layout=True)
ax.set(title="Test Timeline")
ax.vlines(dates, 0, levels, color="tab:red") # The vertical stems.
ax.plot(dates, np.zeros_like(dates), "-o",
color="k", markerfacecolor="w") # Baseline and markers on it.
# annotate lines
# for d, l, r in zip(dates, levels, names):
# ax.annotate(r, xy=(d, l),
# xytext=(-3, np.sign(l)*3), textcoords="offset points",
# horizontalalignment="right",
# verticalalignment="bottom" if l > 0 else "top")
# format xaxis with 4 month intervals
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=24))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b %Y"))
plt.setp(ax.get_xticklabels(), rotation=30, ha="right")
# remove y axis and spines
ax.yaxis.set_visible(False)
#ax.spines[["left", "top", "right"]].set_visible(False)
ax.margins(y=0.1)
plt.show()
df['month_year'] = df['Date'].apply(lambda x: x.split('/')[0] + '/15/' + x.split('/')[2])
def quarter_parse(x):
""" Maps 'MM/15/YYYY' to the middle month of its calendar quarter """
if x.split('/')[0] in ['01', '02', '03']:
return '02/15/' + x.split('/')[2]
if x.split('/')[0] in ['04', '05', '06']:
return '05/15/' + x.split('/')[2]
if x.split('/')[0] in ['07', '08', '09']:
return '08/15/' + x.split('/')[2]
if x.split('/')[0] in ['10', '11', '12']:
return '11/15/' + x.split('/')[2]
df['quarter_year'] = df['month_year'].apply(quarter_parse)
def specialty_sort(x):
if x == 'Radiology':
return 0
elif x == 'Cardiovascular':
return 1
else:
return 2
def path_sort(x):
if x == '510K':
return 0
elif x == 'DEN':
return 1
else:
return 2
df['specialty_sort'] = df['Specialty'].apply(specialty_sort)
df['path_sort'] = df['path'].apply(path_sort)
master_df = pd.DataFrame(columns = df.columns)
for qrt_yr in set(df['quarter_year'].values):
tempdf = df[df['quarter_year'] == qrt_yr]
tempdf = tempdf.sort_values(by = 'specialty_sort').sort_values(by='path_sort')
tempdf['height'] = range(len(tempdf))
master_df = pd.concat([master_df, tempdf])
master_df[['quarter_year', 'Name', 'Specialty', 'path', 'height']].to_csv('data/timeline_quarter_data.csv')
master_df = pd.DataFrame(columns = df.columns)
for qrt_yr in set(df['month_year'].values):
tempdf = df[df['month_year'] == qrt_yr]
tempdf = tempdf.sort_values(by = 'specialty_sort').sort_values(by='path_sort')
tempdf['height'] = range(len(tempdf))
master_df = pd.concat([master_df, tempdf])
master_df[['month_year', 'Name', 'Specialty', 'path', 'height']].to_csv('data/timeline_month_data.csv')
tempdf
master_df
vdf = df[['quarter_year', 'Specialty']].value_counts()
vdf.unstack('Specialty')
vdf.unstack('Specialty').fillna(0).to_csv('data/timeline_bar_data.csv')
vdf
df[['Date', 'Name', 'Specialty', 'path']].to_csv('data/timeline_data.csv')
df['path'].value_counts()
```
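The `quarter_parse` helper above maps each `MM/15/YYYY` string to the middle month of its calendar quarter. A self-contained sketch of the same mapping (the zero-padded output months are an assumption made here for consistency):

```python
def to_quarter_mid(month_year):
    # 'MM/15/YYYY' -> middle month of its quarter, e.g. Jan-Mar -> Feb
    month, _, year = month_year.split('/')
    mid = {1: '02', 2: '05', 3: '08', 4: '11'}[(int(month) - 1) // 3 + 1]
    return f'{mid}/15/{year}'

assert to_quarter_mid('01/15/2020') == '02/15/2020'
assert to_quarter_mid('12/15/2019') == '11/15/2019'
```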
## State Map
```
df['STATE'].value_counts()
state_dict = df['STATE'].value_counts().to_dict()
```
```
import numpy as np
import pandas as pd
import json
import requests
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import seaborn as sns
```
## Loading Labeled Dataset with hate tweets IDs
```
df = pd.read_csv('../data/hate_add.csv')
df = df[(df['racism']=='racism') | (df['racism']=='sexism')]
df.head()
df.info()
df.racism.value_counts()
df.columns = ['id','label']
def group_list(l,size = 100):
"""
Generate batches of `size` (default 100) ids each
Returns a list of strings of comma-separated ids
"""
n_l =[]
idx = 0
while idx < len(l):
n_l.append(
','.join([str(i) for i in l[idx:idx+size]])
)
idx += size
return n_l
import config
def tweets_request(tweets_ids):
"""
Make requests to the Twitter API
"""
df_lst = []
for batch in tweets_ids:
url = "https://api.twitter.com/2/tweets?ids={}&tweet.fields=created_at&expansions=author_id&user.fields=created_at".format(batch)
payload={}
headers = {'Authorization': 'Bearer ' + config.bearer_token,
'Cookie': 'personalization_id="v1_hzpv7qXpjB6CteyAHDWYQQ=="; guest_id=v1%3A161498381400435837'}
r = requests.request("GET", url, headers=headers, data=payload)
data = r.json()
if 'data' in data.keys():
df_lst.append(pd.DataFrame(data['data']))
return pd.concat(df_lst)
```
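`group_list` above batches tweet ids into comma-joined strings of at most 100 ids, so that each request stays within the endpoint's per-call limit. The same batching, shown on a toy list with a batch size of 2:

```python
def batch_ids(ids, size=100):
    # join ids into comma-separated strings, at most `size` ids per string
    return [','.join(str(i) for i in ids[k:k + size]) for k in range(0, len(ids), size)]

assert batch_ids([1, 2, 3, 4, 5], size=2) == ['1,2', '3,4', '5']
```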
## Getting actual tweets text with API requests
```
racism_sex_hate_id = group_list(list(df.id))
# df_rac_sex_hate = tweets_request(racism_sex_hate_id)
df_rac_sex_hate = pd.read_csv('../data/df_ras_sex_hate.csv')
df_rac_sex_hate = df_rac_sex_hate.drop(columns=['Unnamed: 0', 'id', 'author_id', 'created_at'])
df_rac_sex_hate['class'] = 1
df_rac_sex_hate.head()
df_rac_sex_hate.shape
```
## Loading Labeled Dataset with tweets
```
df_l = pd.read_csv("../data/labeled_data.csv")
df_l.head()
print(df_l['class'].value_counts(normalize=True))
# Class Imbalance
fig, ax = plt.subplots(figsize=(10,6))
ax = sns.countplot(x=df_l['class'], palette='Set2')
ax.set_title('Amount of Tweets Per Label',fontsize = 20)
ax.set_xlabel('Type of Tweet',fontsize = 15)
ax.set_ylabel('Count',fontsize = 15)
ax.set_xticklabels(['Hate_speech','Offensive_language', 'Neither'],fontsize = 13)
total = float(len(df_l)) # one tweet per row
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}'.format(height/total * 100) + '%',
ha="center")
# class 0 - hate tweets
# class 1 - offensive_language tweets
# class 2 - neither tweets
df_l['class'].value_counts()
# considering only class 0 (hate tweets) and class 2 (neither tweets) as binary classification problem
# updating neither tweets as not hate speech
df_l = df_l.drop(columns=['Unnamed: 0', 'count', 'hate_speech', 'offensive_language', 'neither'])
df_l = df_l[(df_l['class']==0) | (df_l['class']==2)]
df_l['class'] = df_l['class'].map(lambda x : 0 if x == 2 else 1)
df_l.rename(columns={'tweet':'text'}, inplace= True)
```
## Let's combine 2 Data Frames with Labeled Classes of hate speech and not hate speech
```
df_combined = pd.concat([df_rac_sex_hate,df_l])
print(df_combined['class'].value_counts(normalize=True))
# Class Imbalance
fig, ax = plt.subplots(figsize=(10,6))
ax = sns.countplot(x=df_combined['class'], palette='Set2')
ax.set_title('Amount of Tweets Per Label',fontsize = 20)
ax.set_xlabel('Type of Tweet',fontsize = 15)
ax.set_ylabel('Count',fontsize = 15)
ax.set_xticklabels(['Not Hate Speech','Hate Speech'],fontsize = 13)
total = float(len(df_combined)) # one tweet per row
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}'.format(height/total * 100) + '%',
ha="center")
```
### Saving combined Data to CSV file
```
df_combined.to_csv('../data/balanced_data_combined.csv')
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import numpy as np
from numpy.random import default_rng
import random
import collections
import re
import tensorflow as tf
from tqdm import tqdm
max_seq_length_encoder = 512
max_seq_length_decoder = 128
masked_lm_prob = 0.2
max_predictions_per_seq = int(masked_lm_prob * max_seq_length_encoder)
do_whole_word_mask = True
EOS_ID = 1
MaskedLmInstance = collections.namedtuple(
'MaskedLmInstance', ['index', 'label']
)
class TrainingInstance(object):
"""A single training instance (sentence pair)."""
def __init__(self, tokens, tokens_y, masked_lm_positions, masked_lm_labels):
self.tokens = tokens
self.tokens_y = tokens_y
self.masked_lm_positions = masked_lm_positions
self.masked_lm_labels = masked_lm_labels
def sliding(strings, n = 5):
results = []
for i in range(len(strings) - n):
results.append(strings[i : i + n])
return results
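# A quick check of the window count above: with a 7-sentence document and n = 5,
# the loop runs for i in range(7 - 5) and therefore yields len - n = 2 windows
# (note that the final possible window, starting at index 2, is not produced).
_demo = [list(range(7))[i:i + 5] for i in range(7 - 5)]
assert _demo == [[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]]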
def _get_ngrams(n, text):
ngram_set = set()
text_length = len(text)
max_index_ngram_start = text_length - n
for i in range(max_index_ngram_start + 1):
ngram_set.add(tuple(text[i : i + n]))
return ngram_set
def _get_word_ngrams(n, sentences):
assert len(sentences) > 0
assert n > 0
words = sum(sentences, [])
return _get_ngrams(n, words)
def cal_rouge(evaluated_ngrams, reference_ngrams):
reference_count = len(reference_ngrams)
evaluated_count = len(evaluated_ngrams)
overlapping_ngrams = evaluated_ngrams.intersection(reference_ngrams)
overlapping_count = len(overlapping_ngrams)
if evaluated_count == 0:
precision = 0.0
else:
precision = overlapping_count / evaluated_count
if reference_count == 0:
recall = 0.0
else:
recall = overlapping_count / reference_count
f1_score = 2.0 * ((precision * recall) / (precision + recall + 1e-8))
return {'f': f1_score, 'p': precision, 'r': recall}
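# Sanity check of the F1 computation above on toy unigram sets: two of the
# three evaluated unigrams appear among the four reference unigrams, so
# precision = 2/3, recall = 2/4 and F1 = 2pr/(p + r) = 4/7.
_eval = {('a',), ('b',), ('c',)}
_ref = {('a',), ('b',), ('d',), ('e',)}
_overlap = len(_eval & _ref)
_p = _overlap / len(_eval)
_r = _overlap / len(_ref)
assert abs(2.0 * _p * _r / (_p + _r + 1e-8) - 4 / 7) < 1e-6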
def _rouge_clean(s):
return re.sub(r'[^a-zA-Z0-9 ]', '', s)
def get_rouges(strings, n = 1):
rouges = []
for i in range(len(strings)):
abstract = strings[i]
doc_sent_list = [strings[k] for k in range(len(strings)) if k != i]
sents = _rouge_clean(' '.join(doc_sent_list)).split()
abstract = _rouge_clean(abstract).split()
evaluated_1grams = _get_word_ngrams(n, [sents])
reference_1grams = _get_word_ngrams(n, [abstract])
rouges.append(cal_rouge(evaluated_1grams, reference_1grams)['f'])
return rouges
# Principal: select the top-m scored sentences according to importance.
# As a proxy for importance we compute ROUGE1-F1 (Lin, 2004) between the sentence and the rest of the document.
def get_rouge(strings, top_k = 1, minlen = 4):
rouges = get_rouges(strings)
s = np.argsort(rouges)[::-1]
s = [i for i in s if len(strings[i].split()) >= minlen]
return s[:top_k]
# Random: uniformly select m sentences at random.
def get_random(strings, top_k = 1):
return rng.choice(len(strings), size = top_k, replace = False)
# Lead: select the first m sentences.
def get_lead(strings, top_k = 1):
return [i for i in range(top_k)]
def combine(l):
r = []
for s in l:
if s[-1] != '.':
if s in ['[MASK]', '[MASK2]']:
e = ' .'
else:
e = '.'
s = s + e
r.append(s)
return ' '.join(r)
def is_number_regex(s):
if re.match(r'^\d+?\.\d+?$', s) is None:
return s.isdigit()
return True
def reject(token):
t = token.replace('##', '')
if is_number_regex(t):
return True
if t.startswith('RM'):
return True
if token in '!{<>}:;.,"\'':
return True
return False
def create_masked_lm_predictions(
tokens,
vocab_words,
rng,
):
"""Creates the predictions for the masked LM objective."""
cand_indexes = []
for (i, token) in enumerate(tokens):
if token == '[CLS]' or token == '[SEP]' or token == '[MASK2]':
continue
if reject(token):
continue
if (
do_whole_word_mask
and len(cand_indexes) >= 1
and token.startswith('##')
):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
rng.shuffle(cand_indexes)
output_tokens = list(tokens)
num_to_predict = min(
max_predictions_per_seq,
max(1, int(round(len(tokens) * masked_lm_prob))),
)
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_token = None
# 80% of the time, replace with [MASK]
if rng.random() < 0.8:
masked_token = '[MASK]'
else:
# 10% of the time, keep original
if rng.random() < 0.5:
masked_token = tokens[index]
# 10% of the time, replace with random word
else:
masked_token = vocab_words[
np.random.randint(0, len(vocab_words) - 1)
]
output_tokens[index] = masked_token
masked_lms.append(
MaskedLmInstance(index = index, label = tokens[index])
)
assert len(masked_lms) <= num_to_predict
masked_lms = sorted(masked_lms, key = lambda x: x.index)
masked_lm_positions = []
masked_lm_labels = []
for p in masked_lms:
masked_lm_positions.append(p.index)
masked_lm_labels.append(p.label)
return (output_tokens, masked_lm_positions, masked_lm_labels)
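# The nested draws above implement the usual 80/10/10 rule per masked position:
# 80% -> '[MASK]', 10% -> keep the original token, 10% -> a random vocab word.
# The two draws are equivalent to a single draw against thresholds 0.8 and 0.9;
# a quick empirical check of the '[MASK]' share (illustration only):
_rng = random.Random(0)
_n_mask = sum(1 for _ in range(10000) if _rng.random() < 0.8)
assert 0.77 < _n_mask / 10000 < 0.83  # roughly 80% of positions get '[MASK]'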
def get_feature(x, y, tokenizer, vocab_words, rng, dedup_factor = 5, **kwargs):
tokens = tokenizer.tokenize(x)
if len(tokens) > (max_seq_length_encoder - 2):
tokens = tokens[:max_seq_length_encoder - 2]
if '[MASK2]' not in tokens:
return []
tokens = ['[CLS]'] + tokens + ['[SEP]']
tokens_y = tokenizer.tokenize(y)
if len(tokens_y) > (max_seq_length_decoder - 1):
tokens_y = tokens_y[:max_seq_length_decoder - 1]
tokens_y = tokenizer.convert_tokens_to_ids(tokens_y)
tokens_y = tokens_y + [EOS_ID]
results = []
for i in range(dedup_factor):
output_tokens, masked_lm_positions, masked_lm_labels = create_masked_lm_predictions(
tokens, vocab_words, rng, **kwargs
)
output_tokens = tokenizer.convert_tokens_to_ids(output_tokens)
masked_lm_labels = tokenizer.convert_tokens_to_ids(masked_lm_labels)
t = TrainingInstance(
output_tokens, tokens_y, masked_lm_positions, masked_lm_labels
)
results.append(t)
return results
def group_doc(data):
results, result = [], []
for i in data:
if not len(i) and len(result):
results.append(result)
result = []
else:
result.append(i)
if len(result):
results.append(result)
return results
def create_int_feature(values):
feature = tf.train.Feature(
int64_list = tf.train.Int64List(value = list(values))
)
return feature
def create_float_feature(values):
feature = tf.train.Feature(
float_list = tf.train.FloatList(value = list(values))
)
return feature
def write_instance_to_example_file(
instances,
output_file
):
writer = tf.python_io.TFRecordWriter(output_file)
for (inst_index, instance) in enumerate(instances):
input_ids = list(instance.tokens)
target_ids = list(instance.tokens_y)
while len(input_ids) < max_seq_length_encoder:
input_ids.append(0)
target_ids.append(0)
masked_lm_positions = list(instance.masked_lm_positions)
masked_lm_ids = list(instance.masked_lm_labels)
masked_lm_weights = [1.0] * len(masked_lm_ids)
while len(masked_lm_positions) < max_predictions_per_seq:
masked_lm_positions.append(0)
masked_lm_ids.append(0)
masked_lm_weights.append(0.0)
features = collections.OrderedDict()
features['input_ids'] = create_int_feature(input_ids)
features['targets_ids'] = create_int_feature(target_ids)
features['masked_lm_positions'] = create_int_feature(
masked_lm_positions
)
features['masked_lm_ids'] = create_int_feature(masked_lm_ids)
features['masked_lm_weights'] = create_float_feature(masked_lm_weights)
tf_example = tf.train.Example(
features = tf.train.Features(feature = features)
)
writer.write(tf_example.SerializeToString())
writer.close()
tf.logging.info('Wrote %d total instances', len(instances))
def process_documents(
file,
output_file,
tokenizer,
min_slide = 5,
max_slide = 13,
dedup_mask = 2,
):
with open(file) as fopen:
data = fopen.read().split('\n')
rng = default_rng()
vocab_words = list(tokenizer.vocab.keys())
grouped = group_doc(data)
results = []
for s in range(min_slide, max_slide, 1):
for r in tqdm(grouped):
slided = sliding(r, s)
X, Y = [], []
for i in range(len(slided)):
try:
strings = slided[i]
rouge_ = get_rouge(strings)
y = strings[rouge_[0]]
strings[rouge_[0]] = '[MASK2]'
x = combine(strings)
result = get_feature(
x,
y,
tokenizer,
vocab_words,
rng,
dedup_factor = dedup_mask,
)
results.extend(result)
except:
pass
write_instance_to_example_file(results, output_file)
import tokenization
tokenizer = tokenization.FullTokenizer(vocab_file = 'pegasus.wordpiece', do_lower_case=False)
file = 'dumping-cleaned-news.txt'
output_file = 'news.tfrecord'
min_slide = 5
max_slide = 13
dedup_mask = 5
with open(file) as fopen:
data = fopen.read().split('\n')
rng = default_rng()
vocab_words = list(tokenizer.vocab.keys())
grouped = group_doc(data)
results = []
for s in range(min_slide, max_slide, 1):
for r in tqdm(grouped[:100]):
slided = sliding(r, s)
X, Y = [], []
for i in range(len(slided)):
try:
strings = slided[i]
rouge_ = get_rouge(strings)
y = strings[rouge_[0]]
strings[rouge_[0]] = '[MASK2]'
x = combine(strings)
result = get_feature(
x,
y,
tokenizer,
vocab_words,
rng,
dedup_factor = dedup_mask,
)
results.extend(result)
except:
pass
write_instance_to_example_file(results, output_file)
!rm {output_file}
```
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
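As a tiny illustration (the array values here are made up for demonstration), `reshape` unrolls the pixels in row-major order, and passing `-1` lets numpy infer that dimension's size:

```python
import numpy as np

# A made-up 2x2 "image" with 3 color channels, shape (2, 2, 3)
img = np.arange(12).reshape(2, 2, 3)
vec = img.reshape(-1, 1)   # unroll into a column vector; -1 infers 2*2*3 = 12
print(vec.shape)           # (12, 1)
```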
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, axis=1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
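As a small illustration (the arrays here are arbitrary), numpy "stretches" the smaller array across the larger one whenever their shapes are compatible:

```python
import numpy as np

row = np.array([1, 2, 3])        # shape (3,)
col = np.array([[10], [20]])     # shape (2, 1)
# Broadcasting expands both operands to shape (2, 3) before adding
print(row + col)
# [[11 12 13]
#  [21 22 23]]
```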
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
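A side note beyond what the graded function requires: on large inputs `np.exp` can overflow, so a common trick is to subtract each row's maximum before exponentiating. The shift cancels in the ratio, so the result is mathematically unchanged:

```python
import numpy as np

def softmax_stable(x):
    # Shifting by the row max keeps np.exp from overflowing;
    # the constant cancels when dividing by the row sum.
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

x = np.array([[1000., 1000., 0.]])
print(softmax_stable(x))  # no overflow; each row still sums to 1
```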
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
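A quick contrast between the two, using arbitrary small vectors:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a * b)              # element-wise: [ 4 10 18]
print(np.multiply(a, b))  # same as a * b
print(np.dot(a, b))       # inner product: 1*4 + 2*5 + 3*6 = 32
```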
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(yhat - y))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
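As a quick sanity check of the expected value above, the per-element absolute errors are 0.1, 0.2, 0.1, 0.6 and 0.1:

```python
import numpy as np

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print(np.sum(np.abs(y - yhat)))  # ≈ 1.1
```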
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(y - yhat, y - yhat)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
```
import os, json, sys, time, random
import numpy as np
import torch
from easydict import EasyDict
from math import floor
from steves_utils.vanilla_train_eval_test_jig import Vanilla_Train_Eval_Test_Jig
from steves_utils.torch_utils import get_dataset_metrics, independent_accuracy_assesment
from steves_models.configurable_vanilla import Configurable_Vanilla
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.lazy_map import Lazy_Map
from steves_utils.sequence_aggregator import Sequence_Aggregator
from steves_utils.stratified_dataset.traditional_accessor import Traditional_Accessor_Factory
from steves_utils.cnn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.torch_utils import (
confusion_by_domain_over_dataloader,
independent_accuracy_assesment
)
from steves_utils.utils_v2 import (
per_domain_accuracy_from_confusion,
get_datasets_base_path
)
# from steves_utils.ptn_do_report import TBD
required_parameters = {
"experiment_name",
"lr",
"device",
"dataset_seed",
"seed",
"labels",
"domains_target",
"domains_source",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"batch_size",
"n_epoch",
"patience",
"criteria_for_best",
"normalize_source",
"normalize_target",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"pickle_name_source",
"pickle_name_target",
"torch_default_dtype",
}
from steves_utils.ORACLE.utils_v2 import (
ALL_SERIAL_NUMBERS,
ALL_DISTANCES_FEET_NARROWED,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "MANUAL CORES CNN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["seed"] = 1337
standalone_parameters["labels"] = ALL_SERIAL_NUMBERS
standalone_parameters["domains_source"] = [8,32,50]
standalone_parameters["domains_target"] = [14,20,26,38,44,]
standalone_parameters["num_examples_per_domain_per_label_source"]=-1
standalone_parameters["num_examples_per_domain_per_label_target"]=-1
standalone_parameters["pickle_name_source"] = "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["pickle_name_target"] = "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["batch_size"]=128
standalone_parameters["n_epoch"] = 3
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["normalize_source"] = False
standalone_parameters["normalize_target"] = False
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": len(standalone_parameters["labels"])}},
]
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "cnn_3:oracle.run1-oracle.run2",
"domains_source": [8, 32, 50, 14, 20, 26, 38, 44],
"domains_target": [8, 32, 50, 14, 20, 26, 38, 44],
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"pickle_name_source": "oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"pickle_name_target": "oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"device": "cuda",
"lr": 0.0001,
"batch_size": 128,
"normalize_source": False,
"normalize_target": False,
"num_examples_per_domain_per_label_source": -1,
"num_examples_per_domain_per_label_target": -1,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 16}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if 'parameters' not in locals() and 'parameters' not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
def wrap_in_dataloader(p, ds):
return torch.utils.data.DataLoader(
ds,
batch_size=p.batch_size,
shuffle=True,
num_workers=1,
persistent_workers=True,
prefetch_factor=50,
pin_memory=True
)
taf_source = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_source),
seed=p.dataset_seed
)
train_original_source, val_original_source, test_original_source = \
taf_source.get_train(), taf_source.get_val(), taf_source.get_test()
taf_target = Traditional_Accessor_Factory(
labels=p.labels,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_target),
seed=p.dataset_seed
)
train_original_target, val_original_target, test_original_target = \
taf_target.get_train(), taf_target.get_val(), taf_target.get_test()
# For the CNN we only use X and Y, and we only train on the source.
# Properly form the data using a transform lambda and Lazy_Map. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[:2] # Strip the tuple to just (x,y)
train_processed_source = wrap_in_dataloader(
p,
Lazy_Map(train_original_source, transform_lambda)
)
val_processed_source = wrap_in_dataloader(
p,
Lazy_Map(val_original_source, transform_lambda)
)
test_processed_source = wrap_in_dataloader(
p,
Lazy_Map(test_original_source, transform_lambda)
)
train_processed_target = wrap_in_dataloader(
p,
Lazy_Map(train_original_target, transform_lambda)
)
val_processed_target = wrap_in_dataloader(
p,
Lazy_Map(val_original_target, transform_lambda)
)
test_processed_target = wrap_in_dataloader(
p,
Lazy_Map(test_original_target, transform_lambda)
)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
ep = next(iter(test_processed_target))
ep[0].dtype
model = Configurable_Vanilla(
x_net=x_net,
label_loss_object=torch.nn.NLLLoss(),
learning_rate=p.lr
)
jig = Vanilla_Train_Eval_Test_Jig(
model=model,
path_to_best_model=p.BEST_MODEL_PATH,
device=p.device,
label_loss_object=torch.nn.NLLLoss(),
)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
patience=p.patience,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
criteria_for_best=p.criteria_for_best
)
total_experiment_time_secs = time.time() - start_time_secs
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = wrap_in_dataloader(p, Sequence_Aggregator((datasets.source.original.val, datasets.target.original.val)))
confusion = confusion_by_domain_over_dataloader(model, p.device, val_dl, forward_uses_domain=False)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
###################################
# Write out the results
###################################
experiment = {
"experiment_name": p.experiment_name,
"parameters": p,
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "cnn"),
}
get_loss_curve(experiment)
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
| github_jupyter |
# Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
# Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models.
```
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
```
## Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers.
```
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
```
The simplest thing to do is simply save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`.
```
torch.save(model.state_dict(), 'checkpoint.pth')
```
Then we can load the state dict with `torch.load`.
```
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
```
And to load the state dict into the network, you do `model.load_state_dict(state_dict)`.
```
model.load_state_dict(state_dict)
```
Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
```
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
```
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.
```
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
```
Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
```
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('checkpoint.pth')
print(model)
```
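The saving side can be wrapped in a helper the same way. Below is a minimal sketch (my own addition, not code from `fc_model`): it presumes, as in the checkpoint dictionary above, that the model's `hidden_layers` attribute is a list of layers exposing `out_features`, and it takes the input/output sizes as arguments rather than reading them off the model.

```python
import torch

def save_checkpoint(model, filepath, input_size=784, output_size=10):
    # Mirror image of load_checkpoint: bundle the architecture
    # hyperparameters together with the weights so the network can be
    # rebuilt before load_state_dict is called.
    checkpoint = {'input_size': input_size,
                  'output_size': output_size,
                  'hidden_layers': [each.out_features for each in model.hidden_layers],
                  'state_dict': model.state_dict()}
    torch.save(checkpoint, filepath)
```

With this pair of helpers, `save_checkpoint(model, 'checkpoint.pth')` followed by `load_checkpoint('checkpoint.pth')` round-trips the trained model.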
# Explore the generated data
Here we explore the data that is generated with the [generate-data.ipynb](generate-data.ipynb) notebook.
You can either run the simulations or download the data set. See [README.md](README.md) for the download link and instructions.
### Joining the separate data files of one simulation together, example:
```python
# for example if the generated files have the following names:
# 'tmp/1d_alpha_vs_B_x_000.hdf',
# 'tmp/1d_alpha_vs_B_x_001.hdf',
# 'tmp/1d_alpha_vs_B_x_002.hdf', ...
# The following line will join the files and save the result as 'data/new_name.hdf'.
df = common.combine_dfs('tmp/1d_alpha_vs_B_x_*.hdf', 'data/new_name.hdf')
```
```
import holoviews as hv
import numpy as np
import pandas as pd
import common
hv.notebook_extension()
def add_energy_gs(df):
hbar = df.hbar.unique()[0]
eV = df.eV.unique()[0]
flux_quantum_over_2pi = hbar / (2 * eV) / (eV * 1e6)
df['E'] = df['currents'].apply(np.cumsum)
df['E'] *= flux_quantum_over_2pi
df['phase_gs_arg'] = df['E'].apply(np.argmin)
df['phase_gs'] = [row['phases'][row['phase_gs_arg']] for i, row in df.iterrows()]
# Move the phase_gs from -π to +π if they are within the tolerance
tol = np.diff(df['phases'].iloc[0]).max()
df['phase_gs'] = [-row['phase_gs'] if row['phase_gs'] < -(np.pi - tol) else row['phase_gs']
for i, row in df.iterrows()]
return df
```
# Data like Figure 4 but with all combinations
```
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['E'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
hv.Dimension('g', values=df.g.unique()),
hv.Dimension('alpha', values=df.alpha.unique()),
hv.Dimension('mu', values=df.mu.unique()),
hv.Dimension('disorder', values=df.disorder.unique()),
hv.Dimension('salt', values=df.salt.unique()),
hv.Dimension('B_x', values=df.B_x.unique())]
df = pd.read_hdf('data/I_c(B_x)_mu10,20meV_disorder0,75meV_T0.1K_all_combinations_of_effects.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, with orbital and SOI
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_orbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, without orbital and SOI, Zeeman only
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['E'])
IB = hv.Curve((gr.mu, gr.current_c), kdims=['potential'], vdims=['I_c'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
hv.Dimension('g', values=df.g.unique()),
hv.Dimension('alpha', values=df.alpha.unique()),
hv.Dimension('mu', values=df.mu.unique()),
hv.Dimension('disorder', values=df.disorder.unique()),
hv.Dimension('salt', values=df.salt.unique()),
hv.Dimension('B_x', values=df.B_x.unique())]
```
First mode, no disorder, T=50mK, with orbital but no spin-orbital
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_onlyorbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
df = pd.read_hdf('data/I_c(B_x)_mu5,10,20meV_disorder0,75meV_T0.05K_orbital_SOI_Zeeman.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
# Different $T$, with or without leads, different lengths of the system
```
df2 = pd.read_hdf('data/I_c(B_x)_no_disorder_combinations_of_effects_and_geometries.hdf')
df2 = add_energy_gs(df2)
params = ['T', 'L', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
gb = df2.groupby(params)
%%opts Curve (color='k') Scatter (s=200)
def plot(T, L, orbital, g, alpha, mu, with_leads, B_x):
gr = gb.get_group((T, L, orbital, g, alpha, mu, with_leads))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['E'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('T', values=df2['T'].unique()),
hv.Dimension('L', values=df2.L.unique()),
hv.Dimension('orbital', values=df2.orbital.unique()),
hv.Dimension('g', values=df2.g.unique()),
hv.Dimension('alpha', values=df2.alpha.unique()),
hv.Dimension('mu', values=df2.mu.unique()),
hv.Dimension('with_leads', values=df2.with_leads.unique()),
hv.Dimension('B_x', values=df2.B_x.unique())]
dm = hv.DynamicMap(plot, kdims=kdims)
dm
ds = hv.Dataset(df2)
ds.to.curve(['B_x'], ['current_c'], groupby=params, dynamic=True).overlay('L').select(B_x=(0, 0.5))
params = ['T', 'B_x', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
curve = ds.to.curve(['L'], ['current_c'], groupby=params, dynamic=True)
curve.redim(current_c=dict(range=(0, None)))
```
# Rotation of field
```
%%opts Path [aspect='square']
df = pd.read_hdf('data/I_c(B_x)_mu20meV_rotation_of_field_in_xy_plane.hdf')
df = add_energy_gs(df)
df2 = common.drop_constant_columns(df)
ds = hv.Dataset(df2)
current = ds.to.curve(kdims='B', vdims='current_c', groupby=['theta', 'disorder']).redim(current_c=dict(range=(0, None)))
phase = ds.to.curve(kdims='B', vdims='phase_gs', groupby=['theta', 'disorder'])
current + phase
```
## In this notebook, images and their corresponding metadata are organized. We take note of the actual existing images and combine them with the available metadata and the scraped follower counts. After merging and dropping image duplicates, we obtain 7702 total images.
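As a toy illustration of the merge-and-deduplicate step described above (hypothetical values, not the real metadata): an inner merge keeps only the rows whose key appears in both frames, and `drop_duplicates` then removes repeated keys.

```python
import pandas as pd

# Hypothetical stand-ins for the metadata and follower-count frames
meta = pd.DataFrame({'shortcode': ['a', 'b', 'b', 'c'], 'liked_count': [10, 20, 20, 30]})
followers = pd.DataFrame({'shortcode': ['a', 'b'], 'followers': [100, 200]})

# The inner merge drops 'c' (no follower count); drop_duplicates removes the repeated 'b'
merged = meta.merge(followers, on='shortcode').drop_duplicates(subset='shortcode')
print(merged)
```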
```
import pandas as pd
import numpy as np
import os
from PIL import Image
import json
from pandas.io.json import json_normalize
import ast
IMAGE_DIR = "./images/training/resized/"
```
### Dataframe (df_imagename) of all existing images: 11181 Images
```
# Directory of museum folders
im_dirs = os.listdir(IMAGE_DIR)
folder = []
for f in im_dirs:
if f != '.DS_Store':
print(IMAGE_DIR+f)
folder = folder + os.listdir(IMAGE_DIR+f)
# df_imagename : Dataframe of existing images
df_imagename = pd.DataFrame({"filename": folder})
df_imagename.head()
print("Number of existing images: {}".format(df_imagename.filename.size))
# Takes metadata for museum and returns a dataframe
def load_metadata(file, folder):
data = json.load(file)
df = pd.DataFrame.from_dict(json_normalize(data), orient = 'columns')
df['museum'] = folder
df = df.rename(index=str, columns={"id": "insta_id"})
df.drop(labels = ['comments_disabled', 'edge_media_preview_like.count',
'edge_media_to_caption.edges', 'edge_media_to_comment.count', 'is_video', 'thumbnail_resources', 'thumbnail_src', 'urls',
'video_view_count'], axis = 1, inplace = True)
df['display_url'] = df['display_url'].str.split('/').str[-1]
return df
```
### Dataframe (df) of images in metadata: Metadata for 8362 images
```
# Load all the metadata
df = pd.DataFrame()
for folder in im_dirs:
if folder != ".DS_Store":
print("Loading {} images".format(folder))
meta_file = open("{image_dir}{folder}/{folder}.json".format(image_dir=IMAGE_DIR, folder = folder))
if df.empty:
df = load_metadata(meta_file, folder)
else:
df = pd.concat([df, load_metadata(meta_file, folder)], ignore_index = True)
columns = ['height',
'width',
'filename',
'liked_count',
'insta_id',
'user_id',
'shortcode',
'tags',
'timestamp',
'museum']
df.to_csv('./images/training/data/merged_metadata.csv', header = columns)
df.head()
print("Number of images in metadata: {}".format(df.shortcode.size))
```
## Script for scraping follower counts. Some of the shortcodes used were not valid, possibly because the images were removed.
```
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from lxml import html
import csv
def write_out_csv(data, filename_base, fieldnames):
print("Writing to output file %s.csv" % filename_base)
with open("%s.csv" % filename_base, "w") as csvfile:
fields = fieldnames
writer = csv.DictWriter(csvfile, fieldnames=fields)
writer.writeheader()
for row in data:
writer.writerow(row)
def scrape_followers(lst, output_filename):
instagram_data = []
error_sc = []
for code in lst:
url = "https://instagram.com/p/" + code
try:
browser.get(url)
elem = wait.until(
EC.element_to_be_clickable(
(By.XPATH, '//div[@class = "e1e1d"]//a[@class = "FPmhX notranslate nJAzx"]')
)
)
elem.click()
elem = wait.until(
EC.element_to_be_clickable((By.XPATH, '//div[@class = "v9tJq "]'))
)
el = browser.find_element_by_xpath("//*")
parser = html.fromstring(el.get_attribute("outerHTML"))
# print(el.get_attribute("outerHTML"))
raw_followers = parser.xpath(
'.//ul[@class="k9GMp "]/li[position()=2]//span[@class = "g47SY "]/@title'
)[0].replace(",", "")
data = {"shortcode": code, "followers": int(raw_followers)}
instagram_data.append(data)
except Exception:
error_sc.append(code)
pass
browser.close()
fields = ["shortcode", "followers"]
print(error_sc)
write_out_csv(instagram_data, "{}".format(output_filename), fields)
# Uncomment the code below to run scraping for a list of shortcodes
# Load the shortcodes of images for which the followers was not scraped
# with open('error_sc4.txt', 'r') as f:
# error_sc4 = ast.literal_eval(f.read())
# print(len(error_sc4))
# browser = webdriver.Chrome()
# wait = WebDriverWait(browser, 15)
# scrape_followers(error_sc4, "followers4")
```
### Dataframe (df_followers) of follower number for each shortcode: 8138 counts, 8068 shortcodes are unique
```
# Follower counts are merged
# lst_followers = [pd.read_csv("followers.csv"), pd.read_csv("followers2.csv"), pd.read_csv("followers3.csv"), pd.read_csv("followers4.csv")]
# df_followers = pd.concat(lst_followers, ignore_index = True)
# df_followers.to_csv("scraped_follower_counts.csv")
# Follower count df: df_followers
# Metadata df: df_images
df_followers = pd.read_csv("./images/training/data/scraped_follower_counts.csv")
df_images = pd.read_csv("./images/training/data/merged_metadata.csv")
print("Number of Follower counts", df_followers.shortcode.size)
print("Number of Follower counts based on unique shortcodes", df_followers.shortcode.unique().size)
print("Number of Images with metadata", df_images.shortcode.size)
print("Number of actual Images", df_imagename.size)
```
### Dataframe (df_final): merge metadata with scraped followers counts.
```
df_final = df_followers.merge(df_images, on = "shortcode")
print("From Metadata - Number of unique filenames: {}".format(df_images.filename.unique().size))
print("From Metadata - Number of filenames: {}".format(df_images.filename.size))
print("Metadata + Followers - Number of unique filenames : {}".format(df_final.filename.unique().size))
print("Metadata + Followers - Number of filenames: {}".format(df_final.filename.size))
df_final.drop_duplicates(subset = ["shortcode"], inplace = True)
df_final.shortcode.unique().size
df_final.shortcode.size
df_final['score'] = df_final.liked_count/df_final.followers
df_final = df_final[df_final['score'] != float('inf')]
print("min: {}, max: {}".format(min(df_final.score), max(df_final.score)))
df_final['norm_score'] = (df_final['score'] - min(df_final.score))/(max(df_final.score) - min(df_final.score))
print("normalized - min: {}, max: {}".format(min(df_final.norm_score), max(df_final.norm_score)))
df_final.head()
```
### Dataframe (df_final) -- existing images merged with metadata images:
```
df_final = df_final.merge(df_imagename, on="filename")
print(df_imagename.filename.unique().size)
print(df_imagename.filename.size)
df_final.filename.unique().size
df_final = df_final.sort_values(by = "score", ascending=False)[['filename', 'museum', 'score', 'liked_count', 'followers', 'norm_score']]
df_final.drop_duplicates(subset = "filename", inplace = True)
print("Number of existing images merged with metadata: {}".format(df_final.filename.size))
df_final.to_csv('./images/training/data/image_data_final.csv')
df_final = pd.read_csv('./images/training/data/image_data_final.csv')
df.filename.size
# Dataframe of follower counts
df_followers = pd.read_csv("./images/training/data/scraped_follower_counts.csv")
df_followers.head()
# Dataframe of metadata
df_images = pd.read_csv("./images/training/data/merged_metadata.csv")
df_images.head()
# Final dataframe of images that are existing and have follower counts and metadata
df_final = pd.read_csv('./images/training/data/image_data_final.csv')
df_final.head()
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
# In addition to the imports, we'll also import some constants
# And also define our own
from scipy.constants import gravitational_constant, au
year = 365.25*24*3600
mass_sun = 1.989e30
mars_distance = 227.9*1.e9
# NOTE: All units in SI
```
## Exercise 3 - Gravity!
Continuing from the previous notebook, now we're going to try a more difficult problem: gravity! We need to do this in two dimensions, so now we've got more variables. It's still ordinary differential equations though. The only derivative is a time derivative.
Now we want to solve a vector equation:
$$\vec{F~} = - \frac{G~M~m}{r^2} \hat{r~}$$
We'll take this to be the force on $m$, so $F = m a$. In terms of the unnormalized vector $\vec{r~}$, we have
$$\vec{a~} = - \frac{G~M}{r^2} \frac{\vec{r~}}{r}$$
where $r$ is the length of $\vec{r~}$.
So how do we put this into the form scipy expects? We define the position of the little object by
$$\vec{r~} = (x, y)$$
Then the length is
$$r = \sqrt{x^2 + y^2}$$
We have second-order differential equations for both $x$ and $y$. We need four variables $x$, $y$, $v_x$, $v_y$.
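As a warm-up for that reduction, here is the same trick on a one-dimensional harmonic oscillator (an aside, not part of the original notebook): a second-order equation becomes a first-order system once the velocity is promoted to a state variable, which is the form `odeint` expects.

```python
import numpy as np
from scipy.integrate import odeint

def oscillator(state, t):
    # x'' = -x rewritten as the first-order system x' = v, v' = -x
    x, v = state
    return (v, -x)

times = np.linspace(0., 2.*np.pi, 1000)
sol = odeint(oscillator, (1., 0.), times)
# After one full period the oscillator is back at its starting state
print(sol[-1])
```

The gravity problem below is the same construction with four state variables instead of two.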
We also need to rescale our variables. Kilograms, meters, and seconds aren't great for describing orbits. We'll get a lot of huge numbers. Let's define a rescaling:
$$t = T~\tau$$
$$r = R~\rho$$
So the differential equation looks something like
$$\frac{d^2 r}{d t^2} = \frac{R}{T^2} \frac{d^2 \rho}{d \tau^2} = - \frac{G~M}{(R~\rho)^2}$$
or
$$\frac{d^2 \rho}{d \tau^2} = - \left( \frac{G~M~T^2}{R^3}\right) ~ \frac{1}{\rho^2}$$
All the units have been collected into one single factor. If we choose $R = 1~\mathrm{AU}$ and $T = 1~\mathrm{yr}$, this factor becomes a nice number close to $1$.
```
# Calculate the factor above
gee_msol = gravitational_constant*mass_sun
scale_factor = (gee_msol/au/au/au) * year * year
print(scale_factor)
```
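As a sanity check (an aside, not part of the original notebook): with $R = 1~\mathrm{AU}$ and $T = 1~\mathrm{yr}$, Kepler's third law for the Earth's (nearly circular) orbit predicts this factor exactly, since $v = 2\pi R/T$ and $v^2/R = G~M/R^2$ combine to $G~M~T^2/R^3 = 4\pi^2 \approx 39.48$.

```python
import numpy as np
from scipy.constants import gravitational_constant, au

year = 365.25*24*3600
mass_sun = 1.989e30
scale_factor = (gravitational_constant*mass_sun/au**3) * year**2
# Kepler's third law predicts 4*pi^2 for a circular 1 AU, 1 yr orbit
print(scale_factor, 4*np.pi**2)
```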
Now we're ready to define the gravitational acceleration and start some calculations.
```
# Gravitational acceleration in 2D
def fgrav(vec, t):
x, y, vx, vy = vec
r = np.sqrt(x**2 + y**2)  # distance from the origin, r = sqrt(x^2 + y^2)
acc = scale_factor/r**2   # magnitude of the acceleration from the rescaled equation
return (vx, vy, -acc*x/r, -acc*y/r) # Turn the calculations above into the acceleration vector
r_init = (1., 0., 0., 1.) # Starting values at t = 0
times = np.linspace(0., 4., 10000)
rarr = odeint(fgrav, r_init, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr[:,0], rarr[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
```
We just guessed at the initial conditions, and we get a very elliptical orbit. For a circular orbit, the centripetal acceleration must balance the gravitational acceleration:
$$v^2/r = G~M/r^2$$
So the velocity on a circular orbit should be
$$v = \sqrt{G~M/r}$$
We can use that to get the initial conditions correct.
**Exercise 3.1**: Fill in the initial condition below to get a circular orbit at $r = 1$.
```
r_init1 = (1., 0., 0., 1.) # FIXME: Change the last value
times = np.linspace(0., 4., 10000)
rarr1 = odeint(fgrav, r_init1, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr1[:,0], rarr1[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
```
**Exercise 3.2**: How long does a single orbit take? Does this make sense?
**Exercise 3.3**: Play with the conditions below, shooting the planet toward the sun but offset a bit in $y$ so it doesn't go straight through the center. What kind of shapes do you get? Note that we use a different `times` array than the others, so orbits that go way off can be stopped early if you want.
```
r_init2 = (4., 0.5, -10., 0.) # FIXME: Try different values
times2 = np.linspace(0., 2, 1000)
rarr2 = odeint(fgrav, r_init2, times2)
plt.figure(figsize=(8,8))
plt.scatter(rarr2[:,0], rarr2[:,1], s=5)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
```
**Exercise 3.4**: I've defined the distance from Mars to the Sun in meters as `mars_distance`. Define `r_mars` in our units (the ones where the Earth is at $r = 1$), and change the initial conditions below to add Mars to the plot.
```
r_init3 = (1, 0., 0., 1.) # FIXME: Set correct x and vy for Mars
rarr3 = odeint(fgrav, r_init3, times)
plt.figure(figsize=(8,8))
plt.scatter(rarr1[:,0], rarr1[:,1], s=5)
plt.scatter(rarr3[:,0], rarr3[:,1], c='r', s=4)
plt.scatter(0., 0., c='y', s=50)
plt.gca().set_aspect('equal', 'datalim')
```
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
import tensorflow as tf
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
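A toy numerical illustration (my own aside, not part of the assignment): backpropagation multiplies the gradient by one Jacobian factor per layer, so a per-layer factor even slightly below 1 shrinks the gradient exponentially with depth, while a factor slightly above 1 makes it explode.

```python
import numpy as np

# Gradient magnitude after backpropagating through 100 layers, assuming
# each layer scales the signal by a constant factor (a toy model)
for factor in (0.9, 1.1):
    print(factor, factor**100)  # 0.9 -> ~2.7e-5 (vanishing), 1.1 -> ~1.4e4 (exploding)
```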
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
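This ease of learning the identity can be seen in a shape-free numpy sketch (an aside, not the graded Keras code): if the weights of the main path drive its output to zero, the block reduces to `relu(x)`, which is exactly the identity for activations that are already non-negative (as they are after a previous ReLU).

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.)

def residual_block(x, main_path):
    # Output of a ResNet block: main path plus the skip connection
    return relu(main_path(x) + x)

x = relu(np.random.randn(4))            # post-ReLU activations are >= 0
zero_main = lambda v: np.zeros_like(v)  # a main path that has learned zeros
print(np.allclose(residual_block(x, zero_main), x))  # True: the block is the identity
```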
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here are the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [See reference](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [See reference](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
You've implemented the ResNet identity block. Next, you'll implement the ResNet "convolutional block", the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference from the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role to the matrix $W_s$ discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
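To check this stride arithmetic, recall that the spatial output size of a convolution is $\lfloor (n + 2p - f)/s \rfloor + 1$. A small sketch in plain Python (the sizes below are hypothetical, for illustration only):

```python
def conv_output_size(n, f, s, p=0):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# A 1x1 convolution with stride 2 halves a 56x56 activation to 28x28,
# which is how the shortcut CONV2D makes the dimensions match.
print(conv_output_size(56, f=1, s=2))  # 28
```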
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv Hint](https://keras.io/layers/convolutional/#conv2d)
- [BatchNorm Hint](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Addition Hint](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', padding = 'valid', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f, f), strides = (1,1), name = conv_name_base + '2b', padding = 'same', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', padding = 'valid', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
- The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
- The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
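As a sanity check on the name "ResNet-50", counting the weighted layers (the stage-1 conv, the three main-path convs in each of the 16 blocks, and the final dense layer; shortcut convs are conventionally not counted) gives 50. A quick tally in Python:

```python
# Blocks per stage for ResNet-50: 1 convolutional block + identity blocks
blocks_per_stage = {2: 3, 3: 4, 4: 6, 5: 3}  # 16 blocks in total

conv_layers = 1                                    # stage 1 "conv1"
conv_layers += 3 * sum(blocks_per_stage.values())  # 3 convs per block's main path
conv_layers += 1                                   # final fully connected layer
print(conv_layers)  # 50
```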
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here're some other functions we used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f = 3, filters = [128,128,512], stage = 3, block='a', s = 2)
X = identity_block(X, 3, [128,128,512], stage=3, block='b')
X = identity_block(X, 3, [128,128,512], stage=3, block='c')
X = identity_block(X, 3, [128,128,512], stage=3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D(pool_size=(2, 2))(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
```
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model for only two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code for only a small number of epochs as well.
After you have finished this official (graded) part of the assignment, you can optionally train the ResNet for more iterations. We get much better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
<font color='blue'>
**What you should remember:**
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main type of blocks: The identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
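The identity-function point is easy to see with a toy NumPy sketch (illustrative values only): if the residual branch learns to output zeros, the block's Add + ReLU step reduces to a ReLU of the shortcut input:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

x = np.array([-1.0, 0.5, 2.0])   # block input (the shortcut value)
residual = np.zeros_like(x)      # residual branch driven to zero
out = relu(residual + x)         # final Add + ReLU of the block

# ReLU clips the negative entry; non-negative entries (typical
# post-ReLU activations) pass through unchanged, so the block
# behaves as an identity function on them.
print(out)
```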
### References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
# Road Following - Live demo (TensorRT)
In this notebook, we will use the model we trained to move the JetBot smoothly on the track.
# TensorRT
```
import torch
device = torch.device('cuda')
```
Load the TRT optimized model by executing the cell below
```
import torch
from torch2trt import TRTModule
model_trt = TRTModule()
model_trt.load_state_dict(torch.load('best_steering_model_xy_trt.pth'))
```
### Creating the Pre-Processing Function
We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera, so we need to do some preprocessing. This involves the following steps:
1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
```
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np
mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()
def preprocess(image):
image = PIL.Image.fromarray(image)
image = transforms.functional.to_tensor(image).to(device).half()
image.sub_(mean[:, None, None]).div_(std[:, None, None])
return image[None, ...]
```
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.
Now, let's start and display our camera. You should be pretty familiar with this by now.
```
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg
camera = Camera()
image_widget = ipywidgets.Image()
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```
We'll also create our robot instance which we'll need to drive the motors.
```
from jetbot import Robot
robot = Robot()
```
Now, we will define sliders to control JetBot
> Note: We have initialized the slider values to the best known configuration; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment
1. Speed Control (speed_gain_slider): To start your JetBot increase ``speed_gain_slider``
2. Steering Gain Control (steering_gain_slider): If you see the JetBot wobbling, reduce ``steering_gain_slider`` until the motion is smooth
3. Steering Bias Control (steering_bias_slider): If you see the JetBot biased towards the extreme right or extreme left side of the track, adjust this slider until the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets
> Note: You should experiment with the above sliders at a lower speed to get smooth JetBot road-following behavior.
```
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')
display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)
```
Next, let's display some sliders that will let us see what JetBot is thinking. The x and y sliders will display the predicted x, y values.
The steering slider will display our estimated steering value. Please remember, this value isn't the actual angle of the target, but simply a value that is
nearly proportional. When the actual angle is ``0``, this will be zero, and it will increase / decrease with the actual angle.
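The steering value displayed here comes from `np.arctan2(x, y)` applied to the network's predicted target point, as used later in the `execute` function. A standalone sketch with made-up predictions (illustrative values only, not from a real model):

```python
import numpy as np

# Steering angle from a predicted target point (x, y): zero when the
# target is straight ahead, increasing in magnitude with the offset x.
for x, y in [(0.0, 0.5), (0.3, 0.5), (-0.3, 0.5)]:
    angle = np.arctan2(x, y)
    print(round(float(angle), 3))
```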
```
x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
display(ipywidgets.HBox([y_slider, speed_slider]))
display(x_slider, steering_slider)
```
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps
1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
```
angle = 0.0
angle_last = 0.0
def execute(change):
global angle, angle_last
image = change['new']
xy = model_trt(preprocess(image)).detach().float().cpu().numpy().flatten()
x = xy[0]
y = (0.5 - xy[1]) / 2.0
x_slider.value = x
y_slider.value = y
speed_slider.value = speed_gain_slider.value
angle = np.arctan2(x, y)
pid = angle * steering_gain_slider.value + (angle - angle_last) * steering_dgain_slider.value
angle_last = angle
steering_slider.value = pid + steering_bias_slider.value
robot.left_motor.value = max(min(speed_slider.value + steering_slider.value, 1.0), 0.0)
robot.right_motor.value = max(min(speed_slider.value - steering_slider.value, 1.0), 0.0)
execute({'new': camera.value})
```
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing.
We accomplish that with the observe function.
>WARNING: This code will move the robot!! Please make sure your robot has clearance and is on the Lego or track you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
```
camera.observe(execute, names='value')
```
Awesome! If your robot is plugged in it should now be generating new commands with each new camera frame.
You can now place the JetBot on the Lego or track you collected data on and see whether it can follow the track.
If you want to stop this behavior, you can unattach this callback by executing the code below.
```
import time
camera.unobserve(execute, names='value')
time.sleep(0.1) # add a small sleep to make sure frames have finished processing
robot.stop()
```
Again, let's close the camera connection properly so that we can use the camera in other notebooks.
```
camera.stop()
```
### Conclusion
That's it for this live demo! Hopefully you had some fun seeing your JetBot moving smoothly on track following the road!!!
If your JetBot wasn't following the road very well, try to spot where it fails. The beauty is that we can collect more data for these failure scenarios, and the JetBot should get even better :)
<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-iss.png' width=15% style="float: right;">
<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-nus.png' width=15% style="float: right;">
---
```
import IPython.display
IPython.display.YouTubeVideo('leVZjVahdKs')
```
# How to Use and Develop a WeChat Chat-bot: A Tutorial Series
# A workshop to develop & use an intelligent and interactive chat-bot in WeChat
### WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='https://www.iss.nus.edu.sg/images/default-source/About-Us/7.6.1-teaching-staff/sam-website.tmb-.png' width=8% style="float: right;">
<img src='reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
by: GU Zhan (Sam)
October 2018: Updated to support Python 3 on a local machine, e.g. iss-vm.
April 2017 ======= Scan the QR code to become trainer's friend in WeChat =====>>
### Lesson 5: Video Recognition & Processing
* Label Detection: Detect entities within the video, such as "dog", "flower" or "car"
* Shot Change Detection: Detect scene changes within the video
* Explicit Content Detection: Detect adult content within a video
* Video Transcription (BETA): Transcribe video content in English
### Using Google Cloud Platform's Machine Learning APIs
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Enable the following APIs for your project (search for them) if they are not already enabled:
<ol>
**<li> Google Cloud Video Intelligence API </li>**
</ol>
Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)
```
# Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License");
# !pip install --upgrade google-api-python-client
```
---
### Video Preview
```
# Base64 encoding of binary media files (media pre-processing functions)
# Import the base64 encoding library.
import base64, io, sys, IPython.display
# Python 2
if sys.version_info[0] < 3:
import urllib2
# Python 3
else:
import urllib.request
# Pass the media data to an encoding function.
def encode_media(media_file):
with io.open(media_file, "rb") as media_file:
media_content = media_file.read()
# Python 2
if sys.version_info[0] < 3:
return base64.b64encode(media_content).decode('ascii')
# Python 3
else:
return base64.b64encode(media_content).decode('utf-8')
video_file = 'reference/video_IPA.mp4'
# video_file = 'reference/SampleVideo_360x240_1mb.mp4'
# video_file = 'reference/SampleVideo_360x240_2mb.mp4'
IPython.display.HTML(data=
'''<video alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /></video>'''
.format(encode_media(video_file)))
```
---
## <span style="color:blue">Install the client library</span> for Video Intelligence / Processing
```
!pip install --upgrade google-cloud-videointelligence
```
---
```
# Imports the Google Cloud client library
from google.cloud import videointelligence
# [Optional] Display location of service account API key if defined in GOOGLE_APPLICATION_CREDENTIALS
!echo $GOOGLE_APPLICATION_CREDENTIALS
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
```
### Label Detection: Detect entities within the video, such as "dog", "flower" or "car"
https://cloud.google.com/video-intelligence/docs/analyze-labels
didi_video_label_detection()
```
from google.cloud import videointelligence
def didi_video_label_detection(path):
"""Detect labels given a local file path. (Demo)"""
""" Detects labels given a GCS path. (Exercise / Workshop Enhancement)"""
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.LABEL_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
operation = video_client.annotate_video(
features=features, input_content=input_content)
print('\nProcessing video for label annotations:')
result = operation.result(timeout=90)
print('\nFinished processing.')
# Process video/segment level label annotations
segment_labels = result.annotation_results[0].segment_label_annotations
for i, segment_label in enumerate(segment_labels):
print('Video label description: {}'.format(
segment_label.entity.description))
for category_entity in segment_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
for i, segment in enumerate(segment_label.segments):
start_time = (segment.segment.start_time_offset.seconds +
segment.segment.start_time_offset.nanos / 1e9)
end_time = (segment.segment.end_time_offset.seconds +
segment.segment.end_time_offset.nanos / 1e9)
positions = '{}s to {}s'.format(start_time, end_time)
confidence = segment.confidence
print('\tSegment {}: {}'.format(i, positions))
print('\tConfidence: {}'.format(confidence))
print('\n')
# Process shot level label annotations
shot_labels = result.annotation_results[0].shot_label_annotations
for i, shot_label in enumerate(shot_labels):
print('Shot label description: {}'.format(
shot_label.entity.description))
for category_entity in shot_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
for i, shot in enumerate(shot_label.segments):
start_time = (shot.segment.start_time_offset.seconds +
shot.segment.start_time_offset.nanos / 1e9)
end_time = (shot.segment.end_time_offset.seconds +
shot.segment.end_time_offset.nanos / 1e9)
positions = '{}s to {}s'.format(start_time, end_time)
confidence = shot.confidence
print('\tSegment {}: {}'.format(i, positions))
print('\tConfidence: {}'.format(confidence))
print('\n')
# Process frame level label annotations
frame_labels = result.annotation_results[0].frame_label_annotations
for i, frame_label in enumerate(frame_labels):
print('Frame label description: {}'.format(
frame_label.entity.description))
for category_entity in frame_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
# Each frame_label_annotation has many frames,
# here we print information only about the first frame.
frame = frame_label.frames[0]
time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
print('\tFirst frame time offset: {}s'.format(time_offset))
print('\tFirst frame confidence: {}'.format(frame.confidence))
print('\n')
return segment_labels, shot_labels, frame_labels
# video_file = 'reference/video_IPA.mp4'
didi_segment_labels, didi_shot_labels, didi_frame_labels = didi_video_label_detection(video_file)
didi_segment_labels
didi_shot_labels
didi_frame_labels
```
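The segment, shot, and frame loops above all convert a protobuf time offset with `seconds + nanos / 1e9`. A small helper keeps that conversion in one place (a sketch; `offset_to_seconds` is not part of the client library, and `FakeOffset` is a hypothetical stand-in used only for this demo):

```
def offset_to_seconds(offset):
    """Convert an object with .seconds and .nanos attributes to float seconds."""
    return offset.seconds + offset.nanos / 1e9

class FakeOffset:
    """Hypothetical stand-in for a protobuf time offset, for demonstration only."""
    def __init__(self, seconds, nanos):
        self.seconds = seconds
        self.nanos = nanos

print(offset_to_seconds(FakeOffset(3, 500000000)))  # 3.5
```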
### * 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)
https://cloud.google.com/video-intelligence/docs/shot_detection
didi_video_shot_detection()
```
import io

from google.cloud import videointelligence
def didi_video_shot_detection(path):
""" Detects camera shot changes given a local file path """
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.SHOT_CHANGE_DETECTION]
# features = [videointelligence.enums.Feature.LABEL_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
# operation = video_client.annotate_video(path, features=features)
operation = video_client.annotate_video(features=features, input_content=input_content)
print('\nProcessing video for shot change annotations:')
result = operation.result(timeout=180)
print('\nFinished processing.')
for i, shot in enumerate(result.annotation_results[0].shot_annotations):
start_time = (shot.start_time_offset.seconds +
shot.start_time_offset.nanos / 1e9)
end_time = (shot.end_time_offset.seconds +
shot.end_time_offset.nanos / 1e9)
print('\tShot {}: {} to {}'.format(i, start_time, end_time))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_shot_detection(video_file)
didi_result
```
### * 识别受限内容 (Explicit Content Detection: Detect adult content within a video)
didi_video_safesearch_detection()
```
import io

from google.cloud import videointelligence
def didi_video_safesearch_detection(path):
""" Detects explicit content given a local file path. """
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.EXPLICIT_CONTENT_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
# operation = video_client.annotate_video(path, features=features)
operation = video_client.annotate_video(features=features, input_content=input_content)
print('\nProcessing video for explicit content annotations:')
result = operation.result(timeout=90)
print('\nFinished processing.')
likely_string = ("Unknown", "Very unlikely", "Unlikely", "Possible",
"Likely", "Very likely")
# first result is retrieved because a single video was processed
for frame in result.annotation_results[0].explicit_annotation.frames:
frame_time = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
print('Time: {}s'.format(frame_time))
print('\tpornography: {}'.format(
likely_string[frame.pornography_likelihood]))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_safesearch_detection(video_file)
```
### <span style="color:red">[ Beta Features ]</span> * 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)
https://cloud.google.com/video-intelligence/docs/beta
Cloud Video Intelligence API includes the following beta features in version v1p1beta1:
Speech Transcription - the Video Intelligence API can transcribe speech to text from the audio in supported video files.
```
# Beta Features: videointelligence_v1p1beta1
import io

from google.cloud import videointelligence_v1p1beta1 as videointelligence
def didi_video_speech_transcription(path):
"""Transcribe speech given a local file path."""
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.SPEECH_TRANSCRIPTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
config = videointelligence.types.SpeechTranscriptionConfig(
language_code='en-US',
enable_automatic_punctuation=True)
video_context = videointelligence.types.VideoContext(
speech_transcription_config=config)
# operation = video_client.annotate_video(
# input_uri,
# features=features,
# video_context=video_context)
operation = video_client.annotate_video(
features=features,
input_content=input_content,
video_context=video_context)
print('\nProcessing video for speech transcription.')
result = operation.result(timeout=180)
# There is only one annotation_result since only
# one video is processed.
annotation_results = result.annotation_results[0]
speech_transcription = annotation_results.speech_transcriptions[0]
if str(speech_transcription) == '': # result.annotation_results[0].speech_transcriptions[0] == ''
print('\nNOT FOUND: video for speech transcription.')
else:
alternative = speech_transcription.alternatives[0]
print('Transcript: {}'.format(alternative.transcript))
print('Confidence: {}\n'.format(alternative.confidence))
print('Word level information:')
for word_info in alternative.words:
word = word_info.word
start_time = word_info.start_time
end_time = word_info.end_time
print('\t{}s - {}s: {}'.format(
start_time.seconds + start_time.nanos * 1e-9,
end_time.seconds + end_time.nanos * 1e-9,
word))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_speech_transcription(video_file)
didi_result
```
---
## <span style="color:blue">Wrap cloud APIs into Functions() for conversational virtual assistant (VA):</span>
Reuse above defined Functions().
```
def didi_video_processing(video_file):
didi_video_reply = u'[ Video 视频处理结果 ]\n\n'
didi_video_reply += u'[ didi_video_label_detection 识别视频消息中的物体名字 ]\n\n' \
+ str(didi_video_label_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_shot_detection 识别视频的场景片段 ]\n\n' \
+ str(didi_video_shot_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_safesearch_detection 识别受限内容 ]\n\n' \
+ str(didi_video_safesearch_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_speech_transcription 生成视频字幕 ]\n\n' \
+ str(didi_video_speech_transcription(video_file)) + u'\n\n'
return didi_video_reply
# [Optional] Agile testing:
# parm_video_response = didi_video_processing(video_file)
# print(parm_video_response)
```
**Define a global variable for future 'video search' function enhancement**
```
parm_video_response = {} # Define a global variable for future 'video search' function enhancement
```
---
## <span style="color:blue">Start interactive conversational virtual assistant (VA):</span>
### Import ItChat, etc. 导入需要用到的一些功能程序库:
```
import itchat
from itchat.content import *
```
### Log in using QR code image / 用微信App扫QR码图片来自动登录
```
# itchat.auto_login(hotReload=True) # hotReload=True: keep the login session after the program exits, so restarting within a short time does not require scanning the QR code again.
itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: display the QR code image in the command line
# @itchat.msg_register([VIDEO], isGroupChat=True)
@itchat.msg_register([VIDEO])
def download_files(msg):
msg.download(msg.fileName)
print('\nDownloaded video file name is: %s' % msg['FileName'])
##############################################################################################################
# call video analysis APIs #
##############################################################################################################
global parm_video_response # save into global variable, which can be accessed by next WeChat keyword search
# python 2 version WeChat Bot
# parm_video_response = KudosData_VIDEO_DETECTION(encode_media(msg['FileName']))
# python 3 version WeChat Bot
parm_video_response = didi_video_processing(msg['FileName'])
##############################################################################################################
# format video API results #
##############################################################################################################
# python 2 version WeChat Bot
# video_analysis_reply = KudosData_video_generate_reply(parm_video_response)
# python 3 version WeChat Bot
    video_analysis_reply = parm_video_response # Exercise / Workshop Enhancement: parse and format the result nicely.
print ('')
print(video_analysis_reply)
return video_analysis_reply
itchat.run()
```
---
```
# interupt kernel, then logout
itchat.logout() # 安全退出
```
---
## <span style="color:blue">Exercise / Workshop Enhancement:</span>
<font color='blue'>
[提问 1] 使用文字来搜索视频内容?需要怎么处理?
[Question 1] Can we use text (keywords) as input to search video content? How?
</font>
<font color='blue'>
[提问 2] 使用图片来搜索视频内容?需要怎么处理?
[Question 2] Can we use an image as input to search video content? How?
</font>
```
'''
# Private conversational mode, keyword-based video search / 单聊模式,基于关键词进行视频搜索:
@itchat.msg_register([TEXT])
def text_reply(msg):
# if msg['isAt']:
list_keywords = [x.strip() for x in msg['Text'].split(',')]
# call video search function:
search_responses = KudosData_search(list_keywords) # return is a list
# Format search results:
search_reply = u'[ Video Search 视频搜索结果 ]' + '\n'
if len(search_responses) == 0:
        search_reply += u'[ Nil 无结果 ]'
else:
for i in range(len(search_responses)): search_reply += '\n' + str(search_responses[i])
print ('')
print (search_reply)
return search_reply
'''
'''
# Group conversational mode, keyword-based video search / 群聊模式,基于关键词进行视频搜索:
@itchat.msg_register([TEXT], isGroupChat=True)
def text_reply(msg):
if msg['isAt']:
list_keywords = [x.strip() for x in msg['Text'].split(',')]
# call video search function:
search_responses = KudosData_search(list_keywords) # return is a list
# Format search results:
search_reply = u'[ Video Search 视频搜索结果 ]' + '\n'
if len(search_responses) == 0:
        search_reply += u'[ Nil 无结果 ]'
else:
for i in range(len(search_responses)): search_reply += '\n' + str(search_responses[i])
print ('')
print (search_reply)
return search_reply
'''
```
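One hedged way to approach Question 1: the raw API text is already stored in the global `parm_video_response`, so a simple substring match per keyword is enough for a first version. Note that `KudosData_search` in the template above is not defined in this notebook; the sketch below, with the hypothetical names `didi_keyword_search` and `fake_response`, is one possible stand-in:

```
def didi_keyword_search(list_keywords, video_response):
    """Return the keywords that appear in the stored video-analysis text (simple substring match)."""
    text = str(video_response).lower()
    return [kw for kw in list_keywords if kw.lower() in text]

# Example with a fake stored response:
fake_response = '[ didi_video_label_detection ] Video label description: dog, flower'
print(didi_keyword_search(['dog', 'car'], fake_response))  # ['dog']
```

A fuller answer could parse the label annotations into a structured index first, but substring matching over the stored reply is enough to demonstrate the idea.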
### 恭喜您!已经完成了: (Congratulations! You have completed:)
### 第五课:视频识别和处理
### Lesson 5: Video Recognition & Processing
* 识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as "dog", "flower" or "car")
* 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)
* 识别受限内容 (Explicit Content Detection: Detect adult content within a video)
* 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)
### 下一课是: (Next lesson:)
### 第六课:交互式虚拟助手的智能应用
### Lesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations
* 虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation)
* 虚拟员工: 文字指令交互(Conversational automation using text/message command)
* 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
* 虚拟员工: 多种语言交互(Conversational automation with multiple languages)
<img src='reference/WeChat_SamGu_QR.png' width=80% style="float: left;">
---
# Sequence to Sequence attention model for machine translation
This notebook trains a sequence-to-sequence (seq2seq) model, with two different attention mechanisms, for Spanish-to-English translation.
The code builds on the TensorFlow Core tutorial: https://www.tensorflow.org/tutorials/text/nmt_with_attention
```
import tensorflow as tf
print(tf.__version__)
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
# Load data set
* Clean the sentences by removing special characters.
* Add a start and end token to each sentence.
* Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
* Pad each sentence to a maximum length.
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",","¿")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
# remove extra space
w = w.strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this @ book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
print(preprocess_sentence(sp_sentence).encode("UTF-8"))
# Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
print(len(en), len(sp))
# Tokenize the sentence into list of words(integers) and pad the sequence to the same length
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
print(max_length_targ, max_length_inp)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
print(input_tensor_train[0])
print(target_tensor_train[0])
```
# Create a tf.data dataset
The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:
* Create a source dataset from your input data.
* Apply dataset transformations to preprocess the data.
* Iterate over the dataset and process the elements.
Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
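The source → transform → iterate pattern is the same idea as chaining plain Python iterators. A toy analogy (no TensorFlow needed; `batch` here is a hypothetical helper, not the tf.data method):

```
# Source: a plain list of integers.
source = [1, 2, 3, 4, 5, 6, 7]

# Transformation: a map step (square each element).
mapped = map(lambda x: x * x, source)

# Batching: group elements into fixed-size batches, dropping the remainder,
# analogous to batching with drop_remainder=True.
def batch(iterable, size):
    items = list(iterable)
    return [items[i:i + size] for i in range(0, len(items) - len(items) % size, size)]

for b in batch(mapped, 3):
    print(b)
# [1, 4, 9]
# [16, 25, 36]
```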
```
# Configuration
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
steps_per_epoch_val = len(input_tensor_val)//BATCH_SIZE
embedding_dim = 256 # for word embedding
units = 1024 # dimensionality of the output space of RNN
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
validation_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val)).shuffle(BUFFER_SIZE)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
# Basic seq2seq model: encoder and decoder
A `tf.keras.Model` groups layers into an object with training and inference features. There are two ways to define a TF model:

Basic sequence to sequence model without attention:

```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True, # Whether to return the last output in the output sequence, or the full sequence.
return_state=True, # Whether to return the last state in addition to the output.
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state
tf.reshape([[1,2,3],[4,5,6]], (-1, 2))
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
# Dot-product attention


```
class DotProductAttention(tf.keras.layers.Layer):
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# inner product, score shape == (batch_size, max_length, 1)
score = query_with_time_axis * values
score = tf.reduce_sum(score, axis=2)
score = tf.expand_dims(score, 2)
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = DotProductAttention()
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
```
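The same dot-product scoring can be written in a few lines of NumPy, which makes the shapes easy to check by hand (a standalone sketch, not part of the Keras layer above):

```
import numpy as np

def dot_product_attention(query, values):
    """query: (batch, hidden); values: (batch, max_len, hidden)."""
    # score shape == (batch, max_len): inner product of the query with each value vector
    score = np.einsum('bh,blh->bl', query, values)
    # numerically stable softmax over the time axis
    weights = np.exp(score - score.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)
    # context shape == (batch, hidden): attention-weighted sum of the values
    context = np.einsum('bl,blh->bh', weights, values)
    return context, weights

q = np.ones((2, 4))        # (batch, hidden)
v = np.ones((2, 5, 4))     # (batch, max_len, hidden)
context, weights = dot_product_attention(q, v)
print(context.shape, weights.shape)  # (2, 4) (2, 5)
```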
# Additive attention

```
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(query_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
```
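The additive score differs only in how the alignment score is computed: a one-hidden-layer feed-forward network instead of an inner product. In NumPy shorthand, with random weights standing in for the trained Dense layers (a sketch only):

```
import numpy as np

rng = np.random.default_rng(0)
hidden, max_len, units = 4, 5, 8
W1 = rng.normal(size=(hidden, units))   # stands in for self.W1
W2 = rng.normal(size=(hidden, units))   # stands in for self.W2
V = rng.normal(size=(units, 1))         # stands in for self.V

query = rng.normal(size=(1, hidden))            # (batch, hidden)
values = rng.normal(size=(1, max_len, hidden))  # (batch, max_len, hidden)

# score shape == (batch, max_len, 1)
score = np.tanh(values @ W1 + (query @ W2)[:, None, :]) @ V
weights = np.exp(score) / np.exp(score).sum(axis=1, keepdims=True)
context = (weights * values).sum(axis=1)        # (batch, hidden)
print(context.shape)  # (1, 4)
```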
# Decoder layer with attention

```
class DecoderWithAttention(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz, attention_layer = None):
super(DecoderWithAttention, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = attention_layer
def call(self, x, hidden, enc_output):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
attention_weights = None
if self.attention:
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
```
# Define loss function
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label.

```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
print(loss_object([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
print(loss_function([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
```
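The effect of the mask can be checked with a small NumPy example: per-token losses on padding positions (token id 0) are zeroed before averaging (toy numbers, for illustration only):

```
import numpy as np

real = np.array([5, 3, 0, 0])                    # last two positions are padding
per_token_loss = np.array([1.0, 2.0, 4.0, 4.0])  # pretend cross-entropy values

mask = (real != 0).astype(per_token_loss.dtype)  # [1., 1., 0., 0.]
masked = per_token_loss * mask                   # padding losses zeroed
print(masked.mean())  # 0.75 == (1.0 + 2.0) / 4
```

Note that, as in `loss_function` above, the mean is taken over all positions, including the masked ones.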
# Training
`@tf.function`
In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability. It is recommended to debug in eager mode, then decorate with @tf.function for better performance.
In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it's not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations - for example, one step of training, or the forward pass of your model.
TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation.
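The "tape" idea can be sketched in plain Python: record each operation during the forward pass, then walk the records backwards and apply the chain rule (a toy illustration of reverse-mode differentiation, not how tf.GradientTape is actually implemented):

```
class TinyTape:
    """Record multiplications and replay them backwards for gradients."""
    def __init__(self):
        self.ops = []

    def mul(self, a, b):
        self.ops.append((a, b))
        return a * b

    def gradient(self, upstream=1.0):
        # Walk the recorded ops in reverse, applying the product rule.
        grads = {}
        for a, b in reversed(self.ops):
            grads['a'] = upstream * b   # d(a*b)/da = b
            grads['b'] = upstream * a   # d(a*b)/db = a
        return grads

tape = TinyTape()
x = 3.0
y = tape.mul(x, x)       # forward pass: y = x * x
g = tape.gradient()
print(g['a'] + g['b'])   # 6.0 == dy/dx = 2x (both occurrences of x contribute)
```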
```
optimizer = tf.keras.optimizers.Adam()
def get_train_step_func():
@tf.function
def train_step(inp, targ, enc_hidden, encoder, decoder):
loss = 0
with tf.GradientTape() as tape: # for automatic differentiation
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
return train_step
def caculate_validation_loss(inp, targ, enc_hidden, encoder, decoder):
loss = 0
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
dec_input = tf.expand_dims(targ[:, t], 1)
loss = loss / int(targ.shape[1])
return loss
def training_seq2seq(epochs, attention):
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = DecoderWithAttention(vocab_tar_size, embedding_dim, units, BATCH_SIZE, attention)
train_step_func = get_train_step_func()
training_loss = []
validation_loss = []
for epoch in range(epochs):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step_func(inp, targ, enc_hidden, encoder, decoder)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss))
enc_hidden = encoder.initialize_hidden_state()
total_val_loss = 0
        for (batch, (inp, targ)) in enumerate(validation_dataset.take(steps_per_epoch_val)):
val_loss = caculate_validation_loss(inp, targ, enc_hidden, encoder, decoder)
total_val_loss += val_loss
training_loss.append(total_loss / steps_per_epoch)
validation_loss.append(total_val_loss / steps_per_epoch_val)
print('Epoch {} Loss {:.4f} Validation Loss {:.4f}'.format(epoch + 1,
training_loss[-1], validation_loss[-1]))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
return encoder, decoder, training_loss, validation_loss
```
## Training seq2seq without attention
```
epochs = 10
attention = None
print("Running seq2seq model without attention")
encoder, decoder, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = training_loss
vloss = validation_loss
```
## Training seq2seq with dot product attention
```
attention = DotProductAttention()
print("Running seq2seq model with dot product attention")
encoder_dp, decoder_dp, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
```
## Training seq2seq with Bahdanau attention
```
epochs = 10
attention = BahdanauAttention(units)
print("Running seq2seq model with Bahdanau attention")
encoder_bah, decoder_bah, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
import matplotlib.pyplot as plt
ax = plt.subplot(111)
t = np.arange(1, epochs+1)
for i in range(0, vloss.shape[0]):
line, = plt.plot(t, vloss[i,:], lw=2)
ax.legend(('No attention', 'Dot product', 'Bahdanau'))
ax.set_title("Validation loss")
```
# Translation
```
def translate(sentence, encoder, decoder):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
# until the predicted word is <end>.
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence
# the predicted ID is fed back into the model, no teacher forcing.
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence
result, sentence = translate(u'esta es mi vida.', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'esta es mi vida.', encoder_dp, decoder_dp)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'¿todavia estan en casa?', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
```
# Next Steps
* Training on larger dataset
* Model tuning
* Try out other attention scores such as multiplicative
* Train on other seq2seq tasks
# QCoDeS Example with DynaCool PPMS
This notebook explains how to control the DynaCool PPMS from QCoDeS.
For this setup to work, the proprietary `PPMS Dynacool` application (or, alternatively, `Simulate PPMS Dynacool`) must be running on some PC. On that same PC, the `server.py` script (found in `qcodes/instrument_drivers/QuantumDesign/DynaCoolPPMS/private`) must be running. The script can be run from the command line with no arguments and requires Python 3.6+.
The architecture is as follows:
The QCoDeS driver sends strings via VISA to the server, which passes those same strings on to the `CommandHandler` (found in `qcodes/instrument_drivers/QuantumDesign/DynaCoolPPMS/commandhandler`). The `CommandHandler` makes the calls into the proprietary API. The QCoDeS driver can thus run on any machine that can communicate with the machine hosting the server.
Apart from that, the driver is really simple. For this notebook, we used the `Simulate PPMS Dynacool` application running on the same machine as QCoDeS.
```
%matplotlib notebook
from qcodes.instrument_drivers.QuantumDesign.DynaCoolPPMS.DynaCool import DynaCool
```
To instantiate the driver, simply provide the address and port in the standard VISA format.
The connect message is not too pretty, but there does not seem to be a way to query serial and firmware versions.
```
dynacool = DynaCool('dynacool', address="TCPIP0::127.0.0.1::5000::SOCKET")
```
To get an overview over all available parameters, use `print_readable_snapshot`.
A value of "Not available" means (for this driver) that the parameter has been deprecated.
```
dynacool.print_readable_snapshot(update=True)
```
## Temperature Control
As soon as ANY of the temperature rate, the temperature setpoint, or the temperature settling mode parameters has been set, the system will start moving to the given temperature setpoint at the given rate using the given settling mode.
The system can continuously be queried for its temperature.
```
from time import sleep
import matplotlib.pyplot as plt
import numpy as np
# example 1
dynacool.temperature_rate(0.1)
dynacool.temperature_setpoint(dynacool.temperature() - 1.3)
temps = []
while dynacool.temperature_state() == 'tracking':
temp = dynacool.temperature()
temps.append(temp)
sleep(0.75)
print(f'Temperature is now {temp} K')
plt.figure()
timeax = np.linspace(0, len(temps)*0.75, len(temps))
plt.plot(timeax, temps, '-o')
plt.xlabel('Time (s)')
plt.ylabel('Temperature (K)')
```
## Field Control
The field has **five** related parameters:
- `field_measured`: (read-only) the field strength right now
- `field_target`: the target field that the `ramp` method will ramp to when called. Setting this parameter does **not** trigger a ramp
- `field_rate`: the field ramp rate. NB: setting this parameter **will** trigger a ramp
- `field_approach`: the approach the system should use to ramp. NB: setting this parameter **will** trigger a ramp
- `field_ramp`: this is a convenience parameter that sets the target and then triggers a blocking ramp.
The idea is that the user first sets the `field_target` and then ramps the field to that target using the `ramp` method. The ramp method takes a `mode` argument that controls whether the ramp is blocking or non-blocking.
With the simulation software, the field change is instantaneous irrespective of the ramp rate. We nevertheless include two examples of ramping here.
### A blocking ramp
First, we set a field target:
```
field_now = dynacool.field_measured()
target = field_now + 1
dynacool.field_target(target)
```
Note that the field has not changed yet:
```
assert dynacool.field_measured() == field_now
```
And now we ramp:
```
dynacool.ramp(mode='blocking')
```
The ramping will take some finite time on a real instrument. The field value is now at the target field:
```
print(f'Field value: {dynacool.field_measured()} T')
print(f'Field target: {dynacool.field_target()} T')
```
### A non-blocking ramp
The non-blocking ramp is very similar to the blocking ramp.
```
field_now = dynacool.field_measured()
target = field_now - 0.5
dynacool.field_target(target)
assert dynacool.field_measured() == field_now
dynacool.ramp(mode='non-blocking')
# Here you can do stuff while the magnet ramps
print(f'Field value: {dynacool.field_measured()} T')
print(f'Field target: {dynacool.field_target()} T')
```
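While a non-blocking ramp is running, it can be convenient to wait for completion explicitly before measuring. A small polling helper, sketched against the parameter names used above (the `tolerance`, `poll_interval`, and `timeout` knobs are hypothetical conveniences, not part of the driver):

```python
import time

def wait_for_ramp(instrument, tolerance=1e-4, poll_interval=0.5, timeout=600):
    """Poll field_measured until it is within `tolerance` (T) of field_target.

    Returns True once the target is reached, False on timeout.
    `tolerance`, `poll_interval`, and `timeout` are hypothetical knobs; the
    `field_measured`/`field_target` parameter names follow the driver above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if abs(instrument.field_measured() - instrument.field_target()) < tolerance:
            return True
        time.sleep(poll_interval)
    return False
```

This keeps the main script free to do other work between polls, unlike the blocking ramp, while still giving a well-defined synchronization point.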
### Using the `field_ramp` parameter
The `field_ramp` parameter sets the target field and triggers a blocking ramp when it is set.
```
print(f'Now the field is {dynacool.field_measured()} T...')
print(f'...and the field target is {dynacool.field_target()} T.')
dynacool.field_ramp(1)
print(f'Now the field is {dynacool.field_measured()} T...')
print(f'...and the field target is {dynacool.field_target()} T.')
```
# Sentiment analysis
```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
df=pd.read_csv("train.csv")
df.head()
df.shape
df_clean=df.drop(['ID','Place','location','date','status','job_title','summary','advice_to_mgmt','score_1','score_2','score_3','score_4','score_5','score_6','overall'],axis=1)
df_postitives=df_clean.drop(['negatives'],axis=1)
df_postitives=df_postitives.head(n=1000)
df_negatives=df_clean.drop(['positives'],axis=1)
df_negatives=df_negatives.head(n=1000)
df_postitives['polarity']=1
df_negatives['polarity']=0
df_postitives.columns=['value','polarity']
df_negatives.columns=['value','polarity']
frames=[df_postitives,df_negatives]
df=pd.concat(frames)
df_postitives
type(df_postitives)
type(df_postitives['value'][2])
df_to_list=df_postitives['value'].values.tolist()
df_to_list0=df_to_list #positives
df_to_list1=df_negatives['value'].values.tolist()
type(df_to_list)
vectorizer=TfidfVectorizer(max_features=1000)
vectors=vectorizer.fit_transform(df.value)
word_df=pd.DataFrame(vectors.toarray(),columns=vectorizer.get_feature_names())
word_df.head()
X=word_df
y=df.polarity
X
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
# logistic regression model
logreg = LogisticRegression(C=1e9, solver='lbfgs', max_iter=1000)
logreg.fit(X, y)
```
# Testing
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test= train_test_split(X,y,test_size=0.33, random_state=12)
# logistic regression model
logreg1 = LogisticRegression(C=1e9, solver='lbfgs', max_iter=1000)
logreg1.fit(X_train, y_train)
svc1=LinearSVC()
svc1.fit(X_train,y_train)
y_pred= logreg1.predict(X_test)
y_pred2=svc1.predict(X_test)
y_test
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
cm= confusion_matrix(y_test,y_pred)
print(cm)
import pylab as pl
pl.matshow(cm)
pl.title('Confusion matrix of LogisticRegression')
pl.colorbar()
pl.show()
cm_svc= confusion_matrix(y_test,y_pred2)
print(cm_svc)
import pylab as pl
pl.matshow(cm_svc)
pl.title('Confusion matrix of LinearSVC')
pl.colorbar()
pl.show()
```
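The confusion matrices printed above can be condensed into a few summary metrics. A small sketch, assuming scikit-learn's 2×2 layout `[[TN, FP], [FN, TP]]`:

```python
import numpy as np

def binary_metrics(cm):
    """Accuracy, precision, and recall from a 2x2 confusion matrix
    laid out as [[TN, FP], [FN, TP]] (scikit-learn's convention)."""
    tn, fp, fn, tp = cm.ravel()
    accuracy = (tp + tn) / cm.sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall
```

Passing `cm` or `cm_svc` from the cells above to this helper gives a quick side-by-side comparison of the two classifiers.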
## End of testing
```
svc=LinearSVC()
svc.fit(X,y)
unknown = pd.DataFrame({'content': [
    "office space is big",
    "excellent manager",
    "low salary",
    "good career opportunity",
    "office politics",
    "inefficient leadership",
    "canteen food is good but service is bad",
    "canteen food is good",
    "food tastes bad"
]})
unknown_vectors = vectorizer.transform(unknown.content)
unknown_words_df = pd.DataFrame(unknown_vectors.toarray(), columns=vectorizer.get_feature_names())
unknown_words_df.head()
unknown['pred_logreg'] = logreg.predict(unknown_words_df)
unknown
unknown.pred_logreg[0]
df_review = df_postitives
df_review=df_postitives["value"]+df_negatives['value']
df_review
df1=pd.read_csv("train.csv")
df1_overall=df1.drop(['ID','Place','location','date','status','job_title','summary','advice_to_mgmt','score_1','score_2','score_3','score_4','score_5','score_6'],axis=1)
df1_overall["review"] = df1_overall["positives"] + df1_overall["negatives"]
df1_overall.drop(['positives','negatives'],axis=1)
vectorizer=TfidfVectorizer(max_features=1000)
vectors=vectorizer.fit_transform(df1_overall.review)
wordreview_df=pd.DataFrame(vectors.toarray(),columns=vectorizer.get_feature_names())
wordreview_df.head()
XX=wordreview_df.head(n=1000)
yy=df1_overall.overall.head(n=1000)
yy.head
logreg1 = LogisticRegression(C=1e9, solver='lbfgs', max_iter=1000)
logreg1.fit(XX, yy)
unknown = pd.DataFrame({'content': [
    "office space is big",
    "excellent manager but lack of job security",
    "low salary however job security is high",
    "good career opportunity low salary",
    "office politics",
    "inefficient leadership",
    "canteen food is good but service is bad",
    "canteen food is good",
    "food tastes bad"
]})
unknown_vectors = vectorizer.transform(unknown.content)
unknown_words_df = pd.DataFrame(unknown_vectors.toarray(), columns=vectorizer.get_feature_names())
unknown_words_df.head()
unknown['pred_logreg'] = logreg1.predict(unknown_words_df)
unknown
```
# LDA
```
df_to_list = df_to_list0  # df_to_list0 == positive comments, df_to_list1 == negative comments
from time import time
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.datasets import fetch_20newsgroups
n_samples = 2000
n_features = 1000
n_components = 10
n_top_words = 10
def plot_top_words(model, feature_names, n_top_words, title):
fig, axes = plt.subplots(2, 5, figsize=(30, 15), sharex=True)
axes = axes.flatten()
for topic_idx, topic in enumerate(model.components_):
top_features_ind = topic.argsort()[:-n_top_words - 1:-1]
top_features = [feature_names[i] for i in top_features_ind]
print(top_features)
weights = topic[top_features_ind]
ax = axes[topic_idx]
ax.barh(top_features, weights, height=0.7)
ax.set_title(f'Topic {topic_idx +1}',
fontdict={'fontsize': 30})
ax.invert_yaxis()
ax.tick_params(axis='both', which='major', labelsize=20)
for i in 'top right left'.split():
ax.spines[i].set_visible(False)
fig.suptitle(title, fontsize=40)
plt.subplots_adjust(top=0.90, bottom=0.05, wspace=0.90, hspace=0.3)
plt.show()
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
tfidf = tfidf_vectorizer.fit_transform(df_to_list)
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_features,
stop_words='english')
individual=[df_to_list[2],df_to_list[3],df_to_list[11]] #,df_to_list[3],df_to_list[4],df_to_list[5],df_to_list[6],df_to_list[7],df_to_list[8],df_to_list[9],df_to_list[10],df_to_list[11]
tf = tf_vectorizer.fit_transform(individual) # df_to_list
for i in range(len(individual)):
    print(individual[i])
print("=============")
type(individual)
tf
print(tf)
nmf = NMF(n_components=n_components, random_state=1,
alpha=.1, l1_ratio=.5).fit(tfidf)
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
plot_top_words(nmf, tfidf_feature_names, n_top_words,
'Topics in NMF model (Frobenius norm)')
nmf
nmf = NMF(n_components=n_components, random_state=1,
beta_loss='kullback-leibler', solver='mu', max_iter=1000, alpha=.1,
l1_ratio=.5).fit(tfidf)
enumerate(nmf.components_)
n_top_words
tfidf_feature_names = tfidf_vectorizer.get_feature_names()
plot_top_words(nmf, tfidf_feature_names, n_top_words,
'Topics in NMF model (generalized Kullback-Leibler divergence)')
#tfidf_feature_names
lda = LatentDirichletAllocation(n_components=n_components, max_iter=5,
learning_method='online',
learning_offset=50.,
random_state=0)
lda.fit(tf)
tf_feature_names = tf_vectorizer.get_feature_names()
plot_top_words(lda, tf_feature_names, n_top_words, 'Topics in LDA model')
```