repo_name stringlengths 6 77 | path stringlengths 8 215 | license stringclasses 15
values | content stringlengths 335 154k |
|---|---|---|---|
1200wd/bitcoinlib | docs/bitcoinlib-10-minutes.ipynb | gpl-3.0 | from bitcoinlib.keys import Key
k = Key()
k.info()
"""
Explanation: Learn BitcoinLib in 10 minutes
A short walk through all the important BitcionLib classes: create keys, transactions and wallets using this library.
You can run and experiment with the code examples if you have installed Jupyter
Keys
With the Key class you can create and manage private and public cryptographic keys.
Run the code below to create a random key, then show the class data with the info() method which is handy for
developing and debugging your code.
End of explanation
"""
from bitcoinlib.keys import Key
wif = 'KyAoQkmmoAgdC8YXamgpqFb2R8j6g5jiBnGdJo62aDJCxstboTqS'
k = Key(wif)
print(k.address())
"""
Explanation: As you can see a private key, public key, address and WIF (Wallet Import Format) are all created as part of the
Key representation.
You can also see a 'point x' and 'point y' as a private key is nothing more then a point on the
Secp256k1 curve.
If you have already a private key WIF you can use this to create a Key class.
End of explanation
"""
from bitcoinlib.keys import Key
k = Key()
print(k.private_hex)
encrypted_wif = k.encrypt('secret')
print(encrypted_wif)
k_import = Key(encrypted_wif, password='secret')
print(k_import.private_hex)
"""
Explanation: You can use the bip32_encrypt method to create an encrypted key. In the example below this key is used to
recreate the original unencrypted key.
End of explanation
"""
from bitcoinlib.keys import Key
k = Key(network='litecoin')
print(k.address())
print(k.address(encoding='bech32'))
"""
Explanation: And of course the Key class can also represent other networks and encodings, such as Litecoin or the
bech32 encoding used in segwit.
End of explanation
"""
from bitcoinlib.keys import HDKey
k = HDKey(witness_type='segwit')
k.info()
"""
Explanation: HDKey
The HDKey class, or hierarchical deterministic key class, is a child of the Key class and mainly adds a chain code.
This chain code allows to create key structures or related which can be derived from a single master key.
The creation of a random HDKey is just as simple as a normal Key but it gives a range of extra possibilities.
When you run the code below and check the info() output near 'Extended Key' you can find the extra's the HDKey provides.
First the chain code, which is also random generated just as the key. Next the depth and child index allowing to
create levels of keys. And final the WIF representing all HDKey info in an exchangeable string.
End of explanation
"""
from bitcoinlib.keys import HDKey
extended_wif = 'zprvAWgYBBk7JR8Gk4CexKBHgzfYiYXgV71Ybi2tJFU2yDn8RgiqsviqT4eYPE9LofWMdrSkYmWciMtiD7jqA5dccDLnJj' \
'DSMghhGRv41vHo9yx'
k = HDKey(extended_wif)
ck = k.child_private(10)
print("ck.private_hex: %s" % ck.private_hex)
print("ck.depth: %s" % ck.depth)
ck_pub = ck.child_public(0)
print("ck_pub.private_hex: %s" % ck_pub.private_hex)
print("ck_pub.public_hex: %s" % ck_pub.public_hex)
print("ck_pub.depth: %s" % ck_pub.depth)
"""
Explanation: So let's try to import a HDKey wif and create a private child key and the a public child key.
End of explanation
"""
from bitcoinlib.keys import HDKey
prv_masterkey = HDKey()
pub_masterkey = prv_masterkey.public_master()
print("Public masterkey to exchange (method 1): %s" % pub_masterkey.wif())
pub_masterkey = prv_masterkey.subkey_for_path("m/44'/0'/0'")
print("Public masterkey to exchange (method 2): %s" % pub_masterkey.wif())
"""
Explanation: As you can see the depth increases with 1 with every derived child key, and when you derive a public key the private
key is not available anymore.
While this is all very interesting I think, please remember you not have to care of any of this if you just using the
Wallet class. The Wallet class handles key derivation in the background for you.
The HDKey class has various methods to create key paths used in most modern wallet software today.
End of explanation
"""
from bitcoinlib.mnemonic import Mnemonic
from bitcoinlib.keys import HDKey
phrase = Mnemonic().generate()
print(phrase)
k = HDKey.from_passphrase(phrase)
print(k.private_hex)
"""
Explanation: Other key related classes: The Mnemonic and Address class
A Mnemonic sentence is an easy way for us humans to read and exchange private keys. With bitcoinlib you can
create Mnemonic phrases and use them to create keys or wallets.
End of explanation
"""
from bitcoinlib.keys import Address
address = 'bc1q96zj7hv097x9u9f86azlk49ffxak7zltyfghld'
a = Address.import_address(address)
print(a.as_json())
"""
Explanation: Use the address class to create an address or to analyse address information.
End of explanation
"""
from bitcoinlib.transactions import Transaction
from bitcoinlib.keys import HDKey
t = Transaction()
prev_hash = '9c81f44c29ff0226f835cd0a8a2f2a7eca6db52a711f8211b566fd15d3e0e8d4'
t.add_input(prev_hash, output_n=0)
k = HDKey()
t.add_output(100000, k.address())
t.info()
"""
Explanation: Transactions
Transactions represent the transaction of value and are stored on the Blockchain. A transaction has 1 or more inputs
to unlock previous outputs and 1 or more outputs containing locking scripts. These locking script can be unlocked with a
private key and then spent again in inputs of a new transaction.
Let's create a simple transaction.
End of explanation
"""
from bitcoinlib.transactions import Transaction
rawtx = "010000000001015f171218b2e273be55af4f1cf0a56c0499b48b098d16ebdc68c62db78c55765a0100000000ffffff000200e1f505" \
"0000000017a9140db01b8486f63ef80f02fe78bada7680c46c11ef8730f10801000000001600141dfba959940495c3a92cbb80b0b5" \
"0246cfe0f11702473044022004eb67e91dc04179a367d99c0d65617cda385c313e79d717f8ade695a5731b8c02207a273d8592d815" \
"9d6f587a0db993ab4b4a030fbfa390229b490d789a77b8c8540121029422dbe194e42bac01e94925cf8b619f0fd4aa5d0181633659" \
"e7deed79eb5b7a00000000"
t = Transaction.parse_hex(rawtx)
t.info()
"""
Explanation: There are many way to create transactions in most cases you will get them for the blockchain or create them
with the Wallet class.
Below is an example of how to parse a raw transaction.
End of explanation
"""
from bitcoinlib.services.services import Service
srv = Service()
print("Estimated transaction fee: %s" % srv.estimatefee())
print("Latest block: %s" % srv.blockcount())
"""
Explanation: Service providers
The library can get transaction and blockchain information from various online or locally installed
providers. So you can get check if a transaction output (UTXO) is spent already, update a wallet's
address balance or push a new transaction to the network.
End of explanation
"""
from bitcoinlib.services.services import Service
srv = Service()
unknown_txid = '9c81f44c29ff0226f835cd0a8a2f2a7eca6db52a711f8211b566fd15d3e0e8d4'
srv.gettransaction(unknown_txid)
print(srv.results)
print(srv.errors)
"""
Explanation: After a request with the Service class the results and errors are stored in the class. If you for instance request an
unknown transaction, the request returns False, the Service().results is empty and the errors can be found in
Service().errors.
End of explanation
"""
from bitcoinlib.wallets import Wallet
w = Wallet.create('jupyter-test-wallet')
w.info()
"""
Explanation: Wallets
The Wallet class combines all classes we've seen so far. The wallet class is used to store and manage keys,
check for new utxo's, create, sign and send transactions.
Creating a bitcoin wallet is very easy, just use the create() method and pass a wallet name.
End of explanation
"""
from bitcoinlib.wallets import wallet_create_or_open
w = wallet_create_or_open('bitcoinlib-testnet1', network='testnet', witness_type='segwit')
wk = w.new_key()
print("Deposit to address %s to get started" % wk.address)
"""
Explanation: The create() methods raises an error if the wallet already exists, so if you are not sure use the wallet_create_or_open
method.
And now let's conclude with a Bitcoin segwit testnet wallet and a send a real transaction.
End of explanation
"""
from bitcoinlib.wallets import wallet_create_or_open
w = wallet_create_or_open('bitcoinlib-testnet1', network='testnet', witness_type='segwit')
n_utxos = w.utxos_update()
if n_utxos:
print("Found new unspent outputs (UTXO's), we are ready to create a transaction")
w.info()
"""
Explanation: Now go to a Bitcoin testnet faucet to get some coins.
Check if the testnet coins are successfully send, so we can start creating a transaction.
End of explanation
"""
from bitcoinlib.wallets import wallet_create_or_open
w = wallet_create_or_open('bitcoinlib-testnet1', network='testnet', witness_type='segwit')
t = w.send_to('tb1qprqnf4dqwuphxs9xqpzkjdgled6eeptn389nec', 4000, fee=1000)
t.info()
"""
Explanation: The utxos_update() method only checks for new unspent outputs for existing keys. Two other methods to receive
transaction information for your wallet are the transactions_update() and scan() method. Transactions_update
retrieves all transactions from the service providers for your wallet's key: incoming and outgoing.
The scan() method also generates new keys/addresses and updates all transactions.
Now we have a wallet with coins and can create a transaction and send some coins. The send_to() method
creates a transactions and then broadcasts the signed transactions to the network.
End of explanation
"""
|
milancurcic/lunch-bytes | Spring_2019/LB29/GettingData_XR.ipynb | cc0-1.0 | import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import netCDF4 as nc
from mpl_toolkits.basemap import Basemap
"""
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://cdn.miami.edu/_assets-common/images/system/um-logo-gray-bg.png" alt="Miami Logo" style="height: 98px;">
</div>
<h1>Lunch Byte 4/19/2019</h1>
By Kayla Besong
<br>
<br>
<br>Introduction to Xarray simple functions to upload and process data from NOAA for ProcessData_XR.ipynb
<br>use to compare to GettingData_XRDASK.ipynb
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
End of explanation
"""
%%time
heights = [] # empty array to append opened netCDF's to
temps = []
date_range = np.arange(1995,2001,1) # range of years interested in obtaining, remember python starts counting at 0 so for 10 years we actually need to say through 2005
for i in date_range:
url_h = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/hgt.%s.nc' % i # string subset --> %.s and % i will insert the i in date_range we are looping through
url_t = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.%s.nc' % i
print(url_h, url_t)
h1 = xr.open_dataset(url_h)
t1 = xr.open_dataset(url_t) # open netcdfs with xarray
heights.append(h1) # append opened files
temps.append(t1)
"""
Explanation: Let's Import Some Data through NOAA
End of explanation
"""
heights[0]
temps[0]
"""
Explanation: Take a peak to ensure everything was read successfully and understand the dataset that you have
End of explanation
"""
%%time
for i in range(len(heights)):
heights[i] = heights[i].sel(lat = slice(90,0), level = 500)
temps[i] = temps[i].sel(lat = slice(90,0), level = 925)
"""
Explanation: Select what you're looking for
End of explanation
"""
%%time
concat_h = xr.concat(heights, dim = 'time') # aligns all the lat, lon, lev, values of all the datasets based on dimesnion of time
%%time
concat_t = xr.concat(temps, dim = 'time')
"""
Explanation: Turn list of urls into one large, combined (concatenated) dataset based on time
End of explanation
"""
%%time
concat_h = concat_h.resample(time = '24H').mean(dim = 'time') # resample can take various time arguments such as season, year, minutes
concat_t = concat_t.resample(time = '24H').mean(dim = 'time')
"""
Explanation: Resample into daily averaged
End of explanation
"""
%%time
concat_h.to_netcdf('heights_9520.nc')
%%time
concat_t.to_netcdf('temps_9520.nc')
"""
Explanation: Write out data for processing
End of explanation
"""
|
yala/introdeeplearning | Lab2.ipynb | mit | import math
import pickle as p
import tensorflow as tf
import numpy as np
import utils
import json
"""
Explanation: Lab Part II: RNN Sentiment Classifier
In the previous lab, you built a tweet sentiment classifier with a simple feedforward neural network. Now we ask you to improve this model by representing it as a sequence of words, with a recurrent neural network.
First import some things:
End of explanation
"""
# set variables
tweet_size = 20
hidden_size = 100
vocab_size = 7597
batch_size = 64
# this just makes sure that all our following operations will be placed in the right graph.
tf.reset_default_graph()
# create a session variable that we can run later.
session = tf.Session()
"""
Explanation: Our model will be like this:
We feed words one by one into LSTM layers. After feeding in all the words, we take the final state of the LSTM and run it thorugh one fully connected layer to multiply it by a final set of weights. We specificy that this fully connected layer should have a single output, which, one sigmoid-ed, is the probability that the tweet is positive!
Step 1: Set up our Model Parameters
Similarly to the last lab, we'll be training using batches. Our hidden layer will have 100 units, and we have 7597 words in the vocabulary.
End of explanation
"""
# the placeholder for tweets has first dimension batch_size for each tweet in a batch,
# second dimension tweet_size for each word in the tweet, and third dimension vocab_size
# since each word itself is represented by a one-hot vector of size vocab_size.
# Note that we use 'None' instead of batch_size for the first dimsension. This allows us
# to deal with variable batch sizes
tweets = tf.placeholder(tf.float32, [None, tweet_size, vocab_size])
'''TODO: create a placeholder for the labels (our predictions).
This should be a 1D vector with size = None,
since we are predicting one value for each tweet in the batch,
but we want to be able to deal with variable batch sizes.''';
labels = #todo
"""
Explanation: Step 2: Create Placeholders
We need to create placeholders for variable data that we will feed in ourselves (aka our tweets). Placeholders allow us to incorporate this data into the graph even though we don't know what it is yet.
End of explanation
"""
'''TODO: create an LSTM Cell using BasicLSTMCell. Note that this creates a *layer* of LSTM
cells, not just a single one.''';
lstm_cell = #todo
'''TODO: create three LSTM layers by wrapping three instances of
lstm_cell from above in tf.contrib.rnn_cell.MultiRNNCell. Note that
you can create multiple cells by doing [lstm_cell] * 2. Also note
that you should use state_is_tuple=True as an argument. This will allow
us to access the part of the cell state that we need later on.''';
multi_lstm_cells = #todo
'''TODO: define the operation to create the RNN graph across time.
tf.nn.dynamic_rnn dynamically constructs the graph when it is executed,
and returns the final cell state.''';
_, final_state = #todo
"""
Explanation: Step 3: Build the LSTM Layers
We want to feed the input sequence, word by word, into an LSTM layer, or multiple LSTM layers (we could also call this an LSTM encoder). At each "timestep", we feed in the next word, and the LSTM updates its cell state. The final LSTM cell state can then be fed through a final classification layer(s) to get our sentiment prediction.
Now let's make our LSTM layer. The steps for this are:
1. Create a LSTM Cell using tf.contrib.rnn.LSTMCell
Wrap a couple of these cells in tf.nn.rnn_cell.MultiRNNCell to create a multiple LSTM layers.
Define the operation to run these layers with dynamic_rnn.
End of explanation
"""
## We define this function that creates a weight matrix + bias parameter
## and uses them to do a matrix multiplication.
def linear(input_, output_size, name, init_bias=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
W = tf.get_variable("weights", [shape[-1], output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(shape[-1])))
if init_bias is None:
return tf.matmul(input_, W)
with tf.variable_scope(name):
b = tf.get_variable("bias", [output_size], initializer=tf.constant_initializer(init_bias))
return tf.matmul(input_, W) + b
'''TODO: pass the final state into this linear function to multiply it
by the weights and add bias to get our output.
{Quick note that we need to feed in final_state[-1][-1] into linear since
final_state is actually a tuple consisting of the cell state
(used internally for the cell to keep track of things)
as well as the hidden state (the output of the cell), and one of these
tuples for each layer. We want the hidden state for the last layer, so we use
final_state[-1][-1]}''';
sentiment = #todo
"""
Explanation: Step 4: Classification Layer
Now we have the final state of the LSTM layers after feeding in the tweet word by word. We can take this final state and feed it into a simple classfication layer that takes the cell state, multiplies it by some weight matrix (with bias) and outputs a single value corresponding to whether it thinks the tweet is overall positive or not.
End of explanation
"""
sentiment = tf.squeeze(sentiment, [1])
'''TODO: define our loss function.
We will use tf.nn.sigmoid_cross_entropy_with_logits, which will compare our
sigmoid-ed prediction (sentiment from above) to the ground truth (labels).''';
loss = #todo
# our loss with sigmoid_cross_entropy_with_logits gives us a loss for each
# example in the batch. We take the mean of all these losses.
loss = tf.reduce_mean(loss)
# to get actual results like 'positive' or 'negative' ,
# we round the prediction probability to 0 or 1.
prediction = tf.to_float(tf.greater_equal(sentiment, 0.5))
# calculate the error based on which predictions were actually correct.
pred_err = tf.to_float(tf.not_equal(prediction, labels))
pred_err = tf.reduce_sum(pred_err)
"""
Explanation: Step 5: Define Loss
Now we define a loss function that we'll use to determine the difference between what we predicted and what's actually correct. We'll want to use cross entropy, since we can take into account what probability the model gave to the a tweet being positive.
The output we just got from the linear classification layer is called a 'logit' -- the raw value before transforming it into a probability between 0 and 1. We can feed these logits to tf.nn.sigmoid_cross_entropy_with_logits, which will take the sigmoid of these logits (making them between 0 and 1) and then calculate the cross-entropy with the ground truth labels.
End of explanation
"""
'''Define the operation that specifies the AdamOptimizer and tells
it to minimize the loss.''';
optimizer = #todo
"""
Explanation: Step 6: Train
Now we define the operation that actually changes the weights by minimizing the loss.
tf.train.AdamOptimizer is just a gradient descent algorithm that uses a variable learning rate to converge faster and more effectively.
We want to specify this optimizer and then call the minimize function, the optimizer knows it wants to minimize the loss we defined above.
End of explanation
"""
# initialize any variables
tf.global_variables_initializer().run(session=session)
# load our data and separate it into tweets and labels
train_data = json.load(open('data/trainTweets_preprocessed.json', 'r'))
train_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),train_data))
train_tweets = np.array([t[0] for t in train_data])
train_labels = np.array([int(t[1]) for t in train_data])
test_data = json.load(open('data/testTweets_preprocessed.json', 'r'))
test_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),test_data))
# we are just taking the first 1000 things from the test set for faster evaluation
test_data = test_data[0:1000]
test_tweets = np.array([t[0] for t in test_data])
one_hot_test_tweets = utils.one_hot(test_tweets, vocab_size)
test_labels = np.array([int(t[1]) for t in test_data])
# we'll train with batches of size 128. This means that we run
# our model on 128 examples and then do gradient descent based on the loss
# over those 128 examples.
num_steps = 1000
for step in range(num_steps):
# get data for a batch
offset = (step * batch_size) % (len(train_data) - batch_size)
batch_tweets = utils.one_hot(train_tweets[offset : (offset + batch_size)], vocab_size)
batch_labels = train_labels[offset : (offset + batch_size)]
# put this data into a dictionary that we feed in when we run
# the graph. this data fills in the placeholders we made in the graph.
data = {tweets: batch_tweets, labels: batch_labels}
# run the 'optimizer', 'loss', and 'pred_err' operations in the graph
_, loss_value_train, error_value_train = session.run(
[optimizer, loss, pred_err], feed_dict=data)
# print stuff every 50 steps to see how we are doing
if (step % 50 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train error: %.3f%%" % error_value_train)
# get test evaluation
test_loss = []
test_error = []
for batch_num in range(int(len(test_data)/batch_size)):
test_offset = (batch_num * batch_size) % (len(test_data) - batch_size)
test_batch_tweets = one_hot_test_tweets[test_offset : (test_offset + batch_size)]
test_batch_labels = test_labels[test_offset : (test_offset + batch_size)]
data_testing = {tweets: test_batch_tweets, labels: test_batch_labels}
loss_value_test, error_value_test = session.run([loss, pred_err], feed_dict=data_testing)
test_loss.append(loss_value_test)
test_error.append(error_value_test)
print("Test loss: %.3f" % np.mean(test_loss))
print("Test error: %.3f%%" % np.mean(test_error))
"""
Explanation: Step 7: Run Session!
Now that we've made all the variable and operations in our graph, we can load the data, feed it in, and run the model!
End of explanation
"""
import math
import pickle as p
import tensorflow as tf
import numpy as np
import utils
# set variables
tweet_size = 20
hidden_size = 100
vocab_size = 7597
batch_size = 64
# this just makes sure that all our following operations will be placed in the right graph.
tf.reset_default_graph()
# create a session variable that we can run later.
session = tf.Session()
# make placeholders for data we'll feed in
tweets = tf.placeholder(tf.float32, [None, tweet_size, vocab_size])
labels = tf.placeholder(tf.float32, [None])
# make the lstm cells, and wrap them in MultiRNNCell for multiple layers
lstm_cell = tf.contrib.rnn.LSTMCell(hidden_size)
multi_lstm_cells = tf.contrib.rnn.MultiRNNCell(cells=[lstm_cell] * 2, state_is_tuple=True)
# define the op that runs the LSTM, across time, on the data
_, final_state = tf.nn.dynamic_rnn(multi_lstm_cells, tweets, dtype=tf.float32)
# a useful function that takes an input and what size we want the output
# to be, and multiples the input by a weight matrix plus bias (also creating
# these variables)
def linear(input_, output_size, name, init_bias=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
W = tf.get_variable("weight_matrix", [shape[-1], output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(shape[-1])))
if init_bias is None:
return tf.matmul(input_, W)
with tf.variable_scope(name):
b = tf.get_variable("bias", [output_size], initializer=tf.constant_initializer(init_bias))
return tf.matmul(input_, W) + b
# define that our final sentiment logit is a linear function of the final state
# of the LSTM
sentiment = linear(final_state[-1][-1], 1, name="output")
sentiment = tf.squeeze(sentiment, [1])
# define cross entropy loss function
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=sentiment, labels=labels)
loss = tf.reduce_mean(loss)
# round our actual probabilities to compute error
prob = tf.nn.sigmoid(sentiment)
prediction = tf.to_float(tf.greater_equal(prob, 0.5))
pred_err = tf.to_float(tf.not_equal(prediction, labels))
pred_err = tf.reduce_sum(pred_err)
# define our optimizer to minimize the loss
optimizer = tf.train.AdamOptimizer().minimize(loss)
# initialize any variables
tf.global_variables_initializer().run(session=session)
# load our data and separate it into tweets and labels
train_data = json.load(open('data/trainTweets_preprocessed.json', 'r'))
train_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),train_data))
train_tweets = np.array([t[0] for t in train_data])
train_labels = np.array([int(t[1]) for t in train_data])
test_data = json.load(open('data/testTweets_preprocessed.json', 'r'))
test_data = map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),test_data)
# we are just taking the first 1000 things from the test set for faster evaluation
test_data = test_data[0:1000]
test_tweets = np.array([t[0] for t in test_data])
one_hot_test_tweets = utils.one_hot(test_tweets, vocab_size)
test_labels = np.array([int(t[1]) for t in test_data])
# we'll train with batches of size 128. This means that we run
# our model on 128 examples and then do gradient descent based on the loss
# over those 128 examples.
num_steps = 1000
for step in range(num_steps):
# get data for a batch
offset = (step * batch_size) % (len(train_data) - batch_size)
batch_tweets = utils.one_hot(train_tweets[offset : (offset + batch_size)], vocab_size)
batch_labels = train_labels[offset : (offset + batch_size)]
# put this data into a dictionary that we feed in when we run
# the graph. this data fills in the placeholders we made in the graph.
data = {tweets: batch_tweets, labels: batch_labels}
# run the 'optimizer', 'loss', and 'pred_err' operations in the graph
_, loss_value_train, error_value_train = session.run(
[optimizer, loss, pred_err], feed_dict=data)
# print stuff every 50 steps to see how we are doing
if (step % 50 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train error: %.3f%%" % error_value_train)
# get test evaluation
test_loss = []
test_error = []
for batch_num in range(int(len(test_data)/batch_size)):
test_offset = (batch_num * batch_size) % (len(test_data) - batch_size)
test_batch_tweets = one_hot_test_tweets[test_offset : (test_offset + batch_size)]
test_batch_labels = test_labels[test_offset : (test_offset + batch_size)]
data_testing = {tweets: test_batch_tweets, labels: test_batch_labels}
loss_value_test, error_value_test = session.run([loss, pred_err], feed_dict=data_testing)
test_loss.append(loss_value_test)
test_error.append(error_value_test)
print("Test loss: %.3f" % np.mean(test_loss))
print("Test error: %.3f%%" % np.mean(test_error))
"""
Explanation: Solutions
End of explanation
"""
|
jinntrance/MOOC | coursera/ml-foundations/week6/Deep Features for Image Classification.ipynb | cc0-1.0 | import graphlab
"""
Explanation: Using deep features to build an image classifier
Fire up GraphLab Create
End of explanation
"""
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
"""
Explanation: Load a common image analysis dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
image_train['image'].show()
image_train['label'].sketch_summary()
"""
Explanation: Exploring the image data
End of explanation
"""
raw_pixel_model = graphlab.logistic_classifier.create(image_train,target='label',
features=['image_array'])
"""
Explanation: Train a classifier on the raw image pixels
We first start by training a classifier on just the raw pixels of the image.
End of explanation
"""
image_test[0:3]['image'].show()
image_test[0:3]['label']
raw_pixel_model.predict(image_test[0:3])
"""
Explanation: Make a prediction with the simple model based on raw pixels
End of explanation
"""
raw_pixel_model.evaluate(image_test)
"""
Explanation: The model makes wrong predictions for all three images.
Evaluating raw pixel model on test data
End of explanation
"""
len(image_train)
"""
Explanation: The accuracy of this model is poor, getting only about 46% accuracy.
Can we improve the model using deep features
We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning: using deep features trained on the full ImageNet dataset, we will train a simple model on this small dataset.
End of explanation
"""
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
"""
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
"""
image_train.head()
"""
Explanation: As we can see, the column deep_features already contains the pre-computed deep features for this data.
End of explanation
"""
deep_features_model = graphlab.logistic_classifier.create(image_train,
features=['deep_features'],
target='label')
cat_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='cat'],
features=['deep_features'],
target='label')
dog_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='dog'],
features=['deep_features'],
target='label')
automobile_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='automobile'],
features=['deep_features'],
target='label')
bird_model = graphlab.nearest_neighbors.create(image_train[image_train['label']=='bird'],
features=['deep_features'],
target='label')
cat_model.query(image_test[0:1])['distance'].mean()
image_train[image_train['label']=='cat'][181:182]['image'].show()
dog_model.query(image_test[0:1])
image_train[image_train['label']=='dog'][159:160]['image'].show()
dog_model.query(image_test[0:1])['distance'].mean()
"""
Explanation: Given the deep features, let's train a classifier
End of explanation
"""
image_test[0:3]['image'].show()
image_test_dog=image_test[image_test['label']=='dog']
dog_cat_neighbors = cat_model.query(image_test_dog, k=1)
dog_dog_neighbors = dog_model.query(image_test_dog, k=1)
dog_automobile_neighbors = automobile_model.query(image_test_dog, k=1)
dog_bird_neighbors = bird_model.query(image_test_dog, k=1)
deep_features_model.predict(image_test[0:3])
dog_distances = graphlab.SFrame({
'dog-cat': dog_cat_neighbors['distance'],
'dog-dog': dog_dog_neighbors['distance'],
'dog-automobile': dog_automobile_neighbors['distance'],
'dog-bird': dog_bird_neighbors['distance']
})
dog_distances
def is_dog_correct(row):
d = row['dog-dog']
if d<=row['dog-cat'] and d<=row['dog-automobile'] and d<=row['dog-bird']:
return 1
else:
return 0
dog_distances.num_rows
"""
Explanation: Apply the deep features model to first few images of test set
End of explanation
"""
deep_features_model.evaluate(image_test)
"""
Explanation: The classifier with deep features gets all of these images right!
Compute test_data accuracy of deep_features_model
As we can see, deep features provide us with significantly better accuracy (about 78%)
End of explanation
"""
|
johnpfay/environ859 | 06_WebGIS/Notebooks/12-Exploring-the-BISON-API.ipynb | gpl-3.0 | #First, import the wonderful requests module
import requests
#Now, we'll deconstruct the example URL into the service URL and parameters, saving the paramters as a dictionary
url = 'http://bison.usgs.gov/api/search.json'
params = {'species':'Bison bison',
'type':'scientific_name',
'start':'0',
'count':'10'
}
response = requests.get(url,params)
print(response.content)
"""
Explanation: Using the BISON API
The USGS provides an API for accessing species observation data. https://bison.usgs.gov/doc/api.jsp
This API is much better documented than the NWIS API and we'll use it to dig a bit deeper into how the requests package can faciliate data access via APIs.
We'll begin by replicating the example API call they show on their web page:<br>
https://bison.usgs.gov/api/search.json?species=Bison bison&type=scientific_name&start=0&count=1
End of explanation
"""
#Import the module
import json
#Convert the response
data = json.loads(response.content)
type(data)
#Ok, if it's a dictionary, what are it's keys?
data.keys()
#What are the values of the 'data' key
data['data']
#Oh, it's a list of occurrences! Let's examine the first one
data['data'][0]
#We see it's a dictionary too
#We can get the latitude of the record from it's `decimalLatitude` key
data['data'][0]['decimalLatitude']
"""
Explanation: Yikes, that's much less readable than the NWIS output!
Well, that's because the response from the BISON server is in JSON format. We can use Python's build in json module to convert this raw JSON text to a handy dictionary (of dictionaries) that we can [somewhat] easily manipulate...
End of explanation
"""
#Loop thorough each observation and print the lat and long values
for observation in data['data']:
print observation['decimalLatitude'],observation['decimalLongitude']
"""
Explanation: So we see the Bison observations are stored as list of dictionaries which are accessed within the data key in the results dictionary genrated from the JSON response to our API request. (Phew!)
With a bit more code we can loop through all the data records and print out the lat and long coordinates...
End of explanation
"""
import pandas as pd
df = pd.DataFrame(data['data'])
df.head()
"""
Explanation: Or we can witness Pandas cleverness again! Here, we convert the collection of observations into a data frame
End of explanation
"""
df[df.provider == 'iNaturalist.org']
"""
Explanation: And Pandas allows us to do some nifty analyses, including subsetting records where the provider is 'iNaturalist.org'
End of explanation
"""
|
samejack/blog-content | keras-ml/mnist-neural-network.ipynb | apache-2.0 | from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
"""
Explanation: Python Keras MNIST 手寫辨識
這是一個神經網路的範例,利用了 Python Keras 來訓練一個手寫辨識分類 Model。
我們要的問題是將手寫數字的灰度圖像(28x28 Pixel)分類為 10 類(0至9)。使用的數據集是 MNIST 典數據集,它是由國家標準技術研究所(MNIST 的 NIST)在1980年代組裝而成的,包含 60,000 張訓練圖像和 10,000 張測試圖像。您可以將「解決」MNIST 視為深度學習的 "Hello World"。
由於 Keras 已經整理了一些經典的 Play Book Data,因此我們可以很快透過以下方式取得 MNIST 資料集
End of explanation
"""
train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels
"""
Explanation: images 是用來訓練與測試的資料,label 則為每一筆影像資料對應的正確答案,每一張手寫圖片都是 28 x 28 的灰階 Bit Map
End of explanation
"""
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
"""
Explanation: 建立準備訓練的神經網路
End of explanation
"""
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
"""
Explanation: 上面這裡是神經網路的核心組成方式,我們在全連接層建立了兩層,由一個有 512 個神經元的網路架構連接到 10 個神經元的輸出層。輸出層採用 softmax 表示數字 0~9 的機率分配,這 10 個數字的總和將會是 1。以下將我們建立的網路進行 compile,這裡詳細的參數以後會介紹。
End of explanation
"""
fix_train_images = train_images.reshape((60000, 28 * 28)).astype('float32') / 255
fix_test_images = test_images.reshape((10000, 28 * 28)).astype('float32') / 255
"""
Explanation: 以下將資料正規劃成為 0~1 的數值,變成 60000, 28x28 Shape 好送進上面定義的網路輸入層。
End of explanation
"""
from keras.utils import to_categorical
fix_train_labels = to_categorical(train_labels)
fix_test_labels = to_categorical(test_labels)
"""
Explanation: 由於我們使用的 categorical_crossentropy 損失函數,因此將標記資料進行格式轉換。如下:
End of explanation
"""
result = network.fit(
fix_train_images,
fix_train_labels,
epochs=20,
batch_size=128,
validation_data=(fix_test_images, fix_test_labels))
"""
Explanation: 進行訓練模型,訓練中的正確率應該會在 0.989 左右
End of explanation
"""
test_loss, test_acc = network.evaluate(fix_test_images, fix_test_labels)
print('test_loss:', test_loss)
print('test_acc:', test_acc)
"""
Explanation: 將訓練後的模型輸入測試資料進行評比,一般說這樣的正確率應該會在 0.977% 左右
End of explanation
"""
history_dict = result.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
import matplotlib.pyplot as plt
plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
"""
Explanation: 為什麽訓練時的正確率會高於驗證測試呢?在這樣數據中,由於模型訓練時對訓練資料造成些微的過度擬合 (Over Fitting) 。一般來說這樣的情況是正常的,未來我們可以透過參數的調整或其他方法提高正確性。
透過圖表協助分析訓練過程
由於訓練 Model 時會進行好幾次的 Epoch,每一次 Epoch 都是對訓練資料集進行一輪完整的訓練,妥善觀察每一次 Epoch 的數據是很重要地。我們可以透過 matplotlib 函式庫繪製圖表,幫我們進行分析。
以下方式可以繪製訓練過程 Loss Function 對應的損失分數。Validation loss 不一定會跟隨 Training loss 一起降低,當 Model Over Fitting Train Data 時,就會發生 Validation loss 上升的情況。
End of explanation
"""
plt.clf()
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
"""
Explanation: 以下程式可以繪製訓練過程的正確率變化。訓練的過程中,當 Accuracy 後期並有沒太大的變化,表示 Model 很快就在假設空間裡進行不錯的收斂。
End of explanation
"""
|
ethen8181/machine-learning | deep_learning/rnn/1_pytorch_rnn.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
%watermark -a 'Ethen' -d -t -v -p torch,numpy,matplotlib
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Pytorch-Introduction" data-toc-modified-id="Pytorch-Introduction-1"><span class="toc-item-num">1 </span>Pytorch Introduction</a></span><ul class="toc-item"><li><span><a href="#Linear-Regression" data-toc-modified-id="Linear-Regression-1.1"><span class="toc-item-num">1.1 </span>Linear Regression</a></span></li><li><span><a href="#Linear-Regression-Version-2" data-toc-modified-id="Linear-Regression-Version-2-1.2"><span class="toc-item-num">1.2 </span>Linear Regression Version 2</a></span></li><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1.3"><span class="toc-item-num">1.3 </span>Logistic Regression</a></span></li><li><span><a href="#Recurrent-Neural-Network-(RNN)" data-toc-modified-id="Recurrent-Neural-Network-(RNN)-1.4"><span class="toc-item-num">1.4 </span>Recurrent Neural Network (RNN)</a></span><ul class="toc-item"><li><span><a href="#Vanilla-RNN" data-toc-modified-id="Vanilla-RNN-1.4.1"><span class="toc-item-num">1.4.1 </span>Vanilla RNN</a></span></li><li><span><a href="#LSTM" data-toc-modified-id="LSTM-1.4.2"><span class="toc-item-num">1.4.2 </span>LSTM</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
# make up some trainig data and specify the type to be float, i.e. np.float32
# We DO not recommend double, i.e. np.float64, especially on the GPU. GPUs have bad
# double precision performance since they are optimized for float32
X_train = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1], dtype = np.float32)
X_train = X_train.reshape(-1, 1)
y_train = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3], dtype = np.float32)
y_train = y_train.reshape(-1, 1)
# Convert numpy array to Pytorch Tensors
X = torch.FloatTensor(X_train)
y = torch.FloatTensor(y_train)
"""
Explanation: Pytorch Introduction
```bash
installation on a mac
for more information on installation refer to
the following link:
http://pytorch.org/
conda install pytorch torchvision -c pytorch
```
At its core, PyTorch provides two main features:
An n-dimensional Tensor, similar to numpy array but can run on GPUs. PyTorch provides many functions for operating on these Tensors, thus it can be used as a general purpose scientific computing tool.
Automatic differentiation for building and training neural networks.
Let's dive in by looking at some examples:
Linear Regression
End of explanation
"""
# with linear regression, we apply a linear transformation
# to the incoming data, i.e. y = Xw + b, here we only have a 1
# dimensional data, thus the feature size will be 1
model = nn.Linear(in_features=1, out_features=1)
# although we can write our own loss function, the nn module
# also contains definitions of popular loss functions; here
# we use the MSELoss, a.k.a the L2 loss, and size_average parameter
# simply divides it with the number of examples
criterion = nn.MSELoss(size_average=True)
# Then we use the optim module to define an Optimizer that will update the weights of
# the model for us. Here we will use SGD; but it contains many other
# optimization algorithms. The first argument to the SGD constructor tells the
# optimizer the parameters that it should update
learning_rate = 0.01
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
# start the optimization process
n_epochs = 100
for _ in range(n_epochs):
# torch accumulates the gradients, thus before running new things
# use the optimizer object to zero all of the gradients for the
# variables it will update (which are the learnable weights of the model),
# think in terms of refreshing the gradients before doing the another round of update
optimizer.zero_grad()
# forward pass: compute predicted y by passing X to the model
output = model(X)
# compute the loss function
loss = criterion(output, y)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# call the step function on an Optimizer makes an update to its parameters
optimizer.step()
# plot the data and the fitted line to confirm the result
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 14
# convert a torch FloatTensor back to a numpy ndarray
# here, we also call .detach to detach the result from the computation history,
# to prevent future computations on it from being tracked
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
# to get the parameters, i.e. weight and bias from the model,
# we can use the state_dict() attribute from the model that
# we've defined
model.state_dict()
# or we could get it from the model's parameter
# which by itself is a generator
list(model.parameters())
"""
Explanation: Here we start defining the linear regression model, recall that in linear regression, we are optimizing for the squared loss.
\begin{align}
L = \frac{1}{2}(y-(Xw + b))^2
\end{align}
End of explanation
"""
class LinearRegression(nn.Module):
def __init__(self, in_features, out_features):
super().__init__() # boilerplate call
self.in_features = in_features
self.out_features = out_features
self.linear = nn.Linear(in_features, out_features)
def forward(self, x):
out = self.linear(x)
return out
# same optimization process
n_epochs = 100
learning_rate = 0.01
criterion = nn.MSELoss(size_average=True)
model = LinearRegression(in_features=1, out_features=1)
# when we defined our LinearRegression class, we've assigned
# a neural network's component/layer to a class variable in the
# __init__ function, and now notice that we can directly call
# .parameters() on the class we've defined due to some Python magic
# from the Pytorch devs
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(n_epochs):
# forward + backward + optimize
optimizer.zero_grad()
output = model(X)
loss = criterion(output, y)
loss.backward()
optimizer.step()
# print the loss per 20 epoch this time
if (epoch + 1) % 20 == 0:
# starting from pytorch 0.4.0, we use .item to get a python number from a
# torch scalar, before loss.item() looks something like loss.data[0]
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, n_epochs, loss.item()))
"""
Explanation: Linear Regression Version 2
A better way of defining our model is to inherit the nn.Module class, to use it all we need to do is define our model's forward pass and the nn.Module will automatically define the backward method for us, where the gradients will be computed using autograd.
End of explanation
"""
checkpoint_path = 'model.pkl'
torch.save(model.state_dict(), checkpoint_path)
model.load_state_dict(torch.load(checkpoint_path))
y_pred = model(X).detach().numpy()
plt.plot(X_train, y_train, 'ro', label='Original data')
plt.plot(X_train, y_pred, label='Fitted line')
plt.legend()
plt.show()
"""
Explanation: After training our model, we can also save the model's parameter and load it back into the model in the future
End of explanation
"""
# define some toy dataset
train_data = [
('me gusta comer en la cafeteria'.split(), 'SPANISH'),
('Give it to me'.split(), 'ENGLISH'),
('No creo que sea una buena idea'.split(), 'SPANISH'),
('No it is not a good idea to get lost at sea'.split(), 'ENGLISH')
]
test_data = [
('Yo creo que si'.split(), 'SPANISH'),
('it is lost on me'.split(), 'ENGLISH')
]
"""
Explanation: Logistic Regression
Let's now look at a classification example, here we'll define a logistic regression that takes in a bag of words representation of some text and predicts over two labels "English" and "Spanish".
End of explanation
"""
idx_to_label = ['SPANISH', 'ENGLISH']
label_to_idx = {"SPANISH": 0, "ENGLISH": 1}
word_to_idx = {}
for sent, _ in train_data + test_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
print(word_to_idx)
VOCAB_SIZE = len(word_to_idx)
NUM_LABELS = len(label_to_idx)
"""
Explanation: The next code chunk create words to index mappings. To build our bag of words (BoW) representation, we need to assign each word in our vocabulary an unique index. Let's say our entire corpus only consists of two words "hello" and "world", with "hello" corresponding to index 0 and "world" to index 1. Then the BoW vector for the sentence "hello world hello world" will be [2, 2], i.e. the count for the word "hello" will be at position 0 of the array and so on.
End of explanation
"""
class BoWClassifier(nn.Module):
def __init__(self, vocab_size, num_labels):
super().__init__()
self.linear = nn.Linear(vocab_size, num_labels)
def forward(self, bow_vector):
"""
When we're performing a classification, after passing
through the linear layer or also known as the affine layer
we also need pass it through the softmax layer to convert a vector
of real numbers into probability distribution, here we use
log softmax for numerical stability reasons.
"""
return F.log_softmax(self.linear(bow_vector), dim = 1)
def make_bow_vector(sentence, word_to_idx):
vector = torch.zeros(len(word_to_idx))
for word in sentence:
vector[word_to_idx[word]] += 1
return vector.view(1, -1)
def make_target(label, label_to_idx):
return torch.LongTensor([label_to_idx[label]])
"""
Explanation: Next we define our model using the inherenting from nn.Module approach and also two helper functions to convert our data to torch Tensors so we can use to during training.
End of explanation
"""
model = BoWClassifier(VOCAB_SIZE, NUM_LABELS)
# note that instead of using NLLLoss (negative log likelihood),
# we could have used CrossEntropyLoss and remove the log_softmax
# function call in our forward method. The CrossEntropyLoss docstring
# explicitly states that this criterion combines `LogSoftMax` and
# `NLLLoss` in one single class.
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
n_epochs = 100
for epoch in range(n_epochs):
for instance, label in train_data:
bow_vector = make_bow_vector(instance, word_to_idx)
target = make_target(label, label_to_idx)
# standard step to perform the forward and backward step
model.zero_grad()
log_probs = model(bow_vector)
loss = criterion(log_probs, target)
loss.backward()
optimizer.step()
# we can also wrap the code block in with torch.no_grad(): to
# prevent history tracking, this is often used in model inferencing,
# or when evaluating the model as we won't be needing the gradient during
# this stage
with torch.no_grad():
# predict on the test data to check if the model actually learned anything
for instance, label in test_data:
bow_vec = make_bow_vector(instance, word_to_idx)
log_probs = model(bow_vec)
y_pred = np.argmax(log_probs[0].numpy())
label_pred = idx_to_label[y_pred]
print('true label: ', label, ' predicted label: ', label_pred)
"""
Explanation: We are now ready to train this!
End of explanation
"""
torch.manual_seed(777)
# suppose we have a
# one hot encoding for each char in 'hello'
# and the sequence length for the word 'hello' is 5
seq_len = 5
h = [1, 0, 0, 0]
e = [0, 1, 0, 0]
l = [0, 0, 1, 0]
o = [0, 0, 0, 1]
# here we specify a single RNN cell with the property of
# input_dim (4) -> output_dim (2)
# batch_first explained in the following
rnn_cell = nn.RNN(input_size=4, hidden_size=2, batch_first=True)
# our input shape should be of shape
# (batch, seq_len, input_size) when batch_first=True;
# the input size basically refers to the number of features
# (seq_len, batch_size, input_size) when batch_first=False (default)
# thus we reshape our input to the appropriate size, torch.view is
# equivalent to numpy.reshape
inputs = torch.Tensor([h, e, l, l, o])
inputs = inputs.view(1, 5, -1)
# our hidden is the weights that gets passed along the cells,
# here we initialize some random values for it:
# (batch, num_layers * num_directions, hidden_size) for batch_first=True
# disregard the second argument as of now
hidden = torch.zeros(1, 1, 2)
out, hidden = rnn_cell(inputs, hidden)
print('sequence input size', inputs.size())
print('out size', out.size())
print('sequence size', hidden.size())
# the first value returned by the rnn cell is all
# of the hidden state throughout the sequence, while
# the second value is the most recent hidden state;
# hence we can compare the last slice of the the first
# value with the second value to confirm that they are
# the same
print('\ncomparing rnn cell output:')
print(out[:, -1, :])
hidden[0]
"""
Explanation: Recurrent Neural Network (RNN)
The idea behind RNN is to make use of sequential information that exists in our dataset. In feedforward neural network, we assume that all inputs and outputs are independent of each other. But for some tasks, this might not be the best way to tackle the problem. For example, in Natural Language Processing (NLP) applications, if we wish to predict the next word in a sentence (one business application of this is Swiftkey), then we could imagine that knowing the word that comes before it can come in handy.
Vanilla RNN
The input $x$ will be a sequence of words, and each $x_t$ is a single word. And because of how matrix multiplication works, we can't simply use a word index like (36) as an input, instead we represent each word as a one-hot vector with a size of the total number of vocabulary. For example, the word with index 36 have the value 1 at position 36 and the rest of the value in the vector would all be 0's.
End of explanation
"""
# create an index to character mapping
idx2char = ['h', 'i', 'e', 'l', 'o']
# Teach hihell -> ihello
x_data = [[0, 1, 0, 2, 3, 3]] # hihell
x_one_hot = [[[1, 0, 0, 0, 0], # h 0
[0, 1, 0, 0, 0], # i 1
[1, 0, 0, 0, 0], # h 0
[0, 0, 1, 0, 0], # e 2
[0, 0, 0, 1, 0], # l 3
[0, 0, 0, 1, 0]]] # l 3
x_one_hot = np.array(x_one_hot)
y_data = np.array([1, 0, 2, 3, 3, 4]) # ihello
# As we have one batch of samples, we will change them to variables only once
inputs = torch.Tensor(x_one_hot)
labels = torch.LongTensor(y_data)
# hyperparameters
seq_len = 6 # |hihell| == 6, equivalent to time step
input_size = 5 # one-hot size
batch_size = 1 # one sentence per batch
num_layers = 1 # one-layer rnn
num_classes = 5 # predicting 5 distinct character
hidden_size = 4 # output from the RNN
class RNN(nn.Module):
"""
The RNN model will be a RNN followed by a linear layer,
i.e. a fully-connected layer
"""
def __init__(self, seq_len, num_classes, input_size, hidden_size, num_layers):
super().__init__()
self.seq_len = seq_len
self.num_layers = num_layers
self.input_size = input_size
self.num_classes = num_classes
self.hidden_size = hidden_size
self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
self.linear = nn.Linear(hidden_size, num_classes)
def forward(self, x):
# assuming batch_first = True for RNN cells
batch_size = x.size(0)
hidden = self._init_hidden(batch_size)
x = x.view(batch_size, self.seq_len, self.input_size)
# apart from the output, rnn also gives us the hidden
# cell, this gives us the opportunity to pass it to
# the next cell if needed; we won't be needing it here
# because the nn.RNN already computed all the time steps
# for us. rnn_out will of size [batch_size, seq_len, hidden_size]
rnn_out, _ = self.rnn(x, hidden)
linear_out = self.linear(rnn_out.view(-1, hidden_size))
return linear_out
def _init_hidden(self, batch_size):
"""
Initialize hidden cell states, assuming
batch_first = True for RNN cells
"""
return torch.zeros(batch_size, self.num_layers, self.hidden_size)
# Set loss, optimizer and the RNN model
torch.manual_seed(777)
rnn = RNN(seq_len, num_classes, input_size, hidden_size, num_layers)
print('network architecture:\n', rnn)
# train the model
num_epochs = 15
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(rnn.parameters(), lr=0.1)
for epoch in range(1, num_epochs + 1):
optimizer.zero_grad()
outputs = rnn(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# check the current predicted string
# max gives the maximum value and its
# corresponding index, we will only
# be needing the index
_, idx = outputs.max(dim = 1)
idx = idx.detach().numpy()
result_str = [idx2char[c] for c in idx]
print('epoch: {}, loss: {:1.3f}'.format(epoch, loss.item()))
print('Predicted string: ', ''.join(result_str))
"""
Explanation: In the next section, we'll teach our RNN to produce "ihello" from "hihell".
End of explanation
"""
# These will usually be more like 32 or 64 dimensional.
# We will keep them small for this toy example
EMBEDDING_SIZE = 6
HIDDEN_SIZE = 6
training_data = [
("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
idx_to_tag = ['DET', 'NN', 'V']
tag_to_idx = {'DET': 0, 'NN': 1, 'V': 2}
word_to_idx = {}
for sent, tags in training_data:
for word in sent:
if word not in word_to_idx:
word_to_idx[word] = len(word_to_idx)
word_to_idx
def prepare_sequence(seq, to_idx):
"""Convert sentence/sequence to torch Tensors"""
idxs = [to_idx[w] for w in seq]
return torch.LongTensor(idxs)
seq = training_data[0][0]
inputs = prepare_sequence(seq, word_to_idx)
inputs
class LSTMTagger(nn.Module):
def __init__(self, embedding_size, hidden_size, vocab_size, tagset_size):
super().__init__()
self.embedding_size = embedding_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.tagset_size = tagset_size
self.embedding = nn.Embedding(vocab_size, embedding_size)
self.lstm = nn.LSTM(embedding_size, hidden_size)
self.hidden2tag = nn.Linear(hidden_size, tagset_size)
def forward(self, x):
embed = self.embedding(x)
hidden = self._init_hidden()
# the second dimension refers to the batch size, which we've hard-coded
# it as 1 throughout the example
lstm_out, lstm_hidden = self.lstm(embed.view(len(x), 1, -1), hidden)
output = self.hidden2tag(lstm_out.view(len(x), -1))
return output
def _init_hidden(self):
# the dimension semantics are [num_layers, batch_size, hidden_size]
return (torch.rand(1, 1, self.hidden_size),
torch.rand(1, 1, self.hidden_size))
model = LSTMTagger(EMBEDDING_SIZE, HIDDEN_SIZE, len(word_to_idx), len(tag_to_idx))
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
epochs = 300
for epoch in range(epochs):
for sentence, tags in training_data:
model.zero_grad()
sentence = prepare_sequence(sentence, word_to_idx)
target = prepare_sequence(tags, tag_to_idx)
output = model(sentence)
loss = criterion(output, target)
loss.backward()
optimizer.step()
inputs = prepare_sequence(training_data[0][0], word_to_idx)
tag_scores = model(inputs)
# validating that the sentence "the dog ate the apple".
# the correct tag should be DET NOUN VERB DET NOUN
print('expected target: ', training_data[0][1])
tag_scores = tag_scores.detach().numpy()
tag = [idx_to_tag[idx] for idx in np.argmax(tag_scores, axis = 1)]
print('generated target: ', tag)
"""
Explanation: LSTM
The example below uses an LSTM to generate part of speech tags. The usage of LSTM API is essentially the same as the RNN we were using in the last section. Expect in this example, we will prepare the word to index mapping ourselves and as for the modeling part, we will add an embedding layer before the LSTM layer, this is a common technique in NLP applications. So for each word, instead of using the one hot encoding way of representation the data (which can be inefficient and it treats all words as independent entities with no relationships amongst each other), word embeddings will compress them into a lower dimension that encode the semantics of the words, i.e. how similar each word is used within our given corpus.
End of explanation
"""
|
ML4DS/ML4all | U1.KMeans/.ipynb_checkpoints/Python_intro-checkpoint.ipynb | mit | str1 = '"Hola" is how we say "hello" in Spanish.'
str2 = "Strings can also be defined with quotes; try to be sistematic."
"""
Explanation: A brief tutorial of basic python
From the wikipedia: "Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C++ or Java. The language provides constructs intended to enable clear programs on both a small and large scale."
Through this tutorial, students will learn some basic characteristics of the Python programming language, that will be useful for working with corpuses of text data.
1. Introduction to Strings
Among the different native python types, we will focus on strings, since they will be the core type that we will recur to represent text. Essentially, a string is just a concatenation of characters.
End of explanation
"""
print str1
print type(str1)
print type(3)
print type(3.)
"""
Explanation: It is easy to check the type of a variable with the type() command:
End of explanation
"""
print str1[0:5]
print str1+str2
print str1.lower()
print str1.upper()
print len(str1)
print str1.replace('h','H')
str= 'This is a question'
str.replace('i','o’)
str.lower()
print str[0:4]
"""
Explanation: The following commands implement some common operations with strings in Python. Have a look at them, and try to deduce what the result of each operation will be. Then, execute the commands and check what are the actual results.
End of explanation
"""
print 'This is just a carriage return symbol.\r Second line will start on top of the first line.'
print 'If you wish to start a new line,\r\nthe line feed character should also be used.'
print 'But note that most applications are tolerant\nto the use of \'line feed\' only.'
"""
Explanation: It is interesting to notice the difference in the use of commands 'lower' and 'len'. Python is an object-oriented language, and str1 is an instance of the Python class 'string'. Then, str1.lower() invokes the method lower() of the class string to which object str1 belongs, while len(str1) or type(str1) imply the use of external methods, not belonging to the class string. In any case, we will not pay (much) attention to these issues during the session.
Finally, we remark that there exist special characters that require special consideration. Apart from language-oriented characters or special symbols (e.g., \euro), the following characters are commonly used to denote carriage return and the start of new lines
End of explanation
"""
list1 = ['student', 'teacher', 1997, 2000]
print list1
list2 = [1, 2, 3, 4, 5 ]
print list2
list3 = ["a", "b", "c", "d"]
print list3
"""
Explanation: 2. Working with Python lists
Python lists are containers that hold a number of other objects, in a given order. To create a list, just put different comma-separated values between square brackets
End of explanation
"""
print list1[0]
print list2[2:4]
print list3[-1]
"""
Explanation: To check the value of a list element, indicate between brackets the index (or indices) to obtain the value (or values) at that position (positions).
Run the code fragment below, and try to guess what the output of each command will be.
Note: Python indexing starts from 0!!!!
End of explanation
"""
list1 = ['student', 'teacher', 1997, 2000]
list1.append(3)
print list1
list1.remove('teacher')
print list1
"""
Explanation: To add elements in a list you can use the method append() and to remove them the method remove()
End of explanation
"""
list2 = [1, 2, 3, 4, 5 ]
print len(list2)
print max(list2)
print min(list2)
"""
Explanation: Other useful functions are:
len(list): Gives the number of elements in a list.
max(list): Returns item from the list with max value.
min(list): Returns item from the list with min value.
End of explanation
"""
x = int(raw_input("Please enter an integer: "))
if x < 0:
x = 0
print 'Negative changed to zero'
elif x == 0:
print 'Zero'
elif x == 1:
print 'Single'
else:
print 'More'
"""
Explanation: 3. Flow control (with 'for' and 'if')
As in other programming languages, python offers mechanisms to loop through a piece of code several times, or for conditionally executing a code fragment when certain conditions are satisfied.
For conditional execution, you can we use the 'if', 'elif' and 'else' statements.
Try to play with the following example:
End of explanation
"""
words = ['cat', 'window', 'open-course']
for w in words:
print w, len(w)
"""
Explanation: The above fragment, allows us also to discuss some important characteristics of the Python language syntaxis:
Unlike other languages, Python does not require to use the 'end' keyword to indicate that a given code fragment finishes. Instead, Python recurs to indentation
Indentation in Python is mandatory, and consists of 4 spaces (for first level indentation)
The condition lines conclude with ':', which are then followed by the indented blocks that will be executed only when the indicated conditions are satisfied.
The statement 'for' lets you iterate over the items of any sequence (a list or a string), in the order that they appear in the sequence
End of explanation
"""
words = ['cat', 'window', 'open-course']
for (i, w) in enumerate(words):
print 'element ' + str(i) + ' is ' + w
"""
Explanation: In combination with enumerate(), you can iterate over the elementes of the sequeence and have a counter over them
End of explanation
"""
f = open('workfile', 'w')
"""
Explanation: 4. File input and output operations
First of all, you need to open a file with the open() function (if it does not exist, it creates it).
End of explanation
"""
f.write('This is a test\n with 2 lines')
f.close()
"""
Explanation: The first argument is a string containing the filename. The second argument defines the mode in which the file will be used:
'r' : only to be read,
'w' : for only writing (an existing file with the same name would be erased),
'a' : the file is opened for appending; any data written to the file is automatically appended to the end.
'r+': opens the file for both reading and writing.
If the mode argument is not included, 'r' will be assumed.
Use f.write(string) to write the contents of a string to the file. When you are done, do not forget to close the file:
End of explanation
"""
f2 = open('workfile', 'r')
text=f2.read()
f2.close()
print text
"""
Explanation: To read the content of a file, use the function f.read():
End of explanation
"""
f2 = open('workfile', 'r')
for line in f2:
print line
f2.close()
"""
Explanation: You can also read line by line from the file identifier
End of explanation
"""
import time
print time.time() # returns the current processor time in seconds
time.sleep(2) # suspends execution for the given number of seconds
print time.time() # returns the current processor time in seconds again!!!
"""
Explanation: 5. Modules import
Python lets you define modules which are files consisting of Python code. A module can define functions, classes and variables.
Most Python distributions already include the most popular modules with predefined libraries which make our programmer lifes easier. Some well-known libraries are: time, sys, os, numpy, ...
There are several ways to import a library:
1) Import all the contents of the library: import lib_name
Note: You have to call these methods as part of the library
End of explanation
"""
import time as t
print t.time()
"""
Explanation: 2) Define a short name to use the library: import lib_name as lib
End of explanation
"""
from time import time, sleep
print time()
"""
Explanation: 3) Import only some elements of the library
Note: now you have to use the methods directly
End of explanation
"""
|
sjchoi86/Tensorflow-101 | notebooks/char_rnn_train_tutorial.ipynb | mit | # Now convert all text to index using vocab!
corpus = np.array(list(map(vocab.get, data)))
print ("Type of 'corpus' is %s, shape is %s, and length is %d"
% (type(corpus), corpus.shape, len(corpus)))
check_len = 10
print ("\n'corpus' looks like %s" % (corpus[0:check_len]))
for i in range(check_len):
_wordidx = corpus[i]
print ("[%d/%d] chars[%02d] corresponds to '%s'"
% (i, check_len, _wordidx, chars[_wordidx]))
# Generate batch data
batch_size = 50
seq_length = 200
num_batches = int(corpus.size / (batch_size * seq_length))
# First, reduce the length of corpus to fit batch_size
corpus_reduced = corpus[:(num_batches*batch_size*seq_length)]
xdata = corpus_reduced
ydata = np.copy(xdata)
ydata[:-1] = xdata[1:]
ydata[-1] = xdata[0]
print ('xdata is ... %s and length is %d' % (xdata, xdata.size))
print ('ydata is ... %s and length is %d' % (ydata, xdata.size))
print ("")
# Second, make batch
xbatches = np.split(xdata.reshape(batch_size, -1), num_batches, 1)
ybatches = np.split(ydata.reshape(batch_size, -1), num_batches, 1)
print ("Type of 'xbatches' is %s and length is %d"
% (type(xbatches), len(xbatches)))
print ("Type of 'ybatches' is %s and length is %d"
% (type(ybatches), len(ybatches)))
print ("")
# How can we access to xbatches??
nbatch = 5
temp = xbatches[0:nbatch]
print ("Type of 'temp' is %s and length is %d"
% (type(temp), len(temp)))
for i in range(nbatch):
temp2 = temp[i]
print ("Type of 'temp[%d]' is %s and shape is %s" % (i, type(temp2), temp2.shape,))
"""
Explanation: chars[0] converts index to char
vocab['a'] converts char to index
End of explanation
"""
# Important RNN parameters
vocab_size = len(vocab)
rnn_size = 128
num_layers = 2
grad_clip = 5.
# Construct RNN model
unitcell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
cell = tf.nn.rnn_cell.MultiRNNCell([unitcell] * num_layers)
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
targets = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weigths
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(1, seq_length, tf.nn.embedding_lookup(embedding, input_data))
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
"""
loop_function: If not None, this function will be applied to the i-th output
in order to generate the i+1-st input, and decoder_inputs will be ignored,
except for the first element ("GO" symbol).
"""
outputs, last_state = tf.nn.seq2seq.rnn_decoder(inputs, istate, cell
, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
# Loss
loss = tf.nn.seq2seq.sequence_loss_by_example([logits], # Input
[tf.reshape(targets, [-1])], # Target
[tf.ones([batch_size * seq_length])], # Weight
vocab_size)
# Optimizer
cost = tf.reduce_sum(loss) / batch_size / seq_length
final_state = last_state
lr = tf.Variable(0.0, trainable=False)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
_optm = tf.train.AdamOptimizer(lr)
optm = _optm.apply_gradients(zip(grads, tvars))
print ("Network Ready")
# Train the model!
num_epochs = 50
save_every = 500
learning_rate = 0.002
decay_rate = 0.97
sess = tf.Session()
sess.run(tf.initialize_all_variables())
summary_writer = tf.train.SummaryWriter(save_dir, graph=sess.graph)
saver = tf.train.Saver(tf.all_variables())
init_time = time.time()
for epoch in range(num_epochs):
# Learning rate scheduling
sess.run(tf.assign(lr, learning_rate * (decay_rate ** epoch)))
state = sess.run(istate)
batchidx = 0
for iteration in range(num_batches):
start_time = time.time()
randbatchidx = np.random.randint(num_batches)
xbatch = xbatches[batchidx]
ybatch = ybatches[batchidx]
batchidx = batchidx + 1
# Note that, num_batches = len(xbatches)
# Train!
train_loss, state, _ = sess.run([cost, final_state, optm]
, feed_dict={input_data: xbatch, targets: ybatch, istate: state})
total_iter = epoch*num_batches + iteration
end_time = time.time();
duration = end_time - start_time
if total_iter % 100 == 0:
print ("[%d/%d] cost: %.4f / Each batch learning took %.4f sec"
% (total_iter, num_epochs*num_batches, train_loss, duration))
if total_iter % save_every == 0:
ckpt_path = os.path.join(save_dir, 'model.ckpt')
saver.save(sess, ckpt_path, global_step = total_iter)
# Save network!
print("model saved to '%s'" % (ckpt_path))
"""
Explanation: Now, we are ready to make our RNN model with seq2seq
End of explanation
"""
print ("Done!! It took %.4f second. " %(time.time() - init_time))
"""
Explanation: Run the command line
tensorboard --logdir=/tmp/tf_logs/char_rnn_tutorial
Open http://localhost:6006/ into your web browser
End of explanation
"""
|
tuchandra/sleep-analysis | spring-sleep-analysis.ipynb | mit | %matplotlib inline
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import json
import datetime
import scipy.stats
matplotlib.style.use('ggplot')
plt.rcParams['figure.figsize'] = [12.0, 8.0]
"""
Explanation: Spring Sleep Analysis
Studying springtime sleep habits though analysis of my Fitbit sleep data
In this notebook, I will study my spring quarter sleep habits with data extracted from my Fitbit Charge HR tracker. The data was obtained through use of Fitbit's API; detailed information about this can be found in README.md and fitbit.py. Each day's sleep data is stored in the /logs/ directory, in the format yyyy-mm-dd.json, which is the output of the fitbit.py script.
Setup
Let's begin with importing what we need and configuring plots for later.
End of explanation
"""
with open('logs/2016-07-01.json') as f:
sample_data = json.loads(f.read())
list(sample_data.keys())
list(sample_data['summary'].keys())
list(sample_data['sleep'][0].keys())
"""
Explanation: Before we dive in, we must first understand the shape of the data. Refer to the Fitbit documentation for detailed information on the format of the log files. As a quick overview, each day contains two top-level attributes, sleep and summary:
- sleep is a list of one or more entries for different sleep events in a day (e.g., a nap and then one's main sleep). Each entry has several attributes, like isMainSleep, minutesAsleep, and minuteData. minuteData contains minute-by-minute sleep data, where each minute has an identifier, 1, 2, or 3, to denote asleep, restless, or awake.
- summary contains summary statistics. It includes the number of sleep events (totalSleepRecords) and the total time in bed or asleep (totalTimeInBed and totalMinutesAsleep).
We can run a couple of simple commands to illustrate this.
End of explanation
"""
dates = pd.date_range('2016-03-29', '2016-06-10')
time_in_bed = []
for date in dates:
fname = 'logs/' + date.strftime('%Y-%m-%d') + '.json'
with open(fname) as f:
date_data = json.loads(f.read())
time_in_bed.append(date_data['summary']['totalTimeInBed'] / 60.0)
df = pd.DataFrame(time_in_bed, index = dates)
df.columns = ['bed']
df.plot()
plt.ylabel('Hours in Bed');
"""
Explanation: Time Spent in Bed
How long was I in bed each night? I restrict my analysis to the spring academic term at my school (March 29th to June 10th).
(I am choosing to study totalTimeInBed, rather than totalMinutesAsleep, for a couple of reasons. I generally fall asleep quickly and sleep well; I rarely get up in the middle of the night; and a quick look at the distributions shows them as being very similar. I choose to study the dataset that counts time I was 'restless.')
End of explanation
"""
df.describe().transpose()
df.plot.hist(bins = 8, range = (3, 11))
plt.xlim(3, 11)
plt.xticks(range(3, 11))
plt.xlabel('Hours in Bed')
plt.ylabel('Count');
"""
Explanation: As interesting as this looks, there isn't a whole lot to take from it on the surface. The vague periodicity of the graph sugggests that we look at different days of the week, so we'll do that soon. Let's first look at the distribution of how long I was in bed.
End of explanation
"""
df['day_of_week'] = df.index.weekday
df['day_type'] = df['day_of_week'].apply(lambda x: 'Weekend' if x >= 5 else 'Weekday')
df.head()
df.boxplot(column = 'bed', by = 'day_type', positions = [2, 1],
vert = False, widths = 0.5)
plt.xlabel('Hours in Bed')
plt.suptitle('')
plt.title('');
# Group dataframe by weekday vs. weekend
df_weekdays = df[df.day_of_week < 5]
df_weekend = df[df.day_of_week >= 5]
scipy.stats.ttest_ind(df_weekdays['bed'], df_weekend['bed'])
"""
Explanation: As far as behavioral data goes, this is reasonably well-behaved. Notice that this distribution varies quite a bit, with a standard deviation of nearly two hours. (Anyone familiar with the sleep habits of a college student will not be surprised.)
Sleep Paterns by Day of Week
Let's now look at the different days of the week. Did I sleep more on weekends? (I certainly hope so.) What nights were the worst? We can add another identifier denoting the day of the week, and then group the days based on whether they were weekdays or weekends. We'll then look at the distriutions and compare them with a t-test.
Note that a sleep record for a certain day (e.g., Friday) contains data from the previous night (in that case, Thursday night). Also, pandas numbers days as 0 = Monday, ... , 6 = Sunday. Altogether, this means weekdays are labeled 0 through 4, and weekends are labeled 5 and 6.
End of explanation
"""
# Add a label for day name, to make the boxplot more readable
days = {0: 'Monday', 1: 'Tuesday', 2: 'Wednesday', 3: 'Thursday', 4: 'Friday',
5: 'Saturday', 6: 'Sunday'}
df['day_name'] = df['day_of_week'].apply(lambda x: days[x])
df.boxplot(column = 'bed', by = 'day_name', positions = [5, 1, 6, 7, 4, 2, 3])
# Configure title and axes
plt.suptitle('')
plt.title('')
plt.ylabel('Hours in Bed')
plt.xlabel('');
"""
Explanation: A low p-value would suggest I sleep statistically significantly more on weekends than on weekdays. The two boxplots illustrate this observation. I'm not at all surprised by this, but it's always cool to have data to back up your intuition.
What about individual days? We can consider each night of the week separately, and look at the distributions there.
End of explanation
"""
bedtimes = []
# Read data into list
for date in dates:
fname = 'logs/' + date.strftime('%Y-%m-%d') + '.json'
with open(fname) as f:
date_data = json.loads(f.read())
# Note that sleep_event['startTime'][11:16] gets the hh:mm characters
# from the start of a sleep event; it is then converted to a datetime
for sleep_event in date_data['sleep']:
bedtimes.append((pd.to_datetime(sleep_event['startTime'][11:16]),
sleep_event['timeInBed'] / 60.0,
sleep_event['isMainSleep']))
# Convert to dataframe, and make 'bedtime' a float (e.g., 5:30 -> 5.5)
df = pd.DataFrame(bedtimes, columns = ['bedtime', 'duration', 'main'])
df['bedtime'] = df['bedtime'].dt.hour + df['bedtime'].dt.minute / 60.0
# Make first plot: scatterplot of bedtime vs. duration, colored by main sleep
ax = df[df.main == True].plot.scatter(x = 'bedtime', y = 'duration',
color = 'Red', s = 100,
label = 'Main Sleep')
df[df.main == False].plot.scatter(x = 'bedtime', y = 'duration',
color = 'SlateBlue', ax = ax, s = 100,
label = 'Secondary')
# List of times to use for labels
times = [str(2 * h) + ':00' for h in range(12)]
# Configure legend, x-axis, labels
plt.legend(scatterpoints = 1, markerfirst = False)
plt.xticks(range(0, 24, 2), times)
plt.xlim(0, 24)
plt.xlabel('Bedtime')
plt.ylabel('Hours in Bed');
# Overlay a histogram of bedtimes on the same plot, using a secondary y-axis
ax2 = ax.twinx()
df['bedtime'].map(lambda x: int(x)).plot.hist(bins = range(24),
color = 'MediumSeaGreen',
alpha = 0.3, grid = False)
# Configure secondary y-axis
plt.yticks(range(0, 28, 4))
plt.ylim(0, 28);
"""
Explanation: This is very illustrative. Tuesdays, Thursdays, and Fridays were the days on which I got the least amount of sleep the previous night. Looking back on my quarter, I can find explanations for all of these -- Monday nights were extremely busy, and I could not start work until pretty late; Thursday mornings, I had to wake up earlier than usual to go to work, impacting my sleep the night before; and on Fridays, I had a quantum mechanics problem set due, which was too often put off to the night before. I find it extremely interesting to see my academic habits reflected in my sleep habits.
Bedtimes
I've learned quite a bit about how much I sleep. What about when I sleep? Let's create a new dataset; this one will have columns for bedtime (formally defined as the start of any sleep event, including naps), for the time I was in bed (as before), and for whether or not it was a 'main sleep' (the longest sleep event of a 24-hour period, distinguishing regular sleep from naps).
Let's look at my sleep duration versus my bedtime. While the next two code blocks are pretty large, their purposes are straightforward: we want to create the dataset described above, then plot it.
End of explanation
"""
|
elleros/spoofed-speech-detection | gmm_ml_synthetic_speech_detection.ipynb | mit | import os
import time
import numpy as np
import pandas as pd
from bob.bio.spear import preprocessor, extractor
from bob.bio.gmm import algorithm
from bob.io.base import HDF5File
from bob.learn import em
from sklearn.metrics import classification_report, roc_curve, roc_auc_score
WAV_FOLDER = 'Wav/' #'ASV2015dataset/wav/' # Path to folder containing speakers .wav subfolders
LABEL_FOLDER = 'CM_protocol/' #'ASV2015dataset/CM_protocol/' # Path to ground truth csv files
EXT = '.wav'
%matplotlib inline
"""
Explanation: Spoofed Speech Detection via Maximum Likelihood Estimation of Gaussian Mixture Models
The goal of synthetic speech detection is to determine whether a speech segment $S$ is natural or synthetic/converted speeach.
This notebook implements a Gaussian mixture model maximum likelihood (GMM-ML) classifier for synthetic (spoofed) speech detection. This approach uses regular mel frequency cepstral coefficients (MFCC) features and gives the best performance on the ASVspoof 2015 dataset among the standard classifiers (GMM-SV, GMM-UBM, ...). For more background information see: Hanilçi, Cemal, Tomi Kinnunen, Md Sahidullah, and Aleksandr Sizov. "Classifiers for synthetic speech detection: a comparison." In INTERSPEECH 2015. The scripts use the Python package Bob.Bio.SPEAR 2.04 for speaker recogntion.
This work is part of the "DDoS Resilient Emergency Dispatch Center" project at the University of Houston, funded by the Department of Homeland Security (DHS).
April 19, 2015
Lorenzo Rossi
(lorenzo [dot] rossi [at] gmail [dot] com)
End of explanation
"""
train_subfls = ['T1']#, 'T2', 'T3', 'T4', 'T5', 'T6', 'T7', 'T8', 'T9', 'T13'] #T13 used instead of T10 for gender balance
devel_subfls = ['D1']#, 'D2', 'D3', 'D4', 'D5', 'D6', 'D7', 'D8', 'D9', 'D10']
evalu_subfls = ['E1']#, 'E2', 'E3', 'E4', 'E5', 'E6','E7', 'E8', 'E9', 'E10']
train = pd.read_csv(LABEL_FOLDER + 'cm_train.trn', sep=' ', header=None, names=['folder','file','method','source'])
if len(train_subfls): train = train[train.folder.isin(train_subfls)]
train.sort_values(['folder', 'file'], inplace=True)
devel = pd.read_csv(LABEL_FOLDER + 'cm_develop.ndx', sep=' ', header=None, names=['folder','file','method','source'])
if len(devel_subfls): devel = devel[devel.folder.isin(devel_subfls)]
devel.sort_values(['folder', 'file'], inplace=True)
evalu = pd.read_csv(LABEL_FOLDER +'cm_evaluation.ndx', sep=' ', header=None, names=['folder','file','method','source'])
if len(evalu_subfls): evalu = evalu[evalu.folder.isin(evalu_subfls)]
evalu.sort_values(['folder', 'file'], inplace=True)
label_2_class = {'human':1, 'spoof':0}
print('training samples:',len(train))
print('development samples:',len(devel))
print('evaluation samples:',len(evalu))
"""
Explanation: Loading the Ground Truth
Load the dataframes (tables) with the labels for the training, development and evaluation (hold out) sets. Each subfolder corresponds to a different speaker. For example, T1 and D4 indicate the subfolders associated to the utterances and spoofed segments of speakers T1 and D4, respectively in training and development sets. Note that number of evaluation samples >> number of development samples >> testing samples.
You can either select the speakers in each set one by one, e.g.:
train_subfls = ['T1', 'T2']
will only load segments from speakers T1 and T2 for training,
or use all the available speakers in a certain subset by leaving the list empty, e.g.:
devel_subfls = []
will load all the available Dx speaker segments for the development stage. If you are running this notebook for the first time, you may want to start only with 2 or so speakers per set for sake of quick testing. All the scripts may take several hours to run on the full size datsets.
End of explanation
"""
# Parameters
n_ceps = 60 # number of ceptral coefficients (implicit in extractor)
silence_removal_ratio = .1
subfolders = train_subfls
ground_truth = train
# initialize feature matrix
features = []
y = np.zeros((len(ground_truth),))
print("Extracting features for training stage.")
vad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio)
cepstrum = extractor.Cepstral()
k = 0
start_time = time.clock()
for folder in subfolders[0:n_subfls]:
print(folder, end=", ")
folder = "".join(('Wav/',folder,'/'))
f_list = os.listdir(folder)
for f_name in f_list:
# ground truth
try:
label = ground_truth[ground_truth.file==f_name[:-len(EXT)]].source.values[0]
except IndexError:
continue
y[k] = label_2_class[label]
# silence removal
x = vad.read_original_data(folder+f_name)
vad_data = vad(x)
if not vad_data[2].max():
vad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio*.8)
vad_data = vad(x)
vad = preprocessor.Energy_Thr(ratio_threshold=silence_removal_ratio)
# MFCC extraction
mfcc = cepstrum(vad_data)
features.append(mfcc)
k += 1
Xf = np.array(features)
print(k,"files processed in",(time.clock()-start_time)/60,"minutes.")
"""
Explanation: Speech Preprocessing and MFCC Extraction
Silence removal and MFCC feature extraction for training segments. More details about the bob.bio.spear involved libraries at:
https://www.idiap.ch/software/bob/docs/latest/bioidiap/bob.bio.spear/master/implemented.html
You can also skip this stage and load a set of feaures (see Loading features cell).
End of explanation
"""
np.save('X.npy',Xf)
np.save('y.npy',y)
print('Feature and label matrices saved to disk')
"""
Explanation: Saving features
End of explanation
"""
# Load already extracter features to skip the preprocessing-extraction stage
Xf = np.load('train_features_10.npy')
y = np.load('y_10.npy')
"""
Explanation: Loading features
End of explanation
"""
# Parameters of the GMM machines
n_gaussians = 128 # number of Gaussians
max_iterats = 25 # maximum number of iterations
"""
Explanation: GMM - ML Classification
GMM Training
Train the GMMs for natural and synthetic speach. For documentation on bob.bio k-means and GMM machines see:
https://pythonhosted.org/bob.learn.em/guide.html
You can also skip the training stage and load an already trained GMM model (see cell Loading GMM Model).
End of explanation
"""
# Initialize and train k-means machine: the means will initialize EM algorithm for GMM machine
start_time = time.clock()
kmeans_nat = em.KMeansMachine(n_gaussians,n_ceps)
kmeansTrainer = em.KMeansTrainer()
em.train(kmeansTrainer, kmeans_nat, np.vstack(Xf[y==1]), max_iterations = max_iterats, convergence_threshold = 1e-5)
#kmeans_nat.means
# initialize and train GMM machine
gmm_nat = em.GMMMachine(n_gaussians,n_ceps)
trainer = em.ML_GMMTrainer(True, True, True)
gmm_nat.means = kmeans_nat.means
em.train(trainer, gmm_nat, np.vstack(Xf[y==1]), max_iterations = max_iterats, convergence_threshold = 1e-5)
#gmm_nat.save(HDF5File('gmm_nat.hdf5', 'w'))
print("Done in:", (time.clock() - start_time)/60, "minutes")
print(gmm_nat)
"""
Explanation: GMM for natural speech
End of explanation
"""
# initialize and train k-means machine: the means will initialize EM algorithm for GMM machine
start_time = time.clock()
kmeans_synt = em.KMeansMachine(n_gaussians,n_ceps)
kmeansTrainer = em.KMeansTrainer()
em.train(kmeansTrainer, kmeans_synt, np.vstack(Xf[y==0]), max_iterations = max_iterats, convergence_threshold = 1e-5)
# initialize and train GMM machine
gmm_synt = em.GMMMachine(n_gaussians,n_ceps)
trainer = em.ML_GMMTrainer(True, True, True)
gmm_synt.means = kmeans_synt.means
em.train(trainer, gmm_synt, np.vstack(Xf[y==0]), max_iterations = max_iterats, convergence_threshold = 1e-5)
print("Done in:", (time.clock() - start_time)/60, "minutes")
#gmm_synt.save(HDF5File('gmm_synt.hdf5', 'w'))
print(gmm_synt)
"""
Explanation: GMM for synthetic speech
End of explanation
"""
gmm_nat = em.GMMMachine()
gmm_nat.load(HDF5File('gmm_nat.hdf5', 'r'))
gmm_synt = em.GMMMachine()
gmm_synt.load(HDF5File('gmm_synt.hdf5','r'))
np.save('p_gmm_ml_eval_10.npy',llr_score)
np.save('z_gmm_ml_eval_est_10.npy',z_gmm)
"""
Explanation: Loading GMM model
End of explanation
"""
status = 'devel' # 'devel'(= test) OR 'evalu'(= hold out)
start_time = time.clock()
if status == 'devel':
subfolders = devel_subfls
ground_truth = devel
elif status == 'evalu':
subfolders = evalu_subfls
ground_truth = evalu
n_subfls = len(subfolders)
# initialize score and class arrays
llr_gmm_score = np.zeros(len(ground_truth),)
z_gmm = np.zeros(len(ground_truth),)
print(status)
vad = preprocessor.Energy_Thr(ratio_threshold=.1)
cepstrum = extractor.Cepstral()
k = 0
thr = .5
speaker_list = ground_truth.folder.unique()
for speaker_id in speaker_list:
#speaker = ground_truth[ground_truth.folder==speaker_id]
f_list = list(ground_truth[ground_truth.folder==speaker_id].file)
folder = "".join(['Wav/',speaker_id,'/'])
print(speaker_id, end=',')
for f in f_list:
f_name = "".join([folder,f,'.wav'])
x = vad.read_original_data(f_name)
# voice activity detection
vad_data = vad(x)
if not vad_data[2].max():
vad = preprocessor.Energy_Thr(ratio_threshold=.08)
vad_data = vad(x)
vad = preprocessor.Energy_Thr(ratio_threshold=.1)
# MFCC extraction
mfcc = cepstrum(vad_data)
# Log likelihood ratio computation
llr_gmm_score[k] = gmm_nat(mfcc)-gmm_synt(mfcc)
z_gmm[k] = int(llr_gmm_score[k]>0)
k += 1
ground_truth['z'] = ground_truth.source.map(lambda x: int(x=='human'))
ground_truth['z_gmm'] = z_gmm
ground_truth['score_gmm'] = llr_gmm_score
print(roc_auc_score(ground_truth.z, ground_truth.z_gmm))
print(k,"files processed in",(time.clock()-start_time)/60,"minutes.")
# Performance evaluation
humans = z_gmm[z_dvl==0]
spoofed = z_gmm[z_dvl==1]
fnr = 100*(1-(humans<thr).sum()/len(humans))
fpr = 100*(1-(spoofed>=thr).sum()/len(spoofed))
print("ROC AUC score:", roc_auc_score(z_dvl,z_gmm))
print("False negative rate %:", fnr)
print("False positive rate %:", fpr)
print("EER %: <=", (fnr+fpr)/2)
"""
Explanation: GMM-ML Scoring
Extract the features for the testing data, compute the likelihood ratio test and compute ROC AUC and estimated EER scores.
End of explanation
"""
thr = -.115
pz = llr_gmm_score
spoofed = pz[np.array(ground_truth.z)==1]
humans = pz[np.array(ground_truth.z)==0]
fnr = 100*(humans>thr).sum()/len(humans)
fpr = 100*(spoofed<=thr).sum()/len(spoofed)
print("False negative vs positive rates %:", fnr, fpr)
print("FNR - FPR %:", fnr-fpr)
if np.abs(fnr-fpr) <.25:
print("EER =", (fnr+fpr)/2,"%")
else:
print("EER ~", (fnr+fpr)/2,"%")
"""
Explanation: EER computation
Adjust the threshold $thr$ to reduce $FNR-FPR$ for a more accurate estimate of the $EER$.
The Equal Error Rate ($EER$) is the value where the false negative rate ($FNR$) equals the false positive rate ($FPR$). It's an error metric commonly used to characterize biometric systems.
End of explanation
"""
|
dblyon/PandasIntro | Exercises_part_A_with_Solutions.ipynb | mit | %%javascript
$.getScript('misc/kmahelona_ipython_notebook_toc.js')
"""
Explanation: <h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
"""
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
"""
Explanation: DataFrame basics
Difficulty: easy
A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Consider the following Python dictionary data and Python list labels:
End of explanation
"""
# Answer
df = pd.DataFrame(data, index=labels)
"""
Explanation: Task: Create a DataFrame df from this dictionary data which has the index labels.
End of explanation
"""
# Answer
df.info()
# ...or...
df.describe()
"""
Explanation: Task: Display a summary of the basic information about this DataFrame and its data.
End of explanation
"""
# Answer
df.iloc[:3]
# or equivalently
df.head(3)
"""
Explanation: Task: Return the first 3 rows of the DataFrame df.
End of explanation
"""
# Answer
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
"""
Explanation: Task: Select just the 'animal' and 'age' columns from the DataFrame df.
End of explanation
"""
# Answer
df.loc[["d", "e", "i"], ["animal", "age"]]
"""
Explanation: Task: Select the data in rows ["d", "e", "i"] and in columns ['animal', 'age'].
End of explanation
"""
# Answer
# either
df.ix[[3, 4, 8], ['animal', 'age']]
# or preferentially
df.iloc[[3, 4, 8], [1, 0]]
"""
Explanation: Task: Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
End of explanation
"""
# Answer
df[df['visits'] > 2]
"""
Explanation: Task: Select only the rows where the number of visits is greater than 2.
End of explanation
"""
# Answer
df[df['age'].isnull()]
"""
Explanation: Task: Select the rows where the age is missing, i.e. is NaN.
End of explanation
"""
# Answer
df[(df['animal'] == 'cat') & (df['age'] < 3)]
"""
Explanation: Task: Select the rows where the animal is a cat and the age is less than 3.
End of explanation
"""
# Answer
df.loc['f', 'age'] = 1.5
"""
Explanation: Task: Change the age in row 'f' to 1.5.
End of explanation
"""
# Answer
df['visits'].sum()
"""
Explanation: Task: Calculate the sum of all visits (the total number of visits).
End of explanation
"""
# Answer
df.groupby('animal')['age'].mean()
"""
Explanation: Task: Calculate the mean age for each different animal in df.
End of explanation
"""
# Answer
df.loc['k'] = [5.5, 'dog', 'no', 2]
# and then deleting the new row...
df = df.drop('k')
"""
Explanation: Task: Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
End of explanation
"""
# Answer
df['animal'].value_counts()
"""
Explanation: Task: Count the number of each type of animal in df.
End of explanation
"""
# Answer
df.sort_values(by=['age', 'visits'], ascending=[False, True])
"""
Explanation: Task: Sort df first by the values in the 'age' in decending order, then by the value in the 'visit' column in ascending order.
End of explanation
"""
# Answer
df['priority'] = df['priority'].map({'yes': True, 'no': False})
"""
Explanation: Task: The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be True and 'no' should be False.
End of explanation
"""
# Answer
df['animal'] = df['animal'].replace('snake', 'python')
"""
Explanation: Task: In the 'animal' column, change the 'snake' entries to 'python'.
End of explanation
"""
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
# Answer
len(df) - df.duplicated(keep=False).sum()
# or perhaps more simply...
len(df.drop_duplicates(keep=False))
"""
Explanation: Task: For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
DataFrames: beyond the basics
Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: medium
The previous section was tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
Task: How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
End of explanation
"""
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
# Answer
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
"""
Explanation: Task: A DataFrame has a column of groups 'grps' and and column of numbers 'vals'.
For each group, find the sum of the three greatest values.
End of explanation
"""
df = pd.DataFrame.from_dict({'A': {1: 0, 2: 6, 3: 12, 4: 18, 5: 24},
'B': {1: 1, 2: 7, 3: 13, 4: 19, 5: 25},
'C': {1: 2, 2: 8, 3: 14, 4: 20, 5: 26},
'D': {1: 3, 2: 9, 3: 15, 4: 21, 5: 27},
'E': {1: 4, 2: 10, 3: 16, 4: 22, 5: 28},
'F': {1: 5, 2: 11, 3: 17, 4: 23, 5: 29}})
df.head()
# Answer
df.unstack().sort_values()[-3:].index.tolist()
# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/
# credit: DSM
"""
Explanation: DataFrames: harder problems
These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit for loops).
Difficulty: hard
Task: Consider a DataFrame consisting of purely numerical data. Create a list of the row-column indices of the 3 largest values.
End of explanation
"""
import pandas as pd
df = pd.DataFrame([[1,1],[1,-1],[2,1],[2,2]], columns=["groups", "vals"])
df
# Answer
def replace_within_group(group):
mask = group < 0
# Select those values where it is < 0, and replace
# them with the mean of the values which are not < 0.
group[mask] = group[~mask].mean() # "~" is the "invert" or "complement" operation
return group
df.groupby(['groups'])['vals'].transform(replace_within_group)
# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/
# credit: unutbu
"""
Explanation: Task: Given a DataFrame with a column of group IDs, 'groups', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
End of explanation
"""
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
"""
Explanation: Cleaning Data
Making a DataFrame easier to work with
Difficulty: easy/medium
It happens all the time: someone gives you data containing malformed strings, Python, lists and missing data. How do you tidy it up so you can get on with the analysis?
End of explanation
"""
# Answer
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)
"""
Explanation: Task: Some values in the the FlightNumber column are missing. These numbers are meant to increase by 10 with each row. Therefore the numbers 10055 and 10075 need to replace the missing values. Fill in these missing numbers and make the column an integer column (instead of a float column).
End of explanation
"""
# Answer
# temp = df.From_To.str.split('_', expand=True)
# temp.columns = ['From', 'To']
# or
temp = pd.DataFrame()
temp["From"], temp["To"] = zip(*df["From_To"].apply(lambda s: s.split("_")))
"""
Explanation: Apply
Task: The From_To column would be better as two separate columns! Split each string on the underscore delimiter _ to make two new columns with the correct values. Assign the correct column names to a new temporary DataFrame called temp.
End of explanation
"""
# Answer
# temp['From'] = temp['From'].str.capitalize()
# temp['To'] = temp['To'].str.capitalize()
# or
temp['From'] = temp['From'].apply(lambda s: s.capitalize())
temp['To'] = temp['To'].apply(lambda s: s.capitalize())
"""
Explanation: Task: Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
End of explanation
"""
# Answer
df = df.drop('From_To', axis=1)
df = df.join(temp)
"""
Explanation: Task: Delete the From_To column from df and attach the temporary DataFrame from the previous questions.
End of explanation
"""
# Answer
df['Airline'] = df['Airline'].str.extract('([a-zA-Z\s]+)', expand=False).str.strip()
"""
Explanation: Task: In the Airline column, you can see some extra puctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. '(British Airways. )' should become 'British Airways'.
End of explanation
"""
# Answer
# there are several ways to do this, but the following approach is possibly the simplest
delays = df['RecentDelays'].apply(pd.Series)
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns)+1)]
df = df.drop('RecentDelays', axis=1).join(delays)
"""
Explanation: Task:. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named delays, rename the columns delay_1, delay_2, etc. and replace the unwanted RecentDelays column in df with delays.
End of explanation
"""
fn = "data/Saliva.txt"
df = pd.read_csv(fn, sep='\t')
df
"""
Explanation: The DataFrame should look much better now.
Rename, delete, rank and pivot
End of explanation
"""
# Answer:
df = df.rename(columns={"species": "NCBI_TaxID"})
# or
# columns = df.columns.tolist()
# df.columns = [columns[0]] + ["NCBI_TaxID"] + columns[2:]
"""
Explanation: Task: Rename species to NCBI_TaxID
End of explanation
"""
# Answer:
df = df.drop(["rank", "frequency"], axis=1)
df
"""
Explanation: Task: Delete the columns named frequency and rank
End of explanation
"""
# Answer:
def top_n(df, colname, n=3):
return df.sort_values(colname, ascending=False)[:n]
df.groupby("SampleCategory").apply(top_n, "Abundance", 2)
"""
Explanation: Task: Select the top 2 most abundant taxa per sample category. Write a function that expects a DataFrame, a column-name, and top-n (an Integer indicating the top n most abundant things within the given column-name).
End of explanation
"""
# Answer:
df["Rank"] = df.groupby("SampleCategory")["Abundance"].rank(ascending=False)
df
"""
Explanation: Task: Create a column named Rank that ranks the taxa by their abundance within each SampleCategory in descending order (most abundant taxa get the lowest rank).
End of explanation
"""
# Answer:
abu = df.pivot_table(values='Abundance', index='TaxName', columns='SampleCategory')
rank = df.pivot_table(values='Rank', index='TaxName', columns='SampleCategory')
dfp = pd.concat([rank, abu], axis=1).reset_index()
dfp
"""
Explanation: Task: Reshape the DataFrame so that you can compare the values of Rank and Abundance from the three sample categories by placing them next to each other in one row per taxon. In other words, reshape in a way that you get one row per taxon, placing Rank and Abundance values from the three sample categories next to each other (from "long" to "wide" format).
End of explanation
"""
|
ktaneishi/deepchem | examples/notebooks/pong.ipynb | mit | import deepchem as dc
import numpy as np
class PongEnv(dc.rl.GymEnvironment):
def __init__(self):
super(PongEnv, self).__init__('Pong-v0')
self._state_shape = (80, 80)
@property
def state(self):
# Crop everything outside the play area, reduce the image size,
# and convert it to black and white.
cropped = np.array(self._state)[34:194, :, :]
reduced = cropped[0:-1:2, 0:-1:2]
grayscale = np.sum(reduced, axis=2)
bw = np.zeros(grayscale.shape)
bw[grayscale != 233] = 1
return bw
def __deepcopy__(self, memo):
return PongEnv()
env = PongEnv()
"""
Explanation: Pong in DeepChem with A3C
This notebook demonstrates using reinforcement learning to train an agent to play Pong.
The first step is to create an Environment that implements this task. Fortunately,
OpenAI Gym already provides an implementation of Pong (and many other tasks appropriate
for reinforcement learning). DeepChem's GymEnvironment class provides an easy way to
use environments from OpenAI Gym. We could just use it directly, but in this case we
subclass it and preprocess the screen image a little bit to make learning easier.
End of explanation
"""
import deepchem.models.tensorgraph.layers as layers
import tensorflow as tf
class PongPolicy(dc.rl.Policy):
def create_layers(self, state, **kwargs):
conv1 = layers.Conv2D(num_outputs=16, in_layers=state, kernel_size=8, stride=4)
conv2 = layers.Conv2D(num_outputs=32, in_layers=conv1, kernel_size=4, stride=2)
dense = layers.Dense(out_channels=256, in_layers=layers.Flatten(in_layers=conv2), activation_fn=tf.nn.relu)
gru = layers.GRU(n_hidden=16, batch_size=1, in_layers=layers.Reshape(shape=(1, -1, 256), in_layers=dense))
concat = layers.Concat(in_layers=[dense, layers.Reshape(shape=(-1, 16), in_layers=gru)])
action_prob = layers.Dense(out_channels=env.n_actions, activation_fn=tf.nn.softmax, in_layers=concat)
value = layers.Dense(out_channels=1, in_layers=concat)
return {'action_prob':action_prob, 'value':value}
policy = PongPolicy()
"""
Explanation: Next we create a network to implement the policy. We begin with two convolutional layers to process
the image. That is followed by a dense (fully connected) layer to provide plenty of capacity for game
logic. We also add a small Gated Recurrent Unit. That gives the network a little bit of memory, so
it can keep track of which way the ball is moving.
We concatenate the dense and GRU outputs together, and use them as inputs to two final layers that serve as the
network's outputs. One computes the action probabilities, and the other computes an estimate of the
state value function.
End of explanation
"""
from deepchem.models.tensorgraph.optimizers import Adam
a3c = dc.rl.A3C(env, policy, model_dir='model', optimizer=Adam(learning_rate=0.0002))
"""
Explanation: We will optimize the policy using the Asynchronous Advantage Actor Critic (A3C) algorithm. There are lots of hyperparameters we could specify at this point, but the default values for most of them work well on this problem. The only one we need to customize is the learning rate.
End of explanation
"""
a3c.fit(1000)
# Change this for how long you have the patience
million = 10e6
num_rounds = 0
for round in range(num_rounds):
a3c.fit(million, restore=True)
"""
Explanation: Optimize for as long as you have patience to. By 1 million steps you should see clear signs of learning. Around 3 million steps it should start to occasionally beat the game's built in AI. By 7 million steps it should be winning almost every time. Running on my laptop, training takes about 20 minutes for every million steps.
End of explanation
"""
from datetime import datetime
def render_env(env):
try:
env.env.render()
except Exception as e:
print(e)
a3c.restore()
env.reset()
start = datetime.now()
while (datetime.now() - start).total_seconds() < 120:
render_env(env)
env.step(a3c.select_action(env.state))
"""
Explanation: Let's watch it play and see how it does!
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/06_structured/1_explore.ipynb | apache-2.0 | # change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
"""
Explanation: <h1> 1. Exploring natality dataset </h1>
This notebook illustrates:
<ol>
<li> Exploring a BigQuery dataset using AI Platform Notebooks.
</ol>
End of explanation
"""
# Create SQL query using natality data after the year 2000
query = """
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
"""
Explanation: <h2> Explore data </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data.
End of explanation
"""
# Create function that finds the number of records and the average weight for each value of the chosen column
def get_distinct_values(column_name):
sql = """
SELECT
{0},
COUNT(1) AS num_babies,
AVG(weight_pounds) AS avg_wt
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
{0}
""".format(column_name)
return bigquery.Client().query(sql).to_dataframe()
# Bar plot to see is_male with avg_wt linear and num_babies logarithmic
df = get_distinct_values('is_male')
df.plot(x='is_male', y='num_babies', kind='bar');
df.plot(x='is_male', y='avg_wt', kind='bar');
# Line plots to see mother_age with avg_wt linear and num_babies logarithmic
df = get_distinct_values('mother_age')
df = df.sort_values('mother_age')
df.plot(x='mother_age', y='num_babies');
df.plot(x='mother_age', y='avg_wt');
# Bar plot to see plurality(singleton, twins, etc.) with avg_wt linear and num_babies logarithmic
df = get_distinct_values('plurality')
df = df.sort_values('plurality')
df.plot(x='plurality', y='num_babies', logy=True, kind='bar');
df.plot(x='plurality', y='avg_wt', kind='bar');
# Bar plot to see gestation_weeks with avg_wt linear and num_babies logarithmic
df = get_distinct_values('gestation_weeks')
df = df.sort_values('gestation_weeks')
df.plot(x='gestation_weeks', y='num_babies', logy=True, kind='bar');
df.plot(x='gestation_weeks', y='avg_wt', kind='bar');
"""
Explanation: Let's write a query to find the unique values for each of the columns and the count of those values.
This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
End of explanation
"""
|
keras-team/keras-io | examples/vision/ipynb/vivit.ipynb | apache-2.0 | !pip install -qq medmnist
"""
Explanation: Video Vision Transformer
Author: Aritra Roy Gosthipaty, Ayush Thakur (equal contribution)<br>
Date created: 2022/01/12<br>
Last modified: 2022/01/12<br>
Description: A Transformer-based architecture for video classification.
Introduction
Videos are sequences of images. Let's assume you have an image
representation model (CNN, ViT, etc.) and a sequence model
(RNN, LSTM, etc.) at hand. We ask you to tweak the model for video
classification. The simplest approach would be to apply the image
model to individual frames, use the sequence model to learn
sequences of image features, then apply a classification head on
the learned sequence representation.
The Keras example
Video Classification with a CNN-RNN Architecture
explains this approach in detail. Alernatively, you can also
build a hybrid Transformer-based model for video classification as shown in the Keras example
Video Classification with Transformers.
In this example, we minimally implement
ViViT: A Video Vision Transformer
by Arnab et al., a pure Transformer-based model
for video classification. The authors propose a novel embedding scheme
and a number of Transformer variants to model video clips. We implement
the embedding scheme and one of the variants of the Transformer
architecture, for simplicity.
This example requires TensorFlow 2.6 or higher, and the medmnist
package, which can be installed by running the code cell below.
End of explanation
"""
import os
import io
import imageio
import medmnist
import ipywidgets
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Setting seed for reproducibility
SEED = 42
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"
keras.utils.set_random_seed(SEED)
"""
Explanation: Imports
End of explanation
"""
# DATA
DATASET_NAME = "organmnist3d"
BATCH_SIZE = 32
AUTO = tf.data.AUTOTUNE
INPUT_SHAPE = (28, 28, 28, 1)
NUM_CLASSES = 11
# OPTIMIZER
LEARNING_RATE = 1e-4
WEIGHT_DECAY = 1e-5
# TRAINING
EPOCHS = 60
# TUBELET EMBEDDING
PATCH_SIZE = (8, 8, 8)
NUM_PATCHES = (INPUT_SHAPE[0] // PATCH_SIZE[0]) ** 2
# ViViT ARCHITECTURE
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 128
NUM_HEADS = 8
NUM_LAYERS = 8
"""
Explanation: Hyperparameters
The hyperparameters are chosen via hyperparameter
search. You can learn more about the process in the "conclusion" section.
End of explanation
"""
def download_and_prepare_dataset(data_info: dict):
"""Utility function to download the dataset.
Arguments:
data_info (dict): Dataset metadata.
"""
data_path = keras.utils.get_file(origin=data_info["url"], md5_hash=data_info["MD5"])
with np.load(data_path) as data:
# Get videos
train_videos = data["train_images"]
valid_videos = data["val_images"]
test_videos = data["test_images"]
# Get labels
train_labels = data["train_labels"].flatten()
valid_labels = data["val_labels"].flatten()
test_labels = data["test_labels"].flatten()
return (
(train_videos, train_labels),
(valid_videos, valid_labels),
(test_videos, test_labels),
)
# Get the metadata of the dataset
info = medmnist.INFO[DATASET_NAME]
# Get the dataset
prepared_dataset = download_and_prepare_dataset(info)
(train_videos, train_labels) = prepared_dataset[0]
(valid_videos, valid_labels) = prepared_dataset[1]
(test_videos, test_labels) = prepared_dataset[2]
"""
Explanation: Dataset
For our example we use the
MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification
dataset. The videos are lightweight and easy to train on.
End of explanation
"""
@tf.function
def preprocess(frames: tf.Tensor, label: tf.Tensor):
"""Preprocess the frames tensors and parse the labels."""
# Preprocess images
frames = tf.image.convert_image_dtype(
frames[
..., tf.newaxis
], # The new axis is to help for further processing with Conv3D layers
tf.float32,
)
# Parse label
label = tf.cast(label, tf.float32)
return frames, label
def prepare_dataloader(
videos: np.ndarray,
labels: np.ndarray,
loader_type: str = "train",
batch_size: int = BATCH_SIZE,
):
"""Utility function to prepare the dataloader."""
dataset = tf.data.Dataset.from_tensor_slices((videos, labels))
if loader_type == "train":
dataset = dataset.shuffle(BATCH_SIZE * 2)
dataloader = (
dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
.batch(batch_size)
.prefetch(tf.data.AUTOTUNE)
)
return dataloader
trainloader = prepare_dataloader(train_videos, train_labels, "train")
validloader = prepare_dataloader(valid_videos, valid_labels, "valid")
testloader = prepare_dataloader(test_videos, test_labels, "test")
"""
Explanation: tf.data pipeline
End of explanation
"""
class TubeletEmbedding(layers.Layer):
def __init__(self, embed_dim, patch_size, **kwargs):
super().__init__(**kwargs)
self.projection = layers.Conv3D(
filters=embed_dim,
kernel_size=patch_size,
strides=patch_size,
padding="VALID",
)
self.flatten = layers.Reshape(target_shape=(-1, embed_dim))
def call(self, videos):
projected_patches = self.projection(videos)
flattened_patches = self.flatten(projected_patches)
return flattened_patches
"""
Explanation: Tubelet Embedding
In ViTs, an image is divided into patches, which are then spatially
flattened, a process known as tokenization. For a video, one can
repeat this process for individual frames. Uniform frame sampling
as suggested by the authors is a tokenization scheme in which we
sample frames from the video clip and perform simple ViT tokenization.
| |
| :--: |
| Uniform Frame Sampling Source |
Tubelet Embedding is different in terms of capturing temporal
information from the video.
First, we extract volumes from the video -- these volumes contain
patches of the frame and the temporal information as well. The volumes
are then flattened to build video tokens.
| |
| :--: |
| Tubelet Embedding Source |
End of explanation
"""
class PositionalEncoder(layers.Layer):
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
def build(self, input_shape):
_, num_tokens, _ = input_shape
self.position_embedding = layers.Embedding(
input_dim=num_tokens, output_dim=self.embed_dim
)
self.positions = tf.range(start=0, limit=num_tokens, delta=1)
def call(self, encoded_tokens):
# Encode the positions and add it to the encoded tokens
encoded_positions = self.position_embedding(self.positions)
encoded_tokens = encoded_tokens + encoded_positions
return encoded_tokens
"""
Explanation: Positional Embedding
This layer adds positional information to the encoded video tokens.
End of explanation
"""
def create_vivit_classifier(
tubelet_embedder,
positional_encoder,
input_shape=INPUT_SHAPE,
transformer_layers=NUM_LAYERS,
num_heads=NUM_HEADS,
embed_dim=PROJECTION_DIM,
layer_norm_eps=LAYER_NORM_EPS,
num_classes=NUM_CLASSES,
):
# Get the input layer
inputs = layers.Input(shape=input_shape)
# Create patches.
patches = tubelet_embedder(inputs)
# Encode patches.
encoded_patches = positional_encoder(patches)
# Create multiple layers of the Transformer block.
for _ in range(transformer_layers):
# Layer normalization and MHSA
x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim // num_heads, dropout=0.1
)(x1, x1)
# Skip connection
x2 = layers.Add()([attention_output, encoded_patches])
# Layer Normalization and MLP
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
x3 = keras.Sequential(
[
layers.Dense(units=embed_dim * 4, activation=tf.nn.gelu),
layers.Dense(units=embed_dim, activation=tf.nn.gelu),
]
)(x3)
# Skip connection
encoded_patches = layers.Add()([x3, x2])
# Layer normalization and Global average pooling.
representation = layers.LayerNormalization(epsilon=layer_norm_eps)(encoded_patches)
representation = layers.GlobalAvgPool1D()(representation)
# Classify outputs.
outputs = layers.Dense(units=num_classes, activation="softmax")(representation)
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=outputs)
return model
"""
Explanation: Video Vision Transformer
The authors suggest 4 variants of Vision Transformer:
Spatio-temporal attention
Factorized encoder
Factorized self-attention
Factorized dot-product attention
In this example, we will implement the Spatio-temporal attention
model for simplicity. The following code snippet is heavily inspired from
Image classification with Vision Transformer.
One can also refer to the
official repository of ViViT
which contains all the variants, implemented in JAX.
End of explanation
"""
def run_experiment():
# Initialize model
model = create_vivit_classifier(
tubelet_embedder=TubeletEmbedding(
embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE
),
positional_encoder=PositionalEncoder(embed_dim=PROJECTION_DIM),
)
# Compile the model with the optimizer, loss function
# and the metrics.
optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
# Train the model.
_ = model.fit(trainloader, epochs=EPOCHS, validation_data=validloader)
_, accuracy, top_5_accuracy = model.evaluate(testloader)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
return model
model = run_experiment()
"""
Explanation: Train
End of explanation
"""
NUM_SAMPLES_VIZ = 25
testsamples, labels = next(iter(testloader))
testsamples, labels = testsamples[:NUM_SAMPLES_VIZ], labels[:NUM_SAMPLES_VIZ]
ground_truths = []
preds = []
videos = []
for i, (testsample, label) in enumerate(zip(testsamples, labels)):
# Generate gif
with io.BytesIO() as gif:
imageio.mimsave(gif, (testsample.numpy() * 255).astype("uint8"), "GIF", fps=5)
videos.append(gif.getvalue())
# Get model prediction
output = model.predict(tf.expand_dims(testsample, axis=0))[0]
pred = np.argmax(output, axis=0)
ground_truths.append(label.numpy().astype("int"))
preds.append(pred)
def make_box_for_grid(image_widget, fit):
"""Make a VBox to hold caption/image for demonstrating option_fit values.
Source: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Styling.html
"""
# Make the caption
if fit is not None:
fit_str = "'{}'".format(fit)
else:
fit_str = str(fit)
h = ipywidgets.HTML(value="" + str(fit_str) + "")
# Make the green box with the image widget inside it
boxb = ipywidgets.widgets.Box()
boxb.children = [image_widget]
# Compose into a vertical box
vb = ipywidgets.widgets.VBox()
vb.layout.align_items = "center"
vb.children = [h, boxb]
return vb
boxes = []
for i in range(NUM_SAMPLES_VIZ):
ib = ipywidgets.widgets.Image(value=videos[i], width=100, height=100)
true_class = info["label"][str(ground_truths[i])]
pred_class = info["label"][str(preds[i])]
caption = f"T: {true_class} | P: {pred_class}"
boxes.append(make_box_for_grid(ib, caption))
ipywidgets.widgets.GridBox(
boxes, layout=ipywidgets.widgets.Layout(grid_template_columns="repeat(5, 200px)")
)
"""
Explanation: Inference
End of explanation
"""
|
oxy-cms/cms-tours | tours/2016-F_COMP-131_Li,J/RasPi-SimpleCV/DemoCV-orig-Copy0.ipynb | gpl-3.0 | from SimpleCV import *
from time import sleep
#import pytesseract
from bs4 import BeautifulSoup
"""
Explanation: The beginning of a demo
End of explanation
"""
cam = Camera()
"""
Explanation: Here we set up the camera
End of explanation
"""
img = cam.getImage()
"""
Explanation: Now we also get a single image from the cam, so mug it up!
End of explanation
"""
img.scale(0.5).show()
# Okay, let's keep that
img = img.scale(0.5)
"""
Explanation: In Ipython Notebook, you can sometimes use TAB autocompletion if the variable is in memory.
Let's see what "sh" brings up.
img.sh
Let's try shrinking beforehand
End of explanation
"""
img.edges?
"""
Explanation: Let's try getting some documentation
End of explanation
"""
# This draws directly onto the image (in place)
img.drawText('RASPBERRY Pi', 10, 10, color=Color.BLUE, )
img.show()
#Let's try bigger font
img.drawText('RASPBERRY Pi', 10, 10, color=Color.ORANGE, fontsize=50)
img.drawText('SimpleCV', 10, 100, color=Color.ORANGE, fontsize=50)
img.show()
#from SimpleCV import Image, Color, Display
# Make a function that does a half and half image.
def halfsies(left,right):
result = left
# crop the right image to be just the right side.
crop = right.crop(right.width/2.0,0,right.width/2.0,right.height)
# now paste the crop on the left image.
result = result.blit(crop,(left.width/2,0))
# return the results.
return result
# Load an image from imgur.
#img = Image('http://i.imgur.com/lfAeZ4n.png')
# binarize the image using a threshold of 90
# and invert the results.
output = img.binarize().invert()
# create the side by side image.
result = halfsies(img,output)
result.show()
"""
Explanation: Okay, cool - built in documentation. Good.
Let's draw some text on it.
End of explanation
"""
img.listHaarFeatures()
ppl = Image('group.jpg')
ppl = Image('tng.jpg')
ppl.show()
#obs = ppl.findHaarFeatures('face.xml')
obs = ppl.findHaarFeatures('upper_body.xml')
for f in obs:
print str(f.coordinates())
f.draw(color=Color.YELLOW, width=4)
ppl.show()
"""
Explanation: Let's look at some people
Who's there?
This function lists what features we can detect (built in).
End of explanation
"""
print obs.sortX()
# Highlight the person on the end
feat1 = obs.sortX()[-1]
feat1.draw(color=Color.RED, width=4)
feat2 = obs.sortX()[0]
# reshow the image after drawing in it
ppl.show()
# Here's a function that may help you swap parts in two images.
def swap( img1, f1, f2):
# Gte a "mask" that is the first feature cropped and ressized
# to the size of the second feature.
f1mask = f1.crop().resize(w=f2.width(), h=f2.height())
# get the similar for the other
f2mask = f2.crop().resize(w=f1.width(), h=f1.height())
# Set up the output images
out1 = img1.blit( f2mask, f1.topLeftCorner(), alpha=0.8)
out2 = out1.blit( f1mask, f2.topLeftCorner(), alpha=0.8)
return out1, out2
# Run the swap and ignore the first return value
img3, img4 = swap( ppl, feat1, feat2)
img4.show()
while True:
# Reduce the image size
img = cam.getImage().scale(0.5)
faces = img.findHaarFeatures("face.xml")
for f in faces:
f.draw(width=3)
img.show()
### NOTE: I think you'll want to move this code to the while loop above so it's executed repeatedly.
# That while loop never exits right now (always True)
face1 = faces.sortX()[]
face2= faces.sortX()[1]
img1 = swap (img, face1, face2)
img1.show()
"""
Explanation: Look at the image set as a whole
End of explanation
"""
|
GraysonR/titanic-data-analysis | 2015-12-23-titanic-initial-exploration.ipynb | mit | # Import magic
%matplotlib inline
# More imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Set up Seaborn
sns.set() # matplotlib defaults
# Load and show CSV data
titanic_data = pd.read_csv('titanic_data.csv')
titanic_data.head()
"""
Explanation: Exploring Titanic Dataset
Questions:
What does the data look like?
What possible factors could be correlated to (not) surviving the disaster?
End of explanation
"""
titanic_data.shape
titanic_data.columns
"""
Explanation: Observations
Tickets vary and most likely don't affect survivorship, or at least the ticket #'s don't. There are other variables attached to the ticket (price, location, class) that most likely have a clearer affect on survivorship.
What do the columns mean?
What do the embarked codes mean?
variable source
survival Survival (0 = No; 1 = Yes)
pclass Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare
cabin Cabin
embarked Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
End of explanation
"""
titanic_data[pd.notnull(titanic_data['Cabin'])].shape
"""
Explanation: Cabin seems to have a lot of NaN spaces. How much information is there?
End of explanation
"""
first = second = third = 0
cher = queens = shamp = some = 0
male = female = 0
for index, row in titanic_data.iterrows():# Determine classes
if row['Pclass'] == 1:
first += 1
elif row['Pclass'] == 2:
second += 2
else:
third += 1
if row['Embarked'] == 'C':
cher += 1
elif row['Embarked'] == 'Q':
queens += 1
elif row['Embarked'] == 'S':
shamp +=1
else:
some += 1
if row['Sex'] == 'male':
male += 1
else:
female += 1
print '''\tFirst: %d
Second: %d
Third: %d \n
Cherbourg: %d
Queenstown: %d
Southampton: %d
nill: %d \n
Male: %d
Female: %d''' % (first, second, third, cher, queens, shamp, some, male, female)
"""
Explanation: What is the distribution of class, port, gender?
End of explanation
"""
titanic_data[pd.notnull(titanic_data['Age'])].shape
titanic_data['Age'].describe()
# Plot of non-null age
sns.distplot(titanic_data[pd.notnull(titanic_data['Age'])]['Age'], kde=False)
"""
Explanation: IMPORTANT: Age has NaN
End of explanation
"""
married = titanic_data[((titanic_data['SibSp'] > 0) & (titanic_data['Age'] > 18))]
married.shape
sns.distplot(married[pd.notnull(married['Age'])]['Age'], kde=False, color='r')
not_married = titanic_data[((titanic_data['SibSp'] == 0) & (titanic_data['Age'] > 18))]
sns.distplot(not_married[pd.notnull(not_married['Age'])]['Age'], kde=False)
"""
Explanation: Possible Question
How did married couples fair? e.g. number both died, just one?
End of explanation
"""
|
bharat-b7/NN_glimpse | 2.2 CNN HandsOn - MNIST Dataset.ipynb | unlicense | import numpy as np
import keras
from keras.datasets import mnist
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
# Load the datasets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
"""
Explanation: CNN HandsOn with Keras
Problem Definition
Recognize handwritten digits
Data
The MNIST database (link) has a database of handwritten digits.
The training set has $60,000$ samples.
The test set has $10,000$ samples.
The digits are size-normalized and centered in a fixed-size image.
The data page has description on how the data was collected. It also has reports the benchmark of various algorithms on the test dataset.
Load the data
The data is available in the repo's data folder. Let's load that using the keras library.
For now, let's load the data and see how it looks.
End of explanation
"""
# What is the type of X_train?
# What is the type of y_train?
# Find number of observations in training data
# Find number of observations in test data
# Display first 2 records of X_train
# Display the first 10 records of y_train
# Find the number of observations for each digit in the y_train dataset
# Find the number of observations for each digit in the y_test dataset
# What is the dimension of X_train?. What does that mean?
"""
Explanation: Basic data analysis on the dataset
End of explanation
"""
from matplotlib import pyplot
import matplotlib as mpl
%matplotlib inline
# Displaying the first training data
fig = pyplot.figure()
ax = fig.add_subplot(1,1,1)
imgplot = ax.imshow(X_train[20], cmap=mpl.cm.Greys)
imgplot.set_interpolation('nearest')
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
pyplot.show()
# Let's now display the 11th record
"""
Explanation: Display Images
Let's now display some of the images and see how they look
We will be using matplotlib library for displaying the image
End of explanation
"""
|
ernestyalumni/CompPhys | Cpp/Integrate.ipynb | apache-2.0 | from itertools import combinations
import sympy
from sympy import Function, integrate, Product, Sum, Symbol, symbols
from sympy.abc import a,b,h,i,k,m,n,x
from sympy import Rational as Rat
def lagrange_basis_polys(N,x,xpts=None):
"""
lagrange_basis_polynomials(N,x,xpts)
returns the Lagrange basis polynomials as a list
INPUTS/PARAMETERS
-----------------
<int> N - N > 0. Note that there are N+1 points total
<sympy.Symbol> x
<list> xpts
"""
assert N > 0
if xpts != None:
assert len(xpts) == N + 1
if xpts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
xpts = symbols('x0:'+str(N+1))
basis_polys = []
for i in range(N+1):
tmpprod = Rat(1)
for k in [k for k in range(N+1) if k != i]:
tmpprod = tmpprod * (x - xpts[k])/(xpts[i]-xpts[k])
basis_polys.append(tmpprod)
return basis_polys
def lagrange_interp(N,x,xpts=None,ypts=None):
"""
lagrange_interp(N,x,xpts,ypts)
Lagrange interpolation formula
"""
if xpts != None and ypts != None:
assert len(xpts) == len(ypts)
if xpts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
xpts = symbols('x0:'+str(N+1))
if ypts == None:
print "I'll generate symbolic sympy symbols for you for xpts"
ypts = symbols('y0:'+str(N+1))
basis = lagrange_basis_polys(N,x,xpts)
p_N = sum( [ypts[i]*basis[i] for i in range(N+1)] )
return p_N
xpts = symbols('x0:'+str(1+1))
ypts = symbols('y0:'+str(1+1))
p_1x = lagrange_interp(1,x,xpts,ypts)
"""
Explanation: Numerical Integration
End of explanation
"""
x_0 = Symbol('x_0',real=True)
f = Function('f')
f_minush = p_1x.subs({xpts[0]:x_0-h,xpts[1]:x_0, ypts[0]:f(x_0-h), ypts[1]:f(x_0) })
integrate( f_minush, (x,x_0-h,x_0 ) )
"""
Explanation: Below is, mathematically, $f_{-h} := p_1(x)$ with $(x_0,y_0) = (x_0-h, f(x_0-h)), (x_1,y_1) = (x_0,f(x_0))$ and
$\int_{x_0-h}^{x_0} f_{-h}$
End of explanation
"""
f_h = p_1x.subs({xpts[0]:x_0,xpts[1]:x_0+h, ypts[0]:f(x_0), ypts[1]:f(x_0+h) })
integrate( f_h, (x,x_0,x_0+h ) )
( integrate( f_minush, (x,x_0-h,x_0 ) ) + integrate( f_h, (x,x_0,x_0+h ) ) ).simplify()
"""
Explanation: Then, we can use sympy to calculate, symbolically, $f_{h} := p_1(x)$ with $(x_0,y_0) = (x_0, f(x_0)), (x_1,y_1) = (x_0+h,f(x_0+h))$ and
$\int_{x_0}^{x_0+h} f_{h}$
End of explanation
"""
xpts = symbols('x0:'+str(2+1))
ypts = symbols('y0:'+str(2+1))
p_2x = lagrange_interp(2,x,xpts,ypts)
f2_h = p_2x.subs({xpts[0]:x_0-h,xpts[1]:x_0,xpts[2]:x_0+h,ypts[0]:f(x_0-h), ypts[1]:f(x_0),ypts[2]:f(x_0+h) })
integrate( f2_h,(x,x_0-h,x_0+h)).simplify()
"""
Explanation: Success! Trapezoid rule was rederived (stop using pen/pencil and paper or chalkboard; computers can do computations faster and without mistakes)
For a second order polynomial, $p_{N=2}(x)$,
End of explanation
"""
from sympy.polys.orthopolys import legendre_poly
print "n \t \t \t \t P_n(x) \n"
for i in range(11):
print str(i) + "\t \t \t \t " , legendre_poly(i,x)
sympy.latex(legendre_poly(2,x))
sympy.N( sympy.integrate(1/(2+x**2),(x,0,3)) )
"""
Explanation: Legendre Polynomials
I don't find the sympy documentation very satisfying (other than listing the argument syntax, no examples of usage, nor further explanation, beyond the barebones argument syntax, is given). So what I've done here is to try to show what I've done.
End of explanation
"""
|
AnasFullStack/Awesome-Full-Stack-Web-Developer | algorithms/algorithm_analyisis.ipynb | mit | import time
def sumOfN(n):
start = time.time()
theSum = 0
for i in range(1,n+1):
theSum = theSum + i
end = time.time()
return theSum,end-start
for i in range(5):
print("Sum is %d required %10.7f seconds"%sumOfN(1000000))
"""
Explanation: 2. Algorithm Analysis
Book URL
2.2. What Is Algorithm Analysis?
End of explanation
"""
import matplotlib.pyplot as plt
import math
n = 11
x = list(range(1,n))
linear = [x for x in range(1, n)]
Quadratic = [x**2 for x in range(1, n)]
LogLinear = [x * math.log(x) for x in range(1, n)]
Cubic = [x**3 for x in range(1, n)]
Exponential = [2**x for x in range(1, n)]
logarithmic = [math.log(x) for x in range(1, n)]
plt.plot(x, linear,'k--', label='linear')
plt.plot(x, Quadratic, 'k:', label='Quadratic')
plt.plot(x, LogLinear, 'y', label='LogLinear')
plt.plot(x, Cubic, 'b', label='Cubic')
plt.plot(x, Exponential, 'r', label='Exponential')
plt.plot(x, logarithmic, 'g', label='logarithmic')
legend = plt.legend(shadow=True)
plt.grid(True)
plt.show()
a=5
b=6
c=10
n = 100
for i in range(n):
for j in range(n):
x = i * i
y = j * j
z = i * j
for k in range(n):
w = a*k + 45
v = b*b
d = 33
"""
Explanation: 2.3. Big-O Notation
The order of magnitude function describes the part of T(n) that increases the fastest as the value of n increases. Order of magnitude is often called Big-O notation (for “order”) and written as O(f(n)).
It provides a useful approximation to the actual number of steps in the computation. The function f(n) provides a simple representation of the dominant part of the original T(n).
The parameter n is often referred to as the “size of the problem,” and we can read this as “T(n) is the time it takes to solve a problem of size n”
As another example, suppose that for some algorithm, the exact number of steps is
T(n)=5nˆ2+27n+1005
When n is small, say 1 or 2, the constant 1005 seems to be the dominant part of the function. However, as n gets larger, the nˆ2 term becomes the most important.
In fact, when n is really large, the other two terms become insignificant in the role that they play in determining the final result
Again, to approximate T(n) as n gets large, we can ignore the other terms and focus on 5nˆ2.
In addition, the coefficient 5 becomes insignificant as n gets large.
We would say then that the function T(n) has an order of magnitude f(n)=n2, or simply that it is O(n2).
Although we do not see this in the summation example, sometimes the performance of an algorithm depends on the exact values of the data rather than simply the size of the problem.
For these kinds of algorithms we need to characterize their performance in terms of best case, worst case, or average case performance.
The worst case performance refers to a particular data set where the algorithm performs especially poorly. Whereas a different data set for the exact same algorithm might have extraordinarily good performance. However, in most cases the algorithm performs somewhere in between these two extremes (average case). It is important for a computer scientist to understand these distinctions so they are not misled by one particular case.
A number of very common order of magnitude functions will come up over and over as you study algorithms
| f(n) | Name |
| --- | --- |
| 1 | Constant |
| log n | Logarithmic |
|n|Linear|
|n log n|Log Linear|
|nˆ2|Quadratic|
|nˆ3|Cubic|
|2ˆn|Exponential|
End of explanation
"""
import timeit
def test1():
l = []
for i in range(1000):
l = l + [i]
def test2():
l = []
for i in range(1000):
l.append(i)
def test3():
l = [i for i in range(1000)]
def test4():
l = list(range(1000))
t1 = timeit.Timer("test1()", "from __main__ import test1")
print("concat ",t1.timeit(number=1000), "milliseconds")
t2 = timeit.Timer("test2()", "from __main__ import test2")
print("append ",t2.timeit(number=1000), "milliseconds")
t3 = timeit.Timer("test3()", "from __main__ import test3")
print("comprehension ",t3.timeit(number=1000), "milliseconds")
t4 = timeit.Timer("test4()", "from __main__ import test4")
print("list range ",t4.timeit(number=1000), "milliseconds")
"""
Explanation: The number of assignment operations is the sum of four terms.
- The first term is the constant 3, representing the three assignment statements at the start of the fragment.
- The second term is 3nˆ2, since there are three statements that are performed nˆ2 times due to the nested iteration.
- The third term is 2n, two statements iterated n times.
- Finally, the fourth term is the constant 1, representing the final assignment statement.
This gives us T(n)=3+3n2+2n+1=3nˆ2+2n+4.
- By looking at the exponents, we can easily see that the n2 term will be dominant and therefore this fragment of code is O(n2).
- Note that all of the other terms as well as the coefficient on the dominant term can be ignored as n grows larger.
2.6. Lists
End of explanation
"""
popzero = timeit.Timer("x.pop(0)",
"from __main__ import x")
popend = timeit.Timer("x.pop()",
"from __main__ import x")
x = list(range(2000000))
print(popzero.timeit(number=1000))
x = list(range(2000000))
print(popend.timeit(number=1000))
"""
Explanation: Big-O efficiency of all the basic list operations.
| Operation | Big-O Efficiency |
| --- | --- |
| index [] | O(1) |
| index assignment | O(1) |
| append | O(1) |
| pop() | O(1) |
| pop(i) | O(n) |
| insert(i,item) | O(n) |
| del operator | O(n) |
| iteration | O(n) |
| contains (in) | O(n) |
| get slice [x:y] | O(k) |
| del slice | O(n) |
| set slice | O(n+k) |
| reverse | O(n) |
| concatenate | O(k) |
| sort | O(n log n) |
| multiply | O(nk) |
End of explanation
"""
import timeit
import random
for i in range(10000,1000001,20000):
t = timeit.Timer("random.randrange(%d) in x" % i,
"from __main__ import random, x")
x = list(range(i))
lst_time = t.timeit(number=1000)
x = {j:None for j in range(i)}
d_time = t.timeit(number=1000)
print("%d,%10.3f,%10.3f" % (i, lst_time, d_time))
"""
Explanation: 2.7. Dictionaries
| operation | Big-O Efficiency |
| --- | --- |
| copy | O(n) |
| get item | O(1) |
| set item | O(1) |
| delete item | O(1) |
| contains (in) | O(1) |
| iteration | O(n) |
Comparing Dictionary time with list time
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/io/tutorials/prometheus.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
"""
import os
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
from datetime import datetime
import tensorflow as tf
import tensorflow_io as tfio
"""
Explanation: Load metrics from Prometheus server
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/io/tutorials/prometheus"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/io/blob/master/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/prometheus.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Caution: In addition to python packages this notebook uses sudo apt-get install to install third party packages.
Overview
This tutorial loads CoreDNS metrics from a Prometheus server into a tf.data.Dataset, then uses tf.keras for training and inference.
CoreDNS is a DNS server with a focus on service discovery, and is widely deployed as a part of the Kubernetes cluster. For that reason it is often closely monitoring by devops operations.
This tutorial is an example that could be used by devops looking for automation in their operations through machine learning.
Setup and usage
Install required tensorflow-io package, and restart runtime
End of explanation
"""
!curl -s -OL https://github.com/coredns/coredns/releases/download/v1.6.7/coredns_1.6.7_linux_amd64.tgz
!tar -xzf coredns_1.6.7_linux_amd64.tgz
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/Corefile
!cat Corefile
# Run `./coredns` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./coredns &')
"""
Explanation: Install and setup CoreDNS and Prometheus
For demo purposes, a CoreDNS server locally with port 9053 open to receive DNS queries and port 9153 (defult) open to expose metrics for scraping. The following is a basic Corefile configuration for CoreDNS and is available to download:
.:9053 {
prometheus
whoami
}
More details about installation could be found on CoreDNS's documentation.
End of explanation
"""
!curl -s -OL https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
!tar -xzf prometheus-2.15.2.linux-amd64.tar.gz --strip-components=1
!curl -s -OL https://raw.githubusercontent.com/tensorflow/io/master/docs/tutorials/prometheus/prometheus.yml
!cat prometheus.yml
# Run `./prometheus` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw('./prometheus &')
"""
Explanation: The next step is to setup Prometheus server and use Prometheus to scrape CoreDNS metrics that are exposed on port 9153 from above. The prometheus.yml file for configuration is also available for download:
End of explanation
"""
!sudo apt-get install -y -qq dnsutils
!dig @127.0.0.1 -p 9053 demo1.example.org
!dig @127.0.0.1 -p 9053 demo2.example.org
"""
Explanation: In order to show some activity, dig command could be used to generate a few DNS queries against the CoreDNS server that has been setup:
End of explanation
"""
dataset = tfio.experimental.IODataset.from_prometheus(
"coredns_dns_request_count_total", 5, endpoint="http://localhost:9090")
print("Dataset Spec:\n{}\n".format(dataset.element_spec))
print("CoreDNS Time Series:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}".format(time, value['coredns']['localhost:9153']['coredns_dns_request_count_total']))
"""
Explanation: Now a CoreDNS server whose metrics are scraped by a Prometheus server and ready to be consumed by TensorFlow.
Create Dataset for CoreDNS metrics and use it in TensorFlow
Create a Dataset for CoreDNS metrics that is available from PostgreSQL server, could be done with tfio.experimental.IODataset.from_prometheus. At the minimium two arguments are needed. query is passed to Prometheus server to select the metrics and length is the period you want to load into Dataset.
You can start with "coredns_dns_request_count_total" and "5" (secs) to create the Dataset below. Since earlier in the tutorial two DNS queries were sent, it is expected that the metrics for "coredns_dns_request_count_total" will be "2.0" at the end of the time series:
End of explanation
"""
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_gc_sys_bytes", 5, endpoint="http://localhost:9090")
print("Time Series CoreDNS/Prometheus Comparision:")
for (time, value) in dataset:
# time is milli second, convert to data time:
time = datetime.fromtimestamp(time // 1000)
print("{}: {}/{}".format(
time,
value['coredns']['localhost:9153']['go_memstats_gc_sys_bytes'],
value['prometheus']['localhost:9090']['go_memstats_gc_sys_bytes']))
"""
Explanation: Further looking into the spec of the Dataset:
```
(
TensorSpec(shape=(), dtype=tf.int64, name=None),
{
'coredns': {
'localhost:9153': {
'coredns_dns_request_count_total': TensorSpec(shape=(), dtype=tf.float64, name=None)
}
}
}
)
```
It is obvious that the dataset consists of a (time, values) tuple where the values field is a python dict expanded into:
"job_name": {
"instance_name": {
"metric_name": value,
},
}
In the above example, 'coredns' is the job name, 'localhost:9153' is the instance name, and 'coredns_dns_request_count_total' is the metric name. Note that depending on the Prometheus query used, it is possible that multiple jobs/instances/metrics could be returned. This is also the reason why python dict has been used in the structure of the Dataset.
Take another query "go_memstats_gc_sys_bytes" as an example. Since both CoreDNS and Prometheus are written in Golang, "go_memstats_gc_sys_bytes" metric is available for both "coredns" job and "prometheus" job:
Note: This cell may error out the first time you run it. Run it again and it will pass .
End of explanation
"""
n_steps, n_features = 2, 1
simple_lstm_model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(8, input_shape=(n_steps, n_features)),
tf.keras.layers.Dense(1)
])
simple_lstm_model.compile(optimizer='adam', loss='mae')
"""
Explanation: The created Dataset is ready to be passed to tf.keras directly for either training or inference purposes now.
Use Dataset for model training
With metrics Dataset created, it is possible to directly pass the Dataset to tf.keras for model training or inference.
For demo purposes, this tutorial will just use a very simple LSTM model with 1 feature and 2 steps as input:
End of explanation
"""
n_samples = 10
dataset = tfio.experimental.IODataset.from_prometheus(
"go_memstats_sys_bytes", n_samples + n_steps - 1 + 1, endpoint="http://localhost:9090")
# take go_memstats_gc_sys_bytes from coredns job
dataset = dataset.map(lambda _, v: v['coredns']['localhost:9153']['go_memstats_sys_bytes'])
# find the max value and scale the value to [0, 1]
v_max = dataset.reduce(tf.constant(0.0, tf.float64), tf.math.maximum)
dataset = dataset.map(lambda v: (v / v_max))
# expand the dimension by 1 to fit n_features=1
dataset = dataset.map(lambda v: tf.expand_dims(v, -1))
# take a sliding window
dataset = dataset.window(n_steps, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda d: d.batch(n_steps))
# the first value is x and the next value is y, only take 10 samples
x = dataset.take(n_samples)
y = dataset.skip(1).take(n_samples)
dataset = tf.data.Dataset.zip((x, y))
# pass the final dataset to model.fit for training
simple_lstm_model.fit(dataset.batch(1).repeat(10), epochs=5, steps_per_epoch=10)
"""
Explanation: The dataset to be used is the value of 'go_memstats_sys_bytes' for CoreDNS with 10 samples. However, since a sliding window of window=n_steps and shift=1 are formed, additional samples are needed (for any two consecute elements, the first is taken as x and the second is taken as y for training). The total is 10 + n_steps - 1 + 1 = 12 seconds.
The data value is also scaled to [0, 1].
End of explanation
"""
|
dvirsamuel/MachineLearningCourses | Visual Recognision - Stanford/assignment2/BatchNormalization.ipynb | gpl-3.0 | # As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Batch Normalization
One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].
The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.
The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.
It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift", ICML 2015.
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization
# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)
print 'Before batch normalization:'
print ' means: ', a.mean(axis=0)
print ' stds: ', a.std(axis=0)
# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print ' mean: ', a_norm.mean(axis=0)
print ' std: ', a_norm.std(axis=0)
# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
batchnorm_forward(a, gamma, beta, bn_param)
bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=0)
print ' stds: ', a_norm.std(axis=0)
"""
Explanation: Batch normalization: Forward
In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
End of explanation
"""
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Batch Normalization: backward
Now implement the backward pass for batch normalization in the function batchnorm_backward.
To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.
Once you have finished, run the following to numerically check your backward pass.
End of explanation
"""
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)
bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)
t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()
print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
"""
Explanation: Batch Normalization: alternative backward
In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.
Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.
NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it.
End of explanation
"""
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64,
use_batchnorm=True)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
if reg == 0: print
"""
Explanation: Fully Connected Nets with Batch Normalization
Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.
Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.
HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
bn_solver.train()
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=200)
solver.train()
"""
Explanation: Batchnorm for deep networks
Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.
End of explanation
"""
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(10, 9)
plt.show()
"""
Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.
End of explanation
"""
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]
num_train = 1000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)
bn_solver = Solver(bn_model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
bn_solver.train()
bn_solvers[weight_scale] = bn_solver
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=False, print_every=200)
solver.train()
solvers[weight_scale] = solver
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []
for ws in weight_scales:
best_train_accs.append(max(solvers[ws].train_acc_history))
bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))
best_val_accs.append(max(solvers[ws].val_acc_history))
bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))
final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))
plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()
#plt.gcf().set_size_inches(10, 15)
plt.gcf().set_size_inches(9,7)
#fg = plt.figure()
#plt.savefig('vsvs.png')
plt.show()
"""
Explanation: Batch normalization and initialization
We will now run a small experiment to study the interaction of batch normalization and weight initialization.
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
End of explanation
"""
|
tensorflow/cloud | g3doc/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Cloud Authors.
End of explanation
"""
import datetime
import uuid
import numpy as np
import pandas as pd
import tensorflow as tf
import os
import sys
import subprocess
from tensorflow.keras import datasets, layers, models
from sklearn.model_selection import train_test_split
! pip install -q tensorflow-cloud
import tensorflow_cloud as tfc
tf.version.VERSION
"""
Explanation: HP Tuning on Google Cloud with CloudTuner
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/cloud/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/cloud/blob/master/g3doc/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb""><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/cloud/blob/master/g3doc/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/cloud/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://kaggle.com/kernels/welcome?src=https://github.com/tensorflow/cloud/blob/master/g3doc/tutorials/hp_tuning_cifar10_using_google_cloud.ipynb" target="blank">
<img width="90" src="https://www.kaggle.com/static/images/site-logo.png" alt="Kaggle logo">Run in Kaggle
</a>
</td>
</table>
This example is based on Keras-Tuner CIFAR10 sample to demonstrate how to run HP tuning jobs using
TensorFlow Cloud and Google Cloud Platform at scale.
Import required modules
End of explanation
"""
# Set Google Cloud Specific parameters
# TODO: Please set GCP_PROJECT_ID to your own Google Cloud project ID.
GCP_PROJECT_ID = 'YOUR_PROJECT_ID' #@param {type:"string"}
# TODO: Change the Service Account Name to your own Service Account
SERVICE_ACCOUNT_NAME = 'YOUR_SERVICE_ACCOUNT_NAME' #@param {type:"string"}
SERVICE_ACCOUNT = f'{SERVICE_ACCOUNT_NAME}@{GCP_PROJECT_ID}.iam.gserviceaccount.com'
# TODO: set GCS_BUCKET to your own Google Cloud Storage (GCS) bucket.
GCS_BUCKET = 'YOUR_GCS_BUCKET_NAME' #@param {type:"string"}
# DO NOT CHANGE: Currently only the 'us-central1' region is supported.
REGION = 'us-central1'
# Set Tuning Specific parameters
# OPTIONAL: You can change the job name to any string.
JOB_NAME = 'cifar10' #@param {type:"string"}
# OPTIONAL: Set Number of concurrent tuning jobs that you would like to run.
NUM_JOBS = 5 #@param {type:"string"}
# TODO: Set the study ID for this run. Study_ID can be any unique string.
# Reusing the same Study_ID will cause the Tuner to continue tuning the
# Same Study parameters. This can be used to continue on a terminated job,
# or load stats from a previous study.
STUDY_NUMBER = '00001' #@param {type:"string"}
STUDY_ID = f'{GCP_PROJECT_ID}_{JOB_NAME}_{STUDY_NUMBER}'
# Setting location were training logs and checkpoints will be stored
GCS_BASE_PATH = f'gs://{GCS_BUCKET}/{JOB_NAME}/{STUDY_ID}'
TENSORBOARD_LOGS_DIR = os.path.join(GCS_BASE_PATH,"logs")
"""
Explanation: Project Configurations
Set project parameters For Google Cloud Specific parameters refer to Google Cloud Project Setup Instructions.
End of explanation
"""
# Using tfc.remote() to ensure this code only runs in notebook
if not tfc.remote():
# Authentication for Kaggle Notebooks
if "kaggle_secrets" in sys.modules:
from kaggle_secrets import UserSecretsClient
UserSecretsClient().set_gcloud_credentials(project=GCP_PROJECT_ID)
# Authentication for Colab Notebooks
if "google.colab" in sys.modules:
from google.colab import auth
auth.authenticate_user()
os.environ["GOOGLE_CLOUD_PROJECT"] = GCP_PROJECT_ID
"""
Explanation: Authenticating the notebook to use your Google Cloud Project
For Kaggle Notebooks click on "Add-ons"->"Google Cloud SDK" before running the cell below.
End of explanation
"""
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Setting input specific parameters
# The model expects input of dimensions (INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3)
INPUT_IMG_SIZE = 32
NUM_CLASSES = 10
"""
Explanation: Load and prepare data
Read raw data and split to train and test data sets.
End of explanation
"""
import kerastuner
from tensorflow.keras import layers
# Configure the search space
HPS = kerastuner.engine.hyperparameters.HyperParameters()
HPS.Int('conv_blocks', 3, 5, default=3)
for i in range(5):
HPS.Int('filters_' + str(i), 32, 256, step=32)
HPS.Choice('pooling_' + str(i), ['avg', 'max'])
HPS.Int('hidden_size', 30, 100, step=10, default=50)
HPS.Float('dropout', 0, 0.5, step=0.1, default=0.5)
HPS.Float('learning_rate', 1e-4, 1e-2, sampling='log')
def build_model(hp):
inputs = tf.keras.Input(shape=(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3))
x = inputs
for i in range(hp.get('conv_blocks')):
filters = hp.get('filters_'+ str(i))
for _ in range(2):
x = layers.Conv2D(
filters, kernel_size=(3, 3), padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
if hp.get('pooling_' + str(i)) == 'max':
x = layers.MaxPool2D()(x)
else:
x = layers.AvgPool2D()(x)
x = layers.GlobalAvgPool2D()(x)
x = layers.Dense(hp.get('hidden_size'),
activation='relu')(x)
x = layers.Dropout(hp.get('dropout'))(x)
outputs = layers.Dense(NUM_CLASSES, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
optimizer=tf.keras.optimizers.Adam(
hp.get('learning_rate')),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
"""
Explanation: Define the model architecture and hyperparameters
In this section we define our tuning parameters using Keras Tuner Hyper Parameters and a model-building function. The model-building function takes an argument hp from which you can sample hyperparameters, such as hp.Int('units', min_value=32, max_value=512, step=32) (an integer from a certain range).
End of explanation
"""
from tensorflow_cloud import CloudTuner
distribution_strategy = None
if not tfc.remote():
# Using MirroredStrategy to use a single instance with multiple GPUs
# during remote execution while using no strategy for local.
distribution_strategy = tf.distribute.MirroredStrategy()
tuner = CloudTuner(
build_model,
project_id=GCP_PROJECT_ID,
project_name= JOB_NAME,
region=REGION,
objective='accuracy',
hyperparameters=HPS,
max_trials=100,
directory=GCS_BASE_PATH,
study_id=STUDY_ID,
overwrite=True,
distribution_strategy=distribution_strategy)
# Configure Tensorboard logs
callbacks=[
tf.keras.callbacks.TensorBoard(log_dir=TENSORBOARD_LOGS_DIR)]
# Setting to run tuning remotely, you can run tuner locally to validate it works first.
if tfc.remote():
tuner.search(x=x_train, y=y_train, epochs=30, validation_split=0.2, callbacks=callbacks)
# You can uncomment the code below to run the tuner.search() locally to validate
# everything works before submitting the job to Cloud. Stop the job manually
# after one epoch.
# else:
# tuner.search(x=x_train, y=y_train, epochs=1, validation_split=0.2, callbacks=callbacks)
"""
Explanation: Configure a CloudTuner
In this section we configure the cloud tuner for both remote and local execution. The main difference between the two is the distribution strategy.
End of explanation
"""
# You can install custom modules for your Docker images via requirements txt file.
with open('requirements.txt','w') as f:
f.write('pandas==1.1.5\n')
f.write('numpy==1.18.5\n')
f.write('tensorflow-cloud\n')
f.write('keras-tuner\n')
# Optional: Some recommended base images. If you provide none the system will choose one for you.
TF_GPU_IMAGE= "gcr.io/deeplearning-platform-release/tf2-cpu.2-5"
TF_CPU_IMAGE= "gcr.io/deeplearning-platform-release/tf2-gpu.2-5"
tfc.run_cloudtuner(
distribution_strategy='auto',
requirements_txt='requirements.txt',
docker_config=tfc.DockerConfig(
parent_image=TF_GPU_IMAGE,
image_build_bucket=GCS_BUCKET
),
chief_config=tfc.COMMON_MACHINE_CONFIGS['K80_4X'],
job_labels={'job': JOB_NAME},
service_account=SERVICE_ACCOUNT,
num_jobs=NUM_JOBS
)
"""
Explanation: Start the remote training
This step will prepare your code from this notebook for remote execution and start NUM_JOBS parallel runs remotely to train the model. Once the jobs are submitted you can go to the next step to monitor the jobs progress via Tensorboard.
End of explanation
"""
%load_ext tensorboard
%tensorboard --logdir $TENSORBOARD_LOGS_DIR
"""
Explanation: Training Results
Reconnect your Colab instance
Most remote training jobs are long running, if you are using Colab it may time out before the training results are available. In that case rerun the following sections to reconnect and configure your Colab instance to access the training results. Run the following sections in order:
Import required modules
Project Configurations
Authenticating the notebook to use your Google Cloud Project
Load Tensorboard
While the training is in progress you can use Tensorboard to view the results. Note the results will show only after your training has started. This may take a few minutes.
End of explanation
"""
if not tfc.remote():
tuner.results_summary(1)
best_model = tuner.get_best_models(1)[0]
best_hyperparameters = tuner.get_best_hyperparameters(1)[0]
# References to best trial assets
best_trial_id = tuner.oracle.get_best_trials(1)[0].trial_id
best_trial_dir = tuner.get_trial_dir(best_trial_id)
"""
Explanation: You can access the training assets as follows. Note the results will show only after your tuning job has completed at least once trial. This may take a few minutes.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session05/Day4/stackdiff_Narayan/01_Registration/Register_images_exercise.ipynb | mit | !ls *fits
"""
Explanation: Image Registration Exercise
Written by Gautham Narayan (gnarayan@stsci.edu) for LSST DSFP #5
In this directory, you should be able to find two fits file from one of the projects I worked on
End of explanation
"""
import astropy.io.fits as afits
from astropy.wcs import WCS
from astropy.visualization import ZScaleInterval
import matplotlib
%matplotlib notebook
%pylab
##### I've given you some imports above. They're a big hint on what to try. You get to do this!!! #####
"""
Explanation: While the images have been de-trended, they still have the original WCS from the telescope. They aren't aligned. You could use ds9 to check this trivially, but lets do it with astropy instead.
End of explanation
"""
!solve-field -h
"""
Explanation: Use the astrometry.net client (solve-field) to determine an accurate WCS solution for this field.
End of explanation
"""
print(WCS(f1[0].header))
"""
Explanation: Options you might want to look at:
--ra, --dec and --radius: Restrict the solution to some radius around RA and Dec. The regular telescope WCS should be plenty for an initial guess.
--scale-units, --scale-low and --scale-high: You might not know the exact pixel scale (and it's a function of where you are on the detector in any case, but you also set limits from this based on the existing CD1_2, CD2_1
-D, -N: Write to an output directory and write out a new FITS file with the solved WCS.
<u>Don't use out/ as the output directory.</u>
--parity: You can usually set this and get a speedup of 2x
To get info from the header, you can use astropy, or just use imhead from the WCSTools package at the command line
End of explanation
"""
!imhead wdd7.040920_0452.051_6.fits
"""
Explanation: or
End of explanation
"""
!wcs-to-tan -h
"""
Explanation: Use the above info to solve for the WCS for both images.
The problem with the distortion polynomial that astronometry.net uses is that routines like ds9 ignore it. Look at astrometry.net's wcs-to-tan routine and convert the solved WCS to the tangent projection.
End of explanation
"""
|
gilmana/Cu_transition_time_course- | data_explore_failed_clstr_mthds/Data_exploring_post_clustering_failure.ipynb | mit | %matplotlib inline
df2_TPM_values = df2_TPM.loc[:,"5GB1_FM40_T0m_TR2":"5GB1_FM40_T180m_TR1"]
df2_TPM_values
df2_TPM_values.describe()
df2_TPM.idxmax?
index = df2_TPM_values.sort("5GB1_FM40_T0m_TR2", ascending = False).index.tolist()
top_expressed = df2_TPM.loc[index].iloc[:20,:]
top_expressed.to_csv("top_expressed.csv")
top_expressed
#note: looking with mitch at gene MCBURv2_200002 - this is low expressed in all other transcriptomcis data except for
#fm81 where we added extra NO3 and its within the top 15 top expressed genes
#gene 20471 is top expressed (within top 15 genes across all transcripomics samples that we have)
df2_TPM_values.plot.hist(bins=100)
import matplotlib.pyplot as plt
plt.style.use("seaborn-white")
"""
columns = ['5GB1_FM40_T0m_TR2', '5GB1_FM40_T10m_TR3', '5GB1_FM40_T20m_TR2', '5GB1_FM40_T40m_TR1',
'5GB1_FM40_T60m_TR1', '5GB1_FM40_T90m_TR2', '5GB1_FM40_T150m_TR1_remake', '5GB1_FM40_T180m_TR1']
"""
x1 = df2_TPM_values["5GB1_FM40_T0m_TR2"]
x2 = df2_TPM_values['5GB1_FM40_T10m_TR3']
x3 = df2_TPM_values['5GB1_FM40_T20m_TR2']
x4 = df2_TPM_values['5GB1_FM40_T40m_TR1']
x5 = df2_TPM_values['5GB1_FM40_T60m_TR1']
x6 = df2_TPM_values['5GB1_FM40_T90m_TR2']
x7 = df2_TPM_values['5GB1_FM40_T150m_TR1_remake']
x8 = df2_TPM_values['5GB1_FM40_T180m_TR1']
kwargs = dict(histtype = "stepfilled", alpha = 0.1, normed = True, bins = 1000)
axes = plt.gca()
axes.set_xlim([0,5000])
axes.set_ylim([0,0.001])
plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs)
plt.hist(x4, **kwargs)
plt.hist(x5, **kwargs)
plt.hist(x6, **kwargs)
plt.hist(x7, **kwargs)
plt.hist(x8, **kwargs)
"""
Explanation: Historgram of raw data (TPM)
End of explanation
"""
from sklearn.preprocessing import StandardScaler, MinMaxScaler
#will start with standard scalar - need to scale along rows, therefore transposing the data to complete standard scale.
df2_TPM_values_T = df2_TPM_values.T #transposing the data
standard_scaler = StandardScaler()
array_stand_scale = standard_scaler.fit_transform(df2_TPM_values_T)
df2a_stand_scale = pd.DataFrame(array_stand_scale)
df2a_stand_scale = df2a_stand_scale.T
df2a_stand_scale.columns = df2_TPM_values.columns
df2a_stand_scale.index = df2_TPM_values.index
df2a_stand_scale.describe()
"""
columns = ['5GB1_FM40_T0m_TR2', '5GB1_FM40_T10m_TR3', '5GB1_FM40_T20m_TR2', '5GB1_FM40_T40m_TR1',
'5GB1_FM40_T60m_TR1', '5GB1_FM40_T90m_TR2', '5GB1_FM40_T150m_TR1_remake', '5GB1_FM40_T180m_TR1']
"""
x1 = df2a_stand_scale["5GB1_FM40_T0m_TR2"]
x2 = df2a_stand_scale['5GB1_FM40_T10m_TR3']
x3 = df2a_stand_scale['5GB1_FM40_T20m_TR2']
x4 = df2a_stand_scale['5GB1_FM40_T40m_TR1']
x5 = df2a_stand_scale['5GB1_FM40_T60m_TR1']
x6 = df2a_stand_scale['5GB1_FM40_T90m_TR2']
x7 = df2a_stand_scale['5GB1_FM40_T150m_TR1_remake']
x8 = df2a_stand_scale['5GB1_FM40_T180m_TR1']
kwargs = dict(histtype = "stepfilled", alpha = 0.1, normed = True, bins = 1000)
"""
axes = plt.gca()
axes.set_xlim([0,5000])
axes.set_ylim([0,0.001])
"""
plt.hist(x1, **kwargs)
#plt.hist(x2, **kwargs)
#plt.hist(x3, **kwargs)
#plt.hist(x4, **kwargs)
#plt.hist(x5, **kwargs)
#plt.hist(x6, **kwargs)
#plt.hist(x7, **kwargs)
#plt.hist(x8, **kwargs)
"""
Explanation: Standard scale across rows and replot historgram
End of explanation
"""
def draw_histograms(df, variables, n_rows, n_cols):
fig=plt.figure(figsize = (20,10))
for i, var_name in enumerate(variables):
ax=fig.add_subplot(n_rows,n_cols,i+1)
df[var_name].hist(ax=ax, histtype = "stepfilled", alpha = 0.3, normed = True, bins = 1000)
ax.set_title(var_name)
plt.show()
testa = df2a_stand_scale
draw_histograms(testa, testa.columns, 2, 4)
"""
Explanation: Would like to plot every standard scaled histogram by itself. Found a function from stack exchagne about making histograms for every column
End of explanation
"""
df2_TPM_values_T = df2_TPM_values.T #transposing the data
min_max_scalar = MinMaxScaler()
df2b_mean_max_rows = min_max_scalar.fit_transform(df2_TPM_values_T)
df2b_mean_max_rows = pd.DataFrame(df2b_mean_max_rows.T)
df2b_mean_max_rows.columns = df2_TPM_values.columns
df2b_mean_max_rows.index = df2_TPM_values.index
testb = df2b_mean_max_rows
draw_histograms(testb, testb.columns, 2, 4)
"""
Explanation: Min/Max scale across rows and replot the histograms
End of explanation
"""
min_max_scalar = MinMaxScaler()
df2c_mean_max_columns = min_max_scalar.fit_transform(df2_TPM_values)
df2c_mean_max_columns = pd.DataFrame(df2c_mean_max_columns)
df2c_mean_max_columns.columns = df2_TPM_values.columns
df2c_mean_max_columns.index = df2_TPM_values.index
def draw_histograms(df, variables, n_rows, n_cols):
fig=plt.figure(figsize = (20,10))
for i, var_name in enumerate(variables):
ax=fig.add_subplot(n_rows,n_cols,i+1)
df[var_name].hist(ax=ax, histtype = "stepfilled", alpha = 0.3, normed = True, bins = 100)
ax.set_title(var_name)
ax.set_ylim([0, 5])
plt.show()
testc = df2c_mean_max_columns
draw_histograms(testc, testc.columns, 2, 4)
"""
Explanation: Min/Max scale across columns and replot the histograms
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(df2c_mean_max_columns)
print(pca.explained_variance_ratio_) #how much of the dimensionality reduction is
pcat = pca.transform(df2c_mean_max_columns)
plt.figure()
plt.scatter(pcat[:,0], pcat[:,1])
pca = PCA(n_components=2)
pca.fit(df2c_mean_max_columns)
print(pca.explained_variance_ratio_) #how much of the dimensionality reduction is
pcat = pca.transform(df2c_mean_max_columns)
plt.figure()
plt.xlim(-.1, .4)
plt.ylim(-.2, .2)
plt.scatter(pcat[:,0], pcat[:,1])
"""
Explanation: replicating Daves PCA
End of explanation
"""
pca = PCA(n_components=2)
pca.fit(df2a_stand_scale)
print(pca.explained_variance_ratio_) #how much of the dimensionality reduction is
pcat = pca.transform(df2a_stand_scale)
#plt.figure()
plt.scatter(pcat[:,0], pcat[:,1])
"""
Explanation: Applying PCA to standard scaled data
End of explanation
"""
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2_TPM_values
#print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
#df.plot.hist(bins=100)
#plt.savefig("hist.pdf")
#X = StandardScaler().fit_transform(df)
X = MinMaxScaler().fit_transform(df)
"""
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
"""
for eps_value in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=eps_value, min_samples=ms).fit(X)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps_value, ms, n_clusters_))
"""
Explanation: Reviewing Daves gridsearch scan through the epsioln parameter in DBSCAN
End of explanation
"""
X = MinMaxScaler().fit_transform(df2_TPM_values)
db_TPM_values = DBSCAN(eps=0.0001, min_samples=3).fit(X)
labels_TPM_values = db_TPM_values.labels_
print(np.unique(labels_TPM_values))
print(np.bincount(labels_TPM_values[labels_TPM_values!=-1]))
labels_TPM_values
"""
Explanation: What do the 14 clusters look like?
End of explanation
"""
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2a_stand_scale
#print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
#df.plot.hist(bins=100)
#plt.savefig("hist.pdf")
#X = StandardScaler().fit_transform(df)
#X = MinMaxScaler().fit_transform(df)
"""
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
"""
for eps_value in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=eps_value, min_samples=ms).fit(df)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps_value, ms, n_clusters_))
"""
Explanation: Applying Daves grid search to mean centered data across rows
End of explanation
"""
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2_TPM_values
#print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
#df.plot.hist(bins=100)
#plt.savefig("hist.pdf")
X = StandardScaler().fit_transform(df)
#X = MinMaxScaler().fit_transform(df)
"""
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
"""
for eps_value in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=eps_value, min_samples=ms).fit(X)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps_value, ms, n_clusters_))
"""
Explanation: Applying Daves grid search to mean centered data across columns
It doesnt make sense to me to mean center across columns
End of explanation
"""
df2_TPM_log2.columns
tp_zero = df2_TPM_log2["5GB1_FM40_T0m_TR2"]
df2_TPM_log2_diff = df2_TPM_log2.subtract(df2_TPM_log2["5GB1_FM40_T0m_TR2"], axis = "index")
df2_TPM_log2_diff = df2_TPM_log2_diff.loc[:,'5GB1_FM40_T10m_TR3':'5GB1_FM40_T180m_TR1']
df2_TPM_log2_diff.describe()
def draw_histograms(df, variables, n_rows, n_cols):
fig=plt.figure(figsize = (20,10))
for i, var_name in enumerate(variables):
ax=fig.add_subplot(n_rows,n_cols,i+1)
df[var_name].hist(ax=ax, histtype = "stepfilled", alpha = 0.3, normed = True, bins = 1000)
ax.set_title(var_name)
plt.show()
testa = df2_TPM_log2_diff
draw_histograms(testa, testa.columns, 2, 4)
"""
Explanation: Log2 fold change over timepoint zero
End of explanation
"""
pca = PCA(n_components=2)
pca.fit(df2_TPM_log2_diff)
print(pca.explained_variance_ratio_) #how much of the dimensionality reduction is
pcat = pca.transform(df2_TPM_log2_diff)
#plt.figure()
plt.scatter(pcat[:,0], pcat[:,1])
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2_TPM_log2_diff
#print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
#df.plot.hist(bins=100)
#plt.savefig("hist.pdf")
#X = StandardScaler().fit_transform(df)
#X = MinMaxScaler().fit_transform(df)
X = df
"""
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
"""
for eps_value in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=eps_value, min_samples=ms).fit(X)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print(np.unique(labels))
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps_value, ms, n_clusters_))
"""
Explanation: PCA this distribution
End of explanation
"""
dbscan_log2diff = DBSCAN(eps=0.1, min_samples=3).fit(df2_TPM_log2_diff)
labels_log2diff = dbscan_log2diff.labels_
print(np.unique(labels_log2diff))
print(np.bincount(labels_log2diff[labels_log2diff!=-1]))
#cluster index
clusters_38 = {i: np.where(labels_log2diff == i)[0] for i in range(38)}
clusters_38
pca = PCA(n_components=2)
pca.fit(df2_TPM_log2_diff)
print(pca.explained_variance_ratio_) #how much of the dimensionality reduction is
pcat = pca.transform(df2_TPM_log2_diff)
pcat = pd.DataFrame(pcat)
pcat[2] = labels_log2diff
#plt.figure()
#plt.scatter(pcat[:,0], pcat[:,1])
pcat[2].max()
def dfScatter(df, xcol=0, ycol=1, catcol=2):
fig, ax = plt.subplots()
categories = np.unique(df[catcol])
colors = np.linspace(0, 1, len(categories))
colordict = dict(zip(categories, colors))
df["Color"] = df[catcol].apply(lambda x: colordict[x])
ax.scatter(df[xcol], df[ycol], c = df.Color)
plt.xlim(-1, 1)
plt.ylim(-1, 1)
return fig
dfScatter(pcat)
pcat2 = pcat
pcat2.iloc[0:1000,2] = 0
#pcat2.iloc[1000:3000,2] = 1
#pcat2.iloc[3000:,2]= 2
"""
Explanation: ploting the 38 clusters unto the log2fold diff PCA
Lets explore the eps = 0.075 and min_samples =5
End of explanation
"""
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2_TPM_values
print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
df.plot.hist(bins=100)
plt.savefig("hist.pdf")
#X = StandardScaler().fit_transform(df)
X = MinMaxScaler().fit_transform(df)
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
for eps in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=0.3, min_samples=3).fit(X)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps, ms, n_clusters_))
"""
Explanation: dfScatter(pcat)
Code from Dave
End of explanation
"""
#!/usr/bin/env python3
import math
import sys
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
df = df2_TPM_values
#print(df.head())
#dfr = df / df.loc("5GB1_FM40_T0m_TR2")
#print(dfr.head())
#sys.exit()
# PCA and clustering of dfr
#df.plot.hist(bins=100)
#plt.savefig("hist.pdf")
#X = StandardScaler().fit_transform(df)
X = MinMaxScaler().fit_transform(df)
"""
dfX = pd.DataFrame(X)
dfX.plot.hist(bins=100)
print(dfX.head())
plt.savefig("histX.pdf")
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)
pcat = pca.transform(X)
plt.figure()
#plt.xlim(-5, 5)
#plt.ylim(-5, 5)
plt.scatter(pcat[:,0], pcat[:,1])
plt.show()
plt.savefig("pca.pdf")
"""
for eps_value in [0.0001, 0.005, 0.01, 0.05, 0.075, 0.1]:
for ms in [3, 5]:
db = DBSCAN(eps=eps_value, min_samples=ms).fit(array_stand_scale)
print(set(db.labels_))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('eps = %f - min_samples = %d - number of clusters: %d' % (eps_value, ms, n_clusters_))
"""
Explanation: Code from Dave with mods
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/03_tensorflow/labs/b_estimator.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
"""
Explanation: <h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
End of explanation
"""
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
"""
Explanation: Read data created in the previous chapter.
End of explanation
"""
# TODO: Create an appropriate input_fn to read the training data
def make_train_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
# TODO: Create an appropriate input_fn to read the validation data
def make_eval_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
"""
Explanation: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
End of explanation
"""
# TODO: Create an appropriate prediction_input_fn
def make_prediction_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
"""
Explanation: Our input function for predictions is the same except we don't provide a label
End of explanation
"""
# TODO: Create feature columns
"""
Explanation: Create feature columns for estimator
End of explanation
"""
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
# TODO: Train a linear regression model
model = #ADD CODE HERE
model.train(#ADD CODE HERE
)
"""
Explanation: <h3> Linear Regression with tf.Estimator framework </h3>
End of explanation
"""
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
"""
Explanation: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
End of explanation
"""
# TODO: Predict from the estimator model we trained using test dataset
"""
Explanation: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
End of explanation
"""
# TODO: Copy your LinearRegressor estimator and replace with DNNRegressor. Remember to add a list of hidden units i.e. [32, 8, 2]
"""
Explanation: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
End of explanation
"""
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
else:
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
"""
Explanation: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset.
End of explanation
"""
|
tbarrongh/cosc-learning-labs | src/notebook/Interface.ipynb | apache-2.0 | import learning_lab
"""
Explanation: Learning Lab: Interface
This Learning Lab explores the network interfaces of a network device.
Pre-requisite Learning Lab: Inventory
|Keyword|Definition|
|--|--|
|Network Interface|Connector from one network device to a network interface on one or more other network devices.|
|Network Device|Routers, switches, etc.|
|Controller|Service that intermediates between client applications and network devices. E.g. ODL, COSC|
|ODL|OpenDayLight|
|COSC|Cisco Open Source Controller, enhanced ODL|
Table of Contents
Table of Contents
Prologue
Import Learning Lab
Verify Controller
Sample Network Device
Introduction
List of Network Interfaces
Management Interface
Interface Configuration
XML
For all network interfaces
For one network interface
JSON
Python type InterfaceConfiguration
Interface Properties
XML
JSON
Python type InterfaceProperties
Conclusion
HTTP Reference
RPC interface_configuration_xml(device)
RPC interface_configuration_xml(device, interface)
RPC interface_configuration_json(device)
RPC interface_configuration_json(device, interface)
RPC interface_properties_xml(device)
RPC interface_properties_xml(device, interface)
RPC interface_properties_json(device)
RPC interface_properties_json(device, interface)
Prologue
Import Learning Lab
If the Learning Lab team have provided you with either:
* a Python Interactive Shell, or
* an IPython Notebook
then please proceed. Otherwise, please follow the instructions on web page: How to set up your computer
Import the Python module named learning_lab.
End of explanation
"""
%run 01_inventory_mount
from basics.sample import sample_device
device_name = 'xrvr-1'
"""
Explanation: Sample Network Device
We need one network device to explore. Any network device is satisfactory, provided it is connected to the Controller and has multiple network interfaces.
The following code selects a suitable network device from the inventory using convenience function sample_device().
End of explanation
"""
device_name
"""
Explanation: Note that we only require the name of the sample network device.
End of explanation
"""
from basics.interface import interface_names
interface_names(device_name)
"""
Explanation: Introduction
In this lesson, we will:
1. List the names of network interfaces on a single network device.
2. Determine which network interface the Controller is using to manage the network device.
3. For each network interface, obtain:
* configuration
* properties
4. Configuration and properties will be obtained:
* in bulk - all network interfaces
* individually - a single network interface, specified by name
5. The data formats will be:
* XML - demonstration of transformation to HTML using XSLT
* JSON - indexing deep into a complex document
* Python - types InterfaceConfiguration and InterfaceProperties
List of Network Interfaces
List the network interfaces of the selected network device, using function:
* interface_names(device_name).
End of explanation
"""
from basics.interface import management_interface
management_interface(device_name)
"""
Explanation: Management Interface
Determine which network interface the Controller is using to manage the network device, using function:
* management_interface(device_name)
End of explanation
"""
from basics.interface import interface_configuration_xml
xml = interface_configuration_xml(device_name)
"""
Explanation: Each network interface belongs to one of the following two categories:
* management interface
* data interface
Normal network traffic flows between data interfaces. A management interface is for monitoring and modifying the data interfaces on the same network device. For example, you can shutdown/startup a data interface via a management interface.
Note: if you shutdown a management interface then the connection terminates and you will not be able to send it the startup command.
Interface Configuration
Interface Configuration: XML
For all network interfaces
Obtain the configuration of all network interfaces on one network device, using function:
* interface_configuration_xml(device_name)
The XML representation of data is popular in client applications. The Controller is capable of representing the configuration data in either XML or JSON format. The HTTP Reference shows the actual XML and JSON documents returned from the Controller.
End of explanation
"""
from basics.interface import interface_configuration_to_html
from IPython.core.display import HTML
HTML(interface_configuration_to_html(xml))
"""
Explanation: Additional functions are used to transform the XML into a HTML table to make it pleasant for humans to read. The implementation uses XSLT.
End of explanation
"""
import random
interface_name = random.choice(interface_names(device_name))
"""
Explanation: For one network interface
A single network interface can be specified by name as the second parameter to function:
* interface_configuration_xml(device_name, interface_name)
A single network interface is chosen, randomly, from the list of interface names.
End of explanation
"""
xml = interface_configuration_xml(device_name, interface_name)
"""
Explanation: The XML representation is obtained...
End of explanation
"""
HTML(interface_configuration_to_html(xml))
"""
Explanation: ... and transformed to HTML. We expect the HTML table to contain exactly one row of data, identical to one row in the table above.
End of explanation
"""
from basics.interface import interface_configuration_json
json = interface_configuration_json(device_name, interface_name)
"""
Explanation: Interface Configuration: JSON
XML is ideal for transformation to another markup language, such as HTML, as shown above. JSON is an alternative to XML that is popular. Network interface configuration is available in JSON from functions:
* interface_configuration_json(device_name)
* interface_configuration_json(device_name, interface_name)
End of explanation
"""
json
"""
Explanation: The JSON for a single network interface is reasonably small.
End of explanation
"""
json = interface_configuration_json(device_name)
"""
Explanation: The JSON for all network interfaces (of one network device).
End of explanation
"""
[(
item[u'interface-name'],
item[u'description']
)
for item in json
[u'interface-configurations']
[u'interface-configuration']]
"""
Explanation: This JSON data structure is shown in the HTTP Reference due to its complexity.
Typically usage of JSON is to index into the data structure. This example accesses the interface name and description.
End of explanation
"""
xml.findtext('//{*}primary/{*}address')
"""
Explanation: It is more difficult to access the IP address because the path depends on IPv4 versus IPv6. Note that in XML the IP address is accessed using a XPATH expression with a wildcard matching either IPv4 or IPv6.
End of explanation
"""
from basics.interface import interface_configuration_tuple
tuple = interface_configuration_tuple(device_name, interface_name)
"""
Explanation: Python type InterfaceConfiguration
Configuration of network interfaces is available as a Python named tuple. The functions are:
* interface_configuration_tuple(device_name)
* interface_configuration_tuple(device_name, interface_name)
End of explanation
"""
type(tuple)
"""
Explanation: The type of the named tuple is a class in Python.
End of explanation
"""
tuple.name
tuple.address
tuple.netmask
tuple.shutdown
"""
Explanation: A tuple provides access to the fields in a language that is very natural. Several fields are accessed to demonstrate.
End of explanation
"""
json = interface_configuration_json(device_name, interface_name)
shutdown = 'shutdown' in json['interface-configuration'][0]
assert shutdown == tuple.shutdown
"""
Explanation: The value of field shutdown, above, is implied by the existence or absence of a specific element in XML (or field in JSON). For this reason, the Python tuple is more convenient to use in a client application than a test for the presence of fields in XML and JSON.
End of explanation
"""
tuple.__dict__.keys()
"""
Explanation: List all field names of the Python class.
End of explanation
"""
json['interface-configuration'][0]['description']
"""
Explanation: Note that the tuple field names are not strings, as they are in JSON. Here is the access of field description in JSON.
End of explanation
"""
tuple.description
"""
Explanation: The JSON field access, above, can only be verified at run-time, whereas the tuple field access, below, can be checked by the editor and can participate in code completion, because the field is pre-defined. This is another way the Python tuple adds value to the underlying XML or JSON (from which the tuple is built).
End of explanation
"""
bulk = interface_configuration_tuple(device_name)
[(tuple.name, tuple.address) for tuple in bulk]
"""
Explanation: All examples of the Python tuple, above, have been for a single network interface. The configuration of multiple network interfaces (of one network device) is available as a list of tuples.
End of explanation
"""
from basics.interface import interface_properties_xml
xml = interface_properties_xml(device_name)
"""
Explanation: Interface Properties
The Controller is capable of representing network interface properties in either XML or JSON format. The HTTP Reference shows the actual XML and JSON documents returned from the Controller.
Interface Properties: XML
Obtain the properties of all network interfaces of one network device, using function:
* interface_properties_xml(device_name)
End of explanation
"""
from basics.interface import interface_properties_to_html
HTML(interface_properties_to_html(xml))
"""
Explanation: Additional functions are employed to transform the XML into two HTML tables. The reason for two HTML tables is simply to fit all the properties into the width of this document. The XML is transformed to HTML using a XSLT stylesheet.
End of explanation
"""
xml = interface_properties_xml(device_name, interface_name)
"""
Explanation: A single network interface can be specified by name as the second parameter to function:
* interface_properties_xml(device_name, interface_name)
The specific interface was chosen randomly.
End of explanation
"""
HTML(interface_properties_to_html(xml))
"""
Explanation: The XML document is displayed in the HTTP Reference. The same XSLT stylesheet used above can be applied to the XML to produce HTML tables. We expect the two HTML tables to contain exactly one row of data, identical to one row in each of the two tables above.
End of explanation
"""
from basics.interface import interface_properties_json
[(
item[u'interface-name'],
item[u'type'],
item[u'encapsulation']
)
for item in interface_properties_json(device_name)
[u'interface-properties']
[u'data-nodes']
[u'data-node']
[0]
[u'system-view']
[u'interfaces']
[u'interface']
]
json = interface_properties_json(device_name, interface_name)
"""
Explanation: Interface Properties: JSON
The interface properties are returned in JSON representation from function:
* interface_properties_json(device_name)
The HTTP Reference shows the actual JSON document returned from the Controller.
The sample code below extracts three of the many properties available for each network interface:
* interface-name
* type
* encapsulation.
End of explanation
"""
[(name, value) for (name,value) in json[u'interface'][0].iteritems()]
"""
Explanation: The JSON document is displayed in the HTTP Reference.
The property names and values can be accessed directly from the JSON structure.
End of explanation
"""
from basics.interface import interface_properties_tuple
tuple = interface_properties_tuple(device_name, interface_name)
"""
Explanation: Python type InterfaceProperties
The properties of a network interface are available as a Python named tuple. The functions are:
* interface_properties_tuple(device_name)
* interface_properties_tuple(device_name, interface_name)
End of explanation
"""
type(tuple)
"""
Explanation: The type of the tuple is a class in Python.
End of explanation
"""
tuple.__dict__.keys()
"""
Explanation: List all fields of the Python class.
End of explanation
"""
tuple.name
tuple.l2_transport
tuple.mtu
"""
Explanation: A tuple provides access to the fields in a language that is very natural. Several fields are accessed to demonstrate. Note that the property values below have different types: string, boolean and integer.
End of explanation
"""
bulk = interface_properties_tuple(device_name)
[(tuple.name, tuple.state) for tuple in bulk]
"""
Explanation: All examples of the Python tuple, above, have been for a single network interface. The properties of multiple network interfaces (of one network device) are available as a list of tuples.
End of explanation
"""
from basics.interface import interface_configuration_xml_http
from basics.http import http_to_html
HTML(http_to_html(interface_configuration_xml_http(device_name)))
"""
Explanation: Conclusion
In this lesson we:
1. Listed the names of network interfaces on a single network device.
2. Determined which network interface the Controller is using to manage the network device.
3. For each network interface we obtained:
* configuration
* properties
4. The configuration and properties of network interfaces are available, per network device:
* in bulk - all network interfaces
* individually - a single network interface, specified by name
5. The data formats used were:
* XML - demonstration of transformation to HTML using XSLT
* JSON - indexing deep into a complex document
* Python - types InterfaceConfiguration and InterfaceProperties
HTTP Reference
The HTTP request and response are shown for each remote procedure call (RPC) to the Controller.
See the Learning Lab document HTTP for a demonstration of how to use the tables below.
RPC interface_configuration_xml: multiple
Function:
* interface_configuration_xml(device_name)
invokes function:
* interface_configuration_xml_http(device_name)
which returns the HTTP request and response, as shown in the table below.
The response from the Controller is very large because it describes multiple network interfaces. The next RPC is for a single network interface and is thus shorter. To help you navigate this document, here is a link to the next section and the TOC.
End of explanation
"""
http = interface_configuration_xml_http(device_name, interface_name)
HTML(http_to_html(http))
"""
Explanation: RPC interface_configuration_xml: single
The function:
* interface_configuration_xml(device_name, interface_name)
invokes function:
* interface_configuration_xml_http(device_name, interface_name)
which returns the HTTP request and response, as shown in the table below. The specific interface was chosen randomly.
To help you navigate around the large tables in this document, here are links to the previous section, next section and the TOC.
End of explanation
"""
from basics.interface import interface_configuration_json_http
HTML(http_to_html(interface_configuration_json_http(device_name)))
"""
Explanation: RPC interface_configuration_json: multiple
Function:
* interface_configuration_json(device_name)
invokes function:
* interface_configuration_json_http(device_name)
which returns the HTTP request and response, as shown in the table below.
The response from the Controller is very large because it describes multiple network interfaces. The next RPC is for a single network interface and is thus shorter. To help you navigate this document, here is a link to the previous section, next section and the TOC.
End of explanation
"""
HTML(http_to_html(interface_configuration_json_http(device_name, interface_name)))
"""
Explanation: RPC interface_configuration_json: single
The function:
* interface_configuration_json(device_name, interface_name)
invokes function:
* interface_configuration_json_http(device_name, interface_name)
which returns the HTTP request and response, as shown in the table below. The specific interface was chosen randomly.
To help you navigate around the large tables in this document, here are links to the previous section, next section and the TOC.
End of explanation
"""
from basics.interface import interface_properties_xml_http
HTML(http_to_html(interface_properties_xml_http(device_name)))
"""
Explanation: RPC interface_properties_xml: multiple
The function:
* interface_properties_xml(device_name)
invokes function:
* interface_properties_xml_http(device_name)
which returns the HTTP request and response, as shown in the table below.
To help you navigate around the large tables in this document, here are links to the previous section, next section and the TOC.
End of explanation
"""
HTML(http_to_html(interface_properties_xml_http(device_name, interface_name)))
"""
Explanation: RPC interface_properties_xml: single
The function:
* interface_properties_json(device_name, interface_name)
invokes function:
* interface_properties_json_http(device_name, interface_name)
which returns the HTTP request and response, as shown in the table below. The specific interface was chosen randomly.
To help you navigate around the large tables in this document, here are links to the previous section, next section and the TOC.
End of explanation
"""
from basics.interface import interface_properties_json_http
HTML(http_to_html(interface_properties_json_http(device_name)))
"""
Explanation: RPC interface_properties_json: multiple
The function:
* interface_properties_json(device_name)
invokes function:
* interface_properties_json_http(device_name)
which returns the HTTP request and response, as shown in the table below.
To help you navigate around the large tables in this document, here are links to the previous section, next section and the TOC.
End of explanation
"""
HTML(http_to_html(interface_properties_json_http(device_name, interface_name)))
"""
Explanation: RPC interface_properties_json: single
The function:
* interface_properties_json(device_name, interface_name)
invokes function:
* interface_properties_json_http(device_name, interface_name)
which returns the HTTP request and response, as shown in the table below. The specific interface was chosen randomly.
To help you navigate around the large tables in this document, here are links to the previous section and the TOC.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/distributed_estimation.ipynb | bsd-3-clause | import numpy as np
from scipy.stats.distributions import norm
from statsmodels.base.distributed_estimation import DistributedModel
def _exog_gen(exog, partitions):
"""partitions exog data"""
n_exog = exog.shape[0]
n_part = np.ceil(n_exog / partitions)
ii = 0
while ii < n_exog:
jj = int(min(ii + n_part, n_exog))
yield exog[ii:jj, :]
ii += int(n_part)
def _endog_gen(endog, partitions):
"""partitions endog data"""
n_endog = endog.shape[0]
n_part = np.ceil(n_endog / partitions)
ii = 0
while ii < n_endog:
jj = int(min(ii + n_part, n_endog))
yield endog[ii:jj]
ii += int(n_part)
"""
Explanation: Distributed Estimation
This notebook goes through a couple of examples to show how to use distributed_estimation. We import the DistributedModel class and make the exog and endog generators.
End of explanation
"""
X = np.random.normal(size=(1000, 25))
beta = np.random.normal(size=25)
beta *= np.random.randint(0, 2, size=25)
y = norm.rvs(loc=X.dot(beta))
m = 5
"""
Explanation: Next we generate some random data to serve as an example.
End of explanation
"""
debiased_OLS_mod = DistributedModel(m)
debiased_OLS_fit = debiased_OLS_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: This is the most basic fit, showing all of the defaults, which are to use OLS as the model class, and the debiasing procedure.
End of explanation
"""
from statsmodels.genmod.generalized_linear_model import GLM
from statsmodels.genmod.families import Gaussian
debiased_GLM_mod = DistributedModel(m, model_class=GLM,
init_kwds={"family": Gaussian()})
debiased_GLM_fit = debiased_GLM_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: Then we run through a slightly more complicated example which uses the GLM model class.
End of explanation
"""
from statsmodels.base.distributed_estimation import _est_regularized_naive, _join_naive
naive_OLS_reg_mod = DistributedModel(m, estimation_method=_est_regularized_naive,
join_method=_join_naive)
naive_OLS_reg_params = naive_OLS_reg_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: We can also change the estimation_method and the join_method. The below example show how this works for the standard OLS case. Here we using a naive averaging approach instead of the debiasing procedure.
End of explanation
"""
from statsmodels.base.distributed_estimation import _est_unregularized_naive, DistributedResults
naive_OLS_unreg_mod = DistributedModel(m, estimation_method=_est_unregularized_naive,
join_method=_join_naive,
results_class=DistributedResults)
naive_OLS_unreg_params = naive_OLS_unreg_mod.fit(zip(_endog_gen(y, m), _exog_gen(X, m)),
fit_kwds={"alpha": 0.2})
"""
Explanation: Finally, we can also change the results_class used. The following example shows how this work for a simple case with an unregularized model and naive averaging.
End of explanation
"""
|
GoogleCloudPlatform/mlops-with-vertex-ai | 06-model-deployment.ipynb | apache-2.0 | import os
import logging
logging.getLogger().setLevel(logging.INFO)
"""
Explanation: 06 - Model Deployment
The purpose of this notebook is to execute a CI/CD routine to test and deploy the trained model to Vertex AI as an Endpoint for online prediction serving. The notebook covers the following steps:
1. Run the test steps locally.
2. Execute the model deployment CI/CD steps using Cloud Build.
Setup
Import libraries
End of explanation
"""
PROJECT = '[your-project-id]' # Change to your project id.
REGION = 'us-central1' # Change to your region.
if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = shell_output[0]
print("Project ID:", PROJECT)
print("Region:", REGION)
"""
Explanation: Setup Google Cloud project
End of explanation
"""
VERSION = 'v01'
DATASET_DISPLAY_NAME = 'chicago-taxi-tips'
MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}'
ENDPOINT_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier'
CICD_IMAGE_NAME = 'cicd:latest'
CICD_IMAGE_URI = f"gcr.io/{PROJECT}/{CICD_IMAGE_NAME}"
"""
Explanation: Set configurations
End of explanation
"""
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME
os.environ['ENDPOINT_DISPLAY_NAME'] = ENDPOINT_DISPLAY_NAME
"""
Explanation: 1. Run CI/CD steps locally
End of explanation
"""
!py.test src/tests/model_deployment_tests.py::test_model_artifact -s
"""
Explanation: Run the model artifact testing
End of explanation
"""
!python build/utils.py \
--mode=create-endpoint\
--project={PROJECT}\
--region={REGION}\
--endpoint-display-name={ENDPOINT_DISPLAY_NAME}
"""
Explanation: Run create endpoint
End of explanation
"""
!python build/utils.py \
--mode=deploy-model\
--project={PROJECT}\
--region={REGION}\
--endpoint-display-name={ENDPOINT_DISPLAY_NAME}\
--model-display-name={MODEL_DISPLAY_NAME}
"""
Explanation: Run deploy model
End of explanation
"""
!py.test src/tests/model_deployment_tests.py::test_model_endpoint
"""
Explanation: Test deployed model endpoint
End of explanation
"""
!echo $CICD_IMAGE_URI
!gcloud builds submit --tag $CICD_IMAGE_URI build/. --timeout=15m
"""
Explanation: 2. Execute the Model Deployment CI/CD routine in Cloud Build
The CI/CD routine is defined in the model-deployment.yaml file, and consists of the following steps:
1. Load and test the the trained model interface.
2. Create and endpoint in Vertex AI if it doesn't exists.
3. Deploy the model to the endpoint.
4. Test the endpoint.
Build CI/CD container Image for Cloud Build
This is the runtime environment where the steps of testing and deploying model will be executed.
End of explanation
"""
REPO_URL = "https://github.com/GoogleCloudPlatform/mlops-with-vertex-ai.git" # Change to your github repo.
BRANCH = "main"
SUBSTITUTIONS=f"""\
_REPO_URL='{REPO_URL}',\
_BRANCH={BRANCH},\
_CICD_IMAGE_URI={CICD_IMAGE_URI},\
_PROJECT={PROJECT},\
_REGION={REGION},\
_MODEL_DISPLAY_NAME={MODEL_DISPLAY_NAME},\
_ENDPOINT_DISPLAY_NAME={ENDPOINT_DISPLAY_NAME},\
"""
!echo $SUBSTITUTIONS
!gcloud builds submit --no-source --config build/model-deployment.yaml --substitutions {SUBSTITUTIONS} --timeout=30m
"""
Explanation: Run CI/CD from model deployment using Cloud Build
End of explanation
"""
|
dschick/udkm1Dsim | docs/source/examples/m3tm.ipynb | mit | import udkm1Dsim as ud
u = ud.u # import the pint unit registry from udkm1Dsim
import scipy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
u.setup_matplotlib() # use matplotlib with pint units
"""
Explanation: Microscopic 3-Temperature-Model
Here we adapt the NTM from the last example to allow for calculations of the magnetization within the microscopic 3-temperature-model as proposed by:
Koopmans, B., Malinowski, G., Dalla Longa, F. et al.
Explaining the paradoxical diversity of ultrafast laser-induced demagnetization.
Nature Mater 9, 259–265 (2010).
We need to solve the following coupled differential equations:
\begin{align}
c_e(T_e) \rho \frac{\partial T_e}{\partial t} & = & \frac{\partial}{\partial z}
\left( k_e(T_e) \frac{\partial T_e}{\partial z} \right)
- G_{ep}(T_e-T_p) + S(z, t) \
c_p(T_p) \rho \frac{\partial T_p}{\partial t} & = & \frac{\partial}{\partial z}
\left( k_p(T_p) \frac{\partial T_p}{\partial z} \right)
+ G_{ep}(T_e-T_p) \
\frac{\partial m}{\partial t} & = & R m \frac{T_p}{T_C}
\left( 1 - m \coth\left( \frac{m T_C}{T_e} \right) \right)
\end{align}
We treat the temperature of the 3rd subsystem as magnetization $m$.
For that we have to set its heat_capacity to $1/\rho$ and thermal_conductivity to zero.
We put the complete right term of the last equation in the sub_system_coupling term for the 3rd subsystem.
Here, we need to rewrite the kotangens hyperbolicus in Python as
$$\coth(x) = 1 + \frac{2}{e^{2x} - 1}$$
The values of the used parameters are not experimentally verified.
Setup
Do all necessary imports and settings.
End of explanation
"""
Co = ud.Atom('Co')
Ni = ud.Atom('Ni')
CoNi = ud.AtomMixed('CoNi')
CoNi.add_atom(Co, 0.5)
CoNi.add_atom(Ni, 0.5)
Si = ud.Atom('Si')
prop_CoNi = {}
prop_CoNi['heat_capacity'] = ['0.1*T',
532*u.J/u.kg/u.K,
1/7000
]
prop_CoNi['therm_cond'] = [20*u.W/(u.m*u.K),
80*u.W/(u.m*u.K),
0]
R = 25.3/1e-12
Tc = 1388
g = 4.0e18
prop_CoNi['sub_system_coupling'] = ['-{:f}*(T_0-T_1)'.format(g),
'{:f}*(T_0-T_1)'.format(g),
'{0:f}*T_2*T_1/{1:f}*(1-T_2* (1 + 2/(exp(2*T_2*{1:f}/T_0) - 1) ))'.format(R, Tc)
]
prop_CoNi['lin_therm_exp'] = [0, 11.8e-6, 0]
prop_CoNi['sound_vel'] = 4.910*u.nm/u.ps
prop_CoNi['opt_ref_index'] = 2.9174+3.3545j
layer_CoNi = ud.AmorphousLayer('CoNi', 'CoNi amorphous', thickness=1*u.nm,
density=7000*u.kg/u.m**3, atom=CoNi, **prop_CoNi)
prop_Si = {}
prop_Si['heat_capacity'] = [100*u.J/u.kg/u.K, 603*u.J/u.kg/u.K, 1]
prop_Si['therm_cond'] = [0, 100*u.W/(u.m*u.K), 0]
prop_Si['sub_system_coupling'] = [0, 0, 0]
prop_Si['lin_therm_exp'] = [0, 2.6e-6, 0]
prop_Si['sound_vel'] = 8.433*u.nm/u.ps
prop_Si['opt_ref_index'] = 3.6941+0.0065435j
layer_Si = ud.AmorphousLayer('Si', "Si amorphous", thickness=1*u.nm,
density=2336*u.kg/u.m**3, atom=Si, **prop_Si)
S = ud.Structure('CoNi')
S.add_sub_structure(layer_CoNi, 20)
S.add_sub_structure(layer_Si, 50)
S.visualize()
"""
Explanation: Structure
to the structure-example for more details.
End of explanation
"""
h = ud.Heat(S, True)
h.save_data = False
h.disp_messages = True
h.excitation = {'fluence': [30]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0.05]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
# temporal and spatial grid
delays = np.r_[-1:5:0.005]*u.ps
_, _, distances = S.get_distances_of_layers()
"""
Explanation: Initialize Heat and the Excitation
End of explanation
"""
# enable heat diffusion
h.heat_diffusion = True
# set the boundary conditions
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
init_temp = np.ones([S.get_number_of_layers(), 3])
init_temp[:, 0] = 300
init_temp[:, 1] = 300
init_temp[20:, 2] = 0
temp_map, delta_temp = h.get_temp_map(delays, init_temp)
plt.figure(figsize=[6, 12])
plt.subplot(3, 1, 1)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 0], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Electrons')
plt.subplot(3, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 1], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Phonons')
plt.subplot(3, 1, 3)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 2], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Magnetization')
plt.tight_layout()
plt.show()
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 0], 1), label='electrons')
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 1], 1), label='phonons')
plt.ylabel('Temperature [K]')
plt.xlabel('Delay [ps]')
plt.legend()
plt.title('M3TM Koopmans et. al')
plt.subplot(2, 1, 2)
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['CoNi'], 2], 1), label='M')
plt.ylabel('Magnetization')
plt.xlabel('Delay [ps]')
plt.legend()
plt.show()
"""
Explanation: Calculate Heat Diffusion
The init_temp sets the magnetization in the 3rd subsystem to 1 for CoNi and 0 for Si.
End of explanation
"""
|
macks22/gensim | docs/notebooks/ldaseqmodel.ipynb | lgpl-2.1 | # setting up our imports
from gensim.models import ldaseqmodel
from gensim.corpora import Dictionary, bleicorpus
import numpy
from gensim.matutils import hellinger
"""
Explanation: Dynamic Topic Models Tutorial
What is this tutorial about?
This tutorial will exaplin what Dynamic Topic Models are, and how to use them using the LdaSeqModel class of gensim. I will start with some theory and already documented examples, before moving on to an example run of Dynamic Topic Model on a sample dataset. I will also do an comparison between the python code and the already present gensim wrapper, and illustrate how to easily visualise them and do topic coherence with DTM. Any suggestions to improve documentation or code through PRs/Issues is always appreciated!
Why Dynamic Topic Models?
Imagine you have a gigantic corpus which spans over a couple of years. You want to find semantically similar documents; one from the very beginning of your time-line, and one in the very end. How would you?
This is where Dynamic Topic Models comes in. By having a time-based element to topics, context is preserved while key-words may change.
David Blei does a good job explaining the theory behind this in this Google talk. If you prefer to directly read the paper on DTM by Blei and Lafferty, that should get you upto speed too.
In his talk, Blei gives a very interesting example of the motivation to use DTM. After running DTM on a dataset of the Science Journal from 1880 onwards, he picks up this paper - The Brain of the Orang (1880). It's topics are concentrated on a topic which must be to do with Monkeys, and with Neuroscience or brains.
<img src="Monkey Brains.png" width="500">
He goes ahead to pick up another paper with likely very less common words, but in the same context - analysing monkey brains. In fact, this one is called - "Representation of the visual field on the medial wall of occipital-parietal cortex in the owl monkey". Quite the title, eh? Like mentioned before, you wouldn't imagine too many common words in these two papers, about a 100 years apart.
<img src="Monkey Brains New.png" witdth="400">
But a Hellinger Distance based Document-Topic distribution gives a very high similarity value! The same topics evolved smoothly over time and the context remains. A document similarity match using other traditional techniques might not work very well on this!
Blei defines this technique as - "Time corrected Document Similarity".
Another obviously useful analysis is to see how words in a topic change over time. The same broad classified topic starts looking more 'mature' as time goes on. This image illustrates an example from the same paper linked to above.
<img src="Dynamic Topic Model.png" width="800">
So, briefly put :
Dynamic Topic Models are used to model the evolution of topics in a corpus, over time. The Dynamic Topic Model is part of a class of probabilistic topic models, like the LDA.
While most traditional topic mining algorithms do not expect time-tagged data or take into account any prior ordering, Dynamic Topic Models (DTM) leverages the knowledge of different documents belonging to a different time-slice in an attempt to map how the words in a topic change over time.
This blog post is also useful in breaking down the ideas and theory behind DTM.
Motivation to code this!
But - why even undertake this, especially when Gensim itself have a wrapper?
The main motivation was the lack of documentation in the original code - and the fact that doing an only python version makes it easier to use gensim building blocks. For example, for setting up the Sufficient Statistics to initialize the DTM, you can just pass a pre-trained gensim LDA model!
There is some clarity on how they built their code now - Variational Inference using Kalman Filters, as described in section 3 of the paper. The mathematical basis for the code is well described in the appendix of the paper. If the documentation is lacking or not clear, comments via Issues or PRs via the gensim repo would be useful in improving the quality.
This project was part of the Google Summer of Code 2016 program: I have been regularly blogging about my progress with implementing this, which you can find here.
Using LdaSeqModel for DTM
Gensim already has a wrapper for original C++ DTM code, but the LdaSeqModel class is an effort to have a pure python implementation of the same.
Using it is very similar to using any other gensim topic-modelling algorithm, with all you need to start is an iterable gensim corpus, id2word and a list with the number of documents in each of your time-slices.
End of explanation
"""
# loading our corpus and dictionary
try:
dictionary = Dictionary.load('datasets/news_dictionary')
except FileNotFoundError as e:
raise ValueError("SKIP: Please download the Corpus/news_dictionary dataset.")
corpus = bleicorpus.BleiCorpus('datasets/news_corpus')
# it's very important that your corpus is saved in order of your time-slices!
time_slice = [438, 430, 456]
"""
Explanation: We will be loading the corpus and dictionary from disk. Here our corpus in the Blei corpus format, but it can be any iterable corpus.
The data set here consists of news reports over 3 months (March, April, May) downloaded from here and cleaned.
The files can also be found after processing in the datasets folder, but it would be a good exercise to download the files in the link and have a shot at cleaning and processing yourself. :)
Note: the dataset is not cleaned and requires pre-processing. This means that some care must be taken to group the relevant docs together (the months are mentioned as per the folder name), after which it needs to be broken into tokens, and stop words must be removed. This is a link to some basic pre-processing for gensim - link.
What is a time-slice?
A very important input for DTM to work is the time_slice input. It should be a list which contains the number of documents in each time-slice. In our case, the first month had 438 articles, the second 430 and the last month had 456 articles. This means we'd need an input which looks like this: time_slice = [438, 430, 456].
Technically, a time-slice can be a month, year, or any way you wish to split up the number of documents in your corpus, time-based.
Once you have your corpus, id2word and time_slice ready, we're good to go!
End of explanation
"""
ldaseq = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice, num_topics=5)
"""
Explanation: For DTM to work it first needs the Sufficient Statistics from a trained LDA model on the same dataset.
By default LdaSeqModel trains it's own model and passes those values on, but can also accept a pre-trained gensim LDA model, or a numpy matrix which contains the Suff Stats.
We will be training our model in default mode, so gensim LDA will be first trained on the dataset.
NOTE: You have to set logging as true to see your progress!
End of explanation
"""
ldaseq.print_topics(time=0)
"""
Explanation: Now that our model is trained, let's see what our results look like.
Results
Much like LDA, the points of interest would be in what the topics are and how the documents are made up of these topics.
In DTM we have the added interest of seeing how these topics evolve over time.
Let's go through some of the functions to print Topics and analyse documents.
Printing Topics
To print all topics from a particular time-period, simply use print_topics.
The input parameter to print_topics is a time-slice option. By passing 0 we are seeing the topics in the 1st time-slice.
The result would be a list of lists, where each individual list contains a tuple of the most probable words in the topic. i.e (word, word_probability)
End of explanation
"""
ldaseq.print_topic_times(topic=0) # evolution of 1st topic
"""
Explanation: We can see 5 different fairly well defined topics:
1. Economy
2. Entertainment
3. Football
4. Technology and Entertainment
5. Government
Looking for Topic Evolution
To fix a topic and see it evolve, use print_topic_times.
The input parameter is the topic_id
In this case, we are looking at the evolution of the technology topic.
If you look at the lower frequencies; the word "may" is creeping itself into prominence. This makes sense, as the 3rd time-slice are news reports about the economy in the month of may. This isn't present at all in the first time-slice because it's still March!
End of explanation
"""
# to check Document - Topic proportions, use `doc-topics`
words = [dictionary[word_id] for word_id, count in corpus[558]]
print (words)
"""
Explanation: Document - Topic Proportions
the function doc_topics checks the topic proportions on documents already trained on. It accepts the document number in the corpus as an input.
Let's pick up document number 558 arbitrarily and have a look.
End of explanation
"""
doc = ldaseq.doc_topics(558) # check the 558th document in the corpuses topic distribution
print (doc)
"""
Explanation: It's pretty clear that it's a news article about football. What topics will it likely be comprised of?
End of explanation
"""
doc_football_1 = ['economy', 'bank', 'mobile', 'phone', 'markets', 'buy', 'football', 'united', 'giggs']
doc_football_1 = dictionary.doc2bow(doc_football_1)
doc_football_1 = ldaseq[doc_football_1]
print (doc_football_1)
"""
Explanation: Let's look at our topics as desribed by us, again:
Economy
Entertainment
Football
Technology and Entertainment
Government
Our topic distribution for the above document is largely in topics 3 and 4. Considering it is a document about a news article on a football match, the distribution makes perfect sense!
If we wish to analyse a document not in our training set, we can use simply pass the doc to the model similar to the __getitem__ funciton for LdaModel.
Let's let our document be a hypothetical news article about the effects of Ryan Giggs buying mobiles affecting the British economy.
End of explanation
"""
doc_football_2 = ['arsenal', 'fourth', 'wenger', 'oil', 'middle', 'east', 'sanction', 'fluctuation']
doc_football_2 = dictionary.doc2bow(doc_football_2)
doc_football_2 = ldaseq[doc_football_2]
hellinger(doc_football_1, doc_football_2)
"""
Explanation: Pretty neat! Topic 1 is about the Economy, and this document also has traces of football and technology, so topics 1, 3, and 4 got correctly activated.
Distances between documents
One of the more handy uses of DTMs topic modelling is that we can compare documents across different time-frames and see how similar they are topic-wise. When words may not necessarily overlap over these time-periods, this is very useful.
The current dataset doesn't provide us the diversity for this to be an effective example; but we will nevertheless illustrate how to do the same.
End of explanation
"""
doc_governemt_1 = ['tony', 'government', 'house', 'party', 'vote', 'european', 'official', 'house']
doc_governemt_1 = dictionary.doc2bow(doc_governemt_1)
doc_governemt_1 = ldaseq[doc_governemt_1]
hellinger(doc_football_1, doc_governemt_1)
"""
Explanation: The topic distributions are quite related - matches well in football and economy.
Now let's try with documents that shouldn't be similar.
End of explanation
"""
ldaseq.print_topic_times(1)
"""
Explanation: As expected, the value is very high, meaning the topic distributions are far apart.
For more information on how to use the gensim distance metrics, check out this notebook.
Performance
The code currently runs between 5 to 7 times slower than the original C++ DTM code. The bottleneck is in the scipy optimize.fmin_cg method for updating obs. Speeding this up would fix things up!
Since it uses iterable gensim corpuses, the memory stamp is also cleaner.
TODO: check memory, check BLAS, see how performance can be improved memory and speed wise.
The advantages of the python port are that unlike the C++ code we needn't treat it like a black-box; PRs to help make the code better are welcomed, as well as help to make the documentation clearer and improve performance. It is also in pure python and doesn't need any dependancy outside of what gensim already needs. The added functionality of being able to analyse new documents is also a plus!
Choosing your best Dynamic Topic Model.
Like we've been going on and on before, the advantage in having a python port is the transparency with which you can train your DTM.
We'll go over two key ideas: changing variance, and changing suff stats.
Chain Variance
One of the key aspects of topic evolution is how fast/slow these topics evolve. And this is where the factor of variance comes in.
By setting the chain_variance input to the DTM model higher, we can tweak our topic evolution.
The default value is 0.005. (this is the value suggested by Blei in his tech talk and is the default value in the C++ code)
Let us see a small example illustrating the same.
Let's first see the evolution of values for the first time-slice.
End of explanation
"""
ldaseq_chain = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice, num_topics=5, chain_variance=0.05)
"""
Explanation: Now let us do the same, but after increasing the chain_variance value.
End of explanation
"""
ldaseq_chain.print_topic_times(2)
"""
Explanation: It's noticable that the values are moving more freely after increasing the chain_variance. Film went from highest probability to 5th to 8th!
End of explanation
"""
from gensim.models.wrappers.dtmmodel import DtmModel
from gensim.corpora import Dictionary, bleicorpus
import pyLDAvis
# dtm_path = "/Users/bhargavvader/Downloads/dtm_release/dtm/main"
# dtm_model = DtmModel(dtm_path, corpus, time_slice, num_topics=5, id2word=dictionary, initialize_lda=True)
# dtm_model.save('dtm_news')
# if we've saved before simply load the model
dtm_model = DtmModel.load('dtm_news')
"""
Explanation: LDA Model and DTM
For the first slice of DTM to get setup, we need to provide it sufficient stats from an LDA model. As discussed before, this is done by fitting gensim LDA on the dataset first.
We also, however, have the option of passing our own model or suff stats values. Our final DTM results are heavily influenced by what we pass over here. We already know what a "Good" or "Bad" LDA model is (if not, read about it here).
It's quite obvious, then, that by passing a "bad" LDA model we will get not so satisfactory results; and by passing a better fitted or better trained LDA we will get better results. The same logic goes if we wish to directly pass the suff_stats numpy matrix.
Visualising Dynamic Topic Models.
Let us use pyLDAvis to visualise both the DTM wrapper and DTM python port.
With the new DTMvis methods it is now very straightforward to visualise DTM for a particular time-slice.
End of explanation
"""
doc_topic, topic_term, doc_lengths, term_frequency, vocab = dtm_model.dtm_vis(time=0, corpus=corpus)
vis_wrapper = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)
pyLDAvis.display(vis_wrapper)
"""
Explanation: If you take some time to look at the topics, you will notice large semantic similarities with the wrapper and the python DTM.
Functionally, the python DTM replicates the wrapper quite well.
End of explanation
"""
doc_topic, topic_term, doc_lengths, term_frequency, vocab = ldaseq.dtm_vis(time=0, corpus=corpus)
vis_dtm = pyLDAvis.prepare(topic_term_dists=topic_term, doc_topic_dists=doc_topic, doc_lengths=doc_lengths, vocab=vocab, term_frequency=term_frequency)
pyLDAvis.display(vis_dtm)
"""
Explanation: Now let us do the same for the python DTM.
In particular, look at topic 2 of the wrapper and topic 3 of the python port. Notice how they are both about football.
End of explanation
"""
from gensim.models.coherencemodel import CoherenceModel
import pickle
# we just have to specify the time-slice we want to find coherence for.
topics_wrapper = dtm_model.dtm_coherence(time=0)
topics_dtm = ldaseq.dtm_coherence(time=2)
# running u_mass coherence on our models
cm_wrapper = CoherenceModel(topics=topics_wrapper, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm_DTM = CoherenceModel(topics=topics_dtm, corpus=corpus, dictionary=dictionary, coherence='u_mass')
print ("U_mass topic coherence")
print ("Wrapper coherence is ", cm_wrapper.get_coherence())
print ("DTM Python coherence is", cm_DTM.get_coherence())
# to use 'c_v' we need texts, which we have saved to disk.
texts = pickle.load(open('Corpus/texts', 'rb'))
cm_wrapper = CoherenceModel(topics=topics_wrapper, texts=texts, dictionary=dictionary, coherence='c_v')
cm_DTM = CoherenceModel(topics=topics_dtm, texts=texts, dictionary=dictionary, coherence='c_v')
print ("C_v topic coherence")
print ("Wrapper coherence is ", cm_wrapper.get_coherence())
print ("DTM Python coherence is", cm_DTM.get_coherence())
"""
Explanation: Visualising topics is a handy way to compare topic models.
Topic Coherence for DTM
Similar to visualising DTM time-slices, finding coherence values for both python DTM and the wrapper is very easy.
We just have to specify the time-slice we want to find coherence for.
The following examples will illustrate this.
End of explanation
"""
|
philippgrafendorfe/stackedautoencoders | MNIST_Autoencoder.ipynb | mit | from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras.datasets import mnist
from keras.models import Sequential, load_model
from keras.optimizers import RMSprop
from keras.callbacks import TensorBoard
from __future__ import print_function
from IPython.display import SVG, Image
from keras import regularizers
from matplotlib import rc
import keras
import matplotlib.pyplot as plt
import numpy as np
# import graphviz
%matplotlib inline
font = {'family' : 'monospace',
'weight' : 'bold',
'size' : 20}
rc('font', **font)
"""
Explanation: Modules
End of explanation
"""
num_classes = 10
input_dim = 784
batch_size = 256
"""
Explanation: Global Variables
The number of classes, the input dimension and the batch size are constant.
End of explanation
"""
(x_train, y_train), (x_val, y_val) = mnist.load_data()
y_train
(x_train, y_train), (x_val, y_val) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_val = x_val.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_val = x_val.reshape((len(x_val), np.prod(x_val.shape[1:])))
print(x_train.shape)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
"""
Explanation: Data import and preprocessing
All data is normalized and serialized into a vector.
End of explanation
"""
# constants
hidden1_dim = 512
hidden2_dim = 512
"""
Explanation: Train Neural Net to recognize MNIST digits
End of explanation
"""
input_data = Input(shape=(input_dim,), dtype='float32', name='main_input')
x = Dense(hidden1_dim, activation='relu', kernel_initializer='normal')(input_data)
x = Dropout(0.2)(x)
x = Dense(hidden2_dim, activation='relu', kernel_initializer='normal')(x)
x = Dropout(0.2)(x)
output_layer = Dense(num_classes, activation='softmax', kernel_initializer='normal')(x)
model = Model(input_data, output_layer)
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
Image("images/mnist_nn1.png")
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=20,
verbose=0,
validation_split=0.1)
score = model.evaluate(x_val, y_val, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
fig = plt.figure(figsize=(20,10))
plt.plot(model.history.history['val_acc'])
plt.plot(model.history.history['acc'])
plt.axhline(y=score[1], c="red")
plt.text(0, score[1], "test: " + str(round(score[1], 4)), fontdict=font)
plt.title('model accuracy for neural net with 2 hidden layers')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend(['valid', 'train'], loc='lower right')
plt.show()
"""
Explanation: Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
End of explanation
"""
encoding_dim = 32
# input placeholder
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(input_img)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
single_autoencoder = Model(input_img, decoded)
"""
Explanation: Single autoencoder
Model Definitions
Using keras module with compression to 32 floats.
End of explanation
"""
# this model maps an input to its encoded representation
single_encoder = Model(input_img, encoded)
"""
Explanation: Encoder Model:
End of explanation
"""
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = single_autoencoder.layers[-1]
# create the decoder model
single_decoder = Model(encoded_input, decoder_layer(encoded_input))
"""
Explanation: Decoder Model:
End of explanation
"""
single_autoencoder.compile(optimizer=RMSprop(), loss='binary_crossentropy')
"""
Explanation: First, we'll configure our model to use a per-pixel binary crossentropy loss, and the RMSprop optimizer:
Binary Cross Entropy = Binomial Cross Entropy = Special Case of Multinomial Cross Entropy
End of explanation
"""
# single_autoencoder = keras.models.load_model('models/single_autoencoder.h5')
single_autoencoder.fit(x_train, x_train,
epochs=50,
batch_size=batch_size,
shuffle=False,
verbose=0,
validation_split=0.1,
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
# single_autoencoder.save('models/single_autoencoder.h5')
score = single_autoencoder.evaluate(x_val, x_val, verbose=0)
print(score)
# plot_model(single_autoencoder, to_file='images/single_autoencoder.png', show_shapes=True, show_layer_names=True, rankdir='LR')
Image("images/single_autoencoder.png")
"""
Explanation: Train/load model
End of explanation
"""
encoded_imgs = single_encoder.predict(x_val)
# decoded_imgs = single_decoder.predict(encoded_imgs)
decoded_imgs = single_autoencoder.predict(x_val)
n = 10
plt.figure(figsize=(20, 6))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_val[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: After 50 epochs, the autoencoder seems to reach a stable train/test loss value of about {{score}}. We can try to visualize the reconstructed inputs and the encoded representations. We will use Matplotlib.
End of explanation
"""
######## constants for stacked autoencoder ############
encoding_dim1 = 128
encoding_dim2 = 64
encoding_dim3 = 32
decoding_dim1 = 64
decoding_dim2 = 128
decoding_dim3 = input_dim
epochs = 100
batch_size = 256
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim1, activation='relu')(input_img)
encoded = Dense(encoding_dim2, activation='relu')(encoded)
encoded = Dense(encoding_dim3, activation='relu')(encoded)
decoded = Dense(decoding_dim1, activation='relu')(encoded)
decoded = Dense(decoding_dim2, activation='relu')(decoded)
decoded = Dense(decoding_dim3, activation='sigmoid')(decoded)
stacked_autoencoder = Model(input_img, decoded)
stacked_autoencoder.compile(optimizer=RMSprop(), loss='binary_crossentropy')
stacked_autoencoder = keras.models.load_model('models/stacked_autoencoder.h5')
#stacked_autoencoder.fit(x_train, x_train,
# epochs=epochs,
# batch_size=batch_size,
# shuffle=False,
# validation_split=0.1)
"""
Explanation: Stacked Autoencoder
End of explanation
"""
# stacked_autoencoder.save('models/stacked_autoencoder.h5')
score = stacked_autoencoder.evaluate(x_val, x_val, verbose=0)
print(score)
"""
Explanation: Save the model
End of explanation
"""
# plot_model(stacked_autoencoder, to_file='images/stacked_autoencoderTD.png', show_shapes=True, show_layer_names=True, rankdir='LR')
Image("images/stacked_autoencoderTD.png")
decoded_imgs = stacked_autoencoder.predict(x_val)
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_val[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: The result is slightly better than for single autoencoder.
End of explanation
"""
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape)
x_val_noisy = x_val + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_val.shape)
# re-normalization by clipping to the intervall (0,1)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)
x_val_noisy = np.clip(x_val_noisy, 0., 1.)
n = 10
plt.figure(figsize=(20, 2))
for i in range(n):
ax = plt.subplot(1, n, i + 1)
plt.imshow(x_val_noisy[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Denoising Data
End of explanation
"""
denoising_autoencoder = Model(input_img, decoded)
denoising_autoencoder.compile(optimizer=RMSprop(), loss='binary_crossentropy')
"""
Explanation: Stick with the stack
End of explanation
"""
# denoising_autoencoder = keras.models.load_model('models/denoising_autoencoder.h5')
denoising_autoencoder.fit(x_train_noisy, x_train,
epochs=epochs,
batch_size=batch_size,
shuffle=True,
validation_split=0.1,
verbose=0,
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
fig = plt.figure(figsize=(20,10))
plt.plot(denoising_autoencoder.history.history['val_loss'])
plt.plot(denoising_autoencoder.history.history['loss'])
plt.title('model loss for stacked denoising autoencoder')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(['valid', 'train'], loc='lower right')
plt.show()
decoded_imgs = denoising_autoencoder.predict(x_val_noisy)
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_val[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
"""
Explanation: Train or load a stacked denoising autoencoder
End of explanation
"""
score = model.evaluate(x_val_noisy, y_val, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Compare results
Classification of noisy data
End of explanation
"""
score = model.evaluate(decoded_imgs, y_val, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Classification of denoised data
End of explanation
"""
input_img = Input(shape=(input_dim,))
# add a Dense layer with a L1 activity regularizer
encoded = Dense(encoding_dim, activation='relu',
activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = Dense(input_dim, activation='sigmoid')(encoded)
sparse_autoencoder = Model(input_img, decoded)
sparse_encoder = Model(input_img, encoded)
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = sparse_autoencoder.layers[-1]
# create the decoder model
sparse_decoder = Model(encoded_input, decoder_layer(encoded_input))
"""
Explanation: The difference in accuracy is huge. The autoencoder architecture is highly recommendable for denoising signals.
Train with sparsity constraint on hidden layer (not for presentation)
So what does all this mean for sparsity? How does the rectifier activation function allow for sparsity in the hidden units? If the hidden units are exposed to a range of input values it makes sense that the rectifier activation function should lead to more ‘real zeros’ as we sweep across possible inputs. In other words, less neurons in our network would activate because of the limitations imposed by the rectifier activation function. https://www.quora.com/What-does-it-mean-that-activation-functions-like-ReLUs-in-NNs-induce-sparsity-in-the-hidden-units
End of explanation
"""
sparse_autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
sparse_autoencoder = keras.models.load_model('models/sparse_autoencoder.h5')
sparse_autoencoder.fit(x_train, x_train,
epochs=20,
batch_size=batch_size,
validation_data=(x_val, x_val),
callbacks=[TensorBoard(log_dir='/tmp/autoencoder')])
# sparse_autoencoder.save('models/sparse_autoencoder.h5')
score = sparse_autoencoder.evaluate(x_val, x_val, verbose=0)
print(score)
encoded_imgs = sparse_encoder.predict(x_val)
# decoded_imgs = single_decoder.predict(encoded_imgs)
decoded_imgs = sparse_decoder.predict(encoded_imgs)
n = 10
plt.figure(figsize=(20, 6))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
plt.imshow(x_val[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(decoded_imgs[i].reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
encoded_imgs.mean()
"""
Explanation: Sparse Autoencoder
End of explanation
"""
single_ae_test_model = Sequential()
# single_ae_test_model.add(Dense(16, activation='relu', input_shape=(36,)))
# single_ae_test_model.add(Dropout(0.2))
single_ae_test_model.add(Dense(num_classes, activation='softmax', input_shape=(encoding_dim,)))
single_ae_test_model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
single_ae_test_model = keras.models.load_model('models/single_ae_test_model.h5')
# single_ae_test_model.fit(single_encoder.predict(x_train), y_train,
# batch_size=batch_size,
# epochs=40,
# verbose=1,
# validation_data=(single_encoder.predict(x_val), y_val))
single_ae_test_model.save('models/single_ae_test_model.h5')
score = single_ae_test_model.evaluate(single_encoder.predict(x_val), y_val, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Test the Vanilla Autoencoder with a simple Neural Net (not for presentation)
End of explanation
"""
sparse_ae_test_model = Sequential()
# single_ae_test_model.add(Dense(16, activation='relu', input_shape=(36,)))
# single_ae_test_model.add(Dropout(0.2))
sparse_ae_test_model.add(Dense(num_classes, activation='softmax', input_shape=(encoding_dim,)))
sparse_ae_test_model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
# sparse_ae_test_model = keras.models.load_model('models/sparse_ae_test_model.h5')
sparse_ae_test_model.fit(sparse_encoder.predict(x_train), y_train,
batch_size=batch_size,
epochs=40,
verbose=1,
validation_data=(sparse_encoder.predict(x_val), y_val))
sparse_ae_test_model.save('models/sparse_ae_test_model.h5')
score = sparse_ae_test_model.evaluate(sparse_encoder.predict(x_val), y_val, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Test the Sparse Autoencoder with a simple Neural Net (not for presentation)
End of explanation
"""
|
tclaudioe/Scientific-Computing | SC1v2/02_floating_point_arithmetic.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: <center>
<img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%">
<h1> INF-285 - Computación Científica </h1>
<h2> Floating Point Arithmetic </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.21 </h2>
</center>
<div id='toc' />
Table of Contents
Introduction
The nature of floating point numbers
Visualization of floating point numbers
What is the first integer that is not representable in double precision?
Loss of significance
Loss of significance in funcion evaluation
Another analysis (example from textbook)
Acknowledgements
End of explanation
"""
# This code translate from a binary number using the prefix "0b" to a base 10 number.
int('0b11', 2)
# This code translate from base 10 to base 2.
bin(9)
# Just looking a large binary number
bin(2**53)
"""
Explanation: <div id='intro' />
Introduction
Back to TOC
Hello! This notebook is an introduction to how our computers handle the representation of real numbers using double-precision floating-point standard IEEE 754.
To understand the contents of this notebook you should have at least a basic notion of how binary numbers work.
The double-precision floating-point format occupies 64 bits which are divided as follows:
1 bit for the sign
11 bits for the exponent
52 bits for the mantissa
This means that the very next representable number after $1$ is $1 + 2^{-52}$, and their difference, $2^{-52}$, is the called $\epsilon_{mach}$.
Additionally, if you'd like to quickly go from a base-2 integer to a base-10 integer and viceversa, Python has some functions that can help you with that.
End of explanation
"""
import bitstring as bs
"""
Explanation: <div id='nature' />
The nature of floating point numbers
Back to TOC
As we know until now, float representations of real numbers are just a finite and bounded representation of them.
But another interesting thing is that these floating point numbers are distributed across the real numbers not uniformly.
To see that, it's really important to keep in mind the following property of floating point numbers:
$$
\begin{equation}
\left|\frac{\text{fl}(x)-x}{x}\right| \leq \frac{1}{2} \epsilon_{\text{mach}},
\end{equation}
$$
where $\text{fl}(x)$ means the floating point representation of $x \in \mathbb{R}$, this means that $\text{fl}(x)$ is the actual number that is stored in memory when we try to store the number $x$.
What it says is that the relative error in representing any non-zero real number x is bounded by a quantity that depends on the precision bing used, i.e. ($\epsilon_{\text{mach}}$).
Maybe, now you're thinking: What does this relationship have to do with the distribution of floating point numbers?
So, if we rewrite the previous as follows we get:
$$
\begin{equation}
|\text{fl}(x)-x| \leq \frac{1}{2} \epsilon_{\text{mach}}\,|x|.
\end{equation}
$$
It's clear then: The absolute error (distance) between a real number and its floating point representation is proportional to the real number's magnitude.
Intuitively speaking, the representation error of a number increases as its magnitude increases, this implies that the distance between a floating point number and the next representable floating point number will increase as the magnitude of such number increases (and conversely).
Could you prove that?
For now, we will prove it numerically.
We will use a library named bitstring to visualize what it is being stored in double precision.
You can install it with:
conda install bitstring
End of explanation
"""
def next_float(f):
#packing double-precision foat
b = bs.pack('>d', f)
#extracting mantisa as unsigned int and adding up 1.
# There are two cases, (1) if the bits of the mantissa are all 1,
# (2) all the other cases.
m = b[12:].uint
if m==4503599627370495:
# Case (1)
m=0
b[12:] = m
exp=b[1:12].uint
exp +=1
b[1:12] = exp
else:
# Case (2)
m += 1
# putting the result in his place
b[12:] = m
return b.float
def gap(f):
next_f = next_float(f)
return next_f - f
"""
Explanation: The next two functions are self-explanatory:
next_float(f) computes the next representable float number right after $f$.
gap(f) computes the difference between $f$ and the next representable float number.
Notice that here we are considering that we are using double-precision.
End of explanation
"""
gap(1)
# What happens when we increase the value?
gap(2**40)
# When will the gap be greater that 1?
print(gap(2.**52))
print(gap(2.**53))
# What does it mean to have a gap larger than 1?
"""
Explanation: So if we compute gap(1) we should get machine epsilon.
Let's try it:
End of explanation
"""
values = np.array([2**i for i in range(-5,60)]).astype(float)
# Corresponding gaps:
# The Numpy function "vectorize" is very useful to be use one wants to apply
# a scalar function to each element of an array.
vgap = np.vectorize(gap)
gaps = vgap(values)
"""
Explanation: In order to prove our hypothesis (that floating point numbers are not uniformly distributed), we will create an array of values: $[2^{-5},...,2^{60}]$ and compute their corresponding gaps.
End of explanation
"""
fig = plt.figure(figsize=(10,5))
plt.subplot(121)
plt.plot(values, gaps,'.',markersize=20)
plt.xlabel('float(x)')
plt.ylabel('Gap between next representable number')
plt.title('Linear scale')
plt.grid(True)
plt.subplot(122)
plt.loglog(values, gaps,'.')
plt.xlabel('float(x)')
plt.ylabel('Gap between next representable number')
plt.title('Log scale')
plt.grid(True)
fig.tight_layout()
plt.show()
"""
Explanation: We include now a comparison between a linear scale plot and a loglog scale plot. Which one is more useful here?
End of explanation
"""
# This function shows the bits used for the sign, exponent and mantissa for a 64-bit double presision number.
# fps: Floating Point Standard
# Double: Double precision IEEE 754
def to_fps_double(f):
b = bs.pack('>d', f)
b = b.bin
#show sign + exponent + mantisa
print(b[0]+' '+b[1:12]+ ' '+b[12:])
"""
Explanation: As you can see, the hypothesis was right. In other words: Floating point numbers are not uniformly distributed across the real numbers, and the distance between them is proportional to their magnitude.
Tiny numbers (~ 0) are closer between each other than larger numbers are.
Moreover, we can conclude that for large values the gap is larger that $1$, which means that there will be integers that will not be stored!!.
<div id='visualization' />
Visualization of floating point numbers
Back to TOC
With the help of bitstring library we could write a function to visualize floating point numbers in its binary representation.
End of explanation
"""
to_fps_double(1.)
int('0b01111111111', 2)
to_fps_double(1.+gap(1.))
to_fps_double(+0.)
to_fps_double(-0.)
to_fps_double(np.inf)
to_fps_double(-np.inf)
to_fps_double(np.nan)
to_fps_double(-np.nan)
to_fps_double(2.**-1074)
print(2.**-1074)
to_fps_double(2.**-1075)
print(2.**-1075)
to_fps_double(9.4)
"""
Explanation: Let's see some intereseting examples
End of explanation
"""
to_fps_double(1)
to_fps_double(1+2**-52)
"""
Explanation: <div id='firstinteger' />
What is the first integer that is not representable in double precision?
Back to TOC
Recall that $\epsilon_{\text{mach}}=2^{-52}$ in double precision.
End of explanation
"""
for i in np.arange(1,11):
to_fps_double(1+i*2**-55)
"""
Explanation: This means that if we want to store any number in the interval $[1,1+\epsilon_{\text{mach}}]$, only the numbers $1$ and $1+\epsilon_{\text{mach}}$ will be stored. For example, compare the exponent and the mantissa in the previous cell with the following outputs:
End of explanation
"""
for i in np.arange(11):
to_fps_double((1+i*2**-55)*2**52)
"""
Explanation: Now, we can scale this difference such that the scaling factor multiplied but $\epsilon_{\text{mach}}$ is one. The factor will be $2^{52}$. This means $2^{52}\,\epsilon_{\text{mach}}=1$. Repeating the same example as before but with the scaling factor we obtain:
End of explanation
"""
to_fps_double(2**52)
to_fps_double(2**52+1)
"""
Explanation: Which means we can only store exactly the numbers:
End of explanation
"""
to_fps_double(2**53)
to_fps_double(2**53+1)
"""
Explanation: This means, the distance now from $2^{52}$ and the following number representable is $1$ !!!! So, what would happend if I can to store $2^{53}+1$?
End of explanation
"""
x = np.logspace(-200,800,1000, base=2, dtype=np.dtype(float))
plt.figure()
plt.loglog(x,x,'b',label='fl$(x)$: value stored')
plt.loglog(x,np.power(2.,-52)*np.abs(x)/2,'r',label='Upper bound of $|$fl$(x)-x|$ estimated')
plt.legend(loc='best')
plt.grid(True)
plt.show()
"""
Explanation: I can't stored the Integer $2^{53}+1$! Thus, the first integer not representable is $2^{53}+1$.
<div id='error_bound' />
Understanding the error bound between |fl(x)-x|
Back to TOC
The following code shows a plot of the upper bound of the absolute error we are making when storing the value $x$ as a floating point in double precision, i.e. fl $(x)$.
This means:
$$
\begin{equation}
|\text{fl}(x)-x| \leq \frac{1}{2} \epsilon_{\text{mach}} |x|
\end{equation}.
$$
End of explanation
"""
a = 1.
b = 2.**(-52) #emach
result_1 = a + b # arithmetic result is 1.0000000000000002220446049250313080847263336181640625
result_1b = result_1-1.0
print("{0:.1000}".format(result_1))
print(result_1b)
print(b)
c = 2.**(-53)
result_2 = a + c # arithmetic result is 1.00000000000000011102230246251565404236316680908203125
np.set_printoptions(precision=16)
print("{0:.1000}".format(result_2))
print(result_2-a)
to_fps_double(result_2)
to_fps_double(result_2-a)
d = 2.**(-53) + 2.**(-54)
result_3 = a + d # arithmetic result is 1.000000000000000166533453693773481063544750213623046875
print("{0:.1000}".format(result_3))
to_fps_double(result_3)
to_fps_double(d)
"""
Explanation: As you may have expected, the error grows proportionally with the value of $x$.
It is importnant however that you don't get confused with the apparently small difference between the blue and the red lines, actuallu that difference is about 16-order of magnitud!
Just look at the $y$-axis scale, it is a logarithmic scale.
The outcome of this example is that it tells us that the absolute error we are making when we store the value $x$ as a double precision floating point fl $(x)$ is proportional to $x$, this is what it is.
The key point is that we must always remember this.
<div id='loss' />
Loss of significance
Back to TOC
As we mentioned, there's a small gap between 1 and the next representable number, which means that if you want to represent a number between those two, you won't be able to do so; what you would need to do is to round it to a representable number before storing it in memory.
End of explanation
"""
# What does it mean to store 0.5+delta?
e = 2.**(-1)
f = b/2. # emach/2
result_4 = e + f # 0.50000000000000011102230246251565404236316680908203125
print("{0:.100}".format(result_4))
result_5 = e + b # 0.5000000000000002220446049250313080847263336181640625
print("{0:.100}".format(result_5))
g = b/4.
result_5 = e + g # 0.500000000000000055511151231257827021181583404541015625
print("{0:.100}".format(result_5))
"""
Explanation: As you can see, if you try to save a number between $1$ and $1 + \epsilon {mach}$, it will have to be rounded (according to IEEE rounding criteria) to a representable number before being stored, thus creating a difference between the _real number and the stored number.
This situation is an example of loss of significance.
Does that mean that the gap between representable numbers is always going to be $\epsilon _{mach}$? Of course not! Some numbers will have smaller gaps, and some others will require larger gaps, as studied before.
In any interval of the form $[2^n,2^{n+1}]$ for representable $n\in \mathbb{Z}$, the gap is constant.
For example, all the numbers between $2^{-1}$ and $2^0$ have a distance of $\epsilon {mach}/2$ between them.
All the numbers between $2^0$ and $2^1$ have a distance of $\epsilon {mach}$ between them.
Those between $2^1$ and $2^2$ have a distance of $2\,\epsilon _{mach}$ between them, and so on.
End of explanation
"""
num_1 = a
num_2 = b
result = a + b
print("{0:.100}".format(result))
"""
Explanation: We'll let the students find some representable numbers and some non-representable numbers.
End of explanation
"""
f1 = lambda x: (1.-np.cos(x))/(np.power(np.sin(x),2))
f2 = lambda x: 1./(1+np.cos(x))
x = np.arange(-10,10,0.1)
plt.figure()
plt.plot(x,f1(x),'-')
plt.grid(True)
plt.show()
"""
Explanation: <div id='func' />
Loss of significance in function evaluation
Back to TOC
Loss of Significance is present too in the representation of functions. A classical example (which you can see in the guide book), is the next function:
$$
\begin{equation}
f_1(x)= \frac{1 - \cos x}{\sin^{2}x}
\end{equation}
$$
Applying trigonometric identities, we can obtain the 'equivalent' function:
$$
\begin{equation}
f_2(x)= \frac{1}{1 + \cos x}
\end{equation}
$$
Both of these functions are apparently equals in exact arithmetic. Nevertheless, its graphics say to us another thing when $x$ is equal to zero.
Before we analize the behaviour about $x=0$, let's take a look at them in the range $x\in[-10,10]$.
End of explanation
"""
plt.figure()
plt.plot(x,f2(x),'-')
plt.grid(True)
plt.show()
"""
Explanation: The first plot shows some spikes, are these expected? or is it an artifact?
Notice that we mean that something it is an artifact when it only appears due to a numerical computation but it should not be there theoretically.
Are these spikes real or not?
End of explanation
"""
x = np.arange(-1,1,0.01)
plt.figure()
plt.plot(x,f1(x),'.',markersize=10)
plt.grid(True)
plt.show()
"""
Explanation: The second function also shows the spikes! It seems they are real.
Actually, they are real!
At which points do we expect them?
Do we expect them at $x=0$?
To answer the last question, we will plot the functions in the range $x\in[-1,1]$.
End of explanation
"""
plt.figure()
plt.plot(x,f2(x),'.',markersize=10)
plt.grid(True)
plt.show()
"""
Explanation: In the previous function, we see an outlier at $x=0$, this point is telling us that $f_1(x)$ at $x=0$ seems to be $0$.
Is this real or it is an artifact?
Let's look at the plot of $f_2(x)$ in the same range.
End of explanation
"""
# This function corresponds to the numerator of f1(x)
f3 = lambda x: (1.-np.cos(x))
# This function corresponds to the denominator of f2(x)
f4 = lambda x: np.power(np.sin(x),2)
x = np.flip(np.logspace(-19,0,20))
o1 = f1(x)
o2 = f2(x)
o3 = f3(x)
o4 = f4(x)
print("x, f1(x), f2(x), f3(x), f4(x)")
for i in np.arange(len(x)):
print("%1.10f, %1.10f, %1.10f, %1.25f, %1.25f" % (x[i],o1[i],o2[i],o3[i],o4[i]))
"""
Explanation: In this case we see a different behavior. Is this the correct one?
Yes!
This happens because when $x$ is equal to zero, the first function has an indetermination, but previously, the computer makes a subtraction between numbers that are almost equals. This generates a loss of significance, turning the expression close to this point to be zero.
However, modifying this expression to make the second function, eliminates this substraction, fixing the error in its calculation when $x=0$.
In conclusion, for us, two representations of a function can be equals in exact arithmetic, but for the computer they can be different!
<div id='another' />
Another analysis (example from textbook)
Back to TOC
The following code tries to explain why $f_1(x)$ gives $0$ near $x=0$.
End of explanation
"""
x = np.logspace(-20,0,40)
plt.figure()
plt.loglog(x,1-np.cos(x),'.',label='$1-\cos(x)$')
plt.loglog(x,0*x+1e-20,'.', label='$10^{-20}$')
plt.grid(True)
plt.xlabel('$x$')
plt.legend(loc='best')
plt.show()
# For this value of x=1e-7 we obtain an outcome greater than 0
to_fps_double(1-np.cos(1.e-7))
# But for x=1e-8 we actually get 0. This explains why in the previous plot the blue dots stop appearing for values less or equal than 1e-8 approximately.
to_fps_double(1-np.cos(1.e-8))
"""
Explanation: From teh previous table, we see that the numerator of $f_1(x)$ becomes $0$. What is happening with $1-\cos(x)$ about $x=0$?
In the following code we study numerically what happens with $1-\cos(x)$ as $x$ tends to $0^+$.
End of explanation
"""
xp = lambda a,b,c: (-b+np.sqrt((b**2)-4*a*c))/(2*a)
print(xp(1,np.power(10.,10),-1))
"""
Explanation: Another example: quadratic formula
Find the positive root from the quadratic formula of:
$$
x^2+10^{10}\,x-1=0.
$$
The positive root is:
$$
x_+ = \dfrac{-10^{10}+\sqrt{(10^{10})^2+4}}{2}
$$
End of explanation
"""
|
harpolea/pyro2 | multigrid/variable_coeff_elliptic.ipynb | bsd-3-clause | %pylab inline
from sympy import init_session
init_session()
alpha = 2.0 + cos(2*pi*x)*cos(2*pi*y)
phi = sin(2*pi*x)*sin(2*pi*y)
"""
Explanation: Variable Coefficient Poisson
Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver.
We want to solve an equation of the form $\nabla \cdot (\alpha \nabla \phi) = f$
Note: it is important for solvability that the RHS, f, integrate to zero over our domain. It seems sufficient to ensure that phi integrates to zero to have this condition met.
End of explanation
"""
phi_x = diff(phi, x)
phi_y = diff(phi, y)
f = diff(alpha*phi_x, x) + diff(alpha*phi_y, y)
f = simplify(f)
f
print(f)
phi.subs(x, 0)
phi.subs(x, 1)
phi.subs(y, 0)
phi.subs(y, 1)
"""
Explanation: we want to compute $\nabla \cdot (\alpha \nabla \phi)$
End of explanation
"""
phi = sin(2*pi*x)*sin(2*pi*y)
phi
alpha = 1.0
beta = 2.0 + cos(2*pi*x)*cos(2*pi*y)
gamma_x = sin(2*pi*x)
gamma_y = sin(2*pi*y)
alpha
beta
gamma_x
gamma_y
phi_x = diff(phi, x)
phi_y = diff(phi, y)
f = alpha*phi + diff(beta*phi_x, x) + diff(beta*phi_y, y) + gamma_x*phi_x + gamma_y*phi_y
f = simplify(f)
f
print(f)
print(alpha)
print(beta)
print(gamma_x)
print (gamma_y)
"""
Explanation: General Elliptic
Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver.
We are solving
$$\alpha \phi + \nabla \cdot (\beta \nabla \phi ) + \gamma \cdot \nabla \phi = f$$
Note: it is important for solvability that the RHS, f, integrate to zero over our domain. It seems sufficient to ensure that phi integrates to zero to have this condition met.
End of explanation
"""
phi = cos(Rational(1,2)*pi*x)*cos(Rational(1,2)*pi*y)
phi
alpha = 10
gamma_x = 1
gamma_y = 1
beta = x*y + 1
phi_x = diff(phi, x)
phi_y = diff(phi, y)
f = alpha*phi + diff(beta*phi_x, x) + diff(beta*phi_y, y) + gamma_x*phi_x + gamma_y*phi_y
f
simplify(f)
"""
Explanation: Inhomogeneous BCs
End of explanation
"""
phi.subs(x,0)
phi.subs(x,1)
phi.subs(y,0)
phi.subs(y,1)
"""
Explanation: boundary conditions
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tensorboard/tensorboard_projector_plugin.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
%load_ext tensorboard
import os
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorboard.plugins import projector
"""
Explanation: TensorBoard の Embedding Projector でデータを視覚化する
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_brank" href="https://www.tensorflow.org/tensorboard/tensorboard_projector_plugin"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/tensorboard_projector_plugin.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tensorboard/tensorboard_projector_plugin.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tensorboard/tensorboard_projector_plugin.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
概要
TensorBoard の Embedding Projector を使用すると、高次元埋め込みをグラフィカルに表現することができます。Embedding レイヤーの視覚化、調査、および理解に役立てられます。
<img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding.jpg?raw=%5C" alt="Screenshot of the embedding projector">
このチュートリアルでは、この種のトレーニング済みのレイヤーを視覚化する方法を学習します。
セットアップ
このチュートリアルでは、TensorBoard を使用して、映画レビューデータを分類するために生成された Embedding レイヤーを視覚化します。
End of explanation
"""
(train_data, test_data), info = tfds.load(
"imdb_reviews/subwords8k",
split=(tfds.Split.TRAIN, tfds.Split.TEST),
with_info=True,
as_supervised=True,
)
encoder = info.features["text"].encoder
# Shuffle and pad the data.
train_batches = train_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
test_batches = test_data.shuffle(1000).padded_batch(
10, padded_shapes=((None,), ())
)
train_batch, train_labels = next(iter(train_batches))
"""
Explanation: IMDB データ
IMDB が提供する、センチメント(肯定的/否定的)でラベル付けされた 25,000 件の映画レビューのデータセットを使用します。レビューは前処理済みであり、それぞれ単語インデックスのシーケンス(整数)としてエンコードされています。便宜上、単語はデータセット内の全体的な頻度によってインデックス付けされてるため、たとえば、整数「3」はデータ内で 3 番目に頻度の高い単語にエンコードされます。このため、「高頻度で使用される上位 10,000 個の単語のみを考慮し、高頻度で使用される上位 20 個を除去する」といったフィルタ操作を素早く行うことができます。
慣例として、「0」は特定の単語を表しませんが、任意の不明な単語をエンコードするのに使用されます。チュートリアルの後の方で、「0」の行を視覚化から取り除きます。
End of explanation
"""
# Create an embedding layer.
embedding_dim = 16
embedding = tf.keras.layers.Embedding(encoder.vocab_size, embedding_dim)
# Configure the embedding layer as part of a keras model.
model = tf.keras.Sequential(
[
embedding, # The embedding layer should be the first layer in a model.
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1),
]
)
# Compile model.
model.compile(
optimizer="adam",
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"],
)
# Train model for one epoch.
history = model.fit(
train_batches, epochs=1, validation_data=test_batches, validation_steps=20
)
"""
Explanation: Keras Embedding レイヤー
Keras Embedding レイヤーは、語彙の各単語に対して埋め込みをトレーニングするために使用できます。各単語(またはこの場合はサブ単語)は、モデルがトレーニングする 16 次元のベクトル(または埋め込み)に関連付けられます。
単語の埋め込みに関する詳細は、このチュートリアルをご覧ください。
End of explanation
"""
# Set up a logs directory, so Tensorboard knows where to look for files.
log_dir='/logs/imdb-example/'
if not os.path.exists(log_dir):
os.makedirs(log_dir)
# Save Labels separately on a line-by-line manner.
with open(os.path.join(log_dir, 'metadata.tsv'), "w") as f:
for subwords in encoder.subwords:
f.write("{}\n".format(subwords))
# Fill in the rest of the labels with "unknown".
for unknown in range(1, encoder.vocab_size - len(encoder.subwords)):
f.write("unknown #{}\n".format(unknown))
# Save the weights we want to analyze as a variable. Note that the first
# value represents any unknown word, which is not in the metadata, here
# we will remove this value.
weights = tf.Variable(model.layers[0].get_weights()[0][1:])
# Create a checkpoint from embedding, the filename and key are the
# name of the tensor.
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))
# Set up config.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
# The name of the tensor will be suffixed by `/.ATTRIBUTES/VARIABLE_VALUE`.
embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(log_dir, config)
# Now run tensorboard against on log data we just saved.
%tensorboard --logdir /logs/imdb-example/
"""
Explanation: TensorBoard 用にデータを保存する
TensorBoard は、tensorflow プロジェクトのログからテンソルとメタデータを読み取ります。ログディレクトリへのパスは、以下のlog_dirで指定されます。このチュートリアルでは、/logs/imdb-example/ を使用します。
データを Tensorboard に読み込むには、モデル内の特定の対象レイヤーの視覚化を可能にするメタデータとともに、トレーニングチェックポイントをそのディレクトリに保存する必要があります。
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | 0.8/notebooks/auto_examples/hyperparameter-optimization.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
"""
Explanation: Tuning a scikit-learn estimator with skopt
Gilles Louppe, July 2016
Katie Malone, August 2016
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
If you are looking for a :obj:sklearn.model_selection.GridSearchCV replacement checkout
sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py instead.
Problem statement
Tuning the hyper-parameters of a machine learning model is often carried out
using an exhaustive exploration of (a subset of) the space all hyper-parameter
configurations (e.g., using :obj:sklearn.model_selection.GridSearchCV), which
often results in a very time consuming operation.
In this notebook, we illustrate how to couple :class:gp_minimize with sklearn's
estimators to tune hyper-parameters using sequential model-based optimisation,
hopefully resulting in equivalent or better solutions, but within less
evaluations.
Note: scikit-optimize provides a dedicated interface for estimator tuning via
:class:BayesSearchCV class which has a similar interface to those of
:obj:sklearn.model_selection.GridSearchCV. This class uses functions of skopt to perform hyperparameter
search efficiently. For example usage of this class, see
sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py
example notebook.
End of explanation
"""
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]
# gradient boosted trees tend to do well on problems like this
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
"""
Explanation: Objective
To tune the hyper-parameters of our model we need to define a model,
decide which parameters to optimize, and define the objective function
we want to minimize.
End of explanation
"""
from skopt.space import Real, Integer
from skopt.utils import use_named_args
# The list of hyper-parameters we want to optimize. For each one we define the
# bounds, the corresponding scikit-learn parameter name, as well as how to
# sample values from that dimension (`'log-uniform'` for the learning rate)
space = [Integer(1, 5, name='max_depth'),
Real(10**-5, 10**0, "log-uniform", name='learning_rate'),
Integer(1, n_features, name='max_features'),
Integer(2, 100, name='min_samples_split'),
Integer(1, 100, name='min_samples_leaf')]
# this decorator allows your objective function to receive a the parameters as
# keyword arguments. This is particularly convenient when you want to set
# scikit-learn estimator parameters
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
"""
Explanation: Next, we need to define the bounds of the dimensions of the search space
we want to explore and pick the objective. In this case the cross-validation
mean absolute error of a gradient boosting regressor over the Boston
dataset, as a function of its hyper-parameters.
End of explanation
"""
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)
"Best score=%.4f" % res_gp.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
res_gp.x[2], res_gp.x[3],
res_gp.x[4]))
"""
Explanation: Optimize all the things!
With these two pieces, we are now ready for sequential model-based
optimisation. Here we use gaussian process-based optimisation.
End of explanation
"""
from skopt.plots import plot_convergence
plot_convergence(res_gp)
"""
Explanation: Convergence plot
End of explanation
"""
|
ysh329/Homework | CS100.1x Introduction to Big Data with Apache Spark/lab2_apache_log_student.ipynb | mit | from pyspark.sql import Row
Person = Row("name", "age")
print Person
ali = Person("Alice", 11)
print ali
help(Row)
import re
import datetime
from pyspark.sql import Row
month_map = {'Jan': 1, 'Feb': 2, 'Mar':3, 'Apr':4, 'May':5, 'Jun':6, 'Jul':7,
'Aug':8, 'Sep': 9, 'Oct':10, 'Nov': 11, 'Dec': 12}
def parse_apache_time(s):
""" Convert Apache time format into a Python datetime object
Args:
s (str): date and time in Apache time format
Returns:
datetime: datetime object (ignore timezone for now)
"""
# 01/Aug/1995:00:00:01
return datetime.datetime(int(s[7:11]),
month_map[s[3:6]],
int(s[0:2]),
int(s[12:14]),
int(s[15:17]),
int(s[18:20]))
def parseApacheLogLine(logline):
""" Parse a line in the Apache Common Log format
Args:
logline (str): a line of text in the Apache Common Log format
Returns:
tuple: either a dictionary containing the parts of the Apache Access Log and 1,
or the original invalid log line and 0
"""
match = re.search(APACHE_ACCESS_LOG_PATTERN, logline)
if match is None:
return (logline, 0)
size_field = match.group(9)
if size_field == '-':
size = long(0)
else:
size = long(match.group(9))
return (Row(
host = match.group(1),
client_identd = match.group(2),
user_id = match.group(3),
date_time = parse_apache_time(match.group(4)),
method = match.group(5),
endpoint = match.group(6),
protocol = match.group(7),
response_code = int(match.group(8)),
content_size = size
), 1)
# A regular expression pattern to extract fields from the log line
APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)'
"""
Explanation: version 1.0.0
+
Web Server Log Analysis with Apache Spark
This lab will demonstrate how easy it is to perform web server log analysis with Apache Spark.
Server log analysis is an ideal use case for Spark. It's a very large, common data source and contains a rich set of information. Spark allows you to store your logs in files on disk cheaply, while still providing a quick and simple way to perform data analysis on them. This homework will show you how to use Apache Spark on real-world text-based production logs and fully harness the power of that data. Log data comes from many sources, such as web, file, and compute servers, application logs, user-generated content, and can be used for monitoring servers, improving business and customer intelligence, building recommendation systems, fraud detection, and much more.
How to complete this assignment
This assignment is broken up into sections with bite-sized examples for demonstrating Spark functionality for log processing. For each problem, you should start by thinking about the algorithm that you will use to efficiently process the log in a parallel, distributed manner. This means using the various RDD operations along with lambda functions that are applied at each worker.
This assignment consists of 4 parts:
Part 1: Apache Web Server Log file format
Part 2: Sample Analyses on the Web Server Log File
Part 3: Analyzing Web Server Log File
Part 4: Exploring 404 Response Codes
Part 1: Apache Web Server Log file format
The log files that we use for this assignment are in the Apache Common Log Format (CLF). The log file entries produced in CLF will look something like this:
127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839
Each part of this log entry is described below.
127.0.0.1
This is the IP address (or host name, if available) of the client (remote host) which made the request to the server.
-
The "hyphen" in the output indicates that the requested piece of information (user identity from remote machine) is not available.
-
The "hyphen" in the output indicates that the requested piece of information (user identity from local logon) is not available.
[01/Aug/1995:00:00:01 -0400]
The time that the server finished processing the request. The format is:
[day/month/year:hour:minute:second timezone]
* ####day = 2 digits
* ####month = 3 letters
* ####year = 4 digits
* ####hour = 2 digits
* ####minute = 2 digits
* ####second = 2 digits
* ####zone = (+ | -) 4 digits
"GET /images/launch-logo.gif HTTP/1.0"
This is the first line of the request string from the client. It consists of a three components: the request method (e.g., GET, POST, etc.), the endpoint (a Uniform Resource Identifier), and the client protocol version.
200
This is the status code that the server sends back to the client. This information is very valuable, because it reveals whether the request resulted in a successful response (codes beginning in 2), a redirection (codes beginning in 3), an error caused by the client (codes beginning in 4), or an error in the server (codes beginning in 5). The full list of possible status codes can be found in the HTTP specification (RFC 2616 section 10).
1839
The last entry indicates the size of the object returned to the client, not including the response headers. If no content was returned to the client, this value will be "-" (or sometimes 0).
Note that log files contain information supplied directly by the client, without escaping. Therefore, it is possible for malicious clients to insert control-characters in the log files, so care must be taken in dealing with raw logs.
NASA-HTTP Web Server Log
For this assignment, we will use a data set from NASA Kennedy Space Center WWW server in Florida. The full data set is freely available (http://ita.ee.lbl.gov/html/contrib/NASA-HTTP.html) and contains two month's of all HTTP requests. We are using a subset that only contains several days worth of requests.
(1a) Parsing Each Log Line
Using the CLF as defined above, we create a regular expression pattern to extract the nine fields of the log line using the Python regular expression search function. The function returns a pair consisting of a Row object and 1. If the log line fails to match the regular expression, the function returns a pair consisting of the log line string and 0. A '-' value in the content size field is cleaned up by substituting it with 0. The function converts the log line's date string into a Python datetime object using the given parse_apache_time function.
End of explanation
"""
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT')
logFile = os.path.join(baseDir, inputPath)
print baseDir
print inputPath
print logFile
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT')
logFile = os.path.join(baseDir, inputPath)
def parseLogs():
""" Read and parse log file """
parsed_logs = (sc
.textFile(logFile)
.map(parseApacheLogLine)
.cache())
access_logs = (parsed_logs
.filter(lambda s: s[1] == 1)
.map(lambda s: s[0])
.cache())
failed_logs = (parsed_logs
.filter(lambda s: s[1] == 0)
.map(lambda s: s[0]))
failed_logs_count = failed_logs.count()
if failed_logs_count > 0:
print 'Number of invalid logline: %d' % failed_logs.count()
for line in failed_logs.take(20):
print 'Invalid logline: %s' % line
print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (parsed_logs.count(), access_logs.count(), failed_logs.count())
return parsed_logs, access_logs, failed_logs
parsed_logs, access_logs, failed_logs = parseLogs()
"""
Explanation: (1b) Configuration and Initial RDD Creation
We are ready to specify the input log file and create an RDD containing the parsed log file data. The log file has already been downloaded for you.
To create the primary RDD that we'll use in the rest of this assignment, we first load the text file using sc.textfile(logFile) to convert each line of the file into an element in an RDD.
Next, we use map(parseApacheLogLine) to apply the parse function to each element (that is, a line from the log file) in the RDD and turn each line into a pair Row object.
Finally, we cache the RDD in memory since we'll use it throughout this notebook.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
#This was originally '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)'
APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)'
parsed_logs, access_logs, failed_logs = parseLogs()
# TEST Data cleaning (1c)
Test.assertEquals(failed_logs.count(), 0, 'incorrect failed_logs.count()')
Test.assertEquals(parsed_logs.count(), 1043177 , 'incorrect parsed_logs.count()')
Test.assertEquals(access_logs.count(), parsed_logs.count(), 'incorrect access_logs.count()')
print failed_logs.take(1)
print parsed_logs.take(1)
print access_logs.take(1)
"""
Explanation: (1c) Data Cleaning
Notice that there are a large number of log lines that failed to parse. Examine the sample of invalid lines and compare them to the correctly parsed line, an example is included below. Based on your observations, alter the APACHE_ACCESS_LOG_PATTERN regular expression below so that the failed lines will correctly parse, and press Shift-Enter to rerun parseLogs().
127.0.0.1 - - [01/Aug/1995:00:00:01 -0400] "GET /images/launch-logo.gif HTTP/1.0" 200 1839
If you not familar with Python regular expression search function, now would be a good time to check up on the documentation. One tip that might be useful is to use an online tester like http://pythex.org or http://www.pythonregex.com. To use it, copy and paste the regular expression string below (located between the single quotes ') and test it against one of the 'Invalid logline' above.
End of explanation
"""
print "access_logs.count():", access_logs.count()
print "access_logs.take(1):", access_logs.take(1)
print "access_logs.take(1)[0]:", access_logs.take(1)[0]
print "type(access_logs.take(1)[0]):", type(access_logs.take(1)[0])
a = Row(name = "Zhang", age = 19)
print type(a)
print a
print a.name
print type(a.name)
a = sc.parallelize([1,2,3,4])
print a.collect()
# Calculate statistics based on the content size.
content_sizes = access_logs.map(lambda log: log.content_size).cache()
print 'Content Size Avg: %i, Min: %i, Max: %s' % (
content_sizes.reduce(lambda a, b : a + b) / content_sizes.count(),
content_sizes.min(),
content_sizes.max())
"""
Explanation: Part 2: Sample Analyses on the Web Server Log File
Now that we have an RDD containing the log file as a set of Row objects, we can perform various analyses.
(2a) Example: Content Size Statistics
Let's compute some statistics about the sizes of content being returned by the web server. In particular, we'd like to know what are the average, minimum, and maximum content sizes.
We can compute the statistics by applying a map to the access_logs RDD. The lambda function we want for the map is to extract the content_size field from the RDD. The map produces a new RDD containing only the content_sizes (one element for each Row object in the access_logs RDD). To compute the minimum and maximum statistics, we can use min() and max() functions on the new RDD. We can compute the average statistic by using the reduce function with a lambda function that sums the two inputs, which represent two elements from the new RDD that are being reduced together. The result of the reduce() is the total content size from the log and it is to be divided by the number of requests as determined using the count() function on the new RDD.
End of explanation
"""
# Response Code to Count
responseCodeToCount = (access_logs
.map(lambda log: (log.response_code, 1))
.reduceByKey(lambda a, b : a + b)
.cache())
responseCodeToCountList = responseCodeToCount.take(100)
print 'Found %d response codes' % len(responseCodeToCountList)
print 'Response Code Counts: %s' % responseCodeToCountList
assert len(responseCodeToCountList) == 7
assert sorted(responseCodeToCountList) == [(200, 940847), (302, 16244), (304, 79824), (403, 58), (404, 6185), (500, 2), (501, 17)]
"""
Explanation: (2b) Example: Response Code Analysis
Next, lets look at the response codes that appear in the log. As with the content size analysis, first we create a new RDD by using a lambda function to extract the response_code field from the access_logs RDD. The difference here is that we will use a pair tuple instead of just the field itself. Using a pair tuple consisting of the response code and 1 will let us count how many records have a particular response code. Using the new RDD, we perform a reduceByKey function. reduceByKey performs a reduce on a per-key basis by applying the lambda function to each element, pairwise with the same key. We use the simple lambda function of adding the two values. Then, we cache the resulting RDD and create a list by using the take function.
End of explanation
"""
labels = responseCodeToCount.map(lambda (x, y): x).collect()
print labels
count = access_logs.count()
fracs = responseCodeToCount.map(lambda (x, y): (float(y) / count)).collect()
print fracs
import matplotlib.pyplot as plt
def pie_pct_format(value):
""" Determine the appropriate format string for the pie chart percentage label
Args:
value: value of the pie slice
Returns:
str: formated string label; if the slice is too small to fit, returns an empty string for label
"""
return '' if value < 7 else '%.0f%%' % value
fig = plt.figure(figsize=(4.5, 4.5), facecolor='white', edgecolor='white')
colors = ['yellowgreen', 'lightskyblue', 'gold', 'purple', 'lightcoral', 'yellow', 'black']
explode = (0.05, 0.05, 0.1, 0, 0, 0, 0)
patches, texts, autotexts = plt.pie(fracs, labels=labels, colors=colors,
explode=explode, autopct=pie_pct_format,
shadow=False, startangle=125)
for text, autotext in zip(texts, autotexts):
if autotext.get_text() == '':
text.set_text('') # If the slice is small to fit, don't show a text label
plt.legend(labels, loc=(0.80, -0.1), shadow=True)
pass
"""
Explanation: (2c) Example: Response Code Graphing with matplotlib
Now, lets visualize the results from the last example. We can visualize the results from the last example using matplotlib. First we need to extract the labels and fractions for the graph. We do this with two separate map functions with a lambda functions. The first map function extracts a list of of the response code values, and the second map function extracts a list of the per response code counts divided by the total size of the access logs. Next, we create a figure with figure() constructor and use the pie() method to create the pie plot.
End of explanation
"""
# Any hosts that has accessed the server more than 10 times.
hostCountPairTuple = access_logs.map(lambda log: (log.host, 1))
hostSum = hostCountPairTuple.reduceByKey(lambda a, b : a + b)
hostMoreThan10 = hostSum.filter(lambda s: s[1] > 10)
hostsPick20 = (hostMoreThan10
.map(lambda s: s[0])
.take(20))
print 'Any 20 hosts that have accessed more then 10 times: %s' % hostsPick20
# An example: [u'204.120.34.185', u'204.243.249.9', u'slip1-32.acs.ohio-state.edu', u'lapdog-14.baylor.edu', u'199.77.67.3', u'gs1.cs.ttu.edu', u'haskell.limbex.com', u'alfred.uib.no', u'146.129.66.31', u'manaus.bologna.maraut.it', u'dialup98-110.swipnet.se', u'slip-ppp02.feldspar.com', u'ad03-053.compuserve.com', u'srawlin.opsys.nwa.com', u'199.202.200.52', u'ix-den7-23.ix.netcom.com', u'151.99.247.114', u'w20-575-104.mit.edu', u'205.25.227.20', u'ns.rmc.com']
"""
Explanation: (2d) Example: Frequent Hosts
Let's look at hosts that have accessed the server multiple times (e.g., more than ten times). As with the response code analysis in (2b), first we create a new RDD by using a lambda function to extract the host field from the access_logs RDD using a pair tuple consisting of the host and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then filter the result based on the count of accesses by each host (the second element of each pair) being greater than ten. Next, we extract the host name by performing a map with a lambda function that returns the first element of each pair. Finally, we extract 20 elements from the resulting RDD - note that the choice of which elements are returned is not guaranteed to be deterministic.
End of explanation
"""
endpoints = (access_logs
.map(lambda log: (log.endpoint, 1))
.reduceByKey(lambda a, b : a + b)
.cache())
ends = endpoints.map(lambda (x, y): x).collect()
counts = endpoints.map(lambda (x, y): y).collect()
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, len(ends), 0, max(counts)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Endpoints')
plt.ylabel('Number of Hits')
plt.plot(counts)
pass
"""
Explanation: (2e) Example: Visualizing Endpoints
Now, lets visualize the number of hits to endpoints (URIs) in the log. To perform this task, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD using a pair tuple consisting of the endpoint and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then cache the results.
Next we visualize the results using matplotlib. We previously imported the matplotlib.pyplot library, so we do not need to import it again. We perform two separate map functions with lambda functions. The first map function extracts a list of endpoint values, and the second map function extracts a list of the visits per endpoint values. Next, we create a figure with figure() constructor, set various features of the plot (axis limits, grid lines, and labels), and use the plot() method to create the line plot.
End of explanation
"""
# Top Endpoints
endpointCounts = (access_logs
.map(lambda log: (log.endpoint, 1))
.reduceByKey(lambda a, b : a + b))
topEndpoints = endpointCounts.takeOrdered(10, lambda s: -1 * s[1])
print 'Top Ten Endpoints: %s' % topEndpoints
assert topEndpoints == [(u'/images/NASA-logosmall.gif', 59737), (u'/images/KSC-logosmall.gif', 50452), (u'/images/MOSAIC-logosmall.gif', 43890), (u'/images/USA-logosmall.gif', 43664), (u'/images/WORLD-logosmall.gif', 43277), (u'/images/ksclogo-medium.gif', 41336), (u'/ksc.html', 28582), (u'/history/apollo/images/apollo-logo1.gif', 26778), (u'/images/launch-logo.gif', 24755), (u'/', 20292)], 'incorrect Top Ten Endpoints'
"""
Explanation: (2f) Example: Top Endpoints
For the final example, we'll look at the top endpoints (URIs) in the log. To determine them, we first create a new RDD by using a lambda function to extract the endpoint field from the access_logs RDD using a pair tuple consisting of the endpoint and 1 which will let us count how many records were created by a particular host's request. Using the new RDD, we perform a reduceByKey function with a lambda function that adds the two values. We then extract the top ten endpoints by performing a takeOrdered with a value of 10 and a lambda function that multiplies the count (the second element of each pair) by -1 to create a sorted list with the top endpoints at the bottom.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
# HINT: Each of these <FILL IN> below could be completed with a single transformation or action.
# You are welcome to structure your solution in a different way, so long as
# you ensure the variables used in the next Test section are defined (ie. endpointSum, topTenErrURLs).
not200 = access_logs.filter(lambda log : log.response_code != 200)
endpointCountPairTuple = not200.map(lambda log: (log.endpoint, 1))
endpointSum = endpointCountPairTuple.reduceByKey(lambda a, b : a + b).cache()
topTenErrURLs = endpointSum.takeOrdered(10, lambda s: -1 * s[1])
print 'Top Ten failed URLs: %s' % topTenErrURLs
# TEST Top ten error endpoints (3a)
Test.assertEquals(endpointSum.count(), 7689, 'incorrect count for endpointSum')
Test.assertEquals(topTenErrURLs, [(u'/images/NASA-logosmall.gif', 8761), (u'/images/KSC-logosmall.gif', 7236), (u'/images/MOSAIC-logosmall.gif', 5197), (u'/images/USA-logosmall.gif', 5157), (u'/images/WORLD-logosmall.gif', 5020), (u'/images/ksclogo-medium.gif', 4728), (u'/history/apollo/images/apollo-logo1.gif', 2907), (u'/images/launch-logo.gif', 2811), (u'/', 2199), (u'/images/ksclogosmall.gif', 1622)], 'incorrect Top Ten failed URLs (topTenErrURLs)')
"""
Explanation: Part 3: Analyzing Web Server Log File
Now it is your turn to perform analyses on web server log files.
(3a) Exercise: Top Ten Error Endpoints
What are the top ten endpoints which did not have return code 200? Create a sorted list containing top ten endpoints and the number of times that they were accessed with non-200 return code.
Think about the steps that you need to perform to determine which endpoints did not have a 200 return code, how you will uniquely count those endpoints, and sort the list.
You might want to refer back to the previous Lab (Lab 1 Word Count) for insights.
End of explanation
"""
print access_logs.take(1)[0]
# TODO: Replace <FILL IN> with appropriate code
# HINT: Do you recall the tips from (3a)? Each of these <FILL IN> could be an transformation or action.
hosts = access_logs.map(lambda log: (log.host , 1))
uniqueHosts = hosts.reduceByKey(lambda a, b: a + b).cache()
uniqueHostCount = uniqueHosts.count()
print 'Unique hosts: %d' % uniqueHostCount
# TEST Number of unique hosts (3b)
Test.assertEquals(uniqueHostCount, 54507, 'incorrect uniqueHostCount')
"""
Explanation: (3b) Exercise: Number of Unique Hosts
How many unique hosts are there in the entire log?
Think about the steps that you need to perform to count the number of different hosts in the log.
End of explanation
"""
aa = datetime.datetime(2015,8,28,19,50,6,)
print aa
print
print aa.day
# TODO: Replace <FILL IN> with appropriate code
# (log.date_time.date().day,log.host),1)中(log.date_time.date().day, log.host)作为key
# 1作为value
dayToHostPairTuple = access_logs.map(lambda log : ((log.date_time.date().day, log.host), 1) )
# (log.date_time.date().day, log.host)
dayGroupedHosts = dayToHostPairTuple.reduceByKey(lambda a, b: a + b).map(lambda a : a[0]).cache()
# (log.date_time.date().day, 1)
dayHostCount = dayGroupedHosts.map(lambda a: (a[0], 1))
dailyHosts = dayHostCount.reduceByKey(lambda a, b : a + b).sortByKey().cache()
dailyHostsList = dailyHosts.take(30)
print 'Unique hosts per day: %s' % dailyHostsList
# TEST Number of unique daily hosts (3c)
Test.assertEquals(dailyHosts.count(), 21, 'incorrect dailyHosts.count()')
Test.assertEquals(dailyHostsList, [(1, 2582), (3, 3222), (4, 4190), (5, 2502), (6, 2537), (7, 4106), (8, 4406), (9, 4317), (10, 4523), (11, 4346), (12, 2864), (13, 2650), (14, 4454), (15, 4214), (16, 4340), (17, 4385), (18, 4168), (19, 2550), (20, 2560), (21, 4134), (22, 4456)], 'incorrect dailyHostsList')
Test.assertTrue(dailyHosts.is_cached, 'incorrect dailyHosts.is_cached')
"""
Explanation: (3c) Exercise: Number of Unique Daily Hosts
For an advanced exercise, let's determine the number of unique hosts in the entire log on a day-by-day basis. This computation will give us counts of the number of unique daily hosts. We'd like a list sorted by increasing day of the month which includes the day of the month and the associated number of unique hosts for that day. Make sure you cache the resulting RDD dailyHosts so that we can reuse it in the next exercise.
Think about the steps that you need to perform to count the number of different hosts that make requests each day.
Since the log only covers a single month, you can ignore the month.
End of explanation
"""
print "dailyHosts.count():", dailyHosts.count()
print "dailyHosts.take(1):", dailyHosts.take(1)
print "dailyHosts.collect():", dailyHosts.collect()
# TODO: Replace <FILL IN> with appropriate code
daysWithHosts = dailyHosts.map(lambda a : a[0]).collect()
hosts = dailyHosts.map(lambda a : a[1]).collect()
print daysWithHosts
print hosts
a = [10, 17, 12, 14]
print a
a.remove(17)
print a
# TEST Visualizing unique daily hosts (3d)
test_days = range(1, 23)
test_days.remove(2)
Test.assertEquals(daysWithHosts, test_days, 'incorrect days')
Test.assertEquals(hosts, [2582, 3222, 4190, 2502, 2537, 4106, 4406, 4317, 4523, 4346, 2864, 2650, 4454, 4214, 4340, 4385, 4168, 2550, 2560, 4134, 4456], 'incorrect hosts')
fig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white')
plt.axis([min(daysWithHosts), max(daysWithHosts), 0, max(hosts)+500])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Hosts')
plt.plot(daysWithHosts, hosts)
pass
"""
Explanation: (3d) Exercise: Visualizing the Number of Unique Daily Hosts
Using the results from the previous exercise, use matplotlib to plot a "Line" graph of the unique hosts requests by day.
daysWithHosts should be a list of days and hosts should be a list of number of unique hosts for each corresponding day.
* How could you convert a RDD into a list? See the collect() method*
End of explanation
"""
a = datetime.datetime(1,2,3,4,5,6)
print "a:", a
aa = datetime.datetime(1,2,3,4,5,6).date()
print "aa:", aa
aaa = datetime.datetime(1,2,3,4,5,6).date().day
print "aaa:", aaa
# TODO: Replace <FILL IN> with appropriate code
print "access_logs.take(3):", access_logs.take(3)
print
# ( (log.date_time.date().day, log.host), 1 )
dayAndHostTuple = access_logs.map(lambda log : ((log.date_time.date().day, log.host), 1) )
print "dayAndHostTuple.take(3):", dayAndHostTuple.take(3)
print
# (log.date_time.date().day, (1, (log.date_time.date().day, log.host)) )
groupedByDay = dayAndHostTuple.reduceByKey(lambda a, b : a + b ).map(lambda a :(a[0][0],(1,a[1])))
sortedByDay = groupedByDay.reduceByKey(lambda a, b :(a[0] + b[0] , a[1]+b[1])).sortByKey()
print "sortedByDay.take(2):", sortedByDay.take(2)
print
avgDailyReqPerHost = sortedByDay.map(lambda a :(a[0], a[1][1]/a[1][0])).cache()
avgDailyReqPerHostList = avgDailyReqPerHost.take(30)
print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList
# TEST Average number of daily requests per hosts (3e)
Test.assertEquals(avgDailyReqPerHostList, [(1, 13), (3, 12), (4, 14), (5, 12), (6, 12), (7, 13), (8, 13), (9, 14), (10, 13), (11, 14), (12, 13), (13, 13), (14, 13), (15, 13), (16, 13), (17, 13), (18, 13), (19, 12), (20, 12), (21, 13), (22, 12)], 'incorrect avgDailyReqPerHostList')
Test.assertTrue(avgDailyReqPerHost.is_cached, 'incorrect avgDailyReqPerHost.is_cache')
"""
Explanation: (3e) Exercise: Average Number of Daily Requests per Hosts
Next, let's determine the average number of requests on a day-by-day basis. We'd like a list by increasing day of the month and the associated average number of requests per host for that day. Make sure you cache the resulting RDD avgDailyReqPerHost so that we can reuse it in the next exercise.
To compute the average number of requests per host, get the total number of request across all hosts and divide that by the number of unique hosts.
Since the log only covers a single month, you can skip checking for the month.
Also to keep it simple, when calculating the approximate average use the integer value - you do not need to upcast to float
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList
daysWithAvg = avgDailyReqPerHost.map(lambda a : a[0]).collect()
avgs = avgDailyReqPerHost.map(lambda a : a[1]).collect()
# TEST Average Daily Requests per Unique Host (3f)
Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect days')
Test.assertEquals(avgs, [13, 12, 14, 12, 12, 13, 13, 14, 13, 14, 13, 13, 13, 13, 13, 13, 13, 12, 12, 13, 12], 'incorrect avgs')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithAvg), 0, max(avgs)+2])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('Average')
plt.plot(daysWithAvg, avgs)
pass
"""
Explanation: (3f) Exercise: Visualizing the Average Daily Requests per Unique Host
Using the result avgDailyReqPerHost from the previous exercise, use matplotlib to plot a "Line" graph of the average daily requests per unique host by day.
daysWithAvg should be a list of days and avgs should be a list of average daily requests per unique hosts for each corresponding day.
End of explanation
"""
print access_logs.take(1)
# TODO: Replace <FILL IN> with appropriate code
badRecords = (access_logs
.filter(lambda log : log.response_code == 404)
).cache()
print 'Found %d 404 URLs' % badRecords.count()
# TEST Counting 404 (4a)
Test.assertEquals(badRecords.count(), 6185, 'incorrect badRecords.count()')
Test.assertTrue(badRecords.is_cached, 'incorrect badRecords.is_cached')
"""
Explanation: Part 4: Exploring 404 Response Codes
Let's drill down and explore the error 404 response code records. 404 errors are returned when an endpoint is not found by the server (i.e., a missing page or object).
(4a) Exercise: Counting 404 Response Codes
Create a RDD containing only log records with a 404 response code. Make sure you cache() the RDD badRecords as we will use it in the rest of this exercise.
How many 404 records are in the log?
End of explanation
"""
print badRecords.take(1)
# TODO: Replace <FILL IN> with appropriate code
badEndpoints = badRecords.map(lambda log : (log.endpoint, 1))
badUniqueEndpoints = badEndpoints.reduceByKey(lambda a , b : a + b).map(lambda a : a[0])
print "badUniqueEndpoints.count():", badUniqueEndpoints.count()
print
badUniqueEndpointsPick40 = badUniqueEndpoints.take(40)
print '404 URLS: %s' % badUniqueEndpointsPick40
# TEST Listing 404 records (4b)
badUniqueEndpointsSet40 = set(badUniqueEndpointsPick40)
Test.assertEquals(len(badUniqueEndpointsSet40), 40, 'badUniqueEndpointsPick40 not distinct')
"""
Explanation: (4b) Exercise: Listing 404 Response Code Records
Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list up to 40 distinct endpoints that generate 404 errors - no endpoint should appear more than once in your list.
End of explanation
"""
print badRecords.take(1)
# TODO: Replace <FILL IN> with appropriate code
badEndpointsCountPairTuple = badRecords.map(lambda log : (log.endpoint, 1))
badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda a , b: a + b).cache()
print "badEndP=pointsSum.count():", badEndpointsSum.count()
print
badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda (a, b): -b)
print 'Top Twenty 404 URLs: %s' % badEndpointsTop20
# TEST Top twenty 404 URLs (4c)
Test.assertEquals(badEndpointsTop20, [(u'/pub/winvn/readme.txt', 633), (u'/pub/winvn/release.txt', 494), (u'/shuttle/missions/STS-69/mission-STS-69.html', 431), (u'/images/nasa-logo.gif', 319), (u'/elv/DELTA/uncons.htm', 178), (u'/shuttle/missions/sts-68/ksc-upclose.gif', 156), (u'/history/apollo/sa-1/sa-1-patch-small.gif', 146), (u'/images/crawlerway-logo.gif', 120), (u'/://spacelink.msfc.nasa.gov', 117), (u'/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif', 100), (u'/history/apollo/a-001/a-001-patch-small.gif', 97), (u'/images/Nasa-logo.gif', 85), (u'/shuttle/resources/orbiters/atlantis.gif', 64), (u'/history/apollo/images/little-joe.jpg', 62), (u'/images/lf-logo.gif', 59), (u'/shuttle/resources/orbiters/discovery.gif', 56), (u'/shuttle/resources/orbiters/challenger.gif', 54), (u'/robots.txt', 53), (u'/elv/new01.gif>', 43), (u'/history/apollo/pad-abort-test-2/pad-abort-test-2-patch-small.gif', 38)], 'incorrect badEndpointsTop20')
"""
Explanation: (4c) Exercise: Listing the Top Twenty 404 Response Code Endpoints
Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty endpoints that generate the most 404 errors.
Remember, top endpoints should be in sorted order
End of explanation
"""
print "badRecords.take(1):", badRecords.take(1)
# TODO: Replace <FILL IN> with appropriate code
errHostsCountPairTuple = badRecords.map(lambda log : (log.host ,1))
errHostsSum = errHostsCountPairTuple.reduceByKey(lambda a, b: a + b).cache()
errHostsTop25 = errHostsSum.takeOrdered(25, key = lambda (a , b): -b)
print 'Top 25 hosts that generated errors: %s' % errHostsTop25
# TEST Top twenty-five 404 response code hosts (4d)
Test.assertEquals(len(errHostsTop25), 25, 'length of errHostsTop25 is not 25')
Test.assertEquals(len(set(errHostsTop25) - set([(u'maz3.maz.net', 39), (u'piweba3y.prodigy.com', 39), (u'gate.barr.com', 38), (u'm38-370-9.mit.edu', 37), (u'ts8-1.westwood.ts.ucla.edu', 37), (u'nexus.mlckew.edu.au', 37), (u'204.62.245.32', 33), (u'163.206.104.34', 27), (u'spica.sci.isas.ac.jp', 27), (u'www-d4.proxy.aol.com', 26), (u'www-c4.proxy.aol.com', 25), (u'203.13.168.24', 25), (u'203.13.168.17', 25), (u'internet-gw.watson.ibm.com', 24), (u'scooter.pa-x.dec.com', 23), (u'crl5.crl.com', 23), (u'piweba5y.prodigy.com', 23), (u'onramp2-9.onr.com', 22), (u'slip145-189.ut.nl.ibm.net', 22), (u'198.40.25.102.sap2.artic.edu', 21), (u'gn2.getnet.com', 20), (u'msp1-16.nas.mr.net', 20), (u'isou24.vilspa.esa.es', 19), (u'dial055.mbnet.mb.ca', 19), (u'tigger.nashscene.com', 19)])), 0, 'incorrect errHostsTop25')
"""
Explanation: (4d) Exercise: Listing the Top Twenty-five 404 Response Code Hosts
Instead of looking at the endpoints that generated 404 errors, let's look at the hosts that encountered 404 errors. Using the RDD containing only log records with a 404 response code that you cached in part (4a), print out a list of the top twenty-five hosts that generate the most 404 errors.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
errDateCountPairTuple = badRecords.map(lambda log : (log.date_time.date().day, 1))
errDateSum = errDateCountPairTuple.reduceByKey(lambda a , b: a + b).cache()
errDateSorted = errDateSum.sortByKey().cache()
errByDate = errDateSorted.take(30)
print '404 Errors by day: %s' % errByDate
# TEST 404 response codes per day (4e)
Test.assertEquals(errByDate, [(1, 243), (3, 303), (4, 346), (5, 234), (6, 372), (7, 532), (8, 381), (9, 279), (10, 314), (11, 263), (12, 195), (13, 216), (14, 287), (15, 326), (16, 258), (17, 269), (18, 255), (19, 207), (20, 312), (21, 305), (22, 288)], 'incorrect errByDate')
Test.assertTrue(errDateSorted.is_cached, 'incorrect errDateSorted.is_cached')
"""
Explanation: (4e) Exercise: Listing 404 Response Codes per Day
Let's explore the 404 records temporally. Break down the 404 requests by day (cache() the RDD errDateSorted) and get the daily counts sorted by day as a list.
Since the log only covers a single month, you can ignore the month in your checks.
End of explanation
"""
print "errDateSorted.take(1):", errDateSorted.take(1)
print
print "errDateSorted.count():", errDateSorted.count()
print
print "errDateSorted.collect():", errDateSorted.collect()
# TODO: Replace <FILL IN> with appropriate code
daysWithErrors404 = errDateSorted.map(lambda a : a[0]).collect()
print "daysWithErrors404:", daysWithErrors404
print
errors404ByDay = errDateSorted.map(lambda a : a[1]).collect()
print "errors404ByDay:", errors404ByDay
# TEST Visualizing the 404 Response Codes by Day (4f)
Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect daysWithErrors404')
Test.assertEquals(errors404ByDay, [243, 303, 346, 234, 372, 532, 381, 279, 314, 263, 195, 216, 287, 326, 258, 269, 255, 207, 312, 305, 288], 'incorrect errors404ByDay')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(daysWithErrors404), 0, max(errors404ByDay)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Day')
plt.ylabel('404 Errors')
plt.plot(daysWithErrors404, errors404ByDay)
pass
"""
Explanation: (4f) Exercise: Visualizing the 404 Response Codes by Day
Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by day.
End of explanation
"""
print errDateSorted.collect()
# TODO: Replace <FILL IN> with appropriate code
topErrDate = errDateSorted.takeOrdered(5 , lambda (a, b): -b)
print 'Top Five dates for 404 requests: %s' % topErrDate
# TEST Five dates for 404 requests (4g)
Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate')
"""
Explanation: (4g) Exercise: Top Five Days for 404 Response Codes
Using the RDD errDateSorted you cached in the part (4e), what are the top five days for 404 response codes and the corresponding counts of 404 response codes?
End of explanation
"""
print "badRecords.count():", badRecords.count()
print
print "badRecords.take(1):", badRecords.take(1)
# TODO: Replace <FILL IN> with appropriate code
hourCountPairTuple = badRecords.map(lambda a : (a.date_time.hour , 1))
hourRecordsSum = hourCountPairTuple.reduceByKey(lambda a , b: a + b)
hourRecordsSorted = hourRecordsSum.sortByKey().cache()
errHourList = hourRecordsSorted.take(24)
print 'Top hours for 404 requests: %s' % errHourList
# TEST Hourly 404 response codes (4h)
Test.assertEquals(errHourList, [(0, 175), (1, 171), (2, 422), (3, 272), (4, 102), (5, 95), (6, 93), (7, 122), (8, 199), (9, 185), (10, 329), (11, 263), (12, 438), (13, 397), (14, 318), (15, 347), (16, 373), (17, 330), (18, 268), (19, 269), (20, 270), (21, 241), (22, 234), (23, 272)], 'incorrect errHourList')
Test.assertTrue(hourRecordsSorted.is_cached, 'incorrect hourRecordsSorted.is_cached')
"""
Explanation: (4h) Exercise: Hourly 404 Response Codes
Using the RDD badRecords you cached in the part (4a) and by hour of the day and in decreasing order, create an RDD containing how many requests had a 404 return code for each hour of the day. Cache the resulting RDD hourRecordsSorted and print that as a list.
End of explanation
"""
# TODO: Replace <FILL IN> with appropriate code
hoursWithErrors404 = hourRecordsSorted.map(lambda a : a[0]).collect()
print "hoursWithErrors404:", hoursWithErrors404
print
errors404ByHours = hourRecordsSorted.map(lambda a : a[1]).collect()
print "errors404ByHours:", errors404ByHours
# TEST Visualizing the 404 Response Codes by Hour (4i)
Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'incorrect hoursWithErrors404')
Test.assertEquals(errors404ByHours, [175, 171, 422, 272, 102, 95, 93, 122, 199, 185, 329, 263, 438, 397, 318, 347, 373, 330, 268, 269, 270, 241, 234, 272], 'incorrect errors404ByHours')
fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white')
plt.axis([0, max(hoursWithErrors404), 0, max(errors404ByHours)])
plt.grid(b=True, which='major', axis='y')
plt.xlabel('Hour')
plt.ylabel('404 Errors')
plt.plot(hoursWithErrors404, errors404ByHours)
pass
"""
Explanation: (4i) Exercise: Visualizing the 404 Response Codes by Hour
Using the results from the previous exercise, use matplotlib to plot a "Line" or "Bar" graph of the 404 response codes by hour.
End of explanation
"""
|
f-guitart/data_mining | notes/97 - Approaximate String Matching.ipynb | gpl-3.0 | import pandas as pd
names = pd.DataFrame({"name" : ["Alice","Bob","Charlie","Dennis"],
"surname" : ["Doe","Smith","Sheen","Quaid"]})
names
names.name.str.match("A\w+")
debts = pd.DataFrame({"debtor":["D.Quaid","C.Sheen"],
"amount":[100,10000]})
debts
"""
Explanation: String Matching
The idea of string matching is to find strings that match a given pattern. We have seen that Pandas provides some useful functions to do that job.
End of explanation
"""
debts["surname"] = debts.debtor.str.extract("\w+\.(\w+)")
debts
"""
Explanation: Imagine I want to have a list of my friends with the amount of money I borrowed to each other, toghether with their names and surnames.
End of explanation
"""
names.merge(debts, left_on="surname", right_on="surname", how="left")
names.merge(debts, left_on="surname", right_on="surname", how="right")
names.merge(debts, left_on="surname", right_on="surname", how="inner")
names.merge(debts, left_on="surname", right_on="surname", how="outer")
names.merge(debts, left_index=True, right_index=True, how="left")
"""
Explanation: To merge two dataframes we can use merge function: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html. Merge DataFrame objects by performing a database-style join operation by columns or indexes.
We can use merge in two different ways (very simmilar):
* using the DataFrame method: left_df.merge(right_df, ...)
* using the Pandas function: pd.merge(left_df, right_df, ...)
Common parameters
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘inner’
left: use only keys from left frame, similar to a SQL left outer join; preserve key order
right: use only keys from right frame, similar to a SQL right outer join; preserve key order
outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically
inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys
on : label or list
Field names to join on. Must be found in both DataFrames. If on is None and not merging on indexes, then it merges on the intersection of the columns by default.
left_on : label or list, or array-like
Field names to join on in left DataFrame. Can be a vector or list of vectors of the length of the DataFrame to use a particular vector as the join key instead of columns
right_on : label or list, or array-like
Field names to join on in right DataFrame or vector/list of vectors per left_on docs
left_index : boolean, default False
Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the number of keys in the other DataFrame (either the index or a number of columns) must match the number of levels
right_index : boolean, default False
Use the index from the right DataFrame as the join key. Same caveats as left_index
End of explanation
"""
lat_lon_mun = pd.read_excel("lat_lon_municipalities.xls", skiprows=2)
lat_lon_mun.head()
mun_codes = pd.read_excel("11codmun.xls", encoding="latin1", skiprows=1)
mun_codes.head()
lat_lon_mun[lat_lon_mun["Población"].str.match(".*anaria.*")]
mun_codes[mun_codes["NOMBRE"].str.match(".*anaria.*")]
"Valsequillo de Gran Canaria" == "Valsequillo de Gran Canaria"
"Palmas de Gran Canaria (Las)" == "Palmas de Gran Canaria, Las"
"""
Explanation: Approximate String Matching
Fuzzy String Matching, also called Approximate String Matching, is the process of finding strings that approximatively match a given pattern.
For example, we can have two datasets with information about local municipalities:
End of explanation
"""
from fuzzywuzzy import fuzz
fuzz.ratio("NEW YORK METS","NEW YORK MEATS")
fuzz.ratio("Palmas de Gran Canaria (Las)","Palmas de Gran Canaria, Las")
"""
Explanation: The closeness of a match is often measured in terms of edit distance, which is the number of primitive operations necessary to convert the string into an exact match.
Primitive operations are usually: insertion (to insert a new character at a given position), deletion (to delete a particular character) and substitution (to replace a character with a new one).
Fuzzy String Matching can have different practical applications. Typical examples are spell-checking, text re-use detection (the politically correct way of calling plagiarism detection), spam filtering, as well as several applications in the bioinformatics domain, e.g. matching DNA sequences.
FuzzyWuzzy
The main modules in FuzzyWuzzy are called fuzz, for string-to-string comparisons, and process to compare a string with a list of strings.
Under the hood, FuzzyWuzzy uses difflib, part of the standard library, so there is nothing extra to install.
String Similarity
The simplest way to compare two strings is with a measurement of edit distance. For example, the following two strings are quite similar:
NEW YORK METS
NEW YORK MEATS
Now, according to the ratio:
Return a measure of the sequences' similarity as a float in the range [0, 1]. Where T is the total number of elements in both sequences, and M is the number of matches, this is 2.0*M / T.
End of explanation
"""
"San Millán de Yécora" == "Millán de Yécora"
fuzz.ratio("San Millán de Yécora", "Millán de Yécora")
"""
Explanation: Partial String Similarity
What to do when we want to find if two strings are simmilar, and one contains the other.
In this case the ratio will be low but if we would know how to split the bigger string, the match would be perfect. Let's see an example:
End of explanation
"""
fuzz.ratio("YANKEES", "NEW YORK YANKEES")
fuzz.ratio("NEW YORK METS", "NEW YORK YANKEES")
"""
Explanation: In fact we can have the following situation:
End of explanation
"""
fuzz.partial_ratio("San Millán de Yécora", "Millán de Yécora")
fuzz.partial_ratio("YANKEES", "NEW YORK YANKEES")
fuzz.partial_ratio("NEW YORK METS", "NEW YORK YANKEES")
"""
Explanation: partial_ratio, seeks the more appealing substring and returns its ratio
End of explanation
"""
s1 = "Las Palmas de Gran Canaria"
s2 = "Gran Canaria, Las Palmas de"
s3 = "Palmas de Gran Canaria, Las"
s4 = "Palmas de Gran Canaria, (Las)"
"""
Explanation: Out of Order
What happens if we have just the same strings but in a different order, let's have an example:
End of explanation
"""
fuzz.token_sort_ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
fuzz.ratio("Las Palmas de Gran Canaria", "Palmas de Gran Canaria Las")
"""
Explanation: FuzzyWuzzy provides two ways to deal with this situation:
Token Sort
The token sort approach involves tokenizing the string in question, sorting the tokens alphabetically, and then joining them back into a string. Then compare the transformed strings with a simple ratio(.
End of explanation
"""
t0 = ["Canaria,","de","Gran", "Palmas"]
t1 = ["Canaria,","de","Gran", "Palmas"] + ["Las"]
t2 = ["Canaria,","de","Gran", "Palmas"] + ["(Las)"]
fuzz.token_sort_ratio("Palmas de Gran Canaria, Las", "Palmas de Gran Canaria, (Las)")
"""
Explanation: Token Set
The token set approach is similar, but a little bit more flexible. Here, we tokenize both strings, but instead of immediately sorting and comparing, we split the tokens into two groups: intersection and remainder. We use those sets to build up a comparison string.
t0 = [SORTED_INTERSECTION]
t1 = [SORTED_INTERSECTION] + [SORTED_REST_OF_STRING1]
t2 = [SORTED_INTERSECTION] + [SORTED_REST_OF_STRING2]
max(ratio(t0,t1),ratio(t0,t2),ratio(t1,t2))
End of explanation
"""
mun_codes.shape
mun_codes.head()
lat_lon_mun.shape
lat_lon_mun.head()
"""
Explanation: Example
We want to merge both mun_codes and lat_lon_mun. So we have to have a good municipality name in both datasets. From that names we can do:
* Exact string matching: match these names common in both datasets
* Approximate string matching: match these names with highest similarity
Step 1: Explore datasets
End of explanation
"""
df1 = mun_codes.merge(lat_lon_mun, left_on="NOMBRE", right_on="Población",how="inner")
df1.head()
"""
Explanation: Step 2: Merge datasets
End of explanation
"""
df1["match_ratio"] = 100
"""
Explanation: Step 3: Create a new variable called match_ratio
End of explanation
"""
df2 = mun_codes.merge(df1, left_on="NOMBRE", right_on="NOMBRE", how="left")
df2.head()
df3 = df2.loc[: ,["CPRO_x","CMUN_x","DC_x","NOMBRE","match_ratio"]]
df3.rename(columns={"CPRO_x": "CPRO", "CMUN_x":"CMUN","DC_x":"DC"},inplace=True)
df3.head()
df3.loc[df3.match_ratio.isnull(),:].head()
"""
Explanation: Step 4: Merge again with original dataset and select those which have no direct match
End of explanation
"""
mun_names = lat_lon_mun["Población"].tolist()
def approx_str_compare(x):
ratio = [fuzz.ratio(x,m) for m in mun_names]
res = pd.DataFrame({"ratio" : ratio,
"name": mun_names})
return res.sort_values(by="ratio",ascending=False).iloc[0,:]
df4 = df3.loc[df3.match_ratio.isnull(),"NOMBRE"].map(approx_str_compare)
"""
Explanation: Step 5: Apply approximate string matching
End of explanation
"""
df4.map(lambda x: x["name"])
df4.map(lambda x: x["ratio"])
df6 = df3.loc[df3.match_ratio.isnull(),:]
df6["match_ratio"] = df4.map(lambda x: x["ratio"])
df6["NOMBRE"] = df4.map(lambda x: x["name"])
df6.head()
df7 = pd.concat([df3,df6])
"""
Explanation: Step 6: Concatenate results
End of explanation
"""
|
liganega/Gongsu-DataSci | ref_materials/exams/2017/A01/finalexam_20171215.ipynb | gpl-3.0 | a = np.arange(6) + np.arange(0, 51, 10)[:, np.newaxis]
a
"""
Explanation: 2017년 2학기 공학수학 기말고사
이름 :
학번 :
시험에서 사용하는 모듈 임포트 하기
import __future__ import division, print_function
import numpy as np
import pandas as pd
from datetime import datetime as dt
넘파이 어레이 인덱싱과 슬라이싱
아래 코드로 생성된 어레이를 이용하는 문제이다.
End of explanation
"""
a[3::2, :4:2]
"""
Explanation: 예를 들어, a 어레이를 이용하여 아래 모양의 어레이를 생성할 수 있다.
$$\left [ \begin{matrix} 30 & 32 \ 50 & 52 \end{matrix} \right ]$$
End of explanation
"""
a[(1, 2, 3), (2, 3, 4)]
"""
Explanation: 문제 1.
(1) a 어레이에 인덱싱과 슬라이싱을 이용하여 아래 모양의 어레이를 생성하라.
$$\left [ \begin{matrix} 11 & 13 & 15 \ 31 & 33 & 35 \ 51 & 53 & 55 \end{matrix} \right ]$$
```
.
```
(2) a 어레이에 인덱싱과 슬라이싱을 이용하여 아래 모양의 어레이를 생성하라.
$$\left [ \begin{matrix} 2 & 12 & 22 & 32 & 42 & 52 \end{matrix} \right ]$$
```
.
```
(3) 마스크 인덱싱을 사용하여 아래 결과가 나오도록 하라.
array([0, 3, 12, 15, 21, 24, 30, 33, 42, 45, 51, 54])
```
.
```
정수 인덱싱
정수 인덱싱을 사용하여 아래 결과가 나오도록 할 수 있다.
array([12, 23, 34])
End of explanation
"""
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates = [-1])
"""
Explanation: (4) 정수 인덱싱을 사용하여 아래 모양의 어레이가 나오도록 하라.
$$\left [ \begin{matrix} 30 & 32 & 35 \ 40 & 42 & 45 \ 50 & 52 & 55 \end{matrix} \right ]$$
```
.
```
데이터 분석
데이터는 다음과 같다.
미국의 51개 주(State)별 담배(식물) 도매가격 및 판매 일자: Weed_Price.csv
아래 그림은 미국의 주별 담배(식물) 판매 데이터를 담은 Weed_Price.csv 파일을 엑셀로 읽었을 때의 일부를 보여준다. 실제 데이터량은 22899개 이며, 아래 그림에는 5개의 데이터만을 보여주고 있다.
주의 : 1번줄은 테이블의 열별 목록(column names)을 담고 있다.
열별 목록 : State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date
<p>
<table cellspacing="20">
<tr>
<td>
<img src="weed_price.png", width=600>
</td>
</tr>
</table>
</p>
문제 2.
(1) 아래 코드를 설명하라.
End of explanation
"""
def getYear(x):
return x.year
year_col = prices_pd.date.apply(getYear)
prices_pd["year"] = year_col
prices_pd.tail()
"""
Explanation: ```
.
```
(2) prices_pd의 처음 10개의 데이터 출력되도록 코드를 작성하라.
```
.
```
(3) 아래의 코드를 설명하라.
End of explanation
"""
k_age = pd.read_csv("kungfu_age.csv")
"""
Explanation: ```
.
```
문제 3.
다음과 같은 데이터가 있다.
쿵푸 교실 참가자 나이 : kungfu_age.csv
아래의 그림은 쿵푸 교실 참가자의 나이를 담은 kungfu_age.csv 파일을 엑셀로 읽었을 때 일부를 보여준다.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="age.png", width=150>
</td>
</tr>
</table>
</p>
참가자들의 나이를 표로 살펴보면 아래와 같다.
|나이 | 19 | 20 | 21 | 145 | 147 |
|---|---|---|---|---|---|
| 도수 | 3 | 8 | 3 | 1 | 1 |
End of explanation
"""
k_sum = k_age['age'].sum()
k_count = k_age['age'].count()
k_sum / k_count
"""
Explanation: (1) 아래 코드를 설명하라.
End of explanation
"""
se = pd.Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
se
"""
Explanation: ```
.
```
(2) (1)의 결과가 쿵푸 교실 참자가들의 나이를 대표할 수 있는가?
이와 같은 현상이 발생하는 이유를 설명하라.
```
.
```
(3) 위와 같은 현상을 피하기 위해서 어떤 값을 대푯값으로 해야 하는지 설명하라.
```
.
```
시리즈
다음과 같은 시리즈(series)가 있다.
End of explanation
"""
se.reindex(range(6), method='nearest')
"""
Explanation: 문제 4.
아래 코드를 설명하고, 출력된 결과를 말하여라.
End of explanation
"""
pop = {'Nevada' : {2001: 2.4, 2002: 2.9},
'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
df = pd.DataFrame(pop)
df
dfT = df.T
"""
Explanation: ```
.
```
데이터 프레임
문제 5.
중첩 사전을 이용하여 데이터 프레임을 생성할 수 있다.
End of explanation
"""
df['Nevada'].iloc[0] = 1.0
"""
Explanation: 추가로 아래의 코드를 실행하자.
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
tfb = tfp.bijectors
tfd = tfp.distributions
tfk = tfp.math.psd_kernels
tf.enable_v2_behavior()
from mpl_toolkits.mplot3d import Axes3D
%pylab inline
# Configure plot defaults
plt.rcParams['axes.facecolor'] = 'white'
plt.rcParams['grid.color'] = '#666666'
%config InlineBackend.figure_format = 'png'
"""
Explanation: TensorFlow Probability의 가우시안 프로세스 회귀
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Gaussian_Process_Regression_In_TFP"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a>
</td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/probability/examples/Gaussian_Process_Regression_In_TFP.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
이 colab에서는 TensorFlow 및 TensorFlow Probability를 사용하여 가우시안 프로세스 회귀를 살펴봅니다. 알려진 일부 함수에서 노이즈가 있는 관측치를 생성하고 GP 모델을 해당 데이터에 맞춤 조정합니다. 그런 다음 GP 사후 확률에서 샘플링하고 해당 도메인의 그리드에 샘플링된 함수값을 플롯합니다.
배경 설명
$\mathcal{X}$를 임의의 집합으로 둡니다. 가우시안 프로세스(GP)는 $\mathcal{X}$로 인덱싱한 확률 함수 모음으로, ${X_1, \ldots, X_n} \subset \mathcal{X}$가 유한 부분 집합이면 한계 밀도 $p(X_1 = x_1, \ldots, X_n = x_n)$는 다변량 가우시안입니다. 모든 가우시안 분포는 첫 번째와 두 번째 중심 모멘트(평균 및 공분산)로 완전히 지정되며 GP도 예외는 아닙니다. 평균 함수 $\mu : \mathcal{X} \to \mathbb{R}$ 및 공분산 함수 $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$로 GP를 완전히 지정할 수 있습니다. GP의 표현 능력의 대부분은 선택한 공분산 함수로 캡슐화됩니다. 다양한 이유로 공분산 함수를 커널 함수라고도 합니다. 대칭 양정치면 됩니다(Rasmussen & Williams의 4장 참조). 아래에서는 ExponentiatedQuadratic 공분산 커널을 사용합니다. 형태는 다음과 같습니다.
$$ k(x, x') := \sigma^2 \exp \left( \frac{|x - x'|^2}{\lambda^2} \right) $$
여기서 $\sigma^2$는 '진폭'이고 $\lambda$는 길이 척도 입니다. 커널 매개변수는 최대 가능성 최적화 절차를 통해 선택할 수 있습니다.
GP의 전체 샘플은 전체 공간 $\mathcal{X}$에 대한 실수값 함수로 구성되며 실제로는 실현하기가 비현실적입니다. 종종 샘플을 관찰할 점집합을 선택하고 이들 지점에서 함수값을 추출합니다. 이는 적절한 유한 차원 다변량 가우시안에서 샘플링하여 달성됩니다.
위의 정의에 따르면 모든 유한 차원 다변량 가우시안 분포도 가우시안 프로세스입니다. 일반적으로 GP를 참조할 때 인덱스 세트가 일부 $\mathbb{R}^n$라는 것은 암시적이며 여기에서도 실제로 이러한 가정을 사용할 것입니다.
머신러닝에서 가우시안 프로세스의 일반적인 적용은 가우시안 프로세스 회귀입니다. 이 아이디어는 유한수의 지점 ${x_1, \ldots x_N}.$에서 함수의 노이즈가 있는 관측치 ${y_1, \ldots, y_N}$를 고려하여 알려지지 않은 함수를 추정하는 것입니다. 다음의 생성 과정을 생각해봅니다.
$$ \begin{align} f \sim : & \textsf{GaussianProcess}\left( \text{mean_fn}=\mu(x), \text{covariance_fn}=k(x, x')\right) \ y_i \sim : & \textsf{Normal}\left( \text{loc}=f(x_i), \text{scale}=\sigma\right), i = 1, \ldots, N \end{align} $$
위에서 언급했듯이 샘플링된 함수는 무한수의 지점에서 값이 필요하므로 계산이 불가능합니다. 대신 다변량 가우시안의 유한 샘플을 고려합니다.
$$ \begin{gather} \begin{bmatrix} f(x_1) \ \vdots \ f(x_N) \end{bmatrix} \sim \textsf{MultivariateNormal} \left( : \text{loc}= \begin{bmatrix} \mu(x_1) \ \vdots \ \mu(x_N) \end{bmatrix} :,: \text{scale}= \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_N) \ \vdots & \ddots & \vdots \ k(x_N, x_1) & \cdots & k(x_N, x_N) \ \end{bmatrix}^{1/2} : \right) \end{gather} \ y_i \sim \textsf{Normal} \left( \text{loc}=f(x_i), \text{scale}=\sigma \right) $$
공분산 행렬의 지수 $\frac{1}{2}$에 유의하세요. 이는 콜레스키(Cholesky) 분해를 나타냅니다. MVN은 위치 스케일 패밀리 분포이므로 콜레스키 계산이 필요합니다. 불행히도 콜레스키 분해는 $O(N^3)$ 시간과 $O(N^2)$ 공간을 차지하기 때문에 계산 비용이 많이 듭니다. GP 문헌의 대부분은 이 겉보기에 무해한 작은 지수를 다루는 데 초점을 맞추고 있습니다.
사전 확률 평균 함수를 상수(종종 0)로 사용하는 것이 일반적입니다. 일부 표기법도 편리합니다. 샘플링된 함수값의 유한 벡터에 대해 $\mathbf{f}$를 종종 작성합니다. $k$를 입력 쌍에 적용한 결과 공분산 행렬에는 여러 가지 흥미로운 표기법이 사용됩니다. (Quiñonero-Candela, 2005)에 이어 행렬의 구성 요소가 특정 입력 지점에서 함수값의 공분산이라는 점에 주목합니다. 따라서 공분산 행렬을 $K_{AB}$로 표시할 수 있습니다. 여기서 $A$ 및 $B$는 주어진 행렬 차원에 따른 함수값 모음의 일부 지표입니다.
예를 들어, 잠재 함수값 $\mathbf{f}$가 포함된 관측 데이터 $(\mathbf{x}, \mathbf{y})$가 주어지면 다음과 같이 작성할 수 있습니다.
$$ K_{\mathbf{f},\mathbf{f}} = \begin{bmatrix} k(x_1, x_1) & \cdots & k(x_1, x_N) \ \vdots & \ddots & \vdots \ k(x_N, x_1) & \cdots & k(x_N, x_N) \ \end{bmatrix} $$
이와 유사하게, 다음과 같이 입력 집합을 혼합할 수 있습니다.
$$ K_{\mathbf{f},} = \begin{bmatrix} k(x_1, x^_1) & \cdots & k(x_1, x^_T) \ \vdots & \ddots & \vdots \ k(x_N, x^_1) & \cdots & k(x_N, x^*_T) \ \end{bmatrix} $$
$N$ 훈련 입력과 $T$ 테스트 입력이 있다고 가정합니다. 위의 생성 프로세스는 다음과 같이 간결하게 작성될 수 있습니다.
$$ \begin{align} \mathbf{f} \sim : & \textsf{MultivariateNormal} \left( \text{loc}=\mathbf{0}, \text{scale}=K_{\mathbf{f},\mathbf{f}}^{1/2} \right) \ y_i \sim : & \textsf{Normal} \left( \text{loc}=f_i, \text{scale}=\sigma \right), i = 1, \ldots, N \end{align} $$
첫 번째 줄의 샘플링 연산은 위의 GP 추출 표기법에서와 같이 전체 함수가 아닌 다변량 가우시안에서 유한한 $N$ 함수값 집합을 생성합니다. 두 번째 줄은 고정된 관측 노이즈 $\sigma^2$를 사용하여 다양한 함수값을 중심으로 하는 일변량 가우시안에서 $N$ 추출 모음을 설명합니다.
위의 생성 모델을 사용하면 사후 확률 추론 문제를 고려할 수 있습니다. 이렇게 하면 위의 프로세스에서 관찰된 노이즈가 있는 데이터를 조건으로 새로운 테스트 포인트 집합에서 함수값에 대한 사후 확률 분포가 생성됩니다.
위의 표기법을 사용하면 다음과 같이 해당 입력 및 훈련 데이터를 조건으로 미래의 관찰(노이즈가 있는)에 대한 사후 확률 예측 분포를 다음과 같이 간결하게 작성할 수 있습니다(자세한 내용은 Rasmussen & Williams의 §2.2 참조).
$$ \mathbf{y}^ \mid \mathbf{x}^, \mathbf{x}, \mathbf{y} \sim \textsf{Normal} \left( \text{loc}=\mathbf{\mu}^, \text{scale}=(\Sigma^)^{1/2} \right), $$
여기서
$$ \mathbf{\mu}^ = K_{,\mathbf{f}}\left(K_{\mathbf{f},\mathbf{f}} + \sigma^2 I \right)^{-1} \mathbf{y} $$
및
$$ \Sigma^ = K_{,} - K_{,\mathbf{f}} \left(K_{\mathbf{f},\mathbf{f}} + \sigma^2 I \right)^{-1} K_{\mathbf{f},*} $$
가져오기
End of explanation
"""
def sinusoid(x):
return np.sin(3 * np.pi * x[..., 0])
def generate_1d_data(num_training_points, observation_noise_variance):
"""Generate noisy sinusoidal observations at a random set of points.
Returns:
observation_index_points, observations
"""
index_points_ = np.random.uniform(-1., 1., (num_training_points, 1))
index_points_ = index_points_.astype(np.float64)
# y = f(x) + noise
observations_ = (sinusoid(index_points_) +
np.random.normal(loc=0,
scale=np.sqrt(observation_noise_variance),
size=(num_training_points)))
return index_points_, observations_
# Generate training data with a known noise level (we'll later try to recover
# this value from the data).
NUM_TRAINING_POINTS = 100
observation_index_points_, observations_ = generate_1d_data(
num_training_points=NUM_TRAINING_POINTS,
observation_noise_variance=.1)
"""
Explanation: 예: 노이즈가 있는 사인파 데이터에 대한 정확한 GP 회귀
여기에서는 노이즈가 있는 사인파에서 훈련 데이터를 생성한 다음 GP 회귀 모델의 사후 확률에서 여러 곡선을 샘플링합니다. Adam을 사용하여 커널 하이퍼 매개변수를 최적화합니다(사전 확률 데이터의 음의 로그 가능성을 최소화합니다). 훈련 곡선과 실제 함수와 사후 확률 샘플을 플롯합니다.
End of explanation
"""
def build_gp(amplitude, length_scale, observation_noise_variance):
"""Defines the conditional dist. of GP outputs, given kernel parameters."""
# Create the covariance kernel, which will be shared between the prior (which we
# use for maximum likelihood training) and the posterior (which we use for
# posterior predictive sampling)
kernel = tfk.ExponentiatedQuadratic(amplitude, length_scale)
# Create the GP prior distribution, which we will use to train the model
# parameters.
return tfd.GaussianProcess(
kernel=kernel,
index_points=observation_index_points_,
observation_noise_variance=observation_noise_variance)
gp_joint_model = tfd.JointDistributionNamed({
'amplitude': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'length_scale': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observation_noise_variance': tfd.LogNormal(loc=0., scale=np.float64(1.)),
'observations': build_gp,
})
"""
Explanation: 커널 하이퍼 매개변수에 우선 순위를 두고 tfd.JointDistributionNamed를 사용하여 하이퍼 매개변수와 관찰된 데이터의 결합 분포를 작성합니다.
End of explanation
"""
x = gp_joint_model.sample()
lp = gp_joint_model.log_prob(x)
print("sampled {}".format(x))
print("log_prob of sample: {}".format(lp))
"""
Explanation: 사전 확률에서 샘플링할 수 있는지 확인하여 구현의 온전성 검사를 수행하고 샘플의 로그 밀도를 계산할 수 있습니다.
End of explanation
"""
# Create the trainable model parameters, which we'll subsequently optimize.
# Note that we constrain them to be strictly positive.
constrain_positive = tfb.Shift(np.finfo(np.float64).tiny)(tfb.Exp())
amplitude_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='amplitude',
dtype=np.float64)
length_scale_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='length_scale',
dtype=np.float64)
observation_noise_variance_var = tfp.util.TransformedVariable(
initial_value=1.,
bijector=constrain_positive,
name='observation_noise_variance_var',
dtype=np.float64)
trainable_variables = [v.trainable_variables[0] for v in
[amplitude_var,
length_scale_var,
observation_noise_variance_var]]
"""
Explanation: 이제 사후 확률이 가장 높은 매개변수값을 찾기 위해 최적화해 보겠습니다. 각 매개변수에 대한 변수를 정의하고 값을 양수로 제한합니다.
End of explanation
"""
def target_log_prob(amplitude, length_scale, observation_noise_variance):
return gp_joint_model.log_prob({
'amplitude': amplitude,
'length_scale': length_scale,
'observation_noise_variance': observation_noise_variance,
'observations': observations_
})
# Now we optimize the model parameters.
num_iters = 1000
optimizer = tf.optimizers.Adam(learning_rate=.01)
# Use `tf.function` to trace the loss for more efficient evaluation.
@tf.function(autograph=False, jit_compile=False)
def train_model():
with tf.GradientTape() as tape:
loss = -target_log_prob(amplitude_var, length_scale_var,
observation_noise_variance_var)
grads = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
return loss
# Store the likelihood values during training, so we can plot the progress
lls_ = np.zeros(num_iters, np.float64)
for i in range(num_iters):
loss = train_model()
lls_[i] = loss
print('Trained parameters:')
print('amplitude: {}'.format(amplitude_var._value().numpy()))
print('length_scale: {}'.format(length_scale_var._value().numpy()))
print('observation_noise_variance: {}'.format(observation_noise_variance_var._value().numpy()))
# Plot the loss evolution
plt.figure(figsize=(12, 4))
plt.plot(lls_)
plt.xlabel("Training iteration")
plt.ylabel("Log marginal likelihood")
plt.show()
# Having trained the model, we'd like to sample from the posterior conditioned
# on observations. We'd like the samples to be at points other than the training
# inputs.
predictive_index_points_ = np.linspace(-1.2, 1.2, 200, dtype=np.float64)
# Reshape to [200, 1] -- 1 is the dimensionality of the feature space.
predictive_index_points_ = predictive_index_points_[..., np.newaxis]
optimized_kernel = tfk.ExponentiatedQuadratic(amplitude_var, length_scale_var)
gprm = tfd.GaussianProcessRegressionModel(
kernel=optimized_kernel,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_var,
predictive_noise_variance=0.)
# Create op to draw 50 independent samples, each of which is a *joint* draw
# from the posterior at the predictive_index_points_. Since we have 200 input
# locations as defined above, this posterior distribution over corresponding
# function values is a 200-dimensional multivariate Gaussian distribution!
num_samples = 50
samples = gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
"""
Explanation: 관찰된 데이터에서 모델을 조정하기 위해 커널 하이퍼 매개변수(여전히 추론되어야 함)를 사용하는 target_log_prob 함수를 정의합니다.
End of explanation
"""
num_results = 100
num_burnin_steps = 50
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=target_log_prob,
step_size=tf.cast(0.1, tf.float64)),
bijector=[constrain_positive, constrain_positive, constrain_positive])
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=tf.cast(0.75, tf.float64))
initial_state = [tf.cast(x, tf.float64) for x in [1., 1., 1.]]
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=False)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=lambda current_state, kernel_results: kernel_results)
t0 = time.time()
samples, kernel_results = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
"""
Explanation: 참고: 위의 코드를 여러 번 실행하면 때로는 좋아 보이고 다른 때는 끔찍해 보입니다. 매개변수의 최대 가능성 훈련은 매우 민감하며 때로는 불량 모델로 수렴됩니다. 가장 좋은 방법은 MCMC를 사용하여 모델 하이퍼 매개변수를 주변화하는 것입니다.
HMC로 하이퍼 매개변수 주변화하기
하이퍼 매개변수를 최적화하는 대신 해밀턴 몬테카를로와 통합해 보겠습니다. 먼저 샘플러를 정의하고 실행하여 관측치를 고려하여 커널 하이퍼 매개변수에 대한 사후 확률 분포에서 대략적으로 추출합니다.
End of explanation
"""
(amplitude_samples,
length_scale_samples,
observation_noise_variance_samples) = samples
f = plt.figure(figsize=[15, 3])
for i, s in enumerate(samples):
ax = f.add_subplot(1, len(samples) + 1, i + 1)
ax.plot(s)
"""
Explanation: 하이퍼 매개변수 추적을 검사하여 샘플러에 대한 온전성 검사를 수행하겠습니다.
End of explanation
"""
# The sampled hyperparams have a leading batch dimension, `[num_results, ...]`,
# so they construct a *batch* of kernels.
batch_of_posterior_kernels = tfk.ExponentiatedQuadratic(
amplitude_samples, length_scale_samples)
# The batch of kernels creates a batch of GP predictive models, one for each
# posterior sample.
batch_gprm = tfd.GaussianProcessRegressionModel(
kernel=batch_of_posterior_kernels,
index_points=predictive_index_points_,
observation_index_points=observation_index_points_,
observations=observations_,
observation_noise_variance=observation_noise_variance_samples,
predictive_noise_variance=0.)
# To construct the marginal predictive distribution, we average with uniform
# weight over the posterior samples.
predictive_gprm = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros([num_results])),
components_distribution=batch_gprm)
num_samples = 50
samples = predictive_gprm.sample(num_samples)
# Plot the true function, observations, and posterior samples.
plt.figure(figsize=(12, 4))
plt.plot(predictive_index_points_, sinusoid(predictive_index_points_),
label='True fn')
plt.scatter(observation_index_points_[:, 0], observations_,
label='Observations')
for i in range(num_samples):
plt.plot(predictive_index_points_, samples[i, :], c='r', alpha=.1,
label='Posterior Sample' if i == 0 else None)
leg = plt.legend(loc='upper right')
for lh in leg.legendHandles:
lh.set_alpha(1)
plt.xlabel(r"Index points ($\mathbb{R}^1$)")
plt.ylabel("Observation space")
plt.show()
"""
Explanation: 이제 최적화된 하이퍼 매개변수로 단일 GP를 구성하는 대신, 사후 확률 예측 분포 GP의 혼합으로 구성합니다. 이때 각 GP는 하이퍼 매개변수에 대한 사후 확률 분포의 샘플로 정의됩니다. 몬테카를로 샘플링을 통해 사후 확률 매개변수에 대해 대략적으로 통합하여 관찰되지 않은 위치에서 주변 예측 분포를 계산합니다.
End of explanation
"""
|
buguen/pylayers | pylayers/notebooks/Simultraj_and_CorSer.ipynb | lgpl-3.0 | m
for k,lk in enumerate(llinks):
print k,lk
"""
Explanation: Set links
Simulation has an object network which contains a links dictionnary.
This dictionnary has wireless standard as key, and list of links with their type as value.
End of explanation
"""
link={'ieee802154':[]}
link['ieee802154'].append(S.N.links['ieee802154'][100])
link['ieee802154'].append(S.N.links['ieee802154'][101])
"""
Explanation: Let's select 2 link from this list:
End of explanation
"""
lt = range(0,100,10)
print lt
"""
Explanation: Set Time
Let's set timestamps where simulation will be run
End of explanation
"""
link
stemS.run(links=link,t=lt)
"""
Explanation: Run simulation
run simulation for the given links and selected timestanmps
End of explanation
"""
output=S.get_value(typ=['ak','tk'],links=link,t=lt)
"""
Explanation: Get data from simulation
End of explanation
"""
len(output['HKB:6-HKB:1']['ak'])
plot(output['HKB:6-HKB:1']['tk'][2],10*log10(output['HKB:6-HKB:1']['ak'][2]),'.')
C.dist.shape
101*60
C.tmocap
len(C.tmocap)
d16=C.dist[:,0,5]
d61=C.dist[:,5,0]
plot(d16)
plot(d61)
tak2 =[]
tt = []
for k in range(len(output['HKB:6-HKB:1']['t'])):
t = output['HKB:6-HKB:1']['t'][k]
tt.append(t)
ak = output['HKB:6-HKB:1']['ak'][k]
ak2 = sum(ak*ak)
tak2.append(10*log10(ak2))
plot(tt,tak2)
plot(C.tmocap,d61)
"""
Explanation: output is a dictionnay of dicionnary:
The first key is the considered link "node_id_a-node_id_b"
The scond key is the requested parameter among 'ak' or 'tk' , and the time 't'
The value is a list containing values of the resquested parameter.
Hence, to obtain the ak value of the link 'HKB:16', 'HKB:14' fro the 2nd timestamp , just type:
End of explanation
"""
|
shuaiyuancn/commercial-science-taster | notebooks/00 Ta-feng dataset.ipynb | apache-2.0 | import os
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
font = {'family' : 'monospace',
'weight' : 'bold',
'size' : 14}
plt.rc('font', **font)
plt.rc('figure', figsize=(18, 6))
os.chdir('../datasets/ta-feng/')
"""
Explanation: TOC
Ta-feng dataset description
Visualise the dataset
Customers and transactions forecasting with EWMA
Seasonal product recommendation and Outlier detection
Purchase prediction
Product recommendation
End of explanation
"""
header = [
'day', 'customer_id', 'age_group', 'residence_area', 'product_subclass', 'product_id', 'amount', 'asset', 'sales_price'
]
full = pd.DataFrame(columns=header)
for fn in ['D01', 'D02', 'D11', 'D12']:
full = pd.concat(
[full, pd.read_csv(fn, sep=';',
parse_dates=['day'],
index_col=False,
header=0, names=header)] # overwrite the header
)
full.head()
"""
Explanation: Ta-feng dataset
http://recsyswiki.com/wiki/Grocery_shopping_datasets
http://www.bigdatalab.ac.cn/benchmark/bm/dd?data=Ta-Feng
It contains these files
D11: Transaction data collected in November, 2000
D12: Transaction data collected in December, 2000
D01: Transaction data collected in January, 2001
D02: Transaction data collected in February, 2001
Format of Transaction Data
First line: Column definition in Traditional Chinese
Second line and the rest: data columns separated by ";"
Column definition
Transaction date and time (time invalid and useless)
Customer ID
Age, 10 possible values: A <25, B 25-29, C 30-34, D 35-39, E 40-44, F 45-49, G 50-54, H 55-59, I 60-64, J >65
Residence Area, 8 possible values: A-F: zipcode area: 105,106,110,114,115,221, G: others, H: Unknown Distance to store.
(From the closest: 115,221,114,105,106,110)
Product subclass
Product ID
Amount
Asset
Sales price
End of explanation
"""
LIMIT = 500
vc = full['product_id'].value_counts().head(LIMIT)
lables = vc.index
indices = range(len(vc))
plt.bar(
indices, vc.values, align='center'
)
plt.xlim(0, LIMIT)
"""
Explanation: Visualise the dataset
Product sales distribution show a typical long-tail distribution
End of explanation
"""
# customers by day
by_day = pd.DataFrame(
[(day, len(data['customer_id'].value_counts())) for (day, data) in full.groupby('day')],
columns=['day', 'customers']
)
# transactions by day
by_day['transactions'] = pd.Series(
[len(data) for (day, data) in full.groupby('day')]
)
by_day.index = by_day['day']
plt.figure()
by_day['customers'].plot(legend=True)
by_day['transactions'].plot(secondary_y=True, legend=True)
"""
Explanation: Customers, transactions, and total sales per day
End of explanation
"""
vc = full['age_group'].value_counts().sort_index()
plt.bar(
range(len(vc.index)), vc.values, tick_label=vc.index, align='center'
)
vc = full['residence_area'].value_counts().sort_index()
plt.bar(
range(len(vc.index)), vc.values, tick_label=vc.index, align='center'
)
"""
Explanation: Public holidays in the period:
New year: 30/12/2000 - 1/1/2001
Spring festival: 24/1 - 28/1/2001
Note:
Some thing happened from 19/12 to 25/12/2000
Spring festival increased sales
End of explanation
"""
days = []
sales_prices = []
for day, data in full.groupby('day'):
total_price = data.sum()['sales_price']
days.append(day)
sales_prices.append(total_price)
plt.plot(
days, sales_prices
)
"""
Explanation: Note:
E is the closest, then F
C is the furthest
Total sales amount
End of explanation
"""
customers_by_day = pd.DataFrame(
[(day, len(data['customer_id'].value_counts())) for (day, data) in full.groupby('day')],
columns=['day', 'customers']
)
regular = pd.ewma(customers_by_day['customers'], halflife=2)
reverse = pd.ewma(customers_by_day['customers'][::-1], halflife=2)
# the regular EWMA is not good at predicting trends
# a shortcut is to average with the reverse series
average = (regular + reverse) / 2
indices = range(len(customers_by_day))
plt.plot(
indices, customers_by_day['customers']
)
plt.plot(
indices, average, '--', label='with correction'
)
plt.plot(
indices, regular, '-.', label='without correction'
)
plt.legend()
"""
Explanation: Customers and transactions prediction
Exponentially-weighted moving average with correction
End of explanation
"""
customers_by_day['day_of_week'] = customers_by_day['day'].map(lambda day: day.dayofweek)
customers_by_day['week_of_year'] = customers_by_day['day'].map(lambda day: day.week)
customers_by_day['day_of_year'] = customers_by_day['day'].map(lambda day: day.dayofyear)
customers_by_day = pd.get_dummies(customers_by_day, columns=['day_of_week', 'week_of_year', 'day_of_year'])
customers_by_day['ewma'] = average.values
del customers_by_day['day']
"""
Explanation: Create features from day
End of explanation
"""
from sklearn import linear_model
SPLIT = int(len(customers_by_day) * .8)
for exclude in [
['customers'],
['customers', 'ewma']
]:
X = customers_by_day[[col for col in customers_by_day.columns if col not in exclude]]
Y = customers_by_day['customers']
train_x = X[:SPLIT]
train_y = Y[:SPLIT]
test_x = X[SPLIT:]
test_y = Y[SPLIT:]
clf = linear_model.LinearRegression()
clf.fit(
train_x, train_y
)
print 'without EWMA: ' if 'ewma' in exclude else 'with EWMA:'
print clf.score(test_x, test_y)
print
"""
Explanation: Compare the predictions of models with/without EWMA
End of explanation
"""
full['week'] = full['day'].map(
lambda day: day.weekofyear
)
popular_product = {}
LIMIT = 100
for week in full['week'].value_counts().index:
df = full[
full['week'] == week
]
for code, count in df['product_id'].value_counts().head(LIMIT).iteritems(): # result from value_counts() is sorted
try:
popular_product[code].append(
(week, count)
)
except KeyError:
popular_product[code] = [
(week, count)
]
"""
Explanation: still far from perfect (1.0) but great improvement with EWMA
Seasonal product recommendation and Outlier detection
Popular items on the weekly basis
End of explanation
"""
FREQ_THRESHOLD_MAX = 8
all_time_common_products = []
for code, data in popular_product.iteritems():
if len(data) > FREQ_THRESHOLD_MAX:
all_time_common_products.append(code)
"""
Explanation: Find the all-time common ones (appearing more than 8 weeks, about 50% of the period)
End of explanation
"""
FREQ_THRESHOLD_MIN = 1
least_common_products = []
for code, data in popular_product.iteritems():
if len(data) == FREQ_THRESHOLD_MIN:
least_common_products.append(code)
ax = plt.gca()
for code, data in popular_product.iteritems():
if code not in all_time_common_products and code not in least_common_products:
ax.plot(
[it[0] for it in data],
[it[1] for it in data],
'.'
)
"""
Explanation: Find the least common ones (appearing just 1 week)
End of explanation
"""
from scipy.sparse import csr_matrix
from sklearn import svm, decomposition, covariance
full['day_of_year'] = full['day'].map(
lambda x: x.dayofyear
)
full.head()
"""
Explanation: Outlier detection
PCA, then EllipticEnvelope and SVM
End of explanation
"""
y = []
x = []
for product_id, data in full.groupby('product_id'):
y.append(product_id)
x.append(
data['day_of_year'].values
)
"""
Explanation: Conver the appearance of product_id into a matrix
End of explanation
"""
idx_row = []
idx_column = []
for idx in x:
idx_row.extend(
[len(idx_row) for _ in idx]
)
idx_column.extend(
idx
)
matrix_x = csr_matrix(
(np.ones(len(idx_row)), (idx_row, idx_column))
)
dense_x = matrix_x.toarray() # PCA requires a dense matrix
pca = decomposition.PCA(n_components=2) # high dimension outlier detection is not trivial
pca_x = pca.fit_transform(dense_x)
plt.scatter(
[it[0] for it in pca_x],
[it[1] for it in pca_x],
)
"""
Explanation: Use sparse matrix for the ease of initialisation
End of explanation
"""
clf = svm.OneClassSVM()
clf.fit(pca_x)
# OneClassSVM is really a novelty detection algorithm that requires 'pure' training data
clf.get_params()
xx, yy = np.meshgrid(np.linspace(-200, 1600, 500), np.linspace(-600, 1000, 500))
Z = clf.decision_function(np.c_[
xx.ravel(), yy.ravel()
]).reshape(xx.shape)
plt.contour(xx, yy, Z, levels=[0])
# takes too long to compute the frontier, skipped
"""
Explanation: Try EllipticEnvelope and OneClassSVM, with evaluation and viz
OneClassSVM
End of explanation
"""
clf = covariance.EllipticEnvelope()
clf.fit(pca_x)
# always gives:
# ValueError: Singular covariance matrix.
# otherwise should be more robust given the amount of the data
Z = clf.decision_function(np.c_[
xx.ravel(), yy.ravel()
]).reshape(xx.shape)
plt.contour(xx, yy, Z, levels=[0])
# cannot create the model, skipped
"""
Explanation: EllipticEnvelope
End of explanation
"""
from functools import partial
from rosetta.parallel.parallel_easy import imap_easy
import itertools
N_JOBS = 8 # 8 cores
CHUNKSIZE = int(len(full) / N_JOBS) # evenly distribute data among cores
def merge_customer_columns(data):
idx, row = data
return '{:.0f}_{}_{}'.format(row['customer_id'], row['age_group'].strip(), row['residence_area'].strip())
def merge_product_columns(data):
idx, row = data
return '{:.0f}_{:.0f}_{:.0f}_{:.0f}'.format(
row['product_subclass'], row['product_id'], row['asset'] / row['amount'], row['sales_price'] / row['amount']
)
full['customer'] = pd.Series(
imap_easy(
# ordered=False to avoid extra shuffle
partial(merge_customer_columns), full.iterrows(), n_jobs=N_JOBS, chunksize=CHUNKSIZE, ordered=False
)
)
full['product'] = pd.Series(
imap_easy(
partial(merge_product_columns), full.iterrows(), n_jobs=N_JOBS, chunksize=CHUNKSIZE, ordered=False
)
)
days = full['day'].value_counts().index
products = full['product'].value_counts().index
def get_a_random_row():
return np.random.choice(days), np.random.choice(products)
# the usage of `full` stops here. persist for easier access in future
full.to_csv('full_product_customer_merged.csv', index=False)
"""
Explanation: Purchase prediction
The goal is to build a purchase prediction engine.
The dataset only contains positive samples. We could randomly generate the same number of negative samples (not bought). We want to make sure that for each customer, we have a balanced dataset.
End of explanation
"""
def ravel_amount(data):
idx, row = data
return [
{'day': row['day'], 'product': row['product'], 'customer': row['customer'], 'buy': 1}
for _ in range(int(row['amount'])-1)
]
flat_full = pd.DataFrame(
# DataFrame doesn't allow passing in a generator
# this may not fit into the memory for larger datasets
list(itertools.chain.from_iterable(imap_easy(
partial(ravel_amount), full.iterrows(), n_jobs=N_JOBS, chunksize=CHUNKSIZE, ordered=False
)))
)
"""
Explanation: Positive samples, Ravel amount to multiple rows
End of explanation
"""
def generate_negative_samples_for_customer(data):
customer, row = data
ret = []
# generate the same amount of negative samples as positive ones
# don't iterate over row directly --- that will iterate columns, not rows
for _ in range(len(row)):
day, product = get_a_random_row()
ret.append({'day': day, 'product': product, 'customer': customer, 'buy': 0})
return ret
flat_full = pd.concat([
flat_full, pd.DataFrame(
list(itertools.chain.from_iterable(
imap_easy(generate_negative_samples_for_customer, flat_full.groupby('customer'), n_jobs=N_JOBS, chunksize=CHUNKSIZE)
))
)
], axis=0)
flat_full['buy'].value_counts() # make sure the balance
"""
Explanation: Negative samples (ignore the case that we randomly generate a positive sample)
End of explanation
"""
CHUNKSIZE = int(len(flat_full) / N_JOBS)
def split_and_get(data, idx, sep='_'):
return data[1].split(sep)[idx]
for idx, key in enumerate(['product_subclass', 'product_id', 'asset', 'sales_price']):
flat_full[key] = pd.Series(
imap_easy(partial(split_and_get, idx=idx), flat_full['product'].iteritems(), N_JOBS, CHUNKSIZE)
)
for idx, key in enumerate(['customer_id', 'age_group', 'residence_area']):
flat_full[key] = pd.Series(
imap_easy(partial(split_and_get, idx=idx), flat_full['customer'].iteritems(), N_JOBS, CHUNKSIZE)
)
"""
Explanation: Split so we can build features
End of explanation
"""
# should be fast enough, skip the parallelism
flat_full['week_of_year'] = flat_full['day'].map(lambda x: x.week)
flat_full['day_of_week'] = flat_full['day'].map(lambda x: x.dayofweek)
flat_full['day_of_year'] = flat_full['day'].map(lambda x: x.dayofyear)
flat_full.to_csv('flat_full_with_basic_transformation.csv', index=False)
"""
Explanation: Basic transformation on day only
End of explanation
"""
from sklearn.feature_extraction import DictVectorizer
enc = DictVectorizer()
# be specific about features if possible
feature_columns = [
'customer', # keep the 2nd level features
'customer_id', 'age_group', 'residence_area',
'product',
'product_subclass', 'product_id', 'asset', 'sales_price',
'week_of_year', 'day_of_week', 'day_of_year', 'day'
]
x = enc.fit_transform(flat_full[feature_columns].to_dict(orient='records'))
feature_names = enc.get_feature_names()
y = flat_full['buy'].values
"""
Explanation: Build features
End of explanation
"""
# use a few basic models to demonstrate the idea
# cv need to be added
from sklearn.linear_model import LogisticRegression, Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
x, y = shuffle(x, y)
SPLIT = int(x.shape[0] * .8)
train_x = x[:SPLIT]
train_y = y[:SPLIT]
test_x = x[SPLIT:]
test_y = y[SPLIT:]
for name, clf in [
('LogisticRegression', LogisticRegression()),
('Lasso', Lasso()),
('RandomForest', RandomForestClassifier())
]:
clf.fit(train_x, train_y)
print name
print clf.score(test_x, test_y)
print
"""
Explanation: Simple test on shuffles
End of explanation
"""
customer_bought_products = full[['customer_id', 'product_id', 'amount']]
"""
Explanation: Product recommendation
User based
End of explanation
"""
aggregated = customer_bought_products.groupby(['customer_id', 'product_id']).sum()
aggregated.reset_index(inplace=True)
def concat_product_amount(data):
idx, row = data
return '{:.0f}_{:.0f}'.format(row['product_id'], row['amount'])
aggregated['product_amount'] = pd.Series(
imap_easy(concat_product_amount, aggregated[['product_id', 'amount']].iterrows(), N_JOBS, CHUNKSIZE)
)
del aggregated['product_id']
del aggregated['amount']
aggregated.head()
"""
Explanation: Ignore the recency and aggregate the purchase history
End of explanation
"""
enc = DictVectorizer()
x = enc.fit_transform(aggregated[['product_amount', ]].to_dict(orient='record'))
feature_names = enc.get_feature_names()
x = pd.SparseDataFrame(
[pd.SparseSeries(x[idx].toarray().ravel()) for idx in np.arange(x.shape[0])],
columns=feature_names
)
x['customer_id'] = aggregated['customer_id']
x = x.groupby('customer_id').sum()
"""
Explanation: Build user vectors
End of explanation
"""
from sklearn.neighbors import NearestNeighbors
nbrs = NearestNeighbors(n_neighbors=4).fit(x)
"""
Explanation: Build NearestNeighbors model
End of explanation
"""
|
shareactorIO/pipeline | source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/Fundamentals/Basic-ConvolutionNeuralNetworks-ImageProcessing.ipynb | apache-2.0 | # TODO: @Sam, please fill this in...
import tensorflow as tf
import numpy as np
from IPython.display import clear_output, Image, display, HTML
# Note: All datasets are available here: /root/pipeline/datasets/...
# Modules required for file download and extraction
import os
import sys
import tarfile
from six.moves.urllib.request import urlretrieve
def maybe_download(filename, url, force=False):
"""Download a file if not present."""
if force or not os.path.exists('/root/pipeline/datasets/notmnist/' + filename):
filename, _ = urlretrieve(url + filename, '/root/pipeline/datasets/notmnist/' + filename)
print('\nDownload complete for {}'.format(filename))
else:
print('File {} already present.'.format(filename))
return filename
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('{} already present - Skipping extraction of {}.'.format(root, filename))
else:
print('Extracting data for {}. This may take a while. Please wait.'.format(root))
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
print(data_folders)
return data_folders
#maybe_extract('/root/pipeline/datasets/notmnist/notMNIST_large.tar.gz')
#maybe_extract('/root/pipeline/datasets/notmnist/notMNIST_small.tar.gz')
ll /root/pipeline/datasets
# Prepare input for the format expected by the graph
t_input = tf.placeholder(np.float32, name='our_input') # define the input tensor
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)
# Load graph and import into graph used by our session
model_fn = '/root/pipeline/datasets/inception/classify_image_graph_def.pb'
graph_def = tf.GraphDef.FromString(open(model_fn).read())
tf.import_graph_def(graph_def, {'input':t_preprocessed})
"""
Explanation: Basic Convolutional Neural Network: Image Processing
End of explanation
"""
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
#show_graph(tmp_def)
"""
Explanation: Utility Functions to Display TensorBoard Inline
End of explanation
"""
k = np.float32([1,4,6,4,1])
k = np.outer(k, k)
k5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)
def lap_split(img):
'''Split the image into lo and hi frequency components'''
with tf.name_scope('split'):
lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME') # Blurred image -- low frequencies only
lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])
hi = img-lo2 # hi is img with low frequencies removed
return lo, hi
def lap_split_n(img, n):
'''Build Laplacian pyramid with n splits'''
levels = []
for i in xrange(n):
img, hi = lap_split(img)
levels.append(hi)
levels.append(img)
return levels[::-1] # List of images with lower and lower frequencies
def lap_merge(levels):
'''Merge Laplacian pyramid'''
img = levels[0]
for hi in levels[1:]:
with tf.name_scope('merge'):
img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi
return img # Reconstructed image, all frequencies added back together
def normalize_std(img, eps=1e-10):
'''Normalize image by making its standard deviation = 1.0'''
with tf.name_scope('normalize'):
std = tf.sqrt(tf.reduce_mean(tf.square(img)))
return img/tf.maximum(std, eps)
def lap_normalize(img, scale_n=4):
'''Perform the Laplacian pyramid normalization.'''
img = tf.expand_dims(img,0)
tlevels = lap_split_n(img, scale_n) # Split into frequencies
tlevels = map(normalize_std, tlevels) # Normalize each frequency band
out = lap_merge(tlevels) # Put image back together
return out[0,:,:,:]
# Showing the lap_normalize graph with TensorBoard
lap_graph = tf.Graph()
with lap_graph.as_default():
lap_in = tf.placeholder(np.float32, name='lap_in')
lap_out = lap_normalize(lap_in)
show_graph(lap_graph)
"""
Explanation: Irrelevant code - only placed here to demo Inline TensorBoard
End of explanation
"""
|
VVard0g/ThreatHunter-Playbook | docs/notebooks/windows/02_execution/WIN-190813181020.ipynb | mit | from openhunt.mordorutils import *
spark = get_spark()
"""
Explanation: Service Creation
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/13 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be creating new services to execute code on a compromised endpoint in my environment
Technical Context
None
Offensive Tradecraft
Adversaries may execute a binary, command, or script via a method that interacts with Windows services, such as the Service Control Manager.
This can be done by by adversaries creating a new service.
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/lateral_movement/SDWIN-190518210652.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip |
Analytics
Initialize Analytics Engine
End of explanation
"""
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/lateral_movement/host/empire_psexec_dcerpc_tcp_svcctl.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
"""
Explanation: Download & Process Security Dataset
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName ServiceName, ServiceType, ServiceStartType, ServiceAccount
FROM sdTable
WHERE LOWER(Channel) = "security" AND EventID = 4697
'''
)
df.show(10,False)
"""
Explanation: Analytic I
Look for new services being created in your environment and stack the values of it
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Service | Microsoft-Windows-Security-Auditing | User created Service | 4697 |
End of explanation
"""
|
patrick-kidger/diffrax | examples/continuous_normalising_flow.ipynb | apache-2.0 | import math
import os
import pathlib
import time
from typing import List, Tuple
import diffrax
import equinox as eqx # https://github.com/patrick-kidger/equinox
import imageio
import jax
import jax.lax as lax
import jax.nn as jnn
import jax.numpy as jnp
import jax.random as jrandom
import matplotlib.pyplot as plt
import optax # https://github.com/deepmind/optax
import scipy.stats as stats
here = pathlib.Path(os.getcwd())
"""
Explanation: Continuous Normalising Flow
This example is a bit of fun! It constructs a continuous normalising flow (CNF) to learn a distribution specified by a (greyscale) image. That is, the target distribution is over $\mathbb{R}^2$, and the image specifies the (unnormalised) density at each point.
You can specify your own images, and learn your own flows.
Some example outputs from this script:
Reference:
bibtex
@article{grathwohl2019ffjord,
title={{FFJORD}: {F}ree-form {C}ontinuous {D}ynamics for {S}calable {R}eversible
{G}enerative {M}odels},
author={Grathwohl, Will and Chen, Ricky T. Q. and Bettencourt, Jesse and
Sutskever, Ilya and Duvenaud, David},
journal={International Conference on Learning Representations},
year={2019},
}
This example is available as a Jupyter notebook here.
!!! warning
This example will need a GPU to run efficiently.
!!! danger "Advanced example"
This is a pretty advanced example.
End of explanation
"""
class Func(eqx.Module):
layers: List[eqx.nn.Linear]
def __init__(self, *, data_size, width_size, depth, key, **kwargs):
super().__init__(**kwargs)
keys = jrandom.split(key, depth + 1)
layers = []
if depth == 0:
layers.append(
ConcatSquash(in_size=data_size, out_size=data_size, key=keys[0])
)
else:
layers.append(
ConcatSquash(in_size=data_size, out_size=width_size, key=keys[0])
)
for i in range(depth - 1):
layers.append(
ConcatSquash(
in_size=width_size, out_size=width_size, key=keys[i + 1]
)
)
layers.append(
ConcatSquash(in_size=width_size, out_size=data_size, key=keys[-1])
)
self.layers = layers
def __call__(self, t, y, args):
t = jnp.asarray(t)[None]
for layer in self.layers[:-1]:
y = layer(t, y)
y = jnn.tanh(y)
y = self.layers[-1](t, y)
return y
# Credit: this layer, and some of the default hyperparameters below, are taken from the
# FFJORD repo.
class ConcatSquash(eqx.Module):
lin1: eqx.nn.Linear
lin2: eqx.nn.Linear
lin3: eqx.nn.Linear
def __init__(self, *, in_size, out_size, key, **kwargs):
super().__init__(**kwargs)
key1, key2, key3 = jrandom.split(key, 3)
self.lin1 = eqx.nn.Linear(in_size, out_size, key=key1)
self.lin2 = eqx.nn.Linear(1, out_size, key=key2)
self.lin3 = eqx.nn.Linear(1, out_size, use_bias=False, key=key3)
def __call__(self, t, y):
return self.lin1(y) * jnn.sigmoid(self.lin2(t)) + self.lin3(t)
"""
Explanation: First let's define some vector fields. This is basically just an MLP, using tanh as the activation function and "ConcatSquash" instead of linear layers.
This is the vector field on the right hand side of the ODE.
End of explanation
"""
def approx_logp_wrapper(t, y, args):
y, _ = y
*args, eps, func = args
fn = lambda y: func(t, y, args)
f, vjp_fn = jax.vjp(fn, y)
(eps_dfdy,) = vjp_fn(eps)
logp = jnp.sum(eps_dfdy * eps)
return f, logp
def exact_logp_wrapper(t, y, args):
y, _ = y
*args, _, func = args
fn = lambda y: func(t, y, args)
f, vjp_fn = jax.vjp(fn, y)
(size,) = y.shape # this implementation only works for 1D input
eye = jnp.eye(size)
(dfdy,) = jax.vmap(vjp_fn)(eye)
logp = jnp.trace(dfdy)
return f, logp
"""
Explanation: When training, we need to wrap our vector fields in something that also computes the change in log-density.
This can be done either approximately (using Hutchinson's trace estimator) or exactly (the divergence of the vector field; relatively computationally expensive).
End of explanation
"""
def normal_log_likelihood(y):
return -0.5 * (y.size * math.log(2 * math.pi) + jnp.sum(y**2))
class CNF(eqx.Module):
funcs: List[Func]
data_size: int
exact_logp: bool
t0: float
t1: float
dt0: float
def __init__(
self,
*,
data_size,
exact_logp,
num_blocks,
width_size,
depth,
key,
**kwargs,
):
super().__init__(**kwargs)
keys = jrandom.split(key, num_blocks)
self.funcs = [
Func(
data_size=data_size,
width_size=width_size,
depth=depth,
key=k,
)
for k in keys
]
self.data_size = data_size
self.exact_logp = exact_logp
self.t0 = 0.0
self.t1 = 0.5
self.dt0 = 0.05
# Runs backward-in-time to train the CNF.
def train(self, y, *, key):
if self.exact_logp:
term = diffrax.ODETerm(exact_logp_wrapper)
else:
term = diffrax.ODETerm(approx_logp_wrapper)
solver = diffrax.Tsit5()
eps = jrandom.normal(key, y.shape)
delta_log_likelihood = 0.0
for func in reversed(self.funcs):
y = (y, delta_log_likelihood)
sol = diffrax.diffeqsolve(
term, solver, self.t1, self.t0, -self.dt0, y, (eps, func)
)
(y,), (delta_log_likelihood,) = sol.ys
return delta_log_likelihood + normal_log_likelihood(y)
# Runs forward-in-time to draw samples from the CNF.
def sample(self, *, key):
y = jrandom.normal(key, (self.data_size,))
for func in self.funcs:
term = diffrax.ODETerm(func)
solver = diffrax.Tsit5()
sol = diffrax.diffeqsolve(term, solver, self.t0, self.t1, self.dt0, y)
(y,) = sol.ys
return y
# To make illustrations, we have a variant sample method we can query to see the
# evolution of the samples during the forward solve.
def sample_flow(self, *, key):
t_so_far = self.t0
t_end = self.t0 + (self.t1 - self.t0) * len(self.funcs)
save_times = jnp.linspace(self.t0, t_end, 6)
y = jrandom.normal(key, (self.data_size,))
out = []
for i, func in enumerate(self.funcs):
if i == len(self.funcs) - 1:
save_ts = save_times[t_so_far <= save_times] - t_so_far
else:
save_ts = (
save_times[
(t_so_far <= save_times)
& (save_times < t_so_far + self.t1 - self.t0)
]
- t_so_far
)
t_so_far = t_so_far + self.t1 - self.t0
term = diffrax.ODETerm(func)
solver = diffrax.Tsit5()
saveat = diffrax.SaveAt(ts=save_ts)
sol = diffrax.diffeqsolve(
term, solver, self.t0, self.t1, self.dt0, y, saveat=saveat
)
out.append(sol.ys)
y = sol.ys[-1]
out = jnp.concatenate(out)
assert len(out) == 6 # number of points we saved at
return out
"""
Explanation: Wrap up the differential equation solve into a model.
End of explanation
"""
def get_data(path):
# integer array of shape (height, width, channels) with values in {0, ..., 255}
img = jnp.asarray(imageio.imread(path))
if img.shape[-1] == 4:
img = img[..., :-1] # ignore alpha channel
height, width, channels = img.shape
assert channels == 3
# Convert to greyscale for simplicity.
img = img @ jnp.array([0.2989, 0.5870, 0.1140])
img = jnp.transpose(img)[:, ::-1] # (width, height)
x = jnp.arange(width, dtype=jnp.float32)
y = jnp.arange(height, dtype=jnp.float32)
x, y = jnp.broadcast_arrays(x[:, None], y[None, :])
weights = 1 - img.reshape(-1).astype(jnp.float32) / jnp.max(img)
dataset = jnp.stack(
[x.reshape(-1), y.reshape(-1)], axis=-1
) # shape (dataset_size, 2)
# For efficiency we don't bother with the particles that will have weight zero.
cond = img.reshape(-1) < 254
dataset = dataset[cond]
weights = weights[cond]
mean = jnp.mean(dataset, axis=0)
std = jnp.std(dataset, axis=0) + 1e-6
dataset = (dataset - mean) / std
return dataset, weights, mean, std, img, width, height
"""
Explanation: Alright, that's the models done. Now let's get some data.
First we have a function for taking the specified input image, and turning it into data.
End of explanation
"""
class DataLoader(eqx.Module):
arrays: Tuple[jnp.ndarray]
batch_size: int
key: jrandom.PRNGKey
def __post_init__(self):
dataset_size = self.arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in self.arrays)
def __call__(self, step):
dataset_size = self.arrays[0].shape[0]
num_batches = dataset_size // self.batch_size
epoch = step // num_batches
key = jrandom.fold_in(self.key, epoch)
perm = jrandom.permutation(key, jnp.arange(dataset_size))
start = step * self.batch_size
slice_size = self.batch_size
batch_indices = lax.dynamic_slice_in_dim(perm, start, slice_size)
return tuple(array[batch_indices] for array in self.arrays)
"""
Explanation: Now to load the data during training, we need a dataloader. In this case our dataset is small enough to fit in-memory, so we use a dataloader implementation that we can include within our overall JIT wrapper, for speed.
End of explanation
"""
def main(
in_path,
out_path=None,
batch_size=500,
virtual_batches=2,
lr=1e-3,
weight_decay=1e-5,
steps=10000,
exact_logp=True,
num_blocks=2,
width_size=64,
depth=3,
print_every=100,
seed=5678,
):
if out_path is None:
out_path = here / pathlib.Path(in_path).name
else:
out_path = pathlib.Path(out_path)
key = jrandom.PRNGKey(seed)
model_key, loader_key, loss_key, sample_key = jrandom.split(key, 4)
dataset, weights, mean, std, img, width, height = get_data(in_path)
dataset_size, data_size = dataset.shape
dataloader = DataLoader((dataset, weights), batch_size, key=loader_key)
model = CNF(
data_size=data_size,
exact_logp=exact_logp,
num_blocks=num_blocks,
width_size=width_size,
depth=depth,
key=model_key,
)
optim = optax.adamw(lr, weight_decay=weight_decay)
opt_state = optim.init(eqx.filter(model, eqx.is_inexact_array))
@eqx.filter_value_and_grad
def loss(model, data, weight, loss_key):
batch_size, _ = data.shape
noise_key, train_key = jrandom.split(loss_key, 2)
train_key = jrandom.split(key, batch_size)
data = data + jrandom.normal(noise_key, data.shape) * 0.5 / std
log_likelihood = jax.vmap(model.train)(data, key=train_key)
return -jnp.mean(weight * log_likelihood) # minimise negative log-likelihood
@eqx.filter_jit
def make_step(model, opt_state, step, loss_key):
# We only need gradients with respect to floating point JAX arrays, not any
# other part of our model. (e.g. the `exact_logp` flag. What would it even mean
# to differentiate that? Note that `eqx.filter_value_and_grad` does the same
# filtering by `eqx.is_inexact_array` by default.)
value = 0
grads = jax.tree_map(
lambda leaf: jnp.zeros_like(leaf) if eqx.is_inexact_array(leaf) else None,
model,
)
# Get more accurate gradients by accumulating gradients over multiple batches.
# (Or equivalently, get lower memory requirements by splitting up a batch over
# multiple steps.)
def make_virtual_step(_, state):
value, grads, step, loss_key = state
data, weight = dataloader(step)
value_, grads_ = loss(model, data, weight, loss_key)
value = value + value_
grads = jax.tree_map(lambda a, b: a + b, grads, grads_)
step = step + 1
loss_key = jrandom.split(loss_key, 1)[0]
return value, grads, step, loss_key
value, grads, step, loss_key = lax.fori_loop(
0, virtual_batches, make_virtual_step, (value, grads, step, loss_key)
)
value = value / virtual_batches
grads = jax.tree_map(lambda a: a / virtual_batches, grads)
updates, opt_state = optim.update(grads, opt_state, model)
model = eqx.apply_updates(model, updates)
return value, model, opt_state, step, loss_key
step = 0
while step < steps:
start = time.time()
value, model, opt_state, step, loss_key = make_step(
model, opt_state, step, loss_key
)
end = time.time()
if (step % print_every) == 0 or step == steps - 1:
print(f"Step: {step}, Loss: {value}, Computation time: {end - start}")
num_samples = 5000
sample_key = jrandom.split(sample_key, num_samples)
samples = jax.vmap(model.sample)(key=sample_key)
sample_flows = jax.vmap(model.sample_flow, out_axes=-1)(key=sample_key)
fig, (*axs, ax, axtrue) = plt.subplots(
1,
2 + len(sample_flows),
figsize=((2 + len(sample_flows)) * 10 * height / width, 10),
)
samples = samples * std + mean
x = samples[:, 0]
y = samples[:, 1]
ax.scatter(x, y, c="black", s=2)
ax.set_xlim(-0.5, width - 0.5)
ax.set_ylim(-0.5, height - 0.5)
ax.set_aspect(height / width)
ax.set_xticks([])
ax.set_yticks([])
axtrue.imshow(img.T, origin="lower", cmap="gray")
axtrue.set_aspect(height / width)
axtrue.set_xticks([])
axtrue.set_yticks([])
x_resolution = 100
y_resolution = int(x_resolution * (height / width))
sample_flows = sample_flows * std[:, None] + mean[:, None]
x_pos, y_pos = jnp.broadcast_arrays(
jnp.linspace(-1, width + 1, x_resolution)[:, None],
jnp.linspace(-1, height + 1, y_resolution)[None, :],
)
positions = jnp.stack([jnp.ravel(x_pos), jnp.ravel(y_pos)])
densities = [stats.gaussian_kde(samples)(positions) for samples in sample_flows]
for i, (ax, density) in enumerate(zip(axs, densities)):
density = jnp.reshape(density, (x_resolution, y_resolution))
ax.imshow(density.T, origin="lower", cmap="plasma")
ax.set_aspect(height / width)
ax.set_xticks([])
ax.set_yticks([])
plt.savefig(out_path)
plt.show()
"""
Explanation: Bring everything together. This function is our entry point.
End of explanation
"""
main(in_path="../imgs/cat.png")
"""
Explanation: And now the following commands will reproduce the images displayed at the start.
python
main(in_path="../imgs/cat.png")
main(in_path="../imgs/butterfly.png", num_blocks=3)
main(in_path="../imgs/target.png", width_size=128)
Let's run the first one of those as a demonstration.
End of explanation
"""
|
ajayrfhp/dvd | examples/MNIST.ipynb | mit | import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)
img = mnist.train.images[123]
img = np.reshape(img,(28,28))
plt.imshow(img, cmap = 'gray')
plt.show()
img = np.reshape(img,(28,28,1))
print img.shape, 'label = ', mnist.train.labels[123]
from dvd import dvd
img_embedding = dvd.get_embedding_x(img)
print img_embedding.shape
"""
Explanation: MNIST with transfer learning.
First let us build a MNIST logistic regression classifier.
We will then get better feature embeddings for images by using dvd library. This involves transfer learning.
We will compare simple classifier with transfer learnt model for accuracy score.
End of explanation
"""
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(mnist.train.images, mnist.train.labels)
preds = clf.predict(mnist.test.images)
print accuracy_score(preds, mnist.test.labels)
"""
Explanation: Simple logistic Regression
End of explanation
"""
train = np.reshape(mnist.train.images, (mnist.train.images.shape[0],28,28))
print 'initial training shape = ', train.shape
train = dvd.get_embedding_X(train)
print 'training shape after embedding =', train.shape
test = np.reshape(mnist.test.images, (mnist.test.images.shape[0],28,28))
test = dvd.get_embedding_X(test)
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(train, mnist.train.labels)
preds = clf.predict(test)
print accuracy_score(preds, mnist.test.labels)
"""
Explanation: Lets get VGG embeddings for train and test input images and convert them to transfer learnt space.
End of explanation
"""
|
JasonSanchez/w261 | week3/MIDS-W261-HW-03-Sanchez.ipynb | mit | %%writefile ComplaintDistribution.py
from mrjob.job import MRJob
class ComplaintDistribution(MRJob):
def mapper(self, _, lines):
line = lines[:30]
if "Debt collection" in line:
self.increment_counter('Complaint', 'Debt collection', 1)
elif "Mortgage" in line:
self.increment_counter('Complaint', 'Mortgage', 1)
else:
self.increment_counter('Complaint', 'Other', 1)
if __name__ == "__main__":
ComplaintDistribution.run()
%%time
!python ComplaintDistribution.py Temp_data/Consumer_Complaints.csv
"""
Explanation: DATSCIW261 ASSIGNMENT
Version 2016-01-27 (FINAL)
Week 3 ASSIGNMENTS
Jason Sanchez - Group 2
HW3.0.
How do you merge two sorted lists/arrays of records of the form [key, value]?
Merge sort.
Where is this used in Hadoop MapReduce? [Hint within the shuffle]
It is used after files are spilled to disk from the circular buffer before the map-side combiner is run as well as after the data is partitioned during the "Hadoop shuffle" and before it is fed into the reduce-side combiner.
What is a combiner function in the context of Hadoop?
Combiners improve the speed of MapReduce jobs. Map-side combiners can reduce the amount of data that needs to be transferred over the network by acting as a simplified reducer. Also, long running map jobs block the merge sort that is part of the "Hadoop shuffle" phase. A map-side combiner can run on all of the data that has been processed by the mappers before being blocked by the merge-sort.
Give an example where it can be used and justify why it should be used in the context of this problem.
Word count. Greatly reduce data needed to transfer of the network by combining key-value pairs.
What is the Hadoop shuffle?
Partition --> Merge sort --> Pass to reduce-side combiner (or directly to reducer)
HW3.1 consumer complaints dataset: Use Counters to do EDA (exploratory data analysis and to monitor progress)
Counters are lightweight objects in Hadoop that allow you to keep track of system progress in both the map and reduce stages of processing. By default, Hadoop defines a number of standard counters in "groups"; these show up in the jobtracker webapp, giving you information such as "Map input records", "Map output records", etc.
While processing information/data using MapReduce job, it is a challenge to monitor the progress of parallel threads running across nodes of distributed clusters. Moreover, it is also complicated to distinguish between the data that has been processed and the data which is yet to be processed. The MapReduce Framework offers a provision of user-defined Counters, which can be effectively utilized to monitor the progress of data across nodes of distributed clusters.
Use the Consumer Complaints Dataset provide here to complete this question:
https://www.dropbox.com/s/vbalm3yva2rr86m/Consumer_Complaints.csv?dl=0
The consumer complaints dataset consists of diverse consumer complaints, which have been reported across the United States regarding various types of loans. The dataset consists of records of the form:
Complaint ID,Product,Sub-product,Issue,Sub-issue,State,ZIP code,Submitted via,Date received,Date sent to company,Company,Company response,Timely response?,Consumer disputed?
Here’s is the first few lines of the of the Consumer Complaints Dataset:
Complaint ID,Product,Sub-product,Issue,Sub-issue,State,ZIP code,Submitted via,Date received,Date sent to company,Company,Company response,Timely response?,Consumer disputed?
1114245,Debt collection,Medical,Disclosure verification of debt,Not given enough info to verify debt,FL,32219,Web,11/13/2014,11/13/2014,"Choice Recovery, Inc.",Closed with explanation,Yes,
1114488,Debt collection,Medical,Disclosure verification of debt,Right to dispute notice not received,TX,75006,Web,11/13/2014,11/13/2014,"Expert Global Solutions, Inc.",In progress,Yes,
1114255,Bank account or service,Checking account,Deposits and withdrawals,,NY,11102,Web,11/13/2014,11/13/2014,"FNIS (Fidelity National Information Services, Inc.)",In progress,Yes,
1115106,Debt collection,"Other (phone, health club, etc.)",Communication tactics,Frequent or repeated calls,GA,31721,Web,11/13/2014,11/13/2014,"Expert Global Solutions, Inc.",In progress,Yes,
User-defined Counters
Now, let’s use Hadoop Counters to identify the number of complaints pertaining to debt collection, mortgage and other categories (all other categories get lumped into this one) in the consumer complaints dataset. Basically produce the distribution of the Product column in this dataset using counters (limited to 3 counters here).
Hadoop offers Job Tracker, an UI tool to determine the status and statistics of all jobs. Using the job tracker UI, developers can view the Counters that have been created. Screenshot your job tracker UI as your job completes and include it here. Make sure that your user defined counters are visible.
Presumes you have downloaded the file and put it in the "Temp_data" folder.
End of explanation
"""
%%writefile SimpleCounters.py
from mrjob.job import MRJob
class SimpleCounters(MRJob):
def mapper_init(self):
self.increment_counter("Mappers", "Count", 1)
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
for word in lines.split():
yield (word, 1)
def reducer_init(self):
self.increment_counter("Reducers", "Count", 1)
def reducer(self, word, count):
self.increment_counter("Reducers", "Tasks", 1)
yield (word, sum(count))
if __name__ == "__main__":
SimpleCounters.run()
!echo "foo foo quux labs foo bar quux" | python SimpleCounters.py --jobconf mapred.map.tasks=2 --jobconf mapred.reduce.tasks=2
"""
Explanation: HW 3.2 Analyze the performance of your Mappers, Combiners and Reducers using Counters
For this brief study the Input file will be one record (the next line only):
foo foo quux labs foo bar quux
Perform a word count analysis of this single record dataset using a Mapper and Reducer based WordCount (i.e., no combiners are used here) using user defined Counters to count up how many time the mapper and reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing this word count job. The answer should be 1 and 4 respectively. Please explain.
End of explanation
"""
%%writefile IssueCounter.py
from mrjob.job import MRJob
import csv
import sys
class IssueCounter(MRJob):
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
terms = list(csv.reader([lines]))[0]
yield (terms[3], 1)
def reducer(self, word, count):
self.increment_counter("Reducers", "Tasks", 1)
self.increment_counter("Reducers", "Lines processed", len(list(count)))
yield (word, sum(count))
if __name__ == "__main__":
IssueCounter.run()
!cat Temp_data/Consumer_Complaints.csv | python IssueCounter.py | head -n 1
"""
Explanation: Please use multiple mappers and reducers for these jobs (at least 2 mappers and 2 reducers).
Perform a word count analysis of the Issue column of the Consumer Complaints Dataset using a Mapper and Reducer based WordCount (i.e., no combiners used anywhere) using user defined Counters to count up how many time the mapper and reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing your word count job.
End of explanation
"""
# We can easily confirm the first hypothesis
!wc -l Temp_data/Consumer_Complaints.csv
"""
Explanation: Mapper tasks = 312913. The mapper was called this many times because that is how many lines there are in the file.
Reducer tasks = 80. The reducer was called this many times because that is how many unique issues there are in the file.
Reducer lines processed = 312913. The reducer was passed all of the data from the mappers.
End of explanation
"""
%%writefile IssueCounterCombiner.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import csv
import sys
class IssueCounterCombiner(MRJob):
def mapper(self, _, lines):
self.increment_counter("Mappers", "Tasks", 1)
terms = list(csv.reader([lines]))[0]
yield (terms[3], 1)
def combiner(self, word, count):
self.increment_counter("Combiners", "Tasks", 1)
yield (word, sum(count))
def reducer(self, word, count):
self.increment_counter("Reducers", "Tasks", 1)
self.increment_counter("Reducers", "Lines processed", len(list(count)))
yield (word, sum(count))
if __name__ == "__main__":
IssueCounterCombiner.run()
%%writefile python_mr_driver.py
from IssueCounterCombiner import IssueCounterCombiner
mr_job = IssueCounterCombiner(args=['Temp_data/Consumer_Complaints.csv'])
with mr_job.make_runner() as runner:
runner.run()
print(runner.counters())
# for line in runner.stream_output():
# print(mr_job.parse_output_line(line))
results = !python python_mr_driver.py
results
"""
Explanation: Perform a word count analysis of the Issue column of the Consumer Complaints Dataset using a Mapper, Reducer, and standalone combiner (i.e., not an in-memory combiner) based WordCount using user defined Counters to count up how many time the mapper, combiner, reducer are called. What is the value of your user defined Mapper Counter, and Reducer Counter after completing your word count job.
End of explanation
"""
%%writefile Top50.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import csv
import sys
def order_key(order_in_reducer, key_name):
number_of_stars = order_in_reducer//10 + 1
number = str(order_in_reducer%10)
return "%s %s" % ("*"*number_of_stars+number, key_name)
class Top50(MRJob):
MRJob.SORT_VALUES = True
def mapper_get_issue(self, _, lines):
terms = list(csv.reader([lines]))[0]
issue = terms[3]
if issue == "":
issue = "<blank>"
yield (issue, 1)
def combiner_count_issues(self, word, count):
yield (word, sum(count))
def reducer_init_totals(self):
self.issue_counts = []
def reducer_count_issues(self, word, count):
issue_count = sum(count)
self.issue_counts.append(int(issue_count))
yield (word, issue_count)
def reducer_final_emit_counts(self):
yield (order_key(1, "Total"), sum(self.issue_counts))
yield (order_key(2, "40th"), sorted(self.issue_counts)[-40])
def reducer_init(self):
self.increment_counter("Reducers", "Count", 1)
self.var = {}
def reducer(self, word, count):
if word.startswith("*"):
_, term = word.split()
self.var[term] = next(count)
else:
total = sum(count)
if total >= self.var["40th"]:
yield (word, (total/self.var["Total"], total))
def mapper_sort(self, key, value):
value[0] = 1-float(value[0])
yield value, key
def reducer_sort(self, key, value):
key[0] = round(1-float(key[0]),3)
yield key, next(value)
def steps(self):
mr_steps = [MRStep(mapper=self.mapper_get_issue,
combiner=self.combiner_count_issues,
reducer_init=self.reducer_init_totals,
reducer=self.reducer_count_issues,
reducer_final=self.reducer_final_emit_counts),
MRStep(reducer_init=self.reducer_init,
reducer=self.reducer),
MRStep(mapper=self.mapper_sort,
reducer=self.reducer_sort)
]
return mr_steps
if __name__ == "__main__":
Top50.run()
!head -n 3001 Temp_data/Consumer_Complaints.csv | python Top50.py --jobconf mapred.reduce.tasks=1
"""
Explanation: Although the same amount of map and reduce tasks were called, because 146 combiner tasks were called, my hypothesis would be that the number of observations read by reducers was less. I went back and included a counter that kept track of the lines passed over the network. With the combiner, only 146 observations were passed over the network. This is equal to the number of times the combiner was called (which makes sense because combiners act as map-side reducers and each one would process on a different key of the data and output a single line).
Using a single reducer: What are the top 50 most frequent terms in your word count analysis? Present the top 50 terms and their frequency and their relative frequency. If there are ties please sort the tokens in alphanumeric/string order. Present bottom 10 tokens (least frequent items).
End of explanation
"""
!head -n 10 Temp_data/ProductPurchaseData.txt
%%writefile ProductPurchaseStats.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import sys
import heapq
class TopList(list):
def __init__(self, max_size):
"""
Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison.
"""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class ProductPurchaseStats(MRJob):
def mapper_init(self):
self.largest_basket = 0
self.total_items = 0
def mapper(self, _, lines):
products = lines.split()
n_products = len(products)
self.total_items += n_products
if n_products > self.largest_basket:
self.largest_basket = n_products
for prod in products:
yield (prod, 1)
def mapper_final(self):
self.increment_counter("product stats", "largest basket", self.largest_basket)
yield ("*** Total", self.total_items)
def combiner(self, keys, values):
yield keys, sum(values)
def reducer_init(self):
self.top50 = TopList(50)
self.total = 0
def reducer(self, key, values):
value_count = sum(values)
if key == "*** Total":
self.total = value_count
else:
self.increment_counter("product stats", "unique products")
self.top50.append([value_count, value_count/self.total, key])
def reducer_final(self):
for counts, relative_rate, key in self.top50.final_sort():
yield key, (counts, round(relative_rate,3))
if __name__ == "__main__":
ProductPurchaseStats.run()
!cat Temp_data/ProductPurchaseData.txt | python ProductPurchaseStats.py --jobconf mapred.reduce.tasks=1
"""
Explanation: 3.2.1
Using 2 reducers: What are the top 50 most frequent terms in your word count analysis?
Present the top 50 terms and their frequency and their relative frequency. Present the top 50 terms and their frequency and their relative frequency. If there are ties please sort the tokens in alphanumeric/string order. Present bottom 10 tokens (least frequent items). Please use a combiner.
HW3.3. Shopping Cart Analysis
Product Recommendations: The action or practice of selling additional products or services
to existing customers is called cross-selling. Giving product recommendation is
one of the examples of cross-selling that are frequently used by online retailers.
One simple method to give product recommendations is to recommend products that are frequently
browsed together by the customers.
For this homework use the online browsing behavior dataset located at:
https://www.dropbox.com/s/zlfyiwa70poqg74/ProductPurchaseData.txt?dl=0
Each line in this dataset represents a browsing session of a customer.
On each line, each string of 8 characters represents the id of an item browsed during that session.
The items are separated by spaces.
Here are the first few lines of the ProductPurchaseData
FRO11987 ELE17451 ELE89019 SNA90258 GRO99222
GRO99222 GRO12298 FRO12685 ELE91550 SNA11465 ELE26917 ELE52966 FRO90334 SNA30755 ELE17451 FRO84225 SNA80192
ELE17451 GRO73461 DAI22896 SNA99873 FRO86643
ELE17451 ELE37798 FRO86643 GRO56989 ELE23393 SNA11465
ELE17451 SNA69641 FRO86643 FRO78087 SNA11465 GRO39357 ELE28573 ELE11375 DAI54444
Do some exploratory data analysis of this dataset guided by the following questions:.
How many unique items are available from this supplier?
Using a single reducer: Report your findings such as number of unique products; largest basket; report the top 50 most frequently purchased items, their frequency, and their relative frequency (break ties by sorting the products alphabetical order) etc. using Hadoop Map-Reduce.
End of explanation
"""
%%writefile PairsRecommender.py
from mrjob.job import MRJob
import heapq
import sys
def all_itemsets_of_size_two(array, key=None, return_type="string", concat_val=" "):
"""
Generator that yields all valid itemsets of size two
where each combo is returned in an order sorted by key.
key = None defaults to standard sorting.
return_type: can be "string" or "tuple". If "string",
concatenates values with concat_val and returns string.
If tuple, returns a tuple with two elements.
"""
array = sorted(array, key=key)
for index, item in enumerate(array):
for other_item in array[index:]:
if item != other_item:
if return_type == "string":
yield "%s%s%s" % (str(item), concat_val, str(other_item))
else:
yield (item, other_item)
class TopList(list):
def __init__(self, max_size):
"""
Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison.
"""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class PairsRecommender(MRJob):
def mapper_init(self):
self.total_baskets = 0
def mapper(self, _, lines):
self.total_baskets += 1
products = lines.split()
self.increment_counter("job stats", "number of items", len(products))
for itemset in all_itemsets_of_size_two(products):
self.increment_counter("job stats", "number of item combos")
yield (itemset, 1)
def mapper_final(self):
self.increment_counter("job stats", "number of baskets", self.total_baskets)
yield ("*** Total", self.total_baskets)
def combiner(self, key, values):
self.increment_counter("job stats", "number of keys fed to combiner")
yield key, sum(values)
def reducer_init(self):
self.top_values = TopList(50)
self.total_baskets = 0
def reducer(self, key, values):
values_sum = sum(values)
if key == "*** Total":
self.total_baskets = values_sum
elif values_sum >= 100:
self.increment_counter("job stats", "number of unique itemsets >= 100")
basket_percent = values_sum/self.total_baskets
self.top_values.append([values_sum, round(basket_percent,3), key])
else:
self.increment_counter("job stats", "number of unique itemsets < 100")
def reducer_final(self):
for values_sum, basket_percent, key in self.top_values.final_sort():
yield key, (values_sum, basket_percent)
if __name__ == "__main__":
PairsRecommender.run()
%%time
!cat Temp_data/ProductPurchaseData.txt | python PairsRecommender.py --jobconf mapred.reduce.tasks=1
!system_profiler SPHardwareDataType
"""
Explanation: 3.3.1 OPTIONAL
Using 2 reducers: Report your findings such as number of unique products; largest basket; report the top 50 most frequently purchased items, their frequency, and their relative frequency (break ties by sorting the products alphabetical order) etc. using Hadoop Map-Reduce.
HW3.4. (Computationally prohibitive but then again Hadoop can handle this) Pairs
Suppose we want to recommend new products to the customer based on the products they
have already browsed on the online website. Write a map-reduce program
to find products which are frequently browsed together. Fix the support count (cooccurence count) to s = 100
(i.e. product pairs need to occur together at least 100 times to be considered frequent)
and find pairs of items (sometimes referred to itemsets of size 2 in association rule mining) that have a support count of 100 or more.
List the top 50 product pairs with corresponding support count (aka frequency), and relative frequency or support (number of records where they coccur, the number of records where they coccur/the number of baskets in the dataset) in decreasing order of support for frequent (100>count) itemsets of size 2.
Use the Pairs pattern (lecture 3) to extract these frequent itemsets of size 2. Free free to use combiners if they bring value. Instrument your code with counters for count the number of times your mapper, combiner and reducers are called.
Please output records of the following form for the top 50 pairs (itemsets of size 2):
item1, item2, support count, support
Fix the ordering of the pairs lexicographically (left to right),
and break ties in support (between pairs, if any exist)
by taking the first ones in lexicographically increasing order.
Report the compute time for the Pairs job. Describe the computational setup used (E.g., single computer; dual core; linux, number of mappers, number of reducers)
Instrument your mapper, combiner, and reducer to count how many times each is called using Counters and report these counts.
End of explanation
"""
%%writefile StripesRecommender.py
from mrjob.job import MRJob
from collections import Counter
import sys
import heapq
def all_itemsets_of_size_two_stripes(array, key=None):
"""
Generator that yields all valid itemsets of size two
where each combo is as a stripe.
key = None defaults to standard sorting.
"""
array = sorted(array, key=key)
for index, item in enumerate(array[:-1]):
yield (item, {val:1 for val in array[index+1:]})
class TopList(list):
def __init__(self, max_size):
"""
Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size). If each element of the list is an int
or float, uses that value for comparison. If the first element is a
list or tuple, uses the first element of the list or tuple for the
comparison.
"""
self.max_size = max_size
def _get_key(self, x):
return x[0] if isinstance(x, (list, tuple)) else x
def append(self, val):
key=lambda x: x[0] if isinstance(x, (list, tuple)) else x
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class StripesRecommender(MRJob):
def mapper_init(self):
self.basket_count = 0
def mapper(self, _, lines):
self.basket_count += 1
products = lines.split()
for item, value in all_itemsets_of_size_two_stripes(products):
yield item, value
def mapper_final(self):
yield ("*** Total", {"total": self.basket_count})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer_init(self):
self.top = TopList(50)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
if keys == "*** Total":
self.total = values_sum["total"]
else:
for k, v in values_sum.items():
if v >= 100:
self.top.append([v, round(v/self.total,3), keys+" "+k])
def reducer_final(self):
for count, perc, key in self.top.final_sort():
yield key, (count, perc)
if __name__ == "__main__":
StripesRecommender.run()
%%time
!cat Temp_data/ProductPurchaseData.txt | python StripesRecommender.py --jobconf mapred.reduce.tasks=1
"""
Explanation: HW3.5: Stripes
Repeat 3.4 using the stripes design pattern for finding cooccuring pairs.
Report the compute times for stripes job versus the Pairs job. Describe the computational setup used (E.g., single computer; dual core; linux, number of mappers, number of reducers)
Instrument your mapper, combiner, and reducer to count how many times each is called using Counters and report these counts. Discuss the differences in these counts between the Pairs and Stripes jobs
OPTIONAL: all HW below this are optional
End of explanation
"""
set([1,2,3])
[1,2,3][:-1]
"""
Explanation: The pairs operation took 1 minute 30 seconds. The stripes operation took 24 seconds, which is about a quarter of the time for pairs.
HW3.6 Computing Relative Frequencies on 100K WikiPedia pages (93Meg)
Dataset description
For this assignment you will explore a set of 100,000 Wikipedia documents:
https://www.dropbox.com/s/n5lfbnztclo93ej/wikitext_100k.txt?dl=0
s3://cs9223/wikitext_100k.txt, or
https://s3.amazonaws.com/cs9223/wikitext_100k.txt
Each line in this file consists of the plain text extracted from a Wikipedia document.
Task
Compute the relative frequencies of each word that occurs in the documents in wikitext_100k.txt and output the top 100 word pairs sorted by decreasing order of relative frequency.
Recall that the relative frequency (RF) of word B given word A is defined as follows:
f(B|A) = Count(A, B) / Count (A) = Count(A, B) / sum_B'(Count (A, B')
where count(A,B) is the number of times A and B co-occur within a window of two words (co-occurrence window size of two) in a document and count(A) the number of times A occurs with anything else. Intuitively, given a document collection, the relative frequency captures the proportion of time the word B appears in the same document as A. (See Section 3.3, in Data-Intensive Text Processing with MapReduce).
In the async lecture you learned different approaches to do this, and in this assignment, you will implement them:
a. Write a mapreduce program which uses the Stripes approach and writes its output in a file named rfstripes.txt
b. Write a mapreduce program which uses the Pairs approach and writes its output in a file named rfpairs.txt
c. Compare the performance of the two approaches and output the relative performance to a file named rfcomp.txt. Compute the relative performance as follows: (running time for Pairs/ running time for Stripes). Also include an analysis comparing the communication costs for the two approaches. Instrument your mapper and reduces for counters where necessary to aid with your analysis.
NOTE: please limit your analysis to the top 100 word pairs sorted by decreasing order of relative frequency for each word (tokens with all alphabetical letters).
Please include markdown cell named rf.txt that describes the following:
the input/output format in each Hadoop task, i.e., the keys for the mappers and reducers
the Hadoop cluster settings you used, i.e., number of mappers and reducers
the running time for each approach: pairs and stripes
You can write your program using Python or MrJob (with Hadoop streaming) and you should run it on AWS. It is a good idea to develop and test your program on a local machine before deploying on AWS. Remember your notebook, needs to have all the commands you used to run each Mapreduce job (i.e., pairs and stripes) -- include the Hadoop streaming commands you used to run your jobs.
In addition the All the following files should be compressed in one ZIP file and submitted. The ZIP file should contain:
A. The result files: rfstripes.txt, rfpairs.txt, rfcomp.txt
Prior to working with Hadoop, the corpus should first be preprocessed as follows:
perform tokenization (whitespace and all non-alphabetic characters) and stopword removal using standard tools from the Lucene search engine. All tokens should then be replaced
with unique integers for a more efficient encoding.
== Preliminary information for the remaing HW problems===
Much of this homework beyond this point will focus on the Apriori algorithm for frequent itemset mining and the additional step for extracting association rules from these frequent itemsets.
Please acquaint yourself with the background information (below)
before approaching the remaining assignments.
=== Apriori background information ===
Some background material for the Apriori algorithm is located at:
Slides in Live Session #3
https://en.wikipedia.org/wiki/Apriori_algorithm
https://www.dropbox.com/s/k2zm4otych279z2/Apriori-good-slides.pdf?dl=0
http://snap.stanford.edu/class/cs246-2014/slides/02-assocrules.pdf
Association Rules are frequently used for Market Basket Analysis (MBA) by retailers to
understand the purchase behavior of their customers. This information can be then used for
many different purposes such as cross-selling and up-selling of products, sales promotions,
loyalty programs, store design, discount plans and many others.
Evaluation of item sets: Once you have found the frequent itemsets of a dataset, you need
to choose a subset of them as your recommendations. Commonly used metrics for measuring
significance and interest for selecting rules for recommendations are: confidence; lift; and conviction.
HW3.7 Apriori Algorithm
What is the Apriori algorithm? Describe an example use in your domain of expertise and what kind of . Define confidence and lift.
NOTE:
For the remaining homework use the online browsing behavior dataset located at (same dataset as used above):
https://www.dropbox.com/s/zlfyiwa70poqg74/ProductPurchaseData.txt?dl=0
Each line in this dataset represents a browsing session of a customer.
On each line, each string of 8 characters represents the id of an item browsed during that session.
The items are separated by spaces.
Here are the first few lines of the ProductPurchaseData
FRO11987 ELE17451 ELE89019 SNA90258 GRO99222
GRO99222 GRO12298 FRO12685 ELE91550 SNA11465 ELE26917 ELE52966 FRO90334 SNA30755 ELE17451 FRO84225 SNA80192
ELE17451 GRO73461 DAI22896 SNA99873 FRO86643
ELE17451 ELE37798 FRO86643 GRO56989 ELE23393 SNA11465
ELE17451 SNA69641 FRO86643 FRO78087 SNA11465 GRO39357 ELE28573 ELE11375 DAI54444
HW3.8. Shopping Cart Analysis
Product Recommendations: The action or practice of selling additional products or services
to existing customers is called cross-selling. Giving product recommendation is
one of the examples of cross-selling that are frequently used by online retailers.
One simple method to give product recommendations is to recommend products that are frequently
browsed together by the customers.
Suppose we want to recommend new products to the customer based on the products they
have already browsed on the online website. Write a program using the A-priori algorithm
to find products which are frequently browsed together. Fix the support to s = 100
(i.e. product sets need to occur together at least 100 times to be considered frequent)
and find itemsets of size 2 and 3.
Then extract association rules from these frequent items.
A rule is of the form:
(item1, item5) ⇒ item2.
List the top 10 discovered rules in descreasing order of confidence in the following format
(item1, item5) ⇒ item2, supportCount ,support, confidence
HW3.8
Benchmark your results using the pyFIM implementation of the Apriori algorithm
(Apriori - Association Rule Induction / Frequent Item Set Mining implemented by Christian Borgelt).
You can download pyFIM from here:
http://www.borgelt.net/pyfim.html
Comment on the results from both implementations (your Hadoop MapReduce of apriori versus pyFIM)
in terms of results and execution times.
END OF HOMEWORK
End of explanation
"""
|
googledatalab/notebooks | samples/ML Toolbox/Regression/Census/4 Service Evaluate.ipynb | apache-2.0 | import google.datalab as datalab
import google.datalab.ml as ml
import mltoolbox.regression.dnn as regression
import os
"""
Explanation: Evaluating a Model with Dataflow and BigQuery
This notebook is the third in the set of steps to run machine learning on the cloud. In this step, we will use the model training in the previous notebook and continue with evaluating the resulting model.
Evaluation is accomplished by first running batch prediction over one or more evaluation datasets. This is done via Cloud Dataflow, and then analyzing the results using BigQuery. Doing evaluation using these services allow you to scale to large evaluation datasets.
Workspace Setup
The first step is to setup the workspace that we will use within this notebook - the python libraries, and the Google Cloud Storage bucket that will be used to contain the inputs and outputs produced over the course of the steps.
End of explanation
"""
storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'
storage_region = 'us-central1'
workspace_path = os.path.join(storage_bucket, 'census')
training_path = os.path.join(workspace_path, 'training')
"""
Explanation: The storage bucket was created earlier. We'll re-declare it here, so we can use it.
End of explanation
"""
!gsutil ls -r {training_path}/model
"""
Explanation: Model
Lets take a quick look at the model that was previously produced as a result of the training job.
End of explanation
"""
eval_data_path = os.path.join(workspace_path, 'data/eval.csv')
evaluation_path = os.path.join(workspace_path, 'evaluation')
regression.batch_predict(training_dir=training_path, prediction_input_file=eval_data_path,
output_dir=evaluation_path,
mode='evaluation',
output_format='csv',
cloud=True)
"""
Explanation: Batch Prediction
We'll submit a batch prediction Dataflow job, to use this model by loading it into TensorFlow, and running it in evaluation mode (the mode that expects the input data to contain a value for target). The other mode, prediction, is used to predict over data where the target column is missing.
NOTE: Batch prediction can take a few minutes to launch while compute resources are provisioned. In the case of large datasets in real-world problems, this overhead is a much smaller part of the overall job lifetime.
End of explanation
"""
!gsutil ls {evaluation_path}
!gsutil cat {evaluation_path}/csv_schema.json
!gsutil -q -m cp -r {evaluation_path}/ /tmp
!head -n 5 /tmp/evaluation/predictions-00000*
"""
Explanation: Once prediction is done, the individual predictions will be written out into Cloud Storage.
End of explanation
"""
%bq datasource --name eval_results --paths gs://cloud-ml-users-datalab-workspace/census/evaluation/predictions*
{
"schema": [
{
"type": "STRING",
"mode": "nullable",
"name": "SERIALNO"
},
{
"type": "FLOAT",
"mode": "nullable",
"name": "predicted_target"
},
{
"type": "FLOAT",
"mode": "nullable",
"name": "target_from_input"
}
]
}
%bq query --datasource eval_results
SELECT SQRT(AVG(error)) AS rmse FROM (
SELECT POW(target_from_input - predicted_target, 2) AS error
FROM eval_results
)
%bq query --name distribution_query --datasource eval_results
WITH errors AS (
SELECT predicted_target - target_from_input AS error
FROM eval_results
),
error_stats AS (
SELECT MIN(error) AS min_error, MAX(error) - MIN(error) AS error_range FROM errors
),
quantized_errors AS (
SELECT error, FLOOR((error - min_error) * 20 / error_range) error_bin FROM errors CROSS JOIN error_stats
)
SELECT AVG(error) AS error, COUNT(error_bin) AS instances
FROM quantized_errors
GROUP BY error_bin
ORDER BY error_bin
%chart columns --data distribution_query --fields error,instances
"""
Explanation: Analysis with BigQuery
We're going to use BigQuery to do evaluation. BigQuery can directly work against CSV data in Cloud Storage. However, if you have very large evaluation results, or you're going to be running multiple queries, it is advisable to first load the results into a BigQuery table.
End of explanation
"""
|
AntArch/Presentations_Github | 20150916_OGC_Reuse_under_licence/20150916_OGC_Reuse_under_licence.ipynb | cc0-1.0 | from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
## PDF output using pandoc
import os
### Export this notebook as markdown
commandLineSyntax = 'ipython nbconvert --to markdown 20150916_OGC_Reuse_under_licence.ipynb'
print (commandLineSyntax)
os.system(commandLineSyntax)
### Export this notebook and the document header as PDF using Pandoc
commandLineSyntax = 'pandoc -f markdown -t latex -N -V geometry:margin=1in DocumentHeader.md 20150916_OGC_Reuse_under_licence.md --filter pandoc-citeproc --latex-engine=xelatex --toc -o interim.pdf '
os.system(commandLineSyntax)
### Remove cruft from the pdf
commandLineSyntax = 'pdftk interim.pdf cat 1-3 16-end output 20150916_OGC_Reuse_under_licence.pdf'
os.system(commandLineSyntax)
### Remove the interim pdf
commandLineSyntax = 'rm interim.pdf'
os.system(commandLineSyntax)
"""
Explanation: Go down for licence and other metadata about this presentation
\newpage
Preamble
Licence
Unless stated otherwise all content is released under a [CC0]+BY licence. I'd appreciate it if you reference this but it is not necessary.
\newpage
Using Ipython for presentations
A short video showing how to use Ipython for presentations
End of explanation
"""
%install_ext https://raw.githubusercontent.com/rasbt/python_reference/master/ipython_magic/watermark.py
%load_ext watermark
%watermark -a "Anthony Beck" -d -v -m -g
#List of installed conda packages
!conda list
#List of installed pip packages
!pip list
"""
Explanation: The environment
In order to replicate my environment you need to know what I have installed!
Set up watermark
This describes the versions of software used during the creation.
Please note that critical libraries can also be watermarked as follows:
python
%watermark -v -m -p numpy,scipy
End of explanation
"""
!ipython nbconvert 20150916_OGC_Reuse_under_licence.ipynb --to slides --post serve
"""
Explanation: Running dynamic presentations
You need to install the RISE Ipython Library from Damián Avila for dynamic presentations
To convert and run this as a static presentation run the following command:
End of explanation
"""
#Future proof python 2
from __future__ import print_function #For python3 print syntax
from __future__ import division
# def
import IPython.core.display
# A function to collect user input - ipynb_input(varname='username', prompt='What is your username')
def ipynb_input(varname, prompt=''):
"""Prompt user for input and assign string val to given variable name."""
js_code = ("""
var value = prompt("{prompt}","");
var py_code = "{varname} = '" + value + "'";
IPython.notebook.kernel.execute(py_code);
""").format(prompt=prompt, varname=varname)
return IPython.core.display.Javascript(js_code)
# inline
%pylab inline
"""
Explanation: To close this instances press control 'c' in the ipython notebook terminal console
Static presentations allow the presenter to see speakers notes (use the 's' key)
If running dynamically run the scripts below
Pre load some useful libraries
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('jUzGF401vLc')
"""
Explanation: \newpage
About me
Research Fellow, University of Nottingham: orcid
Director, Geolytics Limited - A spatial data analytics consultancy
About this presentation
Available on GitHub - https://github.com/AntArch/Presentations_Github/
Fully referenced PDF
\newpage
A potted history of mapping
In the beginning was the geoword
and the word was cartography
\newpage
Cartography was king. Static representations of spatial knowledge with the cartographer deciding what to represent.
\newpage
And then there was data .........
\newpage
Restrictive data
\newpage
Making data interoperable and open
\newpage
Technical interoperability - levelling the field
\newpage
Facilitating data driven visualization
From Map to Model The changing paradigm of map creation from cartography to data driven visualization
\newpage
\newpage
\newpage
\newpage
What about non-technical interoperability issues?
Issues surrounding non-technical interoperability include:
Policy interoperabilty
Licence interoperability
Legal interoperability
Social interoperability
We will focus on licence interoperability
\newpage
There is a multitude of formal and informal data.
\newpage
Each of these data objects can be licenced in a different way. This shows some of the licences described by the RDFLicence ontology
\newpage
What is a licence?
Wikipedia state:
A license may be granted by a party ("licensor") to another party ("licensee") as an element of an agreement between those parties.
A shorthand definition of a license is "an authorization (by the licensor) to use the licensed material (by the licensee)."
Concepts (derived from Formal Concept Analysis) surrounding licences
\newpage
Two lead organisations have developed legal frameworks for content licensing:
Creative Commons (CC) and
Open Data Commons (ODC).
Until the release of CC version 4, published in November 2013, the CC licence did not cover data. Between them, CC and ODC licences can cover all forms of digital work.
There are many other licence types
Many are bespoke
Bespoke licences are difficult to manage
Many legacy datasets have bespoke licences
I'll describe CC in more detail
\newpage
Creative Commons Zero
Creative Commons Zero (CC0) is essentially public domain which allows:
Reproduction
Distribution
Derivations
Constraints on CC0
The following clauses constrain CC0:
Permissions
ND – No derivatives: the licensee can not derive new content from the resource.
Requirements
BY – By attribution: the licensee must attribute the source.
SA – Share-alike: if the licensee adapts the resource, it must be released under the same licence.
Prohibitions
NC – Non commercial: the licensee must not use the work commercially without prior approval.
CC license combinations
License|Reproduction|Distribution|Derivation|ND|BY|SA|NC
----|----|----|----|----|----|----|----
CC0|X|X|X||||
CC-BY-ND|X|X||X|X||
CC-BY-NC-ND|X|X||X|X||X
CC-BY|X|X|X||X||
CC-BY-SA|X|X|X||X|X|
CC-BY-NC|X|X|X||X||X
CC-BY-NC-SA|X|X|X||X|X|X
Table: Creative Commons license combinations
\newpage
Why are licenses important?
They tell you what you can and can't do with 'stuff'
Very significant when multiple datasets are combined
It then becomes an issue of license compatibility
\newpage
Which is important when we mash up data
Certain licences when combined:
Are incompatible
Creating data islands
Inhibit commercial exploitation (NC)
Force the adoption of certain licences
If you want people to commercially exploit your stuff don't incorporate CC-BY-NC-SA data!
Stops the derivation of new works
A conceptual licence processing workflow. The licence processing service analyses the incoming licence metadata and determines if the data can be legally integrated and any resulting licence implications for the derived product.
\newpage
A rudimentry logic example
```
Data1 hasDerivedContentIn NewThing.
Data1 hasLicence a cc-by-sa.
What hasLicence a cc-by-sa? #reason here
If X hasDerivedContentIn Y and hasLicence Z then Y hasLicence Z. #reason here
Data2 hasDerivedContentIn NewThing.
Data2 hasLicence a cc-by-nc-sa.
What hasLicence a cc-by-nc-sa? #reason here
Nothing hasLicence a cc-by-nc-sa and hasLicence a cc-by-sa. #reason here
```
And processing this within the Protege reasoning environment
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('tkRB5Rp1_W4')
"""
Explanation: \newpage
Here's something I prepared earlier
A live presentation (for those who weren't at the event).....
End of explanation
"""
|
egentry/dwarf_photo-z | dwarfz/data/get_training_galaxy_images.ipynb | mit | # give access to importing dwarfz
import os, sys
dwarfz_package_dir = os.getcwd().split("dwarfz")[0]
if dwarfz_package_dir not in sys.path:
sys.path.insert(0, dwarfz_package_dir)
import dwarfz
# back to regular import statements
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set(context="poster", style="ticks", font_scale=1.4)
import numpy as np
import pandas as pd
import glob
import shutil
from scipy.special import expit
COSMOS_filename = os.path.join(dwarfz.data_dir_default, "COSMOS_reference.sqlite")
COSMOS = dwarfz.datasets.COSMOS(COSMOS_filename)
HSC_filename = os.path.join(dwarfz.data_dir_default, "HSC_COSMOS_median_forced.sqlite3")
HSC = dwarfz.datasets.HSC(HSC_filename)
matches_filename = os.path.join(dwarfz.data_dir_default, "matches.sqlite3")
matches_df = dwarfz.matching.Matches.load_from_filename(matches_filename)
combined = matches_df[matches_df.match].copy()
combined["ra"] = COSMOS.df.loc[combined.index].ra
combined["dec"] = COSMOS.df.loc[combined.index].dec
combined["photo_z"] = COSMOS.df.loc[combined.index].photo_z
combined["log_mass"] = COSMOS.df.loc[combined.index].mass_med
photometry_cols = [
"gcmodel_flux","gcmodel_flux_err","gcmodel_flux_flags", "gcmodel_mag",
"rcmodel_flux","rcmodel_flux_err","rcmodel_flux_flags", "rcmodel_mag",
"icmodel_flux","icmodel_flux_err","icmodel_flux_flags", "icmodel_mag",
"zcmodel_flux","zcmodel_flux_err","zcmodel_flux_flags", "zcmodel_mag",
"ycmodel_flux","ycmodel_flux_err","ycmodel_flux_flags", "ycmodel_mag",
]
for col in photometry_cols:
combined[col] = HSC.df.loc[combined.catalog_2_ids][col].values
"""
Explanation: What do I want?
Previously in ../catalog_only_classifier/classifier_comparison.ipynb I got a Random Forest-predicted probability of each galaxy being a low-z dwarf. Now I want to select a subset of the galaxies most likely to be dwarfs, and pull their images from the HSC servers.
Code
First let's load some general information about each galaxy, in case we want to see how our "dwarf galaxy scores" correlate with various properties
End of explanation
"""
combined["g_minus_r"] = combined.gcmodel_mag - combined.rcmodel_mag
combined["r_minus_i"] = combined.rcmodel_mag - combined.icmodel_mag
combined["i_minus_z"] = combined.icmodel_mag - combined.zcmodel_mag
combined["z_minus_y"] = combined.zcmodel_mag - combined.ycmodel_mag
"""
Explanation: Turn magnitudes into colors
End of explanation
"""
mask = np.isfinite(combined["g_minus_r"]) & np.isfinite(combined["r_minus_i"]) \
& np.isfinite(combined["i_minus_z"]) & np.isfinite(combined["z_minus_y"]) \
& np.isfinite(combined["icmodel_mag"]) \
& (~combined.gcmodel_flux_flags) & (~combined.rcmodel_flux_flags) \
& (~combined.icmodel_flux_flags) & (~combined.zcmodel_flux_flags) \
& (~combined.ycmodel_flux_flags)
combined = combined[mask]
"""
Explanation: Filter out bad data
End of explanation
"""
df_frankenz = pd.read_sql_table("photo_z",
"sqlite:///{}".format(
os.path.join(dwarfz.data_dir_default,
"HSC_matched_to_FRANKENZ.sqlite")),
index_col="object_id")
df_frankenz.head()
combined = combined.join(df_frankenz[["photoz_best", "photoz_risk_best"]],
on="catalog_2_ids")
"""
Explanation: Get FRANKENZ photo-z's
End of explanation
"""
low_z = (combined.photo_z < .15)
low_mass = (combined.log_mass > 8) & (combined.log_mass < 9)
combined["low_z_low_mass"] = (low_z & low_mass)
combined.low_z_low_mass.mean()
"""
Explanation: Create classification labels
End of explanation
"""
filename = "galaxy_images_training/2017_09_26-dwarf_galaxy_scores.csv"
!head -n 20 $filename
df_dwarf_prob = pd.read_csv(filename)
df_dwarf_prob.head()
theoretical_probs=np.linspace(0,1,num=20+1)
empirical_probs_RF = np.empty(theoretical_probs.size-1)
for i in range(theoretical_probs.size-1):
prob_lim_low = theoretical_probs[i]
prob_lim_high = theoretical_probs[i+1]
mask_RF = (df_dwarf_prob.dwarf_prob >= prob_lim_low) & (df_dwarf_prob.dwarf_prob < prob_lim_high)
empirical_probs_RF[i] = df_dwarf_prob.low_z_low_mass[mask_RF].mean()
color_RF = "g"
label_RF = "Random Forest"
plt.hist(df_dwarf_prob.dwarf_prob, bins=theoretical_probs)
plt.xlim(0,1)
plt.yscale("log")
plt.ylabel("counts")
plt.figure()
plt.step(theoretical_probs, [empirical_probs_RF[0], *empirical_probs_RF],
linestyle="steps", color=color_RF, label=label_RF)
plt.plot(theoretical_probs, theoretical_probs-.05,
drawstyle="steps", color="black", label="ideal", linestyle="dashed")
plt.xlabel("Reported Probability")
plt.ylabel("Actual (Binned) Probability")
plt.legend(loc="best")
plt.xlim(0,1)
plt.ylim(0,1)
"""
Explanation: Load scores for each galaxy
This scores were created within catalog_only_classifier/classifier_ROC_curves.ipynb. These scores are also non-deterministic -- getting a new realization will lead to slightly different results (but hopefully with no dramatic changes).
End of explanation
"""
COSMOS_field_area = 2 # sq. deg.
sample_size = int(1000 * COSMOS_field_area)
print("sample size: ", sample_size)
selected_galaxies = df_dwarf_prob.sort_values("dwarf_prob", ascending=False)[:sample_size]
print("threshold: ", selected_galaxies.dwarf_prob.min())
print("galaxies at or above threshold: ", (df_dwarf_prob.dwarf_prob>=selected_galaxies.dwarf_prob.min()).sum() )
df_dwarf_prob.dwarf_prob.min()
bins = np.linspace(0,1, num=100)
plt.hist(df_dwarf_prob.dwarf_prob, bins=bins, label="RF score")
plt.axvline(selected_galaxies.dwarf_prob.min(), linestyle="dashed", color="black", label="threshold")
plt.legend(loc="best")
plt.xlabel("Dwarf Prob.")
plt.ylabel("counts (galaxies)")
plt.yscale("log")
# how balanced is the CNN training set? What fraction are actually dwarfs?
selected_galaxies.low_z_low_mass.mean()
"""
Explanation: select the best 1000 / sq.deg.
End of explanation
"""
selected_galaxy_coords = HSC.df.loc[selected_galaxies.HSC_id][["ra", "dec"]]
selected_galaxy_coords.head()
selected_galaxy_coords.to_csv("galaxy_images_training/2017_09_26-selected_galaxy_coords.csv")
width = "20asec"
filters = ["HSC-G", "HSC-R", "HSC-I", "HSC-Z", "HSC-Y"]
rerun = "pdr1_deep"
tmp_filename = "galaxy_images_training/tmp_quarry.txt"
tmp_filename_missing_ext = tmp_filename[:-3]
with open(tmp_filename, mode="w") as f:
# print("#? ra dec filter sw sh rerun", file=f)
print_formatter = " {galaxy.ra:.6f}deg {galaxy.dec:.6f}deg {filter} {width} {width} {rerun}"
for galaxy in selected_galaxy_coords.itertuples():
for filter in filters:
print(print_formatter.format(galaxy=galaxy,
width=width,
filter=filter,
rerun=rerun),
file=f)
!head -n 10 $tmp_filename
!wc -l $tmp_filename
!split -a 1 -l 1000 $tmp_filename $tmp_filename_missing_ext
"""
Explanation: Get the images from the quarry
For technical details, see: https://hsc-release.mtk.nao.ac.jp/das_quarry/manual.html
Create a coordinates list
End of explanation
"""
for filename_old in sorted(glob.glob("galaxy_images_training/tmp_quarry.?")):
filename_new = filename_old + ".txt"
with open(filename_new, mode="w") as f_new:
f_new.write("#? ra dec filter sw sh rerun\n")
with open(filename_old, mode="r") as f_old:
data = f_old.read()
f_new.write(data)
os.remove(filename_old)
!ls galaxy_images_training/
"""
Explanation: To do: I need to find a way to deal with the script below when there aren't any files to process (i.e. they've already been processeded)
End of explanation
"""
!head -n 10 galaxy_images_training/tmp_quarry.a.txt
!ls -lhtr galaxy_images_training/quarry_files_a | head -n 11
"""
Explanation: Make the request via curl
1)
First you need to setup you authentication information. Add it to a file like galaxy_images_training/curl_netrc which should look like:
machine hsc-release.mtk.nao.ac.jp login <your username> password <your password>
This allows you to script the curl calls, without being prompted for your password each time
2a)
The curl call (in (2b)) will spit out files into a somewhat unpredicatably named directory, like arch-170928-231223. You should rename this to match the batch suffix. You really should do this right away, so you don't get confused. In general I add the rename onto the same line as the curl call:
curl ... | tar xvf - && mv arch-* quarry_files_a
This only works if it finds one arch- directory, but you really shouldn't have multiple arch directories at any given time; that's a recipe for getting your galaxies mixed up.
2b)
Here's the actual curl invocation:
curl --netrc-file galaxy_images_training/curl_netrc https://hsc-release.mtk.nao.ac.jp/das_quarry/cgi-bin/quarryImage --form list=@<coord list filename> | tar xvf -
End of explanation
"""
!mkdir -p galaxy_images_training/quarry_files
"""
Explanation: rename files with HSC-id
steps:
1) figure out what suffix we're using (e.g. "a")
2) convert suffix into zero-indexed integer (e.g. 0)
3) determine which HSC-ids corresponded to that file
- e.g. from suffix_int*200 to (suffix_int+1)*200
4) swapping the line number prefix with an HSC_id prefix, while preserving the rest of the filename (esp. the band information)
make target directory
This'll hold the renamed files for all the batches
End of explanation
"""
prefix_map = { char:i for i, char in enumerate(["a","b","c","d","e",
"f","g","h","i","j"])
}
prefix = "a"
prefix_int = prefix_map[prefix]
print(prefix_int)
"""
Explanation: set prefix
End of explanation
"""
files = glob.glob("galaxy_images_training/quarry_files_{prefix}/*".format(prefix=prefix))
file_numbers = [int(os.path.basename(file).split("-")[0]) for file in files]
file_numbers = np.array(sorted(file_numbers))
desired_range = np.arange(2,1002)
print("missing file numbers: ", desired_range[~np.isin(desired_range, file_numbers)])
"""
Explanation: find which files are missing
What should I do about these incomplete files? idk, I guess just throw them out for now.
End of explanation
"""
i_start = prefix_int*200
i_end = (prefix_int+1)*200
print(i_start, i_end)
HSC_ids_for_prefix = selected_galaxies.iloc[i_start:i_end].HSC_id.values
HSC_ids_for_prefix
i_file = 1
for HSC_id in HSC_ids_for_prefix:
for filter in filters:
i_file += 1
filenames = glob.glob("galaxy_images_training/quarry_files_{prefix}/{i_file}-*-{filter}-*.fits".format(
prefix=prefix,
i_file=i_file,
filter=filter))
if len(filenames) > 1:
raise ValueError("too many matching files for i_file={}".format(i_file))
elif len(filenames) == 0:
print("skipping missing i_file={}".format(i_file))
continue
old_filename = filenames[0]
new_filename = old_filename.replace("/{i_file}-".format(i_file=i_file),
"/{HSC_id}-".format(HSC_id=HSC_id))
new_filename = new_filename.replace("quarry_files_{prefix}".format(prefix=prefix),
"quarry_files")
if os.path.exists(old_filename):
# for now use a copy operation, so I can fix things if it breaks,
# but later this should be a move instead
os.rename(old_filename, new_filename)
"""
Explanation: get HSC ids for index
End of explanation
"""
for file in glob.glob("galaxy_images_training/tmp_quarry.?.txt"):
os.remove(file)
for file in glob.glob("galaxy_images_training/quarry_files_?"):
# will fail if the directory isn't empty
# (directory should be empty after moving the renamed files to `quarry_files`)
os.rmdir(file)
filename = "galaxy_images_training/tmp_quarry.txt"
if os.path.exists(filename):
os.remove(filename)
"""
Explanation: clean up old files
End of explanation
"""
|
danielfather7/teach_Python | SEDS_Hw/seds-hw4-coding-and-testing-big-league-danielfather7/SEDS-HW4.ipynb | gpl-3.0 | import knn
import pandas as pd
test = pd.read_csv('testing.csv')
data = pd.read_csv('atomsradii.csv')
test
data
"""
Explanation: K-NN Classifier
Import packages and data:
End of explanation
"""
"""
def dist(vec1,vec2):
#Use scipy package to calculate Euclidean distance.
from scipy.spatial import distance
return(distance.euclidean(vec1,vec2))
"""
print()
#Try
knn.dist(data.loc[0][0:2],test.loc[0][0:2])
"""
Explanation: Functions:
1. dist(vec1,vec2): caculate Euclidean distance.
input: 2 same size 1-D arrays.
output: Euclidean distance between them.
End of explanation
"""
"""
def sortdist(df, vec):
#Use for loop, calculate Eulidean distance between input 1-D array and
#each row in input dataframe.
distlist = []
for index, row in df.iterrows():
distlist.append([dist(row[0:2],vec),index])
#Output is ordered from min to max with its index.
distlist.sort()
return(distlist)
"""
print()
#Try
knn.sortdist(data,[0.5,0.5])
"""
Explanation: 2. def sortdist(df, vec): calculate Eulidean distance between input 1-D array and each row in input dataframe.
input: (df: a dataframe; vec: 1-D array with 2 elements)
output: distance is ordered from min to max with its index.
End of explanation
"""
"""
def classify(k,df,vec):
import itertools as it
countlist = []
grouplist = []
gpsorted = []
makechoice = []
#Choose k numbers of sorted data.
kdistlist = sortdist(df,vec)[0:k]
#Find out the group it belongs to.
for i in range(k):
grouplist.append(df.loc[kdistlist[i][1]]['Type'])
#Use groupby to get numbers of the same group and unduplicated group name.
gp = sorted(grouplist)
# print(grouplist)
for key, group in it.groupby(gp):
countlist.append(len(list(group)))
gpsorted.append(key)
# print(countlist)
# print(gpsorted)
#Use a if loop to deal with 2 or more max values in countlist.
if countlist.count(max(countlist))>1:
for j in range(len(countlist)):
if countlist[j] == max(countlist):
makechoice.append(grouplist.index(gpsorted[j]))
# print(makechoice)
#It will choose the group which appeared ahead in grouplist.
#(In most case, it is the shortest distance among equal numbers of same
#groups.)
group = grouplist[min(makechoice)]
else:
group = gpsorted[countlist.index(max(countlist))]
return group
"""
print()
# Try
knn.classify(8,data,[0.5,0.5])
"""
Explanation: 3. classify(k,df,vec): give 1-D array a group by choosing a k value and training dataframe.
input: (k: k values; df: training dataframe, there should be a column named 'Type'; vec: 1-D array with 2 elements)
output: group which vec belongs to.
Note: If there are 2 or more max values, it will choose the closet point as its group among those max values.(Please example below)
End of explanation
"""
"""
def knn(traindf,inputdf,k):
#Wrap all functions and append a column called 'Knn Type'.
group = []
for index, row in inputdf.iterrows():
group.append(classify(k,traindf,row[0:2]))
inputdf['Knn Type'] = group
return(inputdf)
"""
print()
knn.knn(data, test, 4)
"""
Explanation: In this class, the group types in order of the distance to point [0.5, 0.5] are ['TM', 'TM', 'PT', 'Alk', 'PT', 'Alk', 'PT', 'Alk'] (can be checked by sortdist function). There are 2 'TM', 3 'PT', and 3 'Alk'. It will choose 'PT' rather than 'Alk' because the first 'PT' (index = 2) is in front of the first 'Alk' (index = 3), which means the first 'PT' is closer to the point [0.5,0.5].
4. knn(traindf,inputdf,k): Knn classfy by a given k.
input: (traindf: training dataframe with a column named 'Type'; inputdf: dataframe which needs to be classified; k: k value)
output: inputdf with a new column named 'Knn Type'.
End of explanation
"""
"""
def knnac(traindf,inputdf,k):
#Creat a dictionary, and calculate accuracy.
k_ac = {}
for i in k:
n = 0
for j in range(inputdf.shape[0]):
if knn(traindf,inputdf,i)['Type'][j] == knn(traindf,inputdf,i)['Knn $
n += 1
k_ac[i] = '{0:.1%}'.format(n/inputdf.shape[0])
return k_ac
"""
print()
#Try
knn.knnac(data,test,range(1,data.shape[0]))
"""
Explanation: 5. knnac(traindf,inputdf,k): calculate accuracy by different k value.
input: (traindf: training dataframe with a column named 'Type'; inputdf: dataframe which needs to be classified; k: a list of k value)
output: a dictionary with k as the keys and the training accuracy of the test set.
End of explanation
"""
import test_knn as tk
"""
Explanation: Unit tests:
End of explanation
"""
tk.test_dist()
"""
def test_dist():
import knn
vec1 = [0, 0, 0, 1]
vec2 = [0, 0, 0, 0]
assert int(knn.dist(vec1, vec2)) == 1, "Distance calculate not correct."
"""
print()
"""
Explanation: 1.
End of explanation
"""
tk.test_sortdist()
"""
def test_sortdist():
import knn
import pandas as pd
df = pd.DataFrame([[0,3], [0,1], [0,2]])
vec = [0,0]
assert int(knn.sortdist(df,vec)[0][0]) == 1, "Sort not handled properly"
assert int(knn.sortdist(df,vec)[0][1]) == 1, "Index finding not handled properly."
"""
print()
"""
Explanation: 2.
End of explanation
"""
tk.test_classify()
"""
def test_classify():
import knn
import pandas as pd
df = pd.DataFrame([[0,3,'C'], [0,1,'B'], [0,2,'A']],columns = [1,2,'Type'])
vec = [0,0]
#Test if there are multiple group type by a given k.
assert knn.classify(1,df,vec) == 'B', "Group finding not handled properly."
assert knn.classify(2,df,vec) == 'B', "Group finding not handled properly."
assert knn.classify(3,df,vec) == 'B', "Group finding not handled properly."
"""
print()
"""
Explanation: 3.
End of explanation
"""
tk.test_knn()
"""
def test_knn():
import knn
import pandas as pd
df1 = pd.DataFrame([[0,3,'C'], [0,1,'B'], [0,2,'A']],columns = [1,2,'Type'])
df2 = pd.DataFrame([[0,3.1], [0,1.1], [0,2.1]])
assert list(knn.knn(df1,df2,1)['Knn Type']) == ['C', 'B', 'A'], "Knn not handled properly."
assert knn.knn(df1,df2,1).shape[1] == 3, "Column appending not handled properly."
"""
print()
"""
Explanation: 4.
End of explanation
"""
tk.test_knnac()
"""
def test_knnac():
import knn
import pandas as pd
df1 = pd.DataFrame([[0,3,'C'], [0,1,'B'], [0,2,'A']],columns = [1,2,'Type'])
df2 = pd.DataFrame([[0,3.1,'C'], [0,1.1,'B'], [0,2.1,'A']],columns = [1,2,'Type'])
assert len(knn.knnac(df1,df2,[1,2,3])) == 3, "Maybe miss a calculation of a k value."
assert knn.knnac(df1,df2,[1,2,3])[1] == '100.0%', "Accuracy not handled properly."
assert type(knn.knnac(df1,df2,[1,2,3])) == dict, "Dictionary not handled properly"
"""
print()
"""
Explanation: 5.
End of explanation
"""
|
mfouesneau/pyphot | examples/QuickStart.ipynb | mit | %matplotlib inline
import pylab as plt
import numpy as np
import sys
sys.path.append('../')
import pyphot
"""
Explanation: pyphot - A tool for computing photometry from spectra
Some examples are provided in this notebook
Full documentation available at http://mfouesneau.github.io/docs/pyphot/
End of explanation
"""
import pyphot
# get the internal default library of passbands filters
lib = pyphot.get_library()
print("Library contains: ", len(lib), " filters")
# find all filter names that relates to IRAC
# and print some info
f = lib.find('irac')
for name in f:
lib[name].info(show_zeropoints=True)
"""
Explanation: Quick Start
Quick start example to access the library and it's content
End of explanation
"""
# convert to magnitudes
import numpy as np
# We'll use Vega spectrum as example
from pyphot.vega import Vega
vega = Vega()
f = lib['HST_WFC3_F110W']
# compute the integrated flux through the filter f
# note that it work on many spectra at once
fluxes = f.get_flux(vega.wavelength, vega.flux, axis=-1)
# convert to vega magnitudes
mags = -2.5 * np.log10(fluxes) - f.Vega_zero_mag
print("Vega magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes) - f.AB_zero_mag
print("AB magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
mags = -2.5 * np.log10(fluxes) - f.ST_zero_mag
print("ST magnitude of Vega in {0:s} is : {1:f} mag".format(f.name, mags))
"""
Explanation: Suppose one has a calibrated spectrum and wants to compute the vega magnitude throug the HST WFC3 F110W passband,
End of explanation
"""
import pyphot
# define header and table format (as csv)
hdr = ("name", "detector type", "wavelength units",
"central wavelength", "pivot wavelength", "effective wavelength",
"Vega mag", "Vega flux", "Vega Jy",
"AB mag", "AB flux", "AB Jy",
"ST mag", "ST flux", "ST Jy")
fmt = "{0:s},{1:s},{2:s},{3:.3f},{4:.3f},{5:.3f},{6:.5f},{7:.5g},{8:.5g},{9:.5f},{10:.5g},{11:.5g},{12:.5f},{13:.5g},{14:.5g}\n"
l = pyphot.get_library()
with open('table.csv', 'w') as output:
output.write(','.join(hdr) + '\n')
for k in sorted(l.content):
fk = l[k]
rec = (fk.name, fk.dtype, fk.wavelength_unit,
fk.cl.magnitude, fk.lpivot.magnitude, fk.leff.magnitude,
fk.Vega_zero_mag, fk.Vega_zero_flux.magnitude, fk.Vega_zero_Jy.magnitude,
fk.AB_zero_mag, fk.AB_zero_flux.magnitude, fk.AB_zero_Jy.magnitude,
fk.ST_zero_mag, fk.ST_zero_flux.magnitude, fk.ST_zero_Jy.magnitude)
output.write(fmt.format(*rec))
"""
Explanation: Provided Filter library
This section shows the content of the provided library with respective properties of the passband filters. The code to generate the table is also provided in the documentation.
End of explanation
"""
import pandas as pd
df = pd.read_csv('./table.csv')
df.head()
"""
Explanation: Table description
name: the identification name of the filter in the library.
detector type: energy or photon counter.
wavelength units: filter defined with these units and all wavelength properties: central wavelength, pivot wavelength, and effective wavelength.
<X> mag: magnitude in Vega, AB or ST system (w.r.t. the detector type)
<X> flux: flux in $erg/s/cm^2/AA $ in the X system
<X> Jy: flux in $Jy$ (Jansky) in the X system
End of explanation
"""
# convert to magnitudes
import numpy as np
from pyphot import LickLibrary
from pyphot.vega import Vega
vega = Vega()
# using the internal collection of indices
lib = LickLibrary()
f = lib['CN_1']
# work on many spectra at once
index = f.get(vega.wavelength, vega.flux, axis=-1)
print("The index of Vega in {0:s} is {1:f} {2:s}".format(f.name, index, f.index_unit))
"""
Explanation: Extention to Lick indices
We also include functions to compute lick indices and provide a series of commonly use ones.
The Lick system of spectral line indices is one of the most commonly used methods of determining ages and metallicities of unresolved (integrated light) stellar populations.
End of explanation
"""
# define header and table format (as csv)
hdr = ("name", "wavelength units", "index units", "min", "max" "min blue", "max blue", "min red", "max red")
fmt = "{0:s},{1:s},{2:s},{3:.3f},{4:.3f},{5:.3f},{6:.5f},{7:.3f},{8:.3f}\n"
l = pyphot.LickLibrary()
with open('licks_table.csv', 'w') as output:
output.write(','.join(hdr) + '\n')
for k in sorted(l.content):
fk = l[k]
# wavelength have units
band = fk.band.magnitude
blue = fk.blue.magnitude
red = fk.red.magnitude
rec = (fk.name, fk.wavelength_unit, fk.index_unit, band[0], band[1],
blue[0], blue[1], red[0], red[1])
output.write(fmt.format(*rec))
import pandas as pd
df = pd.read_csv('./licks_table.csv')
df.head()
"""
Explanation: Similarly, we show the content of the provided library with respective properties of the passband filters.
The table below is also part of the documentation.
End of explanation
"""
|
karlstroetmann/Artificial-Intelligence | Python/4 Automatic Theorem Proving/Parser.ipynb | gpl-2.0 | import ply.lex as lex
tokens = [ 'NUMBER', 'VAR', 'FCT', 'BACKSLASH' ]
"""
Explanation: A Simple Parser for Term Rewriting
This file implements a parser for terms and equations. It uses the parser generator Ply. To install Ply, change the cell below into a code cell and execute it. If the package ply is already installed, this command will only produce a message that the package is already installed.
!conda install -y -c anaconda ply
Specification of the Scanner
The scanner that is implemented below recognizes numbers, variable names, function names, and various operator symbols. Variable names have to start with a lower case letter, while function names start with an uppercase letter.
End of explanation
"""
def t_NUMBER(t):
r'0|[1-9][0-9]*'
return t
"""
Explanation: The token Number specifies a natural number. Syntactically, numbers are treated a function symbols.
End of explanation
"""
def t_VAR(t):
r'[a-zA-Z][a-zA-Z0-9_]*(?=[^(a-zA-Z0-9_])'
return t
"""
Explanation: Variables start with a letter, followed by letters, digits, and underscores. They must be followed by a character that is not an opening parenthesis (.
End of explanation
"""
def t_FCT(t):
r'[a-zA-Z][a-zA-Z0-9_]*(?=[(])'
return t
def t_BACKSLASH(t):
r'\\'
return t
"""
Explanation: Function names start with a letter, followed by letters, digits, and underscores.
They have to be followed by an opening parenthesis (.
End of explanation
"""
def t_COMMENT(t):
r'//[^\n]*'
t.lexer.lineno += t.value.count('\n')
pass
"""
Explanation: Single line comments are supported and work as in C.
End of explanation
"""
literals = ['+', '-', '*', '/', '\\', '%', '^', '(', ')', ';', '=', ',']
"""
Explanation: The arithmetic operators and a few other symbols are supported.
End of explanation
"""
t_ignore = ' \t\r'
"""
Explanation: White space, i.e. space characters, tabulators, and carriage returns are ignored.
End of explanation
"""
def t_newline(t):
r'\n'
t.lexer.lineno += 1
return
"""
Explanation: Syntactically, newline characters are ignored. However, we still need to keep track of them in order to know which line we are in. This information is needed later for error messages.
End of explanation
"""
def find_column(token):
program = token.lexer.lexdata
line_start = program.rfind('\n', 0, token.lexpos) + 1
return (token.lexpos - line_start) + 1
"""
Explanation: Given a token, the function find_colum returns the column where token starts.
This is possible, because token.lexer.lexdata stores the string that is given to the scanner and token.lexpos is the number of characters that precede token.
End of explanation
"""
def t_error(t):
column = find_column(t)
print(f"Illegal character '{t.value[0]}' in line {t.lineno}, column {column}.")
t.lexer.skip(1)
"""
Explanation: The function t_error is called for any token t that can not be scanned by the lexer. In this case, t.value[0] is the first character that can not be recognized by the scanner.
End of explanation
"""
__file__ = 'main'
lexer = lex.lex()
def test_scanner(file_name):
with open(file_name, 'r') as handle:
program = handle.read()
print(program)
lexer.input(program)
lexer.lineno = 1
return [t for t in lexer]
"""
Explanation: The next assignment is necessary to make the lexer think that the code given above is part of some file.
End of explanation
"""
import ply.yacc as yacc
"""
Explanation: for t in test_scanner('Examples/quasigroup.eqn'):
print(t)
Specification of the Parser
We will use the following grammar to specify the language that our compiler can translate:
```
axioms
: equation
| axioms equation
;
equation
: term '=' term
;
term: term '+' term
| term '-' term
| term '*' term
| term '/' term
| term '\' term
| term '%' term
| term '^' term
| '(' term ')'
| FCT '(' term_list ')'
| FCT
| VAR
;
term_list
: / epsilon /
| term
| term ',' ne_term_list
;
ne_term_list
: term
| term ',' ne_term_list
;
```
We will use precedence declarations to resolve the ambiguity that is inherent in this grammar.
End of explanation
"""
start = 'axioms'
precedence = (
('nonassoc', '='),
('left', '+', '-'),
('left', '*', '/', 'BACKSLASH', '%'),
('right', '^')
)
def p_axioms_one(p):
"axioms : equation"
p[0] = ('axioms', p[1])
def p_axioms_more(p):
"axioms : axioms equation"
p[0] = p[1] + (p[2],)
def p_equation(p):
"equation : term '=' term ';'"
p[0] = ('=', p[1], p[3])
def p_term_plus(p):
"term : term '+' term"
p[0] = ('+', p[1], p[3])
def p_term_minus(p):
"term : term '-' term"
p[0] = ('-', p[1], p[3])
def p_term_times(p):
"term : term '*' term"
p[0] = ('*', p[1], p[3])
def p_term_divide(p):
"term : term '/' term"
p[0] = ('/', p[1], p[3])
def p_term_backslash(p):
"term : term BACKSLASH term"
p[0] = ('\\', p[1], p[3])
def p_term_modulo(p):
"term : term '%' term"
p[0] = ('%', p[1], p[3])
def p_term_power(p):
"term : term '^' term"
p[0] = ('^', p[1], p[3])
def p_term_group(p):
"term : '(' term ')'"
p[0] = p[2]
def p_term_fct_call(p):
"term : FCT '(' term_list ')'"
p[0] = (p[1],) + p[3][1:]
def p_term_number(p):
"term : NUMBER"
p[0] = (p[1],)
def p_term_id(p):
"term : VAR"
p[0] = ('$var', p[1])
def p_term_list_empty(p):
"term_list :"
p[0] = ('.',)
def p_term_list_one(p):
"term_list : term"
p[0] = ('.', p[1])
def p_term_list_more(p):
"term_list : term ',' ne_term_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_ne_term_list_one(p):
"ne_term_list : term"
p[0] = ('.', p[1])
def p_ne_term_list_more(p):
"ne_term_list : term ',' ne_term_list"
p[0] = ('.', p[1]) + p[3][1:]
def p_error(p):
if p:
column = find_column(p)
print(f'Syntax error at token "{p.value}" in line {p.lineno}, column {column}.')
else:
print('Syntax error at end of input.')
"""
Explanation: The start variable of our grammar is axioms.
End of explanation
"""
parser = yacc.yacc(write_tables=False, debug=True)
"""
Explanation: Setting the optional argument write_tables to False is required to prevent an obscure bug where the parser generator tries to read an empty parse table. As we have used precedence declarations to resolve all shift/reduce conflicts, the action table should contain no conflict.
End of explanation
"""
%run AST-2-Dot.ipynb
"""
Explanation: !cat parser.out
The notebook AST-2-Dot.ipynb provides the function tuple2dot. This function can be used to visualize the abstract syntax tree that is generated by the function yacc.parse.
End of explanation
"""
def test_parse(file_name):
lexer.lineno = 1
with open(file_name, 'r') as handle:
program = handle.read()
ast = yacc.parse(program)
print(ast)
return tuple2dot(ast)
"""
Explanation: The function parse takes a file_name as its sole argument. The file is read and parsed.
The resulting parse tree is visualized using graphviz. It is important to reset the
attribute lineno of the scanner, for otherwise error messages will not have the correct line numbers.
End of explanation
"""
def parse_file(file_name):
lexer.lineno = 1
with open(file_name, 'r') as handle:
program = handle.read()
AST = yacc.parse(program)
if AST:
_, *L = AST
return L
return None
"""
Explanation: !cat Examples/quasigroup.eqn
test_parse('Examples/quasigroup.eqn')
The function indent is used to indent the generated assembler commands by preceding them with 8 space characters.
End of explanation
"""
def parse_equation(s):
lexer.lineno = 1
AST = yacc.parse(s + ';')
if AST:
_, *L = AST
return L[0]
return None
"""
Explanation: parse_file('Examples/group-theory.eqn')
End of explanation
"""
def parse_term(s):
lexer.lineno = 1
AST = yacc.parse(s + '= 1;')
if AST:
_, *L = AST
return L[0][1]
return None
"""
Explanation: parse_equation('i(x) * x = 1')
End of explanation
"""
def to_str(t):
if isinstance(t, set):
return '{' + ', '.join({ f'{to_str(eq)}' for eq in t }) + '}'
if isinstance(t, list):
return '[' + ', '.join([ f'{to_str(eq)}' for eq in t ]) + ']'
if isinstance(t, dict):
return '{' + ', '.join({ f'{k}: {to_str(v)}' for k, v in t.items() }) + '}'
if isinstance(t, str):
return t
if t[0] == '$var':
return t[1]
if len(t) == 3 and t[0] in ['=']:
_, lhs, rhs = t
return f'{to_str(lhs)} = {to_str(rhs)}'
if t[0] == '\\':
op, lhs, rhs = t
return to_str_paren(lhs) + ' \\ ' + to_str_paren(rhs)
if len(t) == 3 and t[0] in ['+', '-', '*', '/', '%', '^']:
op, lhs, rhs = t
return f'{to_str_paren(lhs)} {op} {to_str_paren(rhs)}'
f, *Args = t
if Args == []:
return f
return f'{f}({to_str_list(Args)})'
def to_str_paren(t):
if isinstance(t, str):
return t
if t[0] == '$var':
return t[1]
if len(t) == 3:
op, lhs, rhs = t
return f'({to_str_paren(lhs)} {op} {to_str_paren(rhs)})'
f, *Args = t
if Args == []:
return f
return f'{f}({to_str_list(Args)})'
def to_str_list(TL):
if TL == []:
return ''
t, *Ts = TL
if Ts == []:
return f'{to_str(t)}'
return f'{to_str(t)}, {to_str_list(Ts)}'
"""
Explanation: parse_term('i(x) * x')
End of explanation
"""
|
jenshnielsen/HJCFIT | exploration/OpenMP_example.ipynb | gpl-3.0 | import os
os.environ['OMP_NUM_THREADS'] = '4'
"""
Explanation: OpenMP example
In this example we illustrate how OpenMP can be used to speedup the calculation of the likelihood.
First we set the number of openmp threads. This is done via an environmental variable called OMP_NUM_THREADS. In this example we set the value of the variable from Python but typically this will be done directly in a shell script before running the example i.e. something like:
export OMP_NUM_THREADS=4
python script.py
End of explanation
"""
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sys, time, math
import numpy as np
from numpy import linalg as nplin
"""
Explanation: Note that only the value of OMP_NUM_THREADS at import time infulences the execution. To experiment with OpenMP restart the notebook kernel, change the value in the cell above reexecute. You should see the time of execution change in the last cell.
Some general settings:
End of explanation
"""
from dcpyps.samples import samples
from dcpyps import dataset, mechanism, dcplots, dcio
# LOAD DATA: Burzomato 2004 example set.
scnfiles = [["./samples/glydemo/A-10.scn"],
["./samples/glydemo/B-30.scn"],
["./samples/glydemo/C-100.scn"],
["./samples/glydemo/D-1000.scn"]]
tr = [0.000030, 0.000030, 0.000030, 0.000030]
tc = [0.004, -1, -0.06, -0.02]
conc = [10e-6, 30e-6, 100e-6, 1000e-6]
"""
Explanation: Load data
HJCFIT depends on DCPROGS/DCPYPS module for data input and setting kinetic mechanism:
End of explanation
"""
# Initaialise SCRecord instance.
recs = []
bursts = []
for i in range(len(scnfiles)):
rec = dataset.SCRecord(scnfiles[i], conc[i], tr[i], tc[i])
recs.append(rec)
bursts.append(rec.bursts.intervals())
"""
Explanation: Initialise Single-Channel Record from dcpyps. Note that SCRecord takes a list of file names; several SCN files from the same patch can be loaded.
End of explanation
"""
# LOAD FLIP MECHANISM USED in Burzomato et al 2004
mecfn = "./samples/mec/demomec.mec"
version, meclist, max_mecnum = dcio.mec_get_list(mecfn)
mec = dcio.mec_load(mecfn, meclist[2][0])
# PREPARE RATE CONSTANTS.
# Fixed rates.
#fixed = np.array([False, False, False, False, False, False, False, True,
# False, False, False, False, False, False])
for i in range(len(mec.Rates)):
mec.Rates[i].fixed = False
# Constrained rates.
mec.Rates[21].is_constrained = True
mec.Rates[21].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[21].constrain_args = [17, 3]
mec.Rates[19].is_constrained = True
mec.Rates[19].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[19].constrain_args = [17, 2]
mec.Rates[16].is_constrained = True
mec.Rates[16].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[16].constrain_args = [20, 3]
mec.Rates[18].is_constrained = True
mec.Rates[18].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[18].constrain_args = [20, 2]
mec.Rates[8].is_constrained = True
mec.Rates[8].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[8].constrain_args = [12, 1.5]
mec.Rates[13].is_constrained = True
mec.Rates[13].constrain_func = mechanism.constrain_rate_multiple
mec.Rates[13].constrain_args = [9, 2]
mec.update_constrains()
# Rates constrained by microscopic reversibility
mec.set_mr(True, 7, 0)
mec.set_mr(True, 14, 1)
# Update constrains
mec.update_constrains()
#Propose initial guesses different from recorded ones
initial_guesses = [5000.0, 500.0, 2700.0, 2000.0, 800.0, 15000.0, 300.0, 120000, 6000.0,
0.45E+09, 1500.0, 12000.0, 4000.0, 0.9E+09, 7500.0, 1200.0, 3000.0,
0.45E+07, 2000.0, 0.9E+07, 1000, 0.135E+08]
mec.set_rateconstants(initial_guesses)
mec.update_constrains()
"""
Explanation: Load demo mechanism (C&H82 numerical example)
End of explanation
"""
def dcprogslik(x, lik, m, c):
m.theta_unsqueeze(np.exp(x))
l = 0
for i in range(len(c)):
m.set_eff('c', c[i])
l += lik[i](m.Q)
return -l * math.log(10)
# Import HJCFIT likelihood function
from dcprogs.likelihood import Log10Likelihood
kwargs = {'nmax': 2, 'xtol': 1e-12, 'rtol': 1e-12, 'itermax': 100,
'lower_bound': -1e6, 'upper_bound': 0}
likelihood = []
for i in range(len(recs)):
likelihood.append(Log10Likelihood(bursts[i], mec.kA,
recs[i].tres, recs[i].tcrit, **kwargs))
theta = mec.theta()
"""
Explanation: Prepare likelihood function
End of explanation
"""
%timeit dcprogslik(np.log(theta), likelihood, mec, conc)
"""
Explanation: Time evaluation of likelihood function
End of explanation
"""
|
manipopopo/tensorflow | tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb | apache-2.0 | !pip install unidecode
"""
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Text Generation using a RNN
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table>
This notebook demonstrates how to generate text using an RNN using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level impementation that's useful to understand as prework before diving in to deeper examples in a similar, like Neural Machine Translation with Attention.
This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. The notebook will train a model, and use it to generate sample output.
Here is the output(with start string='w') after training a single layer GRU for 30 epochs with the default settings below:
```
were to the death of him
And nothing of the field in the view of hell,
When I said, banish him, I will not burn thee that would live.
HENRY BOLINGBROKE:
My gracious uncle--
DUKE OF YORK:
As much disgraced to the court, the gods them speak,
And now in peace himself excuse thee in the world.
HORTENSIO:
Madam, 'tis not the cause of the counterfeit of the earth,
And leave me to the sun that set them on the earth
And leave the world and are revenged for thee.
GLOUCESTER:
I would they were talking with the very name of means
To make a puppet of a guest, and therefore, good Grumio,
Nor arm'd to prison, o' the clouds, of the whole field,
With the admire
With the feeding of thy chair, and we have heard it so,
I thank you, sir, he is a visor friendship with your silly your bed.
SAMPSON:
I do desire to live, I pray: some stand of the minds, make thee remedies
With the enemies of my soul.
MENENIUS:
I'll keep the cause of my mistress.
POLIXENES:
My brother Marcius!
Second Servant:
Will't ple
```
Of course, while some of the sentences are grammatical, most do not make sense. But, consider:
Our model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text).
The structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text). Sentences generally end with a period. If you look at the text from a distance (or don't read the invididual words too closely, it appears as if it's an excerpt from a play).
As a next step, you can experiment training the model on a different dataset - any large text file(ASCII) will do, and you can modify a single line of code below to make that change. Have fun!
Install unidecode library
A helpful library to convert unicode to ASCII.
End of explanation
"""
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
# Note: Once you enable eager execution, it cannot be disabled.
tf.enable_eager_execution()
import numpy as np
import re
import random
import unidecode
import time
"""
Explanation: Import tensorflow and enable eager execution.
End of explanation
"""
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
"""
Explanation: Download the dataset
In this example, we will use the shakespeare dataset. You can use any other dataset that you like.
End of explanation
"""
text = unidecode.unidecode(open(path_to_file).read())
# length of text is the number of characters in it
print (len(text))
"""
Explanation: Read the dataset
End of explanation
"""
# unique contains all the unique characters in the file
unique = sorted(set(text))
# creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(unique)}
idx2char = {i:u for i, u in enumerate(unique)}
# setting the maximum length sentence we want for a single input in characters
max_length = 100
# length of the vocabulary in chars
vocab_size = len(unique)
# the embedding dimension
embedding_dim = 256
# number of RNN (here GRU) units
units = 1024
# batch size
BATCH_SIZE = 64
# buffer size to shuffle our dataset
BUFFER_SIZE = 10000
"""
Explanation: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs
End of explanation
"""
input_text = []
target_text = []
for f in range(0, len(text)-max_length, max_length):
inps = text[f:f+max_length]
targ = text[f+1:f+1+max_length]
input_text.append([char2idx[i] for i in inps])
target_text.append([char2idx[t] for t in targ])
print (np.array(input_text).shape)
print (np.array(target_text).shape)
"""
Explanation: Creating the input and output tensors
Vectorizing the input and the target text because our model cannot understand strings only numbers.
But first, we need to create the input and output vectors.
Remember the max_length we set above, we will use it here. We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first.
For example, consider that the string = 'tensorflow' and the max_length is 9
So, the input = 'tensorflo' and output = 'ensorflow'
After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above.
End of explanation
"""
dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
"""
Explanation: Creating batches and shuffling them using tf.data
End of explanation
"""
class Model(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, units, batch_size):
super(Model, self).__init__()
self.units = units
self.batch_sz = batch_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
if tf.test.is_gpu_available():
self.gru = tf.keras.layers.CuDNNGRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
x = self.embedding(x)
# output shape == (batch_size, max_length, hidden_size)
# states shape == (batch_size, hidden_size)
# states variable to preserve the state of the model
# this will be used to pass at every step to the model while training
output, states = self.gru(x, initial_state=hidden)
# reshaping the output so that we can pass it to the Dense layer
# after reshaping the shape is (batch_size * max_length, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# The dense layer will output predictions for every time_steps(max_length)
# output shape after the dense layer == (max_length * batch_size, vocab_size)
x = self.fc(output)
return x, states
"""
Explanation: Creating the model
We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model.
Embedding layer
GRU layer (you can use an LSTM layer here)
Fully connected layer
End of explanation
"""
model = Model(vocab_size, embedding_dim, units, BATCH_SIZE)
optimizer = tf.train.AdamOptimizer()
# using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors
def loss_function(real, preds):
return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds)
"""
Explanation: Call the model and set the optimizer and the loss function
End of explanation
"""
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
model=model)
"""
Explanation: Checkpoints (Object-based saving)
End of explanation
"""
# Training step
EPOCHS = 30
for epoch in range(EPOCHS):
start = time.time()
# initializing the hidden state at the start of every epoch
hidden = model.reset_states()
for (batch, (inp, target)) in enumerate(dataset):
with tf.GradientTape() as tape:
# feeding the hidden state back into the model
# This is the interesting step
predictions, hidden = model(inp, hidden)
# reshaping the target because that's how the
# loss function expects it
target = tf.reshape(target, (-1,))
loss = loss_function(target, predictions)
grads = tape.gradient(loss, model.variables)
optimizer.apply_gradients(zip(grads, model.variables))
if batch % 100 == 0:
print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1,
batch,
loss))
# saving (checkpoint) the model every 5 epochs
if epoch % 5 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
"""
Explanation: Train the model
Here we will use a custom training loop with the help of GradientTape()
We initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model.
Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input.
There are a lot of interesting things happening here.
The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0.
The model then returns the predictions P1 and H1.
For the next batch of input, the model receives I1 and H1.
The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state.
We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this.
After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input)
Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function.
Note:- If you are running this notebook in Colab which has a Tesla K80 GPU it takes about 23 seconds per epoch.
End of explanation
"""
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: Restore the latest checkpoint
End of explanation
"""
# Evaluation step(generating text using the model learned)
# number of characters to generate
num_generate = 1000
# You can change the start string to experiment
start_string = 'Q'
# converting our start string to numbers(vectorizing!)
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)
# empty string to store our results
text_generated = ''
# low temperatures results in more predictable text.
# higher temperatures results in more surprising text
# experiment to find the best setting
temperature = 1.0
# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
predictions, hidden = model(input_eval, hidden)
# using a multinomial distribution to predict the word returned by the model
predictions = predictions / temperature
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
# We pass the predicted word as the next input to the model
# along with the previous hidden state
input_eval = tf.expand_dims([predicted_id], 0)
text_generated += idx2char[predicted_id]
print (start_string + text_generated)
"""
Explanation: Predicting using our trained model
The below code block is used to generated the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use a multinomial distribution to calculate the index of the predicted word. We use this predicted word as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one word. After we predict the next word, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted words.
If you see the predictions, the model knows when to capitalize, make paragraphs and the text follows a shakespeare style of writing which is pretty awesome!
End of explanation
"""
|
jmfranck/pyspecdata | docs/_downloads/871a80cfd7a71c1edf9cefede4305190/matrix_mult.ipynb | bsd-3-clause | # -*- coding: utf-8 -*-
from pylab import *
from pyspecdata import *
from numpy.random import random
import time
init_logging('debug')
"""
Explanation: Matrix Multiplication
Various ways of implementing different matrix multiplications.
Read the documentation embedded in the code.
End of explanation
"""
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
"""
Explanation: In this example, the assertions essentially tell the story of what's going on
Note that in all these examples, the pyspecdata version appears more
complicated.
But, that's because these are toy examples, where we have no need for the
dimension names or axes.
Nonetheless, we wanted to give the simplest working example possible.
First, we demonstrate matrix multiplication
for all the below, I attach an axis to make sure the routines work with the
axes attached
End of explanation
"""
a2_nd = nddata(random(10*2048),[2048,10],['y','z']).setaxis('y','#').setaxis('z','#')
a2 = a2_nd.data
# multiply two different matrices
time1 = time.time()
b = a @ a2
time2 = time.time()
b_nd = a_nd @ a2_nd
"""
Explanation: in the next line, note how only the dimension that goes away is named the
same!
if you think about the matrix a transforming from one vector space (labeled
y) to another (labeled x) this makes sense
End of explanation
"""
time3 = time.time()
assert b_nd.dimlabels == ['x','z'], b_nd.dimlabels
assert all(isclose(b,b_nd.data))
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1
"""
Explanation: the previous is unambiguous b/c only 'y' is shared between the two,
but I can do the following for clarity:
b_nd = a_nd.along('y') @ a2_nd
Note that "along" gives the dimension along which the sum is performed -- and
so this dimension goes away upon matrix multiplication.
If only one dimension is shared between the matrices, then we know to take
the sum along the shared dimension.
For example, here a2_nd transforms from a space called "z" into a space called "y",
while a_nd transforms from "y" into "x" -- so it's obvious that a_nd @ a2_nd should
transform from "z" into "y".
End of explanation
"""
time1 = time.time()
b = a @ a.T
time2 = time.time()
"""
Explanation: calculate a projection matrix
End of explanation
"""
b_nd = a_nd.along('y',('x','x_new')) @ a_nd
time3 = time.time()
assert b_nd.dimlabels == ['x_new','x'], b_nd.dimlabels
assert all(b_nd.getaxis('x_new') == b_nd.getaxis('x'))
assert (id(b_nd.getaxis('x_new')) != id(b_nd.getaxis('x')))
assert all(isclose(b,b_nd.data))
if time2-time1>0:
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1.1
"""
Explanation: note that here, I have to rename the column space
End of explanation
"""
a_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
b_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
a = a_nd.data
b = b_nd.data
assert all(isclose(a.dot(b),(a_nd @ b_nd).data))
"""
Explanation: now, a standard dot product note how I don't need along here, since it's
unambiguous
End of explanation
"""
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
b_nd = a_nd.along('y') @ a_nd
b = matmul(a_nd.data.reshape(10,1,2048),
a_nd.data.reshape(10,2048,1)).reshape(-1)
assert all(isclose(b,b_nd.data))
assert len(b.data) == 10
"""
Explanation: Finally, let's show what happens when we multiply a matrix by itself and
don't rename one of the dimensions
By doing this, we indicate that we're not interested in transforming from one
vector space to another (as a projection matrix does), but rather just have
two sets of vectors and are interested in finding the dot products between
the two sets
This will take the dot product of our 10 2048-long vectors, and present them
10-long array
End of explanation
"""
|
computational-class/cjc2016 | code/sympy.ipynb | mit | sy.integrate(6*x**5, x)
sy.integrate(x**3, (x, 0, 10)) #定积分
sy.integrate(6*x**5+y, x,y) #双重不定积分
sy.integrate(x**3+y, (x, -1, 1),(y,1,3) ) #双重定积分
"""
Explanation: 表达式变换
sy.exp(sy.I*x).expand(complex=True) # 展开为复数, 可理解为将x当作复数处理 表达式展开
sy.cos(x).series(x, 0, 10) # .series(var, point, order) 泰勒级数展开
sy.simplify(expr) # 对数学表达式进行化简 化简
sy.radsimp(expr) # 对表达式的分母进行有理化 分母进行有理化
它所得到的表达式的分母部分将不含无理数。即将表达式转换为分子除分母的形式。
sy.ratsimp(expr) # 对表达式的分母进行通分运算 通分运算
sy.cancel(expr) # 对表达式的分子分母进行约分运算 约分运算
只能对纯符号的分式表达式进行约分
不能对函数内部的表达式进行约分,不能对带函数的表达式进行约分
sy.trim(expr) # 对表达式的分子分母进行约分运算 约分运算
可以对表达式中的任意部分进行约分,并且支持函数表达式的约分运算。
sy.apart(expr) # 对表达式进行部分分式分解 部分分式分解
sy.factor(expr) # 对表达式进行因式分解 因式分解
sy.together(1/x+1/y) # 代数式的合并 代数式的合并
sy.trigsimp(expr) # 对表达式中的三角函数进行化简 三角函数化简
有两个可选参数——deep和recursive,默认值都为False。
当deep参数为True时,将对表达式中所有子表达式进行化简运算
当recursive参数为True时,将递归使用trigsimp()进行最大限度的化简
sy.expand_trig(expr) # 展开三角函数 三角函数展开
积分
End of explanation
"""
print f(x).diff(x)
sy.diff(sy.sin(x), x) # 解析微分
sy.diff(sy.sin(2*x), x, 2) # 高阶微分 diff(func, var, n)
sy.diff(sy.sin(x*y),x,2,y,3) # 对表达式进行x的2阶求导,对y进行3阶求导
sy.diff(sy.sin(2*x),x)
sy.sin(2*x).diff(x,1)
t=sy.Derivative(sy.sin(x),x)
# Derivative是表示导函数的类,它的第一个参数是需要进行求导的数学函数,第二个参数是求导的自变量,
# 注意:Derivative所得到的是一个导函数,它不会进行求导运算。
"""
Explanation: 微分
End of explanation
"""
sy.limit(sy.sin(x)/x, x, 0) # 极限
"""
Explanation: 极限
End of explanation
"""
sy.summation(2*x - 1, (x, 1, y)) # 连加求和
"""
Explanation: 求和
End of explanation
"""
(1 + x*y).subs(x, sy.pi)
"""
Explanation: 替换
expression.subs(x, y) # 将算式中的x替换成y 替换
End of explanation
"""
(1 + x*y).subs([(x, sy.pi), (y, 2)])
"""
Explanation: expression.subs({x:y,u:v}) 使用字典进行多次替换
expression.subs([(x,y),(u,v)]) 使用列表进行多次替换
End of explanation
"""
# 用sy.Eq构建方程
u_max, rho_max, A, B = sy.symbols('u_max rho_max A B') # 符号对象, 可限定 real = True , positive = True, complex = True , integer = True ...
eq1 = sy.Eq( 0, u_max*rho_max*(1 - A*rho_max-B*rho_max**2) ) # 一般方程
eq1 = sy.Eq(f(x).diff(x)+f(x)+f(x)**2,0) # 微分方程
print eq1
"""
Explanation: 方程
方程构建: 任何表达式都可以直接表示值为0的方程,即表达式 expr 也是方程 expr=0
End of explanation
"""
sy.solve(eq1,x)
sy.solveset(sy.sin(x)+x,x) # 即对 sy.sin(x)+x=0 求解
"""
Explanation: sy.solve(方程,未知数) # 方程求解 ,返回值为list 方程求解
它的第一个参数是表示方程的表达式,其后的参数是表示方程中未知量的符号。
End of explanation
"""
sy.dsolve(eq1,f(x)) # 微分方程求解 微分方程求解
"""
Explanation: sy.roots(expr) # 计算单变量方程expr=0的根 方程求解
End of explanation
"""
from sympy.utilities.lambdify import lambdify
lambdify([x],f(x),modules="numpy") # 第一个参数为自变量列表
sy.N(sy.pi,20)
sy.pi.evalf(n=50)
i=sy.Symbol('i')
f = sy.Sum(sy.Indexed('x',i)*sy.cos(i*sy.pi),(i,1,10))
f
func = lambdify(x, f, 'numpy') # returns a numpy-ready function
print func
numpy_array_of_results = func(y)
numpy_array_of_results
"""
Explanation: 可以对微分方程进行符号求解,它的第一个参数是一个带未知函数的表达式,第二个参数是需要进行求解的未知函数。
它在解微分方程中可以传递hint参数,指定微分方程的解法,若设置为“best”则放dsolve()尝试所有的方法并返回最简单的解。
End of explanation
"""
|
ageron/tensorflow-safari-course | 01_basics_phases_ex1.ipynb | apache-2.0 | from __future__ import absolute_import, division, print_function, unicode_literals
"""
Explanation: Try not to peek at the solutions when you go through the exercises. ;-)
First let's make sure this notebook works well in both Python 2 and Python 3:
End of explanation
"""
import tensorflow as tf
tf.__version__
"""
Explanation: TensorFlow basics
End of explanation
"""
>>> a = tf.constant(3)
>>> b = tf.constant(5)
>>> s = a + b
a
b
s
tf.get_default_graph()
>>> graph = tf.Graph()
>>> with graph.as_default():
... a = tf.constant(3)
... b = tf.constant(5)
... s = a + b
...
"""
Explanation: Construction Phase
End of explanation
"""
>>> with tf.Session(graph=graph) as sess:
... result = s.eval()
...
>>> result
>>> with tf.Session(graph=graph) as sess:
... result = sess.run(s)
...
>>> result
>>> with tf.Session(graph=graph) as sess:
... result = sess.run([a,b,s])
...
>>> result
"""
Explanation: Execution Phase
End of explanation
"""
import numpy as np
from IPython.display import display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = b"<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def=None, max_const_size=32):
"""Visualize TensorFlow graph."""
graph_def = graph_def or tf.get_default_graph()
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
"""
Explanation: Exercise 1
1.1) Create a simple graph that calculates $ c = \exp(\sqrt 8 + 3) $.
Tip: TensorFlow's API documentation is available at:
https://www.tensorflow.org/versions/master/api_docs/python/
1.2) Now create a Session() and evaluate the operation that gives you the result of the equation above:
1.3) Create a graph that evaluates and prints both $ b = \sqrt 8 $ and $ c = \exp(\sqrt 8 + 3) $. Try to implement this in a way that only evaluates $ \sqrt 8 $ once.
1.4) The following code is needed to display TensorFlow graphs in Jupyter. Just run this cell then visualize your graph by calling show_graph(your graph):
End of explanation
"""
graph = tf.Graph()
with graph.as_default():
c = tf.exp(tf.add(tf.sqrt(tf.constant(8.)), tf.constant(3.)))
# or simply...
c = tf.exp(tf.sqrt(8.) + 3.)
"""
Explanation: Try not to peek at the solution below before you have done the exercise! :)
Exercise 1 - Solution
1.1)
End of explanation
"""
with tf.Session(graph=graph):
c_val = c.eval()
c_val
"""
Explanation: 1.2)
End of explanation
"""
graph = tf.Graph()
with graph.as_default():
b = tf.sqrt(8.)
c = tf.exp(b + 3)
with tf.Session(graph=graph) as sess:
b_val, c_val = sess.run([b, c])
b_val
c_val
"""
Explanation: 1.3)
End of explanation
"""
# WRONG!
with tf.Session(graph=graph):
b_val = b.eval() # evaluates b
c_val = c.eval() # evaluates c, which means evaluating b again!
b_val
c_val
"""
Explanation: Important: the following implementation gives the right result, but it runs the graph twice, once to evaluate b, and once to evaluate c. Since c depends on b, it means that b will be evaluated twice. Not what we wanted.
End of explanation
"""
show_graph(graph)
"""
Explanation: 1.4)
End of explanation
"""
|
omaraltaher/kaggle-pizza-project | Random_Acts_of_Pizza_Kaggle_Competition_Project.ipynb | mit | # This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# General libraries.
import json
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
# SK-learn libraries for learning.
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV #update module model_selection
from sklearn.svm import SVC
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
# SK-learn libraries for evaluation.
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn import preprocessing
from sklearn.mixture import GMM
# SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import *
"""
Explanation: W207 Summer 2017 Final Project iPython notebook
Omar Al Taher, Ted Pham, Chris SanChez
End of explanation
"""
#load json training data into pandas dataframe
df = pd.read_json('train.json')
df.info()
"""
Explanation: I. Background:
This project aims to predict whether a reddit post asking for pizzas would get funded. Since it's a binary classification problem, we will explore several algorithms with a focus on logistic regression. In particular, we will look into details how to extract features from text.
II. Data Pre-Processing:
The data in its raw form consists of 4040 observations of 31 features. The original columns consist of 19 integer values, 4 floats, and 8 objects (there is one boolean column which is the outcome variable). In order to extract predictive value from the dataset a good deal of pre-processing and feature engineering was required. A walkthrough of the various steps taken follows below in a narrative format:
End of explanation
"""
df['requester_received_pizza'] = np.where(df['requester_received_pizza'] == True, 1, 0)
df['requester_received_pizza'].value_counts()
"""
Explanation: We'll start by transforming the outcome variable into a binary variable.
End of explanation
"""
good_indexes = []
for i, name in enumerate(df.columns):
if re.findall('retrieval', name):
pass
else:
good_indexes.append(i)
# Remove at_retrieval fields from dataframce df
columns = df.columns[good_indexes]
df = df.loc[:,columns]
"""
Explanation: Next, we'll remove all the "_at_retrieval" columns from the dataset as they are not found in the test data set and therefore represent data that is not avaialable at the time of the request for pizza.
End of explanation
"""
#Drop six more columns from dataset
df.drop(['giver_username_if_known', 'post_was_edited', 'request_id',
'requester_user_flair', 'requester_username'], axis=1, inplace=True)
df.info()
"""
Explanation: We also found that there were several other columns that were not needed for predictive power, and we therefore removed them as well.
End of explanation
"""
#Show that 104 observations have a blank "request_text" field
len(df[df['request_text'].str.len() == 0])
#1. Combine request_text and request_title fields
#2. Lowercase all words
df['request_text_n_title'] = (df['request_title'] + ' ' + df['request_text_edit_aware'])
df['request_text_n_title'] = [ text.split(" ",1)[1].lower() for text in df['request_text_n_title']]
print df['request_text_n_title'].head()
#3. Add a total length feature to the dataset
df['total_length'] =df['request_text_n_title'].apply(lambda x: len(x.split(' ')))
#4. Ensure there are no zero length requests in the new feature/column
print '\nAfter combining request_title and request_text, number of requests with length of \
"zero": {}'.format(len(df[df['request_text_n_title'] == 0]))
#Showcase of new added features to dataset
df.loc[:5,['request_text_n_title', 'total_length']]
"""
Explanation: Removing the "at_retrieval" columns and the other six columns, reduces our dataset by 17 total features (leaving us with 14 features, not including the outcome variable ).
During the EDA phase of this project we found that several (104 to be exact), observations had a "request_text" length of zero, some of these observations actually ended up being given a pizza. After looking through the data, we discovered that some people had left their request in the "request_title" field of the RAOP Reddit page and had left their request_text field blank. In order to clean this discrepancy up, we decided to combine these two fields together, as it is unclear if the benefactors (those who ended up giving pizzas away), were responding to the '"request_title" field, the "request_text" field, or both when they made their altruistic decision.
End of explanation
"""
np.random.seed(0)
#separate into features and labels
text_features = df['request_text_n_title'].values # text features
success_rate = sum(df['requester_received_pizza'])/4040.
print "Original success rate: {}".format(round(success_rate, 4))
# Create target field for received pizza are only 1's and 0's
target = df['requester_received_pizza'].values
#shuffle our data to ensure randomization
shuffle = np.random.permutation(len(text_features))
text_features, target = text_features[shuffle], target[shuffle]
#separate into training and dev groups
train_data, train_labels = text_features[:3200], target[:3200]
dev_data, dev_labels = text_features[3200:], target[3200:]
#check to ensure success rate is roughly preserved across sets
train_success_rate = sum(train_labels)/3200.
dev_success_rate = sum(dev_labels)/840.
print "Training success rate: {}".format(round(train_success_rate, 4))
print "Dev success rate: {}".format(round(dev_success_rate, 4))
#check to ensure we've got the right datasets
print '\n\nTraining Data shape: \t{}'.format(train_data.shape)
print 'Training Labels shape: \t{}'.format(train_labels.shape)
print 'Dev data shape: \t{}'.format(dev_data.shape)
print 'Dev Labels shape: \t{}'.format(dev_labels.shape)
"""
Explanation: III. Baseline Modeling:
With pre-processing out of the way, we decided to run a baseline text analysis model using the "request_text_n_title" feature alone. This model serves as our basis against which we can compare the success or failure of all futue feature engineering efforts.
IIIa. Train and Dev set creation
We begin by forming our training and dev sets. Given that the success rate for receiving a pizza in the original dataset is ~25%, we'll check our randomized training and dev sets to ensure a similar success rate is preserved.
End of explanation
"""
#fit logistic classifier to training data
print "BASELINE LOGISTIC REGRESSION"
print "----------------------------"
vec = TfidfVectorizer()
train_matrix = vec.fit_transform(train_data)
dev_matrix = vec.transform(dev_data)
lr= LogisticRegression(n_jobs=-1, class_weight='balanced').fit(train_matrix, train_labels)
predictions = lr.predict(dev_matrix)
score = round(roc_auc_score(dev_labels, predictions, average='weighted'), 4)
print "Baseline ROC AUC score: {}".format(score)
print "\n\nRESTRICTED FEATURES LOGISTIC REGRESSION"
print "---------------------------------------"
model_scores = []
range_features = np.arange(385,400)
for features in range_features:
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, ngram_range=(1,1),max_features=features)
train_matrix = vec.fit_transform(train_data)
dev_matrix = vec.transform(dev_data)
lr= LogisticRegression(n_jobs=-1, class_weight='balanced').fit(train_matrix, train_labels)
predictions = lr.predict(dev_matrix)
model_scores.append(round(metrics.roc_auc_score(dev_labels, predictions, average = 'weighted'), 4))
best_score = round(max(model_scores), 4)
print "Max number of features: {}".format(range_features[np.argmax(model_scores)])
print "Best ROC AUC Score: {}".format(best_score)
#best_max_feature
best_max_feature = range_features[np.argmax(model_scores)]
#fit logistic classifier to training data
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, ngram_range=(1,1), max_features=best_max_feature)
train_matrix = vec.fit_transform(train_data)
dev_matrix = vec.transform(dev_data)
print "\n"
print "LOGISTIC REGRESSION with tuning c value and restricted # of features"
print "--------------------------------------------------------------------"
c_values = np.logspace(.0001, 2, 200)
c_scores = []
for value in c_values:
lr = LogisticRegression(C=value, n_jobs=-1, class_weight='balanced', penalty='l2')
lr.fit(train_matrix, train_labels)
predictions = lr.predict(dev_matrix)
c_scores.append(round(metrics.roc_auc_score(dev_labels, predictions, average = 'weighted'), 4))
best_c_value = c_values[np.argmax(c_scores)]
print "Best C-value: {}".format(best_c_value)
print "Best ROC AUC Score: {}".format(c_scores[np.argmax(c_scores)])
lr = LogisticRegression(C=best_c_value, n_jobs=-1).fit(train_matrix, train_labels)
predictions = lr.predict(dev_matrix)
print '\n'
"""
Explanation: IIIb. Logistic Regression
Our initial model will make use of the sklearn TfidfVectorizer function to encapsulate predictive power from the training data. After some initial trial runs, we realized that we needed to limit the number of features to make a useful model. In this case, a predictive "sweet spot" emerged in the 350+ - 400 range, so we limited our feature number to this amount.
End of explanation
"""
#fit Multinomial Naive Bayes classifier to training data
print "BERNOULLI NAIVE BAYES"
print "-----------------------"
model_scores = []
range_features = np.arange(415, 431)
for features in range_features:
vec = TfidfVectorizer(max_features=features)
train_matrix = vec.fit_transform(train_data)
dev_matrix = vec.transform(dev_data)
alphas = np.linspace(.0001, 1, 20)
bnb = BernoulliNB(alpha=.001).fit(train_matrix, train_labels)
predictions = bnb.predict(dev_matrix)
model_scores.append(round(roc_auc_score(dev_labels, predictions, average='weighted'), 4))
print "Best ROC AUC Score: {}".format(max(model_scores))
print "Max Features: {}".format(range_features[np.argmax(model_scores)])
"""
Explanation: Our initial baseline model with Logistic Regression does suprisingly well using only the raw text without any feature engineering or hyperparameter tuning. The initial score represents a rough 9% improvement over random guessing alone. After restricting the number of features in the model we see a 2% improvement in predictive accuracy, suggesting that the great majority of unique words in the feature space do not contribute to predictive accuracy. We further tested baseline models using Multinomial Naive Bayes and SVM.
IIIc. Bernoulli Naive Bayes
End of explanation
"""
print "\nSupport Vector Machine"
print "------------------------"
vec = TfidfVectorizer(max_features=best_max_feature, stop_words='english')
train_matrix = vec.fit_transform(train_data)
dev_matrix = vec.transform(dev_data)
svc = LinearSVC().fit(train_matrix, train_labels)
predictions = svc.predict(dev_matrix)
print round(roc_auc_score(dev_labels, predictions, average='weighted'),4)
"""
Explanation: Using Bernoulli NB we get similar results to LR, again making use of restricting the number of features in our training matrix.
IIId. Support Vector Machine
End of explanation
"""
#Create feature where in image is included in the request
df['image_incl'] = np.where(df['request_text_n_title'].str.contains("imgur"), int(1), int(0))
"""
Explanation: SVM performs no better than previous models.
IV. Feature Engineering
Setting aside text processing for the time being, we decided to focus on specific features within the dataset that could possibly lead to increased predictive power. For example, we noted that users who included pictues in their request (presumably as a means of validating their story), were disporportionately more likely to receive a pizza. Similarly, users who wrote longer requests, or who had spent more time in the RAOP community were also more likely to recieve a pizza. Therefore, what follows is an explanation of our attempts at engineering features to build the best predictive model possible. With a few exceptions all features were binarized.
a. Including an image
The inclusion of an image in the request indicates that the requester is providing some evidence of their need for a pizza. A photo might be a screenshot of a bank account balance, a large bill, an injury, or other misfortune. Adding such evidence increases the odds of receiving a pizza.
End of explanation
"""
#Create feature that is an aggregate of all indicators of community status including seniority
df['karma'] = df['requester_account_age_in_days_at_request'] + df['requester_days_since_first_post_on_raop_at_request']\
+ df['requester_number_of_comments_at_request'] + df['requester_number_of_comments_in_raop_at_request'] + \
df['requester_number_of_posts_at_request'] + df['requester_number_of_posts_on_raop_at_request'] + \
df['requester_number_of_subreddits_at_request'] + df['requester_upvotes_minus_downvotes_at_request']
karma_winners = df['karma'][df['requester_received_pizza'] == 1].describe()
karma_losers = df['karma'][df['requester_received_pizza'] == 0].describe()
karma_comparison = pd.concat([karma_winners, karma_losers], axis=1)
karma_comparison.columns = ['winners', 'losers']
karma_comparison
"""
Explanation: b. Community standing/status
End of explanation
"""
low_losers= len(df[df['requester_received_pizza'] == 0][df['karma'] < 15])
low_winners=len(df[df['requester_received_pizza'] == 1][df['karma'] < 15])
print low_losers, low_losers/3046.
print low_winners, low_winners/994.
high_losers = len(df[df['requester_received_pizza'] == 0][df['karma'] > 5000])
high_winners = len(df[df['requester_received_pizza'] == 1][df['karma'] > 5000])
print high_losers, high_losers/3046.
print high_winners, high_winners/994.
df['karma_low'] = np.where(df['karma'] < 15, 1, 0)
"""
Explanation: A quick analysis of the newly created "karma" features highlights the fact that people who received pizza have a median karma value of over 660, compared to those who did not with a median value of only 480. This large difference will likely have strong predictve power for our model. As indicated in the code snippet below, roughly one-third of all observations who did not receive a pizza have a karma value below 100.
End of explanation
"""
length_winners = df['total_length'][df['requester_received_pizza'] == 1].describe()
length_losers = df['total_length'][df['requester_received_pizza'] == 0].describe()
length_comparison = pd.concat([length_winners, length_losers], axis=1)
length_comparison.columns = ['winners', 'losers']
length_comparison
"""
Explanation: c. Request length
End of explanation
"""
df['day']=df['unix_timestamp_of_request_utc'].apply(lambda x:int(datetime.datetime.fromtimestamp(int(x)).strftime('%d')))
df['time']=df['unix_timestamp_of_request'].apply(lambda x:int(datetime.datetime.fromtimestamp(int(x)).strftime('%H')))
df['first_half'] = np.where(df['day'] < 16, 1, 0)
"""
Explanation: The analsyis here indicates that while not as dramatic a difference as the previous feature, there is a moderate difference in text length between winners and losers. Apparently, a longer text (request) indicates more of an effort to explain the users particular situation, and is therefore more likely to be seen as sincere or plausible by the RAOP community.
d. Extracting time-based features:
The UTC time stamp is parsed and converted to human readable format. Also, the first half of the month is identified with a binary variable since people might be more money to use for donating a pizza earlier in the month.
End of explanation
"""
#Create binary variables that show whether requester is grateful or willing to "pay it forward"
df['requester_grateful'] = np.where(df['request_text_n_title'].str.contains('thanks' or 'advance' or 'guy'\
or 'reading' or 'anyone' or 'anything' or'story'or 'tonight'or 'favor'or'craving'), int(1), int(0))
df['requester_payback'] = np.where(df['request_text_n_title'].str.contains('return' or 'pay it back'\
or 'pay it forward' or 'favor'), int(1), int(0))
"""
Explanation: e. Extracting gratitude and "pay it forward" sentiment
End of explanation
"""
# Define narrative categories
narratives = {
'money':['money','now','broke','week','until','time',
'last','day','when','today','tonight','paid',
'next','first','night','night','after','tomorrow',
'while','account','before','long','friday','rent',
'buy','bank','still','bills','ago','cash','due',
'soon','past','never','paycheck','check','spent',
'year','years','poor','till','yesterday','morning',
'dollars','financial','hour','bill','evening','credit',
'budget','loan','bucks','deposit','dollar','current','payed'],
'job':['work','job','paycheck','unemployment','interviewed',
'fired','employment','hired','hire'],
'student':['college','student','school','roommate','studying',
'study','university','finals','semester','class','project',
'dorm','tuition'],
'family': ['family','mom','wife','parents','mother','husband','dad','son',
'daughter','father','parent','mum','children','starving','hungry'],
'craving': ['friend','girlfriend','birthday','boyfriend','celebrate',
'party','game','games','movie','movies','date','drunk',
'beer','celebrating','invited','drinks','crave','wasted','invited']
}
# function to extract word count for each narrative from one text post, normalize by the word count of that post
def single_extract(text):
count = {'money':0.,
'job':0.,
'student':0.,
'family':0.,
'craving':0.}
words = text.split(' ')
length = 1./len(words)
for word in text.split(' '):
for i,k in narratives.items():
if word in k:
count[i] += length
return count.values()
# Extract request_text_n_title field
texts = df['request_text_n_title'].copy()
#initialize count
count =[]
# return normalized count for each narrative from all requests
for text in texts:
count.append(single_extract(text))
# narrative dataframe
narrative = pd.DataFrame(count)
# set up median for using with the test set
median_values = []
#extract narrative field
for i,k in enumerate(narratives.keys()):
median_values.append(np.median(narrative[i]))
narrative['narrative_'+k] = (narrative[i] > np.median(narrative[i])).astype(int)
narrative.drop([i],axis=1,inplace=True)
# concatenate
df = pd.concat([df,narrative],axis=1)
"""
Explanation: f. Extracting request narratives
We extract the narrative out of each request. Based on our EDA, we picked 5 narratives: Money, Job, Student, Family and Craving. 5 new columns will be added for the narratives which are coded in binary 0 or 1. The terms that specify each narrative are given in the narratives' dictionary. We counted the total occurent of the words in each post, normalize this count by the word count in the post, and use median-threshold to determin whether the post fall into the respective narrative.
End of explanation
"""
df.info()
"""
Explanation: In the end, we added a total of 16 additional features, 3 continuous and 13 binary.
End of explanation
"""
continuous_list = ['total_length']#, 'time', 'karma']
binary_list = [
u'image_incl',
u'karma_low',
# u'month',
# u'day',
u'time',
u'first_half',
u'requester_grateful',
u'requester_payback',
u'narrative_money',
u'narrative_job',
u'narrative_family',
u'narrative_student',
u'narrative_craving'
]
#create new DataFrame using previously defined "numeric_features" object to determine all columns in the DF
numeric_features = df.copy().loc[:,continuous_list]
numeric_features_norm = pd.DataFrame(data=preprocessing.normalize(numeric_features, axis=0),\
columns=numeric_features.columns.values)
# combine to contious and binary
numeric_features_norm = pd.concat([df[binary_list],numeric_features_norm],axis=1)
numeric_features_norm.head()
features = numeric_features_norm.copy()
target = df['requester_received_pizza']
features.info()
"""
Explanation: To fit our model, we combined all engineered features into a single dataframe. And after several iterations, decided to truncate some of the added features to improve predictive accuracy ('time' and 'karma' specifically).
End of explanation
"""
X, y = features.values, target.copy()
#shuffle = np.random.permutation(len(X))
X, y = X[shuffle], y[shuffle]
Xtrain, Xdev, ytrain, ydev = X[:3200], X[3200:], y[:3200], y[3200:]
Xtrain.shape, Xdev.shape, ytrain.shape, ydev.shape
"""
Explanation: V. Final Modeling
We decided to create three final models.
One model using just our engineered features (without unigram text)
One combined model using engineered features and unigram text
One model using predicted probabilities as model features
End of explanation
"""
scores = []
c_values = np.linspace(.001, 100, 20)
for c in c_values:
lr_8 = LogisticRegression(C=c, class_weight='balanced', n_jobs=-1).fit(Xtrain, ytrain)
predictions = lr_8.predict(Xdev)
scores.append(round(roc_auc_score(ydev, predictions, average='weighted'), 4))
print "Best C-value: {}".format(c_values[np.argmax(scores)])
print 'Best AUC score based on metrics.roc_auc_score = {}'.format(max(scores))
"""
Explanation: a. Model One: Engineered Features only
End of explanation
"""
#Initialize our vectorizer using hyperparameters from previous models
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, max_features=best_max_feature)
train_matrix = vec.fit_transform(train_data)
#Transform our text features
text_features = df['request_text_n_title'].values.copy()
text_transformed = vec.transform(text_features)
# Concatenate egineered numeric_features with vectorized text features
combined_features = np.append(features.values, text_transformed.toarray(), axis = 1)
print combined_features.shape
#prep data for modeling
X, y = combined_features, target.copy()
X, y = X[shuffle], y[shuffle]
Xtrain, Xdev, ytrain, ydev = X[:3200], X[3200:], y[:3200], y[3200:]
Xtrain.shape, Xdev.shape, ytrain.shape, ydev.shape
#Fit logistic model to data
c_values = np.logspace(.0001, 2, 200)
c_scores = []
for value in c_values:
lr = LogisticRegression(C=value, n_jobs=-1, class_weight='balanced', penalty='l2')
lr.fit(Xtrain, ytrain)
predictions = lr.predict(Xdev)
c_scores.append(round(metrics.roc_auc_score(ydev, predictions, average = 'weighted'), 4))
best_C_value = c_values[np.argmax(c_scores)]
print "Best C-value: {}".format(best_C_value)
print "Best ROC AUC Score: {}".format(c_scores[np.argmax(c_scores)])
"""
Explanation: b. Model Two: Engineered Features combined with Unigram text
End of explanation
"""
#fit logistic classifier to training data
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, max_features=best_max_feature)
train_matrix = vec.fit_transform(train_data)
lr1 = LogisticRegression(C=best_c_value, n_jobs=-1).fit(train_matrix, train_labels)
text_features = df['request_text_n_title'].values.copy()
text_transformed = vec.transform(text_features)
# create pizza_predict field for all records based on features array
pizza_predict = lr1.predict_proba(text_transformed)[:,1][:, np.newaxis]
# Concatenate numeric_features with pizza_predict to create pizza_predict + numeric features in ens_features
combined_features = np.append(features.values, pizza_predict, axis = 1)
X, y = combined_features, target.copy()
#shuffle = np.random.permutation(len(X))
X, y = X[shuffle], y[shuffle]
Xtrain, Xdev, ytrain, ydev = X[:3200], X[3200:], y[:3200], y[3200:]
Xtrain.shape, Xdev.shape, ytrain.shape, ydev.shape
#####
c_values = np.logspace(.0001, 2, 200)
c_scores = []
for value in c_values:
lr = LogisticRegression(C=value, n_jobs=-1, class_weight='balanced', penalty='l2')
lr.fit(Xtrain, ytrain)
predictions = lr.predict(Xdev)
c_scores.append(round(metrics.roc_auc_score(ydev, predictions, average = 'weighted'), 4))
best_C_value = c_values[np.argmax(c_scores)]
print "Best C-value: {}".format(best_C_value)
print "Best ROC AUC Score: {}".format(c_scores[np.argmax(c_scores)])
"""
Explanation: Model Three: Unigrams predict_probability combined with engineered features
End of explanation
"""
"""Our first attempt is a brute force method with logistics regression that
use the best parameters from developing the model"""
df_test = pd.read_json('test.json')
df_test['request_text_n_title'] = (df_test['request_title'] + ' ' + df_test['request_text_edit_aware'])
df_test['request_text_n_title'] = [ text.split(" ",1)[1].lower() for text in df_test['request_text_n_title']]
# print df_test['request_text_n_title'].head()
#create test set
test_features = df_test['request_text_n_title'].values # text features
#create new training set using all 4040 training samples
train_features = df['request_text_n_title'].values
train_labels = df['requester_received_pizza'].values
#shuffle our data to ensure randomization
shuffle = np.random.permutation(len(train_features))
train_features, train_labels = train_features[shuffle], train_labels[shuffle]
#transform raw data into Tfdif vector
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, max_features=387)
train_matrix = vec.fit_transform(train_features)
test_matrix = vec.transform(test_features)
#fit Logistic Regression model
lr= LogisticRegression(n_jobs=-1, class_weight='balanced').fit(train_matrix, train_labels)
test_predictions = lr.predict(test_matrix)[:, np.newaxis]
# print type(predictions_test)
sub_1 = np.append(df_test['request_id'].values[:, np.newaxis], test_predictions, axis = 1)
sub_1_df = pd.DataFrame(data=sub_1, columns=['request_id', 'requester_received_pizza']) # 1st row as the column names
sub_1_df.to_csv("submission_1.csv", sep=',', header=True, mode='w', index=0)
"""With this submission, we get a AUC score of 0.59386"""
print "Kaggle submission 1, result AUC = 0.59386"
"""The 2nd submission, we define a function process
to process the train and test json files
This function use the file name and the max number of feature tuned previously.
The output of process function depends on if it's train or test set
"""
def process(filename,max_feature_length,train=True):
df = pd.read_json(filename)
df['request_text_n_title'] = (df['request_title'] + ' ' + df['request_text_edit_aware'])
df['request_text_n_title'] = [ text.split(" ",1)[1].lower() for text in df['request_text_n_title']]
#total length
df['total_length'] =df['request_text_n_title'].apply(lambda x: len(x.split(' ')))
#get engineer features
# image included?
df['image_incl'] = np.where(df['request_text_n_title'].str.contains("imgur"), int(1), int(0))
#Karma
df['karma'] = df['requester_account_age_in_days_at_request']\
+ df['requester_days_since_first_post_on_raop_at_request']\
+ df['requester_number_of_comments_at_request'] + df['requester_number_of_comments_in_raop_at_request'] + \
df['requester_number_of_posts_at_request'] + df['requester_number_of_posts_on_raop_at_request'] + \
df['requester_number_of_subreddits_at_request'] + df['requester_upvotes_minus_downvotes_at_request']
df['karma_low'] = np.where(df['karma'] < 15, 1, 0)
# time
df['day']=df['unix_timestamp_of_request_utc'].apply(lambda x:int(datetime.datetime.fromtimestamp(int(x)).strftime('%d')))
df['time']=df['unix_timestamp_of_request'].apply(lambda x:int(datetime.datetime.fromtimestamp(int(x)).strftime('%H')))
df['first_half'] = np.where(df['day'] < 16, 1, 0)
# requester's attitude
df['requester_grateful'] = np.where(df['request_text_n_title'].
str.contains('thanks' or 'advance' or 'guy'\
or 'reading' or 'anyone' or 'anything'\
'story'or 'tonight'or 'favor'or'craving'),
int(1), int(0))
df['requester_payback'] = np.where(df['request_text_n_title'].
str.contains('return' or 'pay it back' or 'pay it forward' or 'favor'), int(1), int(0))
#narrative
# Define narrative categories
narratives = {
'money':['money','now','broke','week','until','time',
'last','day','when','today','tonight','paid',
'next','first','night','night','after','tomorrow',
'while','account','before','long','friday','rent',
'buy','bank','still','bills','ago','cash','due',
'soon','past','never','paycheck','check','spent',
'year','years','poor','till','yesterday','morning',
'dollars','financial','hour','bill','evening','credit',
'budget','loan','bucks','deposit','dollar','current','payed'],
'job':['work','job','paycheck','unemployment','interviewed',
'fired','employment','hired','hire'],
'student':['college','student','school','roommate','studying',
'study','university','finals','semester','class','project',
'dorm','tuition'],
'family': ['family','mom','wife','parents','mother','husband','dad','son',
'daughter','father','parent','mum','children','starving','hungry'],
'craving': ['friend','girlfriend','birthday','boyfriend','celebrate',
'party','game','games','movie','movies','date','drunk',
'beer','celebrating','invited','drinks','crave','wasted','invited']
}
# function to extract word count for each narrative from one text post
# normalize by the word count of that post
def single_extract(text):
count = {'money':0.,
'job':0.,
'student':0.,
'family':0.,
'craving':0.}
words = text.split(' ')
length = 1./len(words)
for word in text.split(' '):
for i,k in narratives.items():
if word in k:
count[i] += length
return count.values()
# Extract request_text_n_title field
texts = df['request_text_n_title'].copy()
#initialize count
count =[]
# return normalized count for each narrative from all requests
for text in texts:
count.append(single_extract(text))
# narrative dataframe
narrative = pd.DataFrame(count)
# set up median for using with the test set
#extract narrative field
for i,k in enumerate(narratives.keys()):
narrative['narrative_'+k] = (narrative[i] > np.median(narrative[i])).astype(int)
narrative.drop([i],axis=1,inplace=True)
# concatenate
df = pd.concat([df,narrative],axis=1)
continuous_list = ['total_length','time']
# include relevant binary variables and avoid overfitting
binary_list = ['requester_grateful','narrative_job',
'narrative_family', 'narrative_craving',
'image_incl']
#create new DataFrame using previously defined "numeric_features" object to determine all columns in the DF
continuous_features = df.copy().loc[:,continuous_list]
engineered_features = pd.concat([df[binary_list],continuous_features],axis=1)
if train==True:
# binarize sucess
# combine title and text
df['requester_received_pizza'] = np.where(df['requester_received_pizza'] == True, 1, 0)
# Get vectorized fields
vec = TfidfVectorizer(stop_words='english',sublinear_tf=1, max_features=max_feature_length)
text_features = df['request_text_n_title'].values.copy()
vectorized_matrix = vec.fit_transform(text_features)
target = df['requester_received_pizza']
return vec, vectorized_matrix, engineered_features, target
else:
text_features = df['request_text_n_title'].values.copy()
request_id = df['request_id']
return request_id,text_features, engineered_features
# process train.json get vectorizer, train_tfid matrix, engineered features, and target
vec, train_tfid_matrix, train_engr_features,train_target = process('train.json',best_max_feature)
# Model 2 on combined tfid_matrix and engr features
combined_features = np.append(train_engr_features.values, train_tfid_matrix.toarray(), axis = 1)
lr2 = LogisticRegression(n_jobs=-1,class_weight='balanced',penalty = 'l2').fit(combined_features, train_target)
# get test
request_id,test_text_features, test_engr_features = process('test.json',best_max_feature,train=False)
test_tfid_matrix = vec.transform(test_text_features)
combined_test_features = np.append(test_engr_features.values,test_tfid_matrix.toarray(),axis=1)
# predict
predict2 = lr2.predict(combined_test_features)[:,np.newaxis]
# print type(predictions_test)
sub_2 = np.append(request_id.values[:, np.newaxis], predict2, axis = 1)
sub_2_df = pd.DataFrame(data=sub_2, columns=['request_id', 'requester_received_pizza']) # 1st row as the column names
sub_2_df.to_csv("submission_2.csv", sep=',', header=True, mode='w', index=0)
"""We achieved AUC of 0.60278 with this submission"""
print "Kaggle submission 2, result AUC = 0.60278"
"""
Explanation: VI. Results and Analysis
Our best model ended up being model Two which was a combination of engineered features and unigram text. This result is not surprising, especially considering the amount of work required to engineer the given features. We submitted different variants of model two to the Kaggle website with varying degrees of success.
FOR SUBMISSION WITH FINAL MODELS
End of explanation
"""
|
h-mayorquin/time_series_basic | presentations/2016-01-25(Wall-Street-Letters-Visualizations-Of-The-Spatio-Temporal-Clusters).ipynb | bsd-3-clause | import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
# Now nexa modules|
import sys
sys.path.append("../")
from visualization.sensor_clustering import visualize_clusters_text_to_image
"""
Explanation: Visualization of the Spatiotemporal clusters.
Here we present a visualization of how the clusters look in the frame of reference of the letters.
End of explanation
"""
# First we load the file
file_location = '../results_database/text_wall_street_big.hdf5'
run_name = '/low-resolution'
f = h5py.File(file_location, 'r')
# Now we need to get the letters and align them
text_directory = '../data/wall_street_letters.npy'
letters_sequence = np.load(text_directory)
Nletters = len(letters_sequence)
symbols = set(letters_sequence)
"""
Explanation: First we load all the files
End of explanation
"""
# Nexa parameters
Nspatial_clusters = 3
Ntime_clusters = 15
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
"""
Explanation: Nexa parameters and the name of the run
End of explanation
"""
fig = visualize_clusters_text_to_image(nexa, f, run_name)
plt.show(fig)
"""
Explanation: Now we call the function for plotting and plot
End of explanation
"""
|
Soil-Carbon-Coalition/atlasdata | Challenge observations -- cleanup and classification.ipynb | mit | # The preamble
import pandas as pd
#pd.set_option('mode.sim_interactive', True)
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
#from collections import OrderedDict
import json, csv
import re
df
"""
Explanation: This notebook is about classifying Challenge data according to type, and some cleanup of inconsistent terms. There are probably much better ways to classify
End of explanation
"""
#df = pd.read_csv('/Users/Peter/Documents/scc/challenge/PSQL/coverNEW.csv', parse_dates=['date'], usecols=['id','type','date','filename'])
#df = df.replace(np.nan,' ', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
#df = pd.read_csv('/Users/Peter/Documents/scc/challenge/datamungedOct2016c.csv')
df = pd.read_csv('/Users/Peter/Documents/atlas/atlasbiowork.com/atlasbiowork/db/changeREADYD.csv')
#df.dtypes
print df.to_json(orient='records')
df.to_csv('/Users/Peter/Documents/atlas/atlasbiowork.com/atlasbiowork/db/changeREADYD.csv', index=False)
df.values[5]
print df.values[1]
"""
Explanation: load csv
This notebook records some data operations on Soil Carbon Challenge data to prepare it for insertion in atlasbiowork database. The munging and cleanup process will benefit from alternating between using Excel and notebook for inspection and cleanup.
End of explanation
"""
#df.to_csv('/Users/Peter/Documents/scc/challenge/datamungedOct2016d.csv', index=False)
df.to_csv('/Users/Peter/Documents/scc/challenge/obs_types/changeREADYB.csv', index=False)
#ADD SOME COLUMNS and DROP OTHERS
df = df[df['id']!='geometry']
df['obs_type'] = ""
df['newurl'] =""
df = df.drop('lon', axis=1)
df = df.drop('marker', axis=1)
"""
Explanation: save your work periodically but w. new filename
End of explanation
"""
df = df.replace(np.nan,' ', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
#analysis_terms = ['analysis', 'density', 'average','variance','pH']
def mymunge(row): # a VARIETY OF CRITERIA
if any( ['along' in row['note'], 'transect' in row['type'], 'grade' in row['type'], 'line' in row['type'], 'map' in row['type'], 'plot' in row['type'], 'remonitor' in row['type'], 'plants' in row['type']] ):
return 'transect'
if any( ['vert' in row['type'], 'ang' in row['type'], 'step back' in row['note'], 'hoop' in row['note'], 'Hoop' in row['note'], 'bare' in row['label1']] ):
return 'cover'
if any( ['infiltracion' in row['type'], 'infiltration' in row['type']] ):
return 'infiltration'
if any( ['analysis' in row['type'], 'composited' in row['type'], 'misc' in row['type'], 'density' in row['type'],'variance' in row['type'], 'average' in row['type'], 'pH' in row['type']] ):
return 'analysis'
if any( ['photo' in row['type']] ):
return 'photo'
if any( ['change' in row['type']] ):
return 'change'
if any( ['brix' in row['type'], 'biomass' in row['type'], 'clip' in row['type']] ):
return 'food_analysis'
df['obs_type'] = df.apply(mymunge, axis=1)
#SEE HOW WE DID
print(len(df))
print(df.obs_type.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=False))
df = df.replace(np.nan,' ', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
#analysis_terms = ['analysis', 'density', 'average','variance','pH']
def mymunge(row): # a VARIETY OF CRITERIA
if any( ['along' in row['note'], 'transect' in row['type'], 'grade' in row['type'], 'line' in row['type'], 'map' in row['type'], 'plot' in row['type'], 'remonitor' in row['type'], 'plants' in row['type']] ):
return 'transect'
if any( ['vert' in row['type'], 'ang' in row['type'], 'step back' in row['note'], 'hoop' in row['note'], 'Hoop' in row['note'], 'bare' in row['label1']] ):
return 'cover'
if any( ['infiltracion' in row['type'], 'infiltration' in row['type']] ):
return 'infiltration'
if any( ['analysis' in row['type'], 'composited' in row['type'], 'misc' in row['type'], 'density' in row['type'],'variance' in row['type'], 'average' in row['type'], 'pH' in row['type']] ):
return 'analysis'
if any( ['photo' in row['type']] ):
return 'photo'
if any( ['change' in row['type']] ):
return 'change'
if any( ['brix' in row['type'], 'biomass' in row['type'], 'clip' in row['type']] ):
return 'food_analysis'
df['obs_type'] = df.apply(mymunge, axis=1)
#SEE HOW WE DID
print(len(df))
print(df.obs_type.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=False))
df = pd.read_csv('/Users/Peter/Documents/scc/challenge/obs_types/change.csv')
df = df.replace(np.nan,'', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
def buildValues(row):
v = (row['url'], row['photo1'])
return v
df['list'] = df.apply(buildValues, axis=1)
print list(df['list'])
df = df.replace(np.nan,'', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
def buildValues(row): # build the values json field
v = '{"observer": "' + row['observer'] \
+ '", "label1": "' + row['label1'] \
+ '", "value1": "' + row['value1'] \
+ '", "label2": "' + row['label2'] \
+ '", "value2": "' + row['value2'] \
+ '", "label3": "' + row['label3'] \
+ '", "value3": "' + row['value3'] \
+ '", "start_date": "' + row['start_date'] \
+ '", "end_date": "' + row['end_date'] \
+ '", "description": "' + row['description'] \
+ '", "photo1": "' + row['photo1'] \
+ '", "photo2": "' + row['photo2'] \
+ '"}'
return v
df['values'] = df.apply(buildValues, axis=1)
#SEE HOW WE DID
#print(len(df))
#print(df.obs_type.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=False))
print df['values'][50]
print df['values'][0:10]
#df[df['obs_type'].isnull()]
df = df.replace(np.nan,' ', regex=True)#MUST REPLACE NAN AS NAN IS NOT ITERABLE
#analysis_terms = ['analysis', 'density', 'average','variance','pH']
def mymunge(row): # a VARIETY OF CRITERIA
if any( ['along' in row['note'], 'transect' in row['type'], 'grade' in row['type'], 'line' in row['type'], 'map' in row['type'], 'plot' in row['type'], 'remonitor' in row['type'], 'plants' in row['type']] ):
return 'transect'
if any( ['vert' in row['type'], 'ang' in row['type'], 'step back' in row['note'], 'bare' in row['label1']] ):
return 'cover'
if any( ['infiltracion' in row['type'], 'infiltration' in row['type']] ):
return 'infiltration'
if any( ['analysis' in row['type'], 'composited' in row['type'], 'misc' in row['type'], 'density' in row['type'],'variance' in row['type'], 'average' in row['type'], 'pH' in row['type']] ):
return 'analysis'
if any( ['photo' in row['type']] ):
return 'photo'
if any( ['change' in row['type']] ):
return 'change'
if any( ['brix' in row['type'], 'biomass' in row['type'], 'clip' in row['type']] ):
return 'food_analysis'
df['obs_type'] = df.apply(mymunge, axis=1)
#SEE HOW WE DID
print(len(df))
print(df.obs_type.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=False))
df = pd.read_csv('/Users/Peter/Documents/atlas/atlasbiowork.com/atlasbiowork/db/changeREADYG.csv')
#print df.to_json(orient='records')
df.to_csv('/Users/Peter/Documents/atlas/atlasbiowork.com/atlasbiowork/db/changeREADYH.csv', index=False)
def buildJSON(row): # a VARIETY OF CRITERIA
v=''
v += 'Observation(values=' + row['values']
v += ', observer_id=' + str(row['observer_id'])
v += ', site_id=' + str(row['site_id'])
v += ', type_id=' + str(row['type_id'])
v += ').save()'
return v
df['json'] = df.apply(buildJSON, axis=1)
df['json'].to_csv('/Users/Peter/Documents/atlas/atlasbiowork.com/atlasbiowork/db/changeK.csv', index=False)
"""
Explanation: Using APPLY to classify according to keywords or strings
End of explanation
"""
#GET STRINGS OF LABELS "list_of_terms"
a1 = df.label1.unique()
a2 = df.label2.unique()
a3 = df.label3.unique()
#a4 = df.label4.unique()
#a5 = df.label5.unique()
list_of_terms = np.concatenate([a1,a2,a3])
#sorted(list_of_terms)
#SEARCH AND REPLACE
#searchterm = 'lichen'
#replaceterm = 'squish'
#the_list = [item for item in list_of_terms if searchterm in str(item)]
#need to target specific columns
mycols=df[['label1', 'label2','label3','label4','label5']]
#mycols.replace('=','wox', regex=True)
#mycols.replace(to_replace=the_list, value=replaceterm, inplace=True)
#df = df.replace(np.nan,' ', regex=True)
df.label4[df.label4.str.contains('litter')]
searchterm = 'lichen'
replaceterm = 'moss/algae/lichen'
the_list = [item for item in list_of_terms if searchterm in str(item)]
#need to target specific columns
mycols=df[['label1', 'label2','label3','label4','label5']]
mycols.describe()
#mycols.replace(to_replace=the_list, value=replaceterm, inplace=True)
searchterm = 'lichen'
replaceterm = 'moss/algae/lichen'
the_list = [item for item in list_of_terms if searchterm in str(item)]
#need to target specific columns
df[['label1', 'label2','label3','label4','label5']].replace(to_replace=thelist, value=replaceterm, inplace=True)
transects.to_csv('/Users/Peter/Documents/scc/challenge/PSQL/transectsOct23.csv', index=False)
linephotos = df[(df.type.str.contains('line'))]
angphotos = df[(df.type.str.contains('ang')) | (df.note.str.contains('step back'))]
vertphotos = df[df.type.str.contains('vert')]
len(vertphotos)
#re.findall('\d+', s) #finds digits in s
def get_num(x):
digits = ''.join(ele for ele in x if ele.isdigit())
if digits:
return int(digits)
pass
#get_num('Hoop 1, 125\'')
#df.ix[459]
for y in range(len(df)):
#basic row selection from http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/
#which has GREAT info on joins
peter_plots = df[(df.lat > 0) & (df.observer.str.contains('Peter Donovan'))]
features = df[(df.type == '*plot summary')|(df.type == 'change')|(df.type.str.contains('remonitor'))];
#df.iloc[100] and df.ix[100] get the row referred to by the default 0-based index
# df.loc['Kellogg LTER'] doesn't work because it's not an index;
# dfnew = df.set_index('id'); this works even tho id is not unique
#dfnew.loc['Kellogg LTER'] and this works; use inplace=True as arg to modify existing df
# dfnew.loc['BURR1'] returns all rows for this index
#column selector
#df[['type','label3']]; need to use double [] to enclose a list
#new column
#df['new'] = df.lat + 2
#df['featureID'] = df.id.str[0:6] #str function slices string
#df.type #although type is a keyword this works
#subsetting dataframes by finding rows
change = df[(df['type'] == "change")]
plots = df[df['type']=="*plot summary"]
peter = df[df.observer == "Peter Donovan"]
north = df[df.lat > 49] # gives 19 but df.lat > 49 gives all rows
type(change)
"""
Explanation: search and replace in dataframe
First, get the strings you'd like to standardize. Then, enter your lists and replacement term. You have to target specific columns.
End of explanation
"""
food_analysis = df[(df['obs_type'] == "food_analysis")]
food_analysis.to_csv('/Users/Peter/Documents/scc/challenge/obs_types/food_analysis.csv', index=False)
"""
Explanation: http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tfx/tutorials/tfx/template.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import sys
# Use the latest version of pip.
!pip install --upgrade pip
# Install tfx and kfp Python packages.
!pip install --upgrade "tfx[kfp]<2"
"""
Explanation: Create a TFX pipeline using templates
Note: We recommend running this tutorial on Google Cloud Vertex AI Workbench. Launch this notebook on Vertex AI Workbench.
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/template">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png"/>View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/template.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/template.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/template.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table></div>
Introduction
This document will provide instructions to create a TensorFlow Extended (TFX) pipeline
using templates which are provided with TFX Python package.
Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using ! are provided.
You will build a pipeline using Taxi Trips dataset
released by the City of Chicago. We strongly encourage you to try building
your own pipeline using your dataset by utilizing this pipeline as a baseline.
Step 1. Set up your environment.
AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.
NOTE: To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.
Install tfx python package with kfp extra requirement.
End of explanation
"""
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
"""
Explanation: Let's check the versions of TFX.
End of explanation
"""
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
"""
Explanation: In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
End of explanation
"""
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
"""
Explanation: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint. ENDPOINT should contain only the hostname part of the URL. For example, if the URL of the KFP dashboard is https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start, ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com.
NOTE: You MUST set your ENDPOINT value below.
End of explanation
"""
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
"""
Explanation: Set the image name as tfx-pipeline under the current GCP project.
End of explanation
"""
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
"""
Explanation: And, it's done. We are ready to create a pipeline.
Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below. This will also become the name of the project directory where your files will be put.
End of explanation
"""
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
"""
Explanation: TFX includes the taxi template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regresssion, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
"""
%cd {PROJECT_DIR}
"""
Explanation: Change the working directory context in this notebook to the project directory.
End of explanation
"""
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
"""
Explanation: NOTE: Don't forget to change directory in File Browser on the left by clicking into the project directory once it is created.
Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The taxi template uses the same Chicago Taxi dataset and ML model as the Airflow Tutorial.
Here is brief introduction to each of the Python files.
- pipeline - This directory contains the definition of the pipeline
- configs.py — defines common constants for pipeline runners
- pipeline.py — defines TFX components and a pipeline
- models - This directory contains ML model definitions.
- features.py, features_test.py — defines features for the model
- preprocessing.py, preprocessing_test.py — defines preprocessing
jobs using tf::Transform
- estimator - This directory contains an Estimator based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using TF estimator
- keras - This directory contains a Keras based model.
- constants.py — defines constants of the model
- model.py, model_test.py — defines DNN model using Keras
- local_runner.py, kubeflow_runner.py — define runners for each orchestration engine
You might notice that there are some files with _test.py in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag. You can usually get a module name by deleting .py extension and replacing / with .. For example:
End of explanation
"""
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
"""
Explanation: Step 4. Run your first TFX pipeline
Components in the TFX pipeline will generate outputs for each run as ML Metadata Artifacts, and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be <your-project-id>-kubeflowpipelines-default.
Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
End of explanation
"""
!tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \
--build-image
"""
Explanation: Let's create a TFX pipeline using the tfx pipeline create command.
Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And skaffold will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
End of explanation
"""
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: While creating a pipeline, Dockerfile will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with other source files.
NOTE: kubeflow will be automatically selected as an orchestration engine if airflow is not installed and --engine is not specified.
Now start an execution run with the newly created pipeline using the tfx run create command.
End of explanation
"""
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard.
One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured when you create a KFP cluster in GCP, or see Troubleshooting document in GCP.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py. Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.
You now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
End of explanation
"""
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
Step 6. Add components for training.
In this step, you will add components for training and model validation including Transform, Trainer, Resolver, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, Resolver, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
End of explanation
"""
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
NOTE: If we changed anything in the model code, we have to rebuild the
container image, too. We can trigger rebuild using --build-image flag in the
pipeline update command.
NOTE: You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.
It is waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying enable_cache=True for the Pipeline object in pipeline.py.
Step 7. (Optional) Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note: You MUST set your GCP region in the configs.py file before proceeding.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is my_pipeline if you didn't change.
Double-click to open kubeflow_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
End of explanation
"""
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: Step 8. (Optional) Try Dataflow with KFP
Several TFX Components uses Apache Beam to implement data-parallel pipelines, and it means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is my_pipeline if you didn't change.
Double-click to open kubeflow_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation
"""
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
"""
Explanation: You can find your Dataflow jobs in Dataflow in Cloud Console.
Step 9. (Optional) Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation
"""
|
astroumd/GradMap | notebooks/Lectures2021/Lecture1/GradMap_L1_Student.ipynb | gpl-3.0 | ## You can use Python as a calculator:
5*7 #This is a comment and does not affect your code.
#You can have as many as you want.
#Comments help explain your code to others and yourself.
#No worries.
5+7
5-7
5/7
"""
Explanation: Introduction to "Doing Science" in Python for REAL Beginners
Python is one of many languages you can use for research and homework purposes. In the next few days, we will work through many of the tool, tips, and tricks that we as graduate students (and PhD researchers) use on a daily basis. We will NOT attempt to teach you all of Python as there isn't time. We will however build up a set of code(s) that will allow you to read and write data, make beautiful publish-worthy plots, fit a line (or any function) to data, and set up algorithms. You will also begin to learn the syntax of Python and can hopefuly apply this knowledge to your current and future work.
Before we begin, a few words on navigating the iPython Notebook:
There are two main types of cells : Code and Text
In "code" cells "#" at the beginning of a line marks the line as comment
In "code" cells every non-commented line is intepreted
In "code" cells, commands that are preceded by % are "magics" and are special commands in IPython to add some functionality to the runtime interactive environment.
Shift+Return shortcut to execute a cell
Alt+Return (Option+Return on Macs) shortcut to execute a cell and create another one below
Here you can find a complete documentation about the notebook.
http://ipython.org/ipython-doc/1/interactive/notebook.html
In particular have a look at the section about the keyboard shortcuts. You can also access the keyboard
shortcuts from the $\textbf{Help}$ menu above.
And remember that :
Indentation has a meaning (we'll talk about this when we cover loops)
Indexing starts from 0
We will discuss more about these concepts while doing things. Let's get started now!!!!
A. Numbers, Calculations, and Lists
Before we start coding, let's play around with the Jupyter environment. Make a new cell below using the Alt+Return shortcut.
Take your newly created cell and write something in it. Switch the type of the cell between a code cell and a text/markdown cell by using the selection box in the top of the screen. See how it changes?
Insert a comment to yourself (this is always a great idea) by using the # symbol.
End of explanation
"""
a = 10
b = 7
print(a)
print(b)
print(a*b , a+b, a/b)
"""
Explanation: Unfortunately, the output of your calculations won't be saved anywhere, so you can't use them later in your code.
There's a way to get around this: by assigning them to variables. A variable is a way of referring to a memory location used by a computer program that can contain values, text, or even more complicated types. Think of variables as containers to store something so you can use or change it later. Variables can be a single letter (like x or y) but they are usually more helpful when they have descriptive names (like age, stars, total_sum). You want to have a descriptive variable name (so you don't have to keep looking up what it is) but also one that is not a pain to type repeatedly.
Let's assign some variables and print() them to the screen.
End of explanation
"""
a = 5
b = 7
print(a*b, a+b, a/b)
"""
Explanation: You can also write over variables with new values, but your previous values will be gone.
End of explanation
"""
numList = [0,1,2,3,4,5,6,7,8,9]
print(numList)
"""
Explanation: Next, let's create a list of numbers. A list is a way to store items in a group.
End of explanation
"""
L = len(numList)
print(L)
"""
Explanation: How many elements or numbers does the list numList contain? Yes, this is easy to count now, but you will eventually work with lists that contains MANY items. To get the length of a list, use len().
End of explanation
"""
# Insert your code below:
"""
Explanation: You can also access particular elements in an array by indexing. The syntax for this is the following:
numList[index_number]
This will return the value in the list that corresponds to the index number.
Arrays are numbered starting from 0, such that
First position = 0 (or 0th item)
Second position = 1
Third position = 2
etc.
It is a bit confusing, but after a bit of time, this becomes quite natural.
For example, getting the 4th item in the list you would need to type:
numList[3]
Try accessing elements of the list you just created:
End of explanation
"""
# Insert your code below:
"""
Explanation: How would you access the number 5 in numList?
End of explanation
"""
fibList = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
fibList[5]
"""
Explanation: Let's try making more complicated list:
End of explanation
"""
addList = [1, 1, 5, 4, 6, 7, 3, 2, 8]
print(addList)
# Now let's add some new numbers to the list
addList = addList + [4, 3, 2, 6]
print(addList)
"""
Explanation: Now let's say you create a list of numbers, but later on you want to add more numbers to the list. We can do that! We can quite literally add them to the list like so:
End of explanation
"""
# Run this code
%matplotlib inline
# this "magic" command puts the plots right in the jupyter notebook
import matplotlib
"""
Explanation: See how the list changed to now include the new numbers?
Now you know the basics of Python, let's see how it can be used as a graphing calculator
B. Our first plot!
Python is a fantastic language because it is very powerful and flexible. It is like modular furniture or a modular building. You have the Python foundation and choose which modules you want/need and load them before you start working. One of the most loved here at UMD is the matplotlib (https://matplotlib.org/), which provides lots of functionality for making beautiful, publishable plots.
End of explanation
"""
# Run this code
import matplotlib.pyplot as plt
"""
Explanation: When using modules (also sometimes called libraries or packages ) you can use a nickname through the as keyword so you don't have to type the long module name every time. For example, matplotlib.pyplot is typically shortened to plt like below.
End of explanation
"""
x = numList
y = numList
p = plt.plot(x, y)
"""
Explanation: Now let's do a quick simple plot using the list we defined earlier!
End of explanation
"""
# Clear the plotting field.
plt.clf() # No need to add anything inside these parentheses.
# First line
plt.plot(x, y, color='blue', linestyle='-', linewidth=1, label='num')
# Second line
z = fibList
# you can shorten the keywords like "color" to be just "c" for quicker typing
plt.plot(x, z, c='r', ls='--', lw=3, label='fib')
# add the labels and titles
plt.xlabel('x values')
plt.ylabel('y values')
plt.title('My First Plot')
plt.legend(loc='best')
#Would you like to save your plot? Uncomment the below line. Here, we use savefig('nameOffigure')
#It should save to the folder you are currently working out of.
#plt.savefig('MyFirstFigure.jpg')
"""
Explanation: You can change a lot of attributes about plots, like the style of the line, the color, and the thickness of the line. You can add titles, axis labels, and legends. You can also put more than one line on the same plot. This link includes all the ways you can modify plots: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html.
Let's take a quick look at the stuff we can find in the matplotlib documentation, as it can be a little overwhelming to navigate.
Here is a quick example showing a few of the things you can do with matplotlib:
End of explanation
"""
# Insert your code below:
"""
Explanation: EXERCISE 1:
Create two lists of numbers: list1 will be the integers from 0 to 9 and list2 will be the elements of list1 squared.
Plot the two lists with matplotlib and make some changes to the color, linestyle, or linewidth.
Add labels, a title, and a legend to your plot.
Save the plot once you are done.
Be creative and feel free to look up the different linestyles using the link above.
End of explanation
"""
#Example conditional statements
x = 1
y = 2
x < y #x is less than y
"""
Explanation: C. Logic, If/Else, and Loops
Let's now switch gears a bit and discuss logic in Python. Conditional (logic) statements form the backbone of programming. These statements in Python return either True or False and have a special name in programming: Booleans. Sometimes this type of logic is also called Boolean logic.
End of explanation
"""
#x is greater than y
x > y
#x is less-than or equal to y
x <= y
#x is greater-than or equal to y
x >= y
"""
Explanation: Think of the statement $x<y$ as asking the question "is x less than y?" If it is, then it returns True and if x is not less than y it returns False.
End of explanation
"""
#Example of and operator
(1 < 2) and (2 < 3)
#Example of or operator
(1 < 2) or (2 > 3)
#Example of not operator
not(1 < 2)
"""
Explanation: If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.
There are three logical operators that are handy to know:
And operator: a and b
outputs True only if both a and b are True
Or operator: a or b
outputs True if at least one of a and b are True
Not operator: not(a)
outputs the negation of a
End of explanation
"""
x = 1
y = 2
if (x < y):
print("Yup, totally true!")
else:
print("Nope, completely wrong!")
"""
Explanation: Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.
An if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true.
Here's an example. Play around with the variables x and y to see what happens.
End of explanation
"""
x = 2
y = 1
if (x > y):
print("x is greater than y")
"""
Explanation: The idea here is that Python checks to see if the statement (in this case "x < y") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.
Note that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.)
You also do not always need an "else" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement.
Here is an example of such a case. Play around with it to see what happens when you change the values of x and y.
End of explanation
"""
x = 2
y = 2
if (x == y):
print("x and y are equal")
if (x != y):
print("x and y are not equal")
if (x > y or x < y):
print("x and y are not equal (again!)")
"""
Explanation: Here's a more complicated case. Here, we introduce some logic that helps you figure out if two objects are equal or not.
There's the == operator and the != operator. Can you figure out what they mean?
End of explanation
"""
x = 1
while (x <= 10):
print(x)
x = x + 1
"""
Explanation: Loops
While-loops are similar to if statements, in the sense that they also have a conditional statement built into them. The code inside the loop will execute when the conditional is True. And then it will check the conditional again and, if it evaluates to True, the code will execute... again. And so on and so forth...
The funny thing about while-loops is that they will KEEP executing that segment of code until the conditional statement evaluates to False...which hopefully will happen...right?
Although this seems a bit strange, you can get the hang of it!
For example, let's say we want Python to count from 1 to 10.
End of explanation
"""
x = 2
i = 0 #dummy variable
while (i<10):
x = 2*x
print(x)
i = i+1
"""
Explanation: Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.
It is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i.
End of explanation
"""
myList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# we want to end the loop at the end of the list i.e., the length of the list
end = len(myList)
# your code here
"""
Explanation: Now we want to combine lists with loops! You can use the dummy variable as a way to access a value in the list through its index. In exercise 1 we asked you to square the elements in a given list by hand, let's now do it by using a loop. The setup for the loop is provided below but try completing the code on your own!
In Python, the command to square something is **. So 3**2 will give you 9.
End of explanation
"""
twoList = [2, 5, 6, 2, 4, 1, 5, 7, 3, 2, 5, 2]
count = 0 # this variable will count up how many times the number 2 appears in the above list
end = len(twoList)
i = 0
while i < end:
if twoList[i] == 2:
count = count + 1
i = i + 1
print(count)
"""
Explanation: Isn't that much easier than squaring everything by hand? Loops are your friends in programming and will make menial, reptitive tasks go by very quickly.
All this information with logic, loops, and lists may be confusing, but you will get the hang of it with practice! And by combining these concepts, your programming in Python can be very powerful. Let's try an example where we use an if/then nested inside of a loop by finding how many times the number 2 shows up in the following list. Remember that indentation is very important in Python!
End of explanation
"""
x = [True, True, False, False]
y = [True, False, True, False]
# Insert your code here below:
"""
Explanation: Notice how the indentation is set up. What happens if you indent the print statement? How about removing the indentation on the if statement? Play around with it so you get the hang of indentation in nested code.
If you are ever lost when writing or understanding a loop, just think through each iteration one by one. Think about how the variables, especially the dummy variables, are changing. Printing them to the screen can also help you figure out any problems you may have. With some more practice, you will be a loop master soon!
EXERCISE 2: Truth Table
A truth table is a way of showing the True and False values of different operations. This seems abstract and unuseful, but they can be really great ways to 'mask', or block out, data that is not needed. They typically start with values for two variables and then find the result when they are combined with different operators. Using two separate while-loops, generate two columns of a truth table for the lists x and y, defined below. That is, find the values for each element in the list for x and y in one loop and then the values of x or y in another. Note: you can always check your answer by doing it in your head! Checking your work is a good habit to have for the future :)
End of explanation
"""
|
google/earthengine-community | tutorials/sentinel-2-s2cloudless/index.ipynb | apache-2.0 | import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
"""
Explanation: Sentinel-2 Cloud Masking with s2cloudless
Author: jdbcode
This tutorial is an introduction to masking clouds and cloud shadows in Sentinel-2 (S2) surface reflectance (SR) data using Earth Engine. Clouds are identified from the S2 cloud probability dataset (s2cloudless) and shadows are defined by cloud projection intersection with low-reflectance near-infrared (NIR) pixels.
For a similar JavaScript API script, see this Code Editor example.
Run me first
Run the following cell to initialize the Earth Engine API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.
End of explanation
"""
AOI = ee.Geometry.Point(-122.269, 45.701)
START_DATE = '2020-06-01'
END_DATE = '2020-06-02'
CLOUD_FILTER = 60
CLD_PRB_THRESH = 50
NIR_DRK_THRESH = 0.15
CLD_PRJ_DIST = 1
BUFFER = 50
"""
Explanation: Assemble cloud mask components
This section builds an S2 SR collection and defines functions to add cloud and cloud shadow component layers to each image in the collection.
Define collection filter and cloud mask parameters
Define parameters that are used to filter the S2 image collection and determine cloud and cloud shadow identification.
|Parameter | Type | Description |
| :-- | :-- | :-- |
| AOI | ee.Geometry | Area of interest |
| START_DATE | string | Image collection start date (inclusive) |
| END_DATE | string | Image collection end date (exclusive) |
| CLOUD_FILTER | integer | Maximum image cloud cover percent allowed in image collection |
| CLD_PRB_THRESH | integer | Cloud probability (%); values greater than are considered cloud |
| NIR_DRK_THRESH | float | Near-infrared reflectance; values less than are considered potential cloud shadow |
| CLD_PRJ_DIST | float | Maximum distance (km) to search for cloud shadows from cloud edges |
| BUFFER | integer | Distance (m) to dilate the edge of cloud-identified objects |
The values currently set for AOI, START_DATE, END_DATE, and CLOUD_FILTER are intended to build a collection for a single S2 overpass of a region near Portland, Oregon, USA. When parameterizing and evaluating cloud masks for a new area, it is good practice to identify a single overpass date and limit the regional extent to minimize processing requirements. If you want to work with a different example, use this Earth Engine App to identify an image that includes some clouds, then replace the relevant parameter values below with those provided in the app.
End of explanation
"""
def get_s2_sr_cld_col(aoi, start_date, end_date):
# Import and filter S2 SR.
s2_sr_col = (ee.ImageCollection('COPERNICUS/S2_SR')
.filterBounds(aoi)
.filterDate(start_date, end_date)
.filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', CLOUD_FILTER)))
# Import and filter s2cloudless.
s2_cloudless_col = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
.filterBounds(aoi)
.filterDate(start_date, end_date))
# Join the filtered s2cloudless collection to the SR collection by the 'system:index' property.
return ee.ImageCollection(ee.Join.saveFirst('s2cloudless').apply(**{
'primary': s2_sr_col,
'secondary': s2_cloudless_col,
'condition': ee.Filter.equals(**{
'leftField': 'system:index',
'rightField': 'system:index'
})
}))
"""
Explanation: Build a Sentinel-2 collection
Sentinel-2 surface reflectance and Sentinel-2 cloud probability are two different image collections. Each collection must be filtered similarly (e.g., by date and bounds) and then the two filtered collections must be joined.
Define a function to filter the SR and s2cloudless collections according to area of interest and date parameters, then join them on the system:index property. The result is a copy of the SR collection where each image has a new 's2cloudless' property whose value is the corresponding s2cloudless image.
End of explanation
"""
s2_sr_cld_col_eval = get_s2_sr_cld_col(AOI, START_DATE, END_DATE)
"""
Explanation: Apply the get_s2_sr_cld_col function to build a collection according to the parameters defined above.
End of explanation
"""
def add_cloud_bands(img):
# Get s2cloudless image, subset the probability band.
cld_prb = ee.Image(img.get('s2cloudless')).select('probability')
# Condition s2cloudless by the probability threshold value.
is_cloud = cld_prb.gt(CLD_PRB_THRESH).rename('clouds')
# Add the cloud probability layer and cloud mask as image bands.
return img.addBands(ee.Image([cld_prb, is_cloud]))
"""
Explanation: Define cloud mask component functions
Cloud components
Define a function to add the s2cloudless probability layer and derived cloud mask as bands to an S2 SR image input.
End of explanation
"""
def add_shadow_bands(img):
# Identify water pixels from the SCL band.
not_water = img.select('SCL').neq(6)
# Identify dark NIR pixels that are not water (potential cloud shadow pixels).
SR_BAND_SCALE = 1e4
dark_pixels = img.select('B8').lt(NIR_DRK_THRESH*SR_BAND_SCALE).multiply(not_water).rename('dark_pixels')
# Determine the direction to project cloud shadow from clouds (assumes UTM projection).
shadow_azimuth = ee.Number(90).subtract(ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE')));
# Project shadows from clouds for the distance specified by the CLD_PRJ_DIST input.
cld_proj = (img.select('clouds').directionalDistanceTransform(shadow_azimuth, CLD_PRJ_DIST*10)
.reproject(**{'crs': img.select(0).projection(), 'scale': 100})
.select('distance')
.mask()
.rename('cloud_transform'))
# Identify the intersection of dark pixels with cloud shadow projection.
shadows = cld_proj.multiply(dark_pixels).rename('shadows')
# Add dark pixels, cloud projection, and identified shadows as image bands.
return img.addBands(ee.Image([dark_pixels, cld_proj, shadows]))
"""
Explanation: Cloud shadow components
Define a function to add dark pixels, cloud projection, and identified shadows as bands to an S2 SR image input. Note that the image input needs to be the result of the above add_cloud_bands function because it relies on knowing which pixels are considered cloudy ('clouds' band).
End of explanation
"""
def add_cld_shdw_mask(img):
# Add cloud component bands.
img_cloud = add_cloud_bands(img)
# Add cloud shadow component bands.
img_cloud_shadow = add_shadow_bands(img_cloud)
# Combine cloud and shadow mask, set cloud and shadow as value 1, else 0.
is_cld_shdw = img_cloud_shadow.select('clouds').add(img_cloud_shadow.select('shadows')).gt(0)
# Remove small cloud-shadow patches and dilate remaining pixels by BUFFER input.
# 20 m scale is for speed, and assumes clouds don't require 10 m precision.
is_cld_shdw = (is_cld_shdw.focalMin(2).focalMax(BUFFER*2/20)
.reproject(**{'crs': img.select([0]).projection(), 'scale': 20})
.rename('cloudmask'))
# Add the final cloud-shadow mask to the image.
return img_cloud_shadow.addBands(is_cld_shdw)
"""
Explanation: Final cloud-shadow mask
Define a function to assemble all of the cloud and cloud shadow components and produce the final mask.
End of explanation
"""
# Import the folium library.
import folium
# Define a method for displaying Earth Engine image tiles to a folium map.
def add_ee_layer(self, ee_image_object, vis_params, name, show=True, opacity=1, min_zoom=0):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name=name,
show=show,
opacity=opacity,
min_zoom=min_zoom,
overlay=True,
control=True
).add_to(self)
# Add the Earth Engine layer method to folium.
folium.Map.add_ee_layer = add_ee_layer
"""
Explanation: Visualize and evaluate cloud mask components
This section provides functions for displaying the cloud and cloud shadow components. In most cases, adding all components to images and viewing them is unnecessary. This section is included to illustrate how the cloud/cloud shadow mask is developed and demonstrate how to test and evaluate various parameters, which is helpful when defining masking variables for an unfamiliar region or time of year.
In applications outside of this tutorial, if you prefer to include only the final cloud/cloud shadow mask along with the original image bands, replace:
return img_cloud_shadow.addBands(is_cld_shdw)
with
return img.addBands(is_cld_shdw)
in the above add_cld_shdw_mask function.
Define functions to display image and mask component layers.
Folium will be used to display map layers. Import folium and define a method to display Earth Engine image tiles.
End of explanation
"""
def display_cloud_layers(col):
# Mosaic the image collection.
img = col.mosaic()
# Subset layers and prepare them for display.
clouds = img.select('clouds').selfMask()
shadows = img.select('shadows').selfMask()
dark_pixels = img.select('dark_pixels').selfMask()
probability = img.select('probability')
cloudmask = img.select('cloudmask').selfMask()
cloud_transform = img.select('cloud_transform')
# Create a folium map object.
center = AOI.centroid(10).coordinates().reverse().getInfo()
m = folium.Map(location=center, zoom_start=12)
# Add layers to the folium map.
m.add_ee_layer(img,
{'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 2500, 'gamma': 1.1},
'S2 image', True, 1, 9)
m.add_ee_layer(probability,
{'min': 0, 'max': 100},
'probability (cloud)', False, 1, 9)
m.add_ee_layer(clouds,
{'palette': 'e056fd'},
'clouds', False, 1, 9)
m.add_ee_layer(cloud_transform,
{'min': 0, 'max': 1, 'palette': ['white', 'black']},
'cloud_transform', False, 1, 9)
m.add_ee_layer(dark_pixels,
{'palette': 'orange'},
'dark_pixels', False, 1, 9)
m.add_ee_layer(shadows, {'palette': 'yellow'},
'shadows', False, 1, 9)
m.add_ee_layer(cloudmask, {'palette': 'orange'},
'cloudmask', True, 0.5, 9)
# Add a layer control panel to the map.
m.add_child(folium.LayerControl())
# Display the map.
display(m)
"""
Explanation: Define a function to display all of the cloud and cloud shadow components to an interactive Folium map. The input is an image collection where each image is the result of the add_cld_shdw_mask function defined previously.
End of explanation
"""
s2_sr_cld_col_eval_disp = s2_sr_cld_col_eval.map(add_cld_shdw_mask)
display_cloud_layers(s2_sr_cld_col_eval_disp)
"""
Explanation: Display mask component layers
Map the add_cld_shdw_mask function over the collection to add mask component bands to each image, then display the results.
Give the system some time to render everything, it should take less than a minute.
End of explanation
"""
AOI = ee.Geometry.Point(-122.269, 45.701)
START_DATE = '2020-06-01'
END_DATE = '2020-09-01'
CLOUD_FILTER = 60
CLD_PRB_THRESH = 40
NIR_DRK_THRESH = 0.15
CLD_PRJ_DIST = 2
BUFFER = 100
"""
Explanation: Evaluate mask component layers
In the above map, use the layer control panel in the upper right corner to toggle layers on and off; layer names are the same as band names, for easy code referral. Note that the layers have a minimum zoom level of 9 to avoid resource issues that can occur when visualizing layers that depend on the ee.Image.reproject function (used during cloud shadow project and mask dilation).
Try changing the above CLD_PRB_THRESH, NIR_DRK_THRESH, CLD_PRJ_DIST, and BUFFER input variables and rerunning the previous cell to see how the results change. Find a good set of values for a given overpass and then try the procedure with a new overpass with different cloud conditions (this S2 SR image browser app is handy for quickly identifying images and determining image collection filter criteria). Try to identify a set of parameter values that balances cloud/cloud shadow commission and omission error for a range of cloud types. In the next section, we'll use the values to actually apply the mask to generate a cloud-free composite for 2020.
Apply cloud and cloud shadow mask
In this section we'll generate a cloud-free composite for the same region as above that represents mean reflectance for July and August, 2020.
Define collection filter and cloud mask parameters
We'll redefine the parameters to be a little more aggressive, i.e. decrease the cloud probability threshold, increase the cloud projection distance, and increase the buffer. These changes will increase cloud commission error (mask out some clear pixels), but since we will be compositing images from three months, there should be plenty of observations to complete the mosaic.
End of explanation
"""
s2_sr_cld_col = get_s2_sr_cld_col(AOI, START_DATE, END_DATE)
"""
Explanation: Build a Sentinel-2 collection
Reassemble the S2-cloudless collection since the collection filter parameters have changed.
End of explanation
"""
def apply_cld_shdw_mask(img):
# Subset the cloudmask band and invert it so clouds/shadow are 0, else 1.
not_cld_shdw = img.select('cloudmask').Not()
# Subset reflectance bands and update their masks, return the result.
return img.select('B.*').updateMask(not_cld_shdw)
"""
Explanation: Define cloud mask application function
Define a function to apply the cloud mask to each image in the collection.
End of explanation
"""
s2_sr_median = (s2_sr_cld_col.map(add_cld_shdw_mask)
.map(apply_cld_shdw_mask)
.median())
"""
Explanation: Process the collection
Add cloud and cloud shadow component bands to each image and then apply the mask to each image. Reduce the collection by median (in your application, you might consider using medoid reduction to build a composite from actual data values, instead of per-band statistics).
End of explanation
"""
# Create a folium map object.
center = AOI.centroid(10).coordinates().reverse().getInfo()
m = folium.Map(location=center, zoom_start=12)
# Add layers to the folium map.
m.add_ee_layer(s2_sr_median,
{'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 2500, 'gamma': 1.1},
'S2 cloud-free mosaic', True, 1, 9)
# Add a layer control panel to the map.
m.add_child(folium.LayerControl())
# Display the map.
display(m)
"""
Explanation: Display the cloud-free composite
Display the results. Be patient while the map renders, it may take a minute; ee.Image.reproject is forcing computations to happen at 100 and 20 m scales (i.e. it is not relying on appropriate pyramid level scales for analysis). The issues with ee.Image.reproject being resource-intensive in this case are mostly confined to interactive map viewing. Batch image exports and table reduction exports where the scale parameter is set to typical Sentinel-2 scales (10-60 m) are less affected.
End of explanation
"""
|
kgrodzicki/machine-learning-specialization | course-3-classification/module-4-linear-classifier-regularization-assignment-blank.ipynb | mit | from __future__ import division
import graphlab
"""
Explanation: Logistic Regression with L2 regularization
The goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient.
Implement gradient ascent with an L2 penalty.
Empirically explore how the L2 penalty can ameliorate overfitting.
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby_subset.gl/')
"""
Explanation: Load and process review dataset
For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
End of explanation
"""
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for the important_words)
Refer to Module 3 assignment for more details.
End of explanation
"""
products
"""
Explanation: Now, let us take a look at what the dataset looks like (Note: This may take a few minutes).
End of explanation
"""
train_data, validation_data = products.random_split(.8, seed=2)
print 'Training set : %d data points' % len(train_data)
print 'Validation set : %d data points' % len(validation_data)
"""
Explanation: Train-Validation split
We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use seed=2 so that everyone gets the same result.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters. Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
End of explanation
"""
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
"""
Explanation: Convert SFrame to NumPy array
Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels.
Note: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
End of explanation
"""
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
"""
Explanation: We convert both the training and validation sets into NumPy arrays.
Warning: This may take a few minutes.
End of explanation
"""
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
"""
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-4-assignment-numpy-arrays.npz')
feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']
feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']
End of explanation
"""
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
score= np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = 1 / (1 + np.exp(-score))
return predictions
"""
Explanation: Building on logistic regression with no L2 penalty assignment
Let us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of important_words in the review $\mathbf{x}_i$.
We will use the same code as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
End of explanation
"""
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative += -2. * l2_penalty * coefficient
return derivative
"""
Explanation: Adding L2 penalty
Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.
Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Adding L2 penalty to the derivative
It takes only a small modification to add a L2 penalty. All terms indicated in red refer to terms that were added due to an L2 penalty.
Recall from the lecture that the link function is still the sigmoid:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
We add the L2 penalty term to the per-coefficient derivative of log likelihood:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
The per-coefficient derivative for logistic regression with an L2 penalty is as follows:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }
$$
and for the intercept term, we have
$$
\frac{\partial\ell}{\partial w_0} = \sum{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
Note: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature.
Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments:
* errors vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$
* coefficient containing the current value of coefficient $w_j$.
* l2_penalty representing the L2 penalty constant $\lambda$
* feature_is_constant telling whether the $j$-th feature is constant or not.
End of explanation
"""
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
"""
Explanation: Quiz question: In the code above, was the intercept term regularized?
To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
End of explanation
"""
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
#
derivative = feature_derivative_with_L2(errors, feature_matrix[:,j], coefficients[j], l2_penalty, is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
"""
Explanation: Quiz question: Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$?
The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
End of explanation
"""
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
"""
Explanation: Explore effects of L2 regularization
Now that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using L2 regularization in analyzing sentiment for product reviews. As iterations pass, the log likelihood should increase.
Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
End of explanation
"""
table = graphlab.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
"""
Explanation: Compare coefficients
We now compare the coefficients for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.
Below is a simple helper function that will help us create this table.
End of explanation
"""
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
"""
Explanation: Now, let's run the function add_coefficients_to_table for each of the L2 penalty strengths.
End of explanation
"""
coefficients = list(coefficients_0_penalty[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
positive_words = word_coefficient_tuples[:5]
negative_words = word_coefficient_tuples[-5:]
"""
Explanation: Using the coefficients trained with L2 penalty 0, find the 5 most positive words (with largest positive coefficients). Save them to positive_words. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to negative_words.
Quiz Question. Which of the following is not listed in either positive_words or negative_words?
End of explanation
"""
positive_words = table.sort('coefficients [L2=0]', ascending = False)['word'][0:5]
positive_words
negative_words = table.sort('coefficients [L2=0]')['word'][0:5]
negative_words
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in xrange(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in xrange(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel('L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
"""
Explanation: Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
End of explanation
"""
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
"""
Explanation: Run the following cell to generate the plot. Use the plot to answer the following quiz question.
End of explanation
"""
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
"""
Explanation: Quiz Question: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.
Quiz Question: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.)
Measuring accuracy
Now, let us compute the accuracy of the classifier model. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Recall from lecture that that the class prediction is calculated using
$$
\hat{y}_i =
\left{
\begin{array}{ll}
+1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \
-1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \
\end{array}
\right.
$$
Note: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.
Based on the above, we will use the same code that was used in Module 3 assignment.
End of explanation
"""
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print "L2 penalty = %g" % key
print "train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key])
print "--------------------------------------------------------------------------------"
"""
Explanation: Below, we compare the accuracy on the training data and validation data for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
End of explanation
"""
|
ethen8181/machine-learning | python/algorithms/basic_data_structure.ipynb | mit | from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[1])
%load_ext watermark
%watermark -a 'Ethen' -d -t -v -p jupyterthemes
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Basic-Data-Structure" data-toc-modified-id="Basic-Data-Structure-1"><span class="toc-item-num">1 </span>Basic Data Structure</a></span><ul class="toc-item"><li><span><a href="#Stack" data-toc-modified-id="Stack-1.1"><span class="toc-item-num">1.1 </span>Stack</a></span></li><li><span><a href="#Queue" data-toc-modified-id="Queue-1.2"><span class="toc-item-num">1.2 </span>Queue</a></span></li><li><span><a href="#Unordered-(Linked)List" data-toc-modified-id="Unordered-(Linked)List-1.3"><span class="toc-item-num">1.3 </span>Unordered (Linked)List</a></span></li><li><span><a href="#Ordered-(Linked)List" data-toc-modified-id="Ordered-(Linked)List-1.4"><span class="toc-item-num">1.4 </span>Ordered (Linked)List</a></span></li></ul></li></ul></div>
End of explanation
"""
class Stack:
"""
Last In First Out (LIFO)
creates a new stack that is empty,
assumes that the end of the list will hold
the top element of the stack
References
----------
https://docs.python.org/3/tutorial/datastructures.html#using-lists-as-stacks
"""
def __init__(self):
self.items = []
def is_empty(self):
"""
For sequences, (strings, lists, tuples),
use the fact that empty sequences are false
http://stackoverflow.com/questions/53513/best-way-to-check-if-a-list-is-empty
"""
return not self.items
def push(self, item):
"""adds a new item to the top of the stack"""
self.items.append(item)
def pop(self):
"""
removes the top item from the stack,
popping an empty stack (list) will result in an error
"""
return self.items.pop()
def peek(self):
"""returns the top item from the stack but does not remove it"""
return self.items[len(self.items) - 1]
def size(self):
return len(self.items)
s = Stack()
print(s.is_empty())
s.push(4)
s.push(5)
print(s.is_empty())
print(s.pop())
"""
Explanation: Basic Data Structure
Following the online book, Problem Solving with Algorithms and Data Structures. Chapter 4 works through basic data structures and some of its use cases.
Stack
End of explanation
"""
def rev_string(string):
s = Stack()
for char in string:
s.push(char)
rev_str = ''
while not s.is_empty():
rev_str += s.pop()
return rev_str
test = 'apple'
rev_string(test)
"""
Explanation: Write a function rev_string(mystr) that uses a stack to reverse the characters in a string.
End of explanation
"""
def match(top, char):
if top == '[' and char == ']':
return True
if top == '{' and char == '}':
return True
if top == '(' and char == ')':
return True
return False
def check_paren(text):
"""confirm balance parenthesis in operations"""
# define the set of opening
# and closing brackets
opens = '([{'
close = ')]}'
s = Stack()
balanced = True
for char in text:
if char in opens:
s.push(char)
if char in close:
# if a closing bracket appeared
# without a opening bracket
if s.is_empty():
balanced = False
break
else:
# if there is a mismatch between the
# closing and opening bracket
top = s.pop()
if not match(top, char):
balanced = False
break
if balanced and s.is_empty():
return True
else:
return False
test1 = '{{([][])}()}'
balanced = check_paren(test1)
print(balanced)
test2 = '{test}'
balanced = check_paren(test2)
print(balanced)
test3 = ']'
balanced = check_paren(test3)
print(balanced)
"""
Explanation: Check for balance parenthesis.
End of explanation
"""
def convert_binary(num):
"""assumes positive number is given"""
s = Stack()
while num > 0:
remainder = num % 2
s.push(remainder)
num = num // 2
binary_str = ''
while not s.is_empty():
binary_str += str(s.pop())
return binary_str
num = 42
binary_str = convert_binary(num)
binary_str
"""
Explanation: Convert numbers into binary representation.
End of explanation
"""
import string
def infix_to_postfix(formula):
"""assume input formula is space-delimited"""
s = Stack()
output = []
prec = {'*': 3, '/': 3, '+': 2, '-': 2, '(': 1}
operand = string.digits + string.ascii_uppercase
for token in formula.split():
if token in operand:
output.append(token)
elif token == '(':
s.push(token)
elif token == ')':
top = s.pop()
while top != '(':
output.append(top)
top = s.pop()
else:
while (not s.is_empty()) and prec[s.peek()] > prec[token]:
top = s.pop()
output.append(top)
s.push(token)
while not s.is_empty():
top = s.pop()
output.append(top)
postfix = ' '.join(output)
return postfix
formula = 'A + B * C'
output = infix_to_postfix(formula)
output
formula = '( A + B ) * C'
output = infix_to_postfix(formula)
output
"""
Explanation: Convert operators from infix to postfix.
End of explanation
"""
class Queue:
"""
First In First Out (FIFO)
assume rear is at position 0 in the list,
so the last element of the list is the front
"""
def __init__(self):
self.items = []
def is_empty(self):
return not self.items
def enqueue(self, item):
self.items.insert(0, item)
def dequeue(self):
return self.items.pop()
def size(self):
return len(self.items)
q = Queue()
q.enqueue(4)
q.enqueue(3)
q.enqueue(10)
# 4 is the first one added, hence it's the first
# one that gets popped
print(q.dequeue())
print(q.size())
"""
Explanation: Queue
End of explanation
"""
from collections import deque
def check_palindrome(word):
equal = True
queue = deque([token for token in word])
while len(queue) > 1 and equal:
first = queue.popleft()
last = queue.pop()
if first != last:
equal = False
return equal
test = 'radar'
check_palindrome(test)
"""
Explanation: From Python Documentation: Using Lists as Queues
Although the queue functionality can be implemented using a list, it is not efficient for this purpose. Because doing inserts or pops from the beginning of a list is slow (all of the other elements have to be shifted by one).
We instead use deque. It is designed to have fast appends and pops from both ends.
Implement the palindrome checker to check if a given word is a palindrom. A palindrome is a string that reads the same forward and backward, e.g. radar.
End of explanation
"""
class Node:
"""
node must contain the list item itself (data)
node must hold a reference that points to the next node
"""
def __init__(self, initdata):
# None will denote the fact that there is no next node
self._data = initdata
self._next = None
def get_data(self):
return self._data
def get_next(self):
return self._next
def set_data(self, newdata):
self._data = newdata
def set_next(self, newnext):
self._next = newnext
class UnorderedList:
def __init__(self):
"""
we need identify the position for the first node,
the head, When the list is first initialized
it has no nodes
"""
self.head = None
def is_empty(self):
return self.head is None
def add(self, item):
"""
takes data, initializes a new node with the given data
and add it to the list, the easiest way is to
place it at the head of the list and
point the new node at the old head
"""
node = Node(item)
node.set_next(self.head)
self.head = node
return self
def size(self):
"""
traverse the linked list and keep a count
of the number of nodes that occurred
"""
count = 0
current = self.head
while current is not None:
count += 1
current = current.get_next()
return count
def search(self, item):
"""
goes through the entire list to check
and see there's a matched value
"""
found = False
current = self.head
while current is not None and not found:
if current.get_data() == item:
found = True
else:
current = current.get_next()
return found
def delete(self, item):
"""
traverses the list in the same way that search does,
but this time we keep track of the current node and the previous node.
When delete finally arrives at the node it wants to delete,
it looks at the previous node and resets that previous node’s pointer
so that, rather than pointing to the soon-to-be-deleted node,
it will point to the next node in line;
this assumes item is in the list
"""
found = False
previous = None
current = self.head
while current is not None and not found:
if current.get_data() == item:
found = True
else:
previous = current
current = current.get_next()
# note this assumes the item is in the list,
# if not we'll have to write another if else statement
# here to handle it
if previous is None:
# handle cases where the first item is deleted,
# i.e. where we are modifying the head
self.head = current.get_next()
else:
previous.set_next(current.get_next())
return self
def __repr__(self):
value = []
current = self.head
while current is not None:
data = str(current.get_data())
value.append(data)
current = current.get_next()
return '[' + ', '.join(value) + ']'
mylist = UnorderedList()
mylist.add(31)
mylist.add(17)
mylist.add(91)
mylist.delete(17)
print(mylist.search(35))
mylist
"""
Explanation: Unordered (Linked)List
Blog: Implementing a Singly Linked List in Python
End of explanation
"""
class OrderedList:
def __init__(self):
self.head = None
def add(self, item):
"""
takes data, initializes a new node with the given data
and add it to the list while maintaining relative order
"""
# stop when the current node is larger than the item
stop = False
previous = None
current = self.head
while current is not None and not stop:
if current.get_data() > item:
stop = True
else:
previous = current
current = current.get_next()
# check whether it will be added to the head
# or somewhether in the middle and handle
# both cases
node = Node(item)
if previous is None:
node.set_next(self.head)
self.head = node
else:
previous.set_next(node)
node.set_next(current)
return self
def search(self, item):
"""
goes through the entire list to check
and see there's a matched value, but
we can stop if the current node's value
is greater than item since the list is sorted
"""
stop = False
found = False
current = self.head
while current is not None and not found and not stop:
if current.get_data() == item:
found = True
else:
if current.get_data() > item:
stop = True
else:
current = current.get_next()
return found
def __repr__(self):
value = []
current = self.head
while current is not None:
data = str(current.get_data())
value.append(data)
current = current.get_next()
return '[' + ', '.join(value) + ']'
mylist = OrderedList()
mylist.add(17)
mylist.add(77)
mylist.add(31)
print(mylist.search(31))
mylist
"""
Explanation: Ordered (Linked)List
As the name suggests, compared to unordered linkedlist, the elements will always be ordered for a ordered linked list. The is_empty and size method are exactly the same as the unordered linkedlist as they don't have anything to do with the actual item in the list.
End of explanation
"""
|
WNoxchi/Kaukasos | FADL1/L3CA2_rossmann-Copy1.ipynb | mit | %matplotlib inline
%reload_ext autoreload
%autoreload 2
# from fastai.imports import *
# from fastai.torch_imports import *
from fastai.structured import * # non-PyTorch specfc Machine-Learning tools; indep lib
# from fastai.dataset import * # lets us do fastai PyTorch stuff w/ structured columnar data
from fastai.column_data import *
np.set_printoptions(threshold=50, edgeitems=20)
# from sklearn_pandas import DataFrameMapper
# from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
# import operator
PATH = 'data/rossmann/'
"""
Explanation: FastAI DL1 Lesson 3 CodeAlong II:
Rossmann - Structured Data
Code Along of lesson3-rossman.ipynb
1 Structured and Time Series Data
This notebook contains an implementation of the third place result in the Rossmann Kaggle competition as detailed in Gui/Berkhahn's Entity Embeddings of Categorical Variables.
The motivation behind exploring this architecture is it's relevance to real-world application. Most data used for decision making day-to-day in industry is structured and/or time-series data. Here we explore the end-to-end process of using neural networks with practical structured data problems.
End of explanation
"""
def concat_csvs(dirname):
path = f'{PATH}{dirname}'
filenames = glob.glob(f'{path}/*.csv')
wrote_header = False
with open(f'{path}.csv', 'w') as outputfle:
for filename in filenames:
name = filename.split('.')[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write('file' + line)
for line in f:
outputfile.write(name + ',' + line)
outputfile.write('\n')
# concat_csvs('googletrend')
# concat_csvs('weather')
"""
Explanation: 1.1 Create Datasets
In addition to the provided data, we'll be using external datasets put together by participants in the Kaggle competition. You can download all of them here.
For completeness, the implementation used to put them together is included below.
End of explanation
"""
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
"""
Explanation: Feature Space:
* train: Training set provided by competition
* store: List of stores
* store_states: Mapping of store to the German state they're in
* List of German state names
* googletrend: Trend of certain google keywords over time, found by users to correlate well w/ given daya
* weather: Weather
* test: Testing set
End of explanation
"""
tables = [pd.read_csv(f'{PATH}{fname}.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
"""
Explanation: We'll be using the popular data manipulation framework pandas. Among other things, Pandas allows you to manipulate tables/DataFrames in Python as one would in a database.
We're going to go ahead and load all our csv's as DataFrames into the list tables.
End of explanation
"""
for t in tables: display(t.head())
"""
Explanation: We can use head() to get a quick look at the contents of each table:
* train: Contains store information on a daily basis, tracks things like sales, customers, whether that day was a holiday, etc.
* store: General info about the store including competition, etc.
* store_states: Maps store to state it's in
* state_names: Maps state abbreviations to names
* googletrend: Trend data for particular week/state
* weather: Weather conditions for each state
* test: Same as training table, w/o sales and customers
End of explanation
"""
# NOTE .summary() isn't printing, it's returning. So display() is
# needed to print the multiple calls.
for t in tables: display(DataFrameSummary(t).summary())
"""
Explanation: This is vert representative of a typical industry dataset.
The following returns summarized aggregate information to each table accross each field.
End of explanation
"""
train, store, store_states, state_names, googletrend, weather, test = tables
len(train), len(test)
"""
Explanation: 1.2 Data Cleaning / Feature Engineering
As a structuered data problem, we necessarily have to go through all the cleaning and feature engineering, even though we're using a neural network.
End of explanation
"""
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday !='0'
"""
Explanation: We turn state Holidays to booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
End of explanation
"""
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
"""
Explanation: join_df is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table.
Pandas does joins using the merge method. The suffixes argument doescribes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left entouched, and append a "_y" to those on the right.
End of explanation
"""
weather = join_df(weather, state_names, "file", "StateName")
"""
Explanation: Join weather/state names:
End of explanation
"""
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State']= googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
"""
Explanation: In Pandas you can add new columns to a DataFrame by simply definint it. We'll do this for googletrends by extracting dates and state names from the given data an dadding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight Pandas indexing. We can use .loc[rows, cols] to select a list of rows and a list of columns from the DataFrame. In this case, we're selecting rows w/ statename 'NI' by using a boolean list googletrend.State=='NI' and selecting "State".
End of explanation
"""
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
"""
Explanation: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should always consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
End of explanation
"""
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
"""
Explanation: The Google Trends data has a special category for the whole of the US - we'll pull that out so we can use it explicitly. * -- DE?*
End of explanation
"""
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
# NOTE: if you do this multiple times, as per join_df(..)'s code,
# you'll add the last column onto store, with '_y' as its suffix.
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]), len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year","Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year","Week"])
len(joined[joined.trend.isnull()]), len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]), len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]), len(joined_test[joined_test.Mean_TemperatureC.isnull()])
# oh very cool -- I think this function removes extra/accidental column joins
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
"""
Explanation: Now we can outer join all of our data into a single DataFrame. Recall that in outer joins everytime a value in the joining field on the left table doesn't have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields.One way to check that all records are consistent and complete is to check for Null values post-join, as we do here.
Aside Why not just do an inner join? If you're assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event that you're wrong or a mistake is made, an outer join followed by a Null-Check will catch it (Comparing before/after # of rows for inner join is equivalent, but requires keeping track of before/after row #'s. Outer join is easier.).
End of explanation
"""
for df in (joined, joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth']= df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
"""
Explanation: Next we'll fill in missing values to avoid complications with NA's. NA (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we're picking an aribtirary signal value that doesn't otherwise appear in the data.
End of explanation
"""
for df in (joined, joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
"""
Explanation: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across DataFrame values.
End of explanation
"""
for df in (joined, joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
"""
Explanation: We'll replace some erroneous / outlying data
End of explanation
"""
for df in (joined, joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
"""
Explanation: We add the "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
End of explanation
"""
for df in (joined, joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
for df in (joined, joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
"""
Explanation: Same process for Promo dates.
End of explanation
"""
?? proc_df
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values, df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1).astype(int))
df[pre+fld] = res
"""
Explanation: NOTE: make sure joined or joined_test is loaded into memory (either reinitialized from above, or loaded from disk if saved as below) when switching between train and test datasets.
Actually you'll probably have to make sure you have them both read in anyway, if you're going to run some of the lines further below in the same cells together.
1.3 Durations
It's common when working with time series data to extract data that explains relationships accross rows as opposed to columns, eg:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they're designed to work with relationships across columns. As such, we've created a class to handle this type of data.
We'll define a function get_elapsed for cumulative counting across a sorted DataFrame. Given a particular field fld to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime NA's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
End of explanation
"""
# # reading back in joined & joined_test when switching df betwen train/test:
# joined = pd.read_feather(f'{PATH}joined')
# joined_test = pd.read_feather(f'{PATH}joined_test')
"""
Explanation: We'll be applying this to a subset of columns:
End of explanation
"""
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
df = train[columns]
df = test[columns]
"""
Explanation: NOTE: Init columns and define df as train, then run all cells below until joined = join_df(.), then Re-init columns and define df as test, then run all cells below, this time with joined_test = join_df(.).
This took me a very long time to learn -- Hakan
End of explanation
"""
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: NOTE: You must rerun the cell below before running either df = train[columns] or df = test[columns], ie: before switching between train and test datasets -- since columns is redefined for the purpose of converting NaNs to zeros further on down below.
NOTE: when running on the train-set, do train[columns], when running on the test-set, do test[columns] --- idk yet if you have to run all the cells afterwards, but I think you do.
Let's walk through an exmaple:
Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call add_elapsed('SchoolHoliday', 'After'): This'll apply to each row with School Holiday:
Applied to every row of the DataFrame in order of store and date
Will add to the DataFrame the days since seeing a School Holiday
If we sort in the other direction, this'll count the days until another holiday
End of explanation
"""
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
"""
Explanation: We'll do this for two more fields:
End of explanation
"""
df = df.set_index("Date")
"""
Explanation: We're going to set the active index to Date
End of explanation
"""
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0)
"""
Explanation: Then set Null values from elapsed field calculations to 0.
End of explanation
"""
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
"""
Explanation: Next we'll demonstrate window functions in Pandas to calculate rolling quantities.
Here we're sorting by date (sort_index()) and counting the number of events of interest (sum() defined in columns in the following week (rolling()), grouped by Store (groupby()). We do the same in the opposite direction.
End of explanation
"""
bwd.drop('Store', 1, inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store', 1, inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
"""
Explanation: Next we want to drop the Store indices grouped together in the window function.
Often in Pandas, there's an option to do this in place. This is time and memory efficient when working with large datasets.
End of explanation
"""
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns, 1, inplace=True)
df.head()
"""
Explanation: Now we'll merge these values onto the df.
End of explanation
"""
df.to_feather(f'{PATH}df')
# df = pd.read_feather(f'{PATH}df', index_col=0)
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
"""
Explanation: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
End of explanation
"""
joined = joined[joined.Sales != 0]
"""
Explanation: NOTE: *you'll get a Buffer dtype mismatch ValueError here unless you have joined or joined_test loaded in from before up above. -- Note for when rerunning this notebook: switching between test and train datasets.
The authors also removed all instances where the store had zero sale / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where astores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By ommitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
End of explanation
"""
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_feather(f'{PATH}joined')
joined_test.to_feather(f'{PATH}joined_test')
"""
Explanation: We'll back this up as well
End of explanation
"""
joined = pd.read_feather(f'{PATH}joined')
joined_test = pd.read_feather(f'{PATH}joined_test')
joined.head().T.head(40)
"""
Explanation: We now have our final set of engineered features.
While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
1.4 Create Features
End of explanation
"""
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
'SchoolHoliday_fw', 'SchoolHoliday_bw']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
n = len(joined); n
dep = 'Sales'
joined = joined[cat_vars+contin_vars+[dep, 'Date']].copy()
joined_test[dep] = 0
joined_test = joined_test[cat_vars+contin_vars+[dep, 'Date', 'Id']].copy()
for v in cat_vars: joined[v] = joined[v].astype('category').cat.as_ordered()
apply_cats(joined_test, joined)
for v in contin_vars:
joined[v] = joined[v].astype('float32')
joined_test[v] = joined_test[v].astype('float32')
"""
Explanation: Now that we've engineered all our features, we need to convert to input compatible with a neural network.
This includes converting categorical variables into contiguous integers or one-hot encodings, normalizing continuous features to standard normal, etc..
End of explanation
"""
idxs = get_cv_idxs(n, val_pct=150000/n)
joined_samp = joined.iloc[idxs].set_index("Date")
samp_size = len(joined_samp); samp_size
"""
Explanation: We're going to run on a sample:
End of explanation
"""
samp_size = n
joined_samp = joined.set_index("Date")
"""
Explanation: To run on the full dataset, use this instead:
End of explanation
"""
joined_samp.head(2)
df, y, nas, mapper = proc_df(joined_samp, 'Sales', do_scale=True)
yλ = np.log(y) # the amount of times I mix up `1` with `l` ... so I use `λ`
joined_test = joined_test.set_index("Date")
df_test, _, nas, mapper = proc_df(joined_test, 'Sales', do_scale=True, skip_flds=['Id'],
mapper=mapper, na_dict=nas)
df.head(2)
"""
Explanation: We can now process our data...
End of explanation
"""
train_ratio = 0.75
# train_ration = 0.9
train_size = int(samp_size + train_ratio); train_size
val_idx = list(range(train_size, len(df)))
"""
Explanation: In time series data, cross-validation is not random. Instead, our holdout data is generally the most recent data, as it would be a in a real application. This issue is discussed in detail in this post on our website.
One approach is to take the last 25% of rows (sorted by date) as our validation set.
End of explanation
"""
val_idx = np.flatnonzero(
(df.index<=datetime.datetime(2014,9,17)) & (df.index>=datetime.datetime(2014,8,1)))
val_idx=[0]
"""
Explanation: An even better option for picking a validation set is using the exact same length of time period as the test set uses - this is implemented here:
End of explanation
"""
def inv_y(a): return np.exp(a)
def exp_rmspe(y_pred, targ):
targ = inv_y(targ)
pct_var = (targ - inv_y(y_pred))/targ
return math.sqrt((pct_var**2).mean())
max_log_y = np.max(yλ)
y_range = (0, max_log_y)
"""
Explanation: 1.6 DL
We're ready to put together our models.
Root-Mean-Squared Percent Error is the metric Kaggle used for this competition.
End of explanation
"""
md = ColumnarModelData.from_data_frame(PATH, val_idx, df, yλ.astype(np.float32), cat_flds=cat_vars, bs=128,
test_df=df_test)
"""
Explanation: We can create a ModelData object directly from our DataFrame.
End of explanation
"""
cat_sz = [(c, len(joined_samp[c].cat.categories)+1) for c in cat_vars]
cat_sz
"""
Explanation: Some categorical variables have a lot more levels than others. Store, in particular, has over a thousand!
End of explanation
"""
emb_szs = [(c, min(50, (c+1)//2)) for _, c in cat_sz]
emb_szs
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
λr = 1e-3
m.lr_find()
m.sched.plot(100)
# learning rate finder run on full dataset:
m.lr_find()
m.sched.plot(100)
"""
Explanation: We use the cardinality of each variable (the number of unique values) to decide how large to make its embeddings. Each level will be associated with a vector with length defined as below.
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
λr = 1e-3
m.fit(λr, 3, metrics=[exp_rmspe])
m.fit(λr, 5, metrics=[exp_rmspe], cycle_len=1)
m.fit(λr, 2, metrics=[exp_rmspe], cycle_len=4)
"""
Explanation: 1.6.1 Sample
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
λr = 1e-3
m.fit(λr, 1, metrics=[exp_rmspe])
m.fit(λr, 3, metrics=[exp_rmspe])
m.fit(λr, 3, metrics=[exp_rmspe], cycle_len=1)
"""
Explanation: 1.6.2 All
End of explanation
"""
m = md.get_learner(emb_szs, len(df.columns)-len(cat_vars),
0.04, 1, [1000,500], [0.001,0.01], y_range=y_range)
λr = 1e-3
m.fit(λr, 3, metrics=[exp_rmspe])
m.fit(λr, 3, metrics=[exp_rmspe], cycle_len=1)
m.save('val0')
# m.load('val0')
x,y = m.predict_with_targs()
exp_rmspe(x,y)
pred_test = m.predict(True)
pred_test = np.exp(pred_test)
joined_test['Sales'] = pred_test
csv_fn = f'{PATH}tmp/sub.csv'
joined_test[['Id','Sales']].to_csv(csv_fn, index=False)
FileLink(csv_fn)
"""
Explanation: 1.6.3 Test
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
((val, trn), (y_val, y_trn)) = split_by_idx(val_idx, df.values, yλ)
m = RandomForestRegressor(n_estimators=40, max_features=0.99, min_samples_leaf=2,
n_jobs=-1, oob_score=True)
m.fit(trn, y_trn);
preds = m.predict(val)
m.score(trn, y_trn), m.score(val, y_val), m.oob_score_, exp_rmspe(preds, y_val)
"""
Explanation: RF
Random Forest
End of explanation
"""
|
atavory/ibex | examples/in_prog/movielens_simple_row_aggregating_features_with_stacking.ipynb | bsd-3-clause | from sklearn import datasets
print(datasets.load_boston()['DESCR'])
import os
from sklearn import base
from scipy import stats
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
sns.despine()
import ibex
from ibex.sklearn import model_selection as pd_model_selection
from ibex.sklearn import linear_model as pd_linear_model
from ibex.xgboost import XGBRegressor as PdXGBRegressor
%pylab inline
ratings = pd.read_csv(
'../movielens_data/ml-100k/u.data',
sep='\t',
header=None,
names=['user_id', 'item_id', 'rating', 'timestamp'])
features = ['user_id', 'item_id']
ratings = ratings.sample(frac=1)
ratings[features + ['rating']].head()
"""
Explanation: Simple Row-Aggregating Features With Stacking In The Movielens Dataset
This example illustrates using pandas-munging capabilities in estimators building features that draw from several rows. We will use a single table from the Movielens dataset (F. Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS)).
Loading The Data
In this example, we'll only use the dataset table describing the ratings themselves. I.e., each row is an instance of a single rating given by a specific user to a specific movie.
End of explanation
"""
hist(ratings.user_id.groupby(ratings.user_id).count().values);
np.random.rand()
np.random.choice([1, 2], p=[0.1, 0.9])
p = 0.1
def sample_group(g):
frac = np.random.choice([1. / len(g), 2. / len(g), 2. / len(g), 1], p=[0.2, 0.2, 0.2, 0.4])
return g.sample(frac=frac)
reduced_ratings = ratings.groupby(ratings.user_id, as_index=False).apply(sample_group)
reduced_ratings.index = reduced_ratings.index.levels[1]
reduced_ratings = reduced_ratings.sample(frac=1)
reduced_ratings.head()
hist(reduced_ratings.user_id.groupby(reduced_ratings.user_id).count().values, bins=range(1, 100));
by_user_std = ratings.rating.groupby(ratings.user_id).std()
axvline(
ratings.rating.std(),
linewidth=2,
linestyle='dashed',
color='black');
annotate(
'mean std.',
xy=(1.01 * ratings.rating.std(), int(max(hist(by_user_std)[0]))));
hist(
by_user_std,
color='darkgrey');
xlabel('std.')
ylabel('Num Occurrences')
"""
Explanation: dd
End of explanation
"""
user_ratings[user_ratings.user_id == 1]
"""
Explanation: Stacking Using Pandas
Unsuprisingly, the results are even worse than before. Methodically, this is the correct way of building the feature without peeking, and so the CV result should intuitively be weaker.
End of explanation
"""
class ScoresAggregator(base.BaseEstimator, base.TransformerMixin, ibex.FrameMixin):
def fit(self, X, y):
self._mean = y.groupby(X.user_id).mean().mean()
self._user_id_stats = y.groupby(X.user_id).agg([np.mean, 'count'])
self._user_id_stats.columns = ['user_id_mean', 'user_id_count']
return self
def transform(self, X):
user_ratings = pd.merge(
X[['user_id']],
self._user_id_stats,
left_on='user_id',
right_index=True,
how='left')[['user_id_mean', 'user_id_count']]
user_ratings.user_id_mean = user_ratings.user_id_mean.fillna(self._mean)
user_ratings.user_id_count = user_ratings.user_id_count.fillna(0)
return user_ratings
f = ScoresAggregator()
f.fit_transform(ratings[features], ratings.rating);
f = ScoresAggregator()
f.fit_transform(ratings[features], ratings.rating);
class StackingScoresAggregator(ScoresAggregator):
def fit_transform(self, X, y):
user_rating_stats = \
y.groupby(X.user_id).agg([np.sum, 'count'])
user_rating_stats.columns = ['user_id_sum', 'user_id_count']
hist(user_rating_stats['user_id_count'])
X_ = pd.concat([X['user_id'], y], axis=1)
X_.columns = ['user_id', 'rating']
user_ratings = pd.merge(
X_,
user_rating_stats,
left_on='user_id',
right_index=True,
how='left')
user_ratings.user_id_count -= 1
user_ratings['user_id_mean'] = np.where(
user_ratings.user_id_count == 0,
y.groupby(X.user_id).mean().mean(), # Tmp Ami
(user_ratings.user_id_sum - user_ratings.rating) / user_ratings.user_id_count)
self.fit(X, y)
hist(user_ratings['user_id_count'].groupby(user_ratings['user_id']).min());
# return self.fit(X, y).transform(X)
return user_ratings[['user_id_mean', 'user_id_count']]
from ibex.sklearn.linear_model import LinearRegression as PdLinearRegression
from ibex.sklearn.preprocessing import PolynomialFeatures as PdPolynomialFeatures
prd = ScoresAggregator() | PdPolynomialFeatures(degree=2) | PdLinearRegression()
stacking_prd = StackingScoresAggregator() | PdPolynomialFeatures(degree=2) | PdLinearRegression()
stacking_prd.fit(ratings[features], ratings.rating).score(ratings[features], ratings.rating)
#stacking_prd.steps[1][1].feature_importances_
scores = pd_model_selection.cross_val_score(prd, ratings[features], ratings.rating, cv=100, n_jobs=1)
hist(scores);
stacking_scores = pd_model_selection.cross_val_score(stacking_prd, ratings[features], ratings.rating, cv=100, n_jobs=1)
hist(stacking_scores);
plot(scores, stacking_scores, '+');
plot(np.linspace(0, 0.25, 100), np.linspace(0, 0.25, 100))
PdPolynomialFeatures?
hist(stacking_scores - scores);
np.mean(stacking_scores - scores)
stacking_scores == scores
from scipy import stats
stats.mannwhitneyu(stacking_scores, scores, alternative='less')
stacking_prd.steps[1][1].feature_importances_
prd
prd.steps[1][1]
preprocessing.PolynomialFeatures?
from sklearn import preprocessing
preprocessing.PolynomialFeatures().fit_transform(ratings[features]).shape
ratings[features].shape
from ibex.sklearn import preprocessing as pd_preprocessing
pd_preprocessing.PolynomialFeatures().fit_transform(ratings[features])
"""
Explanation: Building A Pandas-Munging Stacking Estimator
We'll now use Pandas to build a feature building these features.
For each movie, we'll store the mean score & number of occurrences.
For each user, we'll store the mean score & number of occurrences.
End of explanation
"""
|
kylepjohnson/notebooks | skflow/tensorflow, misc.ipynb | mit | import tensorflow.contrib.learn as skflow
from sklearn.datasets import load_iris
from sklearn import metrics
iris = load_iris()
iris.keys()
iris.feature_names
iris.target_names
# Withhold 3 for testing
test_idx = [0, 50, 100]
train_data = np.delete(iris.data, test_idx, axis=0)
train_target = np.delete(iris.target, test_idx)
test_target = iris.target[test_idx] # array([0, 1, 2])
test_data = iris.data[test_idx] # array([[ 5.1, 3.5, 1.4, 0.2], [ 7. , 3.2, 4.7, 1.4], ...])
"""
Explanation: Make decision tree from iris data
Taken from Google's Visualizing a Decision Tree - Machine Learning Recipes #2
End of explanation
"""
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
classifier.fit(iris.data, iris.target)
metrics.accuracy_score(iris.target, classifier.predict(iris.data))
"""
Explanation: Tensorflow
Examples from http://terrytangyuan.github.io/2016/03/14/scikit-flow-intro/
Deep neural network
3 layer deep neural network with 10, 20 and 10 hidden units in each layer, respectively.
End of explanation
"""
def my_model(X, y):
"""This is DNN with 10, 20, 10 hidden layers, and dropout of 0.5 probability."""
layers = skflow.ops.dnn(X, [10, 20, 10]) # keep_prob=0.5 causes error
return skflow.models.logistic_regression(layers, y)
classifier = skflow.TensorFlowEstimator(model_fn=my_model, n_classes=3)
classifier.fit(iris.data, iris.target)
metrics.accuracy_score(iris.target, classifier.predict(iris.data))
"""
Explanation: Custom model with TensorFlowEstimator()
End of explanation
"""
classifier = skflow.TensorFlowRNNClassifier(rnn_size=2, n_classes=15)
classifier.fit(iris.data, iris.target)
"""
Explanation: Recurrent neural network
See http://terrytangyuan.github.io/2016/03/14/scikit-flow-intro/#recurrent-neural-network.
End of explanation
"""
import numpy as np
from tensorflow.contrib.learn.python import learn
import tensorflow as tf
np.random.seed(42)
data = np.array(
list([[2, 1, 2, 2, 3], [2, 2, 3, 4, 5], [3, 3, 1, 2, 1], [2, 4, 5, 4, 1]
]),
dtype=np.float32)
# labels for classification
labels = np.array(list([1, 0, 1, 0]), dtype=np.float32)
# targets for regression
targets = np.array(list([10, 16, 10, 16]), dtype=np.float32)
test_data = np.array(list([[1, 3, 3, 2, 1], [2, 3, 4, 5, 6]]))
def input_fn(X):
return tf.split(1, 5, X)
# Classification
classifier = learn.TensorFlowRNNClassifier(rnn_size=2,
cell_type="lstm",
n_classes=2,
input_op_fn=input_fn)
classifier.fit(data, labels)
classifier.weights_
classifier.bias_
predictions = classifier.predict(test_data)
#assertAllClose(predictions, np.array([1, 0]))
classifier = learn.TensorFlowRNNClassifier(rnn_size=2,
cell_type="rnn",
n_classes=2,
input_op_fn=input_fn,
num_layers=2)
classifier.fit(data, labels)
classifier.predict(iris.data)
"""
Explanation: From https://github.com/tensorflow/tensorflow/blob/17dcc5a176d152caec570452d28fb94920cceb8c/tensorflow/contrib/learn/python/learn/tests/test_nonlinear.py
End of explanation
"""
|
tensorflow/probability | tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import time
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import bijectors as tfb
from tensorflow_probability import distributions as tfd
tf.enable_v2_behavior()
"""
Explanation: Approximate inference for STS models with non-Gaussian observations
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/STS_approximate_inference_for_models_with_non_Gaussian_observations"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/STS_approximate_inference_for_models_with_non_Gaussian_observations.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This notebook demonstrates the use of TFP approximate inference tools to incorporate a (non-Gaussian) observation model when fitting and forecasting with structural time series (STS) models. In this example, we'll use a Poisson observation model to work with discrete count data.
End of explanation
"""
num_timesteps = 30
observed_counts = np.round(3 + np.random.lognormal(np.log(np.linspace(
num_timesteps, 5, num=num_timesteps)), 0.20, size=num_timesteps))
observed_counts = observed_counts.astype(np.float32)
plt.plot(observed_counts)
"""
Explanation: Synthetic Data
First we'll generate some synthetic count data:
End of explanation
"""
def build_model(approximate_unconstrained_rates):
trend = tfp.sts.LocalLinearTrend(
observed_time_series=approximate_unconstrained_rates)
return tfp.sts.Sum([trend],
observed_time_series=approximate_unconstrained_rates)
"""
Explanation: Model
We'll specify a simple model with a randomly walking linear trend:
End of explanation
"""
positive_bijector = tfb.Softplus() # Or tfb.Exp()
# Approximate the unconstrained Poisson rate just to set heuristic priors.
# We could avoid this by passing explicit priors on all model params.
approximate_unconstrained_rates = positive_bijector.inverse(
tf.convert_to_tensor(observed_counts) + 0.01)
sts_model = build_model(approximate_unconstrained_rates)
"""
Explanation: Instead of operating on the observed time series, this model will operate on the series of Poisson rate parameters that govern the observations.
Since Poisson rates must be positive, we'll use a bijector to transform the
real-valued STS model into a distribution over positive values. The Softplus
transformation $y = \log(1 + \exp(x))$ is a natural choice, since it is nearly linear for positive values, but other choices such as Exp (which transforms the normal random walk into a lognormal random walk) are also possible.
End of explanation
"""
def sts_with_poisson_likelihood_model():
# Encode the parameters of the STS model as random variables.
param_vals = []
for param in sts_model.parameters:
param_val = yield param.prior
param_vals.append(param_val)
# Use the STS model to encode the log- (or inverse-softplus)
# rate of a Poisson.
unconstrained_rate = yield sts_model.make_state_space_model(
num_timesteps, param_vals)
rate = positive_bijector.forward(unconstrained_rate[..., 0])
observed_counts = yield tfd.Poisson(rate, name='observed_counts')
model = tfd.JointDistributionCoroutineAutoBatched(sts_with_poisson_likelihood_model)
"""
Explanation: To use approximate inference for a non-Gaussian observation model,
we'll encode the STS model as a TFP JointDistribution. The random variables in this joint distribution are the parameters of the STS model, the time series of latent Poisson rates, and the observed counts.
End of explanation
"""
pinned_model = model.experimental_pin(observed_counts=observed_counts)
"""
Explanation: Preparation for inference
We want to infer the unobserved quantities in the model, given the observed counts. First, we condition the joint log density on the observed counts.
End of explanation
"""
constraining_bijector = pinned_model.experimental_default_event_space_bijector()
"""
Explanation: We'll also need a constraining bijector to ensure that inference respects the constraints on the STS model's parameters (for example, scales must be positive).
End of explanation
"""
#@title Sampler configuration
# Allow external control of sampling to reduce test runtimes.
num_results = 500 # @param { isTemplate: true}
num_results = int(num_results)
num_burnin_steps = 100 # @param { isTemplate: true}
num_burnin_steps = int(num_burnin_steps)
"""
Explanation: Inference with HMC
We'll use HMC (specifically, NUTS) to sample from the joint posterior over model parameters and latent rates.
This will be significantly slower than fitting a standard STS model with HMC, since in addition to the model's (relatively small number of) parameters we also have to infer the entire series of Poisson rates. So we'll run for a relatively small number of steps; for applications where inference quality is critical it might make sense to increase these values or to run multiple chains.
End of explanation
"""
sampler = tfp.mcmc.TransformedTransitionKernel(
tfp.mcmc.NoUTurnSampler(
target_log_prob_fn=pinned_model.unnormalized_log_prob,
step_size=0.1),
bijector=constraining_bijector)
adaptive_sampler = tfp.mcmc.DualAveragingStepSizeAdaptation(
inner_kernel=sampler,
num_adaptation_steps=int(0.8 * num_burnin_steps),
target_accept_prob=0.75)
initial_state = constraining_bijector.forward(
type(pinned_model.event_shape)(
*(tf.random.normal(part_shape)
for part_shape in constraining_bijector.inverse_event_shape(
pinned_model.event_shape))))
# Speed up sampling by tracing with `tf.function`.
@tf.function(autograph=False, jit_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
kernel=adaptive_sampler,
current_state=initial_state,
num_results=num_results,
num_burnin_steps=num_burnin_steps,
trace_fn=None)
t0 = time.time()
samples = do_sampling()
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
"""
Explanation: First we specify a sampler, and then use sample_chain to run that sampling
kernel to produce samples.
End of explanation
"""
f = plt.figure(figsize=(12, 4))
for i, param in enumerate(sts_model.parameters):
ax = f.add_subplot(1, len(sts_model.parameters), i + 1)
ax.plot(samples[i])
ax.set_title("{} samples".format(param.name))
"""
Explanation: We can sanity-check the inference by examining the parameter traces. In this case they appear to have explored multiple explanations for the data, which is good, although more samples would be helpful to judge how well the chain is mixing.
End of explanation
"""
param_samples = samples[:-1]
unconstrained_rate_samples = samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(np.random.poisson(rate_samples),
[10, 90], axis=0)
_ = plt.plot(observed_counts, color="blue", ls='--', marker='o', label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color="green", ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey', label='counts', alpha=0.2)
plt.xlabel("Day")
plt.ylabel("Daily Sample Size")
plt.title("Posterior Mean")
plt.legend()
"""
Explanation: Now for the payoff: let's see the posterior over Poisson rates! We'll also plot the 80% predictive interval over observed counts, and can check that this interval appears to contain about 80% of the counts we actually observed.
End of explanation
"""
def sample_forecasted_counts(sts_model, posterior_latent_rates,
posterior_params, num_steps_forecast,
num_sampled_forecasts):
# Forecast the future latent unconstrained rates, given the inferred latent
# unconstrained rates and parameters.
unconstrained_rates_forecast_dist = tfp.sts.forecast(sts_model,
observed_time_series=unconstrained_rate_samples,
parameter_samples=posterior_params,
num_steps_forecast=num_steps_forecast)
# Transform the forecast to positive-valued Poisson rates.
rates_forecast_dist = tfd.TransformedDistribution(
unconstrained_rates_forecast_dist,
positive_bijector)
# Sample from the forecast model following the chain rule:
# P(counts) = P(counts | latent_rates)P(latent_rates)
sampled_latent_rates = rates_forecast_dist.sample(num_sampled_forecasts)
sampled_forecast_counts = tfd.Poisson(rate=sampled_latent_rates).sample()
return sampled_forecast_counts, sampled_latent_rates
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
def plot_forecast_helper(data, forecast_samples, CI=90):
"""Plot the observed time series alongside the forecast."""
plt.figure(figsize=(10, 4))
forecast_median = np.median(forecast_samples, axis=0)
num_steps = len(data)
num_steps_forecast = forecast_median.shape[-1]
plt.plot(np.arange(num_steps), data, lw=2, color='blue', linestyle='--', marker='o',
label='Observed Data', alpha=0.7)
forecast_steps = np.arange(num_steps, num_steps+num_steps_forecast)
CI_interval = [(100 - CI)/2, 100 - (100 - CI)/2]
lower, upper = np.percentile(forecast_samples, CI_interval, axis=0)
plt.plot(forecast_steps, forecast_median, lw=2, ls='--', marker='o', color='orange',
label=str(CI) + '% Forecast Interval', alpha=0.7)
plt.fill_between(forecast_steps,
lower,
upper, color='orange', alpha=0.2)
plt.xlim([0, num_steps+num_steps_forecast])
ymin, ymax = min(np.min(forecast_samples), np.min(data)), max(np.max(forecast_samples), np.max(data))
yrange = ymax-ymin
plt.title("{}".format('Observed time series with ' + str(num_steps_forecast) + ' Day Forecast'))
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.legend()
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
"""
Explanation: Forecasting
To forecast the observed counts, we'll use the standard STS tools to build a forecast distribution over the latent rates (in unconstrained space, again since STS is designed to model real-valued data), then pass the sampled forecasts through a Poisson observation model:
End of explanation
"""
surrogate_posterior = tfp.experimental.vi.build_factored_surrogate_posterior(
event_shape=pinned_model.event_shape,
bijector=constraining_bijector)
# Allow external control of optimization to reduce test runtimes.
num_variational_steps = 1000 # @param { isTemplate: true}
num_variational_steps = int(num_variational_steps)
t0 = time.time()
losses = tfp.vi.fit_surrogate_posterior(pinned_model.unnormalized_log_prob,
surrogate_posterior,
optimizer=tf.optimizers.Adam(0.1),
num_steps=num_variational_steps)
t1 = time.time()
print("Inference ran in {:.2f}s.".format(t1-t0))
plt.plot(losses)
plt.title("Variational loss")
_ = plt.xlabel("Steps")
posterior_samples = surrogate_posterior.sample(50)
param_samples = posterior_samples[:-1]
unconstrained_rate_samples = posterior_samples[-1][..., 0]
rate_samples = positive_bijector.forward(unconstrained_rate_samples)
plt.figure(figsize=(10, 4))
mean_lower, mean_upper = np.percentile(rate_samples, [10, 90], axis=0)
pred_lower, pred_upper = np.percentile(
np.random.poisson(rate_samples), [10, 90], axis=0)
_ = plt.plot(observed_counts, color='blue', ls='--', marker='o',
label='observed', alpha=0.7)
_ = plt.plot(np.mean(rate_samples, axis=0), label='rate', color='green',
ls='dashed', lw=2, alpha=0.7)
_ = plt.fill_between(
np.arange(0, 30), mean_lower, mean_upper, color='green', alpha=0.2)
_ = plt.fill_between(np.arange(0, 30), pred_lower, pred_upper, color='grey',
label='counts', alpha=0.2)
plt.xlabel('Day')
plt.ylabel('Daily Sample Size')
plt.title('Posterior Mean')
plt.legend()
forecast_samples, rate_samples = sample_forecasted_counts(
sts_model,
posterior_latent_rates=unconstrained_rate_samples,
posterior_params=param_samples,
# Days to forecast:
num_steps_forecast=30,
num_sampled_forecasts=100)
forecast_samples = np.squeeze(forecast_samples)
plot_forecast_helper(observed_counts, forecast_samples, CI=80)
"""
Explanation: VI inference
Variational inference can be problematic when inferring a full time series, like our approximate counts (as opposed to just
the parameters of a time series, as in standard STS models). The standard assumption that variables have independent posteriors is quite wrong, since each timestep is correlated with its neighbors, which can lead to underestimating uncertainty. For this reason, HMC may be a better choice for approximate inference over full time series. However, VI can be quite a bit faster, and may be useful for model prototyping or in cases where its performance can be empirically shown to be 'good enough'.
To fit our model with VI, we simply build and optimize a surrogate posterior:
End of explanation
"""
|
BadWizard/intro_programming | notebooks/classes.ipynb | mit | class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
"""
Explanation: Classes
So far you have learned about Python's core data types: strings, numbers, lists, tuples, and dictionaries. In this section you will learn about the last major data structure, classes. Classes are quite unlike the other data types, in that they are much more flexible. Classes allow you to define the information and behavior that characterize anything you want to model in your program. Classes are a rich topic, so you will learn just enough here to dive into the projects you'd like to get started on.
There is a lot of new language that comes into play when you start learning about classes. If you are familiar with object-oriented programming from your work in another language, this will be a quick read about how Python approaches OOP. If you are new to programming in general, there will be a lot of new ideas here. Just start reading, try out the examples on your own machine, and trust that it will start to make sense as you work your way through the examples and exercises.
Previous: More Functions |
Home
Contents
What are classes?
Object-Oriented Terminology
General terminology
A closer look at the Rocket class
The __init__() method
A simple method
Making multiple objects from a class
A quick check-in
Classes in Python 2.7
Exercises
Refining the Rocket class
Accepting parameters for the __init__() method
Accepting parameters in a method
Adding a new method
Exercises
Inheritance
The SpaceShuttle class
Inheritance in Python 2.7
Exercises
Modules and classes
Storing a single class in a module
Storing multiple classes in a module
A number of ways to import modules and classes
A module of functions
Exercises
Revisiting PEP 8
Imports
Module and class names
Exercises
top
What are classes?
Classes are a way of combining information and behavior. For example, let's consider what you'd need to do if you were creating a rocket ship in a game, or in a physics simulation. One of the first things you'd want to track are the x and y coordinates of the rocket. Here is what a simple rocket ship class looks like in code:
End of explanation
"""
###highlight=[11,12,13]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
"""
Explanation: One of the first things you do with a class is to define the _init_() method. The __init__() method sets the values for any parameters that need to be defined when an object is first created. The self part will be explained later; basically, it's a syntax that allows you to access a variable from anywhere else in the class.
The Rocket class stores two pieces of information so far, but it can't do anything. The first behavior to define is a core behavior of a rocket: moving up. Here is what that might look like in code:
End of explanation
"""
###highlight=[15,16, 17]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object.
my_rocket = Rocket()
print(my_rocket)
"""
Explanation: The Rocket class can now store some information, and it can do something. But this code has not actually created a rocket yet. Here is how you actually make a rocket:
End of explanation
"""
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object, and have it start to move up.
my_rocket = Rocket()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
"""
Explanation: To actually use a class, you create a variable such as my_rocket. Then you set that equal to the name of the class, with an empty set of parentheses. Python creates an object from the class. An object is a single instance of the Rocket class; it has a copy of each of the class's variables, and it can do any action that is defined for the class. In this case, you can see that the variable my_rocket is a Rocket object from the __main__ program file, which is stored at a particular location in memory.
Once you have a class, you can define an object and use its methods. Here is how you might define a rocket and have it start to move up:
End of explanation
"""
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
new_rocket = Rocket()
my_rockets.append(new_rocket)
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
"""
Explanation: To access an object's variables or methods, you give the name of the object and then use dot notation to access the variables and methods. So to get the y-value of my_rocket, you use my_rocket.y. To use the move_up() method on my_rocket, you write my_rocket.move_up().
Once you have a class defined, you can create as many objects from that class as you want. Each object is its own instance of that class, with its own separate variables. All of the objects are capable of the same behavior, but each object's particular actions do not affect any of the other objects. Here is how you might make a simple fleet of rockets:
End of explanation
"""
###highlight=[16]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = [Rocket() for x in range(0,5)]
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
"""
Explanation: You can see that each rocket is at a separate place in memory. By the way, if you understand list comprehensions, you can make the fleet of rockets in one line:
End of explanation
"""
###highlight=[18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = [Rocket() for x in range(0,5)]
# Move the first rocket up.
my_rockets[0].move_up()
# Show that only the first rocket has moved.
for rocket in my_rockets:
print("Rocket altitude:", rocket.y)
"""
Explanation: You can prove that each rocket has its own x and y values by moving just one of the rockets:
End of explanation
"""
###highlight=[2]
class Rocket(object):
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
"""
Explanation: The syntax for classes may not be very clear at this point, but consider for a moment how you might create a rocket without using classes. You might store the x and y values in a dictionary, but you would have to write a lot of ugly, hard-to-maintain code to manage even a small set of rockets. As more features become incorporated into the Rocket class, you will see how much more efficiently real-world objects can be modeled with classes than they could be using just lists and dictionaries.
top
Classes in Python 2.7
When you write a class in Python 2.7, you should always include the word object in parentheses when you define the class. This makes sure your Python 2.7 classes act like Python 3 classes, which will be helpful as your projects grow more complicated.
The simple version of the rocket class would look like this in Python 2.7:
End of explanation
"""
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
"""
Explanation: This syntax will work in Python 3 as well.
top
Exercises
Rocket With No Class
Using just what you already know, try to write a program that simulates the above example about rockets.
Store an x and y value for a rocket.
Store an x and y value for each rocket in a set of 5 rockets. Store these 5 rockets in a list.
Don't take this exercise too far; it's really just a quick exercise to help you understand how useful the class structure is, especially as you start to see more capability added to the Rocket class.
top
Object-Oriented terminology
Classes are part of a programming paradigm called object-oriented programming. Object-oriented programming, or OOP for short, focuses on building reusable blocks of code called classes. When you want to use a class in one of your programs, you make an object from that class, which is where the phrase "object-oriented" comes from. Python itself is not tied to object-oriented programming, but you will be using objects in most or all of your Python projects. In order to understand classes, you have to understand some of the language that is used in OOP.
General terminology
A class is a body of code that defines the attributes and behaviors required to accurately model something you need for your program. You can model something from the real world, such as a rocket ship or a guitar string, or you can model something from a virtual world such as a rocket in a game, or a set of physical laws for a game engine.
An attribute is a piece of information. In code, an attribute is just a variable that is part of a class.
A behavior is an action that is defined within a class. These are made up of methods, which are just functions that are defined for the class.
An object is a particular instance of a class. An object has a certain set of values for all of the attributes (variables) in the class. You can have as many objects as you want for any one class.
There is much more to know, but these words will help you get started. They will make more sense as you see more examples, and start to use classes on your own.
top
A closer look at the Rocket class
Now that you have seen a simple example of a class, and have learned some basic OOP terminology, it will be helpful to take a closer look at the Rocket class.
The __init__() method
Here is the initial code block that defined the Rocket class:
End of explanation
"""
###highlight=[11,12,13]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
"""
Explanation: The first line shows how a class is created in Python. The keyword class tells Python that you are about to define a class. The rules for naming a class are the same rules you learned about naming variables, but there is a strong convention among Python programmers that classes should be named using CamelCase. If you are unfamiliar with CamelCase, it is a convention where each letter that starts a word is capitalized, with no underscores in the name. The name of the class is followed by a set of parentheses. These parentheses will be empty for now, but later they may contain a class upon which the new class is based.
It is good practice to write a comment at the beginning of your class, describing the class. There is a more formal syntax for documenting your classes, but you can wait a little bit to get that formal. For now, just write a comment at the beginning of your class summarizing what you intend the class to do. Writing more formal documentation for your classes will be easy later if you start by writing simple comments now.
Function names that start and end with two underscores are special built-in functions that Python uses in certain ways. The __init()__ method is one of these special functions. It is called automatically when you create an object from your class. The __init()__ method lets you make sure that all relevant attributes are set to their proper values when an object is created from the class, before the object is used. In this case, The __init__() method initializes the x and y values of the Rocket to 0.
The self keyword often takes people a little while to understand. The word "self" refers to the current object that you are working with. When you are writing a class, it lets you refer to certain attributes from any other part of the class. Basically, all methods in a class need the self object as their first argument, so they can access any attribute that is part of the class.
Now let's take a closer look at a method.
A simple method
Here is the method that was defined for the Rocket class:
End of explanation
"""
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object, and have it start to move up.
my_rocket = Rocket()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
"""
Explanation: A method is just a function that is part of a class. Since it is just a function, you can do anything with a method that you learned about with functions. You can accept positional arguments, keyword arguments, an arbitrary list of argument values, an arbitrary dictionary of arguments, or any combination of these. Your arguments can return a value or a set of values if you want, or they can just do some work without returning any values.
Each method has to accept one argument by default, the value self. This is a reference to the particular object that is calling the method. This self argument gives you access to the calling object's attributes. In this example, the self argument is used to access a Rocket object's y-value. That value is increased by 1, every time the method move_up() is called by a particular Rocket object. This is probably still somewhat confusing, but it should start to make sense as you work through your own examples.
If you take a second look at what happens when a method is called, things might make a little more sense:
End of explanation
"""
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
new_rocket = Rocket()
my_rockets.append(new_rocket)
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
"""
Explanation: In this example, a Rocket object is created and stored in the variable my_rocket. After this object is created, its y value is printed. The value of the attribute y is accessed using dot notation. The phrase my_rocket.y asks Python to return "the value of the variable y attached to the object my_rocket".
After the object my_rocket is created and its initial y-value is printed, the method move_up() is called. This tells Python to apply the method move_up() to the object my_rocket. Python finds the y-value associated with my_rocket and adds 1 to that value. This process is repeated several times, and you can see from the output that the y-value is in fact increasing.
top
Making multiple objects from a class
One of the goals of object-oriented programming is to create reusable code. Once you have written the code for a class, you can create as many objects from that class as you need. It is worth mentioning at this point that classes are usually saved in a separate file, and then imported into the program you are working on. So you can build a library of classes, and use those classes over and over again in different programs. Once you know a class works well, you can leave it alone and know that the objects you create in a new program are going to work as they always have.
You can see this "code reusability" already when the Rocket class is used to make more than one Rocket object. Here is the code that made a fleet of Rocket objects:
End of explanation
"""
###highlight=[15,16,17,18]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
my_rockets.append(Rocket())
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
"""
Explanation: If you are comfortable using list comprehensions, go ahead and use those as much as you can. I'd rather not assume at this point that everyone is comfortable with comprehensions, so I will use the slightly longer approach of declaring an empty list, and then using a for loop to fill that list. That can be done slightly more efficiently than the previous example, by eliminating the temporary variable new_rocket:
End of explanation
"""
###highlight=[6,7,8,9]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
"""
Explanation: What exactly happens in this for loop? The line my_rockets.append(Rocket()) is executed 5 times. Each time, a new Rocket object is created and then added to the list my_rockets. The __init__() method is executed once for each of these objects, so each object gets its own x and y value. When a method is called on one of these objects, the self variable allows access to just that object's attributes, and ensures that modifying one object does not affect any of the other objecs that have been created from the class.
Each of these objects can be worked with individually. At this point we are ready to move on and see how to add more functionality to the Rocket class. We will work slowly, and give you the chance to start writing your own simple classes.
A quick check-in
If all of this makes sense, then the rest of your work with classes will involve learning a lot of details about how classes can be used in more flexible and powerful ways. If this does not make any sense, you could try a few different things:
Reread the previous sections, and see if things start to make any more sense.
Type out these examples in your own editor, and run them. Try making some changes, and see what happens.
Try the next exercise, and see if it helps solidify some of the concepts you have been reading about.
Read on. The next sections are going to add more functionality to the Rocket class. These steps will involve rehashing some of what has already been covered, in a slightly different way.
Classes are a huge topic, and once you understand them you will probably use them for the rest of your life as a programmer. If you are brand new to this, be patient and trust that things will start to sink in.
top
<a id="Exercises-oop"></a>
Exercises
Your Own Rocket
Without looking back at the previous examples, try to recreate the Rocket class as it has been shown so far.
Define the Rocket() class.
Define the __init__() method, which sets an x and a y value for each Rocket object.
Define the move_up() method.
Create a Rocket object.
Print the object.
Print the object's y-value.
Move the rocket up, and print its y-value again.
Create a fleet of rockets, and prove that they are indeed separate Rocket objects.
top
Refining the Rocket class
The Rocket class so far is very simple. It can be made a little more interesting with some refinements to the __init__() method, and by the addition of some methods.
Accepting parameters for the __init__() method
The __init__() method is run automatically one time when you create a new object from a class. The __init__() method for the Rocket class so far is pretty simple:
End of explanation
"""
###highlight=[6,7,8,9]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
"""
Explanation: All the __init__() method does so far is set the x and y values for the rocket to 0. We can easily add a couple keyword arguments so that new rockets can be initialized at any position:
End of explanation
"""
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Make a series of rockets at different starting places.
rockets = []
rockets.append(Rocket())
rockets.append(Rocket(0,10))
rockets.append(Rocket(100,0))
# Show where each rocket is.
for index, rocket in enumerate(rockets):
print("Rocket %d is at (%d, %d)." % (index, rocket.x, rocket.y))
"""
Explanation: Now when you create a new Rocket object you have the choice of passing in arbitrary initial values for x and y:
End of explanation
"""
###highlight=[11,12,13,14,15]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
"""
Explanation: top
Accepting parameters in a method
The __init__ method is just a special method that serves a particular purpose, which is to help create new objects from a class. Any method in a class can accept parameters of any kind. With this in mind, the move_up() method can be made much more flexible. By accepting keyword arguments, the move_up() method can be rewritten as a more general move_rocket() method. This new method will allow the rocket to be moved any amount, in any direction:
End of explanation
"""
###highlight=[17,18,19,20,21,22,23,24,25,26,27]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
# Create three rockets.
rockets = [Rocket() for x in range(0,3)]
# Move each rocket a different amount.
rockets[0].move_rocket()
rockets[1].move_rocket(10,10)
rockets[2].move_rocket(-10,0)
# Show where each rocket is.
for index, rocket in enumerate(rockets):
print("Rocket %d is at (%d, %d)." % (index, rocket.x, rocket.y))
"""
Explanation: The paremeters for the move() method are named x_increment and y_increment rather than x and y. It's good to emphasize that these are changes in the x and y position, not new values for the actual position of the rocket. By carefully choosing the right default values, we can define a meaningful default behavior. If someone calls the method move_rocket() with no parameters, the rocket will simply move up one unit in the y-direciton. Note that this method can be given negative values to move the rocket left or right:
End of explanation
"""
###highlight=[19,20,21,22,23,24,25,26,27,28,29,30,31]
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
# Make two rockets, at different places.
rocket_0 = Rocket()
rocket_1 = Rocket(10,5)
# Show the distance between them.
distance = rocket_0.get_distance(rocket_1)
print("The rockets are %f units apart." % distance)
"""
Explanation: top
Adding a new method
One of the strengths of object-oriented programming is the ability to closely model real-world phenomena by adding appropriate attributes and behaviors to classes. One of the jobs of a team piloting a rocket is to make sure the rocket does not get too close to any other rockets. Let's add a method that will report the distance from one rocket to any other rocket.
If you are not familiar with distance calculations, there is a fairly simple formula to tell the distance between two points if you know the x and y values of each point. This new method performs that calculation, and then returns the resulting distance.
End of explanation
"""
###highlight=[25,26,27,28,29,30,31,32,33,34]
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
shuttle = Shuttle(10,0,3)
print(shuttle)
"""
Explanation: Hopefully these short refinements show that you can extend a class' attributes and behavior to model the phenomena you are interested in as closely as you want. The rocket could have a name, a crew capacity, a payload, a certain amount of fuel, and any number of other attributes. You could define any behavior you want for the rocket, including interactions with other rockets and launch facilities, gravitational fields, and whatever you need it to! There are techniques for managing these more complex interactions, but what you have just seen is the core of object-oriented programming.
At this point you should try your hand at writing some classes of your own. After trying some exercises, we will look at object inheritance, and then you will be ready to move on for now.
top
<a id="Exercises-refining"></a>
Exercises
Your Own Rocket 2
There are enough new concepts here that you might want to try re-creating the Rocket class as it has been developed so far, looking at the examples as little as possible. Once you have your own version, regardless of how much you needed to look at the example, you can modify the class and explore the possibilities of what you have already learned.
Re-create the Rocket class as it has been developed so far:
Define the Rocket() class.
Define the __init__() method. Let your __init__() method accept x and y values for the initial position of the rocket. Make sure the default behavior is to position the rocket at (0,0).
Define the move_rocket() method. The method should accept an amount to move left or right, and an amount to move up or down.
Create a Rocket object. Move the rocket around, printing its position after each move.
Create a small fleet of rockets. Move several of them around, and print their final positions to prove that each rocket can move independently of the other rockets.
Define the get_distance() method. The method should accept a Rocket object, and calculate the distance between the current rocket and the rocket that is passed into the method.
Use the get_distance() method to print the distances between several of the rockets in your fleet.
Rocket Attributes
Start with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.
Add several of your own attributes to the __init__() function. The values of your attributes can be set automatically by the __init__ function, or they can be set by paremeters passed into __init__().
Create a rocket and print the values for the attributes you have created, to show they have been set correctly.
Create a small fleet of rockets, and set different values for one of the attributes you have created. Print the values of these attributes for each rocket in your fleet, to show that they have been set properly for each rocket.
If you are not sure what kind of attributes to add, you could consider storing the height of the rocket, the crew size, the name of the rocket, the speed of the rocket, or many other possible characteristics of a rocket.
Rocket Methods
Start with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.
Add a new method to the class. This is probably a little more challenging than adding attributes, but give it a try.
Think of what rockets do, and make a very simple version of that behavior using print statements. For example, rockets lift off when they are launched. You could make a method called launch(), and all it would do is print a statement such as "The rocket has lifted off!" If your rocket has a name, this sentence could be more descriptive.
You could make a very simple land_rocket() method that simply sets the x and y values of the rocket back to 0. Print the position before and after calling the land_rocket() method to make sure your method is doing what it's supposed to.
If you enjoy working with math, you could implement a safety_check() method. This method would take in another rocket object, and call the get_distance() method on that rocket. Then it would check if that rocket is too close, and print a warning message if the rocket is too close. If there is zero distance between the two rockets, your method could print a message such as, "The rockets have crashed!" (Be careful; getting a zero distance could mean that you accidentally found the distance between a rocket and itself, rather than a second rocket.)
Person Class
Modeling a person is a classic exercise for people who are trying to learn how to write classes. We are all familiar with characteristics and behaviors of people, so it is a good exercise to try.
Define a Person() class.
In the __init()__ function, define several attributes of a person. Good attributes to consider are name, age, place of birth, and anything else you like to know about the people in your life.
Write one method. This could be as simple as introduce_yourself(). This method would print out a statement such as, "Hello, my name is Eric."
You could also make a method such as age_person(). A simple version of this method would just add 1 to the person's age.
A more complicated version of this method would involve storing the person's birthdate rather than their age, and then calculating the age whenever the age is requested. But dealing with dates and times is not particularly easy if you've never done it in any other programming language before.
Create a person, set the attribute values appropriately, and print out information about the person.
Call your method on the person you created. Make sure your method executed properly; if the method does not print anything out directly, print something before and after calling the method to make sure it did what it was supposed to.
Car Class
Modeling a car is another classic exercise.
Define a Car() class.
In the __init__() function, define several attributes of a car. Some good attributes to consider are make (Subaru, Audi, Volvo...), model (Outback, allroad, C30), year, num_doors, owner, or any other aspect of a car you care to include in your class.
Write one method. This could be something such as describe_car(). This method could print a series of statements that describe the car, using the information that is stored in the attributes. You could also write a method that adjusts the mileage of the car or tracks its position.
Create a car object, and use your method.
Create several car objects with different values for the attributes. Use your method on several of your cars.
top
Inheritance
One of the most important goals of the object-oriented approach to programming is the creation of stable, reliable, reusable code. If you had to create a new class for every kind of object you wanted to model, you would hardly have any reusable code. In Python and any other language that supports OOP, one class can inherit from another class. This means you can base a new class on an existing class; the new class inherits all of the attributes and behavior of the class it is based on. A new class can override any undesirable attributes or behavior of the class it inherits from, and it can add any new attributes or behavior that are appropriate. The original class is called the parent class, and the new class is a child of the parent class. The parent class is also called a superclass, and the child class is also called a subclass.
The child class inherits all attributes and behavior from the parent class, but any attributes that are defined in the child class are not available to the parent class. This may be obvious to many people, but it is worth stating. This also means a child class can override behavior of the parent class. If a child class defines a method that also appears in the parent class, objects of the child class will use the new method rather than the parent class method.
To better understand inheritance, let's look at an example of a class that can be based on the Rocket class.
The SpaceShuttle class
If you wanted to model a space shuttle, you could write an entirely new class. But a space shuttle is just a special kind of rocket. Instead of writing an entirely new class, you can inherit all of the attributes and behavior of a Rocket, and then add a few appropriate attributes and behavior for a Shuttle.
One of the most significant characteristics of a space shuttle is that it can be reused. So the only difference we will add at this point is to record the number of flights the shutttle has completed. Everything else you need to know about a shuttle has already been coded into the Rocket class.
Here is what the Shuttle class looks like:
End of explanation
"""
class NewClass(ParentClass):
"""
Explanation: When a new class is based on an existing class, you write the name of the parent class in parentheses when you define the new class:
End of explanation
"""
###highlight=[5]
class NewClass(ParentClass):
def __init__(self, arguments_new_class, arguments_parent_class):
super().__init__(arguments_parent_class)
# Code for initializing an object of the new class.
"""
Explanation: The __init__() function of the new class needs to call the __init__() function of the parent class. The __init__() function of the new class needs to accept all of the parameters required to build an object from the parent class, and these parameters need to be passed to the __init__() function of the parent class. The super().__init__() function takes care of this:
End of explanation
"""
###highlight=[7]
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
Rocket.__init__(self, x, y)
self.flights_completed = flights_completed
"""
Explanation: The super() function passes the self argument to the parent class automatically. You could also do this by explicitly naming the parent class when you call the __init__() function, but you then have to include the self argument manually:
End of explanation
"""
###highlight=[3, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]
from math import sqrt
from random import randint
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
# Create several shuttles and rockets, with random positions.
# Shuttles have a random number of flights completed.
shuttles = []
for x in range(0,3):
x = randint(0,100)
y = randint(1,100)
flights_completed = randint(0,10)
shuttles.append(Shuttle(x, y, flights_completed))
rockets = []
for x in range(0,3):
x = randint(0,100)
y = randint(1,100)
rockets.append(Rocket(x, y))
# Show the number of flights completed for each shuttle.
for index, shuttle in enumerate(shuttles):
print("Shuttle %d has completed %d flights." % (index, shuttle.flights_completed))
print("\n")
# Show the distance from the first shuttle to all other shuttles.
first_shuttle = shuttles[0]
for index, shuttle in enumerate(shuttles):
distance = first_shuttle.get_distance(shuttle)
print("The first shuttle is %f units away from shuttle %d." % (distance, index))
print("\n")
# Show the distance from the first shuttle to all other rockets.
for index, rocket in enumerate(rockets):
distance = first_shuttle.get_distance(rocket)
print("The first shuttle is %f units away from rocket %d." % (distance, index))
"""
Explanation: This might seem a little easier to read, but it is preferable to use the super() syntax. When you use super(), you don't need to explicitly name the parent class, so your code is more resilient to later changes. As you learn more about classes, you will be able to write child classes that inherit from multiple parent classes, and the super() function will call the parent classes' __init__() functions for you, in one line. This explicit approach to calling the parent class' __init__() function is included so that you will be less confused if you see it in someone else's code.
The output above shows that a new Shuttle object was created. This new Shuttle object can store the number of flights completed, but it also has all of the functionality of the Rocket class: it has a position that can be changed, and it can calculate the distance between itself and other rockets or shuttles. This can be demonstrated by creating several rockets and shuttles, and then finding the distance between one shuttle and all the other shuttles and rockets. This example uses a simple function called randint, which generates a random integer between a lower and upper bound, to determine the position of each rocket and shuttle:
End of explanation
"""
###highlight=[5]
class NewClass(ParentClass):
def __init__(self, arguments_new_class, arguments_parent_class):
super(NewClass, self).__init__(arguments_parent_class)
# Code for initializing an object of the new class.
"""
Explanation: Inheritance is a powerful feature of object-oriented programming. Using just what you have seen so far about classes, you can model an incredible variety of real-world and virtual phenomena with a high degree of accuracy. The code you write has the potential to be stable and reusable in a variety of applications.
top
Inheritance in Python 2.7
The super() method has a slightly different syntax in Python 2.7:
End of explanation
"""
###highlight=[25,26,27,28,29,30,31,32,33,34]
from math import sqrt
class Rocket(object):
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super(Shuttle, self).__init__(x, y)
self.flights_completed = flights_completed
shuttle = Shuttle(10,0,3)
print(shuttle)
"""
Explanation: Notice that you have to explicitly pass the arguments NewClass and self when you call super() in Python 2.7. The SpaceShuttle class would look like this:
End of explanation
"""
###highlight=[2]
# Save as rocket.py
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
"""
Explanation: This syntax works in Python 3 as well.
top
<a id="Exercises-inheritance"></a>
Exercises
Student Class
Start with your program from Person Class.
Make a new class called Student that inherits from Person.
Define some attributes that a student has, which other people don't have.
A student has a school they are associated with, a graduation year, a gpa, and other particular attributes.
Create a Student object, and prove that you have used inheritance correctly.
Set some attribute values for the student, that are only coded in the Person class.
Set some attribute values for the student, that are only coded in the Student class.
Print the values for all of these attributes.
Refining Shuttle
Take the latest version of the Shuttle class. Extend it.
Add more attributes that are particular to shuttles such as maximum number of flights, capability of supporting spacewalks, and capability of docking with the ISS.
Add one more method to the class, that relates to shuttle behavior. This method could simply print a statement, such as "Docking with the ISS," for a dock_ISS() method.
Prove that your refinements work by creating a Shuttle object with these attributes, and then call your new method.
top
Modules and classes
Now that you are starting to work with classes, your files are going to grow longer. This is good, because it means your programs are probably doing more interesting things. But it is bad, because longer files can be more difficult to work with. Python allows you to save your classes in another file and then import them into the program you are working on. This has the added advantage of isolating your classes into files that can be used in any number of different programs. As you use your classes repeatedly, the classes become more reliable and complete overall.
Storing a single class in a module
When you save a class into a separate file, that file is called a module. You can have any number of classes in a single module. There are a number of ways you can then import the class you are interested in.
Start out by saving just the Rocket class into a file called rocket.py. Notice the naming convention being used here: the module is saved with a lowercase name, and the class starts with an uppercase letter. This convention is pretty important for a number of reasons, and it is a really good idea to follow the convention.
End of explanation
"""
###highlight=[2]
# Save as rocket_game.py
from rocket import Rocket
rocket = Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
"""
Explanation: Make a separate file called rocket_game.py. If you are more interested in science than games, feel free to call this file something like rocket_simulation.py. Again, to use standard naming conventions, make sure you are using a lowercase_underscore name for this file.
End of explanation
"""
###highlight=[27,28,29,30,31,32,33]
# Save as rocket.py
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the paremeters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
"""
Explanation: This is a really clean and uncluttered file. A rocket is now something you can define in your programs, without the details of the rocket's implementation cluttering up your file. You don't have to include all the class code for a rocket in each of your files that deals with rockets; the code defining rocket attributes and behavior lives in one file, and can be used anywhere.
The first line tells Python to look for a file called rocket.py. It looks for that file in the same directory as your current program. You can put your classes in other directories, but we will get to that convention a bit later. Notice that you do not
When Python finds the file rocket.py, it looks for a class called Rocket. When it finds that class, it imports that code into the current file, without you ever seeing that code. You are then free to use the class Rocket as you have seen it used in previous examples.
top
Storing multiple classes in a module
A module is simply a file that contains one or more classes or functions, so the Shuttle class actually belongs in the rocket module as well:
End of explanation
"""
###highlight=[3,8,9,10]
# Save as rocket_game.py
from rocket import Rocket, Shuttle
rocket = Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
shuttle = Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle.x, shuttle.y))
print("The shuttle has completed %d flights." % shuttle.flights_completed)
"""
Explanation: Now you can import the Rocket and the Shuttle class, and use them both in a clean uncluttered program file:
End of explanation
"""
from module_name import ClassName
"""
Explanation: The first line tells Python to import both the Rocket and the Shuttle classes from the rocket module. You don't have to import every class in a module; you can pick and choose the classes you care to use, and Python will only spend time processing those particular classes.
A number of ways to import modules and classes
There are several ways to import modules and classes, and each has its own merits.
import module_name
The syntax for importing classes that was just shown:
End of explanation
"""
# Save as rocket_game.py
import rocket
rocket_0 = rocket.Rocket()
print("The rocket is at (%d, %d)." % (rocket_0.x, rocket_0.y))
shuttle_0 = rocket.Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle_0.x, shuttle_0.y))
print("The shuttle has completed %d flights." % shuttle_0.flights_completed)
"""
Explanation: is straightforward, and is used quite commonly. It allows you to use the class names directly in your program, so you have very clean and readable code. This can be a problem, however, if the names of the classes you are importing conflict with names that have already been used in the program you are working on. This is unlikely to happen in the short programs you have been seeing here, but if you were working on a larger program it is quite possible that the class you want to import from someone else's work would happen to have a name you have already used in your program. In this case, you can use simply import the module itself:
End of explanation
"""
import module_name
"""
Explanation: The general syntax for this kind of import is:
End of explanation
"""
module_name.ClassName
"""
Explanation: After this, classes are accessed using dot notation:
End of explanation
"""
import module_name as local_module_name
"""
Explanation: This prevents some name conflicts. If you were reading carefully however, you might have noticed that the variable name rocket in the previous example had to be changed because it has the same name as the module itself. This is not good, because in a longer program that could mean a lot of renaming.
import module_name as local_module_name
There is another syntax for imports that is quite useful:
End of explanation
"""
# Save as rocket_game.py
import rocket as rocket_module
rocket = rocket_module.Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
shuttle = rocket_module.Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle.x, shuttle.y))
print("The shuttle has completed %d flights." % shuttle.flights_completed)
"""
Explanation: When you are importing a module into one of your projects, you are free to choose any name you want for the module in your project. So the last example could be rewritten in a way that the variable name rocket would not need to be changed:
End of explanation
"""
import rocket as rocket_module
"""
Explanation: This approach is often used to shorten the name of the module, so you don't have to type a long module name before each class name that you want to use. But it is easy to shorten a name so much that you force people reading your code to scroll to the top of your file and see what the shortened name stands for. In this example,
End of explanation
"""
import rocket as r
"""
Explanation: leads to much more readable code than something like:
End of explanation
"""
from module_name import *
"""
Explanation: from module_name import *
There is one more import syntax that you should be aware of, but you should probably avoid using. This syntax imports all of the available classes and functions in a module:
End of explanation
"""
# Save as multiplying.py
def double(x):
return 2*x
def triple(x):
return 3*x
def quadruple(x):
return 4*x
"""
Explanation: This is not recommended, for a couple reasons. First of all, you may have no idea what all the names of the classes and functions in a module are. If you accidentally give one of your variables the same name as a name from the module, you will have naming conflicts. Also, you may be importing way more code into your program than you need.
If you really need all the functions and classes from a module, just import the module and use the module_name.ClassName syntax in your program.
You will get a sense of how to write your imports as you read more Python code, and as you write and share some of your own code.
top
A module of functions
You can use modules to store a set of functions you want available in different programs as well, even if those functions are not attached to any one class. To do this, you save the functions into a file, and then import that file just as you saw in the last section. Here is a really simple example; save this is multiplying.py:
End of explanation
"""
###highlight=[2]
from multiplying import double, triple, quadruple
print(double(5))
print(triple(5))
print(quadruple(5))
"""
Explanation: Now you can import the file multiplying.py, and use these functions. Using the from module_name import function_name syntax:
End of explanation
"""
###highlight=[2]
import multiplying
print(multiplying.double(5))
print(multiplying.triple(5))
print(multiplying.quadruple(5))
"""
Explanation: Using the import module_name syntax:
End of explanation
"""
###highlight=[2]
import multiplying as m
print(m.double(5))
print(m.triple(5))
print(m.quadruple(5))
"""
Explanation: Using the import module_name as local_module_name syntax:
End of explanation
"""
###highlight=[2]
from multiplying import *
print(double(5))
print(triple(5))
print(quadruple(5))
"""
Explanation: Using the from module_name import * syntax:
End of explanation
"""
# this
import sys
import os
# not this
import sys, os
"""
Explanation: top
<a id="Exercises-importing"></a>
Exercises
Importing Student
Take your program from Student Class
Save your Person and Student classes in a separate file called person.py.
Save the code that uses these classes in four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
Importing Car
Take your program from Car Class
Save your Car class in a separate file called car.py.
Save the code that uses the car class into four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
top
Revisiting PEP 8
If you recall, PEP 8 is the style guide for writing Python code. PEP 8 has a little to say about writing classes and using import statements, that was not covered previously. Following these guidelines will help make your code readable to other Python programmers, and it will help you make more sense of the Python code you read.
Import statements
PEP8 provides clear guidelines about where import statements should appear in a file. The names of modules should be on separate lines:
End of explanation
"""
from rocket import Rocket, Shuttle
"""
Explanation: The names of classes can be on the same line:
End of explanation
"""
|
emiliom/stuff | DRB_vizer_json_services.ipynb | cc0-1.0 | import json
import requests
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import time
import calendar
import pytz
#from matplotlib.dates import date2num, num2date
utc_tz = pytz.utc
def epochsec_to_dt(epochsec):
""" Return the datetime object for epoch seconds epochsec
"""
dtnaive_dt = datetime.datetime.utcfromtimestamp(epochsec)
dtutc_dt = dtnaive_dt.replace(tzinfo=pytz.utc)
return dtutc_dt
def get_measurement_byvarid(metaresult, var_id):
return [e for e in metaresult['measurements'] if e['var_id'] == var_id][0]
"""
Explanation: DRB Vizer json services
10/30/2015. Emilio Mayorga.
Run with Emilio's "IOOS_test1" conda envioronment (recent versions of most packages)
http://www.wikiwatershed-vs.org examples:
- meta requests
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=meta&asset_type=siso
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=meta&asset_type=siso&asset_id=CRBCZO_WCC019
- data & recent_values requests
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=data&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=all&units_mode=v1
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=data&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=H1_WaterTemp
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=recent_values&asset_type=siso&asset_id=CRBCZO_WCC019&var_id=H1_WaterTemp&units_mode=v2
- plot request
- http://www.wikiwatershed-vs.org/services/get_asset_info.php?opt=plot&asset_type=siso&asset_id=USGS_01474500&var_id=H1_WaterTemp&range=30d&y_axis_mode=global
Import packages and set up utility functions
End of explanation
"""
vz_gai_url = "http://www.wikiwatershed-vs.org/services/get_asset_info.php"
"""
Explanation: Service end point
For vizer instance that currently serves only the CRB and outlying areas, but will be expanded to the DRB to support the William Penn project needs.
End of explanation
"""
meta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta'})
meta = meta_r.json()
meta.keys(), meta['success']
type(meta['result']), len(meta['result'])
# siso_id is the unique identifier (string type) for the station
siso_id_lst = [e['siso_id'] for e in meta['result']]
# Examine the response for the first station (index 0) in the returned list
meta['result'][0]['siso_id']
meta['result'][0]
"""
Explanation: Meta info (metadata) requests
siso = "stationary in-situ observations" (eg, a river gage, weather stationg, moored buoy, etc)
End of explanation
"""
stations_rec = []
for sta in meta['result']:
sta_rec = {key:sta[key] for key in ['siso_id', 'name', 'lat', 'lon',
'platform_type', 'provider']}
stations_rec.append(sta_rec)
stations_df = pd.DataFrame.from_records(stations_rec)
stations_df.set_index('siso_id', inplace=True, verify_integrity=True)
stations_df.index.name = 'siso_id'
print len(stations_df)
stations_df.head(10)
"""
Explanation: Examine all stations (siso assets) by first importing into a Pandas Dataframe
End of explanation
"""
stations_df.platform_type.value_counts()
stations_df.provider.value_counts()
"""
Explanation: Summaries (station counts) by platform_type and provider
End of explanation
"""
# USGS Schuylkill River at Philadelphia
siso_id = 'USGS_01474500'
"""
Explanation: Request and examine one station
End of explanation
"""
# (asset_id, a more generic descriptor for the unique id of any asset)
meta_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'meta',
'asset_id':siso_id})
# use [0] to pull out the dict from the single-element list
metaresult = meta_r.json()['result'][0] # ideally, should first test for success
# var_id is the unique identifier for a "measurement" (or variable)
[(d['var_id'], d['depth']) for d in metaresult['measurements']]
metaresult['name']
"""
Explanation: Meta info request
End of explanation
"""
data_r = requests.get(vz_gai_url, params={'asset_type':'siso', 'opt':'data', 'units_mode': 'v1',
'asset_id':siso_id, 'var_id':'all'})
data = data_r.json()
data['success'], len(data['result']), data['result'][0].keys(), len(data['result'][0]['data'])
# Mapping of var_id string to 'result' list element index
var_ids = {e['var_id']:i for i,e in enumerate(data['result'])}
var_ids
var_id = 'H1_Discharge'
get_measurement_byvarid(metaresult, var_id)
data['result'][var_ids[var_id]]['data'][-10:]
# Pull out data time series for one variable based on var_id
# returns a list of dicts
data_lst = data['result'][var_ids[var_id]]['data']
data_df = pd.DataFrame.from_records(data_lst)
data_df.head()
"""
Explanation: Data request
End of explanation
"""
data_df['dtutc'] = data_df.time.map(lambda es: epochsec_to_dt(es))
data_df.set_index('dtutc', inplace=True, verify_integrity=True)
data_df.index.name = 'dtutc'
data_df = data_df.rename(columns={'value':var_id})
data_df.info()
data_df.head()
data_df.describe()
var_info = get_measurement_byvarid(metaresult, var_id)
title = "%s (%s) at %s" % (var_info['name'], var_id, metaresult['name'])
data_df[var_id].plot(title=title, figsize=[11,5])
plt.ylabel(var_info['units']);
"""
Explanation: Create dtutc column with parsed datetime. Also, it's safer to rename the "value" column to something unlikely to conflict with pandas method names.
End of explanation
"""
|
strandbygaard/deep-learning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
turbomanage/training-data-analyst | courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb | apache-2.0 | import os, re, math, json, shutil, pprint, datetime
import PIL.Image, PIL.ImageFont, PIL.ImageDraw # "pip3 install Pillow" or "pip install Pillow" if needed
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
"""
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
MNIST Estimator to TPUEstimator
This notebook will show you how to port an Estimator model to TPUEstimator.
All the lines that had to be changed in the porting are marked with a "TPU REFACTORING" comment.
You do the porting only once. TPUEstimator then works on both TPU and GPU with the use_tpu=False flag.
Imports
End of explanation
"""
BATCH_SIZE = 32 #@param {type:"integer"}
BUCKET = 'gs://' #@param {type:"string"}
assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
"""
Explanation: Parameters
End of explanation
"""
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
"""
Explanation: Colab-only auth for this notebook and the TPU
End of explanation
"""
#TPU REFACTORING: detect the TPU
try: # TPU detection
tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility
#tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME') # If auto-detection does not work, you can pass the name of the TPU explicitly (tip: on a VM created with "ctpu up" the TPU has the same name as the VM)
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
USE_TPU = True
except ValueError:
tpu = None
print("Running on GPU or CPU")
USE_TPU = False
"""
Explanation: TPU detection
End of explanation
"""
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for TPU for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)
return dataset
#TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
# def get_validation_dataset(image_file, label_file):
def get_validation_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
#TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
# dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.repeat() # Mandatory for TPU for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file, 10000)
# For TPU, we will need a function that returns the dataset
# TPU REFACTORING: input_fn's must have a params argument though which TPUEstimator passes params['batch_size']
# training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
# validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
training_input_fn = lambda params: get_training_dataset(training_images_file, training_labels_file, params['batch_size'])
validation_input_fn = lambda params: get_validation_dataset(validation_images_file, validation_labels_file, params['batch_size'])
"""
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
"""
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
"""
Explanation: Let's have a look at the data
End of explanation
"""
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs
# TPU REFACTORING: model_fn must have a params argument. TPUEstimator passes batch_size and use_tpu into it
#def model_fn(features, labels, mode):
def model_fn(features, labels, mode, params):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = features
y = tf.reshape(x, [-1, 28, 28, 1])
y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False)(y) # no bias necessary before batch norm
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training) # no batch norm scaling necessary before "relu"
y = tf.nn.relu(y) # activation after batch norm
y = tf.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Flatten()(y)
y = tf.layers.Dense(200, use_bias=False)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Dropout(0.5)(y, training=is_training)
logits = tf.layers.Dense(10)(y)
predictions = tf.nn.softmax(logits)
classes = tf.math.argmax(predictions, axis=-1)
if (mode != tf.estimator.ModeKeys.PREDICT):
loss = tf.losses.softmax_cross_entropy(labels, logits)
step = tf.train.get_or_create_global_step()
# TPU REFACTORING: step is now increased once per GLOBAL_BATCH_SIZE = 8*BATCH_SIZE. Must adjust learning rate schedule accordingly
# lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000//8, 1/math.e)
# TPU REFACTORING: custom Tensorboard summaries do not work. Only default Estimator summaries will appear in Tensorboard.
# tf.summary.scalar("learn_rate", lr)
optimizer = tf.train.AdamOptimizer(lr)
# TPU REFACTORING: wrap the optimizer in a CrossShardOptimizer: this implements the multi-core training logic
if params['use_tpu']:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
# little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
# TPU REFACTORING: a metrics_fn is needed for TPU
# metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
metric_fn = lambda classes, labels: {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
tpu_metrics = (metric_fn, [classes, labels]) # pair of metric_fn and its list of arguments, there can be multiple pairs in a list
# metric_fn will run on CPU, not TPU: more operations are allowed
else:
loss = train_op = metrics = tpu_metrics = None # None of these can be computed in prediction mode because labels are not available
# TPU REFACTORING: EstimatorSpec => TPUEstimatorSpec
## return tf.estimator.EstimatorSpec(
return tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
predictions={"predictions": predictions, "classes": classes}, # name these fields as you like
loss=loss,
train_op=train_op,
# TPU REFACTORING: a metrics_fn is needed for TPU, passed into the eval_metrics field instead of eval_metrics_ops
# eval_metric_ops=metrics
eval_metrics = tpu_metrics
)
# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
# placeholder for the data received by the API (already parsed, no JSON decoding necessary,
# but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)
inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON
features = inputs['serving_input'] # no transformation needed
return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn
# Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
"""
Explanation: Estimator model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
"""
EPOCHS = 10
# TPU_REFACTORING: to use all 8 cores, increase the batch size by 8
GLOBAL_BATCH_SIZE = BATCH_SIZE * 8
# TPU_REFACTORING: TPUEstimator increments the step once per GLOBAL_BATCH_SIZE: must adjust epoch length accordingly
# steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset
steps_per_epoch = 60000 // GLOBAL_BATCH_SIZE # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist" # name for exporting saved model
# TPU_REFACTORING: the TPU will run multiple steps of training before reporting back
TPU_ITERATIONS_PER_LOOP = steps_per_epoch # report back after each epoch
tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now()
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)
# TPU REFACTORING: the RunConfig has changed
#training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
training_config = tf.contrib.tpu.RunConfig(
cluster=tpu,
model_dir=MODEL_DIR,
tpu_config=tf.contrib.tpu.TPUConfig(TPU_ITERATIONS_PER_LOOP))
# TPU_REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
#export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)
# TPU_REFACTORING: Estimator => TPUEstimator
#estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
estimator = tf.contrib.tpu.TPUEstimator(
model_fn=model_fn,
model_dir=MODEL_DIR,
# TPU_REFACTORING: training and eval batch size must be the same for now
train_batch_size=GLOBAL_BATCH_SIZE,
eval_batch_size=10000, # 10000 digits in eval dataset
predict_batch_size=10000, # prediction on the entire eval dataset in the demo below
config=training_config,
use_tpu=USE_TPU,
# TPU REFACTORING: setting the kind of model export we want
export_to_tpu=False) # we want an exported model for CPU/GPU inference because that is what is supported on ML Engine
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, TrainSpec not needed
# train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, EvalSpec not needed
# eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, must train then eval manually
# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
estimator.train(training_input_fn, steps=steps_per_epoch*EPOCHS)
estimator.evaluate(input_fn=validation_input_fn, steps=1)
# TPU REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
estimator.export_savedmodel(os.path.join(MODEL_DIR, MODEL_EXPORT_NAME), serving_input_fn)
tf_logging.set_verbosity(tf_logging.WARN)
"""
Explanation: Train and validate the model, this time on TPU
End of explanation
"""
# recognize digits from local fonts
# TPU REFACTORING: TPUEstimator.predict requires a 'params' in ints input_fn so that it can pass params['batch_size']
#predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
predictions = estimator.predict(lambda params: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
predictions = estimator.predict(validation_input_fn,
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
"""
Explanation: Visualize predictions
End of explanation
"""
PROJECT = "" #@param {type:"string"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "estimator_mnist_tpu" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
#TPU REFACTORING: TPUEstimator does not create the 'export' subfolder
#export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)
export_path = os.path.join(MODEL_DIR, MODEL_EXPORT_NAME)
last_export = sorted(tf.gfile.ListDirectory(export_path))[-1]
export_path = os.path.join(export_path, last_export)
print('Saved model directory found: ', export_path)
"""
Explanation: Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
End of explanation
"""
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
"""
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
End of explanation
"""
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
display_top_unrecognized(digits, predictions, labels, N, 100//N)
"""
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
End of explanation
"""
|
joseerlang/PySpark_docker | notebook/Contador de palabras.ipynb | apache-2.0 | fileName='book.txt'
"""
Explanation: Leer texto
Lo primero que tenemos que hacer es cargar el texto. Para nuestro ejemplo, cargaremos una obra del proyecto
Gutenberg.
End of explanation
"""
import re
def removePunctuation(text):
return re.sub('[^a-z| |0-9]', '', text.strip().lower())
"""
Explanation: Ahora vamos a eliminar todo aquello que no se consideren cadenas de texto válidas. Para ello definiremos una función que elimine aquello que no queremos contabilizar.
End of explanation
"""
shakespeareRDD = (sc
.textFile(fileName, 8)
.map(removePunctuation))
shakespeareRDD.take(4)
print '\n'.join(shakespeareRDD
.zipWithIndex() # to (line, lineNum)
.map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'
.take(15))
shakespeareWordRDD=shakespeareRDD.flatMap(lambda x: x.split(' '))
shakespeareWordCount=shakespeareWordRDD.count()
print shakespeareWordCount
print shakespeareWordRDD.take(10)
"""
Explanation: Ahora vamos a crear el primer RDD del contenido del libro.
End of explanation
"""
ranking= shakespeareWordRDD.map(lambda x: (x,1)).reduceByKey(lambda y,z: y+z).takeOrdered(10,lambda x : -x[1])
print ranking
"""
Explanation: A continuación vamos a detectar cual es la palabra que más veces aparece en el texto. Generaremos un ranking de las 10 más numerosas para que se vea parte del poder de spark.
End of explanation
"""
|
thomasantony/CarND-Projects | Exercises/Term1/TensorFlow-L2/lab.ipynb | mit | import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in differents font.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST data is a large dataset to handle for most computers. It contains 500 thousands images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for greyscale image data
def normalize_greyscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = 0.1
b = 0.9
Xmin = 0
Xmax = 255
norm_img = a + (image_data - Xmin)*(b-a)/(Xmax - Xmin)
return norm_img
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_greyscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_greyscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
train_features = normalize_greyscale(train_features)
test_features = normalize_greyscale(test_features)
is_features_normal = True
print('Tests Passed!')
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/mean_variance.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data. I want you to implement Min-Max scaling in the normalize() function to a range of a=0.1 and b=0.9. Since the notMNIST image data is in greyscale, you'll have to use a max of 255 and min of 0.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
"""
features_count = 784
labels_count = 10
# ToDo: Set the features and labels tensors
features = tf.placeholder(tf.float32, shape=(None, features_count))
labels = tf.placeholder(tf.float32, shape=(None, labels_count))
# ToDo: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal(shape=(784, 10), mean=0.0, stddev=0.1, dtype=tf.float32))
biases = tf.Variable(tf.zeros((10,)))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
# assert labels._shape in [None, 10], 'The shape of labels is incorrect' ## Changed this test
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.initialize_all_variables()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: <img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%">
Problem 2
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data(train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data(train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
"""
# ToDo: Find the best parameters for each configuration
epochs = 5
batch_size = 500
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'b', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
"""
Explanation: <img src="image/learn_rate_tune.png" style="height: 60%;width: 60%">
Problem 3
You're given 3 parameter configurations for training the neural network. One of the parameters in each configuration has multiple options. Choose the option for each configuration that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Batch Size:
* 2000
* 1000
* 500
* 300
* 50
* Learning Rate: 0.01
Configuration 2
* Epochs: 1
* Batch Size: 100
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 3
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Batch Size: 100
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
"""
# ToDo: Set the epochs, batch_size, and learning_rate with the best parameters from problem 3
epochs = 5
batch_size = 500
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
Set the epochs, batch_size, and learning_rate with the best learning parameters you discovered in problem 3. You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of atleast 80%.
End of explanation
"""
|
gregunz/ada2017 | 01 - Pandas and Data Wrangling/Homework 1.ipynb | mit | # all imports here
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import os
import glob
import re
import math
import calendar
import datetime
import random
from calendar import monthrange
from sklearn import datasets, linear_model, ensemble
from sklearn.model_selection import train_test_split
%matplotlib inline
DATA_FOLDER = './Data' # Use the data folder provided in Tutorial 02 - Intro to Pandas.("/")
"""
Explanation: Table of Contents
<p><div class="lev1"><a href="#Task-1.-Compiling-Ebola-Data"><span class="toc-item-num">Task 1. </span>Compiling Ebola Data</a></div>
<div class="lev1"><a href="#Task-2.-RNA-Sequences"><span class="toc-item-num">Task 2. </span>RNA Sequences</a></div>
<div class="lev1"><a href="#Task-3.-Class-War-in-Titanic"><span class="toc-item-num">Task 3. </span>Class War in Titanic</a></div></p>
End of explanation
"""
country_folder_names = ['guinea_data', 'liberia_data', 'sl_data']
country_paths = [DATA_FOLDER + '/ebola/' + name for name in country_folder_names]
# all files about the ebola task (by country)
all_ebola_files = [glob.glob(os.path.join(path, "*.csv")) for path in country_paths]
"""
Explanation: Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
Path extracting
First, we need to collect into an array every file name where the data is located
We will separate the data of each country at first, because they do not follow the same conventions and change the columns name that we are interested in into the same ones for each country.
We'll finally merge them as requested for the rest of the computations.
End of explanation
"""
guinea_files = all_ebola_files[0]
guinea_df = pd.concat([pd.read_csv(path) for path in guinea_files], axis=0).rename(
columns={'Date': 'Date', 'Description': 'Info', 'Totals': 'Total'})
guinea_df['Country'] = 'Guinea'
#'Total deaths of confirmed' | 'Total cases of confirmed'
liberia_files = all_ebola_files[1]
liberia_df = pd.concat([pd.read_csv(path) for path in liberia_files], axis=0).rename(
columns={'Date': 'Date', 'Variable': 'Info', 'National': 'Total'})
liberia_df['Country'] = 'Liberia'
# 'Total death/s in confirmed cases' | 'Total confirmed cases'
sl_files = all_ebola_files[2]
sl_df = pd.concat([pd.read_csv(path) for path in sl_files], axis=0).rename(
columns={'date': 'Date', 'variable': 'Info', 'National': 'Total'})
sl_df['Country'] = 'Sierra Leone'
#'death_confirmed' | 'cum_confirmed' || totals
"""
Explanation: Data reading
Using the paths we extracted above, we read and then merge the data into 3 Dataframes, one for each country. Finally we add a column to each Dataframe containing the name of the country the data comes from
End of explanation
"""
ebola_df = pd.concat(
[
guinea_df,
liberia_df,
sl_df
],
axis=0
)
# replace missing values by 0
ebola_df.fillna('0', inplace=True)
"""
Explanation: We then merge the three Dataframes into one and replace missing values by 0.
End of explanation
"""
# unify dates
ebola_df['Date'] = pd.to_datetime(ebola_df['Date'])
# build index
ebola_df.index = [ebola_df['Country'], ebola_df['Date']]
ebola_df.index.rename(['Country', 'Date_index'], inplace=True)
# displaying some
ebola_df[:5]
"""
Explanation: The values in the date column are not all in the same format, therefore we need to uniformize their format.<br/>
Then, we set the index of the Dataframe into a combination of the country and the date.
End of explanation
"""
ebola_df = ebola_df[['Total', 'Info']]
# displaying some rows
ebola_df[:5]
"""
Explanation: Assumption
We will assume that for each row the total sum is contained in the column named "Total". Therefore if there is a difference between the value in the column "Total" and the sum of the values in the other columns, the ground truth resides in the value in the column "Total".
Based on this assumption, only the columns "Info" and the "Total" will be used.
End of explanation
"""
deaths_info_to_keep = ['Total deaths of confirmed', 'Total death/s in confirmed cases', 'death_confirmed']
cases_info_to_keep = ['Total cases of confirmed', 'Total confirmed cases', 'cum_confirmed']
ebola_df['Total'] = pd.to_numeric(ebola_df['Total'].replace(',|%', '', regex=True))
ebola_df['Deads'] = np.where(ebola_df['Info'].isin(deaths_info_to_keep), ebola_df['Total'], 0)
ebola_df['Cases'] = np.where(ebola_df['Info'].isin(cases_info_to_keep), ebola_df['Total'], 0)
# displaying some data the dataframe
ebola_df.head(20)
countries = ['Guinea', 'Liberia', 'Sierra Leone']
infos = ['Deads', 'Cases']
# we don't need the "Total" and "Info" columns anymore
ebola_infos_df = ebola_df[infos]
# plotting data by country
for country in countries:
ebola_infos_df.loc[country].groupby(['Date_index']).agg(sum).plot(title=country)
"""
Explanation: Assumptions
We will assume from now on that:
- Only data of confirmed cases and deaths is reliable and important. Therefore we will ignore 'suspected' and 'probable' cases in our calculations.
- The descriptions 'Total deaths of confirmed', 'Total death/s in confirmed cases' and 'death_confirmed', 'Total cases of confirmed', 'Total confirmed cases' and 'cum_confirmed' are cumulative totals.
- The descriptions 'Total deaths of confirmed', 'Total death/s in confirmed cases' and 'death_confirmed' contain data about the number of death of people that we were sure had Ebola.
- The descriptions 'Total cases of confirmed', 'Total confirmed cases' and 'cum_confirmed' contain data about the number of cases of people that we are sure have Ebola.
Reasoning
We could have chosen the daily counts instead of cumulative counts. However, they were not always given in all countries which means that we couldn't have used a uniform method of computation for every country. <br>
Moreover, using culumative is sufficient and easier. It is also more consistent.
Following our assumptions, we need to keep the results concerned by the mentionned descriptions only.<br><br>
Furthermore, some entries containing the data do not only constitute of numbers, but of '%' or ',', those characters need to be removed.
End of explanation
"""
day_offset = datetime.datetime(2014, 1, 1,)
def days_in_month(year, month):
return monthrange(year, month)[1]
def days_in_interval(start, end):
return (end.total_seconds() - start.total_seconds()) / (3600 * 24)
# example : delta of 35 days would return "February"
def days_delta_to_month(days):
return calendar.month_name[math.floor(days/30) + 1]
"""
Explanation: As we can see above, lots of data is missing where y = 0, we will need to ignore those point in the future. Moreover, some points seem to be incorrect unless one can resuscitate.
Methods
To get better estimations and ignore unrealistic values, we will create and train a model to detect outliers and create another one to create an extrapolation. We will use a RANSAC Regressor to detect the outliers and an ExtraTreesRegressor to extrapolate the values.<br/>
Alternatives
We could have used a LinearRegressor instead of the ExtraTreesRegressor, this would have lead to a better approximation of the trend and it could also compensate human error induced during data fetching. However, we are more interested in the exact daily average given by the data knowing that outliers are taken care of by the RANSAC Regressor and thus the error is minimal enough to be ignored.
End of explanation
"""
def build_interval_by_month(start, end):
assert(start.year == end.year) # works within the same year only
assert(start.month < end.month ) # can't go backwards or same month
interval = []
# corner case #1 : start.day is not the first of the month
interval.append([
datetime.datetime(start.year, start.month, start.day),
datetime.datetime(start.year, start.month, days_in_month(start.year, start.month))
])
for month_idx in range(start.month + 1, end.month):
interval.append([
datetime.datetime(start.year, month_idx, 1),
datetime.datetime(start.year, month_idx, days_in_month(start.year, month_idx))
])
# corner case #2 : end.day in not necessary the last of the month
interval.append([
datetime.datetime(end.year, end.month, 1),
datetime.datetime(end.year, end.month, end.day)
])
return [[date-day_offset for date in dates] for dates in interval]
intervals_of_interest = {}
for country in countries:
intervals_of_interest[country] = {}
for info in infos:
agg_data = ebola_infos_df.loc[country].groupby(['Date_index']).agg(sum)
agg_data_greater_zero = agg_data[agg_data[info]>0]
start = agg_data_greater_zero.index[0]
end = agg_data_greater_zero.index[-1]
intervals_of_interest[country][info] = build_interval_by_month(start, end)
"""
Explanation: Since data is sometimes missing, we then extract the intervals for each country where we have data about the number of cases, and where we have the number of deads.
For each of those intervals:
- we get the length in days for each month in the interval.
- we create pairs for each month in the interval, where the first item is the earliest day of that month contained in the interval, and the second one, the latest one.
End of explanation
"""
def get_features(dates):
X = pd.DataFrame(
data=[date.total_seconds() for date in dates],
index=range(len(dates)),
columns=['Date_index']
)
X["Date_index"] = X["Date_index"] / 100000.
X["log"] = X["Date_index"].apply(np.log)
X["Date1/2"] = X["Date_index"]**(1/2.)
X["Date^2"] = X["Date_index"]**2
return X
# note : it also plots the predictions
def get_model(country, info):
agg_data = ebola_infos_df.loc[country].groupby(['Date_index']).agg(sum)
agg_data_greater_zero = agg_data[agg_data[info] > 0]
delta = pd.DataFrame(agg_data_greater_zero[info].index)["Date_index"] - day_offset
X = get_features(delta.tolist())
reg1 = linear_model.RANSACRegressor(random_state=1024)
reg2 = ensemble.ExtraTreesRegressor(random_state=1411)
reg1.fit(X, agg_data_greater_zero[info])
inlier_mask = reg1.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
x_train, x_test, y_train, y_test = train_test_split(X[inlier_mask], agg_data_greater_zero[info][inlier_mask], test_size=0.0)
reg2.fit(x_train, y_train)
pred_df = pd.DataFrame(reg2.predict(X)).rename(columns={0:"Prediction"})
pred_df["Real Values (zeroes filtered)"] = agg_data_greater_zero[info].values
pred_df["Date"] = agg_data_greater_zero.index
pred_df.plot(x="Date", title=country+' - '+info)
return reg2
def plot_info_per_month(intervals, plot_name):
intervals_df = pd.DataFrame(
data=[interval[0] for interval in intervals],
index=[interval[1] for interval in intervals],
)
intervals_df.index.name = "Months"
intervals_df.plot(kind="bar", title=plot_name, legend=False)
return intervals_df
pred = []
final_df = pd.DataFrame()
for country in countries:
for info in infos:
model = get_model(country, info)
intervals = []
for interval in intervals_of_interest[country][info]:
features = get_features(interval)
pred.append(model.predict(features))
if(interval[0] == interval[1]):
intervals.append([pred[-1][1] - pred[-2][1], days_delta_to_month(interval[0].days)])
else:
intervals.append([
(pred[-1][1] - pred[-1][0]) / days_in_interval(interval[0], interval[1]),
days_delta_to_month(interval[0].days)
])
temp_df = plot_info_per_month(intervals, country + " - " + info + " - daily average per month")
temp_df['Country'] = country
temp_df['Info'] = info
final_df = pd.concat([final_df, temp_df], axis=0)
final_df.index = [final_df.index, final_df['Country'], final_df['Info']]
final_df = final_df[[0]].rename(
columns={0: 'Count'})
final_df.groupby(['Country', 'Info', 'Months']).head()
"""
Explanation: To train our models, we created some new features to complement the only one we have. The primary feature being the date in seconds of the record since a previously decided day offset (beginning of the year) divided by 100'000.
End of explanation
"""
task2_files = glob.glob(os.path.join(DATA_FOLDER+'/microbiome/', "*.xls"))
task2_files.sort()
task2_files
"""
Explanation: Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
Our solution
-> We import the metadata into a dataframe
-> We import each data files (MIDx.xls) into dataframes (1 file = 1 dataframe) <br/>
note: we handle the first column as row indices and then use it to concat the different dataframes
-> For each data files, we add their respective index that we get from the dataframe of the metadata <br/>
note: it's said we must add it as columns, we assumed it was as column names for each data source which makes lots of sense
-> We concat rows of the MIDs dataframes in one single dataframe
-> We replace NaN values by "unknown"
reading all filenames in the folder ending with .xls
End of explanation
"""
metadata_file = task2_files[-1]
mids_files = task2_files[:-1]
"""
Explanation: Thanks to the sort, we can see that the metadata is at the end of the list. Hence we extract it using this info.
separating data file paths (0 to 8) and metadata file path (last = 9)
End of explanation
"""
metadata_df = pd.read_excel(metadata_file, index_col=None)
metadata_df
"""
Explanation: importing metadata file into a dataframe and showing it
End of explanation
"""
mids_df = []
for idx, file in enumerate(mids_files):
mids_df.append(pd.read_excel(file, index_col=0, header=None))
mids_df[idx].columns = [[index] for index in metadata_df.loc[range(len(metadata_df)), ["GROUP", "SAMPLE"]].T[idx]]
mids_df[3][:5]
"""
Explanation: note: we can check the that the order of the rows is the same as the order of the files (MID1 -> MID9).
This makes it easy to associate file with its corresponding colum name using indices :
End of explanation
"""
mids_df_concat = pd.concat(mids_df, axis=1)
mids_df_concat.fillna(value="unknown", inplace=True)
mids_df_concat
"""
Explanation: (above : showing dataframe samples of file MID4.xls)
concat of the 9 dataframes into one (concat by row) and NA are being replaced by "unknown"
End of explanation
"""
from IPython.core.display import HTML
HTML(filename=DATA_FOLDER+'/titanic.html')
"""
Explanation: (above : showing final dataframe)
Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
End of explanation
"""
data = pd.read_excel(DATA_FOLDER+'/titanic.xls')
data.head(5)
data.describe(include='all')
"""
Explanation: For each of the following questions state clearly your assumptions and discuss your findings:
1. Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
2. Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals.
3. Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
4. For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
5. Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.
6. Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.
Data loading
End of explanation
"""
data['pclass'] = data.pclass.astype('category')
data['sex'] = data.sex.astype('category')
data['embarked'] = data.embarked.astype('category')
"""
Explanation: 1. Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
Data description
pclass: is an Integer between 1 and 3 that represents in which class that passenger was in. It can be categorical because we cannot say that 3rd class is twice as far as 2nd class from 1st class.
survived: is an Integer that tells us whether the passenger survived (1) or died (0). Even though it should be categorical, it is easier to let it non-categorical as it is then possible to sum the values to get the total number of people who survived.
name: is a String that contains the name of the passenger. The title could be extracted from this attribute and be transformed into a categorical attribut but we don't use it here.
sex: is String that is either 'female' or 'male'. It contains the information about the sex of the passenger. It can be categorical.
age: is a Double that represents the age in year of the passenger. It could be transformed into ranges and which could then be considered as categorical.
sibsp: is an Integer that contains the number of siblings and spouse also embarked aboard the Titanic.
parch: is an Integer that contains the number of children and parents also embarked aboard the Titanic.
ticket: is a String that contains the ticket number of the passenger.
fare: is an Integer that represents the price that the passenger paid for its ticket.
cabin: is a String that contains the deck letter and the cabin number of the passenger.
embarked: is a character {S, Q, C} that represent the Port where the passenger embarked.
boat: is the id of the boat that the passenger took after the Titanic crashed into the iceberg.
body: is the number attributed to a dead body when it was found.
home.dest: represents two informations: before the / is the home of the person and after the / is the destination of the person.
End of explanation
"""
sns.countplot(x="pclass", data=data, palette="Greens_d")
sns.countplot(x="embarked", data=data, palette="Greens_d")
sns.countplot(x="sex", data=data, palette="Greens_d")
"""
Explanation: 2. Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals.
End of explanation
"""
sns.countplot(x=pd.cut(data.age, [0,10,20,30,40,50,60,70,80,90], right=False), palette="Greens_d")
"""
Explanation: For the discrete interval, we decide to divide the ages in 9 intervals: 1 for each decades from 0 to 90 years old. Thanks to the use of panda cut function we do not have to bother about NAs.
End of explanation
"""
data['cabin'].astype(str).str[0].value_counts(sort=False).plot(kind='pie')
"""
Explanation: 3. Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
Let's first plot a pie for every values we have for the cabin floors.
End of explanation
"""
data['cabin'].astype(str).str[0].value_counts()[1:].plot(kind='pie')
"""
Explanation: As we can see above, around 3/4 of the data is missing for this attribute. We need to ignore the missing values to have a better view of the pie chart. Note that the cabin floor T might seem to be a typo, we will assume it refers to the tank top that one can find on a titanic deckplan and hence keep the value.
End of explanation
"""
df=data.groupby("pclass").survived.agg(["sum", lambda x: len(x) - sum(x)])
df.columns=["survived", "dead"]
df=df.T
df.columns=["1st class", "2nd class", "3rd class"]
df.plot(kind="pie", subplots=True, figsize=(18, 6))
"""
Explanation: 4. For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
End of explanation
"""
sns.barplot(x="sex", y="survived", hue="pclass", data=data);
"""
Explanation: 5. Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.
Seaborn allows us to plot such a graph. The black bars show the variance.
End of explanation
"""
df = data[['pclass', 'sex', 'age', 'survived']][data['age'].isnull() != True]
median_age = df[['age']].median()
df['age_cat'] = np.where(df[['age']] > median_age, '>{}'.format(median_age[0]), '<{}'.format(median_age[0]))
df = df[['pclass', 'sex', 'age_cat', 'survived']]
df.groupby(['age_cat', 'pclass', 'sex']).agg(np.mean)
"""
Explanation: 6. Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.
To split the datas in two equally populated age categories, we use the median and group the data on both side of the median. Obviously some datas will reside in the median and hence we will not get two perfectly equally populated categories but we assume that considering the number of datas we have, we condiser a small delta is acceptable
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/7119b87595b4873e173edd147fee099b/dics_source_power.ipynb | bsd-3-clause | # Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Roman Goj <roman.goj@gmail.com>
# Denis Engemann <denis.engemann@gmail.com>
# Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
print(__doc__)
"""
Explanation: Compute source power using DICS beamformer
Compute a Dynamic Imaging of Coherent Sources (DICS) :footcite:GrossEtAl2001
filter from single-trial activity to estimate source power across a frequency
band. This example demonstrates how to source localize the event-related
synchronization (ERS) of beta band activity in the
somato dataset <somato-dataset>.
End of explanation
"""
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
# Use a shorter segment of raw just for speed here
raw = mne.io.read_raw_fif(raw_fname)
raw.crop(0, 120) # one minute for speed (looks similar to using all ~800 sec)
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, preload=True)
del raw
# Paths to forward operator and FreeSurfer subject directory
fname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')
"""
Explanation: Reading the raw data and creating epochs:
End of explanation
"""
freqs = np.logspace(np.log10(12), np.log10(30), 9)
"""
Explanation: We are interested in the beta band. Define a range of frequencies, using a
log scale, from 12 to 30 Hz.
End of explanation
"""
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=-1, tmax=0, decim=20)
# ERS activity starts at 0.5 seconds after stimulus onset
csd_ers = csd_morlet(epochs, freqs, tmin=0.5, tmax=1.5, decim=20)
info = epochs.info
del epochs
"""
Explanation: Computing the cross-spectral density matrix for the beta frequency band, for
different time intervals. We use a decim value of 20 to speed up the
computation in this example at the loss of accuracy.
End of explanation
"""
csd = csd.mean()
csd_baseline = csd_baseline.mean()
csd_ers = csd_ers.mean()
"""
Explanation: To compute the source power for a frequency band, rather than each frequency
separately, we average the CSD objects across frequencies.
End of explanation
"""
fwd = mne.read_forward_solution(fname_fwd)
filters = make_dics(info, fwd, csd, noise_csd=csd_baseline,
pick_ori='max-power', reduce_rank=True, real_filter=True)
del fwd
"""
Explanation: Computing DICS spatial filters using the CSD that was computed on the entire
timecourse.
End of explanation
"""
baseline_source_power, freqs = apply_dics_csd(csd_baseline, filters)
beta_source_power, freqs = apply_dics_csd(csd_ers, filters)
"""
Explanation: Applying DICS spatial filters separately to the CSD computed using the
baseline and the CSD computed during the ERS activity.
End of explanation
"""
stc = beta_source_power / baseline_source_power
message = 'DICS source power in the 12-30 Hz frequency band'
brain = stc.plot(hemi='both', views='axial', subjects_dir=subjects_dir,
subject=subject, time_label=message)
"""
Explanation: Visualizing source power during ERS activity relative to the baseline power.
End of explanation
"""
|
StingraySoftware/notebooks | Transfer Functions/Data Preparation.ipynb | mit | from PIL import Image
im = Image.open('2d.png')
width, height = im.size
"""
Explanation: Setting Up Data
We use Image module from Python Imaging library to digitize 2-d plot from Uttley et al. (2014)
End of explanation
"""
intensity = np.array([[1 for j in range(width)] for i in range(height)])
"""
Explanation: Initialize an intensity array.
End of explanation
"""
for x in range(0, height):
for y in range(0, width):
RGB = im.getpixel((y, x))
intensity[x][y] = (0.2126 * (255-RGB[0]) + 0.7152 * (255-RGB[1]) + 0.0722 * (255-RGB[2]))
"""
Explanation: Below, we retrieve each pixel and then calculate darkness value. The perceived brightness is given by:
0.2126R + 0.7152G + 0.0722*B
To get darkness, the formula is corrected as follows:
0.2126(255-R) + 0.7152(255-G) + 0.0722*(255-B)
End of explanation
"""
intensity = intensity[::-1]
np.savetxt('intensity.txt', intensity)
"""
Explanation: Invert along Y-axis to account for some conventions.
End of explanation
"""
|
hcorona/recsys-101-workshop | notebooks/notebook-2-toy-example.ipynb | mit | import os
os.chdir('..')
# Import all the packages we need to generate recommendations
import numpy as np
import pandas as pd
import src.utils as utils
import src.recommenders as recommenders
import src.similarity as similarity
# Enable logging on Jupyter notebook
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# imports necesary for plotting
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: An example with small data, to understand user-based collaborative recommendations
End of explanation
"""
data=[[1, 1, 5],[1, 2, 4],[1, 3, 3],
[2, 1, 3], [2, 2, 4],[2, 3, 5],[2, 4, 2],
[3, 2, 2], [3, 3, 4],[3, 4, 4]]
ratings = pd.DataFrame(columns=["customer", "movie", "rating"], data=data)
ratings
"""
Explanation: Here, we create some fake data with fo users and 4 items; see that some ratings are missing, something common in a real recommender system
End of explanation
"""
# the data is stored in a long pandas dataframe
# we need to pivot the data to create a [user x movie] matrix
ratings_matrix = ratings.pivot_table(index='customer', columns='movie', values='rating', fill_value=0)
ratings_matrix
"""
Explanation: We then need to transform the data into a customer x movie rating matrix
End of explanation
"""
target_customer = 3
similarity_metric = "cosine"
K = 10
# get the nearest neighbours and compute the total distance
# only compute from [1:K] to avoid self correlation (index 0)
neighbours = similarity.compute_nearest_neighbours(target_customer, ratings_matrix, similarity_metric)[1:K+1]
neighbours
"""
Explanation: Computing similarity
target_customer : the customer we want to make recommendations for
similarity metric: How do we want to measure similarity between customers
K : the number of neigbhours to consider (in this case K > number customers, therefore we will use all)
End of explanation
"""
recommendations = {}
simSums = {}
supportRatings = {}
# Iterate through the k nearest neighbors, accumulating their ratings
for neighbour in neighbours.item.unique():
weight = neighbours.similarity[neighbours.item == neighbour]
neighbour_ratings = ratings.ix[ratings.customer == neighbour]
# calculate the predicted rating for each recommendations
for movie in neighbour_ratings.movie.unique():
prediction = neighbour_ratings.rating[neighbour_ratings.movie == movie]*weight.values[0]
# if there is a new movie, set the similarity and sums to 0
recommendations.setdefault(movie, 0)
simSums.setdefault(movie, 0)
supportRatings.setdefault(movie, 0)
recommendations[movie] += prediction.values[0]
simSums[movie] += weight.values[0]
supportRatings[movie] += 1
recommendations
"""
Explanation: Compute Recommendations for target customer
End of explanation
"""
# normalise so that the sum of weights for each movie adds to 1
recs_normalized = [(recommendations/simSums[movie], movie) for movie, recommendations in recommendations.items()]
recs_normalized
"""
Explanation: The normalization can be done by the sum of the weights for each movie.However, as you can see, this might lead to recommendations with high values being promoted by a single neighbour
End of explanation
"""
# normalise so that the sum of weights for each movie adds to 1 and we have a threshold in place
threshold = 2
recs_normalized = [(recommendations/simSums[movie]*min(supportRatings[movie]-(threshold-1), 1), movie) for movie, recommendations in recommendations.items()]
recs_normalized
"""
Explanation: To mitigate this, we can add a threshold, so we only get recommendations from movies for which we have a minimum evidence
End of explanation
"""
# for example the recommendation for item 1 would be calculated as
rec_item_1 = (0.471405*5 + 0.816497*3)/(0.816497+0.471405)
rec_item_1
"""
Explanation: Lets see an example in detail :
End of explanation
"""
|
TheMitchWorksPro/DataTech_Playground | PY_Basics/TMWP_PY_Recursion_Factorial_Example.ipynb | mit | # some may find this code more readable ... single line solutions to the function are provided immediately below it.
import sys
n = int(input())
def factorial(n):
if n <= 1:
return n
else:
return n*factorial(n-1)
factorial(n)
# this cell explores creating a function that solves the problem with only one real line of working code
import sys
n = int(input())
def factorial(n):
return n if n <= 1 else n*factorial(n-1) # this says: "return n if ... else return <value after else>"
factorial(n) # this cell prompts for input into the function to test it
# using lambda, a single line solution could be done like this:
import sys
n = int(input())
factorial_lambda = lambda n : 1 if n<=1 else n*factorial(n-1)
factorial_lambda(n)
# without recursion ... a loop could be created like this
import sys
def factorial_loop(n):
f = n
while n > 1:
f = f*(n-1)
if f <= 1: return 1
n -= 1
return f
print("factorial of 7:")
factorial_loop(7)
"""
Explanation: <div align="right">Python 3.6 [PY36 Env]</div>
Python Recursion - Factorial Example
Code on this page is to illustrate the basics of recursion. Code was tested in Python 3.6 but should work in Python 2.7 as well. The Factorial example is presented a few ways including condensing the code to a single line.
<a id="factorial" name="factorial"></a>
Factorial Example
If we had to roll our own factorial in Python, this example is how we would do it.
End of explanation
"""
%%timeit # %% magic only works in a iPython cells. Use import timeit and timeit() in .py code
factorial(800)
%%timeit
factorial_lambda(800)
%%timeit
factorial_loop(800)
import math
%%timeit
math.factorial(800)
"""
Explanation: Performance testing
These tests show performance comparisons. They also prove that we should use the math library instead of our own function.
End of explanation
"""
|
ganguli-lab/twpca | notebooks/demo-crossval.ipynb | mit | from twpca.datasets import jittered_population
from scipy.ndimage import gaussian_filter1d
rates, spikes = jittered_population()
smooth_std = 1.0
data = gaussian_filter1d(spikes, smooth_std, axis=1)
fig, axes = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(9,3))
for ax, trial in zip(axes, data):
ax.imshow(trial.T, aspect='auto')
ax.set_xlabel('time')
ax.set_ylabel('neuron')
[ax.set_title('trial {}'.format(k+1)) for k, ax in enumerate(axes)]
fig.tight_layout()
"""
Explanation: Generate the synthetic dataset (same as in demo.ipynb).
End of explanation
"""
def sample_hyperparams(n):
"""Randomly draws `n` sets of hyperparameters for twPCA
"""
n_components = np.random.randint(low=1, high=5, size=n)
warp_scale = np.random.lognormal(mean=-6, sigma=2, size=n)
time_scale = np.random.lognormal(mean=-6, sigma=2, size=n)
return n_components, warp_scale, time_scale
"""
Explanation: We will randomly sample hyperparameters of twPCA using this function
Note: Randomly sampling hyperparameters is generally better than doing a search over a pre-specified grid (Bergstra & Bengio, 2012).
End of explanation
"""
for param in sample_hyperparams(1000):
pct = np.percentile(param, [0, 25, 50, 75, 100])
print(('percentiles: ' +
'0% = {0:.2e}, ' +
'25% = {1:.2e}, ' +
'50% = {2:.2e}, ' +
'75% = {3:.2e}, ' +
'100% = {4:.2e}').format(*pct))
from twpca.crossval import hyperparam_search
# optimization parameters
fit_kw = {
'lr': (1e-1, 1e-2),
'niter': (100, 100),
'progressbar': False
}
# other model options
model_kw = {
'warptype': 'nonlinear',
'warpinit': 'identity'
}
# search 20 sets of hyperparameters
n_components, warp_scales, time_scales = sample_hyperparams(20)
cv_results = hyperparam_search(
data, # the full dataset
n_components, warp_scales, time_scales, # hyperparameter arrays
drop_prob = 0.5,
fit_kw = fit_kw,
model_kw = model_kw
)
for cv_batch in cv_results['crossval_data']:
for cv_run in cv_batch:
plt.plot(cv_run['obj_history'])
plt.title('learning curves')
fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9,3))
axes[0].scatter(cv_results['n_components'], cv_results['mean_test'])
axes[0].set_xticks(np.unique(cv_results['n_components']))
axes[0].set_xlabel('number of components')
axes[0].set_ylabel('reconstruction error')
axes[1].set_xscale('log')
axes[1].scatter(cv_results['warp_scale'], cv_results['mean_test'])
axes[1].set_xlabel('warp regularization')
axes[1].set_title('test error on held out neurons')
axes[2].set_xscale('log')
axes[2].scatter(cv_results['time_scale'], cv_results['mean_test'])
axes[2].set_xlabel('time regularization')
axes[0].set_ylim((min(cv_results['mean_test'])*0.99, max(cv_results['mean_test'])*1.01))
fig.tight_layout()
fig, axes = plt.subplots(1, 3, sharey=True, figsize=(9,3))
axes[0].scatter(cv_results['n_components'], cv_results['mean_train'], color='r', alpha=0.8)
axes[0].set_xticks(np.unique(cv_results['n_components']))
axes[0].set_xlabel('number of components')
axes[0].set_ylabel('reconstruction error')
axes[1].set_xscale('log')
axes[1].scatter(cv_results['warp_scale'], cv_results['mean_train'], color='r', alpha=0.8)
axes[1].set_xlabel('warp regularization')
axes[1].set_title('error during training')
axes[2].set_xscale('log')
axes[2].scatter(cv_results['time_scale'], cv_results['mean_train'], color='r', alpha=0.8)
axes[2].set_xlabel('time regularization')
axes[0].set_ylim((min(cv_results['mean_train'])*0.99, max(cv_results['mean_train'])*1.01))
fig.tight_layout()
"""
Explanation: Print percentiles to get a sense of the hyperparameter ranges we're sampling
End of explanation
"""
|
fancompute/ceviche | examples/simulate_splitter_fdtd.ipynb | mit | def reshape_arr(arr, Nx, Ny):
return arr.reshape((Nx, Ny, 1))
eps_r = np.load('data/eps_r_splitter4.npy')
eps_wg = np.load('data/eps_waveguide.npy')
plt.imshow(eps_r.T, cmap='gist_earth_r')
plt.show()
Nx, Ny = eps_r.shape
J_in = np.load('data/J_in.npy')
J_outs = np.load('data/J_list.npy')
J_wg = np.flipud(J_in.copy())
eps_wg = reshape_arr(eps_wg, Nx, Ny)
eps_r = reshape_arr(eps_r, Nx, Ny)
J_in = reshape_arr(J_in, Nx, Ny)
J_outs = [reshape_arr(J, Nx, Ny) for J in J_outs]
J_wg = reshape_arr(J_wg, Nx, Ny)
Nz = 1
"""
Explanation: Load in the permittivity distribution from an FDFD simulation
End of explanation
"""
nx, ny, nz = Nx//2, Ny//2, Nz//2
dL = 5e-8
pml = [20, 20, 0]
F = fdtd(eps_r, dL=dL, npml=pml)
F_wg = fdtd(eps_wg, dL=dL, npml=pml)
# source parameters
steps = 10000
t0 = 2000
sigma = 100
source_amp = 5
omega = 2 * np.pi * C_0 / 2e-6 # units of 1/sec
omega_sim = omega * F.dt # unitless time
gaussian = lambda t: np.exp(-(t - t0)**2 / 2 / sigma**2) * np.cos(omega_sim * t)
source = lambda t: J_in * source_amp * gaussian(t)
plt.plot(1e15 * F.dt * np.arange(steps), gaussian(np.arange(steps)))
plt.xlabel('time (femtoseconds)')
plt.ylabel('source amplitude')
plt.show()
"""
Explanation: Initial setting up of parameters
End of explanation
"""
measured_wg = measure_fields(F_wg, source, steps, J_wg)
plt.plot(1e15 * F.dt*np.arange(steps), gaussian(np.arange(steps))/gaussian(np.arange(steps)).max(), label='source (Jz)')
plt.plot(1e15 * F.dt*np.arange(steps), measured_wg/measured_wg.max(), label='measured (Ez)')
plt.xlabel('time (femtoseconds)')
plt.ylabel('amplitude')
plt.legend()
plt.show()
"""
Explanation: Compute the transmission (as a function of time) for a straight waveguide (to normalize later)
End of explanation
"""
aniplot(F_wg, source, steps, num_panels=10)
"""
Explanation: Show the field plots at 10 time steps
End of explanation
"""
plot_spectral_power(gaussian(np.arange(steps)), F.dt, f_top=3e16)
gaussian(np.arange(steps)).shape
"""
Explanation: Compute the power spectrum transmitted
End of explanation
"""
plot_spectral_power(measured_wg[:,0], F.dt, f_top=2e14)
"""
Explanation: Now for the measured fields
End of explanation
"""
measured = measure_fields(F, source, steps, J_outs)
ham = np.hamming(steps).reshape((steps,1))
# plt.plot(measured*ham**5)
plot_spectral_power(measured*ham**100, dt=F.dt, f_top=4e16)
plt.plot(1e15 * F.dt*np.arange(steps), gaussian(np.arange(steps))/gaussian(np.arange(steps)).max(), label='source (Jz)')
plt.plot(1e15 * F.dt*np.arange(steps), measured/measured.max(), label='measured (Ez)')
plt.xlabel('time (femtoseconds)')
plt.ylabel('amplitude')
plt.legend()
plt.show()
aniplot(F_wg, source, steps, num_panels=10)
series_in = gaussian(np.arange(steps))
series_wg = None
plot_spectral_power(measured_wg, F.dt, f_top=15e14)
series_in = gaussian(np.arange(steps))
measured_wg
freq1, spect1 = get_spectrum(series_in, dt=F.dt)
freq2, spect2 = get_spectrum(measured_wg, dt=F.dt)
plt.plot(series_in/series_in.max())
plt.plot(measured_wg/measured_wg.max())
plt.show()
spect1 = np.fft.fft(series_in/series_in.max())
spect2 = np.fft.fft(measured_wg/measured_wg.max())
plt.plot(np.fft.fftshift(np.abs(spect1)))
plt.plot(np.fft.fftshift(np.abs(spect2)))
plt.show()
plt.plot(freq1, np.abs(spect1/spect1.max()))
plt.plot(freq2, np.abs(spect2/spect2.max()))
plt.show()
freq, spect = get_spectrum(measured_wg, dt=F.dt)
plt.plot(freq, np.abs(spect))
plt.xlim([0, 4e14])
ex = 1e-50
plt.ylim([-0.1*ex, ex])
plt.show()
plot_spectral_power(measured, dt=F.dt, f_top=1e16)
plt.plot(series_in)
series_out = measured_wg/measured_wg.max()
plt.plot(series_out)
plt.plot(np.abs(np.fft.fft(series_in*np.hanning(steps))))
plt.plot(np.abs(np.fft.fft(series_out*np.hanning(steps))))
(measured_wg*np.hamming(steps)).shape
plt.plot(np.hamming(steps))
F.dt
from ceviche.utils import get_spectrum_lr
get_spectrum_lr(measured_wg[:,0], dt=F.dt)
get_spectrum_lr(gaussian(np.arange(steps)), dt=F.dt)
"""
Explanation: Now measure the transmission for the inverse designed splitter (at each of the four ports)
End of explanation
"""
|
statsmaths/stat665 | lectures/lec18/notebook18.ipynb | gpl-2.0 | %pylab inline
import copy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import mnist, cifar10
from keras.models import Sequential, Graph
from keras.layers.core import Dense, Dropout, Activation, Flatten, Reshape
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils
from keras.regularizers import l2
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D, AveragePooling2D
from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator
from keras.layers.normalization import BatchNormalization
from PIL import Image
"""
Explanation: Computer vision: LeNet-5 and Distortions
Import various modules that we need for this notebook.
End of explanation
"""
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32') / 255
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
"""
Explanation: Load the MNIST dataset, flatten the images, convert the class labels, and scale the data.
End of explanation
"""
model = Sequential()
model.add(Convolution2D(6, 5, 5, border_mode='valid', input_shape = (1, 28, 28)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))
model.add(Convolution2D(16, 5, 5, border_mode='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))
model.add(Dropout(0.5))
model.add(Convolution2D(120, 1, 1, border_mode='valid'))
model.add(Flatten())
model.add(Dense(84))
model.add(Activation("sigmoid"))
model.add(Dense(10))
model.add(Activation('softmax'))
l_rate = 1
sgd = SGD(lr=l_rate, mu=0.8)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=2,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
sgd = SGD(lr=0.8 * l_rate, mu=0.8)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=3,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
sgd = SGD(lr=0.4 * l_rate, mu=0.8)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=3,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
sgd = SGD(lr=0.2 * l_rate, mu=0.8)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=4,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
sgd = SGD(lr=0.08 * l_rate, mu=0.8)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(X_train, Y_train, batch_size=32, nb_epoch=8,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1])
"""
Explanation: I. LeNet-5 for MNIST10
Here is my attempt to replicate the LeNet-5 model as closely as possibly the original paper: LeCun, Yann, Léon Bottou, Yoshua Bengio, and Patrick Haffner. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86, no. 11 (1998): 2278-2324.
As few modern neural network libraries allow for partially connected convolution layers, I've substituted this with a dropout layer. I've also replaced momentum with the Hessian approximation, and rescaled the learning rate schedule, though the proportional decay remains the same.
End of explanation
"""
y_hat = model.predict_classes(X_test)
test_wrong = [im for im in zip(X_test,y_hat,y_test) if im[1] != im[2]]
plt.figure(figsize=(10, 10))
for ind, val in enumerate(test_wrong[:100]):
plt.subplots_adjust(left=0, right=1, bottom=0, top=1)
plt.subplot(10, 10, ind + 1)
im = 1 - val[0].reshape((28,28))
plt.axis("off")
plt.text(0, 0, val[2], fontsize=14, color='blue')
plt.text(8, 0, val[1], fontsize=14, color='red')
plt.imshow(im, cmap='gray')
"""
Explanation: And once again, let's look at the misclassified examples.
End of explanation
"""
# this will do preprocessing and realtime data augmentation
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=25, # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(X_train)
"""
Explanation: II. LeNet-5 with "Distortions" (i.e., Data augmentation)
The LeNet paper also introduced the idea of adding tweaks to the input data set in order to artificially increase the trainin set size. They suggested slightly distorting the image by shifting or stretching the pixels. The idea is that these distortions should not change the output image classification. Keras has a pre-built library for doing this; let us try to use it here to improve the classification rate. Note that we do not want to flip the image, as this would change the meaning of some digits (6 & 9, for example). Minor rotations are okay, however.
End of explanation
"""
model = Sequential()
model.add(Convolution2D(6, 5, 5, border_mode='valid', input_shape = (1, 28, 28)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))
model.add(Convolution2D(16, 5, 5, border_mode='valid'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation("sigmoid"))
model.add(Dropout(0.5))
model.add(Convolution2D(120, 1, 1, border_mode='valid'))
model.add(Flatten())
model.add(Dense(84))
model.add(Activation("sigmoid"))
model.add(Dense(10))
model.add(Activation('softmax'))
"""
Explanation: We'll use the same adaptation of LeNet-5 architecture.
End of explanation
"""
model.compile(loss='categorical_crossentropy', optimizer=RMSprop())
model.fit(X_train, Y_train, batch_size=32, nb_epoch=25,
verbose=1, show_accuracy=True, validation_data=(X_test, Y_test))
"""
Explanation: Now we'll fit the model. Notice that the format for this is slightly different as the data is coming from datagen.flow rather than a single numpy array. We set the number of sample per epoch to be the same as before (60k). I am also using the non-augmented version with RMS prop for the first 2 epochs, as the details are not specified in the paper and this seems to greatly improve the convergence.
End of explanation
"""
print("Test classification rate %0.05f" % model.evaluate(X_test, Y_test, show_accuracy=True)[1])
"""
Explanation: How does the performance stack up? Not quite as good as the non-distorted version, though notice how the classifier does not overfit the same was as it would without the data augmentation. I have a hunch that there is something non-optimal about the RMSprop implementation when using data augmentation.
At any rate, the true advantage of data augmentation comes when we have large models (regularization) or more complex learning tasks (generalization).
End of explanation
"""
model = Sequential()
model.add(Convolution2D(96, 5, 5, border_mode='valid', input_shape = (1, 28, 28)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
model.add(Activation("relu"))
model.add(Convolution2D(192, 5, 5, border_mode='valid'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2,2)))
model.add(Activation("relu"))
model.add(Convolution2D(192, 3, 3, border_mode='valid'))
model.add(Activation("relu"))
model.add(Convolution2D(192, 1, 1, border_mode='valid'))
model.add(Activation("relu"))
model.add(Convolution2D(10, 1, 1, border_mode='valid'))
model.add(Activation("relu"))
model.add(Flatten())
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms)
"""
Explanation: III. Pure convolution
For reference, here is the architecture of a Pure Convolution network: Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/f398f296c84e53a14339d2c3c36e91a4/movement_detection.ipynb | bsd-3-clause | # Authors: Adonay Nunes <adonay.s.nunes@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# License: BSD-3-Clause
import os.path as op
import mne
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
from mne.preprocessing import annotate_movement, compute_average_dev_head_t
# Load data
data_path = bst_auditory.data_path()
data_path_MEG = op.join(data_path, 'MEG')
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
raw_fname1 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path_MEG, 'bst_auditory', 'S01_AEF_20131218_02.ds')
# read and concatenate two files, ignoring device<->head mismatch
raw = read_raw_ctf(raw_fname1, preload=False)
mne.io.concatenate_raws(
[raw, read_raw_ctf(raw_fname2, preload=False)], on_mismatch='ignore')
raw.crop(350, 410).load_data()
raw.resample(100, npad="auto")
"""
Explanation: Annotate movement artifacts and reestimate dev_head_t
Periods, where the participant moved considerably, are contaminated by low
amplitude artifacts. When averaging the magnetic fields, the more spread the
head position, the bigger the cancellation due to different locations.
Similarly, the covariance will also be affected by severe head movement,
and source estimation will suffer low/smeared coregistration accuracy.
This example uses the continuous head position indicators (cHPI) times series
to annotate periods of head movement, then the device to head transformation
matrix is estimated from the artifact-free segments. The new head position will
be more representative of the actual head position during the recording.
End of explanation
"""
# Get cHPI time series and compute average
chpi_locs = mne.chpi.extract_chpi_locs_ctf(raw)
head_pos = mne.chpi.compute_head_pos(raw.info, chpi_locs)
original_head_dev_t = mne.transforms.invert_transform(
raw.info['dev_head_t'])
average_head_dev_t = mne.transforms.invert_transform(
compute_average_dev_head_t(raw, head_pos))
fig = mne.viz.plot_head_positions(head_pos)
for ax, val, val_ori in zip(fig.axes[::2], average_head_dev_t['trans'][:3, 3],
original_head_dev_t['trans'][:3, 3]):
ax.axhline(1000 * val, color='r')
ax.axhline(1000 * val_ori, color='g')
# The green horizontal lines represent the original head position, whereas the
# red lines are the new head position averaged over all the time points.
"""
Explanation: Plot continuous head position with respect to the mean recording position
End of explanation
"""
mean_distance_limit = .0015 # in meters
annotation_movement, hpi_disp = annotate_movement(
raw, head_pos, mean_distance_limit=mean_distance_limit)
raw.set_annotations(annotation_movement)
raw.plot(n_channels=100, duration=20)
"""
Explanation: Plot raw data with annotated movement
End of explanation
"""
new_dev_head_t = compute_average_dev_head_t(raw, head_pos)
raw.info['dev_head_t'] = new_dev_head_t
mne.viz.plot_alignment(raw.info, show_axes=True, subject=subject,
trans=trans_fname, subjects_dir=subjects_dir)
"""
Explanation: After checking the annotated movement artifacts, calculate the new transform
and plot it:
End of explanation
"""
|
davidgutierrez/HeartRatePatterns | Jupyter/LoadDataMimic-II.ipynb | gpl-3.0 | import sys
sys.version_info
"""
Explanation: Cargue de datos s SciDB
1) Verificar Prerequisitos
Python
SciDB-Py requires Python 2.6-2.7 or 3.3
End of explanation
"""
import numpy as np
np.__version__
"""
Explanation: NumPy
tested with version 1.9 (1.13.1)
End of explanation
"""
import requests
requests.__version__
"""
Explanation: Requests
tested with version 2.7 (2.18.1) Required for using the Shim interface to SciDB.
End of explanation
"""
import pandas as pd
pd.__version__
"""
Explanation: Pandas (optional)
tested with version 0.15. (0.20.3) Required only for importing/exporting SciDB arrays as Pandas Dataframe objects.
End of explanation
"""
import scipy
scipy.__version__
"""
Explanation: SciPy (optional)
tested with versions 0.10-0.12. (0.19.0) Required only for importing/exporting SciDB arrays as SciPy sparse matrices.
End of explanation
"""
import scidbpy
scidbpy.__version__
from scidbpy import connect
"""
Explanation: 2) Importar scidbpy
pip install git+http://github.com/paradigm4/scidb-py.git@devel
End of explanation
"""
sdb = connect('http://localhost:8080')
"""
Explanation: conectarse al servidor de Base de datos
End of explanation
"""
import urllib.request # urllib2 in python2 the lib that handles the url stuff
target_url = "https://www.physionet.org/physiobank/database/mimic2wdb/matched/RECORDS-waveforms"
data = urllib.request.urlopen(target_url) # it's a file like object and works just like a file
lines = data.readlines();
line = str(lines[100])
"""
Explanation: 3) Leer archivo con cada una de las ondas
End of explanation
"""
carpeta,onda = line.replace('b\'','').replace('\'','').replace('\\n','').split("/")
onda
"""
Explanation: Quitarle caracteres especiales
End of explanation
"""
import wfdb
sig, fields = wfdb.srdsamp(onda,pbdir='mimic2wdb/matched/'+carpeta) #, sampfrom=11000
print(sig)
print("signame: " + str(fields['signame']))
print("units: " + str(fields['units']))
print("fs: " + str(fields['fs']))
print("comments: " + str(fields['comments']))
print("fields: " + str(fields))
"""
Explanation: 4) Importar WFDB para conectarse a physionet
End of explanation
"""
signalII = None
try:
signalII = fields['signame'].index("II")
except ValueError:
print("List does not contain value")
if(signalII!=None):
print("List contain value")
"""
Explanation: Busca la ubicacion de la señal tipo II
End of explanation
"""
array = wfdb.processing.normalize(x=sig[:, signalII], lb=-2, ub=2)
arrayNun = array[~np.isnan(array)]
arrayNun = np.trim_zeros(arrayNun)
arrayNun
"""
Explanation: Normaliza la señal y le quita los valores en null
End of explanation
"""
ondaName = onda.replace("-", "_")
if arrayNun.size>0 :
sdb.input(upload_data=array).store(ondaName,gc=False)
# sdb.iquery("store(input(<x:int64>[i], '{fn}', 0, '{fmt}'), "+ondaName+")", upload_data=array)
"""
Explanation: Cambiar los guiones "-" por raya al piso "_" porque por algun motivo SciDB tiene problemas con estos caracteres
Si el arreglo sin valores nulos no queda vacio lo sube al SciDB
End of explanation
"""
|
jonathansick/androcmd | notebooks/Brick 23 IR V2.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format='retina'
# %config InlineBackend.figure_format='svg'
import os
import time
from glob import glob
import numpy as np
brick = 23
STARFISH = os.getenv("STARFISH")
isoc_dir = "b23ir2_isoc"
lib_dir = "b23ir2_lib"
synth_dir = "b23ir2_synth"
fit_dir = "b23ir2_fit"
wfc3_bands = ['F110W', 'F160W']
"""
Explanation: Fitting the Brick 23 IR CMD (V2)
End of explanation
"""
from astropy.coordinates import Distance
import astropy.units as u
from padova import AgeGridRequest, IsochroneRequest
from starfisher import LibraryBuilder
z_grid = [0.012, 0.015, 0.019, 0.024, 0.03]
delta_gyr = 0.5
late_ages = np.log10(np.arange(0.5 * 1e9 + delta_gyr, 13e9, delta_gyr * 1e9))
print late_ages
if not os.path.exists(os.path.join(STARFISH, isoc_dir)):
for z in z_grid:
# Old ages in linear grid
for logage in late_ages:
r = IsochroneRequest(z, logage,
phot='wfc3', photsys_version='odfnew')
r.isochrone.export_for_starfish(os.path.join(STARFISH, isoc_dir),
bands=wfc3_bands)
d = Distance(785 * u.kpc)
builder = LibraryBuilder(isoc_dir, lib_dir,
nmag=len(wfc3_bands),
dmod=d.distmod.value,
iverb=3)
if not os.path.exists(builder.full_isofile_path):
builder.install()
"""
Explanation: Download Isochrones
Isochrones are distributed in a linear grid of 0.5 Gyr thereafter between 0.5 Gyr and 13 Gyr. We use a range of five metallicities centered on solar ($z=0.019$).
End of explanation
"""
from collections import namedtuple
from starfisher import Lockfile
from starfisher import Synth
from starfisher import ExtinctionDistribution
from starfisher import ExtantCrowdingTable
from starfisher import ColorPlane
from m31hst.phatast import PhatAstTable
if not os.path.exists(os.path.join(STARFISH, synth_dir)):
os.makedirs(os.path.join(STARFISH, synth_dir))
# No binning in our lockfile
lockfile = Lockfile(builder.read_isofile(), synth_dir, unbinned=True)
# No extinction, yet
young_av = ExtinctionDistribution()
old_av = ExtinctionDistribution()
rel_extinction = np.ones(len(wfc3_bands), dtype=float)
for av in (young_av, old_av):
av.set_uniform(0.)
# Use PHAT AST from the outer field
crowd_path = os.path.join(synth_dir, "crowding.dat")
full_crowd_path = os.path.join(STARFISH, crowd_path)
tbl = PhatAstTable()
tbl.write_crowdfile_for_field(full_crowd_path, 0,
bands=('f110w', 'f160w'))
crowd = ExtantCrowdingTable(crowd_path)
# Define CMD planes
Lim = namedtuple('Lim', 'x y')
ir_lim = Lim(x=(0.3, 1.3), y=(24, 16.5))
ir_cmd = ColorPlane((wfc3_bands.index('F110W'),
wfc3_bands.index('F160W')),
wfc3_bands.index('F160W'),
ir_lim.x,
(min(ir_lim.y), max(ir_lim.y)),
28.,
suffix='f110f160',
x_label=r'$\mathrm{F110W}-\mathrm{F160W}$',
y_label=r'$\mathrm{F110W}$',
dpix=0.1)
ir_cmd.mask_region((-1., 0.), (22., 16))
ir_cmd.mask_region((0, 0.3), (22., 16))
ir_cmd.mask_region((0.3, 0.7), (20., 16))
ir_cmd.mask_region((0.7, 0.8), (19., 16))
ir_cmd.mask_region((0.8, 0.9), (18., 16))
ir_cmd.mask_region((1.1, 1.5), (28, 21))
colour_planes = [ir_cmd]
synth = Synth(synth_dir, builder, lockfile, crowd,
rel_extinction,
young_extinction=young_av,
old_extinction=old_av,
planes=colour_planes,
mass_span=(0.08, 150.),
nstars=10000000)
if len(glob(os.path.join(STARFISH, synth_dir, "z*"))) == 0:
synth.run_synth(n_cpu=4)
# synth.plot_all_hess(os.path.join(STARFISH, synth_dir, 'hess'))
"""
Explanation: Build the Isochrone Library and Synthesize CMD planes
End of explanation
"""
from astropy.table import Table
from m31hst import phat_v2_phot_path
if not os.path.exists(os.path.join(STARFISH, fit_dir)):
os.makedirs(os.path.join(STARFISH, fit_dir))
data_root = os.path.join(fit_dir, "b23ir.")
full_data_path = os.path.join(STARFISH, '{0}f110f160'.format(data_root))
brick_table = Table.read(phat_v2_phot_path(brick), format='fits')
# Only use stars within the fitting box
c = brick_table['f110w_vega'] - brick_table['f160w_vega']
m = brick_table['f160w_vega']
sel = np.where((c > min(ir_lim.x)) & (c < max(ir_lim.x)) &
(m > min(ir_lim.y)) & (m < max(ir_lim.y)))[0]
brick_table = brick_table[sel]
print("Fitting {0:d} stars".format(len(brick_table)))
if not os.path.exists(full_data_path):
phot_dtype = np.dtype([('x', np.float), ('y', np.float)])
photdata = np.empty(len(brick_table), dtype=phot_dtype)
photdata['x'][:] = brick_table['f110w_vega'] - brick_table['f160w_vega']
photdata['y'][:] = brick_table['f160w_vega']
np.savetxt(full_data_path, photdata, delimiter=' ', fmt='%.4f')
import matplotlib as mpl
import matplotlib.pyplot as plt
from androcmd.plot import contour_hess
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
contour_hess(ax, brick_table['f110w_vega'] - brick_table['f160w_vega'],
brick_table['f160w_vega'], ir_lim.x, (max(ir_lim.y), min(ir_lim.y)),
plot_args={'ms': 3})
ir_cmd.plot_mask(ax)
# ir_cmd.plot_mask(ax, imshow_args=dict(origin='upper'))
ax.set_xlabel(r'$\mathrm{F110W}-\mathrm{F160W}$')
ax.set_ylabel(r'$\mathrm{F110W}$')
ax.set_xlim(ir_lim.x)
ax.set_ylim(ir_lim.y)
fig.show()
"""
Explanation: Export the dataset for StarFISH
End of explanation
"""
from starfisher import SFH, Mask
print colour_planes
mask = Mask(colour_planes)
sfh = SFH(data_root, synth, mask, fit_dir)
if not os.path.exists(sfh.full_outfile_path):
sfh.run_sfh()
sfh_table = sfh.solution_table()
from starfisher.sfhplot import LinearSFHCirclePlot
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
cp = LinearSFHCirclePlot(sfh_table)
cp.plot_in_ax(ax, max_area=800)
ax.set_ylim(-0.5, 0.4)
fig.show()
import cubehelix
cmapper = lambda: cubehelix.cmap(startHue=240,endHue=-300,minSat=1,maxSat=2.5,minLight=.3,maxLight=.8,gamma=.9)
from starfisher.sfhplot import ChiTriptykPlot
fig = plt.figure(figsize=(7, 5))
ctp = ChiTriptykPlot(sfh.full_chi_path, 1, ir_cmd.x_span, ir_cmd.y_span,
ir_cmd.dpix, ir_cmd.x_label, ir_cmd.y_label,
flipy=True)
ax_obs, ax_mod, ax_chi = ctp.setup_axes(fig)
ctp.plot_obs_in_ax(ax_obs, cmap=cmapper())
ctp.plot_mod_in_ax(ax_mod, cmap=cmapper())
ctp.plot_chi_in_ax(ax_chi, cmap=cubehelix.cmap())
ax_obs.text(0.0, 1.01, "Observed", transform=ax_obs.transAxes, size=8, ha='left')
ax_mod.text(0.0, 1.01, "Model", transform=ax_mod.transAxes, size=8, ha='left')
ax_chi.text(0.0, 1.01, r"$\log \chi^2$", transform=ax_chi.transAxes, size=8, ha='left')
fig.show()
"""
Explanation: Run StarFISH SFH
End of explanation
"""
|
random-forests/tensorflow-workshop | archive/zurich/01_tensorflow_warmup.ipynb | apache-2.0 | import numpy as np
import tensorflow as tf
"""
Explanation: TensorFlow warmup
This is a notebook to get you started with TensorFlow.
End of explanation
"""
# This is for graph visualization.
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
"""
Explanation: Graph visualisation
This is for visualizing a TF graph in an iPython notebook; the details are not interesting.
(Borrowed from the DeepDream iPython notebook)
End of explanation
"""
# This code only creates the graph. No computation is done yet.
tf.reset_default_graph()
x = tf.constant(7.0, name="x")
y = tf.add(x, tf.constant(2.0, name="y"), name="add_op")
z = tf.subtract(x, tf.constant(2.0, name="z"), name="sub_op")
w = tf.multiply(y, tf.constant(3.0)) # If no name is given, TF will chose a unique name for us.
# Visualize the graph.
show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: The execution model
TensorFlow allows you to specify graphs representing computations and provides a runtime for efficiently executing those graphs across a range of hardware.
The graph nodes are Ops and the edges are Tensors.
End of explanation
"""
# We can also use shorthand syntax
# Notice the default names TF chooses for us.
tf.reset_default_graph()
x = tf.constant(7.0)
y = x + 2
z = x - 2
w = y * 3
# Visualize the graph.
show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: Ops
Every node in the computation graph corresponds to an op. tf.constant, tf.sub and tf.add are Ops.
There are many built-in Ops for low-level manipulation of numeric Tensors, e.g.:
Arithmetic (with matrix and complex number support)
Tensor operations (reshape, reduction, casting)
Image manipulation (cropping, sizing, coloring, ...)
Batching (arranging training examples into batches)
Almost every object in TensorFlow is an op. Even things that don't look like they are!
TensorFlow uses the op abstraction for a surprising range of things:
Queues
Variables
Variable initializers
This can be confusing at first. For now, remember that because many things are Ops, some things have to be done in a somewhat non-obvious fashion.
A list of TF Ops can be found at https://www.tensorflow.org/api_docs/python/.
Tensors
x, y, w and z are Tensors - a description of a multidimensional array.
A Tensor is a symbolic handle to one of the outputs of an Operation. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow tf.Session.
Tensor shapes can usually be derived from the computation graph. This is called shape inference.
For example, if you perform a matrix multiply of a [4,2] and a [2,3] Tensor,
then TensorFlow infers that the output Tensor has shape [4,3].
End of explanation
"""
tf.reset_default_graph()
x = tf.constant(7.0, name="x")
y = tf.add(x, tf.constant(2.0, name="y"), name="add_op")
z = y * 3.0
# Create a session, which is the context for running a graph.
with tf.Session() as sess:
# When we call sess.run(y) the session is computing the value of Tensor y.
print(sess.run(y))
print(sess.run(z))
"""
Explanation: Session
The actual computations are carried out in a Session.
Each session has exactly one graph, but it is completely valid to have multiple disconnected subgraphs in the same graph.
The same graph can be used to initialize two different Sessions, yielding two independent environments with independent states.
Unless specified otherwise, nodes and edges are added to the default graph.
By default, a Session will use the default graph.
End of explanation
"""
tf.reset_default_graph()
# tf.get_variable returns a tf.Variable object. Creating such objects directly
# is possible, but does not have a sharing mechanism. Hence, tf.get_variable is
# preferred.
x = tf.get_variable("x", shape=[], initializer=tf.zeros_initializer())
assign_x = tf.assign(x, 10, name="assign_x")
z = tf.add(x, 1, name="z")
# Variables in TensorFlow need to be initialized first. The following op
# conveniently takes care of that and initializes all variables.
init = tf.global_variables_initializer()
# Visualize the graph.
show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: Variables
Variables maintain state in a Session across multiple calls to Session.run().
You add a variable to the graph by constructing an instance of the class tf.Variable.
For example, model parameters (weights and biases) are stored in Variables.
We train the model with multiple calls to Session.run(), and each call updates the model parameters.
For more information on Variables see https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
with tf.Session() as sess:
# Assign an initial value to the instance of the variable in this session,
# determined by the initializer provided above.
sess.run(init)
print (sess.run(z))
"""
Explanation: The variable we added represents a variable in the computational graph, but is not an instance of the variable.
The computational graph represents a program, and the variable will exist when we run the graph in a session.
The value of the variable is stored in the session.
Take a guess: what is the output of the code below?
End of explanation
"""
with tf.Session() as sess:
# When we create a new session we need to initialize all Variables again.
sess.run(init)
sess.run(assign_x)
print (sess.run(z))
"""
Explanation: The output might surprise you: it's 1.0! The op assign_x is not a dependency of x or z, and hence is never evaluated.
One way to solve this problem is:
End of explanation
"""
tf.reset_default_graph()
x = tf.placeholder("float", None)
y = x * 2
# Visualize the graph.
show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: Placeholders
So far you have seen Variables, but there is a more basic construct: the placeholder. A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.
End of explanation
"""
with tf.Session() as session:
result = session.run(y, feed_dict={x: [1, 2, 3]})
print(result)
"""
Explanation: At execution time, we feed data into the graph using a feed_dict: for each placeholder, it contains the value we want to assign to it. This can be useful for batching up data, as you will see later.
End of explanation
"""
tf.reset_default_graph()
q = tf.FIFOQueue(3, "float", name="q")
initial_enqueue = q.enqueue_many(([0., 0., 0.],), name="init")
x = q.dequeue()
y = x + 1
q_inc = q.enqueue([y])
with tf.Session() as session:
session.run(initial_enqueue)
outputs = []
for _ in range(20):
_, y_val = session.run([q_inc, y])
outputs.append(y_val)
print(outputs)
# Visualize the graph.
show_graph(tf.get_default_graph().as_graph_def())
"""
Explanation: Queues
Queues are TensorFlow’s primitives for writing asynchronous code.
Queues provide Ops with queue semantics.
Queue Ops, like all Ops, need to be executed to do anything.
Are often used for asynchronously processing data (e.g., an input pipeline with data augmentation).
Queues are stateful graph nodes. The state is associated with a session.
There are several different types of queues, e.g., FIFOQueue and RandomShuffleQueue.
See the Threading and Queues for more details.
Note: You probably will never need to directly use these low level implementations of queues yourself.
Do note, however, that several important operations (for example, reading and batching) are implemented as queues.
End of explanation
"""
tf.reset_default_graph()
number_to_check = 29
# Define graph.
a = tf.Variable(number_to_check, dtype=tf.int32)
pred = tf.equal(0, tf.mod(a, 2))
b = tf.cast(
tf.cond(
pred,
lambda: tf.div(a, 2),
lambda: tf.add(tf.multiply(a, 3), 1)),
tf.int32)
assign_op = tf.assign(a, b)
with tf.Session() as session:
# 1. Implement graph execution.
pass
"""
Explanation: Exercise: Collatz Conjecture
And now some fun! Collatz conjecture states that after applying the following rule
$f(n) = \begin{cases} n/2 &\text{if } n \equiv 0 \pmod{2},\ 3n+1 & \text{if } n\equiv 1 \pmod{2} .\end{cases}$
a finite number of times to any given number, we will end up at $1$ (cf. https://xkcd.com/710/).
Implement the checking routine in TensorFlow (i.e. implement some code that given a number, checks that it satisfies Collatz conjecture).
Bonus: use a queue.
End of explanation
"""
|
DiXiT-eu/collatex-tutorial | unit8/unit8-collatex-and-XML/Read files.ipynb | gpl-3.0 | import os
os.listdir('partonopeus')
"""
Explanation: Loading and parsing XML files from the file system
Our XML files are in a subdirectory called 'partonopeus'. We load the os library and use its listdir() method to verify the contents of that directory.
End of explanation
"""
inputFiles = {}
for inputFile in os.listdir('partonopeus'):
siglum = inputFile[0]
contents = open('partonopeus/' + inputFile,'rb').read()
inputFiles[siglum] = contents
"""
Explanation: We create a dictionary to hold our input files, using the single-letter filename before the '.xml' extension as the key and the file itself as the value. The lxml library that we use to parse XML requires that we open the XML file for reading in bytes mode.
End of explanation
"""
from lxml import etree
print(etree.tostring(etree.XML(inputFiles['A'])))
"""
Explanation: We load the lxml library and use the .XML() method to parse the file and the .tostring() method to stringify the results so that we can examine them. In Real Life, if we need to manipulate the XML (e.g., to search it with XPath), we would keep it as XML. That is, the tostring() process is used here just to create something human-readable for pedagogical purposes.
End of explanation
"""
|
lionell/laboratories | eco_systems/dima4.ipynb | mit | from scipy.integrate import ode
birth_rate = 128
death_rate = 90
intraspecific_competition = 2
ps = [birth_rate, death_rate, intraspecific_competition]
def f(t, N, ps):
return ps[0] * (N ** 2) / (N + 1) - ps[1] * N - ps[2] * (N ** 2)
def solve(N0, t0=0, t1=1, h=0.05):
r = ode(f).set_integrator('dopri5')
r.set_initial_value(N0, t0).set_f_params(ps)
N = [N0]
t = [t0]
while r.successful() and r.t < t1:
t.append(r.t + h)
N.append(r.integrate(r.t + h))
return N, t
"""
Explanation: <center>Частина І</center>
$$ f(t, N, \alpha, \beta, \gamma) = \frac{\alpha N^{2}}{N + 1} - \beta N - \gamma N^{2} $$
End of explanation
"""
num_part = ((ps[0] - ps[1] - ps[2]) ** 2 - 4*ps[1]*ps[2]) ** 0.5
L = (-num_part - ps[0] + ps[1] + ps[2]) / (-2 * ps[2])
K = (num_part - ps[0] + ps[1] + ps[2]) / (-2 * ps[2])
if K < L:
L, K = K, L
print("Нижня межа: {}, верхня межа: {}".format(L, K))
L, K
options = [
[1./4. * L, "< L/2"],
[3./4. * L, "> L/2"],
[L, "L"],
[1./4. * (K + L), "< (K + L)/2"],
[3./4. * (K + L), "> (K + L)/2"],
[K, "K"],
[1.25 * K, "> K"]
]
options
import matplotlib.pyplot as plt
t0 = 0
t1 = 0.5
fig, ax = plt.subplots()
lines=[]
for ind, opt in enumerate(options):
N0 = opt[0]
def_text = opt[1]
N, t = solve(N0, h=0.01)
lines.append(ax.plot(t, N, label=def_text)[0])
ax.legend(handles=lines)
plt.show()
"""
Explanation: $$ L = \frac{-\sqrt{(\alpha - \beta - \gamma)^2 - 4\beta\gamma} - \alpha + \beta + \gamma}{-2\gamma} $$
$$ K = \frac{\sqrt{(\alpha - \beta - \gamma)^2 - 4\beta\gamma} - \alpha + \beta + \gamma}{-2\gamma} $$
End of explanation
"""
options = [
[100, "N(0) = 100"],
[140, "N(0) = 140"],
[180, "N(0) = 180"]
]
t1 = 24
def f(t, N):
return -0.056 * N + 0.0004 * (N**2)
def solve(N0, t0=0, t1=1, h=0.05):
r = ode(f).set_integrator('vode', method='bdf')
r.set_initial_value(N0, t0)
N = [N0]
t = [t0]
while r.successful() and r.t < t1:
t.append(r.t + h)
N.append(r.integrate(r.t + h))
return N, t
plt.gcf().clear()
fig, ax = plt.subplots()
lines = []
for ind, opt in enumerate(options):
N0 = opt[0]
def_text = opt[1]
N, t = solve(N0, t0=0, t1=t1, h=0.01)
lines.append(ax.plot(t, N, label=def_text)[0])
ax.legend(handles=lines)
plt.show()
"""
Explanation: <center>Частина ІI</center>
$$ \frac{dN}{dt} = -0.056 * N + 0.0004 * N^2 $$
End of explanation
"""
|
jeffzhengye/pylearn | tensorflow_learning/tf2/notebooks/transfer_learning-中文-checkpoint.ipynb | unlicense | import numpy as np
import tensorflow as tf
from tensorflow import keras
"""
Explanation: 迁移学习与微调,Transfer learning & fine-tuning
Author: fchollet<br>
Date created: 2020/04/15<br>
Last modified: 2020/05/12<br>
Description: Complete guide to transfer learning & fine-tuning in Keras. <br>
翻译: 叶正
设置,Setup
End of explanation
"""
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights), layer.weights)
print("trainable_weights:", len(layer.trainable_weights),layer.trainable_weights)
print("non_trainable_weights:", len(layer.non_trainable_weights))
"""
Explanation: 介绍,Introduction
Transfer learning consists of taking features learned on one problem, and
leveraging them on a new, similar problem. For instance, features from a model that has
learned to identify racoons may be useful to kick-start a model meant to identify
tanukis.
迁移学习 就是把从一个问题中学习到的特征使用新的、类似的问题。例如,一个识别浣熊的模型中学到的特征,可能用来初始化识别狸猫的模型中是有用的。
Transfer learning is usually done for tasks where your dataset has too little data to
train a full-scale model from scratch.
迁移学习通常在当你只有较少的数据难以从头开始训练一个模型的时候使用。
The most common incarnation of transfer learning in the context of deep learning is the
following workflow:
深度学习中最典型的迁移学习工作流如下:
从已经训练好的模型中取出部分层
固定这些层的参数,在后面的训练中不破坏之前学到的信息
在这些固定的层上面,添加一些新的、可以训练的层。这些层将学习如何把之前学到的老的特征变成在新的数据集上的预测。
在新的数据集上,开始训练新的层。
Take layers from a previously trained model.
Freeze them, so as to avoid destroying any of the information they contain during
future training rounds.
Add some new, trainable layers on top of the frozen layers. They will learn to turn
the old features into predictions on a new dataset.
Train the new layers on your dataset.
A last, optional step, is fine-tuning, which consists of unfreezing the entire
model you obtained above (or part of it), and re-training it on the new data with a
very low learning rate. This can potentially achieve meaningful improvements, by
incrementally adapting the pretrained features to the new data.
最后,也是可选的步骤,就是所谓的微调fine-tuning。这里包括unfreezing整个模型(或者部分模型),并且使用非常小的学习率重新训练。通过逐步地在新的数据上调节预训练好的特征,有潜力获得可观的提高。
First, we will go over the Keras trainable API in detail, which underlies most
transfer learning & fine-tuning workflows.
首先,我们将复习keras trainable API,这套api是大多数迁移学习和微调工作量的基础。
Then, we'll demonstrate the typical workflow by taking a model pretrained on the
ImageNet dataset, and retraining it on the Kaggle "cats vs dogs" classification
dataset.
然后,展示典型的工作量:选取一个从ImageNet 数据集上训练好的模型,然后在Kaggle的"cats vs dogs"分类数据集上重新训练
该实例从下列例子中修改的来。
Deep Learning with Python
and the 2016 blog post
"building powerful image classification models using very little
data".
Freezing layers: understanding the trainable attribute
冻住一些层:理解trainable属性
Layers & models have three weight attributes:
层和模型有三个权重属性:
weights :层所有权重变量的列表 list
trainable_weights : 训练过程中,以上中可以更新(通过梯度下降)来最小化损失函数。
non_trainable_weights: 以上中不能被训练的固定参数。
通常,这些变量在前向传播过程中被更新。
weights is the list of all weights variables of the layer.
trainable_weights is the list of those that are meant to be updated (via gradient
descent) to minimize the loss during training.
non_trainable_weights is the list of those that aren't meant to be trained.
Typically they are updated by the model during the forward pass.
Example: the Dense layer has 2 trainable weights (kernel & bias)
例子:Dense层有2个可训练的权重(kernel & bias)
End of explanation
"""
layer = keras.layers.BatchNormalization()
layer.build((None, 4)) # Create the weights
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
"""
Explanation: 通常,所有的权重都是可以训练的权重。keras自带的layer中只有BatchNormalization有不可训练的权重。BatchNormalization使用不可训练的权重来跟踪训练过程中输入的mean和variance。
学习在自定义layers,如何使用不可训练权重,请看
guide to writing new layers from scratch.
In general, all weights are trainable weights. The only built-in layer that has
non-trainable weights is the BatchNormalization layer. It uses non-trainable weights
to keep track of the mean and variance of its inputs during training.
To learn how to use non-trainable weights in your own custom layers, see the
guide to writing new layers from scratch.
Example: the BatchNormalization layer has 2 trainable weights and 2 non-trainable
weights
例子:BatchNormalization层有3个可训练权重和2个不可训练权重
End of explanation
"""
layer = keras.layers.Dense(3)
layer.build((None, 4)) # Create the weights
layer.trainable = False # Freeze the layer
print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
"""
Explanation: Layers & models also feature a boolean attribute trainable. Its value can be changed.
Setting layer.trainable to False moves all the layer's weights from trainable to
non-trainable. This is called "freezing" the layer: the state of a frozen layer won't
be updated during training (either when training with fit() or when training with
any custom loop that relies on trainable_weights to apply gradient updates).
Example: setting trainable to False
层和模型都有一个布尔型的属性trainable,其值可以改变。把layer.trainable的值设置为False就可以把该层所有权重都变成不可训练。该过程我们称为冷冻"freezing"该层:该层的状态在训练过程中不会改变。
例子:设置trainable to `False
End of explanation
"""
# Make a model with 2 layers
layer1 = keras.layers.Dense(3, activation="relu")
layer2 = keras.layers.Dense(3, activation="sigmoid")
model = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])
# Freeze the first layer
layer1.trainable = False
# Keep a copy of the weights of layer1 for later reference
initial_layer1_weights_values = layer1.get_weights()
# Train the model
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.random((2, 3)), np.random.random((2, 3)))
# Check that the weights of layer1 have not changed during training
final_layer1_weights_values = layer1.get_weights()
np.testing.assert_allclose(
initial_layer1_weights_values[0], final_layer1_weights_values[0]
)
np.testing.assert_allclose(
initial_layer1_weights_values[1], final_layer1_weights_values[1]
)
"""
Explanation: When a trainable weight becomes non-trainable, its value is no longer updated during
training.
当可训练的权重被设置成不可训练non-trainable,其值在训练过程中不再可以更新。
End of explanation
"""
inner_model = keras.Sequential(
[
keras.Input(shape=(3,)),
keras.layers.Dense(3, activation="relu"),
keras.layers.Dense(3, activation="relu"),
]
)
model = keras.Sequential(
[keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation="sigmoid"),]
)
model.trainable = False # Freeze the outer model
assert inner_model.trainable == False # All layers in `model` are now frozen
assert inner_model.layers[0].trainable == False # `trainable` is propagated recursively
"""
Explanation: 不要混淆layer.trainable属性和layer.__call__()中参数training ,后者是控制该层在推理或者训练模式下是否运行前向过程。
Do not confuse the layer.trainable attribute with the argument training in
layer.__call__() (which controls whether the layer should run its forward pass in
inference mode or training mode). For more information, see the
Keras FAQ.
Recursive setting of the trainable attribute 递归地设置 trainable
If you set trainable = False on a model or on any layer that has sublayers,
all children layers become non-trainable as well.
如果我们再一个模型或者任何一个layer上设置trainable = False,则其所有子layers 也会变成non-trainable
Example:
End of explanation
"""
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
train_ds, validation_ds, test_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation and 10% for test
split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
as_supervised=True, # Include labels
)
print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds))
print(
"Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds)
)
print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds))
"""
Explanation: 典型的迁移学习流程, The typical transfer-learning workflow
This leads us to how a typical transfer learning workflow can be implemented in Keras:
Instantiate a base model and load pre-trained weights into it.
Freeze all layers in the base model by setting trainable = False.
Create a new model on top of the output of one (or several) layers from the base
model.
Train your new model on your new dataset.
keras中典型迁移学习流程实现如下:
生成一个基础模型并加载预训练好的权重
冷冻该模型中所有参数:trainable = False
在该基础模型的输出层(或者几个不同的输出层)基础上创建新的模型
在新的数据集上训练新的模型。
Note that an alternative, more lightweight workflow could also be:
Instantiate a base model and load pre-trained weights into it.
Run your new dataset through it and record the output of one (or several) layers
from the base model. This is called feature extraction.
Use that output as input data for a new, smaller model.
注意一个可替换且更轻量级的流程如下:
生成一个基础模型并加载预训练好的权重
把新数据放入该模型,然后记录该模型的输出(或者几个不同的输出层的输出)。这个过程称作为 特征抽取
使用以上输出为一个新的且更小的模型的输入
A key advantage of that second workflow is that you only run the base model once on
your data, rather than once per epoch of training. So it's a lot faster & cheaper.
该流程的一个关键好处是:你只需要运行base模型一次,而不需要每个epoch的训练中都运行。所以将会更快和低成本。
An issue with that second workflow, though, is that it doesn't allow you to dynamically
modify the input data of your new model during training, which is required when doing
data augmentation, for instance. Transfer learning is typically used for tasks when
your new dataset has too little data to train a full-scale model from scratch, and in
such scenarios data augmentation is very important. So in what follows, we will focus
on the first workflow.
但是第二个流程的问题是,在训练过程中我们无法动态的修改模型的输入,也就是说我们无法做数据的增强等操作。
迁移学习一般用在只有很少数据可用于训练的任务中,因为数据太少我们无法从头开始训练一个好的模型,而且该条件下旺旺数据增强是非常重要的。接下来,我们主要关注第一个流程。
Here's what the first workflow looks like in Keras:
First, instantiate a base model with pre-trained weights.
以下是第一个流程在Keras中使用的样例:
首先,使用预训练好的权重实例化一个基础模型:
python
base_model = keras.applications.Xception(
weights='imagenet', # Load weights pre-trained on ImageNet.
input_shape=(150, 150, 3),
include_top=False) # Do not include the ImageNet classifier at the top.
Then, freeze the base model.
然后,冷冻该模型。
python
base_model.trainable = False
Create a new model on top.
在旧模型上,创建一个新的模型:
```python
inputs = keras.Input(shape=(150, 150, 3))
We make sure that the base_model is running in inference mode here,
by passing training=False. This is important for fine-tuning, as you will
learn in a few paragraphs.
x = base_model(inputs, training=False)
Convert features of shape base_model.output_shape[1:] to vectors
x = keras.layers.GlobalAveragePooling2D()(x)
A Dense classifier with a single unit (binary classification)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
```
Train the model on new data.
在新数据上训练该模型:
python
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()])
model.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)
微调,Fine-tuning
Once your model has converged on the new data, you can try to unfreeze all or part of
the base model and retrain the whole model end-to-end with a very low learning rate.
一旦模型在新的数据上收敛了,我们就可以解冻所有或者部分base模型,并使用非常小的学习率端到端地重新训练整个模型。
This is an optional last step that can potentially give you incremental improvements.
It could also potentially lead to quick overfitting -- keep that in mind.
最后一步是可选的,但有潜力进一步提高模型的性能,也可以导致overfitting过学习。
It is critical to only do this step after the model with frozen layers has been
trained to convergence. If you mix randomly-initialized trainable layers with
trainable layers that hold pre-trained features, the randomly-initialized layers will
cause very large gradient updates during training, which will destroy your pre-trained
features.
重要的是仅当带冷冻层的模型训练到收敛再进行这一步。
如果把随机初始化的可训练的层与保存预训练特征的可训练层会在一起的话,随机初始化的层会导致非常大的梯度更新,
从而破坏预训练的特征。
It's also critical to use a very low learning rate at this stage, because
you are training a much larger model than in the first round of training, on a dataset
that is typically very small.
As a result, you are at risk of overfitting very quickly if you apply large weight
updates. Here, you only want to readapt the pretrained weights in an incremental way.
另外一个关键点是,微调阶段使用非常小的学习率,因为该阶段要训练的模型比第一阶段大不少而且数据集较小。
如果使用较大的权重更新,过学习的风险很大。这里,我们只应该逐步的改造预训练好的权重。
This is how to implement fine-tuning of the whole base model:
以下是如何实现对整个基础模型的微调
```python
Unfreeze the base model,解冻基础模型
base_model.trainable = True
It's important to recompile your model after you make any changes
to the trainable attribute of any inner layer, so that your changes
are take into account
重新编译模型,使得更改生效
model.compile(optimizer=keras.optimizers.Adam(1e-5), # Very low learning rate
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()])
Train end-to-end. Be careful to stop before you overfit!
端到端训练,overfit前停止训练
model.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)
```
Important note about compile() and trainable
重要注意:compile() 和 trainable
Calling compile() on a model is meant to "freeze" the behavior of that model. This
implies that the trainable
attribute values at the time the model is compiled should be preserved throughout the
lifetime of that model,
until compile is called again. Hence, if you change any trainable value, make sure
to call compile() again on your
model for your changes to be taken into account.
调用compile()意味着其对应的模型的行为就固定了。这意味着trainable属性在模型编译后就不能改了,除非compile被重新调用。因此,修改trainable 后必须重新编译。
Important notes about BatchNormalization layer
BatchNormalization层的重要注意事项
Many image models contain BatchNormalization layers. That layer is a special case on
every imaginable count. Here are a few things to keep in mind.
很多图像模型包含BatchNormalization层。
以下几点需要注意。
BatchNormalization 包含2个在训练过程中需要更新得non-trainable的权重。这些事跟踪输入的均值和方差。
当bn_layer.trainable = False时, BatchNormalization层将运行推理模式,并且不会更新mean & variance。
这跟其它层基本都不一样,因为权重可训练性与推理/训练模型是两个正交的概念。
weight trainability & inference/training modes are two orthogonal concepts.
但是这两个概念在BatchNormalization中纠缠在一起
当我们解冻一个包含BatchNormalization层的模型来做微调时,需要记住在推理模式时设置training=False。
否则学习到的模型整个都会被破坏。
在该指南的的最后面,将在一个端到端的例子中实践展示这种模式。
BatchNormalization contains 2 non-trainable weights that get updated during
training. These are the variables tracking the mean and variance of the inputs.
When you set bn_layer.trainable = False, the BatchNormalization layer will
run in inference mode, and will not update its mean & variance statistics. This is not
the case for other layers in general, as
weight trainability & inference/training modes are two orthogonal concepts.
But the two are tied in the case of the BatchNormalization layer.
When you unfreeze a model that contains BatchNormalization layers in order to do
fine-tuning, you should keep the BatchNormalization layers in inference mode by
passing training=False when calling the base model.
Otherwise the updates applied to the non-trainable weights will suddenly destroy
what the model has learned.
You'll see this pattern in action in the end-to-end example at the end of this guide.
Transfer learning & fine-tuning with a custom training loop
迁移学习&微调中自定义训练loop
If instead of fit(), you are using your own low-level training loop, the workflow
stays essentially the same. You should be careful to only take into account the list
model.trainable_weights when applying gradient updates:
如果不使用fit(),需要使用自定义的底层的训练循环,该流程本质上还是一样。
你需要非常小心,在应用梯度更新时,仅对 model.trainable_weights列表中的更新。
```python
Create base model
base_model = keras.applications.Xception(
weights='imagenet',
input_shape=(150, 150, 3),
include_top=False)
Freeze base model
base_model.trainable = False
Create new model on top.
inputs = keras.Input(shape=(150, 150, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam()
Iterate over the batches of a dataset.
for inputs, targets in new_dataset:
# Open a GradientTape.
with tf.GradientTape() as tape:
# Forward pass.
predictions = model(inputs)
# Compute the loss value for this batch.
loss_value = loss_fn(targets, predictions)
# Get gradients of loss wrt the *trainable* weights.
gradients = tape.gradient(loss_value, model.trainable_weights)
# Update the weights of the model.
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
```
Likewise for fine-tuning.
微调也类似
An end-to-end example: fine-tuning an image classification model on a cats vs. dogs
端到端例子:在cats vs. dogs数据集上微调图像分类模型
dataset 数据集
To solidify these concepts, let's walk you through a concrete end-to-end transfer
learning & fine-tuning example. We will load the Xception model, pre-trained on
ImageNet, and use it on the Kaggle "cats vs. dogs" classification dataset.
为了巩固这些概念,我们一起过一个具体的端到端的迁移学习和微调的例子。本例子中使用Xception模型,在ImageNet上训练的,并且使用它来解决kaggle的"cats vs. dogs"的数据集分类问题。
Getting the data
获取数据
First, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset,
you'll probably want to use the utility
tf.keras.preprocessing.image_dataset_from_directory to generate similar labeled
dataset objects from a set of images on disk filed into class-specific folders.
首先,使用TFDS获取该数据集。如果你已经有了该数据集,可以使用tf.keras.preprocessing.image_dataset_from_directory来从硬盘上图像集中生产类似的带标签的数据集对象。
Transfer learning is most useful when working with very small datasets. To keep our
dataset small, we will use 40% of the original training data (25,000 images) for
training, 10% for validation, and 10% for testing.
迁移学习在数据集非常小的时候是最有用的。为了保持我们的数据集较小,将使用40%的原始数据 (25,000 images)来训练,10% 为验证集,10%为测试集。
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
"""
Explanation: These are the first 9 images in the training dataset -- as you can see, they're all
different sizes.
以下展示训练集中前9张图片,都不一样。
End of explanation
"""
size = (150, 150)
train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))
test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))
"""
Explanation: We can also see that label 1 is "dog" and label 0 is "cat".
标签为1的是 "dog",0是"cat".
Standardizing the data
数据标准化
Our raw images have a variety of sizes. In addition, each pixel consists of 3 integer
values between 0 and 255 (RGB level values). This isn't a great fit for feeding a
neural network. We need to do 2 things:
Standardize to a fixed image size. We pick 150x150.
Normalize pixel values between -1 and 1. We'll do this using a Normalization layer as
part of the model itself.
原始图片有着不同的大小。此外,每个像素点包含3个0-255的整型数值(RGB)。不便于直接放入神经网络训练,我们需要做一下两件事:
标准化每张图片到固定大小。本例子选择150*150
规范化像素值到-1到1之间,我们使用Normalization层来实现这个,该层也作为模型的一层。
In general, it's a good practice to develop models that take raw data as input, as
opposed to models that take already-preprocessed data. The reason being that, if your
model expects preprocessed data, any time you export your model to use it elsewhere
(in a web browser, in a mobile app), you'll need to reimplement the exact same
preprocessing pipeline. This gets very tricky very quickly. So we should do the least
possible amount of preprocessing before hitting the model.
通常来说,相对使用已经处理好的数据作为输入,使用原始数据作为输入input来开发模型是很好的做法。
原因是在不同应用环境(app,浏览器),我们不必重新实现预处理过程。
Here, we'll do image resizing in the data pipeline (because a deep neural network can
only process contiguous batches of data), and we'll do the input value scaling as part
of the model, when we create it.
这里,我们将在数据pipeline中实现图像resizing大小调整(因为深度神经网络只能处理了连续小批的数据),
并且将把输入数据缩放作为模型的一部分。
Let's resize images to 150x150:
调整图像大小为150x150:
End of explanation
"""
batch_size = 32
train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)
validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)
test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)
"""
Explanation: Besides, let's batch the data and use caching & prefetching to optimize loading speed.
此外,把数据准备成batch小批,并使用缓冲和预读取来优化读取速度。
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
data_augmentation = keras.Sequential(
[
layers.experimental.preprocessing.RandomFlip("horizontal"),
layers.experimental.preprocessing.RandomRotation(0.1),
]
)
"""
Explanation: Using random data augmentation
使用随机数据增强
When you don't have a large image dataset, it's a good practice to artificially
introduce sample diversity by applying random yet realistic transformations to
the training images, such as random horizontal flipping or small random rotations. This
helps expose the model to different aspects of the training data while slowing down
overfitting.
当没有大的数据集时,人工地通过使用随机但现实的变换原始训练数据,从而引入样本的多样性是一个好的办法。
例如随机水平翻转或小的随机旋转。这有助于把训练数据的不同方面暴露给模型,从而减慢过学习。
End of explanation
"""
import numpy as np
for images, labels in train_ds.take(1):
plt.figure(figsize=(10, 10))
first_image = images[0]
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
augmented_image = data_augmentation(
tf.expand_dims(first_image, 0), training=True
)
plt.imshow(augmented_image[0].numpy().astype("int32"))
plt.title(int(labels[i]))
plt.axis("off")
"""
Explanation: Let's visualize what the first image of the first batch looks like after various random
transformations:
下面可视化第一个batch中的第一张图片随机变换后的样子:
End of explanation
"""
base_model = keras.applications.Xception(
weights="imagenet", # Load weights pre-trained on ImageNet.
input_shape=(150, 150, 3),
include_top=False,
) # Do not include the ImageNet classifier at the top.
# Freeze the base_model
base_model.trainable = False
# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs) # Apply random data augmentation
# Pre-trained Xception weights requires that input be normalized
# from (0, 255) to a range (-1., +1.), the normalization layer
# does the following, outputs = (inputs - mean) / sqrt(var)
norm_layer = keras.layers.experimental.preprocessing.Normalization()
mean = np.array([127.5] * 3)
var = mean ** 2
# Scale inputs to [-1, +1]
x = norm_layer(x)
norm_layer.set_weights([mean, var])
# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x) # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary()
"""
Explanation: Build a model
构建模型
Now let's built a model that follows the blueprint we've explained earlier.
现在开始构建模型,如前面所述。
Note that:
We add a Normalization layer to scale input values (initially in the [0, 255]
range) to the [-1, 1] range.
We add a Dropout layer before the classification layer, for regularization.
We make sure to pass training=False when calling the base model, so that
it runs in inference mode, so that batchnorm statistics don't get updated
even after we unfreeze the base model for fine-tuning.
注意:
我们需要增加一个Normalization层把输入数值缩放到[-1, 1]
在分类层中增加Dropout层进行规范化
调用基础模型时确保传入training=False,因为此刻是推理模式批量。这样规范化统计量不会被更新,即使当我们解冻基础模型进行微调时也一样。
End of explanation
"""
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
"""
Explanation: Train the top layer
训练最上面的层
End of explanation
"""
# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary()
model.compile(
optimizer=keras.optimizers.Adam(1e-5), # Low learning rate
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()],
)
epochs = 10
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
"""
Explanation: Do a round of fine-tuning of the entire model
微调整个模型一轮
Finally, let's unfreeze the base model and train the entire model end-to-end with a low
learning rate.
最后,解冻基础模型并且使用较小的学习率端到端训练整个模型
Importantly, although the base model becomes trainable, it is still running in
inference mode since we passed training=False when calling it when we built the
model. This means that the batch normalization layers inside won't update their batch
statistics. If they did, they would wreck havoc on the representations learned by the
model so far.
重要的是,尽管基础模型变得可以训练了,它任然是运行在推理模式,因为构建模型时传入了 training=False 。
内部的batch normalization层将不会更新batch统计量。如果更新了,将会破坏模型到目前为止所学习到的表示。
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive/09_sequence/labs/sinewaves.ipynb | apache-2.0 | # You must update BUCKET, PROJECT, and REGION to proceed with the lab
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
SEQ_LEN = 50
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['SEQ_LEN'] = str(SEQ_LEN)
os.environ['TFVERSION'] = '1.15'
"""
Explanation: <h1> Time series prediction, end-to-end </h1>
This notebook illustrates several models to find the next value of a time-series:
<ol>
<li> Linear
<li> DNN
<li> CNN
<li> RNN
</ol>
End of explanation
"""
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
print(tf.__version__)
import numpy as np
import seaborn as sns
def create_time_series():
freq = (np.random.random()*0.5) + 0.1 # 0.1 to 0.6
ampl = np.random.random() + 0.5 # 0.5 to 1.5
noise = [np.random.random()*0.3 for i in range(SEQ_LEN)] # -0.3 to +0.3 uniformly distributed
x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl + noise
return x
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 5):
sns.tsplot( create_time_series(), color=flatui[i%len(flatui)] ); # 5 series
def to_csv(filename, N):
with open(filename, 'w') as ofp:
for lineno in range(0, N):
seq = create_time_series()
line = ",".join(map(str, seq))
ofp.write(line + '\n')
import os
try:
os.makedirs('data/sines/')
except OSError:
pass
np.random.seed(1) # makes data generation reproducible
to_csv('data/sines/train-1.csv', 1000) # 1000 sequences
to_csv('data/sines/valid-1.csv', 250)
!head -5 data/sines/*-1.csv
"""
Explanation: <h3> Simulate some time-series data </h3>
Essentially a set of sinusoids with random amplitudes and frequencies.
End of explanation
"""
%%bash
DATADIR=$(pwd)/data/sines
OUTDIR=$(pwd)/trained/sines
rm -rf $OUTDIR
gcloud ai-platform local train \
--module-name=sinemodel.task \
--package-path=${PWD}/sinemodel \
-- \
--train_data_path="${DATADIR}/train-1.csv" \
--eval_data_path="${DATADIR}/valid-1.csv" \
--output_dir=${OUTDIR} \
--model=linear --train_steps=10 --sequence_length=$SEQ_LEN
"""
Explanation: <h3> Train model locally </h3>
Make sure the code works as intended.
Please remember to update the "--model=" variable on the last line of the command
You may ignore any tensorflow deprecation warnings.
<b>Note:</b> This step will be complete when you see a message similar to the following:
"INFO : tensorflow :Loss for final step: N.NNN...N"
End of explanation
"""
import shutil
shutil.rmtree('data/sines', ignore_errors=True)
os.makedirs('data/sines/')
np.random.seed(1) # makes data generation reproducible
for i in range(0,10):
to_csv('data/sines/train-{}.csv'.format(i), 1000) # 1000 sequences
to_csv('data/sines/valid-{}.csv'.format(i), 250)
%%bash
gsutil -m rm -rf gs://${BUCKET}/sines/*
gsutil -m cp data/sines/*.csv gs://${BUCKET}/sines
%%bash
for MODEL in linear dnn cnn rnn rnn2 rnnN; do
OUTDIR=gs://${BUCKET}/sinewaves/${MODEL}
JOBNAME=sines_${MODEL}_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=sinemodel.task \
--package-path=${PWD}/sinemodel \
--job-dir=$OUTDIR \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/sines/train*.csv" \
--eval_data_path="gs://${BUCKET}/sines/valid*.csv" \
--output_dir=$OUTDIR \
--train_steps=3000 --sequence_length=$SEQ_LEN --model=$MODEL
done
"""
Explanation: <h3> Cloud AI Platform</h3>
Now to train on Cloud AI Platform with more data.
End of explanation
"""
|
KJE2001/seminars | 02_linear_algebra.ipynb | mit | import numpy as np
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a)) # Prints "<type 'numpy.ndarray'>"
print(a.shape) # Prints "(3,)"
"""
Explanation: <figure>
<IMG SRC="gfx/Logo_norsk_pos.png" WIDTH=100 ALIGN="right">
</figure>
Linear algebra crash course
Roberto Di Remigio, Luca Frediani
This is a very brief and incomplete introduction to a few selected topics in linear algebra.
An extremely well-written book, covering many topics in mathematics heavily used in physics is James Nearing's Mathematical Tools for Physics. The book is freely available online on the author's website.
These notes were translated from the italian version, originally written by Filippo Lipparini and Benedetta Mennucci.
Vector spaces and bases
Definition: Vector space
Let $\mathbb{R}$ be the set of real numbers. Elements of $\mathbb{R}$ will be called scalars, to distinguish them from vectors. We define as a vector space $V$ on the field $\mathbb{R}$ the set of vectors such that:
the sum of two vectors is still a vector. For any pair of vectors $\mathbf{u},\mathbf{v}\in V$ their sum
$\mathbf{w} = \mathbf{u}+\mathbf{v} = \mathbf{v} + \mathbf{u}$ is still an element of $V$.
the product of a vector and a scalar is still a vector. For any $\mathbf{v}\in V$ and for any $\alpha \in \mathbb{R}$
the product $\mathbf{w} = \alpha\mathbf{v}$ is still an element of $V$.
We will write vectors using a boldface font, while we will use normal font for scalars.
Real numbers already are an example of a vector space: the sum of two
real numbers is still a real number, as is their product.
Vectors in the plane are yet another example (when their starting point is the origin of axes):
a sum of two vectors, given by the parallelogram rule, is still a vector in the plane; the product of a vector by a scalar is a vector with direction equal to that of the original vector and magnitude given by the product of the scalar and the magnitude
of the original vector.
How big is a vector space? It is possible to get an intuitive idea of dimension for ``geometric'' vector spaces:
the plane has dimension 2, the space has dimension 3, while the set of real numbers has dimension 1.
Some more definitions will help in getting a more precise idea of what the dimension of a vector space is.
Definition: Linear combination
A linear combination of vectors $\mathbf{v}1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ is the vector obtained by summing
these vectors, possibly multiplied by a scalar:
\begin{equation}
\mathbf{w} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n = \sum{i = 1}^n c_i\mathbf{v}_i
\end{equation}
Definition: Linear dependence
Two non-zero vectors are said to be linearly dependent if and only if (iff) there exist two non-zero scalars $c_1, c_2$
such that:
\begin{equation}
c_1\mathbf{v}1 + c_2\mathbf{v}_2 = \mathbf{0}
\end{equation}
where $\mathbf{0}$ is the null vector. Equivalently, two vectors are said to be linearly dependent if one can be written as
the other multiplied by a scalar:
\begin{equation}
\mathbf{v}_2 = -\frac{c_1}{c_2} \mathbf{v}_1
\end{equation}
Conversely, two vectors are said to be _linearly independent when the following is true:
\begin{equation}
c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0} \quad\Leftrightarrow\quad c_1=c_2 = 0
\end{equation}
Consider now two linearly dependent vectors in the plane. What does this imply? As one can be expressed as a scalar multiplied by the other,
they have the same direction and magnitude differing by a factor.
The notions above are easily generalized to more than a pair of vectors: vectors $\mathbf{v}1, \mathbf{v}_2, \ldots\mathbf{v}_n$
are linearly independent iff the only linear combination that gives the null vector is the one where all the coefficients are zero:
\begin{equation}
\sum{i=1}^n c_n\mathbf{v}_n = \mathbf{0} \quad\Leftrightarrow\quad c_i = 0 \forall i
\end{equation}
How many linearly independent vectors are there in the plane? As our intuition told us before: only 2! Let us think about the unit vectors
$\mathbf{i}$ and $\mathbf{j}$, i.e. the orthogonal vectors of unit magnitude we can draw on the Cartesian $x$ and $y$ axes. All the vectors in
the plane can be written as a linear combination of those two:
\begin{equation}
\mathbf{r} = x\mathbf{i} + y\mathbf{j}
\end{equation}
We are already used to specify a vector by means of its components $x$ and $y$. So, by taking linear combinations of the two unit vectors
we are able to generate all the vectors in the plane. Doesn't this look familiar? Think about the set of complex numbers $\mathbb{C}$
and the way we represent it.
It's now time for some remarks:
1. the orthogonal, unit vectors in the plane are linearly independent;
2. they are the minimal number of vectors needed to span, i.e. to generate, all the other vectors in the plane.
Definition: Basis
Given the vector space $V$ its basis is defined as the minimal set of linearly independent vectors
$\mathcal{B} = \lbrace \mathbf{e}1,\mathbf{e}_2, \ldots,\mathbf{e}_n \rbrace$ spanning all the vectors in the space.
This means that any suitable linear combination of vectors in $\mathcal{B}$ generates a vector $\mathbf{v}\in V$:
\begin{equation}
\mathbf{v} = c_1\mathbf{e}_1 + c_2\mathbf{e}_2 + \ldots + c_n\mathbf{e}_n = \sum{i=1}^n c_n \mathbf{e}n
\end{equation}
the coefficients in the linear combination are called _components or coordinates of the vector $\mathbf{v}$ in the given
basis $\mathcal{B}$.
Turning back to our vectors in the plane, the set $\lbrace \mathbf{i}, \mathbf{j} \rbrace$ is a basis for the plane and that
that $x$ and $y$ are the components of the vector $\mathbf{r}$.
We are now ready to answer the question: how big is a vector space?
Defintion: Dimension of a vector space
The dimension of a vector space is the number of vectors making up its basis.
Vectors using NumPy
We have already seen how to use NumPy to handle quantities that are vector-like. For example, when we needed to plot a function with matplotlib, we first generated a linearly spaced vector of $N$ points inside an interval and then evaluated the function in those points.
Vectors, matrices and objects with higher dimensions are all represented as arrays.
An array is a homogeneous grid of values that is indexed by a tuple of nonnegative integers.
There are two important terms related to NumPy arrays:
1. the rank of an array is the number of dimensions,
2. the shape is a tuple of nonnegative integers, giving the size of the array along each dimension.
Let's see what that means by starting with the simplest array: a one-dimensional array.
End of explanation
"""
print(a[0], a[1], a[2]) # Prints "1 2 3"
a[0] = 5 # Change an element of the array
print(a) # Prints "[5, 2, 3]"
"""
Explanation: As you can see, we can easily access all this information. One-dimensional arrays in NumPy can be used to represent vectors, while two-dimensional arrays can be used to represent matrices.
a = np.array([1, 2, 3]) is equivalent to the vector:
\begin{equation}
\mathbf{a} = \begin{pmatrix}
1 \
2 \
3
\end{pmatrix}
\end{equation}
The components of the vector can be accessed using so-called subscript notation, as in the following code snippet:
End of explanation
"""
a = np.zeros(10) # Create an array of all zeros
print(a)
b = np.ones(10) # Create an array of all ones
print(b) # Prints "[[ 1. 1.]]"
c = np.full(10, 7) # Create a constant array
print(c) # Prints "[[ 7. 7.]
# [ 7. 7.]]"
e = np.random.random(10) # Create an array filled with random values
print(e)
"""
Explanation: index counting in Python starts from 0 not from 1! As you can see, subscript notation can also be used to modify the value at a given index.
There a number of predefined functions to create arrays with certain properties.
We have already met linspace and here is a list of some additional functions:
End of explanation
"""
a = np.random.random(15) # Create an array of rank 15
print(a.shape)
print(a)
# Select elements 10, 11 and 15 of the array
print(a[9], a[10], a[14])
# Now take a slice: all elements, but the first and second
print('Taking a slice!')
b = a[2:]
print(b)
# And notice how the rank changed!
print('We have removed two elements!')
print(b.shape)
print('Take another slice: all elements in between the third and tenth')
c = a[2:11]
print(c)
"""
Explanation: NumPy supports indexing arrays via slicing like Matlab does. While indexing via a single integer number will return the element at that position of the array, i.e. a scalar, slicing will return another array with fewer dimensions. For example:
End of explanation
"""
a = np.full(10, 7.)
b = np.full(10, 4.)
print(a)
print(b)
print('Sum of a and b')
print(a + b)
print('Difference of a and b')
print(a - b)
print('Elementwise product of a and b')
print(a * b)
print('Elementwise division of a and b')
print(a / b)
print('Elementwise square root of b')
print(np.sqrt(b))
"""
Explanation: As you can see, we can quite easily select just parts of an array and either create new, smaller arrays or just obtain the value at a certain position.
Algebraic operations and functions are also defined on arrays. The definitions for addition and subtraction are obvious. Those for multiplication, division and other mathematical functions are less obvious: in NumPy they are always performed element-wise.
The following example will clarify.
End of explanation
"""
|
rafburzy/Statistics | 05_hypothesis_testing.ipynb | mit | # importing required modules
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# runing the functions script
%run stats_func.py
# loading the iris dataset
df = pd.read_csv('iris.csv')
df.head()
# for further analysis sepal length and sepal width will be used
sepalLength = df['SepalLengthCm'].values
sepalWidth = df['SepalWidthCm'].values
# the plot above is for all iris spieces, which can be listed via:
df['Species'].unique()
# the remaining analysis will focus only on Iris-setosa
sepalLengthSetosa = np.array(df['SepalLengthCm'][df['Species'] == 'Iris-setosa'])
sepalWidthSetosa = np.array(df['SepalWidthCm'][df['Species'] == 'Iris-setosa'])
sepalLengthVirginica = np.array(df['SepalLengthCm'][df['Species'] == 'Iris-virginica'])
"""
Explanation: Hypothesis testing
End of explanation
"""
(np.mean(sepalLengthSetosa), np.mean(sepalLengthVirginica))
for i in range(50):
# Generate permutation samples
perm_sample_1, perm_sample_2 = permutation_sample(sepalLengthSetosa, sepalLengthVirginica)
# Compute ECDFs
x_1, y_1 = ecdf(perm_sample_1)
x_2, y_2 = ecdf(perm_sample_2)
# Plot ECDFs of permutation sample
_ = plt.plot(x_1, y_1, marker='.', linestyle='none',
color='red', alpha=0.02)
_ = plt.plot(x_2, y_2, marker='.', linestyle='none',
color='blue', alpha=0.02)
# Create and plot ECDFs from original data
x_1, y_1 = ecdf(sepalLengthSetosa)
x_2, y_2 = ecdf(sepalLengthVirginica)
_ = plt.plot(x_1, y_1, marker='.', linestyle='none', color='red', label='Sepal Length - setosa')
_ = plt.plot(x_2, y_2, marker='.', linestyle='none', color='blue', label='Sepal Length - virginica')
# Label axes, set margin, and show plot
plt.margins(0.02)
_ = plt.xlabel('sepal lengths and widths [cm]')
_ = plt.ylabel('ECDF')
plt.legend();
"""
Explanation: Samples permutation
Special function is defined to mix the data (that should be very similar):
def permutation_sample(data1, data2):
"""Generate a permutation sample from two data sets."""
# Concatenate the data sets: data
data = np.concatenate((data1, data2))
# Permute the concatenated array: permuted_data
permuted_data = np.random.permutation(data)
# Split the permuted array into two: perm_sample_1, perm_sample_2
perm_sample_1 = permuted_data[:len(data1)]
perm_sample_2 = permuted_data[len(data1):]
return perm_sample_1, perm_sample_2
End of explanation
"""
# definition of the difference of means
def diff_of_means(data_1, data_2):
"""Difference in means of two arrays."""
# The difference of means of data_1, data_2: diff
diff = np.mean(data_1) - np.mean(data_2)
return diff
# empirical difference of means
empirical_diff_means = diff_of_means(sepalLengthSetosa, sepalLengthVirginica)
empirical_diff_means
# permutation replicates
perm_replicates = draw_perm_reps(sepalLengthSetosa, sepalLengthVirginica, diff_of_means, size=10000)
# computing p-value
p = np.sum(perm_replicates <= empirical_diff_means) / len(perm_replicates)
p
"""
Explanation: Comparison of means
for this purpose a new function is defined - draw permutation replicates, that uses permutation samples function inside:
def draw_perm_reps(data_1, data_2, func, size=1):
"""Generate multiple permutation replicates."""
# Initialize array of replicates: perm_replicates
perm_replicates = np.empty(size)
for i in range(size):
# Generate permutation sample
perm_sample_1, perm_sample_2 = permutation_sample(data_1, data_2)
# Compute the test statistic
perm_replicates[i] = func(perm_sample_1, perm_sample_2)
return perm_replicates
For this example difference of means is looked at
End of explanation
"""
|
ShubhamDebnath/Coursera-Machine-Learning | Course 5/Neural machine translation with attention v4.ipynb | mit | from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Neural Machine Translation
Welcome to your first programming assignment for this week!
You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models.
This notebook was produced together with NVIDIA's Deep Learning Institute.
Let's load all the packages you will need for this assignment.
End of explanation
"""
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
"""
Explanation: 1 - Translating human readable dates into machine readable dates
The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task.
The network will input a date written in a variety of possible formats (e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987") and translate them into standardized, machine readable dates (e.g. "1958-08-29", "1968-03-30", "1987-06-24"). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
<!--
Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !-->
1.1 - Dataset
We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
End of explanation
"""
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
"""
Explanation: You've loaded:
- dataset: a list of tuples of (human readable date, machine readable date)
- human_vocab: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
- machine_vocab: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with human_vocab.
- inv_machine_vocab: the inverse dictionary of machine_vocab, mapping from indices back to characters.
Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
End of explanation
"""
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
"""
Explanation: You now have:
- X: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via human_vocab. Each date is further padded to $T_x$ values with a special character (< pad >). X.shape = (m, Tx)
- Y: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in machine_vocab. You should have Y.shape = (m, Ty).
- Xoh: one-hot version of X, the "1" entry's index is mapped to the character thanks to human_vocab. Xoh.shape = (m, Tx, len(human_vocab))
- Yoh: one-hot version of Y, the "1" entry's index is mapped to the character thanks to machine_vocab. Yoh.shape = (m, Tx, len(machine_vocab)). Here, len(machine_vocab) = 11 since there are 11 characters ('-' as well as 0-9).
Lets also look at some examples of preprocessed training examples. Feel free to play with index in the cell below to navigate the dataset and see how source/target dates are preprocessed.
End of explanation
"""
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
"""
Explanation: 2 - Neural machine translation with attention
If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step.
2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width:500;height:500px;"> <br>
</td>
<td>
<img src="images/attn_mechanism.png" style="width:500;height:500px;"> <br>
</td>
</table>
<caption><center> Figure 1: Neural machine translation with attention</center></caption>
Here are some properties of the model that you may notice:
There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes before the attention mechanism, we will call it pre-attention Bi-LSTM. The LSTM at the top of the diagram comes after the attention mechanism, so we will call it the post-attention LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps.
The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN output activations $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ does will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.
We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM.
The diagram on the right uses a RepeatVector node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then Concatenation to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t'}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use RepeatVector and Concatenation in Keras below.
Lets implement this model. You will start by implementing two functions: one_step_attention() and model().
1) one_step_attention(): At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), one_step_attention() will compute the attention weights ($[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):
$$context^{<t>} = \sum_{t' = 0}^{T_x} \alpha^{<t,t'>}a^{<t'>}\tag{1}$$
Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$.
2) model(): Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls one_step_attention() $T_y$ times (for loop). At each iteration of this loop, it gives the computed context vector $c^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$.
Exercise: Implement one_step_attention(). The function model() will call the layers in one_step_attention() $T_y$ using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initiaiize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:
1. Define the layer objects (as global variables for examples).
2. Call these objects when propagating the input.
We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: RepeatVector(), Concatenate(), Dense(), Activation(), Dot().
End of explanation
"""
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attetion) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator([a, s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas, a])
### END CODE HERE ###
return context
"""
Explanation: Now you can use these layers to implement one_step_attention(). In order to propagate a Keras tensor object X through one of these layers, use layer(X) (or layer([X,Y]) if it requires multiple inputs.), e.g. densor(X) will propagate X through the Dense(1) layer defined above.
End of explanation
"""
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
"""
Explanation: You will be able to check the expected output of one_step_attention() after you've coded the model() function.
Exercise: Implement model() as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in model().
End of explanation
"""
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences = True))(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context, initial_state = [s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs = [X, s0, c0], outputs = outputs)
### END CODE HERE ###
return model
"""
Explanation: Now you can use these layers $T_y$ times in a for loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
Propagate the input into a Bidirectional LSTM
Iterate for $t = 0, \dots, T_y-1$:
Call one_step_attention() on $[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.
Give $context^{<t>}$ to the post-attention LSTM cell. Remember pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM using initial_state= [previous hidden state, previous cell state]. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
Apply a softmax layer to $s^{<t>}$, get the output.
Save the output by adding it to the list of outputs.
Create your Keras model instance, it should have three inputs ("inputs", $s^{<0>}$ and $c^{<0>}$) and output the list of "outputs".
End of explanation
"""
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
"""
Explanation: Run the following cell to create your model.
End of explanation
"""
model.summary()
"""
Explanation: Let's get a summary of the model to check if it matches the expected output.
End of explanation
"""
### START CODE HERE ### (≈2 lines)
opt = Adam(lr = 0.005, beta_1 = 0.9, beta_2 = 0.999, decay = 0.01)
model.compile(opt, loss = 'categorical_crossentropy', metrics = ['accuracy'])
### END CODE HERE ###
"""
Explanation: Expected Output:
Here is the summary you should see
<table>
<tr>
<td>
**Total params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Trainable params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Non-trainable params:**
</td>
<td>
0
</td>
</tr>
<tr>
<td>
**bidirectional_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**repeat_vector_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**concatenate_1's output shape **
</td>
<td>
(None, 30, 128)
</td>
</tr>
<tr>
<td>
**attention_weights's output shape **
</td>
<td>
(None, 30, 1)
</td>
</tr>
<tr>
<td>
**dot_1's output shape **
</td>
<td>
(None, 1, 64)
</td>
</tr>
<tr>
<td>
**dense_3's output shape **
</td>
<td>
(None, 11)
</td>
</tr>
</table>
As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics your are want to use. Compile your model using categorical_crossentropy loss, a custom Adam optimizer (learning rate = 0.005, $\beta_1 = 0.9$, $\beta_2 = 0.999$, decay = 0.01) and ['accuracy'] metrics:
End of explanation
"""
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
"""
Explanation: The last step is to define all your inputs and outputs to fit the model:
- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.
- You need to create s0 and c0 to initialize your post_activation_LSTM_cell with 0s.
- Given the model() you coded, you need the "outputs" to be a list of 11 elements of shape (m, T_y). So that: outputs[i][0], ..., outputs[i][Ty] represent the true labels (characters) corresponding to the $i^{th}$ training example (X[i]). More generally, outputs[i][j] is the true label of the $j^{th}$ character in the $i^{th}$ training example.
End of explanation
"""
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
"""
Explanation: Let's now fit the model and run it for one epoch.
End of explanation
"""
model.load_weights('models/model.h5')
"""
Explanation: While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples:
<img src="images/table.png" style="width:700;height:200px;"> <br>
<caption><center>Thus, dense_2_acc_8: 0.89 means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>
We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
End of explanation
"""
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
"""
Explanation: You can now see the results on new examples.
End of explanation
"""
model.summary()
"""
Explanation: You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.
3 - Visualizing Attention (Optional / Ungraded)
Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.
Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:
<img src="images/date_attention.png" style="width:600;height:300px;"> <br>
<caption><center> Figure 8: Full Attention Map</center></caption>
Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018."
3.1 - Getting the activations from the network
Lets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
To figure out where the attention values are located, let's start by printing a summary of the model .
End of explanation
"""
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
"""
Explanation: Navigate through the output of model.summary() above. You can see that the layer named attention_weights outputs the alphas of shape (m, 30, 1) before dot_2 computes the context vector for every time step $t = 0, \ldots, T_y-1$. Lets get the activations from this layer.
The function attention_map() pulls out the attention values from your model and plots them.
End of explanation
"""
|
learn1do1/learn1do1.github.io | python_notebooks/Watershed Problem.ipynb | mit | heights = [[1,2,3,4],[1,2,1,0],[1,0,1,0],[0,0,1,4]]
class position():
def __init__(self, coordinates, height):
self.coordinates = coordinates
self.height = height
def __repr__(self):
return ','.join([str(self.coordinates), str(self.height)])
"""
Explanation: Watershed Problem
Given a 2D surface in 3D space, find the distinct watersheds in it.
In other words, if you have a three-dimensional landcsape, you want to find the different regions where all rainfall flows down to the same final location.
Your input will be a n by n matrix of integers, where the high integers denote peaks and hills while low integers denote valleys.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.title('heights')
plt.show()
"""
Explanation: To kick us off, I will draw this landscape, using a matplotlib heatmap function that shows the highest altitude in red, down to the lowest altitudes in blue:
End of explanation
"""
watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))]
"""
Explanation: We have to make a decision. Should the water always flow down the steepest slope? Let's assume yes, even tho it may upset Ian Malcolm from Jurassic Park:
Should it pool together when the slope is 0? This describes the 3 adjacent blocks of height Zero in the heights matrix, drawn above. I'd argue yes. In order to guarantee that adjacent blocks of equal height pool together, I will initialize the slopes array to have a slope of -1.
End of explanation
"""
import operator
def initialize_positions(heights):
positions = []
for i in range(len(heights)):
for j in range(len(heights)):
positions.append(position((i,j), heights[i][j]))
positions.sort(key=operator.attrgetter('height'))
return positions
positions = initialize_positions(heights)
"""
Explanation: The watershed matrix stores an integer for each cell. When Cells in that matrix that share the same integer, it means they belong to the same watershed.
The slopes matrix stores the steepest slope that the water can flow in each cell
End of explanation
"""
# Will return all neighbors where the slope to the current position is steeper than we have yet seen.
def flow_up(heights, (i, j)):
up_coordinates = set()
neighbor_coordinates = set()
local_height = heights[i][j]
# look up, down, left, right
neighbor_coordinates.add((max(i - 1, 0),j))
neighbor_coordinates.add((min(i + 1, len(heights) - 1),j))
neighbor_coordinates.add((i,max(j - 1, 0)))
neighbor_coordinates.add((i,min(j + 1, len(heights) - 1)))
for c in neighbor_coordinates:
slope = heights[c[0]][c[1]] - local_height
if slope > slopes[c[0]][c[1]]:
slopes[c[0]][c[1]] = slope
up_coordinates.add(c)
return up_coordinates
def main():
for k, position in enumerate(positions):
if watersheds[position.coordinates[0]][position.coordinates[1]] == None:
new_coordinates = [position.coordinates]
while len(new_coordinates) > 0:
for (i, j) in new_coordinates:
watersheds[i][j] = k
past_coordinates = list(new_coordinates)
new_coordinates = set()
for coordinates in past_coordinates:
new_coordinates.update(flow_up(heights, coordinates))
main()
print watersheds
print slopes
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds')
"""
Explanation: Our strategy is to sort positions from deepest to highest. Starting at the deepest, let's find all adjacent positions that would flow into it. We determine those positions by using the flow_up function. We continue this search from each of the new positions we have just moved up to, until every cell in the slopes array has been visited.
End of explanation
"""
n = 10
heights = [[x] * n for x in range(n)]
watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))]
positions = initialize_positions(heights)
positions.sort(key=operator.attrgetter('height'))
main()
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds')
"""
Explanation: Let's do a simple test of our functions. Let's give it a landscape that looks like a wide staircase and make sure the output is just a single watershed.
End of explanation
"""
import random
heights = [[random.randint(0, n) for x in range(n)] for x in range(n)]
watersheds = [[None] * len(heights) for x in range(len(heights))]
slopes = [[-1] * len(heights) for x in range(len(heights))]
positions = initialize_positions(heights)
main()
plt.figure()
plt.imshow(heights , interpolation='nearest', cmap='jet')
plt.rcParams['figure.figsize'] = (20.0, 8.0)
plt.title('heights')
plt.figure()
plt.imshow(slopes , interpolation='nearest', cmap='jet')
plt.title('slopes')
plt.figure()
plt.imshow(watersheds , interpolation='nearest', cmap='jet')
plt.title('watersheds')
"""
Explanation: It's interesting in this single-watershed case to see how simple the slopes object becomes. Either water is spreading in a flat basin (slope of 0) or it is flowing down the staircase (slope of 1).
Now we can showcase the watershed code on a random input landscape
End of explanation
"""
|
qutip/qutip-notebooks | examples/smesolve-jc-photocurrent.ipynb | lgpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import qutip.settings
from qutip import *
from qutip.ipynbtools import HTMLProgressBar
from matplotlib import rcParams
rcParams['font.family'] = 'STIXGeneral'
rcParams['mathtext.fontset'] = 'stix'
rcParams['font.size'] = '14'
N = 15
w0 = 1.0 * 2 * np.pi
g = 0.2 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.01
kappa = 0.1
ntraj = 150
a = tensor(destroy(N), identity(2))
sm = tensor(identity(N), destroy(2))
H = w0 * a.dag() * a + w0 * sm.dag() * sm + g * (sm * a.dag() + sm.dag() * a)
rho0 = tensor(fock(N, 5), fock(2, 0))
e_ops = [a.dag() * a, a + a.dag(), sm.dag() * sm]
"""
Explanation: Stochastic photo-current detection in a JC model
Mixing stochastic and deterministic master equations
Copyright (C) 2011 and later, Paul D. Nation & Robert J. Johansson
End of explanation
"""
c_ops = [np.sqrt(gamma) * sm] # collapse operator for qubit
sc_ops = [np.sqrt(kappa) * a] # stochastic collapse for resonator
result_ref = mesolve(H, rho0, times, c_ops+sc_ops, e_ops)
result1 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=1, nsubsteps=100,
store_measurement=True,
options=Options(store_states=True))
"""
Explanation: Highly efficient detection
End of explanation
"""
result2 = photocurrent_mesolve(H, rho0, times, c_ops=c_ops, sc_ops=sc_ops, e_ops=e_ops,
ntraj=ntraj, nsubsteps=100,
store_measurement=True,
options=Options(store_states=True),
progress_bar=HTMLProgressBar(),
map_func=parallel_map)
fig, axes = plt.subplots(2, 3, figsize=(16, 8), sharex=True)
axes[0,0].plot(times, result1.expect[0], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[0,0].set_title("Cavity photon number (ntraj = 1)")
axes[0,0].legend()
axes[1,0].plot(times, result2.expect[0], label=r'Stochatic ME (ntraj = %d)' % ntraj, lw=2)
axes[1,0].plot(times, result_ref.expect[0], label=r'Lindblad ME', lw=2)
axes[1,0].set_title("Cavity photon number (ntraj = 10)")
axes[1,0].legend()
axes[0,1].plot(times, result1.expect[2], label=r'Stochastic ME (ntraj = 1)', lw=2)
axes[0,1].plot(times, result_ref.expect[2], label=r'Lindblad ME', lw=2)
axes[0,1].set_title("Qubit excition probability (ntraj = 1)")
axes[0,1].legend()
axes[1,1].plot(times, result2.expect[2], label=r'Stochatic ME (ntraj = %d)' % ntraj, lw=2)
axes[1,1].plot(times, result_ref.expect[2], label=r'Lindblad ME', lw=2)
axes[1,1].set_title("Qubit excition probability (ntraj = %d)" % ntraj)
axes[1,1].legend()
axes[0,2].step(times, dt * np.cumsum(result1.measurement[0].real), lw=2)
axes[0,2].set_title("Cummulative photon detections (ntraj = 1)")
axes[1,2].step(times, dt * np.cumsum(np.array(result2.measurement).sum(axis=0).real) / ntraj, lw=2)
axes[1,2].set_title("Cummulative avg. photon detections (ntraj = %d)" % ntraj)
fig.tight_layout()
"""
Explanation: Run the smesolve solver in parallel by passing the keyword argument map_func=parallel_map:
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Versions
End of explanation
"""
|
hawkdidy/Southwest_Report | Exploring_Domestic_Flights.ipynb | mit | df_flights_2008 = pd.read_csv('/home/hakim/Documents/Airline_delays-/flight_data_historical/2008.csv')
print(df_flights_2008.tail())
df_flights_2008.shape
df_flights_2008.isnull().sum(axis=0)
"""
Explanation: This notebook will be describing an analysis of US domestic flight data. We will first take a dive into a single year, 2008 and do some basic analysis. Then we may expand this into multiple years to try and see trends. Finally we will use a single year to build a predictive model to predict weather or not a flight will be delayed or not.
End of explanation
"""
df = df_flights_2008
df = df[df.Cancelled != 1]
"""
Explanation: Next thing we would like to do is remove cancelled flights. ALthough they can be interesting for other analysis for ours they are not needed. As a side note only 0.02% of flights were cancelled
End of explanation
"""
df = df[df.Diverted != 1]
df.isnull().sum(axis=0)
"""
Explanation: We will also remove diverted flights. For this analysis we would like to only focus on delays.
End of explanation
"""
df = df.drop(['Cancelled', 'Diverted', 'CancellationCode'], axis = 1)
"""
Explanation: What we have left are all non-cancelled diverted flights. Therefore we can remove the columns related to that. Also as we can see by looking at the numbers about 76% percent of flights were on time. That is not great.
End of explanation
"""
conditionsArr = [(df['ArrDelay'] > 10)]
choicesArr = [1]
df['isDelArr'] = np.select(conditionsArr, choicesArr, default = 0)
conditionsDep = [(df['DepDelay'] > 10)]
choicesDep = [1]
df['isDelDep'] = np.select(conditionsDep, choicesDep, default = 0)
ts = df[['isDelArr', 'isDelDep']].sum().to_frame()
ts.columns = ['Count']
sns.barplot(x = ts.index , y = ts.Count);
"""
Explanation: If a flight is more than 10 minutes late for leaving or arriving I have marked it with a boolean variable below.
End of explanation
"""
dforig = pd.value_counts(df.Origin, sort=True).to_frame().reset_index()
dforig.columns = ['Origin','Count']
topdforig = dforig.head(n=10)
topdforig.set_index('Origin',inplace = True)
df = df[df['Origin'].isin(topdforig.index)]
grouped = df.groupby('Origin').sum()
grouped = grouped[['isDelArr','isDelDep']]
grouped['DelArrRa'] = grouped.isDelArr
grouped['DelDepRa'] = grouped.isDelDep
gs = gridspec.GridSpec(3, 1)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
sns.barplot(x = grouped.index, y = grouped.DelArrRa, ax = ax1);
sns.barplot(x = grouped.index, y = grouped.DelDepRa, ax = ax2);
sns.barplot(x = topdforig.index, y = topdforig.Count, ax = ax3);
pd.options.display.float_format = '{:.2f}'.format
df[['ActualElapsedTime', 'ArrDelay', 'DepDelay', 'Distance']].describe()
gs = gridspec.GridSpec(2, 2)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
sns.distplot(df.ActualElapsedTime, ax = ax1);
sns.distplot(df.ArrDelay, ax = ax2);
sns.distplot(df.DepDelay, ax = ax3);
sns.distplot(df.Distance, ax = ax4);
dforig = pd.value_counts(df.Origin, sort=True).to_frame().reset_index()
dforig.columns = ['Origin','Count']
topdforig = dforig.head(n=10)
topdforig.set_index('Origin',inplace = True)
df = df[df['Origin'].isin(topdforig.index)]
dfa = pd.value_counts(df.UniqueCarrier, sort=True).to_frame().reset_index().head(n=10)
dfa.columns = ['Carrier','Count']
dfa.set_index('Carrier', inplace = True)
df = df[df['UniqueCarrier'].isin(dfa.index)]
delay_type_sum = df[['UniqueCarrier','CarrierDelay', 'WeatherDelay', 'NASDelay', 'SecurityDelay', 'LateAircraftDelay']].groupby(df.UniqueCarrier).sum().dropna()
flights = df.groupby('UniqueCarrier').count()
flights = flights[flights.index.isin(delay_type_sum.index)]
flights['Number_Flights'] = flights.Year
flights = flights[[ 'Number_Flights']]
flights.dropna(inplace = True)
delay_per_flight = delay_type_sum[['CarrierDelay', 'WeatherDelay', 'NASDelay', 'SecurityDelay', 'LateAircraftDelay']].div(flights.Number_Flights, axis='index')
delay_per_flight.describe()
gs = gridspec.GridSpec(2, 3)
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax5 = plt.subplot(gs[4])
sns.pointplot(x= delay_per_flight.index, y='CarrierDelay', data=delay_per_flight, ax = ax1)
sns.pointplot(x= delay_per_flight.index, y='WeatherDelay', data=delay_per_flight, ax = ax2)
sns.pointplot(x= delay_per_flight.index, y='NASDelay', data=delay_per_flight, ax = ax3)
sns.pointplot(x= delay_per_flight.index, y='SecurityDelay', data=delay_per_flight, ax = ax4);
sns.pointplot(x= delay_per_flight.index, y='LateAircraftDelay', data=delay_per_flight, ax = ax5);
dfWN = df[df['UniqueCarrier']=='WN']
delayed_per_month = dfWN[['isDelArr', 'Month']].groupby('Month').sum()
delayed_ratio_pm = delayed_per_month / dfWN[['isDelArr', 'Month']].groupby('Month').count()
sns.barplot(x = delayed_per_month.index, y = delayed_ratio_pm.isDelArr);
delayed_per_day = dfWN[['isDelArr', 'DayOfWeek']].groupby('DayOfWeek').sum()
delayed_ratio_pd = delayed_per_day / dfWN[['isDelArr', 'DayOfWeek']].groupby('DayOfWeek').count()
sns.barplot(x = delayed_per_day.index, y = delayed_ratio_pd.isDelArr);
"""
Explanation: Now we will only focus on the top 10 airports. They most likely generate the most revenue.
End of explanation
"""
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn import svm
from matplotlib import pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn import cross_validation
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
#defining our target variable
y = dfWN.isDelArr
X = dfWN[['Month','DayofMonth', 'DayOfWeek', 'CRSDepTime', 'CRSArrTime', 'TailNum', 'ActualElapsedTime', 'Origin', 'Dest', 'Distance']]
for col in ['Month','DayofMonth', 'DayOfWeek']:
X[col] = X[col].astype('category')
X_dum = pd.get_dummies(X)
X_train, X_test, y_train, y_test = train_test_split(X_dum, y, test_size=0.2)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
model = LogisticRegression()
model = model.fit(X_train, y_train)
# check the accuracy on the training set
print(model.score(X_train, y_train),
model.score(X_test, y_test))
"""
Explanation: Now we will start building a predictive model to predict if a plane will be delayed or not. We will start with a simple logistic regression. If it is not good enough we will move onto more complicated algorithms like SVM.
End of explanation
"""
X_dum[['DepTime_2', 'ArrTime_2', 'ActualElapsedTime_2','Distance_2']] = np.log(X_dum[['DepTime', 'ArrTime', 'ActualElapsedTime','Distance']])
X_train, X_test, y_train, y_test = train_test_split(X_dum, y, test_size=0.3)
#model = LogisticRegression()
#model = model.fit(X_train, y_train)
# check the accuracy on the training set
#print(model.score(X_train, y_train),
#model.score(X_test, y_test))
clf = svm.SVC(kernel='linear', C=1, random_state=0)
clf = clf.fit(X_train.head(n=10000), y_train.head(n=10000))
clf.score(X_test, y_test)
"""
Explanation: The test and training errors are very similar this means the model is most likely bias. For this model since 30% of flights are normally delayed the accuracy is really only 4% better than random guessing. That is not very good. We need to play around a bit. We will try some transformations next
End of explanation
"""
clf = RandomForestClassifier(n_jobs=2, random_state=0, max_features = None, oob_score=True)
clf = clf.fit(X_train, y_train)
print(clf.oob_score_,clf.score(X_test, y_test))
clf = MLPClassifier(activation='tanh', alpha=1e-05, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False,
epsilon=1e-08, hidden_layer_sizes=(100, 4), learning_rate='constant',
learning_rate_init=0.001, max_iter=1000, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=1, shuffle=True,
solver='adam', tol=0.0001, validation_fraction=0.1, verbose=False,
warm_start=False)
clf = clf.fit(X_train, y_train)
clf.score(X_test,y_test)
import sqlite3
conn = sqlite3.connect("/home/hakim/Documents/Airline_delays-/flight_data_historical/delayed.sqlite3")
flightcount = pd.read_sql_query("select distinct Year ,count(Origin) as number_of_flights from delayed where Cancelled != 1 group by Year;", conn)
flightcount.set_index('Year', inplace = True)
fc = sns.pointplot(x = flightcount.index, y = flightcount.number_of_flights)
fc.set_xticklabels(flightcount.index, rotation=90);
flightscancelled = pd.read_sql_query("select distinct Year ,sum(Cancelled) as canceled_flights from delayed group by Year;", conn)
flightscancelled.set_index('Year', inplace = True)
delflcount = pd.read_sql_query("select distinct Year ,count(Origin) as number_of_flights from delayed where Cancelled != 1 group by Year;", conn)
fcl = sns.pointplot(x = flightscancelled.index, y = (flightscancelled.canceled_flights /flightcount.number_of_flights) * 100)
fcl.set_xticklabels(flightscancelled.index, rotation=90);
flightsdiverted = pd.read_sql_query("select distinct Year ,sum(Diverted) as Diverted from delayed group by Year;", conn)
flightsdiverted.set_index('Year', inplace = True)
dd = sns.pointplot(x = flightsdiverted.index, y = flightsdiverted.Diverted /flightcount.number_of_flights)
dd.set_xticklabels(flightsdiverted.index, rotation=90);
delayarr = pd.read_sql_query("select distinct year,sum(ArrDelay) as ArrDelay from delayed where ArrDelay >= 30 group by year;", conn)
delayarr.set_index('Year', inplace = True)
da = sns.pointplot(x = delayarr.index, y = delayarr.ArrDelay /flightcount.number_of_flights)
da.set_xticklabels(delayarr.index, rotation=90);
delaytrendweather = pd.read_sql_query("select distinct year,sum(WeatherDelay) as weatherdelay from delayed group by year;", conn)
delaytrendweather.set_index('Year', inplace = True)
dw = sns.pointplot(x = delaytrendweather.index, y = delaytrendweather.weatherdelay)
dw.set_xticklabels(delaytrendweather.index, rotation=90);
delaytrendsecuirty = pd.read_sql_query("select sum(SecurityDelay) from delayed group by year;", conn)
delaytrendsecuirty.columns=['SecurityDelay']
sns.pointplot(x = delaytrendsecuirty.index, y = delaytrendsecuirty.SecurityDelay);
delaytrendlateflight= pd.read_sql_query("select sum(LateAircraftDelay) from delayed group by year;", conn)
delaytrendlateflight.columns =['LateAircraftDelay']
sns.pointplot(x = delaytrendlateflight.index, y = delaytrendlateflight.LateAircraftDelay);
topar = pd.read_sql_query("select Year, Origin, (select count(Origin) as count_origin from (select count(Origin) from delayed as b where b.Year = a.Year limit 1) ) as count_origin from delayed as a group by Year;", conn)
"""
Explanation: Transformations did not lead to better results so we will move on to a more complicated model like SVM.
End of explanation
"""
|
srcole/qwm | misc/Power spectra of nonstationary rhythms.ipynb | mit | import numpy as np
import neurodsp
%matplotlib inline
import matplotlib.pyplot as plt
np.random.seed(0)
"""
Explanation: It was once claimed
Alpha rhythms have nonzero power only at an alpha-band frequency and its higher harmonics
However, this assumes the alpha rhythm is stationary, which is often not true in neural signals in which the oscillations are bursting and have variable amplitudes, periods, and shapes.
This notebook shows how changes in the rhythms affect the power spectra.
End of explanation
"""
freq = 10
T = 100
Fs = 1000
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': 0,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': 0}
prob_leave_burst = .2
loop_prob_enter_burst = [.1, .5, 1]
loop_oscs = []
for p_enter in loop_prob_enter_burst:
if p_enter == 1:
p_leave = 0
else:
p_leave = prob_leave_burst
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=p_enter,
prob_leave_burst=p_leave)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_prob_enter_burst[i])
plt.legend(title='burst prevalence')
"""
Explanation: 1. Effect of bursting changes on PSD
More bursting: more broadband power
Bursting to any extent will significantly increase non-harmonic power
End of explanation
"""
freq = 10
T = 100
Fs = 1000
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': None,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': 0}
prob_enter_burst = 1
prob_leave_burst = 0
loop_period_stds = [0, 3, 15]
loop_oscs = []
for per_std in loop_period_stds:
cycle_features_use['period_std'] = per_std
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=prob_enter_burst,
prob_leave_burst=prob_leave_burst)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_period_stds[i])
plt.legend(title='period st. dev.')
"""
Explanation: 2. Effect of period changes on PSD
End of explanation
"""
cycle_features_use = {'amp_mean': 1, 'amp_burst_std': 0, 'amp_std': 0,
'period_mean': 100,
'period_burst_std': 0,
'period_std': 0,
'rdsym_mean': .3, 'rdsym_burst_std': 0, 'rdsym_std': None}
prob_enter_burst = 1
prob_leave_burst = 0
loop_rdsym_stds = [0, .01, .07]
loop_oscs = []
for rdsym_std in loop_rdsym_stds:
cycle_features_use['rdsym_std'] = rdsym_std
osc = neurodsp.sim_bursty_oscillator(freq, T, Fs,
cycle_features = cycle_features_use,
prob_enter_burst=prob_enter_burst,
prob_leave_burst=prob_leave_burst)
loop_oscs.append(osc)
t = np.arange(0,T,1/Fs)
tidx = t <= 2
plt.figure(figsize=(12,3))
for i in range(len(loop_oscs)):
plt.plot(t[tidx], loop_oscs[i][tidx] + i*2)
plt.xlim((0,2))
plt.xlabel('Time (s)')
plt.ylabel('Voltage')
fs, psds = [], []
for i in range(len(loop_oscs)):
f, psd = neurodsp.spectral.psd(loop_oscs[i], Fs)
fs.append(f)
psds.append(psd)
plt.figure(figsize=(10,5))
for i in range(len(loop_oscs)):
fidx = fs[i] <= 100
plt.semilogy(fs[i][fidx], psds[i][fidx],
label=loop_rdsym_stds[i])
plt.legend(title='rdsym st. dev.')
"""
Explanation: 3. Effect of symmetry changes on PSD
End of explanation
"""
|
vpoggi/catalogue_toolkit | notebooks/Homogenisation.ipynb | agpl-3.0 | parser = ISFReader("inputs/isc_test_catalogue_isf.txt",
selected_origin_agencies=["ISC", "GCMT", "HRVD", "NEIC", "EHB", "BJI"],
selected_magnitude_agencies=["ISC", "GCMT", "HRVD", "NEIC", "BJI"])
catalogue = parser.read_file("ISC_DB1", "ISC Global M >= 5")
print "Catalogue contains: %d events" % catalogue.get_number_events()
"""
Explanation: Load in Catalogue - Limit to ISC, GCMT/HRVD, EHB, NEIC, BJI
End of explanation
"""
origin_rules = [
("2005/01/01 - 2005/12/31", ['EHB', 'ISC', 'NEIC', 'GCMT', 'HRVD', 'BJI']),
("2006/01/01 - 2007/01/01", ['ISC', 'EHB', 'NEIC', 'BJI', 'GCMT', 'HRVD'])
]
"""
Explanation: Define Rule Sets
The catalogue covers the years 2005/06. To illustrate how to apply time variable hierarchies we consider two set of rules:
For the origin the order of preference is:
(For 2005): EHB, ISC, NEIC, GCMT/HRVD, BJI
(For 2006): ISC, EHB, NEIC, BJI, GCMT/HRVD
End of explanation
"""
def gcmt_hrvd_mw(magnitude):
"""
For Mw recorded by GCMT take the value with no uncertainty
"""
return magnitude
def gcmt_hrvd_mw_sigma(magnitude):
"""
No additional uncertainty
"""
return 0.0
"""
Explanation: Magnitude Rules
GCMT/HRVD
End of explanation
"""
def neic_mw(magnitude):
"""
If Mw reported by NEIC,
"""
return magnitude
def neic_mw_sigma(magnitude):
"""
Uncertainty of 0.11 units
"""
return 0.11
def scordillis_ms(magnitude):
"""
Scordilis (2006) indicates ISC and NEIC Ms can treated (almost) equivalently
"""
if magnitude < 6.1:
return 0.67 * magnitude + 2.07
else:
return 0.99 * magnitude + 0.08
def scordillis_ms_sigma(magnitude):
"""
With Magnitude dependent uncertainty
"""
if magnitude < 6.1:
return 0.17
else:
return 0.20
def scordillis_mb(magnitude):
"""
Scordilis (2006) finds NEIC and ISC mb nearly equivalent
"""
return 0.85 * magnitude + 1.03
def scordillis_mb_sigma(magnitude):
"""
"""
return 0.29
"""
Explanation: ISC/NEIC
End of explanation
"""
def bji_mb(magnitude):
"""
"""
return 0.9 * magnitude + 0.15
def bji_mb_sigma(magnitude):
"""
"""
return 0.2
def bji_ms(magnitude):
"""
"""
return 0.9 * magnitude + 0.15
def bji_ms_sigma(magnitude):
"""
"""
return 0.2
"""
Explanation: BJI
For BJI - no analysis has been undertaken. We apply a simple scaling of 0.9 M + 0.15 with uncertainty of 0.2. This is for illustrative purposes only
End of explanation
"""
rule_set_2005 = [
MagnitudeConversionRule("GCMT", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("HRVD", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("ISC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("NEIC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("ISC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("NEIC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("BJI", "Ms", bji_ms, bji_ms_sigma),
MagnitudeConversionRule("BJI", "mb", bji_mb, bji_mb_sigma)
]
rule_set_2006 = [
MagnitudeConversionRule("GCMT", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("HRVD", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("ISC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("BJI", "Ms", bji_ms, bji_ms_sigma),
MagnitudeConversionRule("NEIC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("ISC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("BJI", "mb", bji_mb, bji_mb_sigma),
MagnitudeConversionRule("NEIC", "mb", scordillis_mb, scordillis_mb_sigma)
]
magnitude_rules = [
("2005/01/01 - 2005/12/31", rule_set_2005),
("2006/01/01 - 2007/01/01", rule_set_2006)
]
"""
Explanation: Define Magnitude Hierarchy
End of explanation
"""
preprocessor = HomogenisorPreprocessor("time")
catalogue = preprocessor.execute(catalogue, origin_rules, magnitude_rules)
"""
Explanation: Pre-processing
Before executing the homogenisation it is necessary to run a preprocessing step. This searches through the catalogue and identifies which conversion rule to apply:
The preprocessor is instantiated with a string describing the sort of rules to be applied.
"time" - Applies time only
"key" - Applies key rules only
"depth" - Applies depth rules only
"time|key" - Applies joint time and key rules
"time|depth" - Applied joint time and depth rules
"depth|key" - Applies joint depth and key rules
End of explanation
"""
harmonisor = DynamicHomogenisor(catalogue, logging=True)
homogenised_catalogue = harmonisor.homogenise(magnitude_rules, origin_rules)
"""
Explanation: Harmonise the Catalogue
End of explanation
"""
log_file = "outputs/homogenisor_log.csv"
if os.path.exists(log_file):
os.remove(log_file)
harmonisor.dump_log(log_file)
"""
Explanation: As logging was enabled, we can dump the log to a csv file and explore which rules and which hierarchy was applied for each event
End of explanation
"""
output_catalogue_file = "outputs/homogeneous_catalogue.csv"
if os.path.exists(output_catalogue_file):
os.remove(output_catalogue_file)
harmonisor.export_homogenised_to_csv(output_catalogue_file)
"""
Explanation: Export the Homogenised Catalogue to CSV
End of explanation
"""
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.